Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.7-rc4 into char-misc-next

We want those fixes in here to help with merge issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+2637 -1574
+2
.mailmap
··· 89 89 Linas Vepstas <linas@austin.ibm.com> 90 90 Mark Brown <broonie@sirena.org.uk> 91 91 Matthieu CASTET <castet.matthieu@free.fr> 92 + Mauro Carvalho Chehab <mchehab@kernel.org> <maurochehab@gmail.com> <mchehab@infradead.org> <mchehab@redhat.com> <m.chehab@samsung.com> <mchehab@osg.samsung.com> <mchehab@s-opensource.com> 92 93 Mayuresh Janorkar <mayur@ti.com> 93 94 Michael Buesch <m@bues.ch> 94 95 Michel Dänzer <michel@tungstengraphics.com> ··· 123 122 Sascha Hauer <s.hauer@pengutronix.de> 124 123 S.Çağlar Onur <caglar@pardus.org.tr> 125 124 Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com> 125 + Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com> <shuah.khan@hp.com> <shuahkh@osg.samsung.com> <shuah.kh@samsung.com> 126 126 Simon Kelley <simon@thekelleys.org.uk> 127 127 Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr> 128 128 Stephen Hemminger <shemminger@osdl.org>
+1
CREDITS
··· 649 649 650 650 N: Mauro Carvalho Chehab 651 651 E: m.chehab@samsung.org 652 + E: mchehab@osg.samsung.com 652 653 E: mchehab@infradead.org 653 654 D: Media subsystem (V4L/DVB) drivers and core 654 655 D: EDAC drivers and EDAC 3.0 core rework
+29 -29
Documentation/ABI/testing/configfs-usb-gadget-uvc
··· 1 1 What: /config/usb-gadget/gadget/functions/uvc.name 2 2 Date: Dec 2014 3 - KernelVersion: 3.20 3 + KernelVersion: 4.0 4 4 Description: UVC function directory 5 5 6 6 streaming_maxburst - 0..15 (ss only) ··· 9 9 10 10 What: /config/usb-gadget/gadget/functions/uvc.name/control 11 11 Date: Dec 2014 12 - KernelVersion: 3.20 12 + KernelVersion: 4.0 13 13 Description: Control descriptors 14 14 15 15 What: /config/usb-gadget/gadget/functions/uvc.name/control/class 16 16 Date: Dec 2014 17 - KernelVersion: 3.20 17 + KernelVersion: 4.0 18 18 Description: Class descriptors 19 19 20 20 What: /config/usb-gadget/gadget/functions/uvc.name/control/class/ss 21 21 Date: Dec 2014 22 - KernelVersion: 3.20 22 + KernelVersion: 4.0 23 23 Description: Super speed control class descriptors 24 24 25 25 What: /config/usb-gadget/gadget/functions/uvc.name/control/class/fs 26 26 Date: Dec 2014 27 - KernelVersion: 3.20 27 + KernelVersion: 4.0 28 28 Description: Full speed control class descriptors 29 29 30 30 What: /config/usb-gadget/gadget/functions/uvc.name/control/terminal 31 31 Date: Dec 2014 32 - KernelVersion: 3.20 32 + KernelVersion: 4.0 33 33 Description: Terminal descriptors 34 34 35 35 What: /config/usb-gadget/gadget/functions/uvc.name/control/terminal/output 36 36 Date: Dec 2014 37 - KernelVersion: 3.20 37 + KernelVersion: 4.0 38 38 Description: Output terminal descriptors 39 39 40 40 What: /config/usb-gadget/gadget/functions/uvc.name/control/terminal/output/default 41 41 Date: Dec 2014 42 - KernelVersion: 3.20 42 + KernelVersion: 4.0 43 43 Description: Default output terminal descriptors 44 44 45 45 All attributes read only: ··· 53 53 54 54 What: /config/usb-gadget/gadget/functions/uvc.name/control/terminal/camera 55 55 Date: Dec 2014 56 - KernelVersion: 3.20 56 + KernelVersion: 4.0 57 57 Description: Camera terminal descriptors 58 58 59 59 What: /config/usb-gadget/gadget/functions/uvc.name/control/terminal/camera/default 60 60 Date: Dec 2014 61 - KernelVersion: 3.20 61 + KernelVersion: 4.0 62 62 Description: Default camera terminal descriptors 63 63 64 64 All attributes read only: ··· 75 75 76 76 What: /config/usb-gadget/gadget/functions/uvc.name/control/processing 77 77 Date: Dec 2014 78 - KernelVersion: 3.20 78 + KernelVersion: 4.0 79 79 Description: Processing unit descriptors 80 80 81 81 What: /config/usb-gadget/gadget/functions/uvc.name/control/processing/default 82 82 Date: Dec 2014 83 - KernelVersion: 3.20 83 + KernelVersion: 4.0 84 84 Description: Default processing unit descriptors 85 85 86 86 All attributes read only: ··· 94 94 95 95 What: /config/usb-gadget/gadget/functions/uvc.name/control/header 96 96 Date: Dec 2014 97 - KernelVersion: 3.20 97 + KernelVersion: 4.0 98 98 Description: Control header descriptors 99 99 100 100 What: /config/usb-gadget/gadget/functions/uvc.name/control/header/name 101 101 Date: Dec 2014 102 - KernelVersion: 3.20 102 + KernelVersion: 4.0 103 103 Description: Specific control header descriptors 104 104 105 105 dwClockFrequency 106 106 bcdUVC 107 107 What: /config/usb-gadget/gadget/functions/uvc.name/streaming 108 108 Date: Dec 2014 109 - KernelVersion: 3.20 109 + KernelVersion: 4.0 110 110 Description: Streaming descriptors 111 111 112 112 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/class 113 113 Date: Dec 2014 114 - KernelVersion: 3.20 114 + KernelVersion: 4.0 115 115 Description: Streaming class descriptors 116 116 117 117 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/class/ss 118 118 Date: Dec 2014 119 - KernelVersion: 3.20 
119 + KernelVersion: 4.0 120 120 Description: Super speed streaming class descriptors 121 121 122 122 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/class/hs 123 123 Date: Dec 2014 124 - KernelVersion: 3.20 124 + KernelVersion: 4.0 125 125 Description: High speed streaming class descriptors 126 126 127 127 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/class/fs 128 128 Date: Dec 2014 129 - KernelVersion: 3.20 129 + KernelVersion: 4.0 130 130 Description: Full speed streaming class descriptors 131 131 132 132 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/color_matching 133 133 Date: Dec 2014 134 - KernelVersion: 3.20 134 + KernelVersion: 4.0 135 135 Description: Color matching descriptors 136 136 137 137 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/color_matching/default 138 138 Date: Dec 2014 139 - KernelVersion: 3.20 139 + KernelVersion: 4.0 140 140 Description: Default color matching descriptors 141 141 142 142 All attributes read only: ··· 150 150 151 151 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/mjpeg 152 152 Date: Dec 2014 153 - KernelVersion: 3.20 153 + KernelVersion: 4.0 154 154 Description: MJPEG format descriptors 155 155 156 156 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/mjpeg/name 157 157 Date: Dec 2014 158 - KernelVersion: 3.20 158 + KernelVersion: 4.0 159 159 Description: Specific MJPEG format descriptors 160 160 161 161 All attributes read only, ··· 174 174 175 175 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/mjpeg/name/name 176 176 Date: Dec 2014 177 - KernelVersion: 3.20 177 + KernelVersion: 4.0 178 178 Description: Specific MJPEG frame descriptors 179 179 180 180 dwFrameInterval - indicates how frame interval can be ··· 196 196 197 197 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/uncompressed 198 198 Date: Dec 2014 199 - KernelVersion: 3.20 199 + KernelVersion: 4.0 200 200 Description: Uncompressed format descriptors 201 201 202 202 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/uncompressed/name 203 203 Date: Dec 2014 204 - KernelVersion: 3.20 204 + KernelVersion: 4.0 205 205 Description: Specific uncompressed format descriptors 206 206 207 207 bmaControls - this format's data for bmaControls in ··· 221 221 222 222 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/uncompressed/name/name 223 223 Date: Dec 2014 224 - KernelVersion: 3.20 224 + KernelVersion: 4.0 225 225 Description: Specific uncompressed frame descriptors 226 226 227 227 dwFrameInterval - indicates how frame interval can be ··· 243 243 244 244 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/header 245 245 Date: Dec 2014 246 - KernelVersion: 3.20 246 + KernelVersion: 4.0 247 247 Description: Streaming header descriptors 248 248 249 249 What: /config/usb-gadget/gadget/functions/uvc.name/streaming/header/name 250 250 Date: Dec 2014 251 - KernelVersion: 3.20 251 + KernelVersion: 4.0 252 252 Description: Specific streaming header descriptors 253 253 254 254 All attributes read only:
+1 -1
Documentation/ABI/testing/sysfs-bus-iio-proximity-as3935
··· 1 - What /sys/bus/iio/devices/iio:deviceX/in_proximity_raw 1 + What /sys/bus/iio/devices/iio:deviceX/in_proximity_input 2 2 Date: March 2014 3 3 KernelVersion: 3.15 4 4 Contact: Matt Ranostay <mranostay@gmail.com>
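The rename follows the IIO sysfs naming convention: a *_raw attribute reports device units that still need offset and scale applied, while a *_input attribute is already in standard units, which is what this driver actually provides. A minimal user-space sketch of reading the attribute; the iio:device0 index is a placeholder for whatever index the AS3935 gets on a given system:

    #include <stdio.h>

    int main(void)
    {
        /* Placeholder path; substitute the real iio:deviceX index. */
        const char *path =
            "/sys/bus/iio/devices/iio:device0/in_proximity_input";
        FILE *f = fopen(path, "r");
        int value;

        if (!f) {
            perror("open");
            return 1;
        }
        if (fscanf(f, "%d", &value) != 1) {
            fclose(f);
            fprintf(stderr, "unexpected attribute format\n");
            return 1;
        }
        fclose(f);
        printf("proximity: %d\n", value);
        return 0;
    }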
+2 -2
Documentation/devicetree/bindings/i2c/i2c-arb-gpio-challenge.txt
··· 44 44 - our-claim-gpio: The GPIO that we use to claim the bus. 45 45 - their-claim-gpios: The GPIOs that the other sides use to claim the bus. 46 46 Note that some implementations may only support a single other master. 47 - - Standard I2C mux properties. See mux.txt in this directory. 48 - - Single I2C child bus node at reg 0. See mux.txt in this directory. 47 + - Standard I2C mux properties. See i2c-mux.txt in this directory. 48 + - Single I2C child bus node at reg 0. See i2c-mux.txt in this directory. 49 49 50 50 Optional properties: 51 51 - slew-delay-us: microseconds to wait for a GPIO to go high. Default is 10 us.
+2 -1
Documentation/devicetree/bindings/i2c/i2c-demux-pinctrl.txt
··· 27 27 - i2c-bus-name: The name of this bus. Also needed as pinctrl-name for the I2C 28 28 parents. 29 29 30 - Furthermore, I2C mux properties and child nodes. See mux.txt in this directory. 30 + Furthermore, I2C mux properties and child nodes. See i2c-mux.txt in this 31 + directory. 31 32 32 33 Example: 33 34
+3 -3
Documentation/devicetree/bindings/i2c/i2c-mux-gpio.txt
··· 22 22 - i2c-parent: The phandle of the I2C bus that this multiplexer's master-side 23 23 port is connected to. 24 24 - mux-gpios: list of gpios used to control the muxer 25 - * Standard I2C mux properties. See mux.txt in this directory. 26 - * I2C child bus nodes. See mux.txt in this directory. 25 + * Standard I2C mux properties. See i2c-mux.txt in this directory. 26 + * I2C child bus nodes. See i2c-mux.txt in this directory. 27 27 28 28 Optional properties: 29 29 - idle-state: value to set the muxer to when idle. When no value is ··· 33 33 be numbered based on their order in the device tree. 34 34 35 35 Whenever an access is made to a device on a child bus, the value set 36 - in the revelant node's reg property will be output using the list of 36 + in the relevant node's reg property will be output using the list of 37 37 GPIOs, the first in the list holding the least-significant value. 38 38 39 39 If an idle state is defined, using the idle-state (optional) property,
+2 -2
Documentation/devicetree/bindings/i2c/i2c-mux-pinctrl.txt
··· 28 28 * Standard pinctrl properties that specify the pin mux state for each child 29 29 bus. See ../pinctrl/pinctrl-bindings.txt. 30 30 31 - * Standard I2C mux properties. See mux.txt in this directory. 31 + * Standard I2C mux properties. See i2c-mux.txt in this directory. 32 32 33 - * I2C child bus nodes. See mux.txt in this directory. 33 + * I2C child bus nodes. See i2c-mux.txt in this directory. 34 34 35 35 For each named state defined in the pinctrl-names property, an I2C child bus 36 36 will be created. I2C child bus numbers are assigned based on the index into
+3 -3
Documentation/devicetree/bindings/i2c/i2c-mux-reg.txt
··· 7 7 - compatible: i2c-mux-reg 8 8 - i2c-parent: The phandle of the I2C bus that this multiplexer's master-side 9 9 port is connected to. 10 - * Standard I2C mux properties. See mux.txt in this directory. 11 - * I2C child bus nodes. See mux.txt in this directory. 10 + * Standard I2C mux properties. See i2c-mux.txt in this directory. 11 + * I2C child bus nodes. See i2c-mux.txt in this directory. 12 12 13 13 Optional properties: 14 14 - reg: this pair of <offset size> specifies the register to control the mux. ··· 24 24 given, it defaults to the last value used. 25 25 26 26 Whenever an access is made to a device on a child bus, the value set 27 - in the revelant node's reg property will be output to the register. 27 + in the relevant node's reg property will be output to the register. 28 28 29 29 If an idle state is defined, using the idle-state (optional) property, 30 30 whenever an access is not being made to a device on a child bus, the
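All of these mux bindings funnel into the same runtime behaviour: before a transfer on a child bus, the i2c-mux core invokes the driver's select callback with the child node's reg value. A hedged sketch of how a register-based mux might implement that, using the i2c-mux core API (i2c_mux_core and i2c_mux_priv() from <linux/i2c-mux.h>); struct demo_mux and its reg field are hypothetical:

    #include <linux/i2c-mux.h>
    #include <linux/io.h>

    struct demo_mux {
        void __iomem *reg;    /* hypothetical mux control register */
    };

    /* Called by the i2c-mux core before a transfer on a child bus;
     * chan carries the child node's reg value from the device tree. */
    static int demo_mux_select(struct i2c_mux_core *muxc, u32 chan)
    {
        struct demo_mux *mux = i2c_mux_priv(muxc);

        writel(chan, mux->reg);
        return 0;
    }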
+4 -4
Documentation/devicetree/bindings/net/marvell-bt-sd8xxx.txt
··· 13 13 initialization. This is an array of 28 values(u8). 14 14 15 15 - marvell,wakeup-pin: It represents wakeup pin number of the bluetooth chip. 16 - firmware will use the pin to wakeup host system. 16 + firmware will use the pin to wakeup host system (u16). 17 17 - marvell,wakeup-gap-ms: wakeup gap represents wakeup latency of the host 18 18 platform. The value will be configured to firmware. This 19 - is needed to work chip's sleep feature as expected. 19 + is needed to work chip's sleep feature as expected (u16). 20 20 - interrupt-parent: phandle of the parent interrupt controller 21 21 - interrupts : interrupt pin number to the cpu. Driver will request an irq based 22 22 on this interrupt number. During system suspend, the irq will be ··· 50 50 0x37 0x01 0x1c 0x00 0xff 0xff 0xff 0xff 0x01 0x7f 0x04 0x02 51 51 0x00 0x00 0xba 0xce 0xc0 0xc6 0x2d 0x00 0x00 0x00 0x00 0x00 52 52 0x00 0x00 0xf0 0x00>; 53 - marvell,wakeup-pin = <0x0d>; 54 - marvell,wakeup-gap-ms = <0x64>; 53 + marvell,wakeup-pin = /bits/ 16 <0x0d>; 54 + marvell,wakeup-gap-ms = /bits/ 16 <0x64>; 55 55 }; 56 56 };
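The /bits/ 16 annotation stores the cells as 16-bit values, matching the (u16) size now documented for both properties; a driver then reads them with of_property_read_u16(), which fails when the property is absent or has the wrong size. A hedged probe-time sketch (the device_node pointer np is assumed to be the node described by this binding):

    #include <linux/of.h>
    #include <linux/printk.h>

    static void demo_read_wakeup_config(struct device_node *np)
    {
        u16 wakeup_pin, wakeup_gap;

        /* Both calls return non-zero when the property is missing or
         * not a 16-bit value, so callers can keep their defaults. */
        if (!of_property_read_u16(np, "marvell,wakeup-pin", &wakeup_pin))
            pr_info("wakeup pin: %u\n", wakeup_pin);
        if (!of_property_read_u16(np, "marvell,wakeup-gap-ms", &wakeup_gap))
            pr_info("wakeup gap: %u ms\n", wakeup_gap);
    }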
+2
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 255 255 SUNW Sun Microsystems, Inc 256 256 tbs TBS Technologies 257 257 tcl Toby Churchill Ltd. 258 + technexion TechNexion 258 259 technologic Technologic Systems 259 260 thine THine Electronics, Inc. 260 261 ti Texas Instruments ··· 270 269 truly Truly Semiconductors Limited 271 270 tyan Tyan Computer Corporation 272 271 upisemi uPI Semiconductor Corp. 272 + uniwest United Western Technologies Corp (UniWest) 273 273 urt United Radiant Technology Corporation 274 274 usi Universal Scientific Industrial Co., Ltd. 275 275 v3 V3 Semiconductor
+2 -2
Documentation/leds/leds-class.txt
··· 74 74 however, it is better to use the API function led_blink_set(), as it 75 75 will check and implement software fallback if necessary. 76 76 77 - To turn off blinking again, use the API function led_brightness_set() 78 - as that will not just set the LED brightness but also stop any software 77 + To turn off blinking, use the API function led_brightness_set() 78 + with brightness value LED_OFF, which should stop any software 79 79 timers that may have been required for blinking. 80 80 81 81 The blink_set() function should choose a user friendly blinking value
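A short sketch of the pattern this text describes, against the led_classdev API in <linux/leds.h> (led_blink_set() plus the brightness-setting helper, spelled led_set_brightness() in the header, with LED_OFF); the led pointer is assumed to be a registered classdev:

    #include <linux/leds.h>

    static void demo_blink_then_stop(struct led_classdev *led)
    {
        unsigned long delay_on = 500;   /* ms; 0/0 requests defaults */
        unsigned long delay_off = 500;

        /* Preferred over calling the driver's blink_set() directly:
         * the core falls back to a software timer when the driver
         * has no hardware blink support. */
        led_blink_set(led, &delay_on, &delay_off);

        /* Setting LED_OFF both darkens the LED and stops any software
         * blink timer, per the documentation text above. */
        led_set_brightness(led, LED_OFF);
    }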
+43 -23
MAINTAINERS
··· 1159 1159 ARM/FREESCALE IMX / MXC ARM ARCHITECTURE 1160 1160 M: Shawn Guo <shawnguo@kernel.org> 1161 1161 M: Sascha Hauer <kernel@pengutronix.de> 1162 + R: Fabio Estevam <fabio.estevam@nxp.com> 1162 1163 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1163 1164 S: Maintained 1164 1165 T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git ··· 2243 2242 F: net/ax25/ 2244 2243 2245 2244 AZ6007 DVB DRIVER 2246 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 2245 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 2246 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 2247 2247 L: linux-media@vger.kernel.org 2248 2248 W: https://linuxtv.org 2249 2249 T: git git://linuxtv.org/media_tree.git ··· 2711 2709 F: fs/btrfs/ 2712 2710 2713 2711 BTTV VIDEO4LINUX DRIVER 2714 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 2712 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 2713 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 2715 2714 L: linux-media@vger.kernel.org 2716 2715 W: https://linuxtv.org 2717 2716 T: git git://linuxtv.org/media_tree.git ··· 3347 3344 F: drivers/media/dvb-frontends/cx24120* 3348 3345 3349 3346 CX88 VIDEO4LINUX DRIVER 3350 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 3347 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 3348 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 3351 3349 L: linux-media@vger.kernel.org 3352 3350 W: https://linuxtv.org 3353 3351 T: git git://linuxtv.org/media_tree.git ··· 3778 3774 S: Maintained 3779 3775 F: drivers/dma/ 3780 3776 F: include/linux/dmaengine.h 3777 + F: Documentation/devicetree/bindings/dma/ 3781 3778 F: Documentation/dmaengine/ 3782 3779 T: git git://git.infradead.org/users/vkoul/slave-dma.git ··· 4296 4291 EDAC-CORE 4297 4292 M: Doug Thompson <dougthompson@xmission.com> 4298 4293 M: Borislav Petkov <bp@alien8.de> 4299 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4294 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4295 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 4300 4296 L: linux-edac@vger.kernel.org 4301 4297 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git for-next 4302 4298 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-edac.git linux_next ··· 4342 4336 F: drivers/edac/e7xxx_edac.c 4343 4337 4344 4338 EDAC-GHES 4345 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4339 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4340 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 4346 4341 L: linux-edac@vger.kernel.org 4347 4342 S: Maintained 4348 4343 F: drivers/edac/ghes_edac.c ··· 4367 4360 F: drivers/edac/i5000_edac.c 4368 4361 4369 4362 EDAC-I5400 4370 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4363 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4364 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 4371 4365 L: linux-edac@vger.kernel.org 4372 4366 S: Maintained 4373 4367 F: drivers/edac/i5400_edac.c 4374 4368 4375 4369 EDAC-I7300 4376 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4370 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4371 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 4377 4372 L: linux-edac@vger.kernel.org 4378 4373 S: Maintained 4379 4374 F: drivers/edac/i7300_edac.c 4380 4375 4381 4376 EDAC-I7CORE 4382 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4377 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4378 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 4383 4379 L: linux-edac@vger.kernel.org 4384 4380 S: Maintained 4385 4381 F: drivers/edac/i7core_edac.c ··· 4419 4409 F: drivers/edac/r82600_edac.c 4420 4410 4421 4411 EDAC-SBRIDGE 4422 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4412 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4413 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 4423 4414 L: linux-edac@vger.kernel.org 4424 4415 S: Maintained 4425 4416 F: drivers/edac/sb_edac.c ··· 4479 4468 F: drivers/net/ethernet/ibm/ehea/ 4480 4469 4481 4470 EM28XX VIDEO4LINUX DRIVER 4482 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4471 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4472 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 4483 4473 L: linux-media@vger.kernel.org 4484 4474 W: https://linuxtv.org 4485 4475 T: git git://linuxtv.org/media_tree.git ··· 6499 6487 6500 6488 KERNEL SELFTEST FRAMEWORK 6501 6489 M: Shuah Khan <shuahkh@osg.samsung.com> 6490 + M: Shuah Khan <shuah@kernel.org> 6502 6491 L: linux-kselftest@vger.kernel.org 6503 6492 T: git git://git.kernel.org/pub/scm/shuah/linux-kselftest 6504 6493 S: Maintained ··· 7371 7358 F: drivers/media/pci/netup_unidvb/* 7372 7359 7373 7360 MEDIA INPUT INFRASTRUCTURE (V4L/DVB) 7374 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 7361 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 7362 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 7375 7363 P: LinuxTV.org Project 7376 7364 L: linux-media@vger.kernel.org 7377 7365 W: https://linuxtv.org ··· 8421 8407 OPEN FIRMWARE AND FLATTENED DEVICE TREE 8422 8408 M: Rob Herring <robh+dt@kernel.org> 8423 8409 M: Frank Rowand <frowand.list@gmail.com> 8424 - M: Grant Likely <grant.likely@linaro.org> 8425 8410 L: devicetree@vger.kernel.org 8426 8411 W: http://www.devicetree.org/ 8427 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/glikely/linux.git 8412 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git 8428 8413 S: Maintained 8429 8414 F: drivers/of/ 8430 8415 F: include/linux/of*.h ··· 8431 8418 8432 8419 OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS 8433 8420 M: Rob Herring <robh+dt@kernel.org> 8434 - M: Pawel Moll <pawel.moll@arm.com> 8435 8421 M: Mark Rutland <mark.rutland@arm.com> 8436 - M: Ian Campbell <ijc+devicetree@hellion.org.uk> 8437 - M: Kumar Gala <galak@codeaurora.org> 8438 8422 L: devicetree@vger.kernel.org 8439 8423 T: git git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git 8424 + Q: http://patchwork.ozlabs.org/project/devicetree-bindings/list/ 8440 8425 S: Maintained 8441 8426 F: Documentation/devicetree/ 8442 8427 F: arch/*/boot/dts/ ··· 9866 9855 F: drivers/media/i2c/saa6588* 9867 9856 9868 9857 SAA7134 VIDEO4LINUX DRIVER 9869 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 9858 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 9859 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 9870 9860 L: linux-media@vger.kernel.org 9871 9861 W: https://linuxtv.org 9872 9862 T: git git://linuxtv.org/media_tree.git ··· 10386 10374 F: drivers/media/radio/si4713/radio-usb-si4713.c 10387 10375 10388 10376 SIANO DVB DRIVER 10389 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 10377 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 10378 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 10390 10379 L: linux-media@vger.kernel.org 10391 10380 W: https://linuxtv.org 10392 10381 T: git git://linuxtv.org/media_tree.git ··· 11153 11140 F: drivers/media/i2c/tda9840* 11154 11141 11155 11142 TEA5761 TUNER DRIVER 11156 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 11143 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 11144 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 11157 11145 L: linux-media@vger.kernel.org 11158 11146 W: https://linuxtv.org 11159 11147 T: git git://linuxtv.org/media_tree.git ··· 11162 11148 F: drivers/media/tuners/tea5761.* 11163 11149 11164 11150 TEA5767 TUNER DRIVER 11165 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 11151 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 11152 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 11166 11153 L: linux-media@vger.kernel.org 11167 11154 W: https://linuxtv.org 11168 11155 T: git git://linuxtv.org/media_tree.git ··· 11550 11535 F: mm/shmem.c 11551 11536 11552 11537 TM6000 VIDEO4LINUX DRIVER 11553 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 11538 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 11539 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 11554 11540 L: linux-media@vger.kernel.org 11555 11541 W: https://linuxtv.org 11556 11542 T: git git://linuxtv.org/media_tree.git ··· 11905 11889 11906 11890 USB OVER IP DRIVER 11907 11891 M: Valentina Manea <valentina.manea.m@gmail.com> 11908 - M: Shuah Khan <shuah.kh@samsung.com> 11892 + M: Shuah Khan <shuahkh@osg.samsung.com> 11893 + M: Shuah Khan <shuah@kernel.org> 11909 11894 L: linux-usb@vger.kernel.org 11910 11895 S: Maintained 11911 11896 F: Documentation/usb/usbip_protocol.txt ··· 11977 11960 W: http://www.linux-usb.org 11978 11961 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git 11979 11962 S: Supported 11963 + F: Documentation/devicetree/bindings/usb/ 11980 11964 F: Documentation/usb/ 11981 11965 F: drivers/usb/ 11982 11966 F: include/linux/usb.h ··· 12151 12133 M: "Michael S. Tsirkin" <mst@redhat.com> 12152 12134 L: virtualization@lists.linux-foundation.org 12153 12135 S: Maintained 12136 + F: Documentation/devicetree/bindings/virtio/ 12154 12137 F: drivers/virtio/ 12155 12138 F: tools/virtio/ 12156 12139 F: drivers/net/virtio_net.c ··· 12540 12521 F: arch/x86/entry/vdso/ 12541 12522 12542 12523 XC2028/3028 TUNER DRIVER 12543 - M: Mauro Carvalho Chehab <mchehab@osg.samsung.com> 12524 + M: Mauro Carvalho Chehab <mchehab@s-opensource.com> 12525 + M: Mauro Carvalho Chehab <mchehab@kernel.org> 12544 12526 L: linux-media@vger.kernel.org 12545 12527 W: https://linuxtv.org 12546 12528 T: git git://linuxtv.org/media_tree.git
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 7 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc2 4 + EXTRAVERSION = -rc4 5 5 NAME = Psychotic Stoned Sheep 6 6 7 7 # *DOCUMENTATION*
+3
arch/Kconfig
··· 606 606 file which provides platform-specific implementations of some 607 607 functions in <linux/hash.h> or fs/namei.c. 608 608 609 + config ISA_BUS_API 610 + def_bool ISA 611 + 609 612 # 610 613 # ABI hall of shame 611 614 #
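ISA_BUS_API gates the generic ISA bus driver core (drivers/base/isa.c) behind its own symbol, so ISA-style device drivers can depend on the bus API rather than on the legacy ISA option directly. A hedged sketch of a consumer using isa_register_driver()/isa_unregister_driver(); all demo_* names are hypothetical:

    #include <linux/device.h>
    #include <linux/isa.h>
    #include <linux/module.h>

    static int demo_probe(struct device *dev, unsigned int id)
    {
        dev_info(dev, "probing ISA device instance %u\n", id);
        return 0;
    }

    static struct isa_driver demo_isa_driver = {
        .probe = demo_probe,
        .driver = {
            .name = "demo-isa",
        },
    };

    static int __init demo_init(void)
    {
        /* One device instance; only available with ISA_BUS_API=y. */
        return isa_register_driver(&demo_isa_driver, 1);
    }

    static void __exit demo_exit(void)
    {
        isa_unregister_driver(&demo_isa_driver);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");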
+1
arch/arm/boot/dts/Makefile
··· 741 741 sun7i-a20-olimex-som-evb.dtb \ 742 742 sun7i-a20-olinuxino-lime.dtb \ 743 743 sun7i-a20-olinuxino-lime2.dtb \ 744 + sun7i-a20-olinuxino-lime2-emmc.dtb \ 744 745 sun7i-a20-olinuxino-micro.dtb \ 745 746 sun7i-a20-orangepi.dtb \ 746 747 sun7i-a20-orangepi-mini.dtb \
+1 -1
arch/arm/boot/dts/am437x-sk-evm.dts
··· 418 418 status = "okay"; 419 419 pinctrl-names = "default"; 420 420 pinctrl-0 = <&i2c0_pins>; 421 - clock-frequency = <400000>; 421 + clock-frequency = <100000>; 422 422 423 423 tps@24 { 424 424 compatible = "ti,tps65218";
+17 -15
arch/arm/boot/dts/am57xx-idk-common.dtsi
··· 60 60 61 61 tps659038_pmic { 62 62 compatible = "ti,tps659038-pmic"; 63 + 64 + smps12-in-supply = <&vmain>; 65 + smps3-in-supply = <&vmain>; 66 + smps45-in-supply = <&vmain>; 67 + smps6-in-supply = <&vmain>; 68 + smps7-in-supply = <&vmain>; 69 + smps8-in-supply = <&vmain>; 70 + smps9-in-supply = <&vmain>; 71 + ldo1-in-supply = <&vmain>; 72 + ldo2-in-supply = <&vmain>; 73 + ldo3-in-supply = <&vmain>; 74 + ldo4-in-supply = <&vmain>; 75 + ldo9-in-supply = <&vmain>; 76 + ldoln-in-supply = <&vmain>; 77 + ldousb-in-supply = <&vmain>; 78 + ldortc-in-supply = <&vmain>; 79 + 63 80 regulators { 64 81 smps12_reg: smps12 { 65 82 /* VDD_MPU */ 66 - vin-supply = <&vmain>; 67 83 regulator-name = "smps12"; 68 84 regulator-min-microvolt = <850000>; 69 85 regulator-max-microvolt = <1250000>; ··· 89 73 90 74 smps3_reg: smps3 { 91 75 /* VDD_DDR EMIF1 EMIF2 */ 92 - vin-supply = <&vmain>; 93 76 regulator-name = "smps3"; 94 77 regulator-min-microvolt = <1350000>; 95 78 regulator-max-microvolt = <1350000>; ··· 99 84 smps45_reg: smps45 { 100 85 /* VDD_DSPEVE on AM572 */ 101 86 /* VDD_IVA + VDD_DSP on AM571 */ 102 - vin-supply = <&vmain>; 103 87 regulator-name = "smps45"; 104 88 regulator-min-microvolt = <850000>; 105 89 regulator-max-microvolt = <1250000>; ··· 108 94 109 95 smps6_reg: smps6 { 110 96 /* VDD_GPU */ 111 - vin-supply = <&vmain>; 112 97 regulator-name = "smps6"; 113 98 regulator-min-microvolt = <850000>; 114 99 regulator-max-microvolt = <1250000>; ··· 117 104 118 105 smps7_reg: smps7 { 119 106 /* VDD_CORE */ 120 - vin-supply = <&vmain>; 121 107 regulator-name = "smps7"; 122 108 regulator-min-microvolt = <850000>; 123 109 regulator-max-microvolt = <1150000>; ··· 127 115 smps8_reg: smps8 { 128 116 /* 5728 - VDD_IVAHD */ 129 117 /* 5718 - N.C. test point */ 130 - vin-supply = <&vmain>; 131 118 regulator-name = "smps8"; 132 119 }; 133 120 134 121 smps9_reg: smps9 { 135 122 /* VDD_3_3D */ 136 - vin-supply = <&vmain>; 137 123 regulator-name = "smps9"; 138 124 regulator-min-microvolt = <3300000>; 139 125 regulator-max-microvolt = <3300000>; ··· 142 132 ldo1_reg: ldo1 { 143 133 /* VDDSHV8 - VSDMMC */ 144 134 /* NOTE: on rev 1.3a, data supply */ 145 - vin-supply = <&vmain>; 146 135 regulator-name = "ldo1"; 147 136 regulator-min-microvolt = <1800000>; 148 137 regulator-max-microvolt = <3300000>; ··· 151 142 152 143 ldo2_reg: ldo2 { 153 144 /* VDDSH18V */ 154 - vin-supply = <&vmain>; 155 145 regulator-name = "ldo2"; 156 146 regulator-min-microvolt = <1800000>; 157 147 regulator-max-microvolt = <1800000>; ··· 160 152 161 153 ldo3_reg: ldo3 { 162 154 /* R1.3a 572x V1_8PHY_LDO3: USB, SATA */ 163 - vin-supply = <&vmain>; 164 155 regulator-name = "ldo3"; 165 156 regulator-min-microvolt = <1800000>; 166 157 regulator-max-microvolt = <1800000>; ··· 169 162 170 163 ldo4_reg: ldo4 { 171 164 /* R1.3a 572x V1_8PHY_LDO4: PCIE, HDMI*/ 172 - vin-supply = <&vmain>; 173 165 regulator-name = "ldo4"; 174 166 regulator-min-microvolt = <1800000>; 175 167 regulator-max-microvolt = <1800000>; ··· 180 174 181 175 ldo9_reg: ldo9 { 182 176 /* VDD_RTC */ 183 - vin-supply = <&vmain>; 184 177 regulator-name = "ldo9"; 185 178 regulator-min-microvolt = <840000>; 186 179 regulator-max-microvolt = <1160000>; ··· 189 184 190 185 ldoln_reg: ldoln { 191 186 /* VDDA_1V8_PLL */ 192 - vin-supply = <&vmain>; 193 187 regulator-name = "ldoln"; 194 188 regulator-min-microvolt = <1800000>; 195 189 regulator-max-microvolt = <1800000>; ··· 198 194 199 195 ldousb_reg: ldousb { 200 196 /* VDDA_3V_USB: VDDA_USBHS33 */ 201 197 regulator-name = "ldousb"; 203 198 regulator-min-microvolt = <3300000>; 204 199 regulator-max-microvolt = <3300000>; ··· 207 204 208 205 ldortc_reg: ldortc { 209 206 /* VDDA_RTC */ 210 - vin-supply = <&vmain>; 211 207 regulator-name = "ldortc"; 212 208 regulator-min-microvolt = <1800000>; 213 209 regulator-max-microvolt = <1800000>;
+8
arch/arm/boot/dts/dm8148-evm.dts
··· 93 93 }; 94 94 }; 95 95 96 + &mmc1 { 97 + status = "disabled"; 98 + }; 99 + 96 100 &mmc2 { 97 101 pinctrl-names = "default"; 98 102 pinctrl-0 = <&sd1_pins>; 99 103 vmmc-supply = <&vmmcsd_fixed>; 100 104 bus-width = <4>; 101 105 cd-gpios = <&gpio2 6 GPIO_ACTIVE_LOW>; 106 + }; 107 + 108 + &mmc3 { 109 + status = "disabled"; 102 110 }; 103 111 104 112 &pincntl {
+9
arch/arm/boot/dts/dm8148-t410.dts
··· 45 45 phy-mode = "rgmii"; 46 46 }; 47 47 48 + &mmc1 { 49 + status = "disabled"; 50 + }; 51 + 52 + &mmc2 { 53 + status = "disabled"; 54 + }; 55 + 48 56 &mmc3 { 49 57 pinctrl-names = "default"; 50 58 pinctrl-0 = <&sd2_pins>; ··· 61 53 dmas = <&edma_xbar 8 0 1 /* use SDTXEVT1 instead of MCASP0TX */ 62 54 &edma_xbar 9 0 2>; /* use SDRXEVT1 instead of MCASP0RX */ 63 55 dma-names = "tx", "rx"; 56 + non-removable; 64 57 }; 65 58 66 59 &pincntl {
+2
arch/arm/boot/dts/dra7.dtsi
··· 1451 1451 ti,hwmods = "gpmc"; 1452 1452 reg = <0x50000000 0x37c>; /* device IO registers */ 1453 1453 interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>; 1454 + dmas = <&edma_xbar 4 0>; 1455 + dma-names = "rxtx"; 1454 1456 gpmc,num-cs = <8>; 1455 1457 gpmc,num-waitpins = <2>; 1456 1458 #address-cells = <2>;
+2 -2
arch/arm/boot/dts/dra74x.dtsi
··· 107 107 reg = <0x58000000 0x80>, 108 108 <0x58004054 0x4>, 109 109 <0x58004300 0x20>, 110 - <0x58005054 0x4>, 111 - <0x58005300 0x20>; 110 + <0x58009054 0x4>, 111 + <0x58009300 0x20>; 112 112 reg-names = "dss", "pll1_clkctrl", "pll1", 113 113 "pll2_clkctrl", "pll2"; 114 114
+10 -3
arch/arm/boot/dts/exynos5250-snow-common.dtsi
··· 242 242 hpd-gpios = <&gpx0 7 GPIO_ACTIVE_HIGH>; 243 243 244 244 ports { 245 - port0 { 245 + port { 246 246 dp_out: endpoint { 247 247 remote-endpoint = <&bridge_in>; 248 248 }; ··· 485 485 edid-emulation = <5>; 486 486 487 487 ports { 488 - port0 { 488 + #address-cells = <1>; 489 + #size-cells = <0>; 490 + 491 + port@0 { 492 + reg = <0>; 493 + 489 494 bridge_out: endpoint { 490 495 remote-endpoint = <&panel_in>; 491 496 }; 492 497 }; 493 498 494 - port1 { 499 + port@1 { 500 + reg = <1>; 501 + 495 502 bridge_in: endpoint { 496 503 remote-endpoint = <&dp_out>; 497 504 };
+10 -3
arch/arm/boot/dts/exynos5420-peach-pit.dts
··· 163 163 hpd-gpios = <&gpx2 6 GPIO_ACTIVE_HIGH>; 164 164 165 165 ports { 166 - port0 { 166 + port { 167 167 dp_out: endpoint { 168 168 remote-endpoint = <&bridge_in>; 169 169 }; ··· 631 631 use-external-pwm; 632 632 633 633 ports { 634 - port0 { 634 + #address-cells = <1>; 635 + #size-cells = <0>; 636 + 637 + port@0 { 638 + reg = <0>; 639 + 635 640 bridge_out: endpoint { 636 641 remote-endpoint = <&panel_in>; 637 642 }; 638 643 }; 639 644 640 - port1 { 645 + port@1 { 646 + reg = <1>; 647 + 641 648 bridge_in: endpoint { 642 649 remote-endpoint = <&dp_out>; 643 650 };
+1 -1
arch/arm/boot/dts/omap3-evm-37xx.dts
··· 85 85 OMAP3_CORE1_IOPAD(0x2158, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_clk.sdmmc2_clk */ 86 86 OMAP3_CORE1_IOPAD(0x215a, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_cmd.sdmmc2_cmd */ 87 87 OMAP3_CORE1_IOPAD(0x215c, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat0.sdmmc2_dat0 */ 88 - OMAP3_CORE1_IOPAD(0x215e, WAKEUP_EN | PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat1.sdmmc2_dat1 */ 88 + OMAP3_CORE1_IOPAD(0x215e, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat1.sdmmc2_dat1 */ 89 89 OMAP3_CORE1_IOPAD(0x2160, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat2.sdmmc2_dat2 */ 90 90 OMAP3_CORE1_IOPAD(0x2162, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_dat3.sdmmc2_dat3 */ 91 91 >;
+1
arch/arm/boot/dts/omap3-igep.dtsi
··· 188 188 vmmc-supply = <&vmmc1>; 189 189 vmmc_aux-supply = <&vsim>; 190 190 bus-width = <4>; 191 + cd-gpios = <&twl_gpio 0 GPIO_ACTIVE_LOW>; 191 192 }; 192 193 193 194 &mmc3 {
+11
arch/arm/boot/dts/omap3-igep0020-common.dtsi
··· 194 194 OMAP3630_CORE2_IOPAD(0x25f8, PIN_OUTPUT | MUX_MODE4) /* etk_d14.gpio_28 */ 195 195 >; 196 196 }; 197 + 198 + mmc1_wp_pins: pinmux_mmc1_cd_pins { 199 + pinctrl-single,pins = < 200 + OMAP3630_CORE2_IOPAD(0x25fa, PIN_INPUT | MUX_MODE4) /* etk_d15.gpio_29 */ 201 + >; 202 + }; 197 203 }; 198 204 199 205 &i2c3 { ··· 255 249 data-lines = <24>; 256 250 }; 257 251 }; 252 + }; 253 + 254 + &mmc1 { 255 + pinctrl-0 = <&mmc1_pins &mmc1_wp_pins>; 256 + wp-gpios = <&gpio1 29 GPIO_ACTIVE_LOW>; /* gpio_29 */ 258 257 };
+2 -2
arch/arm/boot/dts/omap3-n900.dts
··· 288 288 pinctrl-single,pins = < 289 289 OMAP3_CORE1_IOPAD(0x2180, PIN_INPUT_PULLUP | MUX_MODE1) /* ssi1_rdy_tx */ 290 290 OMAP3_CORE1_IOPAD(0x217e, PIN_OUTPUT | MUX_MODE1) /* ssi1_flag_tx */ 291 - OMAP3_CORE1_IOPAD(0x2182, PIN_INPUT | WAKEUP_EN | MUX_MODE4) /* ssi1_wake_tx (cawake) */ 291 + OMAP3_CORE1_IOPAD(0x2182, PIN_INPUT | MUX_MODE4) /* ssi1_wake_tx (cawake) */ 292 292 OMAP3_CORE1_IOPAD(0x217c, PIN_OUTPUT | MUX_MODE1) /* ssi1_dat_tx */ 293 293 OMAP3_CORE1_IOPAD(0x2184, PIN_INPUT | MUX_MODE1) /* ssi1_dat_rx */ 294 294 OMAP3_CORE1_IOPAD(0x2186, PIN_INPUT | MUX_MODE1) /* ssi1_flag_rx */ ··· 300 300 modem_pins: pinmux_modem { 301 301 pinctrl-single,pins = < 302 302 OMAP3_CORE1_IOPAD(0x20dc, PIN_OUTPUT | MUX_MODE4) /* gpio 70 => cmt_apeslpx */ 303 - OMAP3_CORE1_IOPAD(0x20e0, PIN_INPUT | WAKEUP_EN | MUX_MODE4) /* gpio 72 => ape_rst_rq */ 303 + OMAP3_CORE1_IOPAD(0x20e0, PIN_INPUT | MUX_MODE4) /* gpio 72 => ape_rst_rq */ 304 304 OMAP3_CORE1_IOPAD(0x20e2, PIN_OUTPUT | MUX_MODE4) /* gpio 73 => cmt_rst_rq */ 305 305 OMAP3_CORE1_IOPAD(0x20e4, PIN_OUTPUT | MUX_MODE4) /* gpio 74 => cmt_en */ 306 306 OMAP3_CORE1_IOPAD(0x20e6, PIN_OUTPUT | MUX_MODE4) /* gpio 75 => cmt_rst */
+3 -3
arch/arm/boot/dts/omap3-n950-n9.dtsi
··· 97 97 OMAP3_CORE1_IOPAD(0x217c, PIN_OUTPUT | MUX_MODE1) /* ssi1_dat_tx */ 98 98 OMAP3_CORE1_IOPAD(0x217e, PIN_OUTPUT | MUX_MODE1) /* ssi1_flag_tx */ 99 99 OMAP3_CORE1_IOPAD(0x2180, PIN_INPUT_PULLUP | MUX_MODE1) /* ssi1_rdy_tx */ 100 - OMAP3_CORE1_IOPAD(0x2182, PIN_INPUT | WAKEUP_EN | MUX_MODE4) /* ssi1_wake_tx (cawake) */ 100 + OMAP3_CORE1_IOPAD(0x2182, PIN_INPUT | MUX_MODE4) /* ssi1_wake_tx (cawake) */ 101 101 OMAP3_CORE1_IOPAD(0x2184, PIN_INPUT | MUX_MODE1) /* ssi1_dat_rx */ 102 102 OMAP3_CORE1_IOPAD(0x2186, PIN_INPUT | MUX_MODE1) /* ssi1_flag_rx */ 103 103 OMAP3_CORE1_IOPAD(0x2188, PIN_OUTPUT | MUX_MODE1) /* ssi1_rdy_rx */ ··· 110 110 OMAP3_CORE1_IOPAD(0x217c, PIN_OUTPUT | MUX_MODE7) /* ssi1_dat_tx */ 111 111 OMAP3_CORE1_IOPAD(0x217e, PIN_OUTPUT | MUX_MODE7) /* ssi1_flag_tx */ 112 112 OMAP3_CORE1_IOPAD(0x2180, PIN_INPUT_PULLDOWN | MUX_MODE7) /* ssi1_rdy_tx */ 113 - OMAP3_CORE1_IOPAD(0x2182, PIN_INPUT | WAKEUP_EN | MUX_MODE4) /* ssi1_wake_tx (cawake) */ 113 + OMAP3_CORE1_IOPAD(0x2182, PIN_INPUT | MUX_MODE4) /* ssi1_wake_tx (cawake) */ 114 114 OMAP3_CORE1_IOPAD(0x2184, PIN_INPUT | MUX_MODE7) /* ssi1_dat_rx */ 115 115 OMAP3_CORE1_IOPAD(0x2186, PIN_INPUT | MUX_MODE7) /* ssi1_flag_rx */ 116 116 OMAP3_CORE1_IOPAD(0x2188, PIN_OUTPUT | MUX_MODE4) /* ssi1_rdy_rx */ ··· 120 120 121 121 modem_pins1: pinmux_modem_core1_pins { 122 122 pinctrl-single,pins = < 123 - OMAP3_CORE1_IOPAD(0x207a, PIN_INPUT | WAKEUP_EN | MUX_MODE4) /* gpio_34 (ape_rst_rq) */ 123 + OMAP3_CORE1_IOPAD(0x207a, PIN_INPUT | MUX_MODE4) /* gpio_34 (ape_rst_rq) */ 124 124 OMAP3_CORE1_IOPAD(0x2100, PIN_OUTPUT | MUX_MODE4) /* gpio_88 (cmt_rst_rq) */ 125 125 OMAP3_CORE1_IOPAD(0x210a, PIN_OUTPUT | MUX_MODE4) /* gpio_93 (cmt_apeslpx) */ 126 126 >;
+3 -3
arch/arm/boot/dts/omap3-zoom3.dts
··· 98 98 pinctrl-single,pins = < 99 99 OMAP3_CORE1_IOPAD(0x2174, PIN_INPUT_PULLUP | MUX_MODE0) /* uart2_cts.uart2_cts */ 100 100 OMAP3_CORE1_IOPAD(0x2176, PIN_OUTPUT | MUX_MODE0) /* uart2_rts.uart2_rts */ 101 - OMAP3_CORE1_IOPAD(0x217a, WAKEUP_EN | PIN_INPUT | MUX_MODE0) /* uart2_rx.uart2_rx */ 101 + OMAP3_CORE1_IOPAD(0x217a, PIN_INPUT | MUX_MODE0) /* uart2_rx.uart2_rx */ 102 102 OMAP3_CORE1_IOPAD(0x2178, PIN_OUTPUT | MUX_MODE0) /* uart2_tx.uart2_tx */ 103 103 >; 104 104 }; ··· 107 107 pinctrl-single,pins = < 108 108 OMAP3_CORE1_IOPAD(0x219a, PIN_INPUT_PULLDOWN | MUX_MODE0) /* uart3_cts_rctx.uart3_cts_rctx */ 109 109 OMAP3_CORE1_IOPAD(0x219c, PIN_OUTPUT | MUX_MODE0) /* uart3_rts_sd.uart3_rts_sd */ 110 - OMAP3_CORE1_IOPAD(0x219e, WAKEUP_EN | PIN_INPUT | MUX_MODE0) /* uart3_rx_irrx.uart3_rx_irrx */ 110 + OMAP3_CORE1_IOPAD(0x219e, PIN_INPUT | MUX_MODE0) /* uart3_rx_irrx.uart3_rx_irrx */ 111 111 OMAP3_CORE1_IOPAD(0x21a0, PIN_OUTPUT | MUX_MODE0) /* uart3_tx_irtx.uart3_tx_irtx */ 112 112 >; 113 113 }; ··· 125 125 pinctrl-single,pins = < 126 126 OMAP3630_CORE2_IOPAD(0x25d8, PIN_INPUT_PULLUP | MUX_MODE2) /* etk_clk.sdmmc3_clk */ 127 127 OMAP3630_CORE2_IOPAD(0x25e4, PIN_INPUT_PULLUP | MUX_MODE2) /* etk_d4.sdmmc3_dat0 */ 128 - OMAP3630_CORE2_IOPAD(0x25e6, WAKEUP_EN | PIN_INPUT_PULLUP | MUX_MODE2) /* etk_d5.sdmmc3_dat1 */ 128 + OMAP3630_CORE2_IOPAD(0x25e6, PIN_INPUT_PULLUP | MUX_MODE2) /* etk_d5.sdmmc3_dat1 */ 129 129 OMAP3630_CORE2_IOPAD(0x25e8, PIN_INPUT_PULLUP | MUX_MODE2) /* etk_d6.sdmmc3_dat2 */ 130 130 OMAP3630_CORE2_IOPAD(0x25e2, PIN_INPUT_PULLUP | MUX_MODE2) /* etk_d3.sdmmc3_dat3 */ 131 131 >;
+46 -2
arch/arm/boot/dts/omap5-board-common.dtsi
··· 14 14 display0 = &hdmi0; 15 15 }; 16 16 17 + vmain: fixedregulator-vmain { 18 + compatible = "regulator-fixed"; 19 + regulator-name = "vmain"; 20 + regulator-min-microvolt = <5000000>; 21 + regulator-max-microvolt = <5000000>; 22 + }; 23 + 24 + vsys_cobra: fixedregulator-vsys_cobra { 25 + compatible = "regulator-fixed"; 26 + regulator-name = "vsys_cobra"; 27 + vin-supply = <&vmain>; 28 + regulator-min-microvolt = <5000000>; 29 + regulator-max-microvolt = <5000000>; 30 + }; 31 + 32 + vdds_1v8_main: fixedregulator-vdds_1v8_main { 33 + compatible = "regulator-fixed"; 34 + regulator-name = "vdds_1v8_main"; 35 + vin-supply = <&smps7_reg>; 36 + regulator-min-microvolt = <1800000>; 37 + regulator-max-microvolt = <1800000>; 38 + }; 39 + 17 40 vmmcsd_fixed: fixedregulator-mmcsd { 18 41 compatible = "regulator-fixed"; 19 42 regulator-name = "vmmcsd_fixed"; ··· 332 309 333 310 wlcore_irq_pin: pinmux_wlcore_irq_pin { 334 311 pinctrl-single,pins = < 335 - OMAP5_IOPAD(0x40, WAKEUP_EN | PIN_INPUT_PULLUP | MUX_MODE6) /* llia_wakereqin.gpio1_wk14 */ 312 + OMAP5_IOPAD(0x40, PIN_INPUT_PULLUP | MUX_MODE6) /* llia_wakereqin.gpio1_wk14 */ 336 313 >; 337 314 }; 338 315 }; ··· 431 408 interrupt-names = "short-irq"; 432 409 433 410 ti,ldo6-vibrator; 411 + 412 + smps123-in-supply = <&vsys_cobra>; 413 + smps45-in-supply = <&vsys_cobra>; 414 + smps6-in-supply = <&vsys_cobra>; 415 + smps7-in-supply = <&vsys_cobra>; 416 + smps8-in-supply = <&vsys_cobra>; 417 + smps9-in-supply = <&vsys_cobra>; 418 + smps10_out2-in-supply = <&vsys_cobra>; 419 + smps10_out1-in-supply = <&vsys_cobra>; 420 + ldo1-in-supply = <&vsys_cobra>; 421 + ldo2-in-supply = <&vsys_cobra>; 422 + ldo3-in-supply = <&vdds_1v8_main>; 423 + ldo4-in-supply = <&vdds_1v8_main>; 424 + ldo5-in-supply = <&vsys_cobra>; 425 + ldo6-in-supply = <&vdds_1v8_main>; 426 + ldo7-in-supply = <&vsys_cobra>; 427 + ldo8-in-supply = <&vsys_cobra>; 428 + ldo9-in-supply = <&vmmcsd_fixed>; 429 + ldoln-in-supply = <&vsys_cobra>; 430 + ldousb-in-supply = <&vsys_cobra>; 434 431 435 432 regulators { 436 433 smps123_reg: smps123 { ··· 643 600 pinctrl-0 = <&twl6040_pins>; 644 601 645 602 interrupts = <GIC_SPI 119 IRQ_TYPE_NONE>; /* IRQ_SYS_2N cascaded to gic */ 646 - ti,audpwron-gpio = <&gpio5 13 GPIO_ACTIVE_HIGH>; /* gpio line 141 */ 603 + 604 + /* audpwron gpio defined in the board specific dts */ 647 605 648 606 vio-supply = <&smps7_reg>; 649 607 v2v1-supply = <&smps9_reg>;
+26
arch/arm/boot/dts/omap5-igep0050.dts
··· 35 35 }; 36 36 }; 37 37 38 + /* LDO4 is VPP1 - ball AD9 */ 39 + &ldo4_reg { 40 + regulator-min-microvolt = <2000000>; 41 + regulator-max-microvolt = <2000000>; 42 + }; 43 + 44 + /* 45 + * LDO7 is used for HDMI: VDDA_DSIPORTA - ball AA33, VDDA_DSIPORTC - ball AE33, 46 + * VDDA_HDMI - ball AN25 47 + */ 48 + &ldo7_reg { 49 + status = "okay"; 50 + regulator-min-microvolt = <1800000>; 51 + regulator-max-microvolt = <1800000>; 52 + }; 53 + 38 54 &omap5_pmx_core { 39 55 i2c4_pins: pinmux_i2c4_pins { 40 56 pinctrl-single,pins = < ··· 68 52 <&gpio7 3 0>; /* 195, SDA */ 69 53 }; 70 54 55 + &twl6040 { 56 + ti,audpwron-gpio = <&gpio5 16 GPIO_ACTIVE_HIGH>; /* gpio line 144 */ 57 + }; 58 + 59 + &twl6040_pins { 60 + pinctrl-single,pins = < 61 + OMAP5_IOPAD(0x1c4, PIN_OUTPUT | MUX_MODE6) /* mcspi1_somi.gpio5_144 */ 62 + OMAP5_IOPAD(0x1ca, PIN_OUTPUT | MUX_MODE6) /* perslimbus2_clock.gpio5_145 */ 63 + >; 64 + };
+10
arch/arm/boot/dts/omap5-uevm.dts
··· 51 51 <&gpio9 1 GPIO_ACTIVE_HIGH>, /* TCA6424A P00, LS OE */ 52 52 <&gpio7 1 GPIO_ACTIVE_HIGH>; /* GPIO 193, HPD */ 53 53 }; 54 + 55 + &twl6040 { 56 + ti,audpwron-gpio = <&gpio5 13 GPIO_ACTIVE_HIGH>; /* gpio line 141 */ 57 + }; 58 + 59 + &twl6040_pins { 60 + pinctrl-single,pins = < 61 + OMAP5_IOPAD(0x1be, PIN_OUTPUT | MUX_MODE6) /* mcspi1_somi.gpio5_141 */ 62 + >; 63 + };
+1
arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
··· 136 136 &gmac1 { 137 137 status = "okay"; 138 138 phy-mode = "rgmii"; 139 + phy-handle = <&phy1>; 139 140 140 141 snps,reset-gpio = <&porta 0 GPIO_ACTIVE_LOW>; 141 142 snps,reset-active-low;
+3
arch/arm/boot/dts/stih407-family.dtsi
··· 24 24 compatible = "shared-dma-pool"; 25 25 reg = <0x40000000 0x01000000>; 26 26 no-map; 27 + status = "disabled"; 27 28 }; 28 29 29 30 gp1_reserved: rproc@41000000 { 30 31 compatible = "shared-dma-pool"; 31 32 reg = <0x41000000 0x01000000>; 32 33 no-map; 34 + status = "disabled"; 33 35 }; 34 36 35 37 audio_reserved: rproc@42000000 { 36 38 compatible = "shared-dma-pool"; 37 39 reg = <0x42000000 0x01000000>; 38 40 no-map; 41 + status = "disabled"; 39 42 }; 40 43 41 44 dmu_reserved: rproc@43000000 {
-2
arch/arm/boot/dts/sun6i-a31s-primo81.dts
··· 176 176 }; 177 177 178 178 &reg_dc1sw { 179 - regulator-min-microvolt = <3000000>; 180 - regulator-max-microvolt = <3000000>; 181 179 regulator-name = "vcc-lcd"; 182 180 }; 183 181
-2
arch/arm/boot/dts/sun6i-a31s-yones-toptech-bs1078-v2.dts
··· 135 135 136 136 &reg_dc1sw { 137 137 regulator-name = "vcc-lcd-usb2"; 138 - regulator-min-microvolt = <3000000>; 139 - regulator-max-microvolt = <3000000>; 140 138 }; 141 139 142 140 &reg_dc5ldo {
+1
arch/arm/configs/exynos_defconfig
··· 82 82 CONFIG_INPUT_MISC=y 83 83 CONFIG_INPUT_MAX77693_HAPTIC=y 84 84 CONFIG_INPUT_MAX8997_HAPTIC=y 85 + CONFIG_KEYBOARD_SAMSUNG=y 85 86 CONFIG_SERIAL_8250=y 86 87 CONFIG_SERIAL_SAMSUNG=y 87 88 CONFIG_SERIAL_SAMSUNG_CONSOLE=y
+1
arch/arm/configs/multi_v7_defconfig
··· 264 264 CONFIG_KEYBOARD_SPEAR=y 265 265 CONFIG_KEYBOARD_ST_KEYSCAN=y 266 266 CONFIG_KEYBOARD_CROS_EC=m 267 + CONFIG_KEYBOARD_SAMSUNG=m 267 268 CONFIG_MOUSE_PS2_ELANTECH=y 268 269 CONFIG_MOUSE_CYAPA=m 269 270 CONFIG_MOUSE_ELAN_I2C=y
+1
arch/arm/include/asm/pgtable-2level.h
··· 193 193 194 194 #define pmd_large(pmd) (pmd_val(pmd) & 2) 195 195 #define pmd_bad(pmd) (pmd_val(pmd) & 2) 196 + #define pmd_present(pmd) (pmd_val(pmd)) 196 197 197 198 #define copy_pmd(pmdpd,pmdps) \ 198 199 do { \
+3 -2
arch/arm/include/asm/pgtable-3level.h
··· 211 211 : !!(pmd_val(pmd) & (val))) 212 212 #define pmd_isclear(pmd, val) (!(pmd_val(pmd) & (val))) 213 213 214 + #define pmd_present(pmd) (pmd_isset((pmd), L_PMD_SECT_VALID)) 214 215 #define pmd_young(pmd) (pmd_isset((pmd), PMD_SECT_AF)) 215 216 #define pte_special(pte) (pte_isset((pte), L_PTE_SPECIAL)) 216 217 static inline pte_t pte_mkspecial(pte_t pte) ··· 250 249 #define pfn_pmd(pfn,prot) (__pmd(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))) 251 250 #define mk_pmd(page,prot) pfn_pmd(page_to_pfn(page),prot) 252 251 253 - /* represent a notpresent pmd by zero, this is used by pmdp_invalidate */ 252 + /* represent a notpresent pmd by faulting entry, this is used by pmdp_invalidate */ 254 253 static inline pmd_t pmd_mknotpresent(pmd_t pmd) 255 254 { 256 - return __pmd(0); 255 + return __pmd(pmd_val(pmd) & ~L_PMD_SECT_VALID); 257 256 } 258 257 259 258 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
-1
arch/arm/include/asm/pgtable.h
··· 182 182 #define pgd_offset_k(addr) pgd_offset(&init_mm, addr) 183 183 184 184 #define pmd_none(pmd) (!pmd_val(pmd)) 185 - #define pmd_present(pmd) (pmd_val(pmd)) 186 185 187 186 static inline pte_t *pmd_page_vaddr(pmd_t pmd) 188 187 {
+1 -1
arch/arm/kernel/smp.c
··· 486 486 487 487 static void smp_cross_call(const struct cpumask *target, unsigned int ipinr) 488 488 { 489 - trace_ipi_raise(target, ipi_types[ipinr]); 489 + trace_ipi_raise_rcuidle(target, ipi_types[ipinr]); 490 490 __smp_cross_call(target, ipinr); 491 491 } 492 492
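The _rcuidle variant matters here because plain tracepoints rely on RCU, which is not watching on the path into idle where smp_cross_call() can run; every tracepoint declaration also generates a trace_<name>_rcuidle() form that re-enters an RCU-watching section around the callback iteration. The power_domain_target change further down uses the same variant for the same reason. A hedged fragment with a hypothetical tracepoint trace_demo_event:

    /* Normal context, RCU watching: the plain form is fine. */
    trace_demo_event(cpu);

    /* Idle path, RCU not watching: the plain form would trigger an
     * RCU lockdep splat, so use the generated _rcuidle variant. */
    trace_demo_event_rcuidle(cpu);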
-1
arch/arm/mach-exynos/Kconfig
··· 61 61 select CLKSRC_SAMSUNG_PWM if CPU_EXYNOS4210 62 62 select CPU_EXYNOS4210 63 63 select GIC_NON_BANKED 64 - select KEYBOARD_SAMSUNG if INPUT_KEYBOARD 65 64 select MIGHT_HAVE_CACHE_L2X0 66 65 help 67 66 Samsung EXYNOS4 (Cortex-A9) SoC based systems
+1 -1
arch/arm/mach-imx/mach-imx6ul.c
··· 46 46 static void __init imx6ul_enet_phy_init(void) 47 47 { 48 48 if (IS_BUILTIN(CONFIG_PHYLIB)) 49 - phy_register_fixup_for_uid(PHY_ID_KSZ8081, 0xffffffff, 49 + phy_register_fixup_for_uid(PHY_ID_KSZ8081, MICREL_PHY_ID_MASK, 50 50 ksz8081_phy_fixup); 51 51 } 52 52
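phy_register_fixup_for_uid() matches a PHY when (phy_id & mask) == (uid & mask); the old 0xffffffff mask demanded an exact ID and so missed KSZ8081 silicon revisions that differ only in the low revision bits, while MICREL_PHY_ID_MASK leaves those bits out of the comparison. A stand-alone illustration of the matching rule; the two constants mirror the kernel's micrel_phy.h, and the revision value is made up:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PHY_ID_KSZ8081     0x00221560u  /* include/linux/micrel_phy.h */
    #define MICREL_PHY_ID_MASK 0x00fffff0u  /* masks off the revision bits */

    /* The comparison phylib makes when deciding whether a registered
     * fixup applies to a discovered PHY. */
    static bool fixup_matches(uint32_t phy_id, uint32_t uid, uint32_t mask)
    {
        return (phy_id & mask) == (uid & mask);
    }

    int main(void)
    {
        uint32_t rev_b = PHY_ID_KSZ8081 | 0x1;  /* hypothetical revision */

        printf("exact mask matches:  %d\n",
               fixup_matches(rev_b, PHY_ID_KSZ8081, 0xffffffffu));
        printf("MICREL mask matches: %d\n",
               fixup_matches(rev_b, PHY_ID_KSZ8081, MICREL_PHY_ID_MASK));
        return 0;
    }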
+3 -3
arch/arm/mach-omap1/ams-delta-fiq-handler.S
··· 43 43 #define OTHERS_MASK (MODEM_IRQ_MASK | HOOK_SWITCH_MASK) 44 44 45 45 /* IRQ handler register bitmasks */ 46 - #define DEFERRED_FIQ_MASK (0x1 << (INT_DEFERRED_FIQ % IH2_BASE)) 47 - #define GPIO_BANK1_MASK (0x1 << INT_GPIO_BANK1) 46 + #define DEFERRED_FIQ_MASK OMAP_IRQ_BIT(INT_DEFERRED_FIQ) 47 + #define GPIO_BANK1_MASK OMAP_IRQ_BIT(INT_GPIO_BANK1) 48 48 49 49 /* Driver buffer byte offsets */ 50 50 #define BUF_MASK (FIQ_MASK * 4) ··· 110 110 mov r8, #2 @ reset FIQ agreement 111 111 str r8, [r12, #IRQ_CONTROL_REG_OFFSET] 112 112 113 - cmp r10, #INT_GPIO_BANK1 @ is it GPIO bank interrupt? 113 + cmp r10, #(INT_GPIO_BANK1 - NR_IRQS_LEGACY) @ is it GPIO interrupt? 114 114 beq gpio @ yes - process it 115 115 116 116 mov r8, #1
+3 -2
arch/arm/mach-omap1/ams-delta-fiq.c
··· 109 109 * Since no set_type() method is provided by OMAP irq chip, 110 110 * switch to edge triggered interrupt type manually. 111 111 */ 112 - offset = IRQ_ILR0_REG_OFFSET + INT_DEFERRED_FIQ * 0x4; 112 + offset = IRQ_ILR0_REG_OFFSET + 113 + ((INT_DEFERRED_FIQ - NR_IRQS_LEGACY) & 0x1f) * 0x4; 113 114 val = omap_readl(DEFERRED_FIQ_IH_BASE + offset) & ~(1 << 1); 114 115 omap_writel(val, DEFERRED_FIQ_IH_BASE + offset); 115 116 ··· 150 149 /* 151 150 * Redirect GPIO interrupts to FIQ 152 151 */ 153 - offset = IRQ_ILR0_REG_OFFSET + INT_GPIO_BANK1 * 0x4; 152 + offset = IRQ_ILR0_REG_OFFSET + (INT_GPIO_BANK1 - NR_IRQS_LEGACY) * 0x4; 154 153 val = omap_readl(OMAP_IH1_BASE + offset) | 1; 155 154 omap_writel(val, OMAP_IH1_BASE + offset); 156 155 }
+2
arch/arm/mach-omap1/include/mach/ams-delta-fiq.h
··· 14 14 #ifndef __AMS_DELTA_FIQ_H 15 15 #define __AMS_DELTA_FIQ_H 16 16 17 + #include <mach/irqs.h> 18 + 17 19 /* 18 20 * Interrupt number used for passing control from FIQ to IRQ. 19 21 * IRQ12, described as reserved, has been selected.
+12
arch/arm/mach-omap2/Kconfig
··· 17 17 select PM_OPP if PM 18 18 select PM if CPU_IDLE 19 19 select SOC_HAS_OMAP2_SDRC 20 + select ARM_ERRATA_430973 20 21 21 22 config ARCH_OMAP4 22 23 bool "TI OMAP4" ··· 37 36 select PM if CPU_IDLE 38 37 select ARM_ERRATA_754322 39 38 select ARM_ERRATA_775420 39 + select OMAP_INTERCONNECT 40 40 41 41 config SOC_OMAP5 42 42 bool "TI OMAP5" ··· 69 67 select HAVE_ARM_SCU 70 68 select GENERIC_CLOCKEVENTS_BROADCAST 71 69 select HAVE_ARM_TWD 70 + select ARM_ERRATA_754322 71 + select ARM_ERRATA_775420 72 72 73 73 config SOC_DRA7XX 74 74 bool "TI DRA7XX" ··· 243 239 endmenu 244 240 245 241 endif 242 + 243 + config OMAP5_ERRATA_801819 244 + bool "Errata 801819: An eviction from L1 data cache might stall indefinitely" 245 + depends on SOC_OMAP5 || SOC_DRA7XX 246 + help 247 + A livelock can occur in the L2 cache arbitration that might prevent 248 + a snoop from completing. Under certain conditions this can cause the 249 + system to deadlock. 246 250 247 251 endmenu
+1
arch/arm/mach-omap2/omap-secure.h
··· 46 46 47 47 #define OMAP5_DRA7_MON_SET_CNTFRQ_INDEX 0x109 48 48 #define OMAP5_MON_AMBA_IF_INDEX 0x108 49 + #define OMAP5_DRA7_MON_SET_ACR_INDEX 0x107 49 50 50 51 /* Secure PPA(Primary Protected Application) APIs */ 51 52 #define OMAP4_PPA_L2_POR_INDEX 0x23
+43 -5
arch/arm/mach-omap2/omap-smp.c
··· 50 50 return scu_base; 51 51 } 52 52 53 + #ifdef CONFIG_OMAP5_ERRATA_801819 54 + void omap5_erratum_workaround_801819(void) 55 + { 56 + u32 acr, revidr; 57 + u32 acr_mask; 58 + 59 + /* REVIDR[3] indicates erratum fix available on silicon */ 60 + asm volatile ("mrc p15, 0, %0, c0, c0, 6" : "=r" (revidr)); 61 + if (revidr & (0x1 << 3)) 62 + return; 63 + 64 + asm volatile ("mrc p15, 0, %0, c1, c0, 1" : "=r" (acr)); 65 + /* 66 + * BIT(27) - Disables streaming. All write-allocate lines allocate in 67 + * the L1 or L2 cache. 68 + * BIT(25) - Disables streaming. All write-allocate lines allocate in 69 + * the L1 cache. 70 + */ 71 + acr_mask = (0x3 << 25) | (0x3 << 27); 72 + /* do we already have it done.. if yes, skip expensive smc */ 73 + if ((acr & acr_mask) == acr_mask) 74 + return; 75 + 76 + acr |= acr_mask; 77 + omap_smc1(OMAP5_DRA7_MON_SET_ACR_INDEX, acr); 78 + 79 + pr_debug("%s: ARM erratum workaround 801819 applied on CPU%d\n", 80 + __func__, smp_processor_id()); 81 + } 82 + #else 83 + static inline void omap5_erratum_workaround_801819(void) { } 84 + #endif 85 + 53 86 static void omap4_secondary_init(unsigned int cpu) 54 87 { 55 88 /* ··· 97 64 omap_secure_dispatcher(OMAP4_PPA_CPU_ACTRL_SMP_INDEX, 98 65 4, 0, 0, 0, 0, 0); 99 66 100 - /* 101 - * Configure the CNTFRQ register for the secondary cpu's which 102 - * indicates the frequency of the cpu local timers. 103 - */ 104 - if (soc_is_omap54xx() || soc_is_dra7xx()) 67 + if (soc_is_omap54xx() || soc_is_dra7xx()) { 68 + /* 69 + * Configure the CNTFRQ register for the secondary cpu's which 70 + * indicates the frequency of the cpu local timers. 71 + */ 105 72 set_cntfreq(); 73 + /* Configure ACR to disable streaming WA for 801819 */ 74 + omap5_erratum_workaround_801819(); 75 + } 106 76 107 77 /* 108 78 * Synchronise with the boot thread. ··· 254 218 255 219 if (cpu_is_omap446x()) 256 220 startup_addr = omap4460_secondary_startup; 221 + if (soc_is_dra74x() || soc_is_omap54xx()) 222 + omap5_erratum_workaround_801819(); 257 223 258 224 /* 259 225 * Write the address of secondary startup routine into the
+5 -4
arch/arm/mach-omap2/powerdomain.c
··· 186 186 trace_state = (PWRDM_TRACE_STATES_FLAG | 187 187 ((next & OMAP_POWERSTATE_MASK) << 8) | 188 188 ((prev & OMAP_POWERSTATE_MASK) << 0)); 189 - trace_power_domain_target(pwrdm->name, trace_state, 190 - smp_processor_id()); 189 + trace_power_domain_target_rcuidle(pwrdm->name, 190 + trace_state, 191 + smp_processor_id()); 191 192 } 192 193 break; 193 194 default: ··· 524 523 525 524 if (arch_pwrdm && arch_pwrdm->pwrdm_set_next_pwrst) { 526 525 /* Trace the pwrdm desired target state */ 527 - trace_power_domain_target(pwrdm->name, pwrst, 528 - smp_processor_id()); 526 + trace_power_domain_target_rcuidle(pwrdm->name, pwrst, 527 + smp_processor_id()); 529 528 /* Program the pwrdm desired target state */ 530 529 ret = arch_pwrdm->pwrdm_set_next_pwrst(pwrdm, pwrst); 531 530 }
+2 -74
arch/arm/mach-omap2/powerdomains7xx_data.c
··· 36 36 .prcm_offs = DRA7XX_PRM_IVA_INST, 37 37 .prcm_partition = DRA7XX_PRM_PARTITION, 38 38 .pwrsts = PWRSTS_OFF_ON, 39 - .pwrsts_logic_ret = PWRSTS_OFF, 40 39 .banks = 4, 41 - .pwrsts_mem_ret = { 42 - [0] = PWRSTS_OFF_RET, /* hwa_mem */ 43 - [1] = PWRSTS_OFF_RET, /* sl2_mem */ 44 - [2] = PWRSTS_OFF_RET, /* tcm1_mem */ 45 - [3] = PWRSTS_OFF_RET, /* tcm2_mem */ 46 - }, 47 40 .pwrsts_mem_on = { 48 41 [0] = PWRSTS_ON, /* hwa_mem */ 49 42 [1] = PWRSTS_ON, /* sl2_mem */ ··· 69 76 .prcm_offs = DRA7XX_PRM_IPU_INST, 70 77 .prcm_partition = DRA7XX_PRM_PARTITION, 71 78 .pwrsts = PWRSTS_OFF_ON, 72 - .pwrsts_logic_ret = PWRSTS_OFF, 73 79 .banks = 2, 74 - .pwrsts_mem_ret = { 75 - [0] = PWRSTS_OFF_RET, /* aessmem */ 76 - [1] = PWRSTS_OFF_RET, /* periphmem */ 77 - }, 78 80 .pwrsts_mem_on = { 79 81 [0] = PWRSTS_ON, /* aessmem */ 80 82 [1] = PWRSTS_ON, /* periphmem */ ··· 83 95 .prcm_offs = DRA7XX_PRM_DSS_INST, 84 96 .prcm_partition = DRA7XX_PRM_PARTITION, 85 97 .pwrsts = PWRSTS_OFF_ON, 86 - .pwrsts_logic_ret = PWRSTS_OFF, 87 98 .banks = 1, 88 - .pwrsts_mem_ret = { 89 - [0] = PWRSTS_OFF_RET, /* dss_mem */ 90 - }, 91 99 .pwrsts_mem_on = { 92 100 [0] = PWRSTS_ON, /* dss_mem */ 93 101 }, ··· 95 111 .name = "l4per_pwrdm", 96 112 .prcm_offs = DRA7XX_PRM_L4PER_INST, 97 113 .prcm_partition = DRA7XX_PRM_PARTITION, 98 - .pwrsts = PWRSTS_RET_ON, 99 - .pwrsts_logic_ret = PWRSTS_RET, 114 + .pwrsts = PWRSTS_ON, 100 115 .banks = 2, 101 - .pwrsts_mem_ret = { 102 - [0] = PWRSTS_OFF_RET, /* nonretained_bank */ 103 - [1] = PWRSTS_OFF_RET, /* retained_bank */ 104 - }, 105 116 .pwrsts_mem_on = { 106 117 [0] = PWRSTS_ON, /* nonretained_bank */ 107 118 [1] = PWRSTS_ON, /* retained_bank */ ··· 111 132 .prcm_partition = DRA7XX_PRM_PARTITION, 112 133 .pwrsts = PWRSTS_OFF_ON, 113 134 .banks = 1, 114 - .pwrsts_mem_ret = { 115 - [0] = PWRSTS_OFF_RET, /* gpu_mem */ 116 - }, 117 135 .pwrsts_mem_on = { 118 136 [0] = PWRSTS_ON, /* gpu_mem */ 119 137 }, ··· 124 148 .prcm_partition = DRA7XX_PRM_PARTITION, 125 149 .pwrsts = PWRSTS_ON, 126 150 .banks = 1, 127 - .pwrsts_mem_ret = { 128 - }, 129 151 .pwrsts_mem_on = { 130 152 [0] = PWRSTS_ON, /* wkup_bank */ 131 153 }, ··· 135 161 .prcm_offs = DRA7XX_PRM_CORE_INST, 136 162 .prcm_partition = DRA7XX_PRM_PARTITION, 137 163 .pwrsts = PWRSTS_ON, 138 - .pwrsts_logic_ret = PWRSTS_RET, 139 164 .banks = 5, 140 - .pwrsts_mem_ret = { 141 - [0] = PWRSTS_OFF_RET, /* core_nret_bank */ 142 - [1] = PWRSTS_OFF_RET, /* core_ocmram */ 143 - [2] = PWRSTS_OFF_RET, /* core_other_bank */ 144 - [3] = PWRSTS_OFF_RET, /* ipu_l2ram */ 145 - [4] = PWRSTS_OFF_RET, /* ipu_unicache */ 146 - }, 147 165 .pwrsts_mem_on = { 148 166 [0] = PWRSTS_ON, /* core_nret_bank */ 149 167 [1] = PWRSTS_ON, /* core_ocmram */ ··· 192 226 .prcm_offs = DRA7XX_PRM_VPE_INST, 193 227 .prcm_partition = DRA7XX_PRM_PARTITION, 194 228 .pwrsts = PWRSTS_OFF_ON, 195 - .pwrsts_logic_ret = PWRSTS_OFF, 196 229 .banks = 1, 197 - .pwrsts_mem_ret = { 198 - [0] = PWRSTS_OFF_RET, /* vpe_bank */ 199 - }, 200 230 .pwrsts_mem_on = { 201 231 [0] = PWRSTS_ON, /* vpe_bank */ 202 232 }, ··· 222 260 .name = "l3init_pwrdm", 223 261 .prcm_offs = DRA7XX_PRM_L3INIT_INST, 224 262 .prcm_partition = DRA7XX_PRM_PARTITION, 225 - .pwrsts = PWRSTS_RET_ON, 226 - .pwrsts_logic_ret = PWRSTS_RET, 263 + .pwrsts = PWRSTS_ON, 227 264 .banks = 3, 228 - .pwrsts_mem_ret = { 229 - [0] = PWRSTS_OFF_RET, /* gmac_bank */ 230 - [1] = PWRSTS_OFF_RET, /* l3init_bank1 */ 231 - [2] = PWRSTS_OFF_RET, /* l3init_bank2 */ 232 - }, 233 265 .pwrsts_mem_on = { 234 266 [0] = PWRSTS_ON, /* gmac_bank */ 
235 267 [1] = PWRSTS_ON, /* l3init_bank1 */ ··· 239 283 .prcm_partition = DRA7XX_PRM_PARTITION, 240 284 .pwrsts = PWRSTS_OFF_ON, 241 285 .banks = 1, 242 - .pwrsts_mem_ret = { 243 - [0] = PWRSTS_OFF_RET, /* eve3_bank */ 244 - }, 245 286 .pwrsts_mem_on = { 246 287 [0] = PWRSTS_ON, /* eve3_bank */ 247 288 }, ··· 252 299 .prcm_partition = DRA7XX_PRM_PARTITION, 253 300 .pwrsts = PWRSTS_OFF_ON, 254 301 .banks = 1, 255 - .pwrsts_mem_ret = { 256 - [0] = PWRSTS_OFF_RET, /* emu_bank */ 257 - }, 258 302 .pwrsts_mem_on = { 259 303 [0] = PWRSTS_ON, /* emu_bank */ 260 304 }, ··· 264 314 .prcm_partition = DRA7XX_PRM_PARTITION, 265 315 .pwrsts = PWRSTS_OFF_ON, 266 316 .banks = 3, 267 - .pwrsts_mem_ret = { 268 - [0] = PWRSTS_OFF_RET, /* dsp2_edma */ 269 - [1] = PWRSTS_OFF_RET, /* dsp2_l1 */ 270 - [2] = PWRSTS_OFF_RET, /* dsp2_l2 */ 271 - }, 272 317 .pwrsts_mem_on = { 273 318 [0] = PWRSTS_ON, /* dsp2_edma */ 274 319 [1] = PWRSTS_ON, /* dsp2_l1 */ ··· 279 334 .prcm_partition = DRA7XX_PRM_PARTITION, 280 335 .pwrsts = PWRSTS_OFF_ON, 281 336 .banks = 3, 282 - .pwrsts_mem_ret = { 283 - [0] = PWRSTS_OFF_RET, /* dsp1_edma */ 284 - [1] = PWRSTS_OFF_RET, /* dsp1_l1 */ 285 - [2] = PWRSTS_OFF_RET, /* dsp1_l2 */ 286 - }, 287 337 .pwrsts_mem_on = { 288 338 [0] = PWRSTS_ON, /* dsp1_edma */ 289 339 [1] = PWRSTS_ON, /* dsp1_l1 */ ··· 294 354 .prcm_partition = DRA7XX_PRM_PARTITION, 295 355 .pwrsts = PWRSTS_OFF_ON, 296 356 .banks = 1, 297 - .pwrsts_mem_ret = { 298 - [0] = PWRSTS_OFF_RET, /* vip_bank */ 299 - }, 300 357 .pwrsts_mem_on = { 301 358 [0] = PWRSTS_ON, /* vip_bank */ 302 359 }, ··· 307 370 .prcm_partition = DRA7XX_PRM_PARTITION, 308 371 .pwrsts = PWRSTS_OFF_ON, 309 372 .banks = 1, 310 - .pwrsts_mem_ret = { 311 - [0] = PWRSTS_OFF_RET, /* eve4_bank */ 312 - }, 313 373 .pwrsts_mem_on = { 314 374 [0] = PWRSTS_ON, /* eve4_bank */ 315 375 }, ··· 320 386 .prcm_partition = DRA7XX_PRM_PARTITION, 321 387 .pwrsts = PWRSTS_OFF_ON, 322 388 .banks = 1, 323 - .pwrsts_mem_ret = { 324 - [0] = PWRSTS_OFF_RET, /* eve2_bank */ 325 - }, 326 389 .pwrsts_mem_on = { 327 390 [0] = PWRSTS_ON, /* eve2_bank */ 328 391 }, ··· 333 402 .prcm_partition = DRA7XX_PRM_PARTITION, 334 403 .pwrsts = PWRSTS_OFF_ON, 335 404 .banks = 1, 336 - .pwrsts_mem_ret = { 337 - [0] = PWRSTS_OFF_RET, /* eve1_bank */ 338 - }, 339 405 .pwrsts_mem_on = { 340 406 [0] = PWRSTS_ON, /* eve1_bank */ 341 407 },
+5 -2
arch/arm/mach-omap2/timer.c
··· 496 496 __omap_sync32k_timer_init(1, "timer_32k_ck", "ti,timer-alwon", 497 497 2, "timer_sys_ck", NULL, false); 498 498 499 - if (of_have_populated_dt()) 500 - clocksource_probe(); 499 + clocksource_probe(); 501 500 } 502 501 503 502 #if defined(CONFIG_ARCH_OMAP3) || defined(CONFIG_SOC_AM43XX) ··· 504 505 { 505 506 __omap_sync32k_timer_init(12, "secure_32k_fck", "ti,timer-secure", 506 507 2, "timer_sys_ck", NULL, false); 508 + 509 + clocksource_probe(); 507 510 } 508 511 #endif /* CONFIG_ARCH_OMAP3 */ 509 512 ··· 514 513 { 515 514 __omap_sync32k_timer_init(2, "timer_sys_ck", NULL, 516 515 1, "timer_sys_ck", "ti,timer-alwon", true); 516 + 517 + clocksource_probe(); 517 518 } 518 519 #endif 519 520
+1 -1
arch/arm/plat-samsung/devs.c
··· 68 68 #include <linux/platform_data/asoc-s3c.h> 69 69 #include <linux/platform_data/spi-s3c64xx.h> 70 70 71 - static u64 samsung_device_dma_mask = DMA_BIT_MASK(32); 71 + #define samsung_device_dma_mask (*((u64[]) { DMA_BIT_MASK(32) })) 72 72 73 73 /* AC97 */ 74 74 #ifdef CONFIG_CPU_S3C2440
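The one-line replacement above packs a subtle idiom: the macro expands to a dereferenced C99 compound literal, so every expansion site materializes its own anonymous, writable u64, and devices that take &samsung_device_dma_mask stop sharing (and clobbering) a single mask object. A minimal userspace sketch of the idiom, with a simplified DMA_BIT_MASK and the macro name reused purely for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    /* Each expansion creates a distinct unnamed u64, so taking its
     * address in two places yields two different objects. */
    #define samsung_device_dma_mask (*((uint64_t[]) { DMA_BIT_MASK(32) }))

    int main(void)
    {
        uint64_t *a = &samsung_device_dma_mask;
        uint64_t *b = &samsung_device_dma_mask;

        *a = DMA_BIT_MASK(24);  /* a write through one pointer touches only one object */
        printf("a=%#llx b=%#llx distinct=%d\n",
               (unsigned long long)*a, (unsigned long long)*b, a != b);
        return 0;
    }

At file scope, as in the driver, each expansion has static storage duration, so the address stays valid for the life of the kernel image.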
+1 -1
arch/arm64/boot/dts/lg/lg1312.dtsi
··· 125 125 #size-cells = <1>; 126 126 #interrupts-cells = <3>; 127 127 128 - compatible = "arm,amba-bus"; 128 + compatible = "simple-bus"; 129 129 interrupt-parent = <&gic>; 130 130 ranges; 131 131
+1 -1
arch/arm64/boot/dts/rockchip/rk3399.dtsi
··· 163 163 }; 164 164 165 165 amba { 166 - compatible = "arm,amba-bus"; 166 + compatible = "simple-bus"; 167 167 #address-cells = <2>; 168 168 #size-cells = <2>; 169 169 ranges;
+37 -8
arch/arm64/include/asm/kgdb.h
··· 38 38 #endif /* !__ASSEMBLY__ */ 39 39 40 40 /* 41 - * gdb is expecting the following registers layout. 41 + * gdb remote protocol (well most versions of it) expects the following 42 + * register layout. 42 43 * 43 44 * General purpose regs: 44 45 * r0-r30: 64 bit 45 46 * sp,pc : 64 bit 46 - * pstate : 64 bit 47 - * Total: 34 47 + * pstate : 32 bit 48 + * Total: 33 + 1 48 49 * FPU regs: 49 50 * f0-f31: 128 bit 50 - * Total: 32 51 - * Extra regs 52 51 * fpsr & fpcr: 32 bit 53 - * Total: 2 52 + * Total: 32 + 2 54 53 * 54 + * To expand a little on the "most versions of it"... when the gdb remote 55 + * protocol for AArch64 was developed it depended on a statement in the 56 + * Architecture Reference Manual that claimed "SPSR_ELx is a 32-bit register" 57 + * and, as a result, allocated only 32-bits for the PSTATE in the remote 58 + * protocol. In fact this statement is still present in ARM DDI 0487A.i. 59 + * 60 + * Unfortunately "is a 32-bit register" has a very special meaning for 61 + * system registers. It means that "the upper bits, bits[63:32], are 62 + * RES0.". RES0 is heavily used in the ARM architecture documents as a 63 + * way to leave space for future architecture changes. So to translate a 64 + * little for people who don't spend their spare time reading ARM architecture 65 + * manuals, what "is a 32-bit register" actually means in this context is 66 + * "is a 64-bit register but one with no meaning allocated to any of the 67 + * upper 32-bits... *yet*". 68 + * 69 + * Perhaps then we should not be surprised that this has led to some 70 + * confusion. Specifically, a patch, influenced by the above translation, 71 + * that extended PSTATE to 64-bit was accepted into gdb-7.7 but the patch 72 + * was reverted in gdb-7.8.1 and all later releases, when this was 73 + * discovered to be an undocumented protocol change. 74 + * 75 + * So... it is *not* wrong for us to only allocate 32-bits to PSTATE 76 + * here even though the kernel itself allocates 64-bits for the same 77 + * state. That is because this bit of code tells the kernel how the gdb 78 + * remote protocol (well most versions of it) describes the register state. 79 + * 80 + * Note that if you are using one of the versions of gdb that supports 81 + * the gdb-7.7 version of the protocol, you cannot use kgdb directly 82 + * without providing a custom register description (gdb can load new 83 + * protocol descriptions at runtime). 55 84 */ 56 85 57 - #define _GP_REGS 34 86 + #define _GP_REGS 33 58 87 #define _FP_REGS 32 59 - #define _EXTRA_REGS 2 88 + #define _EXTRA_REGS 3 60 89 /* 61 90 * general purpose registers size in bytes. 62 91 * pstate is only 4 bytes. subtract 4 bytes
+37 -5
arch/arm64/include/asm/spinlock.h
··· 30 30 { 31 31 unsigned int tmp; 32 32 arch_spinlock_t lockval; 33 + u32 owner; 34 + 35 + /* 36 + * Ensure prior spin_lock operations to other locks have completed 37 + * on this CPU before we test whether "lock" is locked. 38 + */ 39 + smp_mb(); 40 + owner = READ_ONCE(lock->owner) << 16; 33 41 34 42 asm volatile( 35 43 " sevl\n" 36 44 "1: wfe\n" 37 45 "2: ldaxr %w0, %2\n" 46 + /* Is the lock free? */ 38 47 " eor %w1, %w0, %w0, ror #16\n" 39 - " cbnz %w1, 1b\n" 48 + " cbz %w1, 3f\n" 49 + /* Lock taken -- has there been a subsequent unlock->lock transition? */ 50 + " eor %w1, %w3, %w0, lsl #16\n" 51 + " cbz %w1, 1b\n" 52 + /* 53 + * The owner has been updated, so there was an unlock->lock 54 + * transition that we missed. That means we can rely on the 55 + * store-release of the unlock operation paired with the 56 + * load-acquire of the lock operation to publish any of our 57 + * previous stores to the new lock owner and therefore don't 58 + * need to bother with the writeback below. 59 + */ 60 + " b 4f\n" 61 + "3:\n" 62 + /* 63 + * Serialise against any concurrent lockers by writing back the 64 + * unlocked lock value 65 + */ 40 66 ARM64_LSE_ATOMIC_INSN( 41 67 /* LL/SC */ 42 68 " stxr %w1, %w0, %2\n" 43 - " cbnz %w1, 2b\n", /* Serialise against any concurrent lockers */ 44 - /* LSE atomics */ 45 69 " nop\n" 46 - " nop\n") 70 + " nop\n", 71 + /* LSE atomics */ 72 + " mov %w1, %w0\n" 73 + " cas %w0, %w0, %2\n" 74 + " eor %w1, %w1, %w0\n") 75 + /* Somebody else wrote to the lock, GOTO 10 and reload the value */ 76 + " cbnz %w1, 2b\n" 77 + "4:" 47 78 : "=&r" (lockval), "=&r" (tmp), "+Q" (*lock) 48 - : 79 + : "r" (owner) 49 80 : "memory"); 50 81 } 51 82 ··· 179 148 180 149 static inline int arch_spin_is_locked(arch_spinlock_t *lock) 181 150 { 151 + smp_mb(); /* See arch_spin_unlock_wait */ 182 152 return !arch_spin_value_unlocked(READ_ONCE(*lock)); 183 153 } 184 154
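The rewritten wait loop is easier to follow once the ticket-lock encoding is spelled out: the low halfword is the owner (the ticket currently served), the high halfword the next ticket to hand out, and the "eor ..., ror #16" is zero exactly when the two halves match. The waiter may stop either when the lock is free or when the owner has moved past the value sampled at entry, since an owner change proves a complete unlock->lock handover happened after the wait began. A toy C rendering of just that predicate (field names are stand-ins; no atomics or barriers shown):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed ticket-lock layout: owner in the low half, next in the
     * high half of one 32-bit word. */
    struct ticket_lock {
        uint16_t owner;
        uint16_t next;
    };

    static bool is_unlocked(struct ticket_lock v)
    {
        return v.owner == v.next;   /* "eor wN, wN, wN, ror #16" == 0 */
    }

    /* Stop waiting once the lock is free, or once the owner differs
     * from the snapshot taken on entry: an unlock must have happened. */
    static bool may_stop_waiting(struct ticket_lock v, uint16_t owner_at_entry)
    {
        return is_unlocked(v) || v.owner != owner_at_entry;
    }

    int main(void)
    {
        struct ticket_lock held = { .owner = 1, .next = 2 };

        return !may_stop_waiting(held, 0);  /* owner moved past 0: stop */
    }

The real code additionally needs the leading smp_mb() and the writeback/CAS dance shown above to order the waiter against surrounding critical sections; the sketch only captures the exit condition.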
+13 -1
arch/arm64/kernel/kgdb.c
··· 58 58 { "x30", 8, offsetof(struct pt_regs, regs[30])}, 59 59 { "sp", 8, offsetof(struct pt_regs, sp)}, 60 60 { "pc", 8, offsetof(struct pt_regs, pc)}, 61 - { "pstate", 8, offsetof(struct pt_regs, pstate)}, 61 + /* 62 + * struct pt_regs thinks PSTATE is 64-bits wide but gdb remote 63 + * protocol disagrees. Therefore we must extract only the lower 64 + * 32-bits. Look for the big comment in asm/kgdb.h for more 65 + * detail. 66 + */ 67 + { "pstate", 4, offsetof(struct pt_regs, pstate) 68 + #ifdef CONFIG_CPU_BIG_ENDIAN 69 + + 4 70 + #endif 71 + }, 62 72 { "v0", 16, -1 }, 63 73 { "v1", 16, -1 }, 64 74 { "v2", 16, -1 }, ··· 138 128 memset((char *)gdb_regs, 0, NUMREGBYTES); 139 129 thread_regs = task_pt_regs(task); 140 130 memcpy((void *)gdb_regs, (void *)thread_regs->regs, GP_REG_BYTES); 131 + /* Special case for PSTATE (check comments in asm/kgdb.h for details) */ 132 + dbg_get_reg(33, gdb_regs + GP_REG_BYTES, thread_regs); 141 133 } 142 134 143 135 void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long pc)
+13 -13
arch/arm64/kernel/traps.c
··· 64 64 65 65 /* 66 66 * We need to switch to kernel mode so that we can use __get_user 67 - * to safely read from kernel space. Note that we now dump the 68 - * code first, just in case the backtrace kills us. 67 + * to safely read from kernel space. 69 68 */ 70 69 fs = get_fs(); 71 70 set_fs(KERNEL_DS); ··· 110 111 print_ip_sym(where); 111 112 } 112 113 113 - static void dump_instr(const char *lvl, struct pt_regs *regs) 114 + static void __dump_instr(const char *lvl, struct pt_regs *regs) 114 115 { 115 116 unsigned long addr = instruction_pointer(regs); 116 - mm_segment_t fs; 117 117 char str[sizeof("00000000 ") * 5 + 2 + 1], *p = str; 118 118 int i; 119 - 120 - /* 121 - * We need to switch to kernel mode so that we can use __get_user 122 - * to safely read from kernel space. Note that we now dump the 123 - * code first, just in case the backtrace kills us. 124 - */ 125 - fs = get_fs(); 126 - set_fs(KERNEL_DS); 127 119 128 120 for (i = -4; i < 1; i++) { 129 121 unsigned int val, bad; ··· 129 139 } 130 140 } 131 141 printk("%sCode: %s\n", lvl, str); 142 + } 132 143 133 - set_fs(fs); 144 + static void dump_instr(const char *lvl, struct pt_regs *regs) 145 + { 146 + if (!user_mode(regs)) { 147 + mm_segment_t fs = get_fs(); 148 + set_fs(KERNEL_DS); 149 + __dump_instr(lvl, regs); 150 + set_fs(fs); 151 + } else { 152 + __dump_instr(lvl, regs); 153 + } 134 154 } 135 155 136 156 static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+1 -1
arch/arm64/mm/fault.c
··· 441 441 return 1; 442 442 } 443 443 444 - static struct fault_info { 444 + static const struct fault_info { 445 445 int (*fn)(unsigned long addr, unsigned int esr, struct pt_regs *regs); 446 446 int sig; 447 447 int code;
+2 -1
arch/mips/include/asm/kvm_host.h
··· 74 74 #define KVM_GUEST_KUSEG 0x00000000UL 75 75 #define KVM_GUEST_KSEG0 0x40000000UL 76 76 #define KVM_GUEST_KSEG23 0x60000000UL 77 - #define KVM_GUEST_KSEGX(a) ((_ACAST32_(a)) & 0x60000000) 77 + #define KVM_GUEST_KSEGX(a) ((_ACAST32_(a)) & 0xe0000000) 78 78 #define KVM_GUEST_CPHYSADDR(a) ((_ACAST32_(a)) & 0x1fffffff) 79 79 80 80 #define KVM_GUEST_CKSEG0ADDR(a) (KVM_GUEST_CPHYSADDR(a) | KVM_GUEST_KSEG0) ··· 338 338 #define KVM_MIPS_GUEST_TLB_SIZE 64 339 339 struct kvm_vcpu_arch { 340 340 void *host_ebase, *guest_ebase; 341 + int (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu); 341 342 unsigned long host_stack; 342 343 unsigned long host_gp; 343 344
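The widened mask matters because MIPS32 segments are selected by the top three address bits; 0x60000000 keeps only bits 30:29, so any address with bit 31 set aliases onto a segment value below 0x80000000 and can be mistaken for a guest segment. A two-line demonstration (userspace, illustration only):

    #include <stdint.h>
    #include <stdio.h>

    #define KSEGX_OLD(a) ((uint32_t)(a) & 0x60000000u)  /* drops bit 31 */
    #define KSEGX_NEW(a) ((uint32_t)(a) & 0xe0000000u)  /* top 3 bits   */

    int main(void)
    {
        uint32_t addr = 0x80001000u;  /* outside every guest segment */

        /* Old mask yields 0x0, indistinguishable from KVM_GUEST_KUSEG;
         * the new mask yields 0x80000000, matching no guest segment. */
        printf("old=%#x new=%#x\n", KSEGX_OLD(addr), KSEGX_NEW(addr));
        return 0;
    }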
+13 -6
arch/mips/kvm/emulate.c
··· 1636 1636 if (index < 0) { 1637 1637 vcpu->arch.host_cp0_entryhi = (va & VPN2_MASK); 1638 1638 vcpu->arch.host_cp0_badvaddr = va; 1639 + vcpu->arch.pc = curr_pc; 1639 1640 er = kvm_mips_emulate_tlbmiss_ld(cause, NULL, run, 1640 1641 vcpu); 1641 1642 preempt_enable(); ··· 1648 1647 * invalid exception to the guest 1649 1648 */ 1650 1649 if (!TLB_IS_VALID(*tlb, va)) { 1650 + vcpu->arch.host_cp0_badvaddr = va; 1651 + vcpu->arch.pc = curr_pc; 1651 1652 er = kvm_mips_emulate_tlbinv_ld(cause, NULL, 1652 1653 run, vcpu); 1653 1654 preempt_enable(); ··· 1669 1666 cache, op, base, arch->gprs[base], offset); 1670 1667 er = EMULATE_FAIL; 1671 1668 preempt_enable(); 1672 - goto dont_update_pc; 1669 + goto done; 1673 1670 1674 1671 } 1675 1672 ··· 1697 1694 kvm_err("NO-OP CACHE (cache: %#x, op: %#x, base[%d]: %#lx, offset: %#x\n", 1698 1695 cache, op, base, arch->gprs[base], offset); 1699 1696 er = EMULATE_FAIL; 1700 - preempt_enable(); 1701 - goto dont_update_pc; 1702 1697 } 1703 1698 1704 1699 preempt_enable(); 1700 + done: 1701 + /* Rollback PC only if emulation was unsuccessful */ 1702 + if (er == EMULATE_FAIL) 1703 + vcpu->arch.pc = curr_pc; 1705 1704 1706 1705 dont_update_pc: 1707 - /* Rollback PC */ 1708 - vcpu->arch.pc = curr_pc; 1709 - done: 1706 + /* 1707 + * This is for exceptions whose emulation updates the PC, so do not 1708 + * overwrite the PC under any circumstances 1709 + */ 1710 + 1710 1711 return er; 1711 1712 } 1712 1713
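All the relabeled jumps above enforce one invariant: the saved curr_pc is written back only when emulation failed outright, while paths that inject a guest exception install their own PC and must not be overwritten afterwards. The control flow, reduced to a sketch with stand-in names:

    enum emu_result { EMULATE_DONE, EMULATE_FAIL, EMULATE_EXCEPT };

    /* Snapshot the PC, advance it as emulation normally would, and
     * roll it back only on outright failure; exception paths
     * (EMULATE_EXCEPT) keep whatever PC they installed. */
    static enum emu_result emulate_insn(unsigned long *pc, enum emu_result outcome)
    {
        unsigned long curr_pc = *pc;

        *pc += 4;               /* step over the emulated instruction */
        if (outcome == EMULATE_FAIL)
            *pc = curr_pc;      /* unsuccessful: restore the PC */
        return outcome;
    }

    int main(void)
    {
        unsigned long pc = 0x1000;

        emulate_insn(&pc, EMULATE_FAIL);
        return pc != 0x1000;    /* must have been rolled back */
    }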
+1
arch/mips/kvm/interrupt.h
··· 28 28 #define MIPS_EXC_MAX 12 29 29 /* XXXSL More to follow */ 30 30 31 + extern char __kvm_mips_vcpu_run_end[]; 31 32 extern char mips32_exception[], mips32_exceptionEnd[]; 32 33 extern char mips32_GuestException[], mips32_GuestExceptionEnd[]; 33 34
+1
arch/mips/kvm/locore.S
··· 202 202 203 203 /* Jump to guest */ 204 204 eret 205 + EXPORT(__kvm_mips_vcpu_run_end) 205 206 206 207 VECTOR(MIPSX(exception), unknown) 207 208 /* Find out what mode we came from and jump to the proper handler. */
+10 -1
arch/mips/kvm/mips.c
··· 315 315 memcpy(gebase + offset, mips32_GuestException, 316 316 mips32_GuestExceptionEnd - mips32_GuestException); 317 317 318 + #ifdef MODULE 319 + offset += mips32_GuestExceptionEnd - mips32_GuestException; 320 + memcpy(gebase + offset, (char *)__kvm_mips_vcpu_run, 321 + __kvm_mips_vcpu_run_end - (char *)__kvm_mips_vcpu_run); 322 + vcpu->arch.vcpu_run = gebase + offset; 323 + #else 324 + vcpu->arch.vcpu_run = __kvm_mips_vcpu_run; 325 + #endif 326 + 318 327 /* Invalidate the icache for these ranges */ 319 328 local_flush_icache_range((unsigned long)gebase, 320 329 (unsigned long)gebase + ALIGN(size, PAGE_SIZE)); ··· 413 404 /* Disable hardware page table walking while in guest */ 414 405 htw_stop(); 415 406 416 - r = __kvm_mips_vcpu_run(run, vcpu); 407 + r = vcpu->arch.vcpu_run(run, vcpu); 417 408 418 409 /* Re-enable HTW before enabling interrupts */ 419 410 htw_start();
+1
arch/s390/include/asm/kvm_host.h
··· 245 245 u32 exit_stop_request; 246 246 u32 exit_validity; 247 247 u32 exit_instruction; 248 + u32 exit_pei; 248 249 u32 halt_successful_poll; 249 250 u32 halt_attempted_poll; 250 251 u32 halt_poll_invalid;
+2
arch/s390/kvm/intercept.c
··· 341 341 342 342 static int handle_partial_execution(struct kvm_vcpu *vcpu) 343 343 { 344 + vcpu->stat.exit_pei++; 345 + 344 346 if (vcpu->arch.sie_block->ipa == 0xb254) /* MVPG */ 345 347 return handle_mvpg_pei(vcpu); 346 348 if (vcpu->arch.sie_block->ipa >> 8 == 0xae) /* SIGP */
+2 -1
arch/s390/kvm/kvm-s390.c
··· 61 61 { "exit_external_request", VCPU_STAT(exit_external_request) }, 62 62 { "exit_external_interrupt", VCPU_STAT(exit_external_interrupt) }, 63 63 { "exit_instruction", VCPU_STAT(exit_instruction) }, 64 + { "exit_pei", VCPU_STAT(exit_pei) }, 64 65 { "exit_program_interruption", VCPU_STAT(exit_program_interruption) }, 65 66 { "exit_instr_and_program_int", VCPU_STAT(exit_instr_and_program) }, 66 67 { "halt_successful_poll", VCPU_STAT(halt_successful_poll) }, ··· 658 657 kvm->arch.model.cpuid = proc->cpuid; 659 658 lowest_ibc = sclp.ibc >> 16 & 0xfff; 660 659 unblocked_ibc = sclp.ibc & 0xfff; 661 - if (lowest_ibc) { 660 + if (lowest_ibc && proc->ibc) { 662 661 if (proc->ibc > unblocked_ibc) 663 662 kvm->arch.model.ibc = unblocked_ibc; 664 663 else if (proc->ibc < lowest_ibc)
+9
arch/x86/Kconfig
··· 2439 2439 2440 2440 source "drivers/pci/Kconfig" 2441 2441 2442 + config ISA_BUS 2443 + bool "ISA-style bus support on modern systems" if EXPERT 2444 + select ISA_BUS_API 2445 + help 2446 + Enables ISA-style drivers on modern systems. This is necessary to 2447 + support PC/104 devices on X86_64 platforms. 2448 + 2449 + If unsure, say N. 2450 + 2442 2451 # x86_64 have no ISA slots, but can have ISA-style DMA. 2443 2452 config ISA_DMA_API 2444 2453 bool "ISA-style DMA support" if (X86_64 && EXPERT)
+11
arch/x86/include/asm/kvm_host.h
··· 27 27 #include <linux/irqbypass.h> 28 28 #include <linux/hyperv.h> 29 29 30 + #include <asm/apic.h> 30 31 #include <asm/pvclock-abi.h> 31 32 #include <asm/desc.h> 32 33 #include <asm/mtrr.h> ··· 1368 1367 } 1369 1368 1370 1369 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} 1370 + 1371 + static inline int kvm_cpu_get_apicid(int mps_cpu) 1372 + { 1373 + #ifdef CONFIG_X86_LOCAL_APIC 1374 + return __default_cpu_present_to_apicid(mps_cpu); 1375 + #else 1376 + WARN_ON_ONCE(1); 1377 + return BAD_APICID; 1378 + #endif 1379 + } 1371 1380 1372 1381 #endif /* _ASM_X86_KVM_HOST_H */
+13 -8
arch/x86/kvm/svm.c
··· 238 238 239 239 /* enable / disable AVIC */ 240 240 static int avic; 241 + #ifdef CONFIG_X86_LOCAL_APIC 241 242 module_param(avic, int, S_IRUGO); 243 + #endif 242 244 243 245 static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0); 244 246 static void svm_flush_tlb(struct kvm_vcpu *vcpu); ··· 983 981 } else 984 982 kvm_disable_tdp(); 985 983 986 - if (avic && (!npt_enabled || !boot_cpu_has(X86_FEATURE_AVIC))) 987 - avic = false; 988 - 989 - if (avic) 990 - pr_info("AVIC enabled\n"); 984 + if (avic) { 985 + if (!npt_enabled || 986 + !boot_cpu_has(X86_FEATURE_AVIC) || 987 + !IS_ENABLED(CONFIG_X86_LOCAL_APIC)) 988 + avic = false; 989 + else 990 + pr_info("AVIC enabled\n"); 991 + } 991 992 992 993 return 0; 993 994 ··· 1329 1324 static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run) 1330 1325 { 1331 1326 u64 entry; 1332 - int h_physical_id = __default_cpu_present_to_apicid(vcpu->cpu); 1327 + int h_physical_id = kvm_cpu_get_apicid(vcpu->cpu); 1333 1328 struct vcpu_svm *svm = to_svm(vcpu); 1334 1329 1335 1330 if (!kvm_vcpu_apicv_active(vcpu)) ··· 1354 1349 { 1355 1350 u64 entry; 1356 1351 /* ID = 0xff (broadcast), ID > 0xff (reserved) */ 1357 - int h_physical_id = __default_cpu_present_to_apicid(cpu); 1352 + int h_physical_id = kvm_cpu_get_apicid(cpu); 1358 1353 struct vcpu_svm *svm = to_svm(vcpu); 1359 1354 1360 1355 if (!kvm_vcpu_apicv_active(vcpu)) ··· 4241 4236 4242 4237 if (avic_vcpu_is_running(vcpu)) 4243 4238 wrmsrl(SVM_AVIC_DOORBELL, 4244 - __default_cpu_present_to_apicid(vcpu->cpu)); 4239 + kvm_cpu_get_apicid(vcpu->cpu)); 4245 4240 else 4246 4241 kvm_vcpu_wake_up(vcpu); 4247 4242 }
+10 -5
arch/x86/kvm/vmx.c
··· 2072 2072 unsigned int dest; 2073 2073 2074 2074 if (!kvm_arch_has_assigned_device(vcpu->kvm) || 2075 - !irq_remapping_cap(IRQ_POSTING_CAP)) 2075 + !irq_remapping_cap(IRQ_POSTING_CAP) || 2076 + !kvm_vcpu_apicv_active(vcpu)) 2076 2077 return; 2077 2078 2078 2079 do { ··· 2181 2180 struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu); 2182 2181 2183 2182 if (!kvm_arch_has_assigned_device(vcpu->kvm) || 2184 - !irq_remapping_cap(IRQ_POSTING_CAP)) 2183 + !irq_remapping_cap(IRQ_POSTING_CAP) || 2184 + !kvm_vcpu_apicv_active(vcpu)) 2185 2185 return; 2186 2186 2187 2187 /* Set SN when the vCPU is preempted */ ··· 10716 10714 struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu); 10717 10715 10718 10716 if (!kvm_arch_has_assigned_device(vcpu->kvm) || 10719 - !irq_remapping_cap(IRQ_POSTING_CAP)) 10717 + !irq_remapping_cap(IRQ_POSTING_CAP) || 10718 + !kvm_vcpu_apicv_active(vcpu)) 10720 10719 return 0; 10721 10720 10722 10721 vcpu->pre_pcpu = vcpu->cpu; ··· 10783 10780 unsigned long flags; 10784 10781 10785 10782 if (!kvm_arch_has_assigned_device(vcpu->kvm) || 10786 - !irq_remapping_cap(IRQ_POSTING_CAP)) 10783 + !irq_remapping_cap(IRQ_POSTING_CAP) || 10784 + !kvm_vcpu_apicv_active(vcpu)) 10787 10785 return; 10788 10786 10789 10787 do { ··· 10837 10833 int idx, ret = -EINVAL; 10838 10834 10839 10835 if (!kvm_arch_has_assigned_device(kvm) || 10840 - !irq_remapping_cap(IRQ_POSTING_CAP)) 10836 + !irq_remapping_cap(IRQ_POSTING_CAP) || 10837 + !kvm_vcpu_apicv_active(kvm->vcpus[0])) 10841 10838 return 0; 10842 10839 10843 10840 idx = srcu_read_lock(&kvm->irq_srcu);
+9 -3
block/blk-lib.c
··· 113 113 ret = submit_bio_wait(type, bio); 114 114 if (ret == -EOPNOTSUPP) 115 115 ret = 0; 116 + bio_put(bio); 116 117 } 117 118 blk_finish_plug(&plug); 118 119 ··· 166 165 } 167 166 } 168 167 169 - if (bio) 168 + if (bio) { 170 169 ret = submit_bio_wait(REQ_WRITE | REQ_WRITE_SAME, bio); 170 + bio_put(bio); 171 + } 171 172 return ret != -EOPNOTSUPP ? ret : 0; 172 173 } 173 174 EXPORT_SYMBOL(blkdev_issue_write_same); ··· 209 206 } 210 207 } 211 208 212 - if (bio) 213 - return submit_bio_wait(WRITE, bio); 209 + if (bio) { 210 + ret = submit_bio_wait(WRITE, bio); 211 + bio_put(bio); 212 + return ret; 213 + } 214 214 return 0; 215 215 } 216 216
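Each hunk above fixes the same reference leak: submit_bio_wait() does not consume the caller's bio reference, so after the wait returns the caller still owns the bio and must bio_put() it. The ownership rule in miniature, with a hypothetical refcounted type:

    #include <stdlib.h>

    struct obj { int refs; };

    static struct obj *obj_alloc(void)
    {
        struct obj *o = malloc(sizeof(*o));

        if (o)
            o->refs = 1;        /* the allocation reference */
        return o;
    }

    static void obj_put(struct obj *o)
    {
        if (--o->refs == 0)
            free(o);
    }

    /* Wait-style submit: takes and drops its own reference internally,
     * leaving the caller's reference untouched. */
    static int submit_and_wait(struct obj *o)
    {
        (void)o;
        return 0;
    }

    int main(void)
    {
        struct obj *o = obj_alloc();
        int ret;

        if (!o)
            return 1;
        ret = submit_and_wait(o);
        obj_put(o);             /* the fix: drop the allocation ref too */
        return ret;
    }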
+8 -9
block/blk-mq.c
··· 1262 1262 1263 1263 blk_queue_split(q, &bio, q->bio_split); 1264 1264 1265 - if (!is_flush_fua && !blk_queue_nomerges(q)) { 1266 - if (blk_attempt_plug_merge(q, bio, &request_count, 1267 - &same_queue_rq)) 1268 - return BLK_QC_T_NONE; 1269 - } else 1270 - request_count = blk_plug_queued_count(q); 1265 + if (!is_flush_fua && !blk_queue_nomerges(q) && 1266 + blk_attempt_plug_merge(q, bio, &request_count, &same_queue_rq)) 1267 + return BLK_QC_T_NONE; 1271 1268 1272 1269 rq = blk_mq_map_request(q, bio, &data); 1273 1270 if (unlikely(!rq)) ··· 1355 1358 1356 1359 blk_queue_split(q, &bio, q->bio_split); 1357 1360 1358 - if (!is_flush_fua && !blk_queue_nomerges(q) && 1359 - blk_attempt_plug_merge(q, bio, &request_count, NULL)) 1360 - return BLK_QC_T_NONE; 1361 + if (!is_flush_fua && !blk_queue_nomerges(q)) { 1362 + if (blk_attempt_plug_merge(q, bio, &request_count, NULL)) 1363 + return BLK_QC_T_NONE; 1364 + } else 1365 + request_count = blk_plug_queued_count(q); 1361 1366 1362 1367 rq = blk_mq_map_request(q, bio, &data); 1363 1368 if (unlikely(!rq))
+8 -136
drivers/acpi/acpica/hwregs.c
··· 306 306 acpi_status acpi_hw_write(u32 value, struct acpi_generic_address *reg) 307 307 { 308 308 u64 address; 309 - u8 access_width; 310 - u32 bit_width; 311 - u8 bit_offset; 312 - u64 value64; 313 - u32 new_value32, old_value32; 314 - u8 index; 315 309 acpi_status status; 316 310 317 311 ACPI_FUNCTION_NAME(hw_write); ··· 317 323 return (status); 318 324 } 319 325 320 - /* Convert access_width into number of bits based */ 321 - 322 - access_width = acpi_hw_get_access_bit_width(reg, 32); 323 - bit_width = reg->bit_offset + reg->bit_width; 324 - bit_offset = reg->bit_offset; 325 - 326 326 /* 327 327 * Two address spaces supported: Memory or IO. PCI_Config is 328 328 * not supported here because the GAS structure is insufficient 329 329 */ 330 - index = 0; 331 - while (bit_width) { 332 - /* 333 - * Use offset style bit reads because "Index * AccessWidth" is 334 - * ensured to be less than 32-bits by acpi_hw_validate_register(). 335 - */ 336 - new_value32 = ACPI_GET_BITS(&value, index * access_width, 337 - ACPI_MASK_BITS_ABOVE_32 338 - (access_width)); 330 + if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) { 331 + status = acpi_os_write_memory((acpi_physical_address) 332 + address, (u64)value, 333 + reg->bit_width); 334 + } else { /* ACPI_ADR_SPACE_SYSTEM_IO, validated earlier */ 339 335 340 - if (bit_offset >= access_width) { 341 - bit_offset -= access_width; 342 - } else { 343 - /* 344 - * Use offset style bit masks because access_width is ensured 345 - * to be less than 32-bits by acpi_hw_validate_register() and 346 - * bit_offset/bit_width is less than access_width here. 347 - */ 348 - if (bit_offset) { 349 - new_value32 &= ACPI_MASK_BITS_BELOW(bit_offset); 350 - } 351 - if (bit_width < access_width) { 352 - new_value32 &= ACPI_MASK_BITS_ABOVE(bit_width); 353 - } 354 - 355 - if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) { 356 - if (bit_offset || bit_width < access_width) { 357 - /* 358 - * Read old values in order not to modify the bits that 359 - * are beyond the register bit_width/bit_offset setting. 360 - */ 361 - status = 362 - acpi_os_read_memory((acpi_physical_address) 363 - address + 364 - index * 365 - ACPI_DIV_8 366 - (access_width), 367 - &value64, 368 - access_width); 369 - old_value32 = (u32)value64; 370 - 371 - /* 372 - * Use offset style bit masks because access_width is 373 - * ensured to be less than 32-bits by 374 - * acpi_hw_validate_register() and bit_offset/bit_width is 375 - * less than access_width here. 376 - */ 377 - if (bit_offset) { 378 - old_value32 &= 379 - ACPI_MASK_BITS_ABOVE 380 - (bit_offset); 381 - bit_offset = 0; 382 - } 383 - if (bit_width < access_width) { 384 - old_value32 &= 385 - ACPI_MASK_BITS_BELOW 386 - (bit_width); 387 - } 388 - 389 - new_value32 |= old_value32; 390 - } 391 - 392 - value64 = (u64)new_value32; 393 - status = 394 - acpi_os_write_memory((acpi_physical_address) 395 - address + 396 - index * 397 - ACPI_DIV_8 398 - (access_width), 399 - value64, access_width); 400 - } else { /* ACPI_ADR_SPACE_SYSTEM_IO, validated earlier */ 401 - 402 - if (bit_offset || bit_width < access_width) { 403 - /* 404 - * Read old values in order not to modify the bits that 405 - * are beyond the register bit_width/bit_offset setting. 
406 - */ 407 - status = 408 - acpi_hw_read_port((acpi_io_address) 409 - address + 410 - index * 411 - ACPI_DIV_8 412 - (access_width), 413 - &old_value32, 414 - access_width); 415 - 416 - /* 417 - * Use offset style bit masks because access_width is 418 - * ensured to be less than 32-bits by 419 - * acpi_hw_validate_register() and bit_offset/bit_width is 420 - * less than access_width here. 421 - */ 422 - if (bit_offset) { 423 - old_value32 &= 424 - ACPI_MASK_BITS_ABOVE 425 - (bit_offset); 426 - bit_offset = 0; 427 - } 428 - if (bit_width < access_width) { 429 - old_value32 &= 430 - ACPI_MASK_BITS_BELOW 431 - (bit_width); 432 - } 433 - 434 - new_value32 |= old_value32; 435 - } 436 - 437 - status = acpi_hw_write_port((acpi_io_address) 438 - address + 439 - index * 440 - ACPI_DIV_8 441 - (access_width), 442 - new_value32, 443 - access_width); 444 - } 445 - } 446 - 447 - /* 448 - * Index * access_width is ensured to be less than 32-bits by 449 - * acpi_hw_validate_register(). 450 - */ 451 - bit_width -= 452 - bit_width > access_width ? access_width : bit_width; 453 - index++; 336 + status = acpi_hw_write_port((acpi_io_address) 337 + address, value, reg->bit_width); 454 338 } 455 339 456 340 ACPI_DEBUG_PRINT((ACPI_DB_IO, 457 341 "Wrote: %8.8X width %2d to %8.8X%8.8X (%s)\n", 458 - value, access_width, ACPI_FORMAT_UINT64(address), 342 + value, reg->bit_width, ACPI_FORMAT_UINT64(address), 459 343 acpi_ut_get_region_name(reg->space_id))); 460 344 461 345 return (status);
+1 -1
drivers/base/Makefile
··· 10 10 obj-y += power/ 11 11 obj-$(CONFIG_HAS_DMA) += dma-mapping.o 12 12 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o 13 - obj-$(CONFIG_ISA) += isa.o 13 + obj-$(CONFIG_ISA_BUS_API) += isa.o 14 14 obj-$(CONFIG_FW_LOADER) += firmware_class.o 15 15 obj-$(CONFIG_NUMA) += node.o 16 16 obj-$(CONFIG_MEMORY_HOTPLUG_SPARSE) += memory.o
+1 -1
drivers/base/isa.c
··· 180 180 return error; 181 181 } 182 182 183 - device_initcall(isa_bus_init); 183 + postcore_initcall(isa_bus_init);
+5 -3
drivers/base/module.c
··· 24 24 25 25 static void module_create_drivers_dir(struct module_kobject *mk) 26 26 { 27 - if (!mk || mk->drivers_dir) 28 - return; 27 + static DEFINE_MUTEX(drivers_dir_mutex); 29 28 30 - mk->drivers_dir = kobject_create_and_add("drivers", &mk->kobj); 29 + mutex_lock(&drivers_dir_mutex); 30 + if (mk && !mk->drivers_dir) 31 + mk->drivers_dir = kobject_create_and_add("drivers", &mk->kobj); 32 + mutex_unlock(&drivers_dir_mutex); 31 33 } 32 34 33 35 void module_add_driver(struct module *mod, struct device_driver *drv)
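The race being closed above: two drivers registering concurrently can both observe drivers_dir == NULL and both call kobject_create_and_add(), leaking one kobject and producing sysfs warnings. Serializing the test-and-create with a function-local static mutex makes the creation idempotent. The same shape with pthreads standing in for the kernel mutex (create_dir() is hypothetical):

    #include <pthread.h>
    #include <stdlib.h>

    struct module_kobject { void *drivers_dir; };

    static void *create_dir(void)
    {
        return malloc(1);       /* stand-in for kobject_create_and_add() */
    }

    static void module_create_drivers_dir(struct module_kobject *mk)
    {
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        pthread_mutex_lock(&lock);
        if (mk && !mk->drivers_dir)
            mk->drivers_dir = create_dir();   /* at most one winner */
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        struct module_kobject mk = { 0 };

        module_create_drivers_dir(&mk);
        module_create_drivers_dir(&mk);       /* second call is a no-op */
        return mk.drivers_dir == NULL;
    }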
+9 -3
drivers/base/power/opp/cpu.c
··· 211 211 } 212 212 213 213 /* Mark opp-table as multiple CPUs are sharing it now */ 214 - opp_table->shared_opp = true; 214 + opp_table->shared_opp = OPP_TABLE_ACCESS_SHARED; 215 215 } 216 216 unlock: 217 217 mutex_unlock(&opp_table_lock); ··· 227 227 * 228 228 * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev. 229 229 * 230 - * Returns -ENODEV if OPP table isn't already present. 230 + * Returns -ENODEV if OPP table isn't already present and -EINVAL if the OPP 231 + * table's status is access-unknown. 231 232 * 232 233 * Locking: The internal opp_table and opp structures are RCU protected. 233 234 * Hence this function internally uses RCU updater strategy with mutex locks ··· 250 249 goto unlock; 251 250 } 252 251 252 + if (opp_table->shared_opp == OPP_TABLE_ACCESS_UNKNOWN) { 253 + ret = -EINVAL; 254 + goto unlock; 255 + } 256 + 253 257 cpumask_clear(cpumask); 254 258 255 - if (opp_table->shared_opp) { 259 + if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) { 256 260 list_for_each_entry(opp_dev, &opp_table->dev_list, node) 257 261 cpumask_set_cpu(opp_dev->dev->id, cpumask); 258 262 } else {
+8 -2
drivers/base/power/opp/of.c
··· 34 34 * But the OPPs will be considered as shared only if the 35 35 * OPP table contains a "opp-shared" property. 36 36 */ 37 - return opp_table->shared_opp ? opp_table : NULL; 37 + if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) 38 + return opp_table; 39 + 40 + return NULL; 38 41 } 39 42 } 40 43 ··· 356 353 } 357 354 358 355 opp_table->np = opp_np; 359 - opp_table->shared_opp = of_property_read_bool(opp_np, "opp-shared"); 356 + if (of_property_read_bool(opp_np, "opp-shared")) 357 + opp_table->shared_opp = OPP_TABLE_ACCESS_SHARED; 358 + else 359 + opp_table->shared_opp = OPP_TABLE_ACCESS_EXCLUSIVE; 360 360 361 361 mutex_unlock(&opp_table_lock); 362 362
+7 -1
drivers/base/power/opp/opp.h
··· 119 119 #endif 120 120 }; 121 121 122 + enum opp_table_access { 123 + OPP_TABLE_ACCESS_UNKNOWN = 0, 124 + OPP_TABLE_ACCESS_EXCLUSIVE = 1, 125 + OPP_TABLE_ACCESS_SHARED = 2, 126 + }; 127 + 122 128 /** 123 129 * struct opp_table - Device opp structure 124 130 * @node: table node - contains the devices with OPPs that ··· 172 166 /* For backward compatibility with v1 bindings */ 173 167 unsigned int voltage_tolerance_v1; 174 168 175 - bool shared_opp; 169 + enum opp_table_access shared_opp; 176 170 struct dev_pm_opp *suspend_opp; 177 171 178 172 unsigned int *supported_hw;
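The bool-to-enum move exists because of zero-initialization: a kzalloc'd table starts at 0, and with a bool that 0 silently reads as "exclusive" even when the sharing mode was never parsed from firmware. Making 0 an explicit OPP_TABLE_ACCESS_UNKNOWN lets dev_pm_opp_get_sharing_cpus() refuse with -EINVAL instead of guessing. The tri-state in miniature (stand-in names):

    #include <errno.h>
    #include <stdio.h>

    /* Value 0 is deliberately "unknown": zeroed tables land there
     * until the firmware's answer is recorded. */
    enum table_access {
        TABLE_ACCESS_UNKNOWN = 0,
        TABLE_ACCESS_EXCLUSIVE,
        TABLE_ACCESS_SHARED,
    };

    static int sharing_cpus(enum table_access access)
    {
        if (access == TABLE_ACCESS_UNKNOWN)
            return -EINVAL;     /* never parsed: refuse to guess */
        return access == TABLE_ACCESS_SHARED;
    }

    int main(void)
    {
        enum table_access fresh = 0;   /* what kzalloc would give */

        printf("fresh table -> %d\n", sharing_cpus(fresh));
        return 0;
    }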
+1 -1
drivers/block/nbd.c
··· 941 941 debugfs_create_u64("size_bytes", 0444, dir, &nbd->bytesize); 942 942 debugfs_create_u32("timeout", 0444, dir, &nbd->xmit_timeout); 943 943 debugfs_create_u32("blocksize", 0444, dir, &nbd->blksize); 944 - debugfs_create_file("flags", 0444, dir, &nbd, &nbd_dbg_flags_ops); 944 + debugfs_create_file("flags", 0444, dir, nbd, &nbd_dbg_flags_ops); 945 945 946 946 return 0; 947 947 }
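The one-character change above fixes a dangling-pointer bug: &nbd is the address of the local pointer variable, which is gone once the function returns, so the debugfs read callback would later chase a stale stack slot. Isolated, the bug class looks like this:

    #include <stdio.h>

    struct nbd { unsigned long flags; };

    /* Stand-in for debugfs_create_file(): merely stores 'data' so a
     * callback can use it much later. */
    static void *stored;

    static void register_file(void *data)
    {
        stored = data;
    }

    int main(void)
    {
        static struct nbd dev = { .flags = 0x3 };
        struct nbd *nbd = &dev;

        register_file(&nbd);   /* bug: address of the local pointer     */
        register_file(nbd);    /* fix: address of the long-lived object */
        printf("stored=%p object=%p\n", stored, (void *)nbd);
        return 0;
    }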
+22 -13
drivers/block/xen-blkfront.c
··· 874 874 const struct blk_mq_queue_data *qd) 875 875 { 876 876 unsigned long flags; 877 - struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)hctx->driver_data; 877 + int qid = hctx->queue_num; 878 + struct blkfront_info *info = hctx->queue->queuedata; 879 + struct blkfront_ring_info *rinfo = NULL; 878 880 881 + BUG_ON(info->nr_rings <= qid); 882 + rinfo = &info->rinfo[qid]; 879 883 blk_mq_start_request(qd->rq); 880 884 spin_lock_irqsave(&rinfo->ring_lock, flags); 881 885 if (RING_FULL(&rinfo->ring)) ··· 905 901 return BLK_MQ_RQ_QUEUE_BUSY; 906 902 } 907 903 908 - static int blk_mq_init_hctx(struct blk_mq_hw_ctx *hctx, void *data, 909 - unsigned int index) 910 - { 911 - struct blkfront_info *info = (struct blkfront_info *)data; 912 - 913 - BUG_ON(info->nr_rings <= index); 914 - hctx->driver_data = &info->rinfo[index]; 915 - return 0; 916 - } 917 - 918 904 static struct blk_mq_ops blkfront_mq_ops = { 919 905 .queue_rq = blkif_queue_rq, 920 906 .map_queue = blk_mq_map_queue, 921 - .init_hctx = blk_mq_init_hctx, 922 907 }; 923 908 924 909 static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size, ··· 943 950 return PTR_ERR(rq); 944 951 } 945 952 953 + rq->queuedata = info; 946 954 queue_flag_set_unlocked(QUEUE_FLAG_VIRT, rq); 947 955 948 956 if (info->feature_discard) { ··· 2143 2149 return err; 2144 2150 2145 2151 err = talk_to_blkback(dev, info); 2152 + if (!err) 2153 + blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings); 2146 2154 2147 2155 /* 2148 2156 * We have to wait for the backend to switch to ··· 2481 2485 break; 2482 2486 2483 2487 case XenbusStateConnected: 2484 - if (dev->state != XenbusStateInitialised) { 2488 + /* 2489 + * talk_to_blkback sets state to XenbusStateInitialised 2490 + * and blkfront_connect sets it to XenbusStateConnected 2491 + * (if connection went OK). 2492 + * 2493 + * If the backend (or toolstack) decides to poke at backend 2494 + * state (and re-trigger the watch by setting the state repeatedly 2495 + * to XenbusStateConnected (4)) we need to deal with this. 2496 + * This is allowed as this is used to communicate to the guest 2497 + * that the size of disk has changed! 2498 + */ 2499 + if ((dev->state != XenbusStateInitialised) && 2500 + (dev->state != XenbusStateConnected)) { 2485 2501 if (talk_to_blkback(dev, info)) 2486 2502 break; 2487 2503 } 2504 + 2488 2505 blkfront_connect(info); 2489 2506 break; 2490 2507
+6 -2
drivers/char/ipmi/ipmi_msghandler.c
··· 3820 3820 while (!list_empty(&intf->waiting_rcv_msgs)) { 3821 3821 smi_msg = list_entry(intf->waiting_rcv_msgs.next, 3822 3822 struct ipmi_smi_msg, link); 3823 + list_del(&smi_msg->link); 3823 3824 if (!run_to_completion) 3824 3825 spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock, 3825 3826 flags); ··· 3830 3829 if (rv > 0) { 3831 3830 /* 3832 3831 * To preserve message order, quit if we 3833 - * can't handle a message. 3832 + * can't handle a message. Add the message 3833 + * back at the head, this is safe because this 3834 + * tasklet is the only thing that pulls the 3835 + * messages. 3834 3836 */ 3837 + list_add(&smi_msg->link, &intf->waiting_rcv_msgs); 3835 3838 break; 3836 3839 } else { 3837 - list_del(&smi_msg->link); 3838 3840 if (rv == 0) 3839 3841 /* Message handled */ 3840 3842 ipmi_free_smi_msg(smi_msg);
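The reordering above is subtle: the message is now unlinked before it is processed and re-added at the head only when it could not be handled, which preserves ordering while ensuring no other context ever sees a message that is simultaneously on the list and in flight. It is safe precisely because this tasklet is the list's only consumer. The loop on a toy singly linked list (stand-in types, trivial handler):

    #include <stddef.h>

    struct msg { struct msg *next; };

    static struct msg *head;

    static int handle(struct msg *m)
    {
        (void)m;
        return 0;               /* 0 = consumed; >0 = can't handle yet */
    }

    static void drain(void)
    {
        while (head) {
            struct msg *m = head;

            head = m->next;          /* unlink first (list_del) */
            if (handle(m) > 0) {
                m->next = head;      /* put it back at the head... */
                head = m;
                break;               /* ...and preserve ordering */
            }
        }
    }

    int main(void)
    {
        struct msg a = { NULL }, b = { &a };

        head = &b;
        drain();
        return head != NULL;
    }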
+2 -20
drivers/cpufreq/intel_pstate.c
··· 372 372 return acpi_ppc; 373 373 } 374 374 375 - /* 376 - * The max target pstate ratio is a 8 bit value in both PLATFORM_INFO MSR and 377 - * in TURBO_RATIO_LIMIT MSR, which pstate driver stores in max_pstate and 378 - * max_turbo_pstate fields. The PERF_CTL MSR contains 16 bit value for P state 379 - * ratio, out of it only high 8 bits are used. For example 0x1700 is setting 380 - * target ratio 0x17. The _PSS control value stores in a format which can be 381 - * directly written to PERF_CTL MSR. But in intel_pstate driver this shift 382 - * occurs during write to PERF_CTL (E.g. for cores core_set_pstate()). 383 - * This function converts the _PSS control value to intel pstate driver format 384 - * for comparison and assignment. 385 - */ 386 - static int convert_to_native_pstate_format(struct cpudata *cpu, int index) 387 - { 388 - return cpu->acpi_perf_data.states[index].control >> 8; 389 - } 390 - 391 375 static void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy) 392 376 { 393 377 struct cpudata *cpu; 394 - int turbo_pss_ctl; 395 378 int ret; 396 379 int i; 397 380 ··· 424 441 * max frequency, which will cause a reduced performance as 425 442 * this driver uses real max turbo frequency as the max 426 443 * frequency. So correct this frequency in _PSS table to 427 - * correct max turbo frequency based on the turbo ratio. 444 + * correct max turbo frequency based on the turbo state. 428 445 * Also need to convert to MHz as _PSS freq is in MHz. 429 446 */ 430 - turbo_pss_ctl = convert_to_native_pstate_format(cpu, 0); 431 - if (turbo_pss_ctl > cpu->pstate.max_pstate) 447 + if (!limits->turbo_disabled) 432 448 cpu->acpi_perf_data.states[0].core_frequency = 433 449 policy->cpuinfo.max_freq / 1000; 434 450 cpu->valid_pss_table = true;
+57 -25
drivers/dma/at_xdmac.c
··· 242 242 u32 mbr_dus; /* Destination Microblock Stride Register */ 243 243 }; 244 244 245 - 245 + /* 64-bit alignment needed to update CNDA and CUBC registers in an atomic way. */ 246 246 struct at_xdmac_desc { 247 247 struct at_xdmac_lld lld; 248 248 enum dma_transfer_direction direction; ··· 253 253 unsigned int xfer_size; 254 254 struct list_head descs_list; 255 255 struct list_head xfer_node; 256 - }; 256 + } __aligned(sizeof(u64)); 257 257 258 258 static inline void __iomem *at_xdmac_chan_reg_base(struct at_xdmac *atxdmac, unsigned int chan_nb) 259 259 { ··· 1400 1400 u32 cur_nda, check_nda, cur_ubc, mask, value; 1401 1401 u8 dwidth = 0; 1402 1402 unsigned long flags; 1403 + bool initd; 1403 1404 1404 1405 ret = dma_cookie_status(chan, cookie, txstate); 1405 1406 if (ret == DMA_COMPLETE) ··· 1425 1424 residue = desc->xfer_size; 1426 1425 /* 1427 1426 * Flush FIFO: only relevant when the transfer is source peripheral 1428 - * synchronized. 1427 + * synchronized. Flush is needed before reading CUBC because data in 1428 + * the FIFO are not reported by CUBC. Reporting a residue of the 1429 + * transfer length while we have data in FIFO can cause issues. 1430 + * Use case: the Atmel USART has a timeout, meaning characters were 1431 + * received but no new character has arrived for a while. On 1432 + * timeout, it requests the residue. If the data are in the DMA FIFO, 1433 + * we will return a residue of the transfer length, i.e. no data 1434 + * received. If an application is waiting for these data, it will hang 1435 + * since we won't have another USART timeout without receiving new 1436 + * data. 1429 1437 */ 1430 1438 mask = AT_XDMAC_CC_TYPE | AT_XDMAC_CC_DSYNC; 1431 1439 value = AT_XDMAC_CC_TYPE_PER_TRAN | AT_XDMAC_CC_DSYNC_PER2MEM; ··· 1445 1435 } 1446 1436 1447 1437 /* 1448 - * When processing the residue, we need to read two registers but we 1449 - * can't do it in an atomic way. AT_XDMAC_CNDA is used to find where 1450 - * we stand in the descriptor list and AT_XDMAC_CUBC is used 1451 - * to know how many data are remaining for the current descriptor. 1452 - * Since the dma channel is not paused to not loose data, between the 1453 - * AT_XDMAC_CNDA and AT_XDMAC_CUBC read, we may have change of 1454 - * descriptor. 1455 - * For that reason, after reading AT_XDMAC_CUBC, we check if we are 1456 - * still using the same descriptor by reading a second time 1457 - * AT_XDMAC_CNDA. If AT_XDMAC_CNDA has changed, it means we have to 1458 - * read again AT_XDMAC_CUBC. 1438 + * The easiest way to compute the residue would be to pause the DMA, 1439 + * but doing this can lead to missing some data, as some devices don't 1440 + * have a FIFO. 1441 + * We need to read several registers because: 1442 + * - DMA is running therefore a descriptor change is possible while 1443 + * reading these registers 1444 + * - When the block transfer is done, the value of the CUBC register 1445 + * is set to its initial value until the fetch of the next descriptor. 1446 + * This value will corrupt the residue calculation so we have to skip 1447 + * it.
1448 + * 1449 + * INITD -------- ------------ 1450 + * |____________________| 1451 + * _______________________ _______________ 1452 + * NDA @desc2 \/ @desc3 1453 + * _______________________/\_______________ 1454 + * __________ ___________ _______________ 1455 + * CUBC 0 \/ MAX desc1 \/ MAX desc2 1456 + * __________/\___________/\_______________ 1457 + * 1458 + * Since descriptors are aligned on 64 bits, we can assume that 1459 + * the update of NDA and CUBC is atomic. 1459 1460 * Memory barriers are used to ensure the read order of the registers. 1460 - * A max number of retries is set because unlikely it can never ends if 1461 - * we are transferring a lot of data with small buffers. 1461 + * A max number of retries is set because, although unlikely, the loop 
 + * could otherwise never end. 1462 1462 */ 1463 - cur_nda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA) & 0xfffffffc; 1464 - rmb(); 1465 - cur_ubc = at_xdmac_chan_read(atchan, AT_XDMAC_CUBC); 1466 1463 for (retry = 0; retry < AT_XDMAC_RESIDUE_MAX_RETRIES; retry++) { 1467 - rmb(); 1468 1464 check_nda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA) & 0xfffffffc; 1469 - 1470 - if (likely(cur_nda == check_nda)) 1471 - break; 1472 - 1473 - cur_nda = check_nda; 1465 + rmb(); 1466 + initd = !!(at_xdmac_chan_read(atchan, AT_XDMAC_CC) & AT_XDMAC_CC_INITD); 1474 1467 rmb(); 1475 1468 cur_ubc = at_xdmac_chan_read(atchan, AT_XDMAC_CUBC); 1469 + rmb(); 1470 + cur_nda = at_xdmac_chan_read(atchan, AT_XDMAC_CNDA) & 0xfffffffc; 1471 + rmb(); 1472 + 1473 + if ((check_nda == cur_nda) && initd) 1474 + break; 1476 1475 } 1477 1476 1478 1477 if (unlikely(retry >= AT_XDMAC_RESIDUE_MAX_RETRIES)) { 1479 1478 ret = DMA_ERROR; 1480 1479 goto spin_unlock; 1480 + } 1481 + 1482 + /* 1483 + * Flush FIFO: only relevant when the transfer is source peripheral 1484 + * synchronized. Another flush is needed here because CUBC is updated 1485 + * when the controller sends the data write command. It can lead to 1486 + * reporting data that are not yet written to the memory or the device. 1487 + * The FIFO flush ensures that data are really written. 1488 + */ 1489 + if ((desc->lld.mbr_cfg & mask) == value) { 1490 + at_xdmac_write(atxdmac, AT_XDMAC_GSWF, atchan->mask); 1491 + while (!(at_xdmac_chan_read(atchan, AT_XDMAC_CIS) & AT_XDMAC_CIS_FIS)) 1492 + cpu_relax(); 1481 1493 } 1482 1494 1483 1495 /*
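The loop that results from the hunks above takes a consistent snapshot from racing hardware: read NDA, INITD and CUBC with ordered loads, re-read NDA, and accept the sample only if the descriptor pointer did not move and the channel had actually fetched a descriptor (INITD set); otherwise retry a bounded number of times. Its shape, minus the barriers and MMIO (register struct and values are stand-ins):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_RETRIES 5

    struct regs { uint32_t nda, cubc; bool initd; };

    /* Pretend these fields are volatile controller registers. */
    static struct regs hw = { .nda = 0x1000, .cubc = 64, .initd = true };

    static bool read_residue_snapshot(uint32_t *nda, uint32_t *cubc)
    {
        int retry;

        for (retry = 0; retry < MAX_RETRIES; retry++) {
            uint32_t check_nda = hw.nda;
            bool initd = hw.initd;     /* descriptor fetched? */
            uint32_t c = hw.cubc;
            uint32_t n = hw.nda;       /* re-read: did it move? */

            if (check_nda == n && initd) {
                *nda = n;
                *cubc = c;
                return true;           /* consistent sample */
            }
        }
        return false;                  /* caller reports DMA_ERROR */
    }

    int main(void)
    {
        uint32_t nda, cubc;

        return !read_residue_snapshot(&nda, &cubc);
    }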
+6 -4
drivers/dma/mv_xor.c
··· 703 703 goto free_resources; 704 704 } 705 705 706 - src_dma = dma_map_page(dma_chan->device->dev, virt_to_page(src), 0, 707 - PAGE_SIZE, DMA_TO_DEVICE); 706 + src_dma = dma_map_page(dma_chan->device->dev, virt_to_page(src), 707 + (size_t)src & ~PAGE_MASK, PAGE_SIZE, 708 + DMA_TO_DEVICE); 708 709 unmap->addr[0] = src_dma; 709 710 710 711 ret = dma_mapping_error(dma_chan->device->dev, src_dma); ··· 715 714 } 716 715 unmap->to_cnt = 1; 717 716 718 - dest_dma = dma_map_page(dma_chan->device->dev, virt_to_page(dest), 0, 719 - PAGE_SIZE, DMA_FROM_DEVICE); 717 + dest_dma = dma_map_page(dma_chan->device->dev, virt_to_page(dest), 718 + (size_t)dest & ~PAGE_MASK, PAGE_SIZE, 719 + DMA_FROM_DEVICE); 720 720 unmap->addr[1] = dest_dma; 721 721 722 722 ret = dma_mapping_error(dma_chan->device->dev, dest_dma);
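The fix above cures a silent truncation: virt_to_page() identifies only the containing page, so mapping it with offset 0 effectively rounds the buffer's address down to a page boundary; the in-page offset has to be carried alongside the page. The arithmetic, standalone:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE ((uintptr_t)4096)
    #define PAGE_MASK (~(PAGE_SIZE - 1))

    int main(void)
    {
        uintptr_t buf = 0x12345678;         /* not page aligned */
        uintptr_t page = buf & PAGE_MASK;   /* what virt_to_page() keeps */
        uintptr_t off  = buf & ~PAGE_MASK;  /* what offset 0 would drop */

        printf("page=%#lx off=%#lx rejoined=%#lx\n",
               (unsigned long)page, (unsigned long)off,
               (unsigned long)(page | off));
        return 0;
    }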
+2
drivers/extcon/extcon-palmas.c
··· 360 360 361 361 palmas_enable_irq(palmas_usb); 362 362 /* perform initial detection */ 363 + if (palmas_usb->enable_gpio_vbus_detection) 364 + palmas_vbus_irq_handler(palmas_usb->gpio_vbus_irq, palmas_usb); 363 365 palmas_gpio_id_detect(&palmas_usb->wq_detectid.work); 364 366 device_set_wakeup_capable(&pdev->dev, true); 365 367 return 0;
+5 -4
drivers/gpio/Kconfig
··· 33 33 34 34 menuconfig GPIOLIB 35 35 bool "GPIO Support" 36 + select ANON_INODES 36 37 help 37 38 This enables GPIO support through the generic GPIO library. 38 39 You only need to enable this, if you also want to enable ··· 531 530 532 531 config GPIO_104_DIO_48E 533 532 tristate "ACCES 104-DIO-48E GPIO support" 534 - depends on ISA 533 + depends on ISA_BUS_API 535 534 select GPIOLIB_IRQCHIP 536 535 help 537 536 Enables GPIO support for the ACCES 104-DIO-48E series (104-DIO-48E, ··· 541 540 542 541 config GPIO_104_IDIO_16 543 542 tristate "ACCES 104-IDIO-16 GPIO support" 544 - depends on ISA 543 + depends on ISA_BUS_API 545 544 select GPIOLIB_IRQCHIP 546 545 help 547 546 Enables GPIO support for the ACCES 104-IDIO-16 family (104-IDIO-16, ··· 552 551 553 552 config GPIO_104_IDI_48 554 553 tristate "ACCES 104-IDI-48 GPIO support" 555 - depends on ISA 554 + depends on ISA_BUS_API 556 555 select GPIOLIB_IRQCHIP 557 556 help 558 557 Enables GPIO support for the ACCES 104-IDI-48 family (104-IDI-48A, ··· 628 627 629 628 config GPIO_WS16C48 630 629 tristate "WinSystems WS16C48 GPIO support" 631 - depends on ISA 630 + depends on ISA_BUS_API 632 631 select GPIOLIB_IRQCHIP 633 632 help 634 633 Enables GPIO support for the WinSystems WS16C48. The base port
+2 -2
drivers/gpio/gpio-104-dio-48e.c
··· 75 75 { 76 76 struct dio48e_gpio *const dio48egpio = gpiochip_get_data(chip); 77 77 const unsigned io_port = offset / 8; 78 - const unsigned control_port = io_port / 2; 78 + const unsigned int control_port = io_port / 3; 79 79 const unsigned control_addr = dio48egpio->base + 3 + control_port*4; 80 80 unsigned long flags; 81 81 unsigned control; ··· 115 115 { 116 116 struct dio48e_gpio *const dio48egpio = gpiochip_get_data(chip); 117 117 const unsigned io_port = offset / 8; 118 - const unsigned control_port = io_port / 2; 118 + const unsigned int control_port = io_port / 3; 119 119 const unsigned mask = BIT(offset % 8); 120 120 const unsigned control_addr = dio48egpio->base + 3 + control_port*4; 121 121 const unsigned out_port = (io_port > 2) ? io_port + 1 : io_port;
+2 -2
drivers/gpio/gpio-bcm-kona.c
··· 547 547 /* disable interrupts and clear status */ 548 548 for (i = 0; i < kona_gpio->num_bank; i++) { 549 549 /* Unlock the entire bank first */ 550 - bcm_kona_gpio_write_lock_regs(kona_gpio, i, UNLOCK_CODE); 550 + bcm_kona_gpio_write_lock_regs(reg_base, i, UNLOCK_CODE); 551 551 writel(0xffffffff, reg_base + GPIO_INT_MASK(i)); 552 552 writel(0xffffffff, reg_base + GPIO_INT_STATUS(i)); 553 553 /* Now re-lock the bank */ 554 - bcm_kona_gpio_write_lock_regs(kona_gpio, i, LOCK_CODE); 554 + bcm_kona_gpio_write_lock_regs(reg_base, i, LOCK_CODE); 555 555 } 556 556 } 557 557
+7
drivers/gpio/gpio-zynq.c
··· 709 709 dev_err(&pdev->dev, "input clock not found.\n"); 710 710 return PTR_ERR(gpio->clk); 711 711 } 712 + ret = clk_prepare_enable(gpio->clk); 713 + if (ret) { 714 + dev_err(&pdev->dev, "Unable to enable clock.\n"); 715 + return ret; 716 + } 712 717 718 + pm_runtime_set_active(&pdev->dev); 713 719 pm_runtime_enable(&pdev->dev); 714 720 ret = pm_runtime_get_sync(&pdev->dev); 715 721 if (ret < 0) ··· 753 747 pm_runtime_put(&pdev->dev); 754 748 err_pm_dis: 755 749 pm_runtime_disable(&pdev->dev); 750 + clk_disable_unprepare(gpio->clk); 756 751 757 752 return ret; 758 753 }
+1
drivers/gpio/gpiolib-of.c
··· 16 16 #include <linux/errno.h> 17 17 #include <linux/module.h> 18 18 #include <linux/io.h> 19 + #include <linux/io-mapping.h> 19 20 #include <linux/gpio/consumer.h> 20 21 #include <linux/of.h> 21 22 #include <linux/of_address.h>
+3 -3
drivers/gpio/gpiolib.c
··· 449 449 { 450 450 struct gpio_device *gdev = dev_get_drvdata(dev); 451 451 452 - cdev_del(&gdev->chrdev); 453 452 list_del(&gdev->list); 454 453 ida_simple_remove(&gpio_ida, gdev->id); 455 454 kfree(gdev->label); ··· 481 482 482 483 /* From this point, the .release() function cleans up gpio_device */ 483 484 gdev->dev.release = gpiodevice_release; 484 - get_device(&gdev->dev); 485 485 pr_debug("%s: registered GPIOs %d to %d on device: %s (%s)\n", 486 486 __func__, gdev->base, gdev->base + gdev->ngpio - 1, 487 487 dev_name(&gdev->dev), gdev->chip->label ? : "generic"); ··· 768 770 * be removed, else it will be dangling until the last user is 769 771 * gone. 770 772 */ 773 + cdev_del(&gdev->chrdev); 774 + device_del(&gdev->dev); 771 775 put_device(&gdev->dev); 772 776 } 773 777 EXPORT_SYMBOL_GPL(gpiochip_remove); ··· 869 869 870 870 spin_lock_irqsave(&gpio_lock, flags); 871 871 list_for_each_entry(gdev, &gpio_devices, list) 872 - if (match(gdev->chip, data)) 872 + if (gdev->chip && match(gdev->chip, data)) 873 873 break; 874 874 875 875 /* No match? */
+8 -3
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 799 799 unsigned cond_exe_offs; 800 800 u64 cond_exe_gpu_addr; 801 801 volatile u32 *cond_exe_cpu_addr; 802 - int vmid; 803 802 }; 804 803 805 804 /* ··· 936 937 unsigned vm_id, uint64_t pd_addr, 937 938 uint32_t gds_base, uint32_t gds_size, 938 939 uint32_t gws_base, uint32_t gws_size, 939 - uint32_t oa_base, uint32_t oa_size, 940 - bool vmid_switch); 940 + uint32_t oa_base, uint32_t oa_size); 941 941 void amdgpu_vm_reset_id(struct amdgpu_device *adev, unsigned vm_id); 942 942 uint64_t amdgpu_vm_map_gart(const dma_addr_t *pages_addr, uint64_t addr); 943 943 int amdgpu_vm_update_page_directory(struct amdgpu_device *adev, ··· 1820 1822 /* MM block clocks */ 1821 1823 int (*set_uvd_clocks)(struct amdgpu_device *adev, u32 vclk, u32 dclk); 1822 1824 int (*set_vce_clocks)(struct amdgpu_device *adev, u32 evclk, u32 ecclk); 1825 + /* query virtual capabilities */ 1826 + u32 (*get_virtual_caps)(struct amdgpu_device *adev); 1823 1827 }; 1824 1828 1825 1829 /* ··· 1916 1916 1917 1917 1918 1918 /* GPU virtualization */ 1919 + #define AMDGPU_VIRT_CAPS_SRIOV_EN (1 << 0) 1920 + #define AMDGPU_VIRT_CAPS_IS_VF (1 << 1) 1919 1921 struct amdgpu_virtualization { 1920 1922 bool supports_sr_iov; 1923 + bool is_virtual; 1924 + u32 caps; 1921 1925 }; 1922 1926 1923 1927 /* ··· 2210 2206 #define amdgpu_asic_get_xclk(adev) (adev)->asic_funcs->get_xclk((adev)) 2211 2207 #define amdgpu_asic_set_uvd_clocks(adev, v, d) (adev)->asic_funcs->set_uvd_clocks((adev), (v), (d)) 2212 2208 #define amdgpu_asic_set_vce_clocks(adev, ev, ec) (adev)->asic_funcs->set_vce_clocks((adev), (ev), (ec)) 2209 + #define amdgpu_asic_get_virtual_caps(adev) ((adev)->asic_funcs->get_virtual_caps((adev))) 2213 2210 #define amdgpu_asic_get_gpu_clock_counter(adev) (adev)->asic_funcs->get_gpu_clock_counter((adev)) 2214 2211 #define amdgpu_asic_read_disabled_bios(adev) (adev)->asic_funcs->read_disabled_bios((adev)) 2215 2212 #define amdgpu_asic_read_bios_from_rom(adev, b, l) (adev)->asic_funcs->read_bios_from_rom((adev), (b), (l))
+16 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 1385 1385 return 0; 1386 1386 } 1387 1387 1388 + static bool amdgpu_device_is_virtual(void) 1389 + { 1390 + #ifdef CONFIG_X86 1391 + return boot_cpu_has(X86_FEATURE_HYPERVISOR); 1392 + #else 1393 + return false; 1394 + #endif 1395 + } 1396 + 1388 1397 /** 1389 1398 * amdgpu_device_init - initialize the driver 1390 1399 * ··· 1528 1519 adev->virtualization.supports_sr_iov = 1529 1520 amdgpu_atombios_has_gpu_virtualization_table(adev); 1530 1521 1522 + /* Check if we are executing in a virtualized environment */ 1523 + adev->virtualization.is_virtual = amdgpu_device_is_virtual(); 1524 + adev->virtualization.caps = amdgpu_asic_get_virtual_caps(adev); 1525 + 1531 1526 /* Post card if necessary */ 1532 - if (!amdgpu_card_posted(adev)) { 1527 + if (!amdgpu_card_posted(adev) || 1528 + (adev->virtualization.is_virtual && 1529 + !(adev->virtualization.caps & AMDGPU_VIRT_CAPS_SRIOV_EN))) { 1533 1530 if (!adev->bios) { 1534 1531 dev_err(adev->dev, "Card not posted and no BIOS - ignoring\n"); 1535 1532 return -EINVAL;
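X86_FEATURE_HYPERVISOR, which the new helper above leans on, reflects CPUID leaf 1, ECX bit 31: architecturally 0 on bare metal, and set by hypervisors for their guests. The same probe in a standalone sketch for GCC/Clang on x86 (other architectures simply report false):

    #include <stdbool.h>
    #include <stdio.h>
    #if defined(__x86_64__) || defined(__i386__)
    #include <cpuid.h>
    #endif

    static bool running_virtualized(void)
    {
    #if defined(__x86_64__) || defined(__i386__)
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return false;
        return ecx & (1u << 31);    /* "hypervisor present" bit */
    #else
        return false;
    #endif
    }

    int main(void)
    {
        printf("virtualized: %d\n", running_virtualized());
        return 0;
    }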
+2 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
··· 122 122 bool skip_preamble, need_ctx_switch; 123 123 unsigned patch_offset = ~0; 124 124 struct amdgpu_vm *vm; 125 - int vmid = 0, old_vmid = ring->vmid; 126 125 struct fence *hwf; 127 126 uint64_t ctx; 128 127 ··· 135 136 if (job) { 136 137 vm = job->vm; 137 138 ctx = job->ctx; 138 - vmid = job->vm_id; 139 139 } else { 140 140 vm = NULL; 141 141 ctx = 0; 142 - vmid = 0; 143 142 } 144 143 145 144 if (!ring->ready) { ··· 163 166 r = amdgpu_vm_flush(ring, job->vm_id, job->vm_pd_addr, 164 167 job->gds_base, job->gds_size, 165 168 job->gws_base, job->gws_size, 166 - job->oa_base, job->oa_size, 167 - (ring->current_ctx == ctx) && (old_vmid != vmid)); 169 + job->oa_base, job->oa_size); 168 170 if (r) { 169 171 amdgpu_ring_undo(ring); 170 172 return r; ··· 180 184 need_ctx_switch = ring->current_ctx != ctx; 181 185 for (i = 0; i < num_ibs; ++i) { 182 186 ib = &ibs[i]; 187 + 183 188 /* drop preamble IBs if we don't have a context switch */ 184 189 if ((ib->flags & AMDGPU_IB_FLAG_PREAMBLE) && skip_preamble) 185 190 continue; ··· 188 191 amdgpu_ring_emit_ib(ring, ib, job ? job->vm_id : 0, 189 192 need_ctx_switch); 190 193 need_ctx_switch = false; 191 - ring->vmid = vmid; 192 194 } 193 195 194 196 if (ring->funcs->emit_hdp_invalidate) ··· 198 202 dev_err(adev->dev, "failed to emit fence (%d)\n", r); 199 203 if (job && job->vm_id) 200 204 amdgpu_vm_reset_id(adev, job->vm_id); 201 - ring->vmid = old_vmid; 202 205 amdgpu_ring_undo(ring); 203 206 return r; 204 207 }
+3 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 298 298 unsigned vm_id, uint64_t pd_addr, 299 299 uint32_t gds_base, uint32_t gds_size, 300 300 uint32_t gws_base, uint32_t gws_size, 301 - uint32_t oa_base, uint32_t oa_size, 302 - bool vmid_switch) 301 + uint32_t oa_base, uint32_t oa_size) 303 302 { 304 303 struct amdgpu_device *adev = ring->adev; 305 304 struct amdgpu_vm_id *id = &adev->vm_manager.ids[vm_id]; ··· 312 313 int r; 313 314 314 315 if (ring->funcs->emit_pipeline_sync && ( 315 - pd_addr != AMDGPU_VM_NO_FLUSH || gds_switch_needed || vmid_switch)) 316 + pd_addr != AMDGPU_VM_NO_FLUSH || gds_switch_needed || 317 + ring->type == AMDGPU_RING_TYPE_COMPUTE)) 316 318 amdgpu_ring_emit_pipeline_sync(ring); 317 319 318 320 if (ring->funcs->emit_vm_flush &&
+7
drivers/gpu/drm/amd/amdgpu/cik.c
··· 962 962 return true; 963 963 } 964 964 965 + static u32 cik_get_virtual_caps(struct amdgpu_device *adev) 966 + { 967 + /* CIK does not support SR-IOV */ 968 + return 0; 969 + } 970 + 965 971 static const struct amdgpu_allowed_register_entry cik_allowed_read_registers[] = { 966 972 {mmGRBM_STATUS, false}, 967 973 {mmGB_ADDR_CONFIG, false}, ··· 2013 2007 .get_xclk = &cik_get_xclk, 2014 2008 .set_uvd_clocks = &cik_set_uvd_clocks, 2015 2009 .set_vce_clocks = &cik_set_vce_clocks, 2010 + .get_virtual_caps = &cik_get_virtual_caps, 2016 2011 /* these should be moved to their own ip modules */ 2017 2012 .get_gpu_clock_counter = &gfx_v7_0_get_gpu_clock_counter, 2018 2013 .wait_for_mc_idle = &gmc_v7_0_mc_wait_for_idle,
+1 -1
drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
··· 4833 4833 case 2: 4834 4834 for (i = 0; i < adev->gfx.num_compute_rings; i++) { 4835 4835 ring = &adev->gfx.compute_ring[i]; 4836 - if ((ring->me == me_id) & (ring->pipe == pipe_id)) 4836 + if ((ring->me == me_id) && (ring->pipe == pipe_id)) 4837 4837 amdgpu_fence_process(ring); 4838 4838 } 4839 4839 break;
+15
drivers/gpu/drm/amd/amdgpu/vi.c
··· 421 421 return true; 422 422 } 423 423 424 + static u32 vi_get_virtual_caps(struct amdgpu_device *adev) 425 + { 426 + u32 caps = 0; 427 + u32 reg = RREG32(mmBIF_IOV_FUNC_IDENTIFIER); 428 + 429 + if (REG_GET_FIELD(reg, BIF_IOV_FUNC_IDENTIFIER, IOV_ENABLE)) 430 + caps |= AMDGPU_VIRT_CAPS_SRIOV_EN; 431 + 432 + if (REG_GET_FIELD(reg, BIF_IOV_FUNC_IDENTIFIER, FUNC_IDENTIFIER)) 433 + caps |= AMDGPU_VIRT_CAPS_IS_VF; 434 + 435 + return caps; 436 + } 437 + 424 438 static const struct amdgpu_allowed_register_entry tonga_allowed_read_registers[] = { 425 439 {mmGB_MACROTILE_MODE7, true}, 426 440 }; ··· 1132 1118 .get_xclk = &vi_get_xclk, 1133 1119 .set_uvd_clocks = &vi_set_uvd_clocks, 1134 1120 .set_vce_clocks = &vi_set_vce_clocks, 1121 + .get_virtual_caps = &vi_get_virtual_caps, 1135 1122 /* these should be moved to their own ip modules */ 1136 1123 .get_gpu_clock_counter = &gfx_v8_0_get_gpu_clock_counter, 1137 1124 .wait_for_mc_idle = &gmc_v8_0_mc_wait_for_idle,
+51 -35
drivers/gpu/drm/amd/amdkfd/kfd_process.c
··· 242 242 pqm_uninit(&p->pqm); 243 243 244 244 /* Iterate over all process device data structure and check 245 - * if we should reset all wavefronts */ 246 - list_for_each_entry(pdd, &p->per_device_data, per_device_list) 245 + * if we should delete debug managers and reset all wavefronts 246 + */ 247 + list_for_each_entry(pdd, &p->per_device_data, per_device_list) { 248 + if ((pdd->dev->dbgmgr) && 249 + (pdd->dev->dbgmgr->pasid == p->pasid)) 250 + kfd_dbgmgr_destroy(pdd->dev->dbgmgr); 251 + 247 252 if (pdd->reset_wavefronts) { 248 253 pr_warn("amdkfd: Resetting all wave fronts\n"); 249 254 dbgdev_wave_reset_wavefronts(pdd->dev, p); 250 255 pdd->reset_wavefronts = false; 251 256 } 257 + } 252 258 253 259 mutex_unlock(&p->mutex); 254 260 ··· 410 404 411 405 idx = srcu_read_lock(&kfd_processes_srcu); 412 406 407 + /* 408 + * Look for the process that matches the pasid. If there is no such 409 + * process, we either released it in amdkfd's own notifier, or there 410 + * is a bug. Unfortunately, there is no way to tell... 411 + */ 413 412 hash_for_each_rcu(kfd_processes_table, i, p, kfd_processes) 414 - if (p->pasid == pasid) 415 - break; 413 + if (p->pasid == pasid) { 414 + 415 + srcu_read_unlock(&kfd_processes_srcu, idx); 416 + 417 + pr_debug("Unbinding process %d from IOMMU\n", pasid); 418 + 419 + mutex_lock(&p->mutex); 420 + 421 + if ((dev->dbgmgr) && (dev->dbgmgr->pasid == p->pasid)) 422 + kfd_dbgmgr_destroy(dev->dbgmgr); 423 + 424 + pqm_uninit(&p->pqm); 425 + 426 + pdd = kfd_get_process_device_data(dev, p); 427 + 428 + if (!pdd) { 429 + mutex_unlock(&p->mutex); 430 + return; 431 + } 432 + 433 + if (pdd->reset_wavefronts) { 434 + dbgdev_wave_reset_wavefronts(pdd->dev, p); 435 + pdd->reset_wavefronts = false; 436 + } 437 + 438 + /* 439 + * Just mark pdd as unbound, because we still need it 440 + * to call amd_iommu_unbind_pasid() in when the 441 + * process exits. 442 + * We don't call amd_iommu_unbind_pasid() here 443 + * because the IOMMU called us. 444 + */ 445 + pdd->bound = false; 446 + 447 + mutex_unlock(&p->mutex); 448 + 449 + return; 450 + } 416 451 417 452 srcu_read_unlock(&kfd_processes_srcu, idx); 418 - 419 - BUG_ON(p->pasid != pasid); 420 - 421 - mutex_lock(&p->mutex); 422 - 423 - if ((dev->dbgmgr) && (dev->dbgmgr->pasid == p->pasid)) 424 - kfd_dbgmgr_destroy(dev->dbgmgr); 425 - 426 - pqm_uninit(&p->pqm); 427 - 428 - pdd = kfd_get_process_device_data(dev, p); 429 - 430 - if (!pdd) { 431 - mutex_unlock(&p->mutex); 432 - return; 433 - } 434 - 435 - if (pdd->reset_wavefronts) { 436 - dbgdev_wave_reset_wavefronts(pdd->dev, p); 437 - pdd->reset_wavefronts = false; 438 - } 439 - 440 - /* 441 - * Just mark pdd as unbound, because we still need it to call 442 - * amd_iommu_unbind_pasid() in when the process exits. 443 - * We don't call amd_iommu_unbind_pasid() here 444 - * because the IOMMU called us. 445 - */ 446 - pdd->bound = false; 447 - 448 - mutex_unlock(&p->mutex); 449 453 } 450 454 451 455 struct kfd_process_device *kfd_get_first_process_device_data(struct kfd_process *p)
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
··· 666 666 dev->node_props.simd_count); 667 667 668 668 if (dev->mem_bank_count < dev->node_props.mem_banks_count) { 669 - pr_warn("kfd: mem_banks_count truncated from %d to %d\n", 669 + pr_info_once("kfd: mem_banks_count truncated from %d to %d\n", 670 670 dev->node_props.mem_banks_count, 671 671 dev->mem_bank_count); 672 672 sysfs_show_32bit_prop(buffer, "mem_banks_count",
+1
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h
··· 39 39 uint8_t phases; 40 40 uint8_t cks_enable; 41 41 uint8_t cks_voffset; 42 + uint32_t sclk_offset; 42 43 }; 43 44 44 45 typedef struct phm_ppt_v1_clock_voltage_dependency_record phm_ppt_v1_clock_voltage_dependency_record;
+17 -11
drivers/gpu/drm/amd/powerplay/hwmgr/polaris10_hwmgr.c
··· 999 999 vddci = phm_find_closest_vddci(&(data->vddci_voltage_table), 1000 1000 (dep_table->entries[i].vddc - 1001 1001 (uint16_t)data->vddc_vddci_delta)); 1002 - *voltage |= (vddci * VOLTAGE_SCALE) << VDDCI_SHIFT; 1002 + *voltage |= (vddci * VOLTAGE_SCALE) << VDDCI_SHIFT; 1003 1003 } 1004 1004 1005 1005 if (POLARIS10_VOLTAGE_CONTROL_NONE == data->mvdd_control) ··· 3520 3520 ATOM_Tonga_State *state_entry = (ATOM_Tonga_State *)state; 3521 3521 ATOM_Tonga_POWERPLAYTABLE *powerplay_table = 3522 3522 (ATOM_Tonga_POWERPLAYTABLE *)pp_table; 3523 - ATOM_Tonga_SCLK_Dependency_Table *sclk_dep_table = 3524 - (ATOM_Tonga_SCLK_Dependency_Table *) 3523 + PPTable_Generic_SubTable_Header *sclk_dep_table = 3524 + (PPTable_Generic_SubTable_Header *) 3525 3525 (((unsigned long)powerplay_table) + 3526 3526 le16_to_cpu(powerplay_table->usSclkDependencyTableOffset)); 3527 + 3527 3528 ATOM_Tonga_MCLK_Dependency_Table *mclk_dep_table = 3528 3529 (ATOM_Tonga_MCLK_Dependency_Table *) 3529 3530 (((unsigned long)powerplay_table) + ··· 3576 3575 /* Performance levels are arranged from low to high. */ 3577 3576 performance_level->memory_clock = mclk_dep_table->entries 3578 3577 [state_entry->ucMemoryClockIndexLow].ulMclk; 3579 - performance_level->engine_clock = sclk_dep_table->entries 3578 + if (sclk_dep_table->ucRevId == 0) 3579 + performance_level->engine_clock = ((ATOM_Tonga_SCLK_Dependency_Table *)sclk_dep_table)->entries 3580 + [state_entry->ucEngineClockIndexLow].ulSclk; 3581 + else if (sclk_dep_table->ucRevId == 1) 3582 + performance_level->engine_clock = ((ATOM_Polaris_SCLK_Dependency_Table *)sclk_dep_table)->entries 3580 3583 [state_entry->ucEngineClockIndexLow].ulSclk; 3581 3584 performance_level->pcie_gen = get_pcie_gen_support(data->pcie_gen_cap, 3582 3585 state_entry->ucPCIEGenLow); ··· 3591 3586 [polaris10_power_state->performance_level_count++]); 3592 3587 performance_level->memory_clock = mclk_dep_table->entries 3593 3588 [state_entry->ucMemoryClockIndexHigh].ulMclk; 3594 - performance_level->engine_clock = sclk_dep_table->entries 3589 + 3590 + if (sclk_dep_table->ucRevId == 0) 3591 + performance_level->engine_clock = ((ATOM_Tonga_SCLK_Dependency_Table *)sclk_dep_table)->entries 3595 3592 [state_entry->ucEngineClockIndexHigh].ulSclk; 3593 + else if (sclk_dep_table->ucRevId == 1) 3594 + performance_level->engine_clock = ((ATOM_Polaris_SCLK_Dependency_Table *)sclk_dep_table)->entries 3595 + [state_entry->ucEngineClockIndexHigh].ulSclk; 3596 + 3596 3597 performance_level->pcie_gen = get_pcie_gen_support(data->pcie_gen_cap, 3597 3598 state_entry->ucPCIEGenHigh); 3598 3599 performance_level->pcie_lane = get_pcie_lane_support(data->pcie_lane_cap, ··· 3656 3645 switch (state->classification.ui_label) { 3657 3646 case PP_StateUILabel_Performance: 3658 3647 data->use_pcie_performance_levels = true; 3659 - 3660 3648 for (i = 0; i < ps->performance_level_count; i++) { 3661 3649 if (data->pcie_gen_performance.max < 3662 3650 ps->performance_levels[i].pcie_gen) ··· 3671 3661 ps->performance_levels[i].pcie_lane) 3672 3662 data->pcie_lane_performance.max = 3673 3663 ps->performance_levels[i].pcie_lane; 3674 - 3675 3664 if (data->pcie_lane_performance.min > 3676 3665 ps->performance_levels[i].pcie_lane) 3677 3666 data->pcie_lane_performance.min = ··· 4196 4187 { 4197 4188 struct polaris10_hwmgr *data = (struct polaris10_hwmgr *)(hwmgr->backend); 4198 4189 uint32_t mm_boot_level_offset, mm_boot_level_value; 4199 - struct phm_ppt_v1_information *table_info = 4200 - (struct phm_ppt_v1_information *)(hwmgr->pptable); 
4201 4190 4202 4191 if (!bgate) { 4203 - data->smc_state_table.SamuBootLevel = 4204 - (uint8_t) (table_info->mm_dep_table->count - 1); 4192 + data->smc_state_table.SamuBootLevel = 0; 4205 4193 mm_boot_level_offset = data->dpm_table_start + 4206 4194 offsetof(SMU74_Discrete_DpmTable, SamuBootLevel); 4207 4195 mm_boot_level_offset /= 4;
+16
drivers/gpu/drm/amd/powerplay/hwmgr/tonga_pptable.h
··· 197 197 ATOM_Tonga_SCLK_Dependency_Record entries[1]; /* Dynamically allocate entries. */ 198 198 } ATOM_Tonga_SCLK_Dependency_Table; 199 199 200 + typedef struct _ATOM_Polaris_SCLK_Dependency_Record { 201 + UCHAR ucVddInd; /* Base voltage */ 202 + USHORT usVddcOffset; /* Offset relative to base voltage */ 203 + ULONG ulSclk; 204 + USHORT usEdcCurrent; 205 + UCHAR ucReliabilityTemperature; 206 + UCHAR ucCKSVOffsetandDisable; /* Bits 0~6: Voltage offset for CKS, Bit 7: Disable/enable for the SCLK level. */ 207 + ULONG ulSclkOffset; 208 + } ATOM_Polaris_SCLK_Dependency_Record; 209 + 210 + typedef struct _ATOM_Polaris_SCLK_Dependency_Table { 211 + UCHAR ucRevId; 212 + UCHAR ucNumEntries; /* Number of entries. */ 213 + ATOM_Polaris_SCLK_Dependency_Record entries[1]; /* Dynamically allocate entries. */ 214 + } ATOM_Polaris_SCLK_Dependency_Table; 215 + 200 216 typedef struct _ATOM_Tonga_PCIE_Record { 201 217 UCHAR ucPCIEGenSpeed; 202 218 UCHAR usPCIELaneWidth;
+62 -25
drivers/gpu/drm/amd/powerplay/hwmgr/tonga_processpptables.c
··· 408 408 static int get_sclk_voltage_dependency_table( 409 409 struct pp_hwmgr *hwmgr, 410 410 phm_ppt_v1_clock_voltage_dependency_table **pp_tonga_sclk_dep_table, 411 - const ATOM_Tonga_SCLK_Dependency_Table * sclk_dep_table 411 + const PPTable_Generic_SubTable_Header *sclk_dep_table 412 412 ) 413 413 { 414 414 uint32_t table_size, i; 415 415 phm_ppt_v1_clock_voltage_dependency_table *sclk_table; 416 416 417 - PP_ASSERT_WITH_CODE((0 != sclk_dep_table->ucNumEntries), 418 - "Invalid PowerPlay Table!", return -1); 417 + if (sclk_dep_table->ucRevId < 1) { 418 + const ATOM_Tonga_SCLK_Dependency_Table *tonga_table = 419 + (ATOM_Tonga_SCLK_Dependency_Table *)sclk_dep_table; 419 420 420 - table_size = sizeof(uint32_t) + sizeof(phm_ppt_v1_clock_voltage_dependency_record) 421 - * sclk_dep_table->ucNumEntries; 421 + PP_ASSERT_WITH_CODE((0 != tonga_table->ucNumEntries), 422 + "Invalid PowerPlay Table!", return -1); 422 423 423 - sclk_table = (phm_ppt_v1_clock_voltage_dependency_table *) 424 - kzalloc(table_size, GFP_KERNEL); 424 + table_size = sizeof(uint32_t) + sizeof(phm_ppt_v1_clock_voltage_dependency_record) 425 + * tonga_table->ucNumEntries; 425 426 426 - if (NULL == sclk_table) 427 - return -ENOMEM; 427 + sclk_table = (phm_ppt_v1_clock_voltage_dependency_table *) 428 + kzalloc(table_size, GFP_KERNEL); 428 429 429 - memset(sclk_table, 0x00, table_size); 430 + if (NULL == sclk_table) 431 + return -ENOMEM; 430 432 431 - sclk_table->count = (uint32_t)sclk_dep_table->ucNumEntries; 433 + memset(sclk_table, 0x00, table_size); 432 434 433 - for (i = 0; i < sclk_dep_table->ucNumEntries; i++) { 434 - sclk_table->entries[i].vddInd = 435 - sclk_dep_table->entries[i].ucVddInd; 436 - sclk_table->entries[i].vdd_offset = 437 - sclk_dep_table->entries[i].usVddcOffset; 438 - sclk_table->entries[i].clk = 439 - sclk_dep_table->entries[i].ulSclk; 440 - sclk_table->entries[i].cks_enable = 441 - (((sclk_dep_table->entries[i].ucCKSVOffsetandDisable & 0x80) >> 7) == 0) ? 1 : 0; 442 - sclk_table->entries[i].cks_voffset = 443 - (sclk_dep_table->entries[i].ucCKSVOffsetandDisable & 0x7F); 435 + sclk_table->count = (uint32_t)tonga_table->ucNumEntries; 436 + 437 + for (i = 0; i < tonga_table->ucNumEntries; i++) { 438 + sclk_table->entries[i].vddInd = 439 + tonga_table->entries[i].ucVddInd; 440 + sclk_table->entries[i].vdd_offset = 441 + tonga_table->entries[i].usVddcOffset; 442 + sclk_table->entries[i].clk = 443 + tonga_table->entries[i].ulSclk; 444 + sclk_table->entries[i].cks_enable = 445 + (((tonga_table->entries[i].ucCKSVOffsetandDisable & 0x80) >> 7) == 0) ? 
1 : 0; 446 + sclk_table->entries[i].cks_voffset = 447 + (tonga_table->entries[i].ucCKSVOffsetandDisable & 0x7F); 448 + } 449 + } else { 450 + const ATOM_Polaris_SCLK_Dependency_Table *polaris_table = 451 + (ATOM_Polaris_SCLK_Dependency_Table *)sclk_dep_table; 452 + 453 + PP_ASSERT_WITH_CODE((0 != polaris_table->ucNumEntries), 454 + "Invalid PowerPlay Table!", return -1); 455 + 456 + table_size = sizeof(uint32_t) + sizeof(phm_ppt_v1_clock_voltage_dependency_record) 457 + * polaris_table->ucNumEntries; 458 + 459 + sclk_table = (phm_ppt_v1_clock_voltage_dependency_table *) 460 + kzalloc(table_size, GFP_KERNEL); 461 + 462 + if (NULL == sclk_table) 463 + return -ENOMEM; 464 + 465 + memset(sclk_table, 0x00, table_size); 466 + 467 + sclk_table->count = (uint32_t)polaris_table->ucNumEntries; 468 + 469 + for (i = 0; i < polaris_table->ucNumEntries; i++) { 470 + sclk_table->entries[i].vddInd = 471 + polaris_table->entries[i].ucVddInd; 472 + sclk_table->entries[i].vdd_offset = 473 + polaris_table->entries[i].usVddcOffset; 474 + sclk_table->entries[i].clk = 475 + polaris_table->entries[i].ulSclk; 476 + sclk_table->entries[i].cks_enable = 477 + (((polaris_table->entries[i].ucCKSVOffsetandDisable & 0x80) >> 7) == 0) ? 1 : 0; 478 + sclk_table->entries[i].cks_voffset = 479 + (polaris_table->entries[i].ucCKSVOffsetandDisable & 0x7F); 480 + sclk_table->entries[i].sclk_offset = polaris_table->entries[i].ulSclkOffset; 481 + } 444 482 } 445 - 446 483 *pp_tonga_sclk_dep_table = sclk_table; 447 484 448 485 return 0; ··· 745 708 const ATOM_Tonga_MCLK_Dependency_Table *mclk_dep_table = 746 709 (const ATOM_Tonga_MCLK_Dependency_Table *)(((unsigned long) powerplay_table) + 747 710 le16_to_cpu(powerplay_table->usMclkDependencyTableOffset)); 748 - const ATOM_Tonga_SCLK_Dependency_Table *sclk_dep_table = 749 - (const ATOM_Tonga_SCLK_Dependency_Table *)(((unsigned long) powerplay_table) + 711 + const PPTable_Generic_SubTable_Header *sclk_dep_table = 712 + (const PPTable_Generic_SubTable_Header *)(((unsigned long) powerplay_table) + 750 713 le16_to_cpu(powerplay_table->usSclkDependencyTableOffset)); 751 714 const ATOM_Tonga_Hard_Limit_Table *pHardLimits = 752 715 (const ATOM_Tonga_Hard_Limit_Table *)(((unsigned long) powerplay_table) +
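
The reworked parser above never commits to a record layout until it has inspected the subtable's ucRevId, casting the same bytes either as a Tonga or a Polaris table. A minimal, self-contained C sketch of that dispatch pattern (struct and field names here are illustrative, not the kernel's):

    #include <stdint.h>
    #include <stdio.h>

    struct generic_header { uint8_t rev_id; };

    struct rec_v0 { uint8_t vdd_ind; uint32_t sclk; };
    struct table_v0 { uint8_t rev_id; uint8_t num; struct rec_v0 entries[1]; };

    struct rec_v1 { uint8_t vdd_ind; uint32_t sclk; uint32_t sclk_offset; };
    struct table_v1 { uint8_t rev_id; uint8_t num; struct rec_v1 entries[1]; };

    /* Cast the generic header to the layout matching its revision. */
    static uint32_t first_sclk(const struct generic_header *hdr)
    {
            if (hdr->rev_id == 0)
                    return ((const struct table_v0 *)hdr)->entries[0].sclk;
            return ((const struct table_v1 *)hdr)->entries[0].sclk;
    }

    int main(void)
    {
            struct table_v1 t = { .rev_id = 1, .num = 1,
                                  .entries = { { 0, 60000, 0 } } };
            printf("sclk=%u\n", (unsigned)first_sclk((const struct generic_header *)&t));
            return 0;
    }

The generic header carries only what every revision has in common; everything else is reached through the revision-specific cast.
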
+28 -26
drivers/gpu/drm/drm_crtc_helper.c
··· 528 528 int drm_crtc_helper_set_config(struct drm_mode_set *set) 529 529 { 530 530 struct drm_device *dev; 531 - struct drm_crtc *new_crtc; 532 - struct drm_encoder *save_encoders, *new_encoder, *encoder; 531 + struct drm_crtc **save_encoder_crtcs, *new_crtc; 532 + struct drm_encoder **save_connector_encoders, *new_encoder, *encoder; 533 533 bool mode_changed = false; /* if true do a full mode set */ 534 534 bool fb_changed = false; /* if true and !mode_changed just do a flip */ 535 - struct drm_connector *save_connectors, *connector; 535 + struct drm_connector *connector; 536 536 int count = 0, ro, fail = 0; 537 537 const struct drm_crtc_helper_funcs *crtc_funcs; 538 538 struct drm_mode_set save_set; ··· 574 574 * Allocate space for the backup of all (non-pointer) encoder and 575 575 * connector data. 576 576 */ 577 - save_encoders = kzalloc(dev->mode_config.num_encoder * 578 - sizeof(struct drm_encoder), GFP_KERNEL); 579 - if (!save_encoders) 577 + save_encoder_crtcs = kzalloc(dev->mode_config.num_encoder * 578 + sizeof(struct drm_crtc *), GFP_KERNEL); 579 + if (!save_encoder_crtcs) 580 580 return -ENOMEM; 581 581 582 - save_connectors = kzalloc(dev->mode_config.num_connector * 583 - sizeof(struct drm_connector), GFP_KERNEL); 584 - if (!save_connectors) { 585 - kfree(save_encoders); 582 + save_connector_encoders = kzalloc(dev->mode_config.num_connector * 583 + sizeof(struct drm_encoder *), GFP_KERNEL); 584 + if (!save_connector_encoders) { 585 + kfree(save_encoder_crtcs); 586 586 return -ENOMEM; 587 587 } 588 588 ··· 593 593 */ 594 594 count = 0; 595 595 drm_for_each_encoder(encoder, dev) { 596 - save_encoders[count++] = *encoder; 596 + save_encoder_crtcs[count++] = encoder->crtc; 597 597 } 598 598 599 599 count = 0; 600 600 drm_for_each_connector(connector, dev) { 601 - save_connectors[count++] = *connector; 601 + save_connector_encoders[count++] = connector->encoder; 602 602 } 603 603 604 604 save_set.crtc = set->crtc; ··· 631 631 mode_changed = true; 632 632 } 633 633 634 - /* take a reference on all connectors in set */ 634 + /* take a reference on all unbound connectors in set, reuse the 635 + * already taken reference for bound connectors 636 + */ 635 637 for (ro = 0; ro < set->num_connectors; ro++) { 638 + if (set->connectors[ro]->encoder) 639 + continue; 636 640 drm_connector_reference(set->connectors[ro]); 637 641 } 638 642 ··· 758 754 } 759 755 } 760 756 761 - /* after fail drop reference on all connectors in save set */ 762 - count = 0; 763 - drm_for_each_connector(connector, dev) { 764 - drm_connector_unreference(&save_connectors[count++]); 765 - } 766 - 767 - kfree(save_connectors); 768 - kfree(save_encoders); 757 + kfree(save_connector_encoders); 758 + kfree(save_encoder_crtcs); 769 759 return 0; 770 760 771 761 fail: 772 762 /* Restore all previous data. 
*/ 773 763 count = 0; 774 764 drm_for_each_encoder(encoder, dev) { 775 - *encoder = save_encoders[count++]; 765 + encoder->crtc = save_encoder_crtcs[count++]; 776 766 } 777 767 778 768 count = 0; 779 769 drm_for_each_connector(connector, dev) { 780 - *connector = save_connectors[count++]; 770 + connector->encoder = save_connector_encoders[count++]; 781 771 } 782 772 783 - /* after fail drop reference on all connectors in set */ 773 + /* after fail drop reference on all unbound connectors in set, let 774 + * bound connectors keep their reference 775 + */ 784 776 for (ro = 0; ro < set->num_connectors; ro++) { 777 + if (set->connectors[ro]->encoder) 778 + continue; 785 779 drm_connector_unreference(set->connectors[ro]); 786 780 } 787 781 ··· 789 787 save_set.y, save_set.fb)) 790 788 DRM_ERROR("failed to restore config after modeset failure\n"); 791 789 792 - kfree(save_connectors); 793 - kfree(save_encoders); 790 + kfree(save_connector_encoders); 791 + kfree(save_encoder_crtcs); 794 792 return ret; 795 793 } 796 794 EXPORT_SYMBOL(drm_crtc_helper_set_config);
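
The drm_crtc_helper_set_config() fix stops copying whole drm_encoder/drm_connector structs for rollback, since those embed reference counts and list heads that must not be clobbered, and saves only the crtc and encoder link pointers instead. A runnable sketch of that save-pointers, restore-on-failure idiom (all names illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    struct crtc { int id; };
    struct encoder { struct crtc *crtc; };

    static int apply_new_config(struct encoder *e, int n)
    {
            (void)e; (void)n;
            return -1; /* pretend the modeset failed */
    }

    static int set_config(struct encoder *enc, int n)
    {
            struct crtc **saved = calloc(n, sizeof(*saved));
            int i, ret;

            if (!saved)
                    return -1;
            for (i = 0; i < n; i++)
                    saved[i] = enc[i].crtc;         /* pointer, not struct copy */

            ret = apply_new_config(enc, n);
            if (ret)
                    for (i = 0; i < n; i++)
                            enc[i].crtc = saved[i]; /* cheap, refcount-safe undo */

            free(saved);
            return ret;
    }

    int main(void)
    {
            struct crtc c = { 1 };
            struct encoder e = { &c };
            printf("ret=%d crtc=%d\n", set_config(&e, 1), e.crtc->id);
            return 0;
    }
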
+3 -5
drivers/gpu/drm/drm_dp_mst_topology.c
··· 2927 2927 drm_dp_port_teardown_pdt(port, port->pdt); 2928 2928 2929 2929 if (!port->input && port->vcpi.vcpi > 0) { 2930 - if (mgr->mst_state) { 2931 - drm_dp_mst_reset_vcpi_slots(mgr, port); 2932 - drm_dp_update_payload_part1(mgr); 2933 - drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); 2934 - } 2930 + drm_dp_mst_reset_vcpi_slots(mgr, port); 2931 + drm_dp_update_payload_part1(mgr); 2932 + drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); 2935 2933 } 2936 2934 2937 2935 kref_put(&port->kref, drm_dp_free_mst_port);
+1
drivers/gpu/drm/etnaviv/etnaviv_iommu.c
··· 225 225 226 226 etnaviv_domain->domain.type = __IOMMU_DOMAIN_PAGING; 227 227 etnaviv_domain->domain.ops = &etnaviv_iommu_ops.ops; 228 + etnaviv_domain->domain.pgsize_bitmap = SZ_4K; 228 229 etnaviv_domain->domain.geometry.aperture_start = GPU_MEM_START; 229 230 etnaviv_domain->domain.geometry.aperture_end = GPU_MEM_START + PT_ENTRIES * SZ_4K - 1; 230 231
+1
drivers/gpu/drm/i915/i915_drv.h
··· 3481 3481 bool intel_bios_is_valid_vbt(const void *buf, size_t size); 3482 3482 bool intel_bios_is_tv_present(struct drm_i915_private *dev_priv); 3483 3483 bool intel_bios_is_lvds_present(struct drm_i915_private *dev_priv, u8 *i2c_pin); 3484 + bool intel_bios_is_port_present(struct drm_i915_private *dev_priv, enum port port); 3484 3485 bool intel_bios_is_port_edp(struct drm_i915_private *dev_priv, enum port port); 3485 3486 bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv, enum port port); 3486 3487 bool intel_bios_is_dsi_present(struct drm_i915_private *dev_priv, enum port *port);
+45 -1
drivers/gpu/drm/i915/intel_bios.c
··· 139 139 else 140 140 panel_fixed_mode->flags |= DRM_MODE_FLAG_NVSYNC; 141 141 142 + panel_fixed_mode->width_mm = (dvo_timing->himage_hi << 8) | 143 + dvo_timing->himage_lo; 144 + panel_fixed_mode->height_mm = (dvo_timing->vimage_hi << 8) | 145 + dvo_timing->vimage_lo; 146 + 142 147 /* Some VBTs have bogus h/vtotal values */ 143 148 if (panel_fixed_mode->hsync_end > panel_fixed_mode->htotal) 144 149 panel_fixed_mode->htotal = panel_fixed_mode->hsync_end + 1; ··· 1192 1187 } 1193 1188 if (bdb->version < 106) { 1194 1189 expected_size = 22; 1195 - } else if (bdb->version < 109) { 1190 + } else if (bdb->version < 111) { 1196 1191 expected_size = 27; 1197 1192 } else if (bdb->version < 195) { 1198 1193 BUILD_BUG_ON(sizeof(struct old_child_dev_config) != 33); ··· 1544 1539 * the OpRegion then they have validated the LVDS's existence. 1545 1540 */ 1546 1541 if (dev_priv->opregion.vbt) 1542 + return true; 1543 + } 1544 + 1545 + return false; 1546 + } 1547 + 1548 + /** 1549 + * intel_bios_is_port_present - is the specified digital port present 1550 + * @dev_priv: i915 device instance 1551 + * @port: port to check 1552 + * 1553 + * Return true if the device in %port is present. 1554 + */ 1555 + bool intel_bios_is_port_present(struct drm_i915_private *dev_priv, enum port port) 1556 + { 1557 + static const struct { 1558 + u16 dp, hdmi; 1559 + } port_mapping[] = { 1560 + [PORT_B] = { DVO_PORT_DPB, DVO_PORT_HDMIB, }, 1561 + [PORT_C] = { DVO_PORT_DPC, DVO_PORT_HDMIC, }, 1562 + [PORT_D] = { DVO_PORT_DPD, DVO_PORT_HDMID, }, 1563 + [PORT_E] = { DVO_PORT_DPE, DVO_PORT_HDMIE, }, 1564 + }; 1565 + int i; 1566 + 1567 + /* FIXME maybe deal with port A as well? */ 1568 + if (WARN_ON(port == PORT_A) || port >= ARRAY_SIZE(port_mapping)) 1569 + return false; 1570 + 1571 + if (!dev_priv->vbt.child_dev_num) 1572 + return false; 1573 + 1574 + for (i = 0; i < dev_priv->vbt.child_dev_num; i++) { 1575 + const union child_device_config *p_child = 1576 + &dev_priv->vbt.child_dev[i]; 1577 + if ((p_child->common.dvo_port == port_mapping[port].dp || 1578 + p_child->common.dvo_port == port_mapping[port].hdmi) && 1579 + (p_child->common.device_type & (DEVICE_TYPE_TMDS_DVI_SIGNALING | 1580 + DEVICE_TYPE_DISPLAYPORT_OUTPUT))) 1547 1581 return true; 1548 1582 } 1549 1583
+60 -28
drivers/gpu/drm/i915/intel_display.c
··· 8275 8275 { 8276 8276 struct drm_i915_private *dev_priv = dev->dev_private; 8277 8277 struct intel_encoder *encoder; 8278 + int i; 8278 8279 u32 val, final; 8279 8280 bool has_lvds = false; 8280 8281 bool has_cpu_edp = false; 8281 8282 bool has_panel = false; 8282 8283 bool has_ck505 = false; 8283 8284 bool can_ssc = false; 8285 + bool using_ssc_source = false; 8284 8286 8285 8287 /* We need to take the global config into account */ 8286 8288 for_each_intel_encoder(dev, encoder) { ··· 8309 8307 can_ssc = true; 8310 8308 } 8311 8309 8312 - DRM_DEBUG_KMS("has_panel %d has_lvds %d has_ck505 %d\n", 8313 - has_panel, has_lvds, has_ck505); 8310 + /* Check if any DPLLs are using the SSC source */ 8311 + for (i = 0; i < dev_priv->num_shared_dpll; i++) { 8312 + u32 temp = I915_READ(PCH_DPLL(i)); 8313 + 8314 + if (!(temp & DPLL_VCO_ENABLE)) 8315 + continue; 8316 + 8317 + if ((temp & PLL_REF_INPUT_MASK) == 8318 + PLLB_REF_INPUT_SPREADSPECTRUMIN) { 8319 + using_ssc_source = true; 8320 + break; 8321 + } 8322 + } 8323 + 8324 + DRM_DEBUG_KMS("has_panel %d has_lvds %d has_ck505 %d using_ssc_source %d\n", 8325 + has_panel, has_lvds, has_ck505, using_ssc_source); 8314 8326 8315 8327 /* Ironlake: try to setup display ref clock before DPLL 8316 8328 * enabling. This is only under driver's control after ··· 8361 8345 final |= DREF_CPU_SOURCE_OUTPUT_NONSPREAD; 8362 8346 } else 8363 8347 final |= DREF_CPU_SOURCE_OUTPUT_DISABLE; 8364 - } else { 8365 - final |= DREF_SSC_SOURCE_DISABLE; 8366 - final |= DREF_CPU_SOURCE_OUTPUT_DISABLE; 8348 + } else if (using_ssc_source) { 8349 + final |= DREF_SSC_SOURCE_ENABLE; 8350 + final |= DREF_SSC1_ENABLE; 8367 8351 } 8368 8352 8369 8353 if (final == val) ··· 8409 8393 POSTING_READ(PCH_DREF_CONTROL); 8410 8394 udelay(200); 8411 8395 } else { 8412 - DRM_DEBUG_KMS("Disabling SSC entirely\n"); 8396 + DRM_DEBUG_KMS("Disabling CPU source output\n"); 8413 8397 8414 8398 val &= ~DREF_CPU_SOURCE_OUTPUT_MASK; 8415 8399 ··· 8420 8404 POSTING_READ(PCH_DREF_CONTROL); 8421 8405 udelay(200); 8422 8406 8423 - /* Turn off the SSC source */ 8424 - val &= ~DREF_SSC_SOURCE_MASK; 8425 - val |= DREF_SSC_SOURCE_DISABLE; 8407 + if (!using_ssc_source) { 8408 + DRM_DEBUG_KMS("Disabling SSC source\n"); 8426 8409 8427 - /* Turn off SSC1 */ 8428 - val &= ~DREF_SSC1_ENABLE; 8410 + /* Turn off the SSC source */ 8411 + val &= ~DREF_SSC_SOURCE_MASK; 8412 + val |= DREF_SSC_SOURCE_DISABLE; 8429 8413 8430 - I915_WRITE(PCH_DREF_CONTROL, val); 8431 - POSTING_READ(PCH_DREF_CONTROL); 8432 - udelay(200); 8414 + /* Turn off SSC1 */ 8415 + val &= ~DREF_SSC1_ENABLE; 8416 + 8417 + I915_WRITE(PCH_DREF_CONTROL, val); 8418 + POSTING_READ(PCH_DREF_CONTROL); 8419 + udelay(200); 8420 + } 8433 8421 } 8434 8422 8435 8423 BUG_ON(val != final); ··· 14574 14554 if (I915_READ(PCH_DP_D) & DP_DETECTED) 14575 14555 intel_dp_init(dev, PCH_DP_D, PORT_D); 14576 14556 } else if (IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) { 14557 + bool has_edp, has_port; 14558 + 14577 14559 /* 14578 14560 * The DP_DETECTED bit is the latched state of the DDC 14579 14561 * SDA pin at boot. However since eDP doesn't require DDC ··· 14584 14562 * Thus we can't rely on the DP_DETECTED bit alone to detect 14585 14563 * eDP ports. Consult the VBT as well as DP_DETECTED to 14586 14564 * detect eDP ports. 14565 + * 14566 + * Sadly the straps seem to be missing sometimes even for HDMI 14567 + * ports (eg. on Voyo V3 - CHT x7-Z8700), so check both strap 14568 + * and VBT for the presence of the port. 
Additionally we can't 14569 + * trust the port type the VBT declares as we've seen at least 14570 + * HDMI ports that the VBT claim are DP or eDP. 14587 14571 */ 14588 - if (I915_READ(VLV_HDMIB) & SDVO_DETECTED && 14589 - !intel_dp_is_edp(dev, PORT_B)) 14572 + has_edp = intel_dp_is_edp(dev, PORT_B); 14573 + has_port = intel_bios_is_port_present(dev_priv, PORT_B); 14574 + if (I915_READ(VLV_DP_B) & DP_DETECTED || has_port) 14575 + has_edp &= intel_dp_init(dev, VLV_DP_B, PORT_B); 14576 + if ((I915_READ(VLV_HDMIB) & SDVO_DETECTED || has_port) && !has_edp) 14590 14577 intel_hdmi_init(dev, VLV_HDMIB, PORT_B); 14591 - if (I915_READ(VLV_DP_B) & DP_DETECTED || 14592 - intel_dp_is_edp(dev, PORT_B)) 14593 - intel_dp_init(dev, VLV_DP_B, PORT_B); 14594 14578 14595 - if (I915_READ(VLV_HDMIC) & SDVO_DETECTED && 14596 - !intel_dp_is_edp(dev, PORT_C)) 14579 + has_edp = intel_dp_is_edp(dev, PORT_C); 14580 + has_port = intel_bios_is_port_present(dev_priv, PORT_C); 14581 + if (I915_READ(VLV_DP_C) & DP_DETECTED || has_port) 14582 + has_edp &= intel_dp_init(dev, VLV_DP_C, PORT_C); 14583 + if ((I915_READ(VLV_HDMIC) & SDVO_DETECTED || has_port) && !has_edp) 14597 14584 intel_hdmi_init(dev, VLV_HDMIC, PORT_C); 14598 - if (I915_READ(VLV_DP_C) & DP_DETECTED || 14599 - intel_dp_is_edp(dev, PORT_C)) 14600 - intel_dp_init(dev, VLV_DP_C, PORT_C); 14601 14585 14602 14586 if (IS_CHERRYVIEW(dev)) { 14603 - /* eDP not supported on port D, so don't check VBT */ 14604 - if (I915_READ(CHV_HDMID) & SDVO_DETECTED) 14605 - intel_hdmi_init(dev, CHV_HDMID, PORT_D); 14606 - if (I915_READ(CHV_DP_D) & DP_DETECTED) 14587 + /* 14588 + * eDP not supported on port D, 14589 + * so no need to worry about it 14590 + */ 14591 + has_port = intel_bios_is_port_present(dev_priv, PORT_D); 14592 + if (I915_READ(CHV_DP_D) & DP_DETECTED || has_port) 14607 14593 intel_dp_init(dev, CHV_DP_D, PORT_D); 14594 + if (I915_READ(CHV_HDMID) & SDVO_DETECTED || has_port) 14595 + intel_hdmi_init(dev, CHV_HDMID, PORT_D); 14608 14596 } 14609 14597 14610 14598 intel_dsi_init(dev);
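
The VLV/CHV output probing above now treats a port as present when either the hardware strap bit or the VBT says so, and registers HDMI only when DP/eDP initialization did not claim the port, which is why intel_dp_init() now returns bool. A toy C sketch of that decision order, with all the probes stubbed out to fixed values:

    #include <stdbool.h>
    #include <stdio.h>

    static bool dp_strap_set(void)   { return false; } /* latched DDC pin */
    static bool hdmi_strap_set(void) { return true;  }
    static bool vbt_has_port(void)   { return true;  } /* from BIOS tables */
    static bool vbt_says_edp(void)   { return false; } /* VBT eDP flag */
    static bool dp_init(void)        { return false; } /* true only if eDP claimed it */

    int main(void)
    {
            bool has_edp = vbt_says_edp();
            bool has_port = vbt_has_port();

            /* Try DP first; eDP only counts if its init actually succeeded. */
            if (dp_strap_set() || has_port)
                    has_edp &= dp_init();
            /* HDMI only if the port exists and eDP did not claim it. */
            if ((hdmi_strap_set() || has_port) && !has_edp)
                    printf("registering HDMI on this port\n");
            return 0;
    }
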
+10 -8
drivers/gpu/drm/i915/intel_dp.c
··· 5725 5725 if (!fixed_mode && dev_priv->vbt.lfp_lvds_vbt_mode) { 5726 5726 fixed_mode = drm_mode_duplicate(dev, 5727 5727 dev_priv->vbt.lfp_lvds_vbt_mode); 5728 - if (fixed_mode) 5728 + if (fixed_mode) { 5729 5729 fixed_mode->type |= DRM_MODE_TYPE_PREFERRED; 5730 + connector->display_info.width_mm = fixed_mode->width_mm; 5731 + connector->display_info.height_mm = fixed_mode->height_mm; 5732 + } 5730 5733 } 5731 5734 mutex_unlock(&dev->mode_config.mutex); 5732 5735 ··· 5926 5923 return false; 5927 5924 } 5928 5925 5929 - void 5930 - intel_dp_init(struct drm_device *dev, 5931 - i915_reg_t output_reg, enum port port) 5926 + bool intel_dp_init(struct drm_device *dev, 5927 + i915_reg_t output_reg, 5928 + enum port port) 5932 5929 { 5933 5930 struct drm_i915_private *dev_priv = dev->dev_private; 5934 5931 struct intel_digital_port *intel_dig_port; ··· 5938 5935 5939 5936 intel_dig_port = kzalloc(sizeof(*intel_dig_port), GFP_KERNEL); 5940 5937 if (!intel_dig_port) 5941 - return; 5938 + return false; 5942 5939 5943 5940 intel_connector = intel_connector_alloc(); 5944 5941 if (!intel_connector) ··· 5995 5992 if (!intel_dp_init_connector(intel_dig_port, intel_connector)) 5996 5993 goto err_init_connector; 5997 5994 5998 - return; 5995 + return true; 5999 5996 6000 5997 err_init_connector: 6001 5998 drm_encoder_cleanup(encoder); ··· 6003 6000 kfree(intel_connector); 6004 6001 err_connector_alloc: 6005 6002 kfree(intel_dig_port); 6006 - 6007 - return; 6003 + return false; 6008 6004 } 6009 6005 6010 6006 void intel_dp_mst_suspend(struct drm_device *dev)
+3
drivers/gpu/drm/i915/intel_dpll_mgr.c
··· 366 366 DPLL_ID_PCH_PLL_B); 367 367 } 368 368 369 + if (!pll) 370 + return NULL; 371 + 369 372 /* reference the pll */ 370 373 intel_reference_shared_dpll(pll, crtc_state); 371 374
+1 -1
drivers/gpu/drm/i915/intel_drv.h
··· 1284 1284 void intel_csr_ucode_resume(struct drm_i915_private *); 1285 1285 1286 1286 /* intel_dp.c */ 1287 - void intel_dp_init(struct drm_device *dev, i915_reg_t output_reg, enum port port); 1287 + bool intel_dp_init(struct drm_device *dev, i915_reg_t output_reg, enum port port); 1288 1288 bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port, 1289 1289 struct intel_connector *intel_connector); 1290 1290 void intel_dp_set_link_params(struct intel_dp *intel_dp,
+3
drivers/gpu/drm/i915/intel_dsi.c
··· 1545 1545 goto err; 1546 1546 } 1547 1547 1548 + connector->display_info.width_mm = fixed_mode->width_mm; 1549 + connector->display_info.height_mm = fixed_mode->height_mm; 1550 + 1548 1551 intel_panel_init(&intel_connector->panel, fixed_mode, NULL); 1549 1552 1550 1553 intel_dsi_add_properties(intel_connector);
+3
drivers/gpu/drm/i915/intel_hdmi.c
··· 2142 2142 enum port port = intel_dig_port->port; 2143 2143 uint8_t alternate_ddc_pin; 2144 2144 2145 + DRM_DEBUG_KMS("Adding HDMI connector on port %c\n", 2146 + port_name(port)); 2147 + 2145 2148 if (WARN(intel_dig_port->max_lanes < 4, 2146 2149 "Not enough lanes (%d) for HDMI on port %c\n", 2147 2150 intel_dig_port->max_lanes, port_name(port)))
+2
drivers/gpu/drm/i915/intel_lvds.c
··· 1082 1082 fixed_mode = drm_mode_duplicate(dev, dev_priv->vbt.lfp_lvds_vbt_mode); 1083 1083 if (fixed_mode) { 1084 1084 fixed_mode->type |= DRM_MODE_TYPE_PREFERRED; 1085 + connector->display_info.width_mm = fixed_mode->width_mm; 1086 + connector->display_info.height_mm = fixed_mode->height_mm; 1085 1087 goto out; 1086 1088 } 1087 1089 }
+4 -3
drivers/gpu/drm/i915/intel_vbt_defs.h
··· 403 403 u8 vsync_off:4; 404 404 u8 rsvd0:6; 405 405 u8 hsync_off_hi:2; 406 - u8 h_image; 407 - u8 v_image; 408 - u8 max_hv; 406 + u8 himage_lo; 407 + u8 vimage_lo; 408 + u8 vimage_hi:4; 409 + u8 himage_hi:4; 409 410 u8 h_border; 410 411 u8 v_border; 411 412 u8 rsvd1:3;
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
··· 1614 1614 .fini = nvkm_device_pci_fini, 1615 1615 .resource_addr = nvkm_device_pci_resource_addr, 1616 1616 .resource_size = nvkm_device_pci_resource_size, 1617 - .cpu_coherent = !IS_ENABLED(CONFIG_ARM) && !IS_ENABLED(CONFIG_ARM64), 1617 + .cpu_coherent = !IS_ENABLED(CONFIG_ARM), 1618 1618 }; 1619 1619 1620 1620 int
+9 -7
drivers/gpu/drm/nouveau/nvkm/subdev/iccsense/base.c
··· 276 276 struct pwr_rail_t *r = &stbl.rail[i]; 277 277 struct nvkm_iccsense_rail *rail; 278 278 struct nvkm_iccsense_sensor *sensor; 279 + int (*read)(struct nvkm_iccsense *, 280 + struct nvkm_iccsense_rail *); 279 281 280 282 if (!r->mode || r->resistor_mohm == 0) 281 283 continue; ··· 286 284 if (!sensor) 287 285 continue; 288 286 289 - rail = kmalloc(sizeof(*rail), GFP_KERNEL); 290 - if (!rail) 291 - return -ENOMEM; 292 - 293 287 switch (sensor->type) { 294 288 case NVBIOS_EXTDEV_INA209: 295 289 if (r->rail != 0) 296 290 continue; 297 - rail->read = nvkm_iccsense_ina209_read; 291 + read = nvkm_iccsense_ina209_read; 298 292 break; 299 293 case NVBIOS_EXTDEV_INA219: 300 294 if (r->rail != 0) 301 295 continue; 302 - rail->read = nvkm_iccsense_ina219_read; 296 + read = nvkm_iccsense_ina219_read; 303 297 break; 304 298 case NVBIOS_EXTDEV_INA3221: 305 299 if (r->rail >= 3) 306 300 continue; 307 - rail->read = nvkm_iccsense_ina3221_read; 301 + read = nvkm_iccsense_ina3221_read; 308 302 break; 309 303 default: 310 304 continue; 311 305 } 312 306 307 + rail = kmalloc(sizeof(*rail), GFP_KERNEL); 308 + if (!rail) 309 + return -ENOMEM; 313 310 sensor->rail_mask |= 1 << r->rail; 311 + rail->read = read; 314 312 rail->sensor = sensor; 315 313 rail->idx = r->rail; 316 314 rail->mohm = r->resistor_mohm;
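
The iccsense change moves the kmalloc() below the switch so that the `continue` paths can no longer leak a rail allocation: the read handler is selected into a local first, and memory is allocated only once the entry is known to be usable. A standalone sketch of that validate-before-allocate pattern (types illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    struct rail { int (*read)(void); int idx; };

    static int read_a(void) { return 1; }
    static int read_b(void) { return 2; }

    static struct rail *make_rail(int type, int idx)
    {
            int (*read)(void);

            switch (type) {            /* validate before allocating */
            case 0: read = read_a; break;
            case 1: read = read_b; break;
            default: return NULL;      /* nothing allocated, nothing leaked */
            }

            struct rail *r = malloc(sizeof(*r));
            if (!r)
                    return NULL;
            r->read = read;
            r->idx = idx;
            return r;
    }

    int main(void)
    {
            struct rail *r = make_rail(1, 0);
            if (r) {
                    printf("rail %d reads %d\n", r->idx, r->read());
                    free(r);
            }
            return 0;
    }
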
+3 -2
drivers/gpu/drm/radeon/atombios_crtc.c
··· 589 589 if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE61(rdev) || ASIC_IS_DCE8(rdev)) 590 590 radeon_crtc->pll_flags |= RADEON_PLL_USE_FRAC_FB_DIV; 591 591 /* use frac fb div on RS780/RS880 */ 592 - if ((rdev->family == CHIP_RS780) || (rdev->family == CHIP_RS880)) 592 + if (((rdev->family == CHIP_RS780) || (rdev->family == CHIP_RS880)) 593 + && !radeon_crtc->ss_enabled) 593 594 radeon_crtc->pll_flags |= RADEON_PLL_USE_FRAC_FB_DIV; 594 595 if (ASIC_IS_DCE32(rdev) && mode->clock > 165000) 595 596 radeon_crtc->pll_flags |= RADEON_PLL_USE_FRAC_FB_DIV; ··· 627 626 if (radeon_crtc->ss.refdiv) { 628 627 radeon_crtc->pll_flags |= RADEON_PLL_USE_REF_DIV; 629 628 radeon_crtc->pll_reference_div = radeon_crtc->ss.refdiv; 630 - if (ASIC_IS_AVIVO(rdev)) 629 + if (rdev->family >= CHIP_RV770) 631 630 radeon_crtc->pll_flags |= RADEON_PLL_USE_FRAC_FB_DIV; 632 631 } 633 632 }
+22 -1
drivers/gpu/drm/radeon/radeon_device.c
··· 630 630 /* 631 631 * GPU helpers function. 632 632 */ 633 + 634 + /** 635 + * radeon_device_is_virtual - check if we are running is a virtual environment 636 + * 637 + * Check if the asic has been passed through to a VM (all asics). 638 + * Used at driver startup. 639 + * Returns true if virtual or false if not. 640 + */ 641 + static bool radeon_device_is_virtual(void) 642 + { 643 + #ifdef CONFIG_X86 644 + return boot_cpu_has(X86_FEATURE_HYPERVISOR); 645 + #else 646 + return false; 647 + #endif 648 + } 649 + 633 650 /** 634 651 * radeon_card_posted - check if the hw has already been initialized 635 652 * ··· 659 642 bool radeon_card_posted(struct radeon_device *rdev) 660 643 { 661 644 uint32_t reg; 645 + 646 + /* for pass through, always force asic_init */ 647 + if (radeon_device_is_virtual()) 648 + return false; 662 649 663 650 /* required for EFI mode on macbook2,1 which uses an r5xx asic */ 664 651 if (efi_enabled(EFI_BOOT) && ··· 1652 1631 radeon_agp_suspend(rdev); 1653 1632 1654 1633 pci_save_state(dev->pdev); 1655 - if (freeze && rdev->family >= CHIP_R600) { 1634 + if (freeze && rdev->family >= CHIP_CEDAR) { 1656 1635 rdev->asic->asic_reset(rdev, true); 1657 1636 pci_restore_state(dev->pdev); 1658 1637 } else if (suspend) {
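
radeon_device_is_virtual() keys off the x86 hypervisor feature bit, which a guest CPU reports via CPUID leaf 1, ECX bit 31. A runnable user-space equivalent of the same check:

    #include <cpuid.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool running_virtual(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                    return false;
            return ecx & (1u << 31);   /* hypervisor-present bit */
    }

    int main(void)
    {
            printf("hypervisor: %s\n", running_virtual() ? "yes" : "no");
            return 0;
    }
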
+1 -1
drivers/hid/hid-elo.c
··· 261 261 struct elo_priv *priv = hid_get_drvdata(hdev); 262 262 263 263 hid_hw_stop(hdev); 264 - flush_workqueue(wq); 264 + cancel_delayed_work_sync(&priv->work); 265 265 kfree(priv); 266 266 } 267 267
+5
drivers/hid/hid-multitouch.c
··· 1401 1401 MT_USB_DEVICE(USB_VENDOR_ID_NOVATEK, 1402 1402 USB_DEVICE_ID_NOVATEK_PCT) }, 1403 1403 1404 + /* Ntrig Panel */ 1405 + { .driver_data = MT_CLS_NSMU, 1406 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 1407 + USB_VENDOR_ID_NTRIG, 0x1b05) }, 1408 + 1404 1409 /* PixArt optical touch screen */ 1405 1410 { .driver_data = MT_CLS_INRANGE_CONTACTNUMBER, 1406 1411 MT_USB_DEVICE(USB_VENDOR_ID_PIXART,
+4 -7
drivers/hwtracing/coresight/coresight-tmc-etr.c
··· 300 300 if (local_read(&drvdata->mode) == CS_MODE_SYSFS) { 301 301 /* 302 302 * The trace run will continue with the same allocated trace 303 - * buffer. As such zero-out the buffer so that we don't end 304 - * up with stale data. 305 - * 306 - * Since the tracer is still enabled drvdata::buf 307 - * can't be NULL. 303 + * buffer. The trace buffer is cleared in tmc_etr_enable_hw(), 304 + * so we don't have to explicitly clear it. Also, since the 305 + * tracer is still enabled drvdata::buf can't be NULL. 308 306 */ 309 - memset(drvdata->buf, 0, drvdata->size); 310 307 tmc_etr_enable_hw(drvdata); 311 308 } else { 312 309 /* ··· 312 315 */ 313 316 vaddr = drvdata->vaddr; 314 317 paddr = drvdata->paddr; 315 - drvdata->buf = NULL; 318 + drvdata->buf = drvdata->vaddr = NULL; 316 319 } 317 320 318 321 drvdata->reading = false;
+9 -6
drivers/hwtracing/coresight/coresight.c
··· 385 385 int i; 386 386 bool found = false; 387 387 struct coresight_node *node; 388 - struct coresight_connection *conn; 389 388 390 389 /* An activated sink has been found. Enqueue the element */ 391 390 if ((csdev->type == CORESIGHT_DEV_TYPE_SINK || ··· 393 394 394 395 /* Not a sink - recursively explore each port found on this element */ 395 396 for (i = 0; i < csdev->nr_outport; i++) { 396 - conn = &csdev->conns[i]; 397 - if (_coresight_build_path(conn->child_dev, path) == 0) { 397 + struct coresight_device *child_dev = csdev->conns[i].child_dev; 398 + 399 + if (child_dev && _coresight_build_path(child_dev, path) == 0) { 398 400 found = true; 399 401 break; 400 402 } ··· 425 425 struct list_head *coresight_build_path(struct coresight_device *csdev) 426 426 { 427 427 struct list_head *path; 428 + int rc; 428 429 429 430 path = kzalloc(sizeof(struct list_head), GFP_KERNEL); 430 431 if (!path) ··· 433 432 434 433 INIT_LIST_HEAD(path); 435 434 436 - if (_coresight_build_path(csdev, path)) { 435 + rc = _coresight_build_path(csdev, path); 436 + if (rc) { 437 437 kfree(path); 438 - path = NULL; 438 + return ERR_PTR(rc); 439 439 } 440 440 441 441 return path; ··· 509 507 goto out; 510 508 511 509 path = coresight_build_path(csdev); 512 - if (!path) { 510 + if (IS_ERR(path)) { 513 511 pr_err("building path(s) failed\n"); 512 + ret = PTR_ERR(path); 514 513 goto out; 515 514 } 516 515
+96 -3
drivers/i2c/busses/i2c-i801.c
··· 245 245 struct platform_device *mux_pdev; 246 246 #endif 247 247 struct platform_device *tco_pdev; 248 + 249 + /* 250 + * If set to true the host controller registers are reserved for 251 + * ACPI AML use. Protected by acpi_lock. 252 + */ 253 + bool acpi_reserved; 254 + struct mutex acpi_lock; 248 255 }; 249 256 250 257 #define FEATURE_SMBUS_PEC (1 << 0) ··· 725 718 int ret = 0, xact = 0; 726 719 struct i801_priv *priv = i2c_get_adapdata(adap); 727 720 721 + mutex_lock(&priv->acpi_lock); 722 + if (priv->acpi_reserved) { 723 + mutex_unlock(&priv->acpi_lock); 724 + return -EBUSY; 725 + } 726 + 728 727 pm_runtime_get_sync(&priv->pci_dev->dev); 729 728 730 729 hwpec = (priv->features & FEATURE_SMBUS_PEC) && (flags & I2C_CLIENT_PEC) ··· 833 820 out: 834 821 pm_runtime_mark_last_busy(&priv->pci_dev->dev); 835 822 pm_runtime_put_autosuspend(&priv->pci_dev->dev); 823 + mutex_unlock(&priv->acpi_lock); 836 824 return ret; 837 825 } 838 826 ··· 1271 1257 priv->tco_pdev = pdev; 1272 1258 } 1273 1259 1260 + #ifdef CONFIG_ACPI 1261 + static acpi_status 1262 + i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits, 1263 + u64 *value, void *handler_context, void *region_context) 1264 + { 1265 + struct i801_priv *priv = handler_context; 1266 + struct pci_dev *pdev = priv->pci_dev; 1267 + acpi_status status; 1268 + 1269 + /* 1270 + * Once BIOS AML code touches the OpRegion we warn and inhibit any 1271 + * further access from the driver itself. This device is now owned 1272 + * by the system firmware. 1273 + */ 1274 + mutex_lock(&priv->acpi_lock); 1275 + 1276 + if (!priv->acpi_reserved) { 1277 + priv->acpi_reserved = true; 1278 + 1279 + dev_warn(&pdev->dev, "BIOS is accessing SMBus registers\n"); 1280 + dev_warn(&pdev->dev, "Driver SMBus register access inhibited\n"); 1281 + 1282 + /* 1283 + * BIOS is accessing the host controller so prevent it from 1284 + * suspending automatically from now on. 
1285 + */ 1286 + pm_runtime_get_sync(&pdev->dev); 1287 + } 1288 + 1289 + if ((function & ACPI_IO_MASK) == ACPI_READ) 1290 + status = acpi_os_read_port(address, (u32 *)value, bits); 1291 + else 1292 + status = acpi_os_write_port(address, (u32)*value, bits); 1293 + 1294 + mutex_unlock(&priv->acpi_lock); 1295 + 1296 + return status; 1297 + } 1298 + 1299 + static int i801_acpi_probe(struct i801_priv *priv) 1300 + { 1301 + struct acpi_device *adev; 1302 + acpi_status status; 1303 + 1304 + adev = ACPI_COMPANION(&priv->pci_dev->dev); 1305 + if (adev) { 1306 + status = acpi_install_address_space_handler(adev->handle, 1307 + ACPI_ADR_SPACE_SYSTEM_IO, i801_acpi_io_handler, 1308 + NULL, priv); 1309 + if (ACPI_SUCCESS(status)) 1310 + return 0; 1311 + } 1312 + 1313 + return acpi_check_resource_conflict(&priv->pci_dev->resource[SMBBAR]); 1314 + } 1315 + 1316 + static void i801_acpi_remove(struct i801_priv *priv) 1317 + { 1318 + struct acpi_device *adev; 1319 + 1320 + adev = ACPI_COMPANION(&priv->pci_dev->dev); 1321 + if (!adev) 1322 + return; 1323 + 1324 + acpi_remove_address_space_handler(adev->handle, 1325 + ACPI_ADR_SPACE_SYSTEM_IO, i801_acpi_io_handler); 1326 + 1327 + mutex_lock(&priv->acpi_lock); 1328 + if (priv->acpi_reserved) 1329 + pm_runtime_put(&priv->pci_dev->dev); 1330 + mutex_unlock(&priv->acpi_lock); 1331 + } 1332 + #else 1333 + static inline int i801_acpi_probe(struct i801_priv *priv) { return 0; } 1334 + static inline void i801_acpi_remove(struct i801_priv *priv) { } 1335 + #endif 1336 + 1274 1337 static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id) 1275 1338 { 1276 1339 unsigned char temp; ··· 1365 1274 priv->adapter.dev.parent = &dev->dev; 1366 1275 ACPI_COMPANION_SET(&priv->adapter.dev, ACPI_COMPANION(&dev->dev)); 1367 1276 priv->adapter.retries = 3; 1277 + mutex_init(&priv->acpi_lock); 1368 1278 1369 1279 priv->pci_dev = dev; 1370 1280 switch (dev->device) { ··· 1428 1336 return -ENODEV; 1429 1337 } 1430 1338 1431 - err = acpi_check_resource_conflict(&dev->resource[SMBBAR]); 1432 - if (err) { 1339 + if (i801_acpi_probe(priv)) 1433 1340 return -ENODEV; 1434 - } 1435 1341 1436 1342 err = pcim_iomap_regions(dev, 1 << SMBBAR, 1437 1343 dev_driver_string(&dev->dev)); ··· 1438 1348 "Failed to request SMBus region 0x%lx-0x%Lx\n", 1439 1349 priv->smba, 1440 1350 (unsigned long long)pci_resource_end(dev, SMBBAR)); 1351 + i801_acpi_remove(priv); 1441 1352 return err; 1442 1353 } 1443 1354 ··· 1503 1412 err = i2c_add_adapter(&priv->adapter); 1504 1413 if (err) { 1505 1414 dev_err(&dev->dev, "Failed to add SMBus adapter\n"); 1415 + i801_acpi_remove(priv); 1506 1416 return err; 1507 1417 } 1508 1418 ··· 1530 1438 1531 1439 i801_del_mux(priv); 1532 1440 i2c_del_adapter(&priv->adapter); 1441 + i801_acpi_remove(priv); 1533 1442 pci_write_config_byte(dev, SMBHSTCFG, priv->original_hstcfg); 1534 1443 1535 1444 platform_device_unregister(priv->tco_pdev);
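
The i801 change arbitrates ownership with a mutex-protected flag: the first time the ACPI AML handler touches the SMBus OpRegion, the driver marks itself locked out, and every later transfer fails fast with -EBUSY while holding the same mutex. A small pthreads sketch of that one-way handoff (names illustrative):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static bool reserved;

    static void firmware_touch(void)
    {
            pthread_mutex_lock(&lock);
            if (!reserved) {
                    reserved = true;
                    fprintf(stderr, "firmware owns the controller now\n");
            }
            pthread_mutex_unlock(&lock);
    }

    static int driver_transfer(void)
    {
            int ret = 0;

            pthread_mutex_lock(&lock);
            if (reserved) {
                    ret = -EBUSY;
            } else {
                    /* ...do the transaction while holding the lock... */
            }
            pthread_mutex_unlock(&lock);
            return ret;
    }

    int main(void)
    {
            printf("before: %d\n", driver_transfer());
            firmware_touch();
            printf("after:  %d\n", driver_transfer());
            return 0;
    }
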
+10 -7
drivers/i2c/busses/i2c-octeon.c
··· 934 934 return result; 935 935 936 936 for (i = 0; i < length; i++) { 937 - /* for the last byte TWSI_CTL_AAK must not be set */ 938 - if (i + 1 == length) 937 + /* 938 + * For the last byte to receive TWSI_CTL_AAK must not be set. 939 + * 940 + * A special case is I2C_M_RECV_LEN where we don't know the 941 + * additional length yet. If recv_len is set we assume we're 942 + * not reading the final byte and therefore need to set 943 + * TWSI_CTL_AAK. 944 + */ 945 + if ((i + 1 == length) && !(recv_len && i == 0)) 939 946 final_read = true; 940 947 941 948 /* clear iflg to allow next event */ ··· 957 950 958 951 data[i] = octeon_i2c_data_read(i2c); 959 952 if (recv_len && i == 0) { 960 - if (data[i] > I2C_SMBUS_BLOCK_MAX + 1) { 961 - dev_err(i2c->dev, 962 - "%s: read len > I2C_SMBUS_BLOCK_MAX %d\n", 963 - __func__, data[i]); 953 + if (data[i] > I2C_SMBUS_BLOCK_MAX + 1) 964 954 return -EPROTO; 965 - } 966 955 length += data[i]; 967 956 } 968 957
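
The Octeon fix matters for I2C_M_RECV_LEN transfers: byte 0 of an SMBus block read carries the remaining length, so it must always be ACKed even though it momentarily looks like the final byte. A tiny sketch of the corrected ACK/NAK decision, with the device's reported length made up for the example:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
            bool recv_len = true;   /* I2C_M_RECV_LEN-style transfer */
            int length = 1;         /* only the count byte is known so far */

            for (int i = 0; i < length; i++) {
                    bool final = (i + 1 == length) && !(recv_len && i == 0);

                    printf("byte %d: %s\n", i, final ? "NAK (last)" : "ACK");
                    if (recv_len && i == 0)
                            length += 3; /* pretend the device reported 3 more bytes */
            }
            return 0;
    }
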
+1
drivers/i2c/muxes/i2c-mux-reg.c
··· 260 260 .remove = i2c_mux_reg_remove, 261 261 .driver = { 262 262 .name = "i2c-mux-reg", 263 + .of_match_table = of_match_ptr(i2c_mux_reg_of_match), 263 264 }, 264 265 }; 265 266
+1 -1
drivers/iio/accel/st_accel_buffer.c
··· 91 91 92 92 int st_accel_allocate_ring(struct iio_dev *indio_dev) 93 93 { 94 - return iio_triggered_buffer_setup(indio_dev, &iio_pollfunc_store_time, 94 + return iio_triggered_buffer_setup(indio_dev, NULL, 95 95 &st_sensors_trigger_handler, &st_accel_buffer_setup_ops); 96 96 } 97 97
+1
drivers/iio/accel/st_accel_core.c
··· 741 741 static const struct iio_trigger_ops st_accel_trigger_ops = { 742 742 .owner = THIS_MODULE, 743 743 .set_trigger_state = ST_ACCEL_TRIGGER_SET_STATE, 744 + .validate_device = st_sensors_validate_device, 744 745 }; 745 746 #define ST_ACCEL_TRIGGER_OPS (&st_accel_trigger_ops) 746 747 #else
+7 -18
drivers/iio/common/st_sensors/st_sensors_buffer.c
··· 57 57 struct iio_poll_func *pf = p; 58 58 struct iio_dev *indio_dev = pf->indio_dev; 59 59 struct st_sensor_data *sdata = iio_priv(indio_dev); 60 + s64 timestamp; 60 61 61 - /* If we have a status register, check if this IRQ came from us */ 62 - if (sdata->sensor_settings->drdy_irq.addr_stat_drdy) { 63 - u8 status; 64 - 65 - len = sdata->tf->read_byte(&sdata->tb, sdata->dev, 66 - sdata->sensor_settings->drdy_irq.addr_stat_drdy, 67 - &status); 68 - if (len < 0) 69 - dev_err(sdata->dev, "could not read channel status\n"); 70 - 71 - /* 72 - * If this was not caused by any channels on this sensor, 73 - * return IRQ_NONE 74 - */ 75 - if (!(status & (u8)indio_dev->active_scan_mask[0])) 76 - return IRQ_NONE; 77 - } 62 + /* If we do timestamping here, do it before reading the values */ 63 + if (sdata->hw_irq_trigger) 64 + timestamp = sdata->hw_timestamp; 65 + else 66 + timestamp = iio_get_time_ns(); 78 67 79 68 len = st_sensors_get_buffer_element(indio_dev, sdata->buffer_data); 80 69 if (len < 0) 81 70 goto st_sensors_get_buffer_element_error; 82 71 83 72 iio_push_to_buffers_with_timestamp(indio_dev, sdata->buffer_data, 84 - pf->timestamp); 73 + timestamp); 85 74 86 75 st_sensors_get_buffer_element_error: 87 76 iio_trigger_notify_done(indio_dev->trig);
+8
drivers/iio/common/st_sensors/st_sensors_core.c
··· 363 363 if (err < 0) 364 364 return err; 365 365 366 + /* Disable DRDY, this might be still be enabled after reboot. */ 367 + err = st_sensors_set_dataready_irq(indio_dev, false); 368 + if (err < 0) 369 + return err; 370 + 366 371 if (sdata->current_fullscale) { 367 372 err = st_sensors_set_fullscale(indio_dev, 368 373 sdata->current_fullscale->num); ··· 428 423 drdy_mask = sdata->sensor_settings->drdy_irq.mask_int1; 429 424 else 430 425 drdy_mask = sdata->sensor_settings->drdy_irq.mask_int2; 426 + 427 + /* Flag to the poll function that the hardware trigger is in use */ 428 + sdata->hw_irq_trigger = enable; 431 429 432 430 /* Enable/Disable the interrupt generator for data ready. */ 433 431 err = st_sensors_write_data_with_mask(indio_dev,
+89 -7
drivers/iio/common/st_sensors/st_sensors_trigger.c
··· 17 17 #include <linux/iio/common/st_sensors.h> 18 18 #include "st_sensors_core.h" 19 19 20 + /** 21 + * st_sensors_irq_handler() - top half of the IRQ-based triggers 22 + * @irq: irq number 23 + * @p: private handler data 24 + */ 25 + irqreturn_t st_sensors_irq_handler(int irq, void *p) 26 + { 27 + struct iio_trigger *trig = p; 28 + struct iio_dev *indio_dev = iio_trigger_get_drvdata(trig); 29 + struct st_sensor_data *sdata = iio_priv(indio_dev); 30 + 31 + /* Get the time stamp as close in time as possible */ 32 + sdata->hw_timestamp = iio_get_time_ns(); 33 + return IRQ_WAKE_THREAD; 34 + } 35 + 36 + /** 37 + * st_sensors_irq_thread() - bottom half of the IRQ-based triggers 38 + * @irq: irq number 39 + * @p: private handler data 40 + */ 41 + irqreturn_t st_sensors_irq_thread(int irq, void *p) 42 + { 43 + struct iio_trigger *trig = p; 44 + struct iio_dev *indio_dev = iio_trigger_get_drvdata(trig); 45 + struct st_sensor_data *sdata = iio_priv(indio_dev); 46 + int ret; 47 + 48 + /* 49 + * If this trigger is backed by a hardware interrupt and we have a 50 + * status register, check if this IRQ came from us 51 + */ 52 + if (sdata->sensor_settings->drdy_irq.addr_stat_drdy) { 53 + u8 status; 54 + 55 + ret = sdata->tf->read_byte(&sdata->tb, sdata->dev, 56 + sdata->sensor_settings->drdy_irq.addr_stat_drdy, 57 + &status); 58 + if (ret < 0) { 59 + dev_err(sdata->dev, "could not read channel status\n"); 60 + goto out_poll; 61 + } 62 + /* 63 + * the lower bits of .active_scan_mask[0] is directly mapped 64 + * to the channels on the sensor: either bit 0 for 65 + * one-dimensional sensors, or e.g. x,y,z for accelerometers, 66 + * gyroscopes or magnetometers. No sensor use more than 3 67 + * channels, so cut the other status bits here. 68 + */ 69 + status &= 0x07; 70 + 71 + /* 72 + * If this was not caused by any channels on this sensor, 73 + * return IRQ_NONE 74 + */ 75 + if (!indio_dev->active_scan_mask) 76 + return IRQ_NONE; 77 + if (!(status & (u8)indio_dev->active_scan_mask[0])) 78 + return IRQ_NONE; 79 + } 80 + 81 + out_poll: 82 + /* It's our IRQ: proceed to handle the register polling */ 83 + iio_trigger_poll_chained(p); 84 + return IRQ_HANDLED; 85 + } 86 + 20 87 int st_sensors_allocate_trigger(struct iio_dev *indio_dev, 21 88 const struct iio_trigger_ops *trigger_ops) 22 89 { ··· 96 29 dev_err(&indio_dev->dev, "failed to allocate iio trigger.\n"); 97 30 return -ENOMEM; 98 31 } 32 + 33 + iio_trigger_set_drvdata(sdata->trig, indio_dev); 34 + sdata->trig->ops = trigger_ops; 35 + sdata->trig->dev.parent = sdata->dev; 99 36 100 37 irq = sdata->get_irq_data_ready(indio_dev); 101 38 irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq)); ··· 148 77 sdata->sensor_settings->drdy_irq.addr_stat_drdy) 149 78 irq_trig |= IRQF_SHARED; 150 79 151 - err = request_threaded_irq(irq, 152 - iio_trigger_generic_data_rdy_poll, 153 - NULL, 80 + /* Let's create an interrupt thread masking the hard IRQ here */ 81 + irq_trig |= IRQF_ONESHOT; 82 + 83 + err = request_threaded_irq(sdata->get_irq_data_ready(indio_dev), 84 + st_sensors_irq_handler, 85 + st_sensors_irq_thread, 154 86 irq_trig, 155 87 sdata->trig->name, 156 88 sdata->trig); ··· 161 87 dev_err(&indio_dev->dev, "failed to request trigger IRQ.\n"); 162 88 goto iio_trigger_free; 163 89 } 164 - 165 - iio_trigger_set_drvdata(sdata->trig, indio_dev); 166 - sdata->trig->ops = trigger_ops; 167 - sdata->trig->dev.parent = sdata->dev; 168 90 169 91 err = iio_trigger_register(sdata->trig); 170 92 if (err < 0) { ··· 188 118 iio_trigger_free(sdata->trig); 189 119 } 190 120 
EXPORT_SYMBOL(st_sensors_deallocate_trigger); 121 + 122 + int st_sensors_validate_device(struct iio_trigger *trig, 123 + struct iio_dev *indio_dev) 124 + { 125 + struct iio_dev *indio = iio_trigger_get_drvdata(trig); 126 + 127 + if (indio != indio_dev) 128 + return -EINVAL; 129 + 130 + return 0; 131 + } 132 + EXPORT_SYMBOL(st_sensors_validate_device); 191 133 192 134 MODULE_AUTHOR("Denis Ciocca <denis.ciocca@st.com>"); 193 135 MODULE_DESCRIPTION("STMicroelectronics ST-sensors trigger");
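
With IRQF_SHARED lines the threaded handler above has to prove the interrupt is its own: it reads the DRDY status register, keeps only the (at most three) channel bits, and intersects them with active_scan_mask[0]. A compact sketch of that ownership test:

    #include <stdint.h>
    #include <stdio.h>

    enum { IRQ_NONE, IRQ_HANDLED };

    static int check_irq(uint8_t status, const unsigned long *scan_mask)
    {
            status &= 0x07;                 /* at most x, y, z channels */
            if (!scan_mask)
                    return IRQ_NONE;        /* buffer not running */
            if (!(status & (uint8_t)scan_mask[0]))
                    return IRQ_NONE;        /* someone else's interrupt */
            return IRQ_HANDLED;             /* ours: go poll the registers */
    }

    int main(void)
    {
            unsigned long mask = 0x7;       /* x, y and z enabled */

            printf("%d %d\n", check_irq(0x01, &mask), check_irq(0x00, &mask));
            return 0;
    }
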
+1 -1
drivers/iio/dac/Kconfig
··· 247 247 248 248 config STX104 249 249 tristate "Apex Embedded Systems STX104 DAC driver" 250 - depends on X86 && ISA 250 + depends on X86 && ISA_BUS_API 251 251 help 252 252 Say yes here to build support for the 2-channel DAC on the Apex 253 253 Embedded Systems STX104 integrated analog PC/104 card. The base port
+1 -1
drivers/iio/dac/ad5592r-base.c
··· 525 525 526 526 device_for_each_child_node(st->dev, child) { 527 527 ret = fwnode_property_read_u32(child, "reg", &reg); 528 - if (ret || reg > ARRAY_SIZE(st->channel_modes)) 528 + if (ret || reg >= ARRAY_SIZE(st->channel_modes)) 529 529 continue; 530 530 531 531 ret = fwnode_property_read_u32(child, "adi,mode", &tmp);
+1 -1
drivers/iio/gyro/st_gyro_buffer.c
··· 91 91 92 92 int st_gyro_allocate_ring(struct iio_dev *indio_dev) 93 93 { 94 - return iio_triggered_buffer_setup(indio_dev, &iio_pollfunc_store_time, 94 + return iio_triggered_buffer_setup(indio_dev, NULL, 95 95 &st_sensors_trigger_handler, &st_gyro_buffer_setup_ops); 96 96 } 97 97
+1
drivers/iio/gyro/st_gyro_core.c
··· 409 409 static const struct iio_trigger_ops st_gyro_trigger_ops = { 410 410 .owner = THIS_MODULE, 411 411 .set_trigger_state = ST_GYRO_TRIGGER_SET_STATE, 412 + .validate_device = st_sensors_validate_device, 412 413 }; 413 414 #define ST_GYRO_TRIGGER_OPS (&st_gyro_trigger_ops) 414 415 #else
+1 -3
drivers/iio/humidity/am2315.c
··· 165 165 struct am2315_sensor_data sensor_data; 166 166 167 167 ret = am2315_read_data(data, &sensor_data); 168 - if (ret < 0) { 169 - mutex_unlock(&data->lock); 168 + if (ret < 0) 170 169 goto err; 171 - } 172 170 173 171 mutex_lock(&data->lock); 174 172 if (*(indio_dev->active_scan_mask) == AM2315_ALL_CHANNEL_MASK) {
+10 -10
drivers/iio/humidity/hdc100x.c
··· 55 55 }, 56 56 { /* IIO_HUMIDITYRELATIVE channel */ 57 57 .shift = 8, 58 - .mask = 2, 58 + .mask = 3, 59 59 }, 60 60 }; 61 61 ··· 164 164 dev_err(&client->dev, "cannot read high byte measurement"); 165 165 return ret; 166 166 } 167 - val = ret << 6; 167 + val = ret << 8; 168 168 169 169 ret = i2c_smbus_read_byte(client); 170 170 if (ret < 0) { 171 171 dev_err(&client->dev, "cannot read low byte measurement"); 172 172 return ret; 173 173 } 174 - val |= ret >> 2; 174 + val |= ret; 175 175 176 176 return val; 177 177 } ··· 211 211 return IIO_VAL_INT_PLUS_MICRO; 212 212 case IIO_CHAN_INFO_SCALE: 213 213 if (chan->type == IIO_TEMP) { 214 - *val = 165; 215 - *val2 = 65536 >> 2; 214 + *val = 165000; 215 + *val2 = 65536; 216 216 return IIO_VAL_FRACTIONAL; 217 217 } else { 218 - *val = 0; 219 - *val2 = 10000; 220 - return IIO_VAL_INT_PLUS_MICRO; 218 + *val = 100; 219 + *val2 = 65536; 220 + return IIO_VAL_FRACTIONAL; 221 221 } 222 222 break; 223 223 case IIO_CHAN_INFO_OFFSET: 224 - *val = -3971; 225 - *val2 = 879096; 224 + *val = -15887; 225 + *val2 = 515151; 226 226 return IIO_VAL_INT_PLUS_MICRO; 227 227 default: 228 228 return -EINVAL;
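
The corrected HDC100x constants follow the IIO convention processed = (raw + offset) * scale: temperature scale is 165000/65536 milli-degC per code with offset -15887.515151 (so raw 0 reads -40 degC), and humidity scale is 100/65536 %RH per code. A worked conversion, assuming example raw register values:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            unsigned raw_t = 0x6aaa;  /* assumed example readings */
            unsigned raw_h = 0x8000;

            /* (raw - 15887.515151) * 165000/65536 == raw * 165000/65536 - 40000 */
            long long temp_mc = (long long)raw_t * 165000 / 65536 - 40000;
            long long rh_milli = (long long)raw_h * 100000 / 65536; /* milli-%RH */

            printf("%lld.%03lld degC\n", temp_mc / 1000, llabs(temp_mc) % 1000);
            printf("%lld.%03lld %%RH\n", rh_milli / 1000, rh_milli % 1000);
            return 0;
    }

With raw_h = 0x8000 this prints exactly 50.000 %RH, as expected from a half-scale code.
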
+8 -8
drivers/iio/imu/bmi160/bmi160_core.c
··· 209 209 }; 210 210 211 211 static const struct bmi160_odr bmi160_accel_odr[] = { 212 - {0x01, 0, 78125}, 213 - {0x02, 1, 5625}, 214 - {0x03, 3, 125}, 215 - {0x04, 6, 25}, 216 - {0x05, 12, 5}, 212 + {0x01, 0, 781250}, 213 + {0x02, 1, 562500}, 214 + {0x03, 3, 125000}, 215 + {0x04, 6, 250000}, 216 + {0x05, 12, 500000}, 217 217 {0x06, 25, 0}, 218 218 {0x07, 50, 0}, 219 219 {0x08, 100, 0}, ··· 229 229 {0x08, 100, 0}, 230 230 {0x09, 200, 0}, 231 231 {0x0A, 400, 0}, 232 - {0x0B, 8000, 0}, 232 + {0x0B, 800, 0}, 233 233 {0x0C, 1600, 0}, 234 234 {0x0D, 3200, 0}, 235 235 }; ··· 364 364 365 365 return regmap_update_bits(data->regmap, 366 366 bmi160_regs[t].config, 367 - bmi160_odr_table[t].tbl[i].bits, 368 - bmi160_regs[t].config_odr_mask); 367 + bmi160_regs[t].config_odr_mask, 368 + bmi160_odr_table[t].tbl[i].bits); 369 369 } 370 370 371 371 static int bmi160_get_odr(struct bmi160_data *data, enum bmi160_sensor_type t,
+18 -5
drivers/iio/industrialio-trigger.c
··· 210 210 211 211 /* Prevent the module from being removed whilst attached to a trigger */ 212 212 __module_get(pf->indio_dev->info->driver_module); 213 + 214 + /* Get irq number */ 213 215 pf->irq = iio_trigger_get_irq(trig); 216 + if (pf->irq < 0) 217 + goto out_put_module; 218 + 219 + /* Request irq */ 214 220 ret = request_threaded_irq(pf->irq, pf->h, pf->thread, 215 221 pf->type, pf->name, 216 222 pf); 217 - if (ret < 0) { 218 - module_put(pf->indio_dev->info->driver_module); 219 - return ret; 220 - } 223 + if (ret < 0) 224 + goto out_put_irq; 221 225 226 + /* Enable trigger in driver */ 222 227 if (trig->ops && trig->ops->set_trigger_state && notinuse) { 223 228 ret = trig->ops->set_trigger_state(trig, true); 224 229 if (ret < 0) 225 - module_put(pf->indio_dev->info->driver_module); 230 + goto out_free_irq; 226 231 } 227 232 233 + return ret; 234 + 235 + out_free_irq: 236 + free_irq(pf->irq, pf); 237 + out_put_irq: 238 + iio_trigger_put_irq(trig, pf->irq); 239 + out_put_module: 240 + module_put(pf->indio_dev->info->driver_module); 228 241 return ret; 229 242 } 230 243
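
The iio_trigger_attach_poll_func() rework replaces ad-hoc module_put() calls with the kernel's standard reverse-order unwind ladder, so each acquired resource is released exactly once whichever step fails. A standalone sketch of the idiom (all names illustrative, with the last step forced to fail):

    #include <stdio.h>

    static int get_module(void) { return 0; }
    static int get_irq(void)    { return 5; }       /* <0 would mean failure */
    static int request_irq_(int irq) { (void)irq; return -1; }
    static void put_irq(int irq) { printf("irq %d released\n", irq); }
    static void put_module(void) { puts("module released"); }

    static int attach(void)
    {
            int irq, ret;

            ret = get_module();
            if (ret < 0)
                    return ret;

            irq = get_irq();
            if (irq < 0) {
                    ret = irq;
                    goto out_put_module;
            }

            ret = request_irq_(irq);
            if (ret < 0)
                    goto out_put_irq;

            return 0;

    out_put_irq:
            put_irq(irq);
    out_put_module:
            put_module();
            return ret;
    }

    int main(void)
    {
            printf("attach: %d\n", attach());
            return 0;
    }
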
+1
drivers/iio/light/apds9960.c
··· 1011 1011 1012 1012 iio_device_attach_buffer(indio_dev, buffer); 1013 1013 1014 + indio_dev->dev.parent = &client->dev; 1014 1015 indio_dev->info = &apds9960_info; 1015 1016 indio_dev->name = APDS9960_DRV_NAME; 1016 1017 indio_dev->channels = apds9960_channels;
+6 -4
drivers/iio/light/bh1780.c
··· 84 84 int ret; 85 85 86 86 if (!readval) 87 - bh1780_write(bh1780, (u8)reg, (u8)writeval); 87 + return bh1780_write(bh1780, (u8)reg, (u8)writeval); 88 88 89 89 ret = bh1780_read(bh1780, (u8)reg); 90 90 if (ret < 0) ··· 187 187 188 188 indio_dev->dev.parent = &client->dev; 189 189 indio_dev->info = &bh1780_info; 190 - indio_dev->name = id->name; 190 + indio_dev->name = "bh1780"; 191 191 indio_dev->channels = bh1780_channels; 192 192 indio_dev->num_channels = ARRAY_SIZE(bh1780_channels); 193 193 indio_dev->modes = INDIO_DIRECT_MODE; ··· 226 226 static int bh1780_runtime_suspend(struct device *dev) 227 227 { 228 228 struct i2c_client *client = to_i2c_client(dev); 229 - struct bh1780_data *bh1780 = i2c_get_clientdata(client); 229 + struct iio_dev *indio_dev = i2c_get_clientdata(client); 230 + struct bh1780_data *bh1780 = iio_priv(indio_dev); 230 231 int ret; 231 232 232 233 ret = bh1780_write(bh1780, BH1780_REG_CONTROL, BH1780_POFF); ··· 242 241 static int bh1780_runtime_resume(struct device *dev) 243 242 { 244 243 struct i2c_client *client = to_i2c_client(dev); 245 - struct bh1780_data *bh1780 = i2c_get_clientdata(client); 244 + struct iio_dev *indio_dev = i2c_get_clientdata(client); 245 + struct bh1780_data *bh1780 = iio_priv(indio_dev); 246 246 int ret; 247 247 248 248 ret = bh1780_write(bh1780, BH1780_REG_CONTROL, BH1780_PON);
-1
drivers/iio/light/max44000.c
··· 147 147 { 148 148 .type = IIO_PROXIMITY, 149 149 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 150 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), 151 150 .scan_index = MAX44000_SCAN_INDEX_PRX, 152 151 .scan_type = { 153 152 .sign = 'u',
+1 -1
drivers/iio/magnetometer/st_magn_buffer.c
··· 82 82 83 83 int st_magn_allocate_ring(struct iio_dev *indio_dev) 84 84 { 85 - return iio_triggered_buffer_setup(indio_dev, &iio_pollfunc_store_time, 85 + return iio_triggered_buffer_setup(indio_dev, NULL, 86 86 &st_sensors_trigger_handler, &st_magn_buffer_setup_ops); 87 87 } 88 88
+1
drivers/iio/magnetometer/st_magn_core.c
··· 572 572 static const struct iio_trigger_ops st_magn_trigger_ops = { 573 573 .owner = THIS_MODULE, 574 574 .set_trigger_state = ST_MAGN_TRIGGER_SET_STATE, 575 + .validate_device = st_sensors_validate_device, 575 576 }; 576 577 #define ST_MAGN_TRIGGER_OPS (&st_magn_trigger_ops) 577 578 #else
+2 -2
drivers/iio/pressure/bmp280.c
··· 879 879 if (ret < 0) 880 880 return ret; 881 881 if (chip_id != id->driver_data) { 882 - dev_err(&client->dev, "bad chip id. expected %x got %x\n", 883 - BMP280_CHIP_ID, chip_id); 882 + dev_err(&client->dev, "bad chip id. expected %lx got %x\n", 883 + id->driver_data, chip_id); 884 884 return -EINVAL; 885 885 } 886 886
+1 -1
drivers/iio/pressure/st_pressure_buffer.c
··· 82 82 83 83 int st_press_allocate_ring(struct iio_dev *indio_dev) 84 84 { 85 - return iio_triggered_buffer_setup(indio_dev, &iio_pollfunc_store_time, 85 + return iio_triggered_buffer_setup(indio_dev, NULL, 86 86 &st_sensors_trigger_handler, &st_press_buffer_setup_ops); 87 87 } 88 88
+51 -30
drivers/iio/pressure/st_pressure_core.c
··· 28 28 #include <linux/iio/common/st_sensors.h> 29 29 #include "st_pressure.h" 30 30 31 + #define MCELSIUS_PER_CELSIUS 1000 32 + 33 + /* Default pressure sensitivity */ 31 34 #define ST_PRESS_LSB_PER_MBAR 4096UL 32 35 #define ST_PRESS_KPASCAL_NANO_SCALE (100000000UL / \ 33 36 ST_PRESS_LSB_PER_MBAR) 37 + 38 + /* Default temperature sensitivity */ 34 39 #define ST_PRESS_LSB_PER_CELSIUS 480UL 35 - #define ST_PRESS_CELSIUS_NANO_SCALE (1000000000UL / \ 36 - ST_PRESS_LSB_PER_CELSIUS) 40 + #define ST_PRESS_MILLI_CELSIUS_OFFSET 42500UL 41 + 37 42 #define ST_PRESS_NUMBER_DATA_CHANNELS 1 38 43 39 44 /* FULLSCALE */ 45 + #define ST_PRESS_FS_AVL_1100MB 1100 40 46 #define ST_PRESS_FS_AVL_1260MB 1260 41 47 42 48 #define ST_PRESS_1_OUT_XL_ADDR 0x28 ··· 60 54 #define ST_PRESS_LPS331AP_PW_MASK 0x80 61 55 #define ST_PRESS_LPS331AP_FS_ADDR 0x23 62 56 #define ST_PRESS_LPS331AP_FS_MASK 0x30 63 - #define ST_PRESS_LPS331AP_FS_AVL_1260_VAL 0x00 64 - #define ST_PRESS_LPS331AP_FS_AVL_1260_GAIN ST_PRESS_KPASCAL_NANO_SCALE 65 - #define ST_PRESS_LPS331AP_FS_AVL_TEMP_GAIN ST_PRESS_CELSIUS_NANO_SCALE 66 57 #define ST_PRESS_LPS331AP_BDU_ADDR 0x20 67 58 #define ST_PRESS_LPS331AP_BDU_MASK 0x04 68 59 #define ST_PRESS_LPS331AP_DRDY_IRQ_ADDR 0x22 ··· 70 67 #define ST_PRESS_LPS331AP_OD_IRQ_ADDR 0x22 71 68 #define ST_PRESS_LPS331AP_OD_IRQ_MASK 0x40 72 69 #define ST_PRESS_LPS331AP_MULTIREAD_BIT true 73 - #define ST_PRESS_LPS331AP_TEMP_OFFSET 42500 74 70 75 71 /* CUSTOM VALUES FOR LPS001WP SENSOR */ 72 + 73 + /* LPS001WP pressure resolution */ 74 + #define ST_PRESS_LPS001WP_LSB_PER_MBAR 16UL 75 + /* LPS001WP temperature resolution */ 76 + #define ST_PRESS_LPS001WP_LSB_PER_CELSIUS 64UL 77 + 76 78 #define ST_PRESS_LPS001WP_WAI_EXP 0xba 77 79 #define ST_PRESS_LPS001WP_ODR_ADDR 0x20 78 80 #define ST_PRESS_LPS001WP_ODR_MASK 0x30 ··· 86 78 #define ST_PRESS_LPS001WP_ODR_AVL_13HZ_VAL 0x03 87 79 #define ST_PRESS_LPS001WP_PW_ADDR 0x20 88 80 #define ST_PRESS_LPS001WP_PW_MASK 0x40 81 + #define ST_PRESS_LPS001WP_FS_AVL_PRESS_GAIN \ 82 + (100000000UL / ST_PRESS_LPS001WP_LSB_PER_MBAR) 89 83 #define ST_PRESS_LPS001WP_BDU_ADDR 0x20 90 84 #define ST_PRESS_LPS001WP_BDU_MASK 0x04 91 85 #define ST_PRESS_LPS001WP_MULTIREAD_BIT true ··· 104 94 #define ST_PRESS_LPS25H_ODR_AVL_25HZ_VAL 0x04 105 95 #define ST_PRESS_LPS25H_PW_ADDR 0x20 106 96 #define ST_PRESS_LPS25H_PW_MASK 0x80 107 - #define ST_PRESS_LPS25H_FS_ADDR 0x00 108 - #define ST_PRESS_LPS25H_FS_MASK 0x00 109 - #define ST_PRESS_LPS25H_FS_AVL_1260_VAL 0x00 110 - #define ST_PRESS_LPS25H_FS_AVL_1260_GAIN ST_PRESS_KPASCAL_NANO_SCALE 111 - #define ST_PRESS_LPS25H_FS_AVL_TEMP_GAIN ST_PRESS_CELSIUS_NANO_SCALE 112 97 #define ST_PRESS_LPS25H_BDU_ADDR 0x20 113 98 #define ST_PRESS_LPS25H_BDU_MASK 0x04 114 99 #define ST_PRESS_LPS25H_DRDY_IRQ_ADDR 0x23 ··· 114 109 #define ST_PRESS_LPS25H_OD_IRQ_ADDR 0x22 115 110 #define ST_PRESS_LPS25H_OD_IRQ_MASK 0x40 116 111 #define ST_PRESS_LPS25H_MULTIREAD_BIT true 117 - #define ST_PRESS_LPS25H_TEMP_OFFSET 42500 118 112 #define ST_PRESS_LPS25H_OUT_XL_ADDR 0x28 119 113 #define ST_TEMP_LPS25H_OUT_L_ADDR 0x2b 120 114 ··· 165 161 .storagebits = 16, 166 162 .endianness = IIO_LE, 167 163 }, 168 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 164 + .info_mask_separate = 165 + BIT(IIO_CHAN_INFO_RAW) | 166 + BIT(IIO_CHAN_INFO_SCALE), 169 167 .modified = 0, 170 168 }, 171 169 { ··· 183 177 }, 184 178 .info_mask_separate = 185 179 BIT(IIO_CHAN_INFO_RAW) | 186 - BIT(IIO_CHAN_INFO_OFFSET), 180 + BIT(IIO_CHAN_INFO_SCALE), 187 181 .modified = 0, 188 182 }, 189 183 
IIO_CHAN_SOFT_TIMESTAMP(1) ··· 218 212 .addr = ST_PRESS_LPS331AP_FS_ADDR, 219 213 .mask = ST_PRESS_LPS331AP_FS_MASK, 220 214 .fs_avl = { 215 + /* 216 + * Pressure and temperature sensitivity values 217 + * as defined in table 3 of LPS331AP datasheet. 218 + */ 221 219 [0] = { 222 220 .num = ST_PRESS_FS_AVL_1260MB, 223 - .value = ST_PRESS_LPS331AP_FS_AVL_1260_VAL, 224 - .gain = ST_PRESS_LPS331AP_FS_AVL_1260_GAIN, 225 - .gain2 = ST_PRESS_LPS331AP_FS_AVL_TEMP_GAIN, 221 + .gain = ST_PRESS_KPASCAL_NANO_SCALE, 222 + .gain2 = ST_PRESS_LSB_PER_CELSIUS, 226 223 }, 227 224 }, 228 225 }, ··· 270 261 .value_off = ST_SENSORS_DEFAULT_POWER_OFF_VALUE, 271 262 }, 272 263 .fs = { 273 - .addr = 0, 264 + .fs_avl = { 265 + /* 266 + * Pressure and temperature resolution values 267 + * as defined in table 3 of LPS001WP datasheet. 268 + */ 269 + [0] = { 270 + .num = ST_PRESS_FS_AVL_1100MB, 271 + .gain = ST_PRESS_LPS001WP_FS_AVL_PRESS_GAIN, 272 + .gain2 = ST_PRESS_LPS001WP_LSB_PER_CELSIUS, 273 + }, 274 + }, 274 275 }, 275 276 .bdu = { 276 277 .addr = ST_PRESS_LPS001WP_BDU_ADDR, ··· 317 298 .value_off = ST_SENSORS_DEFAULT_POWER_OFF_VALUE, 318 299 }, 319 300 .fs = { 320 - .addr = ST_PRESS_LPS25H_FS_ADDR, 321 - .mask = ST_PRESS_LPS25H_FS_MASK, 322 301 .fs_avl = { 302 + /* 303 + * Pressure and temperature sensitivity values 304 + * as defined in table 3 of LPS25H datasheet. 305 + */ 323 306 [0] = { 324 307 .num = ST_PRESS_FS_AVL_1260MB, 325 - .value = ST_PRESS_LPS25H_FS_AVL_1260_VAL, 326 - .gain = ST_PRESS_LPS25H_FS_AVL_1260_GAIN, 327 - .gain2 = ST_PRESS_LPS25H_FS_AVL_TEMP_GAIN, 308 + .gain = ST_PRESS_KPASCAL_NANO_SCALE, 309 + .gain2 = ST_PRESS_LSB_PER_CELSIUS, 328 310 }, 329 311 }, 330 312 }, ··· 384 364 385 365 return IIO_VAL_INT; 386 366 case IIO_CHAN_INFO_SCALE: 387 - *val = 0; 388 - 389 367 switch (ch->type) { 390 368 case IIO_PRESSURE: 369 + *val = 0; 391 370 *val2 = press_data->current_fullscale->gain; 392 - break; 371 + return IIO_VAL_INT_PLUS_NANO; 393 372 case IIO_TEMP: 373 + *val = MCELSIUS_PER_CELSIUS; 394 374 *val2 = press_data->current_fullscale->gain2; 395 - break; 375 + return IIO_VAL_FRACTIONAL; 396 376 default: 397 377 err = -EINVAL; 398 378 goto read_error; 399 379 } 400 380 401 - return IIO_VAL_INT_PLUS_NANO; 402 381 case IIO_CHAN_INFO_OFFSET: 403 382 switch (ch->type) { 404 383 case IIO_TEMP: 405 - *val = 425; 406 - *val2 = 10; 384 + *val = ST_PRESS_MILLI_CELSIUS_OFFSET * 385 + press_data->current_fullscale->gain2; 386 + *val2 = MCELSIUS_PER_CELSIUS; 407 387 break; 408 388 default: 409 389 err = -EINVAL; ··· 445 425 static const struct iio_trigger_ops st_press_trigger_ops = { 446 426 .owner = THIS_MODULE, 447 427 .set_trigger_state = ST_PRESS_TRIGGER_SET_STATE, 428 + .validate_device = st_sensors_validate_device, 448 429 }; 449 430 #define ST_PRESS_TRIGGER_OPS (&st_press_trigger_ops) 450 431 #else
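
The pressure-core rework expresses the temperature conversion as an IIO fractional scale plus a scaled offset instead of the hand-rolled 425/10 value. For LPS331AP/LPS25H (480 LSB per degC, 42.5 degC offset) the numbers work out as below; the raw sample is an assumed example:

    #include <stdio.h>

    int main(void)
    {
            short raw = 960;                     /* example 2's-complement sample */
            long offset = 42500L * 480 / 1000;   /* 20400 LSB, as in the new code */
            long temp_mc = (raw + offset) * 1000 / 480;

            /* 42.5 degC + 960/480 degC = 44.5 degC */
            printf("%ld.%03ld degC\n", temp_mc / 1000, temp_mc % 1000);
            return 0;
    }
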
+12 -5
drivers/iio/proximity/as3935.c
··· 64 64 struct delayed_work work; 65 65 66 66 u32 tune_cap; 67 + u8 buffer[16]; /* 8-bit data + 56-bit padding + 64-bit timestamp */ 67 68 u8 buf[2] ____cacheline_aligned; 68 69 }; 69 70 ··· 73 72 .type = IIO_PROXIMITY, 74 73 .info_mask_separate = 75 74 BIT(IIO_CHAN_INFO_RAW) | 76 - BIT(IIO_CHAN_INFO_PROCESSED), 75 + BIT(IIO_CHAN_INFO_PROCESSED) | 76 + BIT(IIO_CHAN_INFO_SCALE), 77 77 .scan_index = 0, 78 78 .scan_type = { 79 79 .sign = 'u', ··· 183 181 /* storm out of range */ 184 182 if (*val == AS3935_DATA_MASK) 185 183 return -EINVAL; 186 - *val *= 1000; 184 + 185 + if (m == IIO_CHAN_INFO_PROCESSED) 186 + *val *= 1000; 187 + break; 188 + case IIO_CHAN_INFO_SCALE: 189 + *val = 1000; 187 190 break; 188 191 default: 189 192 return -EINVAL; ··· 213 206 ret = as3935_read(st, AS3935_DATA, &val); 214 207 if (ret) 215 208 goto err_read; 216 - val &= AS3935_DATA_MASK; 217 - val *= 1000; 218 209 219 - iio_push_to_buffers_with_timestamp(indio_dev, &val, pf->timestamp); 210 + st->buffer[0] = val & AS3935_DATA_MASK; 211 + iio_push_to_buffers_with_timestamp(indio_dev, &st->buffer, 212 + pf->timestamp); 220 213 err_read: 221 214 iio_trigger_notify_done(indio_dev->trig); 222 215
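The 16-byte st->buffer exists because iio_push_to_buffers_with_timestamp() stores a 64-bit timestamp at an 8-byte-aligned offset after the scan data; the old code pushed a bare local u32 and wrote past its storage. A sketch of the layout the helper assumes for this one-channel device (the struct and field names are illustrative, not from the driver):

	struct as3935_scan {
		u8 prox;	/* one 8-bit distance estimation sample */
		u8 pad[7];	/* padding so the timestamp is 8-byte aligned */
		s64 timestamp;	/* filled in by the push helper */
	};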
+1
drivers/iommu/arm-smmu-v3.c
··· 1941 1941 .attach_dev = arm_smmu_attach_dev, 1942 1942 .map = arm_smmu_map, 1943 1943 .unmap = arm_smmu_unmap, 1944 + .map_sg = default_iommu_map_sg, 1944 1945 .iova_to_phys = arm_smmu_iova_to_phys, 1945 1946 .add_device = arm_smmu_add_device, 1946 1947 .remove_device = arm_smmu_remove_device,
+12 -5
drivers/iommu/intel-iommu.c
··· 3222 3222 } 3223 3223 } 3224 3224 3225 - iommu_flush_write_buffer(iommu); 3226 - iommu_set_root_entry(iommu); 3227 - iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL); 3228 - iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH); 3229 - 3230 3225 if (!ecap_pass_through(iommu->ecap)) 3231 3226 hw_pass_through = 0; 3232 3227 #ifdef CONFIG_INTEL_IOMMU_SVM 3233 3228 if (pasid_enabled(iommu)) 3234 3229 intel_svm_alloc_pasid_tables(iommu); 3235 3230 #endif 3231 + } 3232 + 3233 + /* 3234 + * Now that qi is enabled on all iommus, set the root entry and flush 3235 + * caches. This is required on some Intel X58 chipsets, otherwise the 3236 + * flush_context function will loop forever and the boot hangs. 3237 + */ 3238 + for_each_active_iommu(iommu, drhd) { 3239 + iommu_flush_write_buffer(iommu); 3240 + iommu_set_root_entry(iommu); 3241 + iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL); 3242 + iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH); 3236 3243 } 3237 3244 3238 3245 if (iommu_pass_through)
+1 -1
drivers/iommu/rockchip-iommu.c
··· 815 815 dte_addr = virt_to_phys(rk_domain->dt); 816 816 for (i = 0; i < iommu->num_mmu; i++) { 817 817 rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, dte_addr); 818 - rk_iommu_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE); 818 + rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE); 819 819 rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, RK_MMU_IRQ_MASK); 820 820 } 821 821
+6 -3
drivers/leds/led-core.c
··· 53 53 54 54 if (!led_cdev->blink_delay_on || !led_cdev->blink_delay_off) { 55 55 led_set_brightness_nosleep(led_cdev, LED_OFF); 56 + led_cdev->flags &= ~LED_BLINK_SW; 56 57 return; 57 58 } 58 59 59 60 if (led_cdev->flags & LED_BLINK_ONESHOT_STOP) { 60 - led_cdev->flags &= ~LED_BLINK_ONESHOT_STOP; 61 + led_cdev->flags &= ~(LED_BLINK_ONESHOT_STOP | LED_BLINK_SW); 61 62 return; 62 63 } 63 64 ··· 152 151 return; 153 152 } 154 153 154 + led_cdev->flags |= LED_BLINK_SW; 155 155 mod_timer(&led_cdev->blink_timer, jiffies + 1); 156 156 } 157 157 ··· 221 219 del_timer_sync(&led_cdev->blink_timer); 222 220 led_cdev->blink_delay_on = 0; 223 221 led_cdev->blink_delay_off = 0; 222 + led_cdev->flags &= ~LED_BLINK_SW; 224 223 } 225 224 EXPORT_SYMBOL_GPL(led_stop_software_blink); 226 225 ··· 229 226 enum led_brightness brightness) 230 227 { 231 228 /* 232 - * In case blinking is on delay brightness setting 229 + * If software blink is active, delay brightness setting 233 230 * until the next timer tick. 234 231 */ 235 - if (led_cdev->blink_delay_on || led_cdev->blink_delay_off) { 232 + if (led_cdev->flags & LED_BLINK_SW) { 236 233 /* 237 234 * If we need to disable soft blinking delegate this to the 238 235 * work queue task to avoid problems in case we are called
+31
drivers/leds/trigger/ledtrig-heartbeat.c
··· 19 19 #include <linux/sched.h> 20 20 #include <linux/leds.h> 21 21 #include <linux/reboot.h> 22 + #include <linux/suspend.h> 22 23 #include "../leds.h" 23 24 24 25 static int panic_heartbeats; ··· 155 154 .deactivate = heartbeat_trig_deactivate, 156 155 }; 157 156 157 + static int heartbeat_pm_notifier(struct notifier_block *nb, 158 + unsigned long pm_event, void *unused) 159 + { 160 + int rc; 161 + 162 + switch (pm_event) { 163 + case PM_SUSPEND_PREPARE: 164 + case PM_HIBERNATION_PREPARE: 165 + case PM_RESTORE_PREPARE: 166 + led_trigger_unregister(&heartbeat_led_trigger); 167 + break; 168 + case PM_POST_SUSPEND: 169 + case PM_POST_HIBERNATION: 170 + case PM_POST_RESTORE: 171 + rc = led_trigger_register(&heartbeat_led_trigger); 172 + if (rc) 173 + pr_err("could not re-register heartbeat trigger\n"); 174 + break; 175 + default: 176 + break; 177 + } 178 + return NOTIFY_DONE; 179 + } 180 + 158 181 static int heartbeat_reboot_notifier(struct notifier_block *nb, 159 182 unsigned long code, void *unused) 160 183 { ··· 192 167 panic_heartbeats = 1; 193 168 return NOTIFY_DONE; 194 169 } 170 + 171 + static struct notifier_block heartbeat_pm_nb = { 172 + .notifier_call = heartbeat_pm_notifier, 173 + }; 195 174 196 175 static struct notifier_block heartbeat_reboot_nb = { 197 176 .notifier_call = heartbeat_reboot_notifier, ··· 213 184 atomic_notifier_chain_register(&panic_notifier_list, 214 185 &heartbeat_panic_nb); 215 186 register_reboot_notifier(&heartbeat_reboot_nb); 187 + register_pm_notifier(&heartbeat_pm_nb); 216 188 } 217 189 return rc; 218 190 } 219 191 220 192 static void __exit heartbeat_trig_exit(void) 221 193 { 194 + unregister_pm_notifier(&heartbeat_pm_nb); 222 195 unregister_reboot_notifier(&heartbeat_reboot_nb); 223 196 atomic_notifier_chain_unregister(&panic_notifier_list, 224 197 &heartbeat_panic_nb);
+16 -1
drivers/mcb/mcb-core.c
··· 61 61 struct mcb_driver *mdrv = to_mcb_driver(dev->driver); 62 62 struct mcb_device *mdev = to_mcb_device(dev); 63 63 const struct mcb_device_id *found_id; 64 + struct module *carrier_mod; 65 + int ret; 64 66 65 67 found_id = mcb_match_id(mdrv->id_table, mdev); 66 68 if (!found_id) 67 69 return -ENODEV; 68 70 69 - return mdrv->probe(mdev, found_id); 71 + carrier_mod = mdev->dev.parent->driver->owner; 72 + if (!try_module_get(carrier_mod)) 73 + return -EINVAL; 74 + 75 + get_device(dev); 76 + ret = mdrv->probe(mdev, found_id); 77 + if (ret) 78 + module_put(carrier_mod); 79 + 80 + return ret; 70 81 } 71 82 72 83 static int mcb_remove(struct device *dev) 73 84 { 74 85 struct mcb_driver *mdrv = to_mcb_driver(dev->driver); 75 86 struct mcb_device *mdev = to_mcb_device(dev); 87 + struct module *carrier_mod; 76 88 77 89 mdrv->remove(mdev); 90 + 91 + carrier_mod = mdev->dev.parent->driver->owner; 92 + module_put(carrier_mod); 78 93 79 94 put_device(&mdev->dev); 80 95
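The probe/remove pairing above is the usual pattern for pinning a carrier (parent bus) driver while a client device is bound: take a module reference before calling the client's probe, then drop it on probe failure or at remove. Reduced to a hedged skeleton (do_bind() is a placeholder for the driver-specific work):

	static int example_probe(struct device *dev)
	{
		struct module *carrier = dev->parent->driver->owner;
		int ret;

		if (!try_module_get(carrier))	/* carrier is unloading */
			return -ENODEV;

		ret = do_bind(dev);		/* driver-specific setup */
		if (ret)
			module_put(carrier);	/* keep get/put balanced */
		return ret;
	}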
+20 -77
drivers/media/usb/uvc/uvc_v4l2.c
··· 1274 1274 static int uvc_v4l2_get_xu_mapping(struct uvc_xu_control_mapping *kp, 1275 1275 const struct uvc_xu_control_mapping32 __user *up) 1276 1276 { 1277 - struct uvc_menu_info __user *umenus; 1278 - struct uvc_menu_info __user *kmenus; 1279 1277 compat_caddr_t p; 1280 1278 1281 1279 if (!access_ok(VERIFY_READ, up, sizeof(*up)) || ··· 1290 1292 1291 1293 if (__get_user(p, &up->menu_info)) 1292 1294 return -EFAULT; 1293 - umenus = compat_ptr(p); 1294 - if (!access_ok(VERIFY_READ, umenus, kp->menu_count * sizeof(*umenus))) 1295 - return -EFAULT; 1296 - 1297 - kmenus = compat_alloc_user_space(kp->menu_count * sizeof(*kmenus)); 1298 - if (kmenus == NULL) 1299 - return -EFAULT; 1300 - kp->menu_info = kmenus; 1301 - 1302 - if (copy_in_user(kmenus, umenus, kp->menu_count * sizeof(*umenus))) 1303 - return -EFAULT; 1295 + kp->menu_info = compat_ptr(p); 1304 1296 1305 1297 return 0; 1306 1298 } ··· 1298 1310 static int uvc_v4l2_put_xu_mapping(const struct uvc_xu_control_mapping *kp, 1299 1311 struct uvc_xu_control_mapping32 __user *up) 1300 1312 { 1301 - struct uvc_menu_info __user *umenus; 1302 - struct uvc_menu_info __user *kmenus = kp->menu_info; 1303 - compat_caddr_t p; 1304 - 1305 1313 if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) || 1306 1314 __copy_to_user(up, kp, offsetof(typeof(*up), menu_info)) || 1307 1315 __put_user(kp->menu_count, &up->menu_count)) 1308 1316 return -EFAULT; 1309 1317 1310 1318 if (__clear_user(up->reserved, sizeof(up->reserved))) 1311 - return -EFAULT; 1312 - 1313 - if (kp->menu_count == 0) 1314 - return 0; 1315 - 1316 - if (get_user(p, &up->menu_info)) 1317 - return -EFAULT; 1318 - umenus = compat_ptr(p); 1319 - 1320 - if (copy_in_user(umenus, kmenus, kp->menu_count * sizeof(*umenus))) 1321 1319 return -EFAULT; 1322 1320 1323 1321 return 0; ··· 1320 1346 static int uvc_v4l2_get_xu_query(struct uvc_xu_control_query *kp, 1321 1347 const struct uvc_xu_control_query32 __user *up) 1322 1348 { 1323 - u8 __user *udata; 1324 - u8 __user *kdata; 1325 1349 compat_caddr_t p; 1326 1350 1327 1351 if (!access_ok(VERIFY_READ, up, sizeof(*up)) || ··· 1333 1361 1334 1362 if (__get_user(p, &up->data)) 1335 1363 return -EFAULT; 1336 - udata = compat_ptr(p); 1337 - if (!access_ok(VERIFY_READ, udata, kp->size)) 1338 - return -EFAULT; 1339 - 1340 - kdata = compat_alloc_user_space(kp->size); 1341 - if (kdata == NULL) 1342 - return -EFAULT; 1343 - kp->data = kdata; 1344 - 1345 - if (copy_in_user(kdata, udata, kp->size)) 1346 - return -EFAULT; 1364 + kp->data = compat_ptr(p); 1347 1365 1348 1366 return 0; 1349 1367 } ··· 1341 1379 static int uvc_v4l2_put_xu_query(const struct uvc_xu_control_query *kp, 1342 1380 struct uvc_xu_control_query32 __user *up) 1343 1381 { 1344 - u8 __user *udata; 1345 - u8 __user *kdata = kp->data; 1346 - compat_caddr_t p; 1347 - 1348 1382 if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) || 1349 1383 __copy_to_user(up, kp, offsetof(typeof(*up), data))) 1350 - return -EFAULT; 1351 - 1352 - if (kp->size == 0) 1353 - return 0; 1354 - 1355 - if (get_user(p, &up->data)) 1356 - return -EFAULT; 1357 - udata = compat_ptr(p); 1358 - if (!access_ok(VERIFY_READ, udata, kp->size)) 1359 - return -EFAULT; 1360 - 1361 - if (copy_in_user(udata, kdata, kp->size)) 1362 1384 return -EFAULT; 1363 1385 1364 1386 return 0; ··· 1354 1408 static long uvc_v4l2_compat_ioctl32(struct file *file, 1355 1409 unsigned int cmd, unsigned long arg) 1356 1410 { 1411 + struct uvc_fh *handle = file->private_data; 1357 1412 union { 1358 1413 struct uvc_xu_control_mapping xmap; 1359 1414 struct 
uvc_xu_control_query xqry; 1360 1415 } karg; 1361 1416 void __user *up = compat_ptr(arg); 1362 - mm_segment_t old_fs; 1363 1417 long ret; 1364 1418 1365 1419 switch (cmd) { 1366 1420 case UVCIOC_CTRL_MAP32: 1367 - cmd = UVCIOC_CTRL_MAP; 1368 1421 ret = uvc_v4l2_get_xu_mapping(&karg.xmap, up); 1422 + if (ret) 1423 + return ret; 1424 + ret = uvc_ioctl_ctrl_map(handle->chain, &karg.xmap); 1425 + if (ret) 1426 + return ret; 1427 + ret = uvc_v4l2_put_xu_mapping(&karg.xmap, up); 1428 + if (ret) 1429 + return ret; 1430 + 1369 1431 break; 1370 1432 1371 1433 case UVCIOC_CTRL_QUERY32: 1372 - cmd = UVCIOC_CTRL_QUERY; 1373 1434 ret = uvc_v4l2_get_xu_query(&karg.xqry, up); 1435 + if (ret) 1436 + return ret; 1437 + ret = uvc_xu_ctrl_query(handle->chain, &karg.xqry); 1438 + if (ret) 1439 + return ret; 1440 + ret = uvc_v4l2_put_xu_query(&karg.xqry, up); 1441 + if (ret) 1442 + return ret; 1374 1443 break; 1375 1444 1376 1445 default: 1377 1446 return -ENOIOCTLCMD; 1378 - } 1379 - 1380 - old_fs = get_fs(); 1381 - set_fs(KERNEL_DS); 1382 - ret = video_ioctl2(file, cmd, (unsigned long)&karg); 1383 - set_fs(old_fs); 1384 - 1385 - if (ret < 0) 1386 - return ret; 1387 - 1388 - switch (cmd) { 1389 - case UVCIOC_CTRL_MAP: 1390 - ret = uvc_v4l2_put_xu_mapping(&karg.xmap, up); 1391 - break; 1392 - 1393 - case UVCIOC_CTRL_QUERY: 1394 - ret = uvc_v4l2_put_xu_query(&karg.xqry, up); 1395 - break; 1396 1447 } 1397 1448 1398 1449 return ret;
+1 -1
drivers/media/v4l2-core/v4l2-mc.c
··· 1 1 /* 2 2 * Media Controller ancillary functions 3 3 * 4 - * Copyright (c) 2016 Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4 + * Copyright (c) 2016 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 * Copyright (C) 2016 Shuah Khan <shuahkh@osg.samsung.com> 6 6 * Copyright (C) 2006-2010 Nokia Corporation 7 7 * Copyright (c) 2016 Intel Corporation.
+1 -1
drivers/memory/omap-gpmc.c
··· 398 398 gpmc_cs_modify_reg(cs, GPMC_CS_CONFIG4, 399 399 GPMC_CONFIG4_OEEXTRADELAY, p->oe_extra_delay); 400 400 gpmc_cs_modify_reg(cs, GPMC_CS_CONFIG4, 401 - GPMC_CONFIG4_OEEXTRADELAY, p->we_extra_delay); 401 + GPMC_CONFIG4_WEEXTRADELAY, p->we_extra_delay); 402 402 gpmc_cs_modify_reg(cs, GPMC_CS_CONFIG6, 403 403 GPMC_CONFIG6_CYCLE2CYCLESAMECSEN, 404 404 p->cycle2cyclesamecsen);
+1 -1
drivers/misc/mei/client.c
··· 730 730 /* synchronized under device mutex */ 731 731 if (waitqueue_active(&cl->wait)) { 732 732 cl_dbg(dev, cl, "Waking up ctrl write clients!\n"); 733 - wake_up_interruptible(&cl->wait); 733 + wake_up(&cl->wait); 734 734 } 735 735 } 736 736
+9 -2
drivers/mtd/ubi/build.c
··· 1147 1147 */ 1148 1148 static struct mtd_info * __init open_mtd_by_chdev(const char *mtd_dev) 1149 1149 { 1150 - struct kstat stat; 1151 1150 int err, minor; 1151 + struct path path; 1152 + struct kstat stat; 1152 1153 1153 1154 /* Probably this is an MTD character device node path */ 1154 - err = vfs_stat(mtd_dev, &stat); 1155 + err = kern_path(mtd_dev, LOOKUP_FOLLOW, &path); 1156 + if (err) 1157 + return ERR_PTR(err); 1158 + 1159 + err = vfs_getattr(&path, &stat); 1160 + path_put(&path); 1155 1161 if (err) 1156 1162 return ERR_PTR(err); 1157 1163 ··· 1166 1160 return ERR_PTR(-EINVAL); 1167 1161 1168 1162 minor = MINOR(stat.rdev); 1163 + 1169 1164 if (minor & 1) 1170 1165 /* 1171 1166 * Just do not think the "/dev/mtdrX" devices support is need,
+7 -1
drivers/mtd/ubi/kapi.c
··· 302 302 struct ubi_volume_desc *ubi_open_volume_path(const char *pathname, int mode) 303 303 { 304 304 int error, ubi_num, vol_id; 305 + struct path path; 305 306 struct kstat stat; 306 307 307 308 dbg_gen("open volume %s, mode %d", pathname, mode); ··· 310 309 if (!pathname || !*pathname) 311 310 return ERR_PTR(-EINVAL); 312 311 313 - error = vfs_stat(pathname, &stat); 312 + error = kern_path(pathname, LOOKUP_FOLLOW, &path); 313 + if (error) 314 + return ERR_PTR(error); 315 + 316 + error = vfs_getattr(&path, &stat); 317 + path_put(&path); 314 318 if (error) 315 319 return ERR_PTR(error); 316 320
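Both UBI call sites move away from vfs_stat() for the same reason: vfs_stat() takes a user-space pathname, since it backs the stat() syscall, while these strings live in kernel memory. The replacement pattern, isolated as a sketch (the helper name is illustrative):

	#include <linux/namei.h>
	#include <linux/fs.h>
	#include <linux/stat.h>

	static int stat_kernel_path(const char *name, struct kstat *stat)
	{
		struct path path;
		int err;

		err = kern_path(name, LOOKUP_FOLLOW, &path);
		if (err)
			return err;

		err = vfs_getattr(&path, stat);
		path_put(&path);	/* drop the ref kern_path() took */
		return err;
	}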
+7 -2
drivers/nvme/host/pci.c
··· 1679 1679 1680 1680 static void nvme_dev_unmap(struct nvme_dev *dev) 1681 1681 { 1682 + struct pci_dev *pdev = to_pci_dev(dev->dev); 1683 + int bars; 1684 + 1682 1685 if (dev->bar) 1683 1686 iounmap(dev->bar); 1684 - pci_release_regions(to_pci_dev(dev->dev)); 1687 + 1688 + bars = pci_select_bars(pdev, IORESOURCE_MEM); 1689 + pci_release_selected_regions(pdev, bars); 1685 1690 } 1686 1691 1687 1692 static void nvme_pci_disable(struct nvme_dev *dev) ··· 1929 1924 1930 1925 return 0; 1931 1926 release: 1932 - pci_release_regions(pdev); 1927 + pci_release_selected_regions(pdev, bars); 1933 1928 return -ENODEV; 1934 1929 } 1935 1930
+13 -2
drivers/of/fdt.c
··· 395 395 struct device_node **nodepp) 396 396 { 397 397 struct device_node *root; 398 - int offset = 0, depth = 0; 398 + int offset = 0, depth = 0, initial_depth = 0; 399 399 #define FDT_MAX_DEPTH 64 400 400 unsigned int fpsizes[FDT_MAX_DEPTH]; 401 401 struct device_node *nps[FDT_MAX_DEPTH]; ··· 405 405 if (nodepp) 406 406 *nodepp = NULL; 407 407 408 + /* 409 + * We're unflattening a device sub-tree if @dad is valid. There may 410 + * be multiple nodes in the first level of depth. We need to set 411 + * @depth to 1 to make fdt_next_node() happy, as it bails 412 + * immediately when a negative @depth is found. Otherwise, the device 413 + * nodes except the first one won't be unflattened successfully. 414 + */ 415 + if (dad) 416 + depth = initial_depth = 1; 417 + 408 418 root = dad; 409 419 fpsizes[depth] = dad ? strlen(of_node_full_name(dad)) : 0; 410 420 nps[depth] = dad; 421 + 411 422 for (offset = 0; 412 - offset >= 0 && depth >= 0; 423 + offset >= 0 && depth >= initial_depth; 413 424 offset = fdt_next_node(blob, offset, &depth)) { 414 425 if (WARN_ON_ONCE(depth >= FDT_MAX_DEPTH)) 415 426 continue;
+10 -9
drivers/of/irq.c
··· 386 386 EXPORT_SYMBOL_GPL(of_irq_to_resource); 387 387 388 388 /** 389 - * of_irq_get - Decode a node's IRQ and return it as a Linux irq number 389 + * of_irq_get - Decode a node's IRQ and return it as a Linux IRQ number 390 390 * @dev: pointer to device tree node 391 - * @index: zero-based index of the irq 391 + * @index: zero-based index of the IRQ 392 392 * 393 - * Returns Linux irq number on success, or -EPROBE_DEFER if the irq domain 394 - * is not yet created. 395 - * 393 + * Returns Linux IRQ number on success, or 0 on the IRQ mapping failure, or 394 + * -EPROBE_DEFER if the IRQ domain is not yet created, or error code in case 395 + * of any other failure. 396 396 */ 397 397 int of_irq_get(struct device_node *dev, int index) 398 398 { ··· 413 413 EXPORT_SYMBOL_GPL(of_irq_get); 414 414 415 415 /** 416 - * of_irq_get_byname - Decode a node's IRQ and return it as a Linux irq number 416 + * of_irq_get_byname - Decode a node's IRQ and return it as a Linux IRQ number 417 417 * @dev: pointer to device tree node 418 - * @name: irq name 418 + * @name: IRQ name 419 419 * 420 - * Returns Linux irq number on success, or -EPROBE_DEFER if the irq domain 421 - * is not yet created, or error code in case of any other failure. 420 + * Returns Linux IRQ number on success, or 0 on the IRQ mapping failure, or 421 + * -EPROBE_DEFER if the IRQ domain is not yet created, or error code in case 422 + * of any other failure. 422 423 */ 423 424 int of_irq_get_byname(struct device_node *dev, const char *name) 424 425 {
+9 -2
drivers/of/of_reserved_mem.c
··· 127 127 } 128 128 129 129 /* Need adjust the alignment to satisfy the CMA requirement */ 130 - if (IS_ENABLED(CONFIG_CMA) && of_flat_dt_is_compatible(node, "shared-dma-pool")) 131 - align = max(align, (phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order)); 130 + if (IS_ENABLED(CONFIG_CMA) 131 + && of_flat_dt_is_compatible(node, "shared-dma-pool") 132 + && of_get_flat_dt_prop(node, "reusable", NULL) 133 + && !of_get_flat_dt_prop(node, "no-map", NULL)) { 134 + unsigned long order = 135 + max_t(unsigned long, MAX_ORDER - 1, pageblock_order); 136 + 137 + align = max(align, (phys_addr_t)PAGE_SIZE << order); 138 + } 132 139 133 140 prop = of_get_flat_dt_prop(node, "alloc-ranges", &len); 134 141 if (prop) {
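As a worked example of the alignment being enforced, assuming common x86-64 configuration values (MAX_ORDER = 11, pageblock_order = 9; both are build-dependent):

	order = max(MAX_ORDER - 1, pageblock_order) = 10
	align = PAGE_SIZE << 10 = 4 KiB << 10 = 4 MiB

The added "reusable"/"no-map" tests restrict this CMA alignment to regions that actually become CMA pools; static carveouts keep whatever alignment the device tree requested.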
+1 -1
drivers/perf/arm_pmu.c
··· 1010 1010 if (!ret) 1011 1011 ret = init_fn(pmu); 1012 1012 } else { 1013 - ret = probe_current_pmu(pmu, probe_table); 1014 1013 cpumask_setall(&pmu->supported_cpus); 1014 + ret = probe_current_pmu(pmu, probe_table); 1015 1015 } 1016 1016 1017 1017 if (ret) {
+5 -1
drivers/phy/phy-exynos-mipi-video.c
··· 233 233 struct exynos_mipi_video_phy *state) 234 234 { 235 235 u32 val; 236 + int ret; 236 237 237 - regmap_read(state->regmaps[data->resetn_map], data->resetn_reg, &val); 238 + ret = regmap_read(state->regmaps[data->resetn_map], data->resetn_reg, &val); 239 + if (ret) 240 + return 0; 241 + 238 242 return val & data->resetn_val; 239 243 } 240 244
+11 -4
drivers/phy/phy-ti-pipe3.c
··· 293 293 ret = ti_pipe3_dpll_wait_lock(phy); 294 294 } 295 295 296 - /* Program the DPLL only if not locked */ 296 + /* SATA has issues if re-programmed when locked */ 297 297 val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_STATUS); 298 - if (!(val & PLL_LOCK)) 299 - if (ti_pipe3_dpll_program(phy)) 300 - return -EINVAL; 298 + if ((val & PLL_LOCK) && of_device_is_compatible(phy->dev->of_node, 299 + "ti,phy-pipe3-sata")) 300 + return ret; 301 + 302 + /* Program the DPLL */ 303 + ret = ti_pipe3_dpll_program(phy); 304 + if (ret) { 305 + ti_pipe3_disable_clocks(phy); 306 + return -EINVAL; 307 + } 301 308 302 309 return ret; 303 310 }
+10 -4
drivers/phy/phy-twl4030-usb.c
··· 463 463 twl4030_usb_set_mode(twl, twl->usb_mode); 464 464 if (twl->usb_mode == T2_USB_MODE_ULPI) 465 465 twl4030_i2c_access(twl, 0); 466 - schedule_delayed_work(&twl->id_workaround_work, 0); 466 + twl->linkstat = MUSB_UNKNOWN; 467 + schedule_delayed_work(&twl->id_workaround_work, HZ); 467 468 468 469 return 0; 469 470 } ··· 538 537 struct twl4030_usb *twl = _twl; 539 538 enum musb_vbus_id_status status; 540 539 bool status_changed = false; 540 + int err; 541 541 542 542 status = twl4030_usb_linkstat(twl); 543 543 ··· 569 567 pm_runtime_mark_last_busy(twl->dev); 570 568 pm_runtime_put_autosuspend(twl->dev); 571 569 } 572 - musb_mailbox(status); 570 + err = musb_mailbox(status); 571 + if (err) 572 + twl->linkstat = MUSB_UNKNOWN; 573 573 } 574 574 575 575 /* don't schedule during sleep - irq works right then */ ··· 599 595 struct twl4030_usb *twl = phy_get_drvdata(phy); 600 596 601 597 pm_runtime_get_sync(twl->dev); 602 - schedule_delayed_work(&twl->id_workaround_work, 0); 598 + twl->linkstat = MUSB_UNKNOWN; 599 + schedule_delayed_work(&twl->id_workaround_work, HZ); 603 600 pm_runtime_mark_last_busy(twl->dev); 604 601 pm_runtime_put_autosuspend(twl->dev); 605 602 ··· 768 763 if (cable_present(twl->linkstat)) 769 764 pm_runtime_put_noidle(twl->dev); 770 765 pm_runtime_mark_last_busy(twl->dev); 771 - pm_runtime_put_sync_suspend(twl->dev); 766 + pm_runtime_dont_use_autosuspend(&pdev->dev); 767 + pm_runtime_put_sync(twl->dev); 772 768 pm_runtime_disable(twl->dev); 773 769 774 770 /* autogate 60MHz ULPI clock,
+4 -6
drivers/platform/x86/Kconfig
··· 103 103 104 104 config DELL_LAPTOP 105 105 tristate "Dell Laptop Extras" 106 - depends on X86 107 106 depends on DELL_SMBIOS 108 107 depends on DMI 109 108 depends on BACKLIGHT_CLASS_DEVICE ··· 504 505 505 506 config SENSORS_HDAPS 506 507 tristate "Thinkpad Hard Drive Active Protection System (hdaps)" 507 - depends on INPUT && X86 508 + depends on INPUT 508 509 select INPUT_POLLDEV 509 510 default n 510 511 help ··· 748 749 749 750 config ACPI_CMPC 750 751 tristate "CMPC Laptop Extras" 751 - depends on X86 && ACPI 752 + depends on ACPI 752 753 depends on RFKILL || RFKILL=n 753 754 select INPUT 754 755 select BACKLIGHT_CLASS_DEVICE ··· 847 848 848 849 config INTEL_PMC_CORE 849 850 bool "Intel PMC Core driver" 850 - depends on X86 && PCI 851 + depends on PCI 851 852 ---help--- 852 853 The Intel Platform Controller Hub for Intel Core SoCs provides access 853 854 to Power Management Controller registers via a PCI interface. This ··· 859 860 860 861 config IBM_RTL 861 862 tristate "Device driver to enable PRTL support" 862 - depends on X86 && PCI 863 + depends on PCI 863 864 ---help--- 864 865 Enable support for IBM Premium Real Time Mode (PRTM). 865 866 This module will allow you the enter and exit PRTM in the BIOS via ··· 893 894 894 895 config SAMSUNG_LAPTOP 895 896 tristate "Samsung Laptop driver" 896 - depends on X86 897 897 depends on RFKILL || RFKILL = n 898 898 depends on ACPI_VIDEO || ACPI_VIDEO = n 899 899 depends on BACKLIGHT_CLASS_DEVICE
+2
drivers/platform/x86/ideapad-laptop.c
··· 567 567 static const struct key_entry ideapad_keymap[] = { 568 568 { KE_KEY, 6, { KEY_SWITCHVIDEOMODE } }, 569 569 { KE_KEY, 7, { KEY_CAMERA } }, 570 + { KE_KEY, 8, { KEY_MICMUTE } }, 570 571 { KE_KEY, 11, { KEY_F16 } }, 571 572 { KE_KEY, 13, { KEY_WLAN } }, 572 573 { KE_KEY, 16, { KEY_PROG1 } }, ··· 810 809 break; 811 810 case 13: 812 811 case 11: 812 + case 8: 813 813 case 7: 814 814 case 6: 815 815 ideapad_input_report(priv, vpc_bit);
+63 -24
drivers/platform/x86/thinkpad_acpi.c
··· 2043 2043 2044 2044 static u32 hotkey_orig_mask; /* events the BIOS had enabled */ 2045 2045 static u32 hotkey_all_mask; /* all events supported in fw */ 2046 + static u32 hotkey_adaptive_all_mask; /* all adaptive events supported in fw */ 2046 2047 static u32 hotkey_reserved_mask; /* events better left disabled */ 2047 2048 static u32 hotkey_driver_mask; /* events needed by the driver */ 2048 2049 static u32 hotkey_user_mask; /* events visible to userspace */ ··· 2743 2742 2744 2743 static DEVICE_ATTR_RO(hotkey_all_mask); 2745 2744 2745 + /* sysfs hotkey all_mask ----------------------------------------------- */ 2746 + static ssize_t hotkey_adaptive_all_mask_show(struct device *dev, 2747 + struct device_attribute *attr, 2748 + char *buf) 2749 + { 2750 + return snprintf(buf, PAGE_SIZE, "0x%08x\n", 2751 + hotkey_adaptive_all_mask | hotkey_source_mask); 2752 + } 2753 + 2754 + static DEVICE_ATTR_RO(hotkey_adaptive_all_mask); 2755 + 2746 2756 /* sysfs hotkey recommended_mask --------------------------------------- */ 2747 2757 static ssize_t hotkey_recommended_mask_show(struct device *dev, 2748 2758 struct device_attribute *attr, ··· 2997 2985 &dev_attr_wakeup_hotunplug_complete.attr, 2998 2986 &dev_attr_hotkey_mask.attr, 2999 2987 &dev_attr_hotkey_all_mask.attr, 2988 + &dev_attr_hotkey_adaptive_all_mask.attr, 3000 2989 &dev_attr_hotkey_recommended_mask.attr, 3001 2990 #ifdef CONFIG_THINKPAD_ACPI_HOTKEY_POLL 3002 2991 &dev_attr_hotkey_source_mask.attr, ··· 3334 3321 if (!tp_features.hotkey) 3335 3322 return 1; 3336 3323 3337 - /* 3338 - * Check if we have an adaptive keyboard, like on the 3339 - * Lenovo Carbon X1 2014 (2nd Gen). 3340 - */ 3341 - if (acpi_evalf(hkey_handle, &hkeyv, "MHKV", "qd")) { 3342 - if ((hkeyv >> 8) == 2) { 3343 - tp_features.has_adaptive_kbd = true; 3344 - res = sysfs_create_group(&tpacpi_pdev->dev.kobj, 3345 - &adaptive_kbd_attr_group); 3346 - if (res) 3347 - goto err_exit; 3348 - } 3349 - } 3350 - 3351 3324 quirks = tpacpi_check_quirks(tpacpi_hotkey_qtable, 3352 3325 ARRAY_SIZE(tpacpi_hotkey_qtable)); 3353 3326 ··· 3356 3357 A30, R30, R31, T20-22, X20-21, X22-24. 
Detected by checking 3357 3358 for HKEY interface version 0x100 */ 3358 3359 if (acpi_evalf(hkey_handle, &hkeyv, "MHKV", "qd")) { 3359 - if ((hkeyv >> 8) != 1) { 3360 - pr_err("unknown version of the HKEY interface: 0x%x\n", 3361 - hkeyv); 3362 - pr_err("please report this to %s\n", TPACPI_MAIL); 3363 - } else { 3360 + vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_HKEY, 3361 + "firmware HKEY interface version: 0x%x\n", 3362 + hkeyv); 3363 + 3364 + switch (hkeyv >> 8) { 3365 + case 1: 3364 3366 /* 3365 3367 * MHKV 0x100 in A31, R40, R40e, 3366 3368 * T4x, X31, and later 3367 3369 */ 3368 - vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_HKEY, 3369 - "firmware HKEY interface version: 0x%x\n", 3370 - hkeyv); 3371 3370 3372 3371 /* Paranoia check AND init hotkey_all_mask */ 3373 3372 if (!acpi_evalf(hkey_handle, &hotkey_all_mask, 3374 3373 "MHKA", "qd")) { 3375 - pr_err("missing MHKA handler, " 3376 - "please report this to %s\n", 3374 + pr_err("missing MHKA handler, please report this to %s\n", 3377 3375 TPACPI_MAIL); 3378 3376 /* Fallback: pre-init for FN+F3,F4,F12 */ 3379 3377 hotkey_all_mask = 0x080cU; 3380 3378 } else { 3381 3379 tp_features.hotkey_mask = 1; 3382 3380 } 3381 + break; 3382 + 3383 + case 2: 3384 + /* 3385 + * MHKV 0x200 in X1, T460s, X260, T560, X1 Tablet (2016) 3386 + */ 3387 + 3388 + /* Paranoia check AND init hotkey_all_mask */ 3389 + if (!acpi_evalf(hkey_handle, &hotkey_all_mask, 3390 + "MHKA", "dd", 1)) { 3391 + pr_err("missing MHKA handler, please report this to %s\n", 3392 + TPACPI_MAIL); 3393 + /* Fallback: pre-init for FN+F3,F4,F12 */ 3394 + hotkey_all_mask = 0x080cU; 3395 + } else { 3396 + tp_features.hotkey_mask = 1; 3397 + } 3398 + 3399 + /* 3400 + * Check if we have an adaptive keyboard, like on the 3401 + * Lenovo Carbon X1 2014 (2nd Gen). 3402 + */ 3403 + if (acpi_evalf(hkey_handle, &hotkey_adaptive_all_mask, 3404 + "MHKA", "dd", 2)) { 3405 + if (hotkey_adaptive_all_mask != 0) { 3406 + tp_features.has_adaptive_kbd = true; 3407 + res = sysfs_create_group( 3408 + &tpacpi_pdev->dev.kobj, 3409 + &adaptive_kbd_attr_group); 3410 + if (res) 3411 + goto err_exit; 3412 + } 3413 + } else { 3414 + tp_features.has_adaptive_kbd = false; 3415 + hotkey_adaptive_all_mask = 0x0U; 3416 + } 3417 + break; 3418 + 3419 + default: 3420 + pr_err("unknown version of the HKEY interface: 0x%x\n", 3421 + hkeyv); 3422 + pr_err("please report this to %s\n", TPACPI_MAIL); 3423 + break; 3383 3424 } 3384 3425 } 3385 3426
+2 -1
drivers/pwm/core.c
··· 457 457 { 458 458 int err; 459 459 460 - if (!pwm) 460 + if (!pwm || !state || !state->period || 461 + state->duty_cycle > state->period) 461 462 return -EINVAL; 462 463 463 464 if (!memcmp(state, &pwm->state, sizeof(*state)))
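From the consumer side, the new checks mean a state must carry a non-zero period and a duty cycle no larger than that period before it is accepted. A hedged caller sketch (the values are illustrative; pwm and dev come from the consumer driver):

	struct pwm_state state;
	int ret;

	pwm_get_state(pwm, &state);	/* start from the current state */
	state.period = 1000000;		/* 1 ms in ns: must be non-zero */
	state.duty_cycle = 250000;	/* 25%: must be <= period */
	state.enabled = true;

	ret = pwm_apply_state(pwm, &state);	/* -EINVAL if checks fail */
	if (ret)
		dev_err(dev, "failed to apply PWM state: %d\n", ret);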
+1 -1
drivers/pwm/pwm-atmel-hlcdc.c
··· 272 272 chip->chip.of_pwm_n_cells = 3; 273 273 chip->chip.can_sleep = 1; 274 274 275 - ret = pwmchip_add(&chip->chip); 275 + ret = pwmchip_add_with_polarity(&chip->chip, PWM_POLARITY_INVERSED); 276 276 if (ret) { 277 277 clk_disable_unprepare(hlcdc->periph_clk); 278 278 return ret;
+1 -1
drivers/pwm/sysfs.c
··· 152 152 goto unlock; 153 153 } 154 154 155 - pwm_apply_state(pwm, &state); 155 + ret = pwm_apply_state(pwm, &state); 156 156 157 157 unlock: 158 158 mutex_unlock(&export->lock);
+14 -1
drivers/regulator/qcom_smd-regulator.c
··· 140 140 .enable = rpm_reg_enable, 141 141 .disable = rpm_reg_disable, 142 142 .is_enabled = rpm_reg_is_enabled, 143 + .list_voltage = regulator_list_voltage_linear_range, 144 + 145 + .get_voltage = rpm_reg_get_voltage, 146 + .set_voltage = rpm_reg_set_voltage, 147 + 148 + .set_load = rpm_reg_set_load, 149 + }; 150 + 151 + static const struct regulator_ops rpm_smps_ldo_ops_fixed = { 152 + .enable = rpm_reg_enable, 153 + .disable = rpm_reg_disable, 154 + .is_enabled = rpm_reg_is_enabled, 155 + .list_voltage = regulator_list_voltage_linear_range, 143 156 144 157 .get_voltage = rpm_reg_get_voltage, 145 158 .set_voltage = rpm_reg_set_voltage, ··· 260 247 static const struct regulator_desc pm8941_lnldo = { 261 248 .fixed_uV = 1740000, 262 249 .n_voltages = 1, 263 - .ops = &rpm_smps_ldo_ops, 250 + .ops = &rpm_smps_ldo_ops_fixed, 264 251 }; 265 252 266 253 static const struct regulator_desc pm8941_switch = {
+6 -3
drivers/regulator/tps51632-regulator.c
··· 94 94 int ramp_delay) 95 95 { 96 96 struct tps51632_chip *tps = rdev_get_drvdata(rdev); 97 - int bit = ramp_delay/6000; 97 + int bit; 98 98 int ret; 99 99 100 - if (bit) 101 - bit--; 100 + if (ramp_delay == 0) 101 + bit = 0; 102 + else 103 + bit = DIV_ROUND_UP(ramp_delay, 6000) - 1; 104 + 102 105 ret = regmap_write(tps->regmap, TPS51632_SLEW_REGS, BIT(bit)); 103 106 if (ret < 0) 104 107 dev_err(tps->dev, "SLEW reg write failed, err %d\n", ret);
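The arithmetic change is easiest to see with numbers: the slew register encodes the ramp in 6000 uV/us steps, and the old plain division truncated a non-multiple request down a step. A standalone check of the new rounding (userspace sketch; values are illustrative):

	#include <assert.h>

	static int ramp_delay_to_bit(int ramp_delay)
	{
		if (ramp_delay == 0)
			return 0;
		/* DIV_ROUND_UP(ramp_delay, 6000) - 1 */
		return (ramp_delay + 6000 - 1) / 6000 - 1;
	}

	int main(void)
	{
		assert(ramp_delay_to_bit(6000) == 0);	/* exact multiple */
		assert(ramp_delay_to_bit(7000) == 1);	/* old code gave 0 */
		assert(ramp_delay_to_bit(12000) == 1);
		return 0;
	}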
+1
drivers/scsi/scsi_devinfo.c
··· 230 230 {"PIONEER", "CD-ROM DRM-624X", NULL, BLIST_FORCELUN | BLIST_SINGLELUN}, 231 231 {"Promise", "VTrak E610f", NULL, BLIST_SPARSELUN | BLIST_NO_RSOC}, 232 232 {"Promise", "", NULL, BLIST_SPARSELUN}, 233 + {"QEMU", "QEMU CD-ROM", NULL, BLIST_SKIP_VPD_PAGES}, 233 234 {"QNAP", "iSCSI Storage", NULL, BLIST_MAX_1024}, 234 235 {"SYNOLOGY", "iSCSI Storage", NULL, BLIST_MAX_1024}, 235 236 {"QUANTUM", "XP34301", "1071", BLIST_NOTQ},
+4 -4
drivers/scsi/sd.c
··· 2867 2867 if (sdkp->opt_xfer_blocks && 2868 2868 sdkp->opt_xfer_blocks <= dev_max && 2869 2869 sdkp->opt_xfer_blocks <= SD_DEF_XFER_BLOCKS && 2870 - sdkp->opt_xfer_blocks * sdp->sector_size >= PAGE_SIZE) 2871 - rw_max = q->limits.io_opt = 2872 - sdkp->opt_xfer_blocks * sdp->sector_size; 2873 - else 2870 + logical_to_bytes(sdp, sdkp->opt_xfer_blocks) >= PAGE_SIZE) { 2871 + q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks); 2872 + rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks); 2873 + } else 2874 2874 rw_max = BLK_DEF_MAX_SECTORS; 2875 2875 2876 2876 /* Combine with controller limits */
+5
drivers/scsi/sd.h
··· 151 151 return blocks << (ilog2(sdev->sector_size) - 9); 152 152 } 153 153 154 + static inline unsigned int logical_to_bytes(struct scsi_device *sdev, sector_t blocks) 155 + { 156 + return blocks * sdev->sector_size; 157 + } 158 + 154 159 /* 155 160 * A DIF-capable target device can be formatted with different 156 161 * protection schemes. Currently 0 through 3 are defined:
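The two helpers make the units explicit: io_opt is in bytes, while rw_max counts 512-byte sectors. A worked example, assuming a disk with 4 KiB logical blocks reporting an optimal transfer length of 256 blocks (numbers are illustrative):

	io_opt = logical_to_bytes(sdp, 256)   = 256 * 4096      = 1 MiB
	rw_max = logical_to_sectors(sdp, 256) = 256 << (12 - 9) = 2048 sectors

The old code stored the byte count in rw_max, which the block layer interprets in sectors, so requests came out 512 times larger than intended.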
+4 -3
drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
··· 2521 2521 return 0; 2522 2522 2523 2523 failed: 2524 - if (ni) 2524 + if (ni) { 2525 2525 lnet_ni_decref(ni); 2526 + rej.ibr_cp.ibcp_queue_depth = kiblnd_msg_queue_size(version, ni); 2527 + rej.ibr_cp.ibcp_max_frags = kiblnd_rdma_frags(version, ni); 2528 + } 2526 2529 2527 2530 rej.ibr_version = version; 2528 - rej.ibr_cp.ibcp_queue_depth = kiblnd_msg_queue_size(version, ni); 2529 - rej.ibr_cp.ibcp_max_frags = kiblnd_rdma_frags(version, ni); 2530 2531 kiblnd_reject(cmid, &rej); 2531 2532 2532 2533 return -ECONNREFUSED;
+1 -1
drivers/staging/rtl8188eu/core/rtw_efuse.c
··· 102 102 if (!efuseTbl) 103 103 return; 104 104 105 - eFuseWord = (u16 **)rtw_malloc2d(EFUSE_MAX_SECTION_88E, EFUSE_MAX_WORD_UNIT, sizeof(*eFuseWord)); 105 + eFuseWord = (u16 **)rtw_malloc2d(EFUSE_MAX_SECTION_88E, EFUSE_MAX_WORD_UNIT, sizeof(u16)); 106 106 if (!eFuseWord) { 107 107 DBG_88E("%s: alloc eFuseWord fail!\n", __func__); 108 108 goto eFuseWord_failed;
+2 -1
drivers/staging/rtl8188eu/hal/usb_halinit.c
··· 2072 2072 { 2073 2073 struct hal_ops *halfunc = &adapt->HalFunc; 2074 2074 2075 + 
 2076 + adapt->HalData = kzalloc(sizeof(struct hal_data_8188e), GFP_KERNEL); 2076 2077 if (!adapt->HalData) 2077 2078 DBG_88E("cannot allocate memory for HAL DATA\n");
+8 -8
drivers/thermal/cpu_cooling.c
··· 857 857 goto free_power_table; 858 858 } 859 859 860 - snprintf(dev_name, sizeof(dev_name), "thermal-cpufreq-%d", 861 - cpufreq_dev->id); 862 - 863 - cool_dev = thermal_of_cooling_device_register(np, dev_name, cpufreq_dev, 864 - &cpufreq_cooling_ops); 865 - if (IS_ERR(cool_dev)) 866 - goto remove_idr; 867 - 868 860 /* Fill freq-table in descending order of frequencies */ 869 861 for (i = 0, freq = -1; i <= cpufreq_dev->max_level; i++) { 870 862 freq = find_next_max(table, freq); ··· 868 876 else 869 877 pr_debug("%s: freq:%u KHz\n", __func__, freq); 870 878 } 879 + 880 + snprintf(dev_name, sizeof(dev_name), "thermal-cpufreq-%d", 881 + cpufreq_dev->id); 882 + 883 + cool_dev = thermal_of_cooling_device_register(np, dev_name, cpufreq_dev, 884 + &cpufreq_cooling_ops); 885 + if (IS_ERR(cool_dev)) 886 + goto remove_idr; 871 887 872 888 cpufreq_dev->clipped_freq = cpufreq_dev->freq_table[0]; 873 889 cpufreq_dev->cool_dev = cool_dev;
+13 -10
drivers/usb/core/quirks.c
··· 44 44 /* Creative SB Audigy 2 NX */ 45 45 { USB_DEVICE(0x041e, 0x3020), .driver_info = USB_QUIRK_RESET_RESUME }, 46 46 47 + /* USB3503 */ 48 + { USB_DEVICE(0x0424, 0x3503), .driver_info = USB_QUIRK_RESET_RESUME }, 49 + 47 50 /* Microsoft Wireless Laser Mouse 6000 Receiver */ 48 51 { USB_DEVICE(0x045e, 0x00e1), .driver_info = USB_QUIRK_RESET_RESUME }, 49 52 ··· 176 173 /* MAYA44USB sound device */ 177 174 { USB_DEVICE(0x0a92, 0x0091), .driver_info = USB_QUIRK_RESET_RESUME }, 178 175 176 + /* ASUS Base Station(T100) */ 177 + { USB_DEVICE(0x0b05, 0x17e0), .driver_info = 178 + USB_QUIRK_IGNORE_REMOTE_WAKEUP }, 179 + 179 180 /* Action Semiconductor flash disk */ 180 181 { USB_DEVICE(0x10d6, 0x2200), .driver_info = 181 182 USB_QUIRK_STRING_FETCH_255 }, ··· 195 188 { USB_DEVICE(0x1908, 0x1315), .driver_info = 196 189 USB_QUIRK_HONOR_BNUMINTERFACES }, 197 190 198 - /* INTEL VALUE SSD */ 199 - { USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME }, 200 - 201 - /* USB3503 */ 202 - { USB_DEVICE(0x0424, 0x3503), .driver_info = USB_QUIRK_RESET_RESUME }, 203 - 204 - /* ASUS Base Station(T100) */ 205 - { USB_DEVICE(0x0b05, 0x17e0), .driver_info = 206 - USB_QUIRK_IGNORE_REMOTE_WAKEUP }, 207 - 208 191 /* Protocol and OTG Electrical Test Device */ 209 192 { USB_DEVICE(0x1a0a, 0x0200), .driver_info = 210 193 USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL }, 194 + 195 + /* Acer C120 LED Projector */ 196 + { USB_DEVICE(0x1de1, 0xc102), .driver_info = USB_QUIRK_NO_LPM }, 211 197 212 198 /* Blackmagic Design Intensity Shuttle */ 213 199 { USB_DEVICE(0x1edb, 0xbd3b), .driver_info = USB_QUIRK_NO_LPM }, 214 200 215 201 /* Blackmagic Design UltraStudio SDI */ 216 202 { USB_DEVICE(0x1edb, 0xbd4f), .driver_info = USB_QUIRK_NO_LPM }, 203 + 204 + /* INTEL VALUE SSD */ 205 + { USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME }, 217 206 218 207 { } /* terminating entry must be last */ 219 208 };
+27
drivers/usb/dwc2/core.h
··· 64 64 DWC2_TRACE_SCHEDULER_VB(pr_fmt("%s: SCH: " fmt), \ 65 65 dev_name(hsotg->dev), ##__VA_ARGS__) 66 66 67 + #ifdef CONFIG_MIPS 68 + /* 69 + * There are some MIPS machines that can run in either big-endian 70 + * or little-endian mode and that access the dwc2 registers without 71 + * a byteswap in both modes. 72 + * Unlike other architectures, MIPS apparently does not require a 73 + * barrier before the __raw_writel() to synchronize with DMA but does 74 + * require the barrier after the __raw_writel() to serialize a set of 75 + * writes. This set of operations was added specifically for MIPS and 76 + * should only be used there. 77 + */ 67 78 static inline u32 dwc2_readl(const void __iomem *addr) 68 79 { 69 80 u32 value = __raw_readl(addr); ··· 101 90 pr_info("INFO:: wrote %08x to %p\n", value, addr); 102 91 #endif 103 92 } 93 + #else 94 + /* Normal architectures just use readl()/writel() */ 95 + static inline u32 dwc2_readl(const void __iomem *addr) 96 + { 97 + return readl(addr); 98 + } 99 + 100 + static inline void dwc2_writel(u32 value, void __iomem *addr) 101 + { 102 + writel(value, addr); 103 + 104 + #ifdef DWC2_LOG_WRITES 105 + pr_info("info:: wrote %08x to %p\n", value, addr); 106 + #endif 107 + } 108 + #endif 104 109 105 110 /* Maximum number of Endpoints/HostChannels */ 106 111 #define MAX_EPS_CHANNELS 16
+20 -4
drivers/usb/dwc2/gadget.c
··· 1018 1018 return 1; 1019 1019 } 1020 1020 1021 - static int dwc2_hsotg_ep_sethalt(struct usb_ep *ep, int value); 1021 + static int dwc2_hsotg_ep_sethalt(struct usb_ep *ep, int value, bool now); 1022 1022 1023 1023 /** 1024 1024 * get_ep_head - return the first request on the endpoint ··· 1094 1094 case USB_ENDPOINT_HALT: 1095 1095 halted = ep->halted; 1096 1096 1097 - dwc2_hsotg_ep_sethalt(&ep->ep, set); 1097 + dwc2_hsotg_ep_sethalt(&ep->ep, set, true); 1098 1098 1099 1099 ret = dwc2_hsotg_send_reply(hsotg, ep0, NULL, 0); 1100 1100 if (ret) { ··· 2948 2948 * dwc2_hsotg_ep_sethalt - set halt on a given endpoint 2949 2949 * @ep: The endpoint to set halt. 2950 2950 * @value: Set or unset the halt. 2951 + * @now: If true, stall the endpoint now. Otherwise return -EAGAIN if 2952 + * the endpoint is busy processing requests. 2953 + * 2954 + * We need to stall the endpoint immediately if request comes from set_feature 2955 + * protocol command handler. 2951 2956 */ 2952 - static int dwc2_hsotg_ep_sethalt(struct usb_ep *ep, int value) 2957 + static int dwc2_hsotg_ep_sethalt(struct usb_ep *ep, int value, bool now) 2953 2958 { 2954 2959 struct dwc2_hsotg_ep *hs_ep = our_ep(ep); 2955 2960 struct dwc2_hsotg *hs = hs_ep->parent; ··· 2972 2967 dev_warn(hs->dev, 2973 2968 "%s: can't clear halt on ep0\n", __func__); 2974 2969 return 0; 2970 + } 2971 + 2972 + if (hs_ep->isochronous) { 2973 + dev_err(hs->dev, "%s is Isochronous Endpoint\n", ep->name); 2974 + return -EINVAL; 2975 + } 2976 + 2977 + if (!now && value && !list_empty(&hs_ep->queue)) { 2978 + dev_dbg(hs->dev, "%s request is pending, cannot halt\n", 2979 + ep->name); 2980 + return -EAGAIN; 2975 2981 } 2976 2982 2977 2983 if (hs_ep->dir_in) { ··· 3036 3020 int ret = 0; 3037 3021 3038 3022 spin_lock_irqsave(&hs->lock, flags); 3039 - ret = dwc2_hsotg_ep_sethalt(ep, value); 3023 + ret = dwc2_hsotg_ep_sethalt(ep, value, false); 3040 3024 spin_unlock_irqrestore(&hs->lock, flags); 3041 3025 3042 3026 return ret;
+1
drivers/usb/dwc3/core.h
··· 402 402 #define DWC3_DEPCMD_GET_RSC_IDX(x) (((x) >> DWC3_DEPCMD_PARAM_SHIFT) & 0x7f) 403 403 #define DWC3_DEPCMD_STATUS(x) (((x) >> 12) & 0x0F) 404 404 #define DWC3_DEPCMD_HIPRI_FORCERM (1 << 11) 405 + #define DWC3_DEPCMD_CLEARPENDIN (1 << 11) 405 406 #define DWC3_DEPCMD_CMDACT (1 << 10) 406 407 #define DWC3_DEPCMD_CMDIOC (1 << 8) 407 408
+11 -8
drivers/usb/dwc3/dwc3-exynos.c
··· 128 128 129 129 platform_set_drvdata(pdev, exynos); 130 130 131 - ret = dwc3_exynos_register_phys(exynos); 132 - if (ret) { 133 - dev_err(dev, "couldn't register PHYs\n"); 134 - return ret; 135 - } 136 - 137 131 exynos->dev = dev; 138 132 139 133 exynos->clk = devm_clk_get(dev, "usbdrd30"); ··· 177 183 goto err3; 178 184 } 179 185 186 + ret = dwc3_exynos_register_phys(exynos); 187 + if (ret) { 188 + dev_err(dev, "couldn't register PHYs\n"); 189 + goto err4; 190 + } 191 + 180 192 if (node) { 181 193 ret = of_platform_populate(node, NULL, NULL, dev); 182 194 if (ret) { 183 195 dev_err(dev, "failed to add dwc3 core\n"); 184 - goto err4; 196 + goto err5; 185 197 } 186 198 } else { 187 199 dev_err(dev, "no device node, failed to add dwc3 core\n"); 188 200 ret = -ENODEV; 189 - goto err4; 201 + goto err5; 190 202 } 191 203 192 204 return 0; 193 205 206 + err5: 207 + platform_device_unregister(exynos->usb2_phy); 208 + platform_device_unregister(exynos->usb3_phy); 194 209 err4: 195 210 regulator_disable(exynos->vdd10); 196 211 err3:
+8 -2
drivers/usb/dwc3/dwc3-st.c
··· 129 129 switch (dwc3_data->dr_mode) { 130 130 case USB_DR_MODE_PERIPHERAL: 131 131 132 - val &= ~(USB3_FORCE_VBUSVALID | USB3_DELAY_VBUSVALID 132 + val &= ~(USB3_DELAY_VBUSVALID 133 133 | USB3_SEL_FORCE_OPMODE | USB3_FORCE_OPMODE(0x3) 134 134 | USB3_SEL_FORCE_DPPULLDOWN2 | USB3_FORCE_DPPULLDOWN2 135 135 | USB3_SEL_FORCE_DMPULLDOWN2 | USB3_FORCE_DMPULLDOWN2); 136 136 137 - val |= USB3_DEVICE_NOT_HOST; 137 + /* 138 + * When USB3_PORT2_FORCE_VBUSVALID is '1' and 139 + * USB3_PORT2_DEVICE_NOT_HOST is '1', the VBUSVLDEXT2 input 140 + * of the pico PHY is forced to 1. 141 + */ 142 + 143 + val |= USB3_DEVICE_NOT_HOST | USB3_FORCE_VBUSVALID; 138 144 break; 139 145 140 146 case USB_DR_MODE_HOST:
+24 -6
drivers/usb/dwc3/gadget.c
··· 347 347 return ret; 348 348 } 349 349 350 + static int dwc3_send_clear_stall_ep_cmd(struct dwc3_ep *dep) 351 + { 352 + struct dwc3 *dwc = dep->dwc; 353 + struct dwc3_gadget_ep_cmd_params params; 354 + u32 cmd = DWC3_DEPCMD_CLEARSTALL; 355 + 356 + /* 357 + * As of core revision 2.60a the recommended programming model 358 + * is to set the ClearPendIN bit when issuing a Clear Stall EP 359 + * command for IN endpoints. This is to prevent an issue where 360 + * some (non-compliant) hosts may not send ACK TPs for pending 361 + * IN transfers due to a mishandled error condition. Synopsys 362 + * STAR 9000614252. 363 + */ 364 + if (dep->direction && (dwc->revision >= DWC3_REVISION_260A)) 365 + cmd |= DWC3_DEPCMD_CLEARPENDIN; 366 + 367 + memset(&params, 0, sizeof(params)); 368 + 369 + return dwc3_send_gadget_ep_cmd(dwc, dep->number, cmd, &params); 370 + } 371 + 350 372 static dma_addr_t dwc3_trb_dma_offset(struct dwc3_ep *dep, 351 373 struct dwc3_trb *trb) 352 374 { ··· 1336 1314 else 1337 1315 dep->flags |= DWC3_EP_STALL; 1338 1316 } else { 1339 - ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, 1340 - DWC3_DEPCMD_CLEARSTALL, &params); 1317 + ret = dwc3_send_clear_stall_ep_cmd(dep); 1341 1318 if (ret) 1342 1319 dev_err(dwc->dev, "failed to clear STALL on %s\n", 1343 1320 dep->name); ··· 2268 2247 2269 2248 for (epnum = 1; epnum < DWC3_ENDPOINTS_NUM; epnum++) { 2270 2249 struct dwc3_ep *dep; 2271 - struct dwc3_gadget_ep_cmd_params params; 2272 2250 int ret; 2273 2251 2274 2252 dep = dwc->eps[epnum]; ··· 2279 2259 2280 2260 dep->flags &= ~DWC3_EP_STALL; 2281 2261 2282 - memset(&params, 0, sizeof(params)); 2283 - ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, 2284 - DWC3_DEPCMD_CLEARSTALL, &params); 2262 + ret = dwc3_send_clear_stall_ep_cmd(dep); 2285 2263 WARN_ON_ONCE(ret); 2286 2264 } 2287 2265 }
+13 -8
drivers/usb/gadget/composite.c
··· 1868 1868 } 1869 1869 break; 1870 1870 } 1871 - req->length = value; 1872 - req->context = cdev; 1873 - req->zero = value < w_length; 1874 - value = composite_ep0_queue(cdev, req, GFP_ATOMIC); 1875 - if (value < 0) { 1876 - DBG(cdev, "ep_queue --> %d\n", value); 1877 - req->status = 0; 1878 - composite_setup_complete(gadget->ep0, req); 1871 + 1872 + if (value >= 0) { 1873 + req->length = value; 1874 + req->context = cdev; 1875 + req->zero = value < w_length; 1876 + value = composite_ep0_queue(cdev, req, 1877 + GFP_ATOMIC); 1878 + if (value < 0) { 1879 + DBG(cdev, "ep_queue --> %d\n", value); 1880 + req->status = 0; 1881 + composite_setup_complete(gadget->ep0, 1882 + req); 1883 + } 1879 1884 } 1880 1885 return value; 1881 1886 }
+1
drivers/usb/gadget/configfs.c
··· 1401 1401 .owner = THIS_MODULE, 1402 1402 .name = "configfs-gadget", 1403 1403 }, 1404 + .match_existing_only = 1, 1404 1405 }; 1405 1406 1406 1407 static struct config_group *gadgets_make(
+15 -15
drivers/usb/gadget/function/f_fs.c
··· 2051 2051 2052 2052 if (len < sizeof(*d) || 2053 2053 d->bFirstInterfaceNumber >= ffs->interfaces_count || 2054 - d->Reserved1) 2054 + !d->Reserved1) 2055 2055 return -EINVAL; 2056 2056 for (i = 0; i < ARRAY_SIZE(d->Reserved2); ++i) 2057 2057 if (d->Reserved2[i]) ··· 2729 2729 func->ffs->ss_descs_count; 2730 2730 2731 2731 int fs_len, hs_len, ss_len, ret, i; 2732 + struct ffs_ep *eps_ptr; 2732 2733 2733 2734 /* Make it a single chunk, less management later on */ 2734 2735 vla_group(d); ··· 2778 2777 ffs->raw_descs_length); 2779 2778 2780 2779 memset(vla_ptr(vlabuf, d, inums), 0xff, d_inums__sz); 2781 - for (ret = ffs->eps_count; ret; --ret) { 2782 - struct ffs_ep *ptr; 2783 - 2784 - ptr = vla_ptr(vlabuf, d, eps); 2785 - ptr[ret].num = -1; 2786 - } 2780 + eps_ptr = vla_ptr(vlabuf, d, eps); 2781 + for (i = 0; i < ffs->eps_count; i++) 2782 + eps_ptr[i].num = -1; 2787 2783 2788 2784 /* Save pointers 2789 2785 * d_eps == vlabuf, func->eps used to kfree vlabuf later ··· 2849 2851 goto error; 2850 2852 2851 2853 func->function.os_desc_table = vla_ptr(vlabuf, d, os_desc_table); 2852 - if (c->cdev->use_os_string) 2854 + if (c->cdev->use_os_string) { 2853 2855 for (i = 0; i < ffs->interfaces_count; ++i) { 2854 2856 struct usb_os_desc *desc; 2855 2857 ··· 2860 2862 vla_ptr(vlabuf, d, ext_compat) + i * 16; 2861 2863 INIT_LIST_HEAD(&desc->ext_prop); 2862 2864 } 2863 - ret = ffs_do_os_descs(ffs->ms_os_descs_count, 2864 - vla_ptr(vlabuf, d, raw_descs) + 2865 - fs_len + hs_len + ss_len, 2866 - d_raw_descs__sz - fs_len - hs_len - ss_len, 2867 - __ffs_func_bind_do_os_desc, func); 2868 - if (unlikely(ret < 0)) 2869 - goto error; 2865 + ret = ffs_do_os_descs(ffs->ms_os_descs_count, 2866 + vla_ptr(vlabuf, d, raw_descs) + 2867 + fs_len + hs_len + ss_len, 2868 + d_raw_descs__sz - fs_len - hs_len - 2869 + ss_len, 2870 + __ffs_func_bind_do_os_desc, func); 2871 + if (unlikely(ret < 0)) 2872 + goto error; 2873 + } 2870 2874 func->function.os_desc_n = 2871 2875 c->cdev->use_os_string ? ffs->interfaces_count : 0; 2872 2876
-8
drivers/usb/gadget/function/f_printer.c
··· 161 161 .wMaxPacketSize = cpu_to_le16(512) 162 162 }; 163 163 164 - static struct usb_qualifier_descriptor dev_qualifier = { 165 - .bLength = sizeof(dev_qualifier), 166 - .bDescriptorType = USB_DT_DEVICE_QUALIFIER, 167 - .bcdUSB = cpu_to_le16(0x0200), 168 - .bDeviceClass = USB_CLASS_PRINTER, 169 - .bNumConfigurations = 1 170 - }; 171 - 172 164 static struct usb_descriptor_header *hs_printer_function[] = { 173 165 (struct usb_descriptor_header *) &intf_desc, 174 166 (struct usb_descriptor_header *) &hs_ep_in_desc,
+11 -9
drivers/usb/gadget/function/f_tcm.c
··· 1445 1445 for (i = 0; i < TPG_INSTANCES; ++i) 1446 1446 if (tpg_instances[i].tpg == tpg) 1447 1447 break; 1448 - if (i < TPG_INSTANCES) 1448 + if (i < TPG_INSTANCES) { 1449 1449 tpg_instances[i].tpg = NULL; 1450 - opts = container_of(tpg_instances[i].func_inst, 1451 - struct f_tcm_opts, func_inst); 1452 - mutex_lock(&opts->dep_lock); 1453 - if (opts->has_dep) 1454 - module_put(opts->dependent); 1455 - else 1456 - configfs_undepend_item_unlocked(&opts->func_inst.group.cg_item); 1457 - mutex_unlock(&opts->dep_lock); 1450 + opts = container_of(tpg_instances[i].func_inst, 1451 + struct f_tcm_opts, func_inst); 1452 + mutex_lock(&opts->dep_lock); 1453 + if (opts->has_dep) 1454 + module_put(opts->dependent); 1455 + else 1456 + configfs_undepend_item_unlocked( 1457 + &opts->func_inst.group.cg_item); 1458 + mutex_unlock(&opts->dep_lock); 1459 + } 1458 1460 mutex_unlock(&tpg_instances_lock); 1459 1461 1460 1462 kfree(tpg);
+1 -12
drivers/usb/gadget/function/f_uac2.c
··· 598 598 NULL, 599 599 }; 600 600 601 - static struct usb_qualifier_descriptor devqual_desc = { 602 - .bLength = sizeof devqual_desc, 603 - .bDescriptorType = USB_DT_DEVICE_QUALIFIER, 604 - 605 - .bcdUSB = cpu_to_le16(0x200), 606 - .bDeviceClass = USB_CLASS_MISC, 607 - .bDeviceSubClass = 0x02, 608 - .bDeviceProtocol = 0x01, 609 - .bNumConfigurations = 1, 610 - .bRESERVED = 0, 611 - }; 612 - 613 601 static struct usb_interface_assoc_descriptor iad_desc = { 614 602 .bLength = sizeof iad_desc, 615 603 .bDescriptorType = USB_DT_INTERFACE_ASSOCIATION, ··· 1280 1292 1281 1293 if (control_selector == UAC2_CS_CONTROL_SAM_FREQ) { 1282 1294 struct cntrl_cur_lay3 c; 1295 + memset(&c, 0, sizeof(struct cntrl_cur_lay3)); 1283 1296 1284 1297 if (entity_id == USB_IN_CLK_ID) 1285 1298 c.dCUR = p_srate;
+1 -3
drivers/usb/gadget/function/storage_common.c
··· 83 83 * USB 2.0 devices need to expose both high speed and full speed 84 84 * descriptors, unless they only run at full speed. 85 85 * 86 - * That means alternate endpoint descriptors (bigger packets) 87 - * and a "device qualifier" ... plus more construction options 88 - * for the configuration descriptor. 86 + * That means alternate endpoint descriptors (bigger packets). 89 87 */ 90 88 struct usb_endpoint_descriptor fsg_hs_bulk_in_desc = { 91 89 .bLength = USB_DT_ENDPOINT_SIZE,
+13 -4
drivers/usb/gadget/legacy/inode.c
··· 938 938 struct usb_ep *ep = dev->gadget->ep0; 939 939 struct usb_request *req = dev->req; 940 940 941 - if ((retval = setup_req (ep, req, 0)) == 0) 942 - retval = usb_ep_queue (ep, req, GFP_ATOMIC); 941 + if ((retval = setup_req (ep, req, 0)) == 0) { 942 + spin_unlock_irq (&dev->lock); 943 + retval = usb_ep_queue (ep, req, GFP_KERNEL); 944 + spin_lock_irq (&dev->lock); 945 + } 943 946 dev->state = STATE_DEV_CONNECTED; 944 947 945 948 /* assume that was SET_CONFIGURATION */ ··· 1460 1457 w_length); 1461 1458 if (value < 0) 1462 1459 break; 1460 + 1461 + spin_unlock (&dev->lock); 1463 1462 value = usb_ep_queue (gadget->ep0, dev->req, 1464 - GFP_ATOMIC); 1463 + GFP_KERNEL); 1464 + spin_lock (&dev->lock); 1465 1465 if (value < 0) { 1466 1466 clean_req (gadget->ep0, dev->req); 1467 1467 break; ··· 1487 1481 if (value >= 0 && dev->state != STATE_DEV_SETUP) { 1488 1482 req->length = value; 1489 1483 req->zero = value < w_length; 1490 - value = usb_ep_queue (gadget->ep0, req, GFP_ATOMIC); 1484 + 1485 + spin_unlock (&dev->lock); 1486 + value = usb_ep_queue (gadget->ep0, req, GFP_KERNEL); 1491 1487 if (value < 0) { 1492 1488 DBG (dev, "ep_queue --> %d\n", value); 1493 1489 req->status = 0; 1494 1490 } 1491 + return value; 1495 1492 } 1496 1493 1497 1494 /* device stalls when value < 0 */
+8 -4
drivers/usb/gadget/udc/udc-core.c
··· 603 603 } 604 604 } 605 605 606 - list_add_tail(&driver->pending, &gadget_driver_pending_list); 607 - pr_info("udc-core: couldn't find an available UDC - added [%s] to list of pending drivers\n", 608 - driver->function); 606 + if (!driver->match_existing_only) { 607 + list_add_tail(&driver->pending, &gadget_driver_pending_list); 608 + pr_info("udc-core: couldn't find an available UDC - added [%s] to list of pending drivers\n", 609 + driver->function); 610 + ret = 0; 611 + } 612 + 609 613 mutex_unlock(&udc_lock); 610 - return 0; 614 + return ret; 611 615 found: 612 616 ret = udc_bind_to_driver(udc, driver); 613 617 mutex_unlock(&udc_lock);
+9
drivers/usb/host/ehci-hcd.c
··· 368 368 { 369 369 struct ehci_hcd *ehci = hcd_to_ehci(hcd); 370 370 371 + /* 372 + * Protect the system from crashing at system shutdown in cases where 373 + * the USB host is not yet added by the OTG controller driver. 374 + * Since ehci_setup() has not been done yet, avoid accessing registers 375 + * or variables initialized in ehci_setup(). 376 + */ 377 + if (!ehci->sbrn) 378 + return; 379 + 371 380 spin_lock_irq(&ehci->lock); 372 381 ehci->shutdown = true; 373 382 ehci->rh_state = EHCI_RH_STOPPING;
+11 -3
drivers/usb/host/ehci-hub.c
··· 872 872 ) { 873 873 struct ehci_hcd *ehci = hcd_to_ehci (hcd); 874 874 int ports = HCS_N_PORTS (ehci->hcs_params); 875 - u32 __iomem *status_reg = &ehci->regs->port_status[ 876 - (wIndex & 0xff) - 1]; 877 - u32 __iomem *hostpc_reg = &ehci->regs->hostpc[(wIndex & 0xff) - 1]; 875 + u32 __iomem *status_reg, *hostpc_reg; 878 876 u32 temp, temp1, status; 879 877 unsigned long flags; 880 878 int retval = 0; 881 879 unsigned selector; 880 + 881 + /* 882 + * Avoid underflow while calculating (wIndex & 0xff) - 1. 883 + * The compiler might deduce that wIndex can never be 0 and then 884 + * optimize away the tests for !wIndex below. 885 + */ 886 + temp = wIndex & 0xff; 887 + temp -= (temp > 0); 888 + status_reg = &ehci->regs->port_status[temp]; 889 + hostpc_reg = &ehci->regs->hostpc[temp]; 882 890 883 891 /* 884 892 * FIXME: support SetPortFeatures USB_PORT_FEAT_INDICATOR.
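To see the undefined behaviour being avoided: wIndex is the 1-based port number, and the old code formed &port_status[(wIndex & 0xff) - 1] before any validation. Traced through both cases, with temp a u32:

	wIndex == 5:  temp = 5;  temp -= (5 > 0);  ->  4  (port 5's register)
	wIndex == 0:  temp = 0;  temp -= (0 > 0);  ->  0  (harmless dummy index)

With wIndex == 0 the old expression indexed with 0xffffffff; because that is out of bounds, the compiler was entitled to assume wIndex != 0 and optimize away the later !wIndex sanity checks.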
+12 -2
drivers/usb/host/ehci-msm.c
··· 179 179 static int ehci_msm_pm_suspend(struct device *dev) 180 180 { 181 181 struct usb_hcd *hcd = dev_get_drvdata(dev); 182 + struct ehci_hcd *ehci = hcd_to_ehci(hcd); 182 183 bool do_wakeup = device_may_wakeup(dev); 183 184 184 185 dev_dbg(dev, "ehci-msm PM suspend\n"); 185 186 186 - return ehci_suspend(hcd, do_wakeup); 187 + /* Only call ehci_suspend if ehci_setup has been done */ 188 + if (ehci->sbrn) 189 + return ehci_suspend(hcd, do_wakeup); 190 + 191 + return 0; 187 192 } 188 193 189 194 static int ehci_msm_pm_resume(struct device *dev) 190 195 { 191 196 struct usb_hcd *hcd = dev_get_drvdata(dev); 197 + struct ehci_hcd *ehci = hcd_to_ehci(hcd); 192 198 193 199 dev_dbg(dev, "ehci-msm PM resume\n"); 194 - ehci_resume(hcd, false); 200 + 201 + /* Only call ehci_resume if ehci_setup has been done */ 202 + if (ehci->sbrn) 203 + ehci_resume(hcd, false); 195 204 196 205 return 0; 197 206 } 207 + 198 208 #else 199 209 #define ehci_msm_pm_suspend NULL 200 210 #define ehci_msm_pm_resume NULL
+13 -3
drivers/usb/host/ehci-tegra.c
··· 81 81 struct usb_hcd *hcd = platform_get_drvdata(pdev); 82 82 struct tegra_ehci_hcd *tegra = 83 83 (struct tegra_ehci_hcd *)hcd_to_ehci(hcd)->priv; 84 + bool has_utmi_pad_registers = false; 84 85 85 86 phy_np = of_parse_phandle(pdev->dev.of_node, "nvidia,phy", 0); 86 87 if (!phy_np) 87 88 return -ENOENT; 88 89 90 + if (of_property_read_bool(phy_np, "nvidia,has-utmi-pad-registers")) 91 + has_utmi_pad_registers = true; 92 + 89 93 if (!usb1_reset_attempted) { 90 94 struct reset_control *usb1_reset; 91 95 92 - usb1_reset = of_reset_control_get(phy_np, "usb"); 96 + if (!has_utmi_pad_registers) 97 + usb1_reset = of_reset_control_get(phy_np, "utmi-pads"); 98 + else 99 + usb1_reset = tegra->rst; 100 + 93 101 if (IS_ERR(usb1_reset)) { 94 102 dev_warn(&pdev->dev, 95 103 "can't get utmi-pads reset from the PHY\n"); ··· 107 99 reset_control_assert(usb1_reset); 108 100 udelay(1); 109 101 reset_control_deassert(usb1_reset); 102 + 103 + if (!has_utmi_pad_registers) 104 + reset_control_put(usb1_reset); 110 105 } 111 106 112 - reset_control_put(usb1_reset); 113 107 usb1_reset_attempted = true; 114 108 } 115 109 116 - if (!of_property_read_bool(phy_np, "nvidia,has-utmi-pad-registers")) { 110 + if (!has_utmi_pad_registers) { 117 111 reset_control_assert(tegra->rst); 118 112 udelay(1); 119 113 reset_control_deassert(tegra->rst);
+2 -1
drivers/usb/host/ohci-q.c
··· 183 183 { 184 184 int branch; 185 185 186 - ed->state = ED_OPER; 187 186 ed->ed_prev = NULL; 188 187 ed->ed_next = NULL; 189 188 ed->hwNextED = 0; ··· 258 259 /* the HC may not see the schedule updates yet, but if it does 259 260 * then they'll be properly ordered. 260 261 */ 262 + 263 + ed->state = ED_OPER; 261 264 return 0; 262 265 } 263 266
+5
drivers/usb/host/xhci-pci.c
··· 37 37 /* Device for a quirk */ 38 38 #define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73 39 39 #define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000 40 + #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009 0x1009 40 41 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400 0x1400 41 42 42 43 #define PCI_VENDOR_ID_ETRON 0x1b6f ··· 114 113 pdev->revision); 115 114 xhci->quirks |= XHCI_TRUST_TX_LENGTH; 116 115 } 116 + 117 + if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC && 118 + pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1009) 119 + xhci->quirks |= XHCI_BROKEN_STREAMS; 117 120 118 121 if (pdev->vendor == PCI_VENDOR_ID_NEC) 119 122 xhci->quirks |= XHCI_NEC_HOST;
+3
drivers/usb/host/xhci-plat.c
··· 196 196 ret = clk_prepare_enable(clk); 197 197 if (ret) 198 198 goto put_hcd; 199 + } else if (PTR_ERR(clk) == -EPROBE_DEFER) { 200 + ret = -EPROBE_DEFER; 201 + goto put_hcd; 199 202 } 200 203 201 204 xhci = hcd_to_xhci(hcd);
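The xhci-plat.c hunk handles the optional-clock case: a missing clock is tolerated, but -EPROBE_DEFER must still be passed up so probe is retried once the clock provider binds. A condensed sketch of the pattern, assuming a generic platform probe with a put_hcd unwind label:

    clk = devm_clk_get(&pdev->dev, NULL);
    if (!IS_ERR(clk)) {
            ret = clk_prepare_enable(clk);
            if (ret)
                    goto put_hcd;           /* clock exists but won't start */
    } else if (PTR_ERR(clk) == -EPROBE_DEFER) {
            ret = -EPROBE_DEFER;            /* provider not bound yet: retry */
            goto put_hcd;
    }
    /* any other error code: treat the clock as simply absent */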
+24 -6
drivers/usb/host/xhci-ring.c
··· 290 290 291 291 temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring); 292 292 xhci->cmd_ring_state = CMD_RING_STATE_ABORTED; 293 + 294 + /* 295 + * Writing the CMD_RING_ABORT bit should cause a cmd completion event, 296 + * however on some host hw the CMD_RING_RUNNING bit is correctly cleared 297 + * but the completion event is never sent. Use the cmd timeout timer to 298 + * handle those cases. Use twice the time to cover the bit polling retry. 299 + */ 300 + mod_timer(&xhci->cmd_timer, jiffies + (2 * XHCI_CMD_DEFAULT_TIMEOUT)); 293 301 xhci_write_64(xhci, temp_64 | CMD_RING_ABORT, 294 302 &xhci->op_regs->cmd_ring); 295 303 ··· 322 314 323 315 xhci_err(xhci, "Stopped the command ring failed, " 324 316 "maybe the host is dead\n"); 317 + del_timer(&xhci->cmd_timer); 325 318 xhci->xhc_state |= XHCI_STATE_DYING; 326 319 xhci_quiesce(xhci); 327 320 xhci_halt(xhci); ··· 1255 1246 int ret; 1256 1247 unsigned long flags; 1257 1248 u64 hw_ring_state; 1258 - struct xhci_command *cur_cmd = NULL; 1249 + bool second_timeout = false; 1259 1250 xhci = (struct xhci_hcd *) data; 1260 1251 1261 1252 /* mark this command to be cancelled */ 1262 1253 spin_lock_irqsave(&xhci->lock, flags); 1263 1254 if (xhci->current_cmd) { 1264 - cur_cmd = xhci->current_cmd; 1265 - cur_cmd->status = COMP_CMD_ABORT; 1255 + if (xhci->current_cmd->status == COMP_CMD_ABORT) 1256 + second_timeout = true; 1257 + xhci->current_cmd->status = COMP_CMD_ABORT; 1266 1258 } 1267 - 1268 1259 1269 1260 /* Make sure command ring is running before aborting it */ 1270 1261 hw_ring_state = xhci_read_64(xhci, &xhci->op_regs->cmd_ring); 1271 1262 if ((xhci->cmd_ring_state & CMD_RING_STATE_RUNNING) && 1272 1263 (hw_ring_state & CMD_RING_RUNNING)) { 1273 - 1274 1264 spin_unlock_irqrestore(&xhci->lock, flags); 1275 1265 xhci_dbg(xhci, "Command timeout\n"); 1276 1266 ret = xhci_abort_cmd_ring(xhci); ··· 1281 1273 } 1282 1274 return; 1283 1275 } 1276 + 1277 + /* command ring failed to restart, or host removed. Bail out */ 1278 + if (second_timeout || xhci->xhc_state & XHCI_STATE_REMOVING) { 1279 + spin_unlock_irqrestore(&xhci->lock, flags); 1280 + xhci_dbg(xhci, "command timed out twice, ring start fail?\n"); 1281 + xhci_cleanup_command_queue(xhci); 1282 + return; 1283 + } 1284 + 1284 1285 /* command timeout on stopped ring, ring can't be aborted */ 1285 1286 xhci_dbg(xhci, "Command timeout on stopped ring\n"); 1286 1287 xhci_handle_stopped_cmd_ring(xhci, xhci->current_cmd); ··· 2738 2721 writel(irq_pending, &xhci->ir_set->irq_pending); 2739 2722 } 2740 2723 2741 - if (xhci->xhc_state & XHCI_STATE_DYING) { 2724 + if (xhci->xhc_state & XHCI_STATE_DYING || 2725 + xhci->xhc_state & XHCI_STATE_HALTED) { 2742 2726 xhci_dbg(xhci, "xHCI dying, ignoring interrupt. " 2743 2727 "Shouldn't IRQs be disabled?\n"); 2744 2728 /* Clear the event handler busy flag (RW1C);
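Several fixes are interleaved in this hunk; the timeout handling itself reduces to a two-strike rule. A condensed sketch with simplified locking (real xhci symbols, error paths and the abort sequence elided):

    static void cmd_timeout_sketch(struct xhci_hcd *xhci)
    {
            bool second_timeout = false;
            unsigned long flags;

            spin_lock_irqsave(&xhci->lock, flags);
            if (xhci->current_cmd) {
                    if (xhci->current_cmd->status == COMP_CMD_ABORT)
                            second_timeout = true;  /* timer already fired once */
                    xhci->current_cmd->status = COMP_CMD_ABORT;
            }

            /* ... the ring-running check and abort path are elided ... */

            if (second_timeout || xhci->xhc_state & XHCI_STATE_REMOVING) {
                    spin_unlock_irqrestore(&xhci->lock, flags);
                    xhci_cleanup_command_queue(xhci);   /* give up cleanly */
                    return;
            }
            spin_unlock_irqrestore(&xhci->lock, flags);
    }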
+16 -13
drivers/usb/host/xhci.c
··· 685 685 u32 temp; 686 686 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 687 687 688 - if (xhci->xhc_state & XHCI_STATE_HALTED) 689 - return; 690 - 691 688 mutex_lock(&xhci->mutex); 692 - spin_lock_irq(&xhci->lock); 693 - xhci->xhc_state |= XHCI_STATE_HALTED; 694 - xhci->cmd_ring_state = CMD_RING_STATE_STOPPED; 695 689 696 - /* Make sure the xHC is halted for a USB3 roothub 697 - * (xhci_stop() could be called as part of failed init). 698 - */ 699 - xhci_halt(xhci); 700 - xhci_reset(xhci); 701 - spin_unlock_irq(&xhci->lock); 690 + if (!(xhci->xhc_state & XHCI_STATE_HALTED)) { 691 + spin_lock_irq(&xhci->lock); 692 + 693 + xhci->xhc_state |= XHCI_STATE_HALTED; 694 + xhci->cmd_ring_state = CMD_RING_STATE_STOPPED; 695 + xhci_halt(xhci); 696 + xhci_reset(xhci); 697 + 698 + spin_unlock_irq(&xhci->lock); 699 + } 700 + 701 + if (!usb_hcd_is_primary_hcd(hcd)) { 702 + mutex_unlock(&xhci->mutex); 703 + return; 704 + } 702 705 703 706 xhci_cleanup_msix(xhci); 704 707 ··· 4889 4886 xhci->hcc_params2 = readl(&xhci->cap_regs->hcc_params2); 4890 4887 xhci_print_registers(xhci); 4891 4888 4892 - xhci->quirks = quirks; 4889 + xhci->quirks |= quirks; 4893 4890 4894 4891 get_quirks(dev, xhci); 4895 4892
+37 -48
drivers/usb/musb/musb_core.c
··· 1090 1090 musb_platform_try_idle(musb, 0); 1091 1091 } 1092 1092 1093 - static void musb_shutdown(struct platform_device *pdev) 1094 - { 1095 - struct musb *musb = dev_to_musb(&pdev->dev); 1096 - unsigned long flags; 1097 - 1098 - pm_runtime_get_sync(musb->controller); 1099 - 1100 - musb_host_cleanup(musb); 1101 - musb_gadget_cleanup(musb); 1102 - 1103 - spin_lock_irqsave(&musb->lock, flags); 1104 - musb_platform_disable(musb); 1105 - musb_generic_disable(musb); 1106 - spin_unlock_irqrestore(&musb->lock, flags); 1107 - 1108 - musb_writeb(musb->mregs, MUSB_DEVCTL, 0); 1109 - musb_platform_exit(musb); 1110 - 1111 - pm_runtime_put(musb->controller); 1112 - /* FIXME power down */ 1113 - } 1114 - 1115 - 1116 1093 /*-------------------------------------------------------------------------*/ 1117 1094 1118 1095 /* ··· 1679 1702 #define use_dma 0 1680 1703 #endif 1681 1704 1682 - static void (*musb_phy_callback)(enum musb_vbus_id_status status); 1705 + static int (*musb_phy_callback)(enum musb_vbus_id_status status); 1683 1706 1684 1707 /* 1685 1708 * musb_mailbox - optional phy notifier function ··· 1688 1711 * Optionally gets called from the USB PHY. Note that the USB PHY must be 1689 1712 * disabled at the point the phy_callback is registered or unregistered. 1690 1713 */ 1691 - void musb_mailbox(enum musb_vbus_id_status status) 1714 + int musb_mailbox(enum musb_vbus_id_status status) 1692 1715 { 1693 1716 if (musb_phy_callback) 1694 - musb_phy_callback(status); 1717 + return musb_phy_callback(status); 1695 1718 1719 + return -ENODEV; 1696 1720 }; 1697 1721 EXPORT_SYMBOL_GPL(musb_mailbox); 1698 1722 ··· 2006 2028 musb_readl = musb_default_readl; 2007 2029 musb_writel = musb_default_writel; 2008 2030 2009 - /* We need musb_read/write functions initialized for PM */ 2010 - pm_runtime_use_autosuspend(musb->controller); 2011 - pm_runtime_set_autosuspend_delay(musb->controller, 200); 2012 - pm_runtime_enable(musb->controller); 2013 - 2014 2031 /* The musb_platform_init() call: 2015 2032 * - adjusts musb->mregs 2016 2033 * - sets the musb->isr ··· 2107 2134 if (musb->ops->phy_callback) 2108 2135 musb_phy_callback = musb->ops->phy_callback; 2109 2136 2137 + /* 2138 + * We need musb_read/write functions initialized for PM. 2139 + * Note that at least 2430 glue needs autosuspend delay 2140 + * somewhere above 300 ms for the hardware to idle properly 2141 + * after disconnecting the cable in host mode. Let's use 2142 + * 500 ms for some margin. 
2143 + */ 2144 + pm_runtime_use_autosuspend(musb->controller); 2145 + pm_runtime_set_autosuspend_delay(musb->controller, 500); 2146 + pm_runtime_enable(musb->controller); 2110 2147 pm_runtime_get_sync(musb->controller); 2111 2148 2112 2149 status = usb_phy_init(musb->xceiv); ··· 2220 2237 if (status) 2221 2238 goto fail5; 2222 2239 2223 - pm_runtime_put(musb->controller); 2224 - 2225 - /* 2226 - * For why this is currently needed, see commit 3e43a0725637 2227 - * ("usb: musb: core: add pm_runtime_irq_safe()") 2228 - */ 2229 - pm_runtime_irq_safe(musb->controller); 2240 + pm_runtime_mark_last_busy(musb->controller); 2241 + pm_runtime_put_autosuspend(musb->controller); 2230 2242 2231 2243 return 0; 2232 2244 ··· 2243 2265 usb_phy_shutdown(musb->xceiv); 2244 2266 2245 2267 err_usb_phy_init: 2268 + pm_runtime_dont_use_autosuspend(musb->controller); 2246 2269 pm_runtime_put_sync(musb->controller); 2270 + pm_runtime_disable(musb->controller); 2247 2271 2248 2272 fail2: 2249 2273 if (musb->irq_wake) ··· 2253 2273 musb_platform_exit(musb); 2254 2274 2255 2275 fail1: 2256 - pm_runtime_disable(musb->controller); 2257 2276 dev_err(musb->controller, 2258 2277 "musb_init_controller failed with status %d\n", status); 2259 2278 ··· 2291 2312 { 2292 2313 struct device *dev = &pdev->dev; 2293 2314 struct musb *musb = dev_to_musb(dev); 2315 + unsigned long flags; 2294 2316 2295 2317 /* this gets called on rmmod. 2296 2318 * - Host mode: host may still be active ··· 2299 2319 * - OTG mode: both roles are deactivated (or never-activated) 2300 2320 */ 2301 2321 musb_exit_debugfs(musb); 2302 - musb_shutdown(pdev); 2303 - musb_phy_callback = NULL; 2304 - 2305 - if (musb->dma_controller) 2306 - musb_dma_controller_destroy(musb->dma_controller); 2307 - 2308 - usb_phy_shutdown(musb->xceiv); 2309 2322 2310 2323 cancel_work_sync(&musb->irq_work); 2311 2324 cancel_delayed_work_sync(&musb->finish_resume_work); 2312 2325 cancel_delayed_work_sync(&musb->deassert_reset_work); 2326 + pm_runtime_get_sync(musb->controller); 2327 + musb_host_cleanup(musb); 2328 + musb_gadget_cleanup(musb); 2329 + spin_lock_irqsave(&musb->lock, flags); 2330 + musb_platform_disable(musb); 2331 + musb_generic_disable(musb); 2332 + spin_unlock_irqrestore(&musb->lock, flags); 2333 + musb_writeb(musb->mregs, MUSB_DEVCTL, 0); 2334 + pm_runtime_dont_use_autosuspend(musb->controller); 2335 + pm_runtime_put_sync(musb->controller); 2336 + pm_runtime_disable(musb->controller); 2337 + musb_platform_exit(musb); 2338 + musb_phy_callback = NULL; 2339 + if (musb->dma_controller) 2340 + musb_dma_controller_destroy(musb->dma_controller); 2341 + usb_phy_shutdown(musb->xceiv); 2313 2342 musb_free(musb); 2314 2343 device_init_wakeup(dev, 0); 2315 2344 return 0; ··· 2418 2429 musb_writew(musb_base, MUSB_INTRTXE, musb->intrtxe); 2419 2430 musb_writew(musb_base, MUSB_INTRRXE, musb->intrrxe); 2420 2431 musb_writeb(musb_base, MUSB_INTRUSBE, musb->context.intrusbe); 2421 - musb_writeb(musb_base, MUSB_DEVCTL, musb->context.devctl); 2432 + if (musb->context.devctl & MUSB_DEVCTL_SESSION) 2433 + musb_writeb(musb_base, MUSB_DEVCTL, musb->context.devctl); 2422 2434 2423 2435 for (i = 0; i < musb->config->num_eps; ++i) { 2424 2436 struct musb_hw_ep *hw_ep; ··· 2602 2612 }, 2603 2613 .probe = musb_probe, 2604 2614 .remove = musb_remove, 2605 - .shutdown = musb_shutdown, 2606 2615 }; 2607 2616 2608 2617 module_platform_driver(musb_driver);
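The musb_core.c rework standardizes on runtime-PM autosuspend throughout, and the probe-side ordering is easy to lose in the churn. A minimal sketch of the sequence, assuming a generic struct device *dev:

    /* declare the delay before enabling runtime PM ... */
    pm_runtime_use_autosuspend(dev);
    pm_runtime_set_autosuspend_delay(dev, 500);   /* ms; margin over 300 */
    pm_runtime_enable(dev);
    pm_runtime_get_sync(dev);

    /* ... hardware init runs with the controller powered ... */

    /* ... and balance the get with the autosuspend variants, not put() */
    pm_runtime_mark_last_busy(dev);
    pm_runtime_put_autosuspend(dev);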
+2 -1
drivers/usb/musb/musb_core.h
··· 215 215 dma_addr_t *dma_addr, u32 *len); 216 216 void (*pre_root_reset_end)(struct musb *musb); 217 217 void (*post_root_reset_end)(struct musb *musb); 218 - void (*phy_callback)(enum musb_vbus_id_status status); 218 + int (*phy_callback)(enum musb_vbus_id_status status); 219 219 }; 220 220 221 221 /* ··· 312 312 struct work_struct irq_work; 313 313 struct delayed_work deassert_reset_work; 314 314 struct delayed_work finish_resume_work; 315 + struct delayed_work gadget_work; 315 316 u16 hwvers; 316 317 317 318 u16 intrrxe;
+23 -11
drivers/usb/musb/musb_gadget.c
··· 1656 1656 return usb_phy_set_power(musb->xceiv, mA); 1657 1657 } 1658 1658 1659 + static void musb_gadget_work(struct work_struct *work) 1660 + { 1661 + struct musb *musb; 1662 + unsigned long flags; 1663 + 1664 + musb = container_of(work, struct musb, gadget_work.work); 1665 + pm_runtime_get_sync(musb->controller); 1666 + spin_lock_irqsave(&musb->lock, flags); 1667 + musb_pullup(musb, musb->softconnect); 1668 + spin_unlock_irqrestore(&musb->lock, flags); 1669 + pm_runtime_mark_last_busy(musb->controller); 1670 + pm_runtime_put_autosuspend(musb->controller); 1671 + } 1672 + 1659 1673 static int musb_gadget_pullup(struct usb_gadget *gadget, int is_on) 1660 1674 { 1661 1675 struct musb *musb = gadget_to_musb(gadget); ··· 1677 1663 1678 1664 is_on = !!is_on; 1679 1665 1680 - pm_runtime_get_sync(musb->controller); 1681 - 1682 1666 /* NOTE: this assumes we are sensing vbus; we'd rather 1683 1667 * not pullup unless the B-session is active. 1684 1668 */ 1685 1669 spin_lock_irqsave(&musb->lock, flags); 1686 1670 if (is_on != musb->softconnect) { 1687 1671 musb->softconnect = is_on; 1688 - musb_pullup(musb, is_on); 1672 + schedule_delayed_work(&musb->gadget_work, 0); 1689 1673 } 1690 1674 spin_unlock_irqrestore(&musb->lock, flags); 1691 - 1692 - pm_runtime_put(musb->controller); 1693 1675 1694 1676 return 0; 1695 1677 } ··· 1855 1845 #elif IS_ENABLED(CONFIG_USB_MUSB_GADGET) 1856 1846 musb->g.is_otg = 0; 1857 1847 #endif 1858 - 1848 + INIT_DELAYED_WORK(&musb->gadget_work, musb_gadget_work); 1859 1849 musb_g_init_endpoints(musb); 1860 1850 1861 1851 musb->is_active = 0; ··· 1876 1866 { 1877 1867 if (musb->port_mode == MUSB_PORT_MODE_HOST) 1878 1868 return; 1869 + 1870 + cancel_delayed_work_sync(&musb->gadget_work); 1879 1871 usb_del_gadget_udc(&musb->g); 1880 1872 } 1881 1873 ··· 1926 1914 if (musb->xceiv->last_event == USB_EVENT_ID) 1927 1915 musb_platform_set_vbus(musb, 1); 1928 1916 1929 - if (musb->xceiv->last_event == USB_EVENT_NONE) 1930 - pm_runtime_put(musb->controller); 1917 + pm_runtime_mark_last_busy(musb->controller); 1918 + pm_runtime_put_autosuspend(musb->controller); 1931 1919 1932 1920 return 0; 1933 1921 ··· 1946 1934 struct musb *musb = gadget_to_musb(g); 1947 1935 unsigned long flags; 1948 1936 1949 - if (musb->xceiv->last_event == USB_EVENT_NONE) 1950 - pm_runtime_get_sync(musb->controller); 1937 + pm_runtime_get_sync(musb->controller); 1951 1938 1952 1939 /* 1953 1940 * REVISIT always use otg_set_peripheral() here too; ··· 1974 1963 * that currently misbehaves. 1975 1964 */ 1976 1965 1977 - pm_runtime_put(musb->controller); 1966 + pm_runtime_mark_last_busy(musb->controller); 1967 + pm_runtime_put_autosuspend(musb->controller); 1978 1968 1979 1969 return 0; 1980 1970 }
+37 -31
drivers/usb/musb/musb_host.c
··· 434 434 } 435 435 } 436 436 437 - if (qh != NULL && qh->is_ready) { 437 + /* 438 + * The pipe must be broken if current urb->status is set, so don't 439 + * start next urb. 440 + * TODO: to minimize the risk of regression, only check urb->status 441 + * for RX, until we have a test case to understand the behavior of TX. 442 + */ 443 + if ((!status || !is_in) && qh && qh->is_ready) { 438 444 dev_dbg(musb->controller, "... next ep%d %cX urb %p\n", 439 445 hw_ep->epnum, is_in ? 'R' : 'T', next_urb(qh)); 440 446 musb_start_urb(musb, is_in, qh); ··· 600 594 musb_writew(ep->regs, MUSB_TXCSR, 0); 601 595 602 596 /* scrub all previous state, clearing toggle */ 603 - } else { 604 - csr = musb_readw(ep->regs, MUSB_RXCSR); 605 - if (csr & MUSB_RXCSR_RXPKTRDY) 606 - WARNING("rx%d, packet/%d ready?\n", ep->epnum, 607 - musb_readw(ep->regs, MUSB_RXCOUNT)); 608 - 609 - musb_h_flush_rxfifo(ep, MUSB_RXCSR_CLRDATATOG); 610 597 } 598 + csr = musb_readw(ep->regs, MUSB_RXCSR); 599 + if (csr & MUSB_RXCSR_RXPKTRDY) 600 + WARNING("rx%d, packet/%d ready?\n", ep->epnum, 601 + musb_readw(ep->regs, MUSB_RXCOUNT)); 602 + 603 + musb_h_flush_rxfifo(ep, MUSB_RXCSR_CLRDATATOG); 611 604 612 605 /* target addr and (for multipoint) hub addr/port */ 613 606 if (musb->is_multipoint) { ··· 632 627 ep->rx_reinit = 0; 633 628 } 634 629 635 - static int musb_tx_dma_set_mode_mentor(struct dma_controller *dma, 630 + static void musb_tx_dma_set_mode_mentor(struct dma_controller *dma, 636 631 struct musb_hw_ep *hw_ep, struct musb_qh *qh, 637 632 struct urb *urb, u32 offset, 638 633 u32 *length, u8 *mode) ··· 669 664 } 670 665 channel->desired_mode = *mode; 671 666 musb_writew(epio, MUSB_TXCSR, csr); 672 - 673 - return 0; 674 667 } 675 668 676 - static int musb_tx_dma_set_mode_cppi_tusb(struct dma_controller *dma, 677 - struct musb_hw_ep *hw_ep, 678 - struct musb_qh *qh, 679 - struct urb *urb, 680 - u32 offset, 681 - u32 *length, 682 - u8 *mode) 669 + static void musb_tx_dma_set_mode_cppi_tusb(struct dma_controller *dma, 670 + struct musb_hw_ep *hw_ep, 671 + struct musb_qh *qh, 672 + struct urb *urb, 673 + u32 offset, 674 + u32 *length, 675 + u8 *mode) 683 676 { 684 677 struct dma_channel *channel = hw_ep->tx_channel; 685 - 686 - if (!is_cppi_enabled(hw_ep->musb) && !tusb_dma_omap(hw_ep->musb)) 687 - return -ENODEV; 688 678 689 679 channel->actual_len = 0; 690 680 ··· 688 688 * to identify the zero-length-final-packet case. 689 689 */ 690 690 *mode = (urb->transfer_flags & URB_ZERO_PACKET) ? 1 : 0; 691 - 692 - return 0; 693 691 } 694 692 695 693 static bool musb_tx_dma_program(struct dma_controller *dma, ··· 697 699 struct dma_channel *channel = hw_ep->tx_channel; 698 700 u16 pkt_size = qh->maxpacket; 699 701 u8 mode; 700 - int res; 701 702 702 703 if (musb_dma_inventra(hw_ep->musb) || musb_dma_ux500(hw_ep->musb)) 703 - res = musb_tx_dma_set_mode_mentor(dma, hw_ep, qh, urb, 704 - offset, &length, &mode); 704 + musb_tx_dma_set_mode_mentor(dma, hw_ep, qh, urb, offset, 705 + &length, &mode); 706 + else if (is_cppi_enabled(hw_ep->musb) || tusb_dma_omap(hw_ep->musb)) 707 + musb_tx_dma_set_mode_cppi_tusb(dma, hw_ep, qh, urb, offset, 708 + &length, &mode); 705 709 else 706 - res = musb_tx_dma_set_mode_cppi_tusb(dma, hw_ep, qh, urb, 707 - offset, &length, &mode); 708 - if (res) 709 710 return false; 710 711 711 712 qh->segsize = length; ··· 992 995 if (is_in) { 993 996 dma = is_dma_capable() ? 
ep->rx_channel : NULL; 994 997 995 - /* clear nak timeout bit */ 998 + /* 999 + * Need to stop the transaction by clearing REQPKT first 1000 + * then the NAK Timeout bit ref MUSBMHDRC USB 2.0 HIGH-SPEED 1001 + * DUAL-ROLE CONTROLLER Programmer's Guide, section 9.2.2 1002 + */ 996 1003 rx_csr = musb_readw(epio, MUSB_RXCSR); 997 1004 rx_csr |= MUSB_RXCSR_H_WZC_BITS; 1005 + rx_csr &= ~MUSB_RXCSR_H_REQPKT; 1006 + musb_writew(epio, MUSB_RXCSR, rx_csr); 998 1007 rx_csr &= ~MUSB_RXCSR_DATAERROR; 999 1008 musb_writew(epio, MUSB_RXCSR, rx_csr); 1000 1009 ··· 1554 1551 struct urb *urb, 1555 1552 size_t len) 1556 1553 { 1557 - struct dma_channel *channel = hw_ep->tx_channel; 1554 + struct dma_channel *channel = hw_ep->rx_channel; 1558 1555 void __iomem *epio = hw_ep->regs; 1559 1556 dma_addr_t *buf; 1560 1557 u32 length, res; ··· 1872 1869 1873 1870 status = -EPROTO; 1874 1871 musb_writeb(epio, MUSB_RXINTERVAL, 0); 1872 + 1873 + rx_csr &= ~MUSB_RXCSR_H_ERROR; 1874 + musb_writew(epio, MUSB_RXCSR, rx_csr); 1875 1875 1876 1876 } else if (rx_csr & MUSB_RXCSR_DATAERROR) { 1877 1877
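The NAK-timeout fix in musb_host.c hinges on write ordering from the MUSB programmer's guide: REQPKT must be cleared in its own register write before the DATAERROR (NAK timeout) bit is cleared, and folding both clears into one write is exactly the bug being fixed. Reduced to the two writes, with step comments (same symbols as in the hunk):

    rx_csr = musb_readw(epio, MUSB_RXCSR);
    rx_csr |= MUSB_RXCSR_H_WZC_BITS;
    rx_csr &= ~MUSB_RXCSR_H_REQPKT;
    musb_writew(epio, MUSB_RXCSR, rx_csr);  /* step 1: stop the IN request */
    rx_csr &= ~MUSB_RXCSR_DATAERROR;
    musb_writew(epio, MUSB_RXCSR, rx_csr);  /* step 2: clear the NAK timeout */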
+89 -168
drivers/usb/musb/omap2430.c
··· 49 49 enum musb_vbus_id_status status; 50 50 struct work_struct omap_musb_mailbox_work; 51 51 struct device *control_otghs; 52 + bool cable_connected; 53 + bool enabled; 54 + bool powered; 52 55 }; 53 56 #define glue_to_musb(g) platform_get_drvdata(g->musb) 54 57 55 58 static struct omap2430_glue *_glue; 56 - 57 - static struct timer_list musb_idle_timer; 58 - 59 - static void musb_do_idle(unsigned long _musb) 60 - { 61 - struct musb *musb = (void *)_musb; 62 - unsigned long flags; 63 - u8 power; 64 - u8 devctl; 65 - 66 - spin_lock_irqsave(&musb->lock, flags); 67 - 68 - switch (musb->xceiv->otg->state) { 69 - case OTG_STATE_A_WAIT_BCON: 70 - 71 - devctl = musb_readb(musb->mregs, MUSB_DEVCTL); 72 - if (devctl & MUSB_DEVCTL_BDEVICE) { 73 - musb->xceiv->otg->state = OTG_STATE_B_IDLE; 74 - MUSB_DEV_MODE(musb); 75 - } else { 76 - musb->xceiv->otg->state = OTG_STATE_A_IDLE; 77 - MUSB_HST_MODE(musb); 78 - } 79 - break; 80 - case OTG_STATE_A_SUSPEND: 81 - /* finish RESUME signaling? */ 82 - if (musb->port1_status & MUSB_PORT_STAT_RESUME) { 83 - power = musb_readb(musb->mregs, MUSB_POWER); 84 - power &= ~MUSB_POWER_RESUME; 85 - dev_dbg(musb->controller, "root port resume stopped, power %02x\n", power); 86 - musb_writeb(musb->mregs, MUSB_POWER, power); 87 - musb->is_active = 1; 88 - musb->port1_status &= ~(USB_PORT_STAT_SUSPEND 89 - | MUSB_PORT_STAT_RESUME); 90 - musb->port1_status |= USB_PORT_STAT_C_SUSPEND << 16; 91 - usb_hcd_poll_rh_status(musb->hcd); 92 - /* NOTE: it might really be A_WAIT_BCON ... */ 93 - musb->xceiv->otg->state = OTG_STATE_A_HOST; 94 - } 95 - break; 96 - case OTG_STATE_A_HOST: 97 - devctl = musb_readb(musb->mregs, MUSB_DEVCTL); 98 - if (devctl & MUSB_DEVCTL_BDEVICE) 99 - musb->xceiv->otg->state = OTG_STATE_B_IDLE; 100 - else 101 - musb->xceiv->otg->state = OTG_STATE_A_WAIT_BCON; 102 - default: 103 - break; 104 - } 105 - spin_unlock_irqrestore(&musb->lock, flags); 106 - } 107 - 108 - 109 - static void omap2430_musb_try_idle(struct musb *musb, unsigned long timeout) 110 - { 111 - unsigned long default_timeout = jiffies + msecs_to_jiffies(3); 112 - static unsigned long last_timer; 113 - 114 - if (timeout == 0) 115 - timeout = default_timeout; 116 - 117 - /* Never idle if active, or when VBUS timeout is not set as host */ 118 - if (musb->is_active || ((musb->a_wait_bcon == 0) 119 - && (musb->xceiv->otg->state == OTG_STATE_A_WAIT_BCON))) { 120 - dev_dbg(musb->controller, "%s active, deleting timer\n", 121 - usb_otg_state_string(musb->xceiv->otg->state)); 122 - del_timer(&musb_idle_timer); 123 - last_timer = jiffies; 124 - return; 125 - } 126 - 127 - if (time_after(last_timer, timeout)) { 128 - if (!timer_pending(&musb_idle_timer)) 129 - last_timer = timeout; 130 - else { 131 - dev_dbg(musb->controller, "Longer idle timer already pending, ignoring\n"); 132 - return; 133 - } 134 - } 135 - last_timer = timeout; 136 - 137 - dev_dbg(musb->controller, "%s inactive, for idle timer for %lu ms\n", 138 - usb_otg_state_string(musb->xceiv->otg->state), 139 - (unsigned long)jiffies_to_msecs(timeout - jiffies)); 140 - mod_timer(&musb_idle_timer, timeout); 141 - } 142 59 143 60 static void omap2430_musb_set_vbus(struct musb *musb, int is_on) 144 61 { ··· 122 205 musb_readb(musb->mregs, MUSB_DEVCTL)); 123 206 } 124 207 125 - static int omap2430_musb_set_mode(struct musb *musb, u8 musb_mode) 126 - { 127 - u8 devctl = musb_readb(musb->mregs, MUSB_DEVCTL); 128 - 129 - devctl |= MUSB_DEVCTL_SESSION; 130 - musb_writeb(musb->mregs, MUSB_DEVCTL, devctl); 131 - 132 - return 0; 133 - } 134 - 135 208 
static inline void omap2430_low_level_exit(struct musb *musb) 136 209 { 137 210 u32 l; ··· 141 234 musb_writel(musb->mregs, OTG_FORCESTDBY, l); 142 235 } 143 236 144 - static void omap2430_musb_mailbox(enum musb_vbus_id_status status) 237 + /* 238 + * We can get multiple cable events so we need to keep track 239 + * of the power state. Only keep power enabled if USB cable is 240 + * connected and a gadget is started. 241 + */ 242 + static void omap2430_set_power(struct musb *musb, bool enabled, bool cable) 243 + { 244 + struct device *dev = musb->controller; 245 + struct omap2430_glue *glue = dev_get_drvdata(dev->parent); 246 + bool power_up; 247 + int res; 248 + 249 + if (glue->enabled != enabled) 250 + glue->enabled = enabled; 251 + 252 + if (glue->cable_connected != cable) 253 + glue->cable_connected = cable; 254 + 255 + power_up = glue->enabled && glue->cable_connected; 256 + if (power_up == glue->powered) { 257 + dev_warn(musb->controller, "power state already %i\n", 258 + power_up); 259 + return; 260 + } 261 + 262 + glue->powered = power_up; 263 + 264 + if (power_up) { 265 + res = pm_runtime_get_sync(musb->controller); 266 + if (res < 0) { 267 + dev_err(musb->controller, "could not enable: %i", res); 268 + glue->powered = false; 269 + } 270 + } else { 271 + pm_runtime_mark_last_busy(musb->controller); 272 + pm_runtime_put_autosuspend(musb->controller); 273 + } 274 + } 275 + 276 + static int omap2430_musb_mailbox(enum musb_vbus_id_status status) 145 277 { 146 278 struct omap2430_glue *glue = _glue; 147 279 148 280 if (!glue) { 149 281 pr_err("%s: musb core is not yet initialized\n", __func__); 150 - return; 282 + return -EPROBE_DEFER; 151 283 } 152 284 glue->status = status; 153 285 154 286 if (!glue_to_musb(glue)) { 155 287 pr_err("%s: musb core is not yet ready\n", __func__); 156 - return; 288 + return -EPROBE_DEFER; 157 289 } 158 290 159 291 schedule_work(&glue->omap_musb_mailbox_work); 292 + 293 + return 0; 160 294 } 161 295 162 296 static void omap_musb_set_mailbox(struct omap2430_glue *glue) ··· 207 259 struct musb_hdrc_platform_data *pdata = dev_get_platdata(dev); 208 260 struct omap_musb_board_data *data = pdata->board_data; 209 261 struct usb_otg *otg = musb->xceiv->otg; 262 + bool cable_connected; 263 + 264 + cable_connected = ((glue->status == MUSB_ID_GROUND) || 265 + (glue->status == MUSB_VBUS_VALID)); 266 + 267 + if (cable_connected) 268 + omap2430_set_power(musb, glue->enabled, cable_connected); 210 269 211 270 switch (glue->status) { 212 271 case MUSB_ID_GROUND: ··· 223 268 musb->xceiv->otg->state = OTG_STATE_A_IDLE; 224 269 musb->xceiv->last_event = USB_EVENT_ID; 225 270 if (musb->gadget_driver) { 226 - pm_runtime_get_sync(dev); 227 271 omap_control_usb_set_mode(glue->control_otghs, 228 272 USB_MODE_HOST); 229 273 omap2430_musb_set_vbus(musb, 1); ··· 235 281 otg->default_a = false; 236 282 musb->xceiv->otg->state = OTG_STATE_B_IDLE; 237 283 musb->xceiv->last_event = USB_EVENT_VBUS; 238 - if (musb->gadget_driver) 239 - pm_runtime_get_sync(dev); 240 284 omap_control_usb_set_mode(glue->control_otghs, USB_MODE_DEVICE); 241 285 break; 242 286 ··· 243 291 dev_dbg(dev, "VBUS Disconnect\n"); 244 292 245 293 musb->xceiv->last_event = USB_EVENT_NONE; 246 - if (musb->gadget_driver) { 294 + if (musb->gadget_driver) 247 295 omap2430_musb_set_vbus(musb, 0); 248 - pm_runtime_mark_last_busy(dev); 249 - pm_runtime_put_autosuspend(dev); 250 - } 251 296 252 297 if (data->interface_type == MUSB_INTERFACE_UTMI) 253 298 otg_set_vbus(musb->xceiv->otg, 0); ··· 256 307 dev_dbg(dev, "ID 
float\n"); 257 308 } 258 309 310 + if (!cable_connected) 311 + omap2430_set_power(musb, glue->enabled, cable_connected); 312 + 259 313 atomic_notifier_call_chain(&musb->xceiv->notifier, 260 314 musb->xceiv->last_event, NULL); 261 315 } ··· 268 316 { 269 317 struct omap2430_glue *glue = container_of(mailbox_work, 270 318 struct omap2430_glue, omap_musb_mailbox_work); 271 - struct musb *musb = glue_to_musb(glue); 272 - struct device *dev = musb->controller; 273 319 274 - pm_runtime_get_sync(dev); 275 320 omap_musb_set_mailbox(glue); 276 - pm_runtime_mark_last_busy(dev); 277 - pm_runtime_put_autosuspend(dev); 278 321 } 279 322 280 323 static irqreturn_t omap2430_musb_interrupt(int irq, void *__hci) ··· 336 389 return PTR_ERR(musb->phy); 337 390 } 338 391 musb->isr = omap2430_musb_interrupt; 339 - 340 - /* 341 - * Enable runtime PM for musb parent (this driver). We can't 342 - * do it earlier as struct musb is not yet allocated and we 343 - * need to touch the musb registers for runtime PM. 344 - */ 345 - pm_runtime_enable(glue->dev); 346 - status = pm_runtime_get_sync(glue->dev); 347 - if (status < 0) 348 - goto err1; 349 - 350 - status = pm_runtime_get_sync(dev); 351 - if (status < 0) { 352 - dev_err(dev, "pm_runtime_get_sync FAILED %d\n", status); 353 - pm_runtime_put_sync(glue->dev); 354 - goto err1; 355 - } 392 + phy_init(musb->phy); 356 393 357 394 l = musb_readl(musb->mregs, OTG_INTERFSEL); 358 395 ··· 358 427 musb_readl(musb->mregs, OTG_INTERFSEL), 359 428 musb_readl(musb->mregs, OTG_SIMENABLE)); 360 429 361 - setup_timer(&musb_idle_timer, musb_do_idle, (unsigned long) musb); 362 - 363 430 if (glue->status != MUSB_UNKNOWN) 364 431 omap_musb_set_mailbox(glue); 365 432 366 - phy_init(musb->phy); 367 - phy_power_on(musb->phy); 368 - 369 - pm_runtime_put_noidle(musb->controller); 370 - pm_runtime_put_noidle(glue->dev); 371 433 return 0; 372 - 373 - err1: 374 - return status; 375 434 } 376 435 377 436 static void omap2430_musb_enable(struct musb *musb) ··· 372 451 struct omap2430_glue *glue = dev_get_drvdata(dev->parent); 373 452 struct musb_hdrc_platform_data *pdata = dev_get_platdata(dev); 374 453 struct omap_musb_board_data *data = pdata->board_data; 454 + 455 + if (!WARN_ON(!musb->phy)) 456 + phy_power_on(musb->phy); 457 + 458 + omap2430_set_power(musb, true, glue->cable_connected); 375 459 376 460 switch (glue->status) { 377 461 ··· 413 487 struct device *dev = musb->controller; 414 488 struct omap2430_glue *glue = dev_get_drvdata(dev->parent); 415 489 490 + if (!WARN_ON(!musb->phy)) 491 + phy_power_off(musb->phy); 492 + 416 493 if (glue->status != MUSB_UNKNOWN) 417 494 omap_control_usb_set_mode(glue->control_otghs, 418 495 USB_MODE_DISCONNECT); 496 + 497 + omap2430_set_power(musb, false, glue->cable_connected); 419 498 } 420 499 421 500 static int omap2430_musb_exit(struct musb *musb) 422 501 { 423 - del_timer_sync(&musb_idle_timer); 502 + struct device *dev = musb->controller; 503 + struct omap2430_glue *glue = dev_get_drvdata(dev->parent); 424 504 425 505 omap2430_low_level_exit(musb); 426 - phy_power_off(musb->phy); 427 506 phy_exit(musb->phy); 507 + musb->phy = NULL; 508 + cancel_work_sync(&glue->omap_musb_mailbox_work); 428 509 429 510 return 0; 430 511 } ··· 444 511 #endif 445 512 .init = omap2430_musb_init, 446 513 .exit = omap2430_musb_exit, 447 - 448 - .set_mode = omap2430_musb_set_mode, 449 - .try_idle = omap2430_musb_try_idle, 450 514 451 515 .set_vbus = omap2430_musb_set_vbus, 452 516 ··· 569 639 goto err2; 570 640 } 571 641 572 - /* 573 - * Note that we cannot enable PM 
runtime yet for this 574 - * driver as we need struct musb initialized first. 575 - * See omap2430_musb_init above. 576 - */ 642 + pm_runtime_enable(glue->dev); 643 + pm_runtime_use_autosuspend(glue->dev); 644 + pm_runtime_set_autosuspend_delay(glue->dev, 500); 577 645 578 646 ret = platform_device_add(musb); 579 647 if (ret) { ··· 590 662 591 663 static int omap2430_remove(struct platform_device *pdev) 592 664 { 593 - struct omap2430_glue *glue = platform_get_drvdata(pdev); 665 + struct omap2430_glue *glue = platform_get_drvdata(pdev); 666 + struct musb *musb = glue_to_musb(glue); 594 667 595 668 pm_runtime_get_sync(glue->dev); 596 - cancel_work_sync(&glue->omap_musb_mailbox_work); 597 669 platform_device_unregister(glue->musb); 670 + omap2430_set_power(musb, false, false); 598 671 pm_runtime_put_sync(glue->dev); 672 + pm_runtime_dont_use_autosuspend(glue->dev); 599 673 pm_runtime_disable(glue->dev); 600 674 601 675 return 0; ··· 610 680 struct omap2430_glue *glue = dev_get_drvdata(dev); 611 681 struct musb *musb = glue_to_musb(glue); 612 682 613 - if (musb) { 614 - musb->context.otg_interfsel = musb_readl(musb->mregs, 615 - OTG_INTERFSEL); 683 + if (!musb) 684 + return 0; 616 685 617 - omap2430_low_level_exit(musb); 618 - } 686 + musb->context.otg_interfsel = musb_readl(musb->mregs, 687 + OTG_INTERFSEL); 688 + 689 + omap2430_low_level_exit(musb); 619 690 620 691 return 0; 621 692 } ··· 627 696 struct musb *musb = glue_to_musb(glue); 628 697 629 698 if (!musb) 630 - return -EPROBE_DEFER; 699 + return 0; 631 700 632 701 omap2430_low_level_init(musb); 633 702 musb_writel(musb->mregs, OTG_INTERFSEL, ··· 669 738 }, 670 739 }; 671 740 741 + module_platform_driver(omap2430_driver); 742 + 672 743 MODULE_DESCRIPTION("OMAP2PLUS MUSB Glue Layer"); 673 744 MODULE_AUTHOR("Felipe Balbi <balbi@ti.com>"); 674 745 MODULE_LICENSE("GPL v2"); 675 - 676 - static int __init omap2430_init(void) 677 - { 678 - return platform_driver_register(&omap2430_driver); 679 - } 680 - subsys_initcall(omap2430_init); 681 - 682 - static void __exit omap2430_exit(void) 683 - { 684 - platform_driver_unregister(&omap2430_driver); 685 - } 686 - module_exit(omap2430_exit);
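The omap2430.c rewrite gates runtime PM on two independent conditions, and the glue must stay balanced across repeated cable events. A condensed sketch of the omap2430_set_power() logic, with a hypothetical glue struct standing in for the real one:

    struct glue_state {
            struct device *dev;
            bool enabled;           /* gadget started */
            bool cable_connected;   /* VBUS or ID event seen */
            bool powered;           /* do we currently hold a PM reference? */
    };

    static void set_power(struct glue_state *glue, bool enabled, bool cable)
    {
            bool power_up;

            glue->enabled = enabled;
            glue->cable_connected = cable;

            power_up = glue->enabled && glue->cable_connected;
            if (power_up == glue->powered)
                    return;         /* duplicate event: stay balanced */

            glue->powered = power_up;
            if (power_up)
                    pm_runtime_get_sync(glue->dev);
            else
                    pm_runtime_put_autosuspend(glue->dev);
    }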
+34 -20
drivers/usb/musb/sunxi.c
··· 80 80 81 81 struct sunxi_glue { 82 82 struct device *dev; 83 - struct platform_device *musb; 83 + struct musb *musb; 84 + struct platform_device *musb_pdev; 84 85 struct clk *clk; 85 86 struct reset_control *rst; 86 87 struct phy *phy; ··· 103 102 return; 104 103 105 104 if (test_and_clear_bit(SUNXI_MUSB_FL_HOSTMODE_PEND, &glue->flags)) { 106 - struct musb *musb = platform_get_drvdata(glue->musb); 105 + struct musb *musb = glue->musb; 107 106 unsigned long flags; 108 107 u8 devctl; 109 108 ··· 113 112 if (test_bit(SUNXI_MUSB_FL_HOSTMODE, &glue->flags)) { 114 113 set_bit(SUNXI_MUSB_FL_VBUS_ON, &glue->flags); 115 114 musb->xceiv->otg->default_a = 1; 116 - musb->xceiv->otg->state = OTG_STATE_A_IDLE; 115 + musb->xceiv->otg->state = OTG_STATE_A_WAIT_VRISE; 117 116 MUSB_HST_MODE(musb); 118 117 devctl |= MUSB_DEVCTL_SESSION; 119 118 } else { ··· 146 145 { 147 146 struct sunxi_glue *glue = dev_get_drvdata(musb->controller->parent); 148 147 149 - if (is_on) 148 + if (is_on) { 150 149 set_bit(SUNXI_MUSB_FL_VBUS_ON, &glue->flags); 151 - else 150 + musb->xceiv->otg->state = OTG_STATE_A_WAIT_VRISE; 151 + } else { 152 152 clear_bit(SUNXI_MUSB_FL_VBUS_ON, &glue->flags); 153 + } 153 154 154 155 schedule_work(&glue->work); 155 156 } ··· 267 264 if (ret) 268 265 goto error_unregister_notifier; 269 266 270 - if (musb->port_mode == MUSB_PORT_MODE_HOST) { 271 - ret = phy_power_on(glue->phy); 272 - if (ret) 273 - goto error_phy_exit; 274 - set_bit(SUNXI_MUSB_FL_PHY_ON, &glue->flags); 275 - /* Stop musb work from turning vbus off again */ 276 - set_bit(SUNXI_MUSB_FL_VBUS_ON, &glue->flags); 277 - } 278 - 279 267 musb->isr = sunxi_musb_interrupt; 280 268 281 269 /* Stop the musb-core from doing runtime pm (not supported on sunxi) */ ··· 274 280 275 281 return 0; 276 282 277 - error_phy_exit: 278 - phy_exit(glue->phy); 279 283 error_unregister_notifier: 280 284 if (musb->port_mode == MUSB_PORT_MODE_DUAL_ROLE) 281 285 extcon_unregister_notifier(glue->extcon, EXTCON_USB_HOST, ··· 315 323 return 0; 316 324 } 317 325 326 + static int sunxi_set_mode(struct musb *musb, u8 mode) 327 + { 328 + struct sunxi_glue *glue = dev_get_drvdata(musb->controller->parent); 329 + int ret; 330 + 331 + if (mode == MUSB_HOST) { 332 + ret = phy_power_on(glue->phy); 333 + if (ret) 334 + return ret; 335 + 336 + set_bit(SUNXI_MUSB_FL_PHY_ON, &glue->flags); 337 + /* Stop musb work from turning vbus off again */ 338 + set_bit(SUNXI_MUSB_FL_VBUS_ON, &glue->flags); 339 + musb->xceiv->otg->state = OTG_STATE_A_WAIT_VRISE; 340 + } 341 + 342 + return 0; 343 + } 344 + 318 345 static void sunxi_musb_enable(struct musb *musb) 319 346 { 320 347 struct sunxi_glue *glue = dev_get_drvdata(musb->controller->parent); 348 + 349 + glue->musb = musb; 321 350 322 351 /* musb_core does not call us in a balanced manner */ 323 352 if (test_and_set_bit(SUNXI_MUSB_FL_ENABLED, &glue->flags)) ··· 582 569 .exit = sunxi_musb_exit, 583 570 .enable = sunxi_musb_enable, 584 571 .disable = sunxi_musb_disable, 572 + .set_mode = sunxi_set_mode, 585 573 .fifo_offset = sunxi_musb_fifo_offset, 586 574 .ep_offset = sunxi_musb_ep_offset, 587 575 .busctl_offset = sunxi_musb_busctl_offset, ··· 735 721 pinfo.data = &pdata; 736 722 pinfo.size_data = sizeof(pdata); 737 723 738 - glue->musb = platform_device_register_full(&pinfo); 739 - if (IS_ERR(glue->musb)) { 740 - ret = PTR_ERR(glue->musb); 724 + glue->musb_pdev = platform_device_register_full(&pinfo); 725 + if (IS_ERR(glue->musb_pdev)) { 726 + ret = PTR_ERR(glue->musb_pdev); 741 727 dev_err(&pdev->dev, "Error registering musb 
dev: %d\n", ret); 742 728 goto err_unregister_usb_phy; 743 729 } ··· 754 740 struct sunxi_glue *glue = platform_get_drvdata(pdev); 755 741 struct platform_device *usb_phy = glue->usb_phy; 756 742 757 - platform_device_unregister(glue->musb); /* Frees glue ! */ 743 + platform_device_unregister(glue->musb_pdev); 758 744 usb_phy_generic_unregister(usb_phy); 759 745 760 746 return 0;
+24 -5
drivers/usb/phy/phy-twl6030-usb.c
··· 97 97 98 98 struct regulator *usb3v3; 99 99 100 + /* used to check initial cable status after probe */ 101 + struct delayed_work get_status_work; 102 + 100 103 /* used to set vbus, in atomic path */ 101 104 struct work_struct set_vbus_work; 102 105 ··· 230 227 twl->asleep = 1; 231 228 status = MUSB_VBUS_VALID; 232 229 twl->linkstat = status; 233 - musb_mailbox(status); 230 + ret = musb_mailbox(status); 231 + if (ret) 232 + twl->linkstat = MUSB_UNKNOWN; 234 233 } else { 235 234 if (twl->linkstat != MUSB_UNKNOWN) { 236 235 status = MUSB_VBUS_OFF; 237 236 twl->linkstat = status; 238 - musb_mailbox(status); 237 + ret = musb_mailbox(status); 238 + if (ret) 239 + twl->linkstat = MUSB_UNKNOWN; 239 240 if (twl->asleep) { 240 241 regulator_disable(twl->usb3v3); 241 242 twl->asleep = 0; ··· 271 264 twl6030_writeb(twl, TWL_MODULE_USB, 0x10, USB_ID_INT_EN_HI_SET); 272 265 status = MUSB_ID_GROUND; 273 266 twl->linkstat = status; 274 - musb_mailbox(status); 267 + ret = musb_mailbox(status); 268 + if (ret) 269 + twl->linkstat = MUSB_UNKNOWN; 275 270 } else { 276 271 twl6030_writeb(twl, TWL_MODULE_USB, 0x10, USB_ID_INT_EN_HI_CLR); 277 272 twl6030_writeb(twl, TWL_MODULE_USB, 0x1, USB_ID_INT_EN_HI_SET); ··· 281 272 twl6030_writeb(twl, TWL_MODULE_USB, status, USB_ID_INT_LATCH_CLR); 282 273 283 274 return IRQ_HANDLED; 275 + } 276 + 277 + static void twl6030_status_work(struct work_struct *work) 278 + { 279 + struct twl6030_usb *twl = container_of(work, struct twl6030_usb, 280 + get_status_work.work); 281 + 282 + twl6030_usb_irq(twl->irq2, twl); 283 + twl6030_usbotg_irq(twl->irq1, twl); 284 284 } 285 285 286 286 static int twl6030_enable_irq(struct twl6030_usb *twl) ··· 302 284 REG_INT_MSK_LINE_C); 303 285 twl6030_interrupt_unmask(TWL6030_CHARGER_CTRL_INT_MASK, 304 286 REG_INT_MSK_STS_C); 305 - twl6030_usb_irq(twl->irq2, twl); 306 - twl6030_usbotg_irq(twl->irq1, twl); 307 287 308 288 return 0; 309 289 } ··· 387 371 dev_warn(&pdev->dev, "could not create sysfs file\n"); 388 372 389 373 INIT_WORK(&twl->set_vbus_work, otg_set_vbus_work); 374 + INIT_DELAYED_WORK(&twl->get_status_work, twl6030_status_work); 390 375 391 376 status = request_threaded_irq(twl->irq1, NULL, twl6030_usbotg_irq, 392 377 IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING | IRQF_ONESHOT, ··· 412 395 413 396 twl->asleep = 0; 414 397 twl6030_enable_irq(twl); 398 + schedule_delayed_work(&twl->get_status_work, HZ); 415 399 dev_info(&pdev->dev, "Initialized TWL6030 USB module\n"); 416 400 417 401 return 0; ··· 422 404 { 423 405 struct twl6030_usb *twl = platform_get_drvdata(pdev); 424 406 407 + cancel_delayed_work(&twl->get_status_work); 425 408 twl6030_interrupt_mask(TWL6030_USBOTG_INT_MASK, 426 409 REG_INT_MSK_LINE_C); 427 410 twl6030_interrupt_mask(TWL6030_USBOTG_INT_MASK,
+1
drivers/usb/serial/mos7720.c
··· 2007 2007 urblist_entry) 2008 2008 usb_unlink_urb(urbtrack->urb); 2009 2009 spin_unlock_irqrestore(&mos_parport->listlock, flags); 2010 + parport_del_port(mos_parport->pp); 2010 2011 2011 2012 kref_put(&mos_parport->ref_count, destroy_mos_parport); 2012 2013 }
+1 -1
drivers/usb/storage/uas.c
··· 836 836 if (devinfo->flags & US_FL_BROKEN_FUA) 837 837 sdev->broken_fua = 1; 838 838 839 + scsi_change_queue_depth(sdev, devinfo->qdepth - 2); 839 840 return 0; 840 841 } 841 842 ··· 849 848 .slave_configure = uas_slave_configure, 850 849 .eh_abort_handler = uas_eh_abort_handler, 851 850 .eh_bus_reset_handler = uas_eh_bus_reset_handler, 852 - .can_queue = MAX_CMNDS, 853 851 .this_id = -1, 854 852 .sg_tablesize = SG_NONE, 855 853 .skip_settle_delay = 1,
+1 -1
drivers/usb/usbip/vhci_hcd.c
··· 941 941 942 942 static int vhci_get_frame_number(struct usb_hcd *hcd) 943 943 { 944 - pr_err("Not yet implemented\n"); 944 + dev_err_ratelimited(&hcd->self.root_hub->dev, "Not yet implemented\n"); 945 945 return 0; 946 946 } 947 947
+1 -1
drivers/watchdog/Kconfig
··· 746 746 747 747 config EBC_C384_WDT 748 748 tristate "WinSystems EBC-C384 Watchdog Timer" 749 - depends on X86 && ISA 749 + depends on X86 && ISA_BUS_API 750 750 select WATCHDOG_CORE 751 751 help 752 752 Enables watchdog timer support for the watchdog timer on the
+1 -1
fs/btrfs/check-integrity.c
··· 2645 2645 * This algorithm is recursive because the amount of used stack space 2646 2646 * is very small and the max recursion depth is limited. 2647 2647 */ 2648 - indent_add = sprintf(buf, "%c-%llu(%s/%llu/%d)", 2648 + indent_add = sprintf(buf, "%c-%llu(%s/%llu/%u)", 2649 2649 btrfsic_get_block_type(state, block), 2650 2650 block->logical_bytenr, block->dev_state->name, 2651 2651 block->dev_bytenr, block->mirror_num);
+6 -1
fs/btrfs/ctree.c
··· 1554 1554 trans->transid, root->fs_info->generation); 1555 1555 1556 1556 if (!should_cow_block(trans, root, buf)) { 1557 + trans->dirty = true; 1557 1558 *cow_ret = buf; 1558 1559 return 0; 1559 1560 } ··· 2513 2512 if (!btrfs_buffer_uptodate(tmp, 0, 0)) 2514 2513 ret = -EIO; 2515 2514 free_extent_buffer(tmp); 2515 + } else { 2516 + ret = PTR_ERR(tmp); 2516 2517 } 2517 2518 return ret; 2518 2519 } ··· 2778 2775 * then we don't want to set the path blocking, 2779 2776 * so we test it here 2780 2777 */ 2781 - if (!should_cow_block(trans, root, b)) 2778 + if (!should_cow_block(trans, root, b)) { 2779 + trans->dirty = true; 2782 2780 goto cow_done; 2781 + } 2783 2782 2784 2783 /* 2785 2784 * must have write locks on this node and the
+16 -20
fs/btrfs/disk-io.c
··· 1098 1098 struct inode *btree_inode = root->fs_info->btree_inode; 1099 1099 1100 1100 buf = btrfs_find_create_tree_block(root, bytenr); 1101 - if (!buf) 1101 + if (IS_ERR(buf)) 1102 1102 return; 1103 1103 read_extent_buffer_pages(&BTRFS_I(btree_inode)->io_tree, 1104 1104 buf, 0, WAIT_NONE, btree_get_extent, 0); ··· 1114 1114 int ret; 1115 1115 1116 1116 buf = btrfs_find_create_tree_block(root, bytenr); 1117 - if (!buf) 1117 + if (IS_ERR(buf)) 1118 1118 return 0; 1119 1119 1120 1120 set_bit(EXTENT_BUFFER_READAHEAD, &buf->bflags); ··· 1172 1172 int ret; 1173 1173 1174 1174 buf = btrfs_find_create_tree_block(root, bytenr); 1175 - if (!buf) 1176 - return ERR_PTR(-ENOMEM); 1175 + if (IS_ERR(buf)) 1176 + return buf; 1177 1177 1178 1178 ret = btree_read_extent_buffer_pages(root, buf, 0, parent_transid); 1179 1179 if (ret) { ··· 1804 1804 1805 1805 /* Make the cleaner go to sleep early. */ 1806 1806 if (btrfs_need_cleaner_sleep(root)) 1807 + goto sleep; 1808 + 1809 + /* 1810 + * Do not do anything if we might cause open_ctree() to block 1811 + * before we have finished mounting the filesystem. 1812 + */ 1813 + if (!root->fs_info->open) 1807 1814 goto sleep; 1808 1815 1809 1816 if (!mutex_trylock(&root->fs_info->cleaner_mutex)) ··· 2527 2520 int num_backups_tried = 0; 2528 2521 int backup_index = 0; 2529 2522 int max_active; 2530 - bool cleaner_mutex_locked = false; 2531 2523 2532 2524 tree_root = fs_info->tree_root = btrfs_alloc_root(fs_info, GFP_KERNEL); 2533 2525 chunk_root = fs_info->chunk_root = btrfs_alloc_root(fs_info, GFP_KERNEL); ··· 3005 2999 goto fail_sysfs; 3006 3000 } 3007 3001 3008 - /* 3009 - * Hold the cleaner_mutex thread here so that we don't block 3010 - * for a long time on btrfs_recover_relocation. cleaner_kthread 3011 - * will wait for us to finish mounting the filesystem. 3012 - */ 3013 - mutex_lock(&fs_info->cleaner_mutex); 3014 - cleaner_mutex_locked = true; 3015 3002 fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root, 3016 3003 "btrfs-cleaner"); 3017 3004 if (IS_ERR(fs_info->cleaner_kthread)) ··· 3064 3065 ret = btrfs_cleanup_fs_roots(fs_info); 3065 3066 if (ret) 3066 3067 goto fail_qgroup; 3067 - /* We locked cleaner_mutex before creating cleaner_kthread. */ 3068 + 3069 + mutex_lock(&fs_info->cleaner_mutex); 3068 3070 ret = btrfs_recover_relocation(tree_root); 3071 + mutex_unlock(&fs_info->cleaner_mutex); 3069 3072 if (ret < 0) { 3070 3073 btrfs_warn(fs_info, "failed to recover relocation: %d", 3071 3074 ret); ··· 3075 3074 goto fail_qgroup; 3076 3075 } 3077 3076 } 3078 - mutex_unlock(&fs_info->cleaner_mutex); 3079 - cleaner_mutex_locked = false; 3080 3077 3081 3078 location.objectid = BTRFS_FS_TREE_OBJECTID; 3082 3079 location.type = BTRFS_ROOT_ITEM_KEY; ··· 3188 3189 filemap_write_and_wait(fs_info->btree_inode->i_mapping); 3189 3190 3190 3191 fail_sysfs: 3191 - if (cleaner_mutex_locked) { 3192 - mutex_unlock(&fs_info->cleaner_mutex); 3193 - cleaner_mutex_locked = false; 3194 - } 3195 3192 btrfs_sysfs_remove_mounted(fs_info); 3196 3193 3197 3194 fail_fsdev_sysfs: ··· 4134 4139 ret = -EINVAL; 4135 4140 } 4136 4141 if (!is_power_of_2(btrfs_super_stripesize(sb)) || 4137 - btrfs_super_stripesize(sb) != sectorsize) { 4142 + ((btrfs_super_stripesize(sb) != sectorsize) && 4143 + (btrfs_super_stripesize(sb) != 4096))) { 4138 4144 btrfs_err(fs_info, "invalid stripesize %u", 4139 4145 btrfs_super_stripesize(sb)); 4140 4146 ret = -EINVAL;
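Most of the btrfs hunks in this merge follow from one API change: btrfs_find_create_tree_block() now returns an ERR_PTR instead of NULL, so every caller switches from a NULL check to IS_ERR()/PTR_ERR(). A sketch of the caller-side convention (the surrounding function is hypothetical):

    struct extent_buffer *buf;

    buf = btrfs_find_create_tree_block(root, bytenr);
    if (IS_ERR(buf))
            return PTR_ERR(buf);    /* -EINVAL for a misaligned start,
                                     * -ENOMEM for allocation failure */

    /* buf is valid from here on; no NULL check is needed anymore */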
+7 -5
fs/btrfs/extent-tree.c
··· 8016 8016 struct extent_buffer *buf; 8017 8017 8018 8018 buf = btrfs_find_create_tree_block(root, bytenr); 8019 - if (!buf) 8020 - return ERR_PTR(-ENOMEM); 8019 + if (IS_ERR(buf)) 8020 + return buf; 8021 + 8021 8022 btrfs_set_header_generation(buf, trans->transid); 8022 8023 btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level); 8023 8024 btrfs_tree_lock(buf); ··· 8045 8044 set_extent_dirty(&trans->transaction->dirty_pages, buf->start, 8046 8045 buf->start + buf->len - 1, GFP_NOFS); 8047 8046 } 8048 - trans->blocks_used++; 8047 + trans->dirty = true; 8049 8048 /* this returns a buffer locked for blocking */ 8050 8049 return buf; 8051 8050 } ··· 8660 8659 next = btrfs_find_tree_block(root->fs_info, bytenr); 8661 8660 if (!next) { 8662 8661 next = btrfs_find_create_tree_block(root, bytenr); 8663 - if (!next) 8664 - return -ENOMEM; 8662 + if (IS_ERR(next)) 8663 + return PTR_ERR(next); 8664 + 8665 8665 btrfs_set_buffer_lockdep_class(root->root_key.objectid, next, 8666 8666 level - 1); 8667 8667 reada = 1;
+12 -3
fs/btrfs/extent_io.c
··· 4892 4892 int uptodate = 1; 4893 4893 int ret; 4894 4894 4895 + if (!IS_ALIGNED(start, fs_info->tree_root->sectorsize)) { 4896 + btrfs_err(fs_info, "bad tree block start %llu", start); 4897 + return ERR_PTR(-EINVAL); 4898 + } 4899 + 4895 4900 eb = find_extent_buffer(fs_info, start); 4896 4901 if (eb) 4897 4902 return eb; 4898 4903 4899 4904 eb = __alloc_extent_buffer(fs_info, start, len); 4900 4905 if (!eb) 4901 - return NULL; 4906 + return ERR_PTR(-ENOMEM); 4902 4907 4903 4908 for (i = 0; i < num_pages; i++, index++) { 4904 4909 p = find_or_create_page(mapping, index, GFP_NOFS|__GFP_NOFAIL); 4905 - if (!p) 4910 + if (!p) { 4911 + exists = ERR_PTR(-ENOMEM); 4906 4912 goto free_eb; 4913 + } 4907 4914 4908 4915 spin_lock(&mapping->private_lock); 4909 4916 if (PagePrivate(p)) { ··· 4955 4948 set_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags); 4956 4949 again: 4957 4950 ret = radix_tree_preload(GFP_NOFS); 4958 - if (ret) 4951 + if (ret) { 4952 + exists = ERR_PTR(ret); 4959 4953 goto free_eb; 4954 + } 4960 4955 4961 4956 spin_lock(&fs_info->buffer_lock); 4962 4957 ret = radix_tree_insert(&fs_info->buffer_radix,
+10 -1
fs/btrfs/inode.c
··· 3271 3271 /* grab metadata reservation from transaction handle */ 3272 3272 if (reserve) { 3273 3273 ret = btrfs_orphan_reserve_metadata(trans, inode); 3274 - BUG_ON(ret); /* -ENOSPC in reservation; Logic error? JDM */ 3274 + ASSERT(!ret); 3275 + if (ret) { 3276 + atomic_dec(&root->orphan_inodes); 3277 + clear_bit(BTRFS_INODE_ORPHAN_META_RESERVED, 3278 + &BTRFS_I(inode)->runtime_flags); 3279 + if (insert) 3280 + clear_bit(BTRFS_INODE_HAS_ORPHAN_ITEM, 3281 + &BTRFS_I(inode)->runtime_flags); 3282 + return ret; 3283 + } 3275 3284 } 3276 3285 3277 3286 /* insert an orphan item to track this unlinked/truncated file */
+3 -1
fs/btrfs/super.c
··· 235 235 trans->aborted = errno; 236 236 /* Nothing used. The other threads that have joined this 237 237 * transaction may be able to continue. */ 238 - if (!trans->blocks_used && list_empty(&trans->new_bgs)) { 238 + if (!trans->dirty && list_empty(&trans->new_bgs)) { 239 239 const char *errstr; 240 240 241 241 errstr = btrfs_decode_error(errno); ··· 1807 1807 } 1808 1808 } 1809 1809 sb->s_flags &= ~MS_RDONLY; 1810 + 1811 + fs_info->open = 1; 1810 1812 } 1811 1813 out: 1812 1814 wake_up_process(fs_info->transaction_kthread);
+1 -6
fs/btrfs/transaction.c
··· 1311 1311 return ret; 1312 1312 } 1313 1313 1314 - /* Bisesctability fixup, remove in 4.8 */ 1315 - #ifndef btrfs_std_error 1316 - #define btrfs_std_error btrfs_handle_fs_error 1317 - #endif 1318 - 1319 1314 /* 1320 1315 * Do all special snapshot related qgroup dirty hack. 1321 1316 * ··· 1380 1385 switch_commit_roots(trans->transaction, fs_info); 1381 1386 ret = btrfs_write_and_wait_transaction(trans, src); 1382 1387 if (ret) 1383 - btrfs_std_error(fs_info, ret, 1388 + btrfs_handle_fs_error(fs_info, ret, 1384 1389 "Error while writing out transaction for qgroup"); 1385 1390 1386 1391 out:
+1 -1
fs/btrfs/transaction.h
··· 110 110 u64 chunk_bytes_reserved; 111 111 unsigned long use_count; 112 112 unsigned long blocks_reserved; 113 - unsigned long blocks_used; 114 113 unsigned long delayed_ref_updates; 115 114 struct btrfs_transaction *transaction; 116 115 struct btrfs_block_rsv *block_rsv; ··· 120 121 bool can_flush_pending_bgs; 121 122 bool reloc_reserved; 122 123 bool sync; 124 + bool dirty; 123 125 unsigned int type; 124 126 /* 125 127 * this root is only needed to validate that the root passed to
+2 -2
fs/btrfs/tree-log.c
··· 2422 2422 root_owner = btrfs_header_owner(parent); 2423 2423 2424 2424 next = btrfs_find_create_tree_block(root, bytenr); 2425 - if (!next) 2426 - return -ENOMEM; 2425 + if (IS_ERR(next)) 2426 + return PTR_ERR(next); 2427 2427 2428 2428 if (*level == 1) { 2429 2429 ret = wc->process_func(root, next, wc, ptr_gen);
+2 -2
fs/btrfs/volumes.c
··· 6607 6607 * overallocate but we can keep it as-is, only the first page is used. 6608 6608 */ 6609 6609 sb = btrfs_find_create_tree_block(root, BTRFS_SUPER_INFO_OFFSET); 6610 - if (!sb) 6611 - return -ENOMEM; 6610 + if (IS_ERR(sb)) 6611 + return PTR_ERR(sb); 6612 6612 set_extent_buffer_uptodate(sb); 6613 6613 btrfs_set_buffer_lockdep_class(root->root_key.objectid, sb, 0); 6614 6614 /*
+4 -3
fs/debugfs/file.c
··· 127 127 r = real_fops->open(inode, filp); 128 128 129 129 out: 130 - fops_put(real_fops); 131 130 debugfs_use_file_finish(srcu_idx); 132 131 return r; 133 132 } ··· 261 262 262 263 if (real_fops->open) { 263 264 r = real_fops->open(inode, filp); 264 - 265 - if (filp->f_op != proxy_fops) { 265 + if (r) { 266 + replace_fops(filp, d_inode(dentry)->i_fop); 267 + goto free_proxy; 268 + } else if (filp->f_op != proxy_fops) { 266 269 /* No protection against file removal anymore. */ 267 270 WARN(1, "debugfs file owner replaced proxy fops: %pd", 268 271 dentry);
+1 -1
fs/nfsd/blocklayout.c
··· 290 290 return error; 291 291 } 292 292 293 - #define NFSD_MDS_PR_KEY 0x0100000000000000 293 + #define NFSD_MDS_PR_KEY 0x0100000000000000ULL 294 294 295 295 /* 296 296 * We use the client ID as a unique key for the reservations.
+1 -17
fs/nfsd/nfs4callback.c
··· 710 710 } 711 711 } 712 712 713 - static struct rpc_clnt *create_backchannel_client(struct rpc_create_args *args) 714 - { 715 - struct rpc_xprt *xprt; 716 - 717 - if (args->protocol != XPRT_TRANSPORT_BC_TCP) 718 - return rpc_create(args); 719 - 720 - xprt = args->bc_xprt->xpt_bc_xprt; 721 - if (xprt) { 722 - xprt_get(xprt); 723 - return rpc_create_xprt(args, xprt); 724 - } 725 - 726 - return rpc_create(args); 727 - } 728 - 729 713 static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *conn, struct nfsd4_session *ses) 730 714 { 731 715 int maxtime = max_cb_time(clp->net); ··· 752 768 args.authflavor = ses->se_cb_sec.flavor; 753 769 } 754 770 /* Create RPC client */ 755 - client = create_backchannel_client(&args); 771 + client = rpc_create(&args); 756 772 if (IS_ERR(client)) { 757 773 dprintk("NFSD: couldn't create callback client: %ld\n", 758 774 PTR_ERR(client));
+37 -30
fs/nfsd/nfs4state.c
··· 3480 3480 } 3481 3481 3482 3482 static struct nfs4_ol_stateid * 3483 - init_open_stateid(struct nfs4_ol_stateid *stp, struct nfs4_file *fp, 3484 - struct nfsd4_open *open) 3483 + init_open_stateid(struct nfs4_file *fp, struct nfsd4_open *open) 3485 3484 { 3486 3485 3487 3486 struct nfs4_openowner *oo = open->op_openowner; 3488 3487 struct nfs4_ol_stateid *retstp = NULL; 3488 + struct nfs4_ol_stateid *stp; 3489 + 3490 + stp = open->op_stp; 3491 + /* We are moving these outside of the spinlocks to avoid the warnings */ 3492 + mutex_init(&stp->st_mutex); 3493 + mutex_lock(&stp->st_mutex); 3489 3494 3490 3495 spin_lock(&oo->oo_owner.so_client->cl_lock); 3491 3496 spin_lock(&fp->fi_lock); ··· 3498 3493 retstp = nfsd4_find_existing_open(fp, open); 3499 3494 if (retstp) 3500 3495 goto out_unlock; 3496 + 3497 + open->op_stp = NULL; 3501 3498 atomic_inc(&stp->st_stid.sc_count); 3502 3499 stp->st_stid.sc_type = NFS4_OPEN_STID; 3503 3500 INIT_LIST_HEAD(&stp->st_locks); ··· 3509 3502 stp->st_access_bmap = 0; 3510 3503 stp->st_deny_bmap = 0; 3511 3504 stp->st_openstp = NULL; 3512 - init_rwsem(&stp->st_rwsem); 3513 3505 list_add(&stp->st_perstateowner, &oo->oo_owner.so_stateids); 3514 3506 list_add(&stp->st_perfile, &fp->fi_stateids); 3515 3507 3516 3508 out_unlock: 3517 3509 spin_unlock(&fp->fi_lock); 3518 3510 spin_unlock(&oo->oo_owner.so_client->cl_lock); 3519 - return retstp; 3511 + if (retstp) { 3512 + mutex_lock(&retstp->st_mutex); 3513 + /* To keep mutex tracking happy */ 3514 + mutex_unlock(&stp->st_mutex); 3515 + stp = retstp; 3516 + } 3517 + return stp; 3520 3518 } 3521 3519 3522 3520 /* ··· 4317 4305 struct nfs4_client *cl = open->op_openowner->oo_owner.so_client; 4318 4306 struct nfs4_file *fp = NULL; 4319 4307 struct nfs4_ol_stateid *stp = NULL; 4320 - struct nfs4_ol_stateid *swapstp = NULL; 4321 4308 struct nfs4_delegation *dp = NULL; 4322 4309 __be32 status; 4323 4310 ··· 4346 4335 */ 4347 4336 if (stp) { 4348 4337 /* Stateid was found, this is an OPEN upgrade */ 4349 - down_read(&stp->st_rwsem); 4338 + mutex_lock(&stp->st_mutex); 4350 4339 status = nfs4_upgrade_open(rqstp, fp, current_fh, stp, open); 4351 4340 if (status) { 4352 - up_read(&stp->st_rwsem); 4341 + mutex_unlock(&stp->st_mutex); 4353 4342 goto out; 4354 4343 } 4355 4344 } else { 4356 - stp = open->op_stp; 4357 - open->op_stp = NULL; 4358 - swapstp = init_open_stateid(stp, fp, open); 4359 - if (swapstp) { 4360 - nfs4_put_stid(&stp->st_stid); 4361 - stp = swapstp; 4362 - down_read(&stp->st_rwsem); 4345 + /* stp is returned locked. 
*/ 4346 + stp = init_open_stateid(fp, open); 4347 + /* See if we lost the race to some other thread */ 4348 + if (stp->st_access_bmap != 0) { 4363 4349 status = nfs4_upgrade_open(rqstp, fp, current_fh, 4364 4350 stp, open); 4365 4351 if (status) { 4366 - up_read(&stp->st_rwsem); 4352 + mutex_unlock(&stp->st_mutex); 4367 4353 goto out; 4368 4354 } 4369 4355 goto upgrade_out; 4370 4356 } 4371 - down_read(&stp->st_rwsem); 4372 4357 status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open); 4373 4358 if (status) { 4374 - up_read(&stp->st_rwsem); 4359 + mutex_unlock(&stp->st_mutex); 4375 4360 release_open_stateid(stp); 4376 4361 goto out; 4377 4362 } ··· 4379 4372 } 4380 4373 upgrade_out: 4381 4374 nfs4_inc_and_copy_stateid(&open->op_stateid, &stp->st_stid); 4382 - up_read(&stp->st_rwsem); 4375 + mutex_unlock(&stp->st_mutex); 4383 4376 4384 4377 if (nfsd4_has_session(&resp->cstate)) { 4385 4378 if (open->op_deleg_want & NFS4_SHARE_WANT_NO_DELEG) { ··· 4984 4977 * revoked delegations are kept only for free_stateid. 4985 4978 */ 4986 4979 return nfserr_bad_stateid; 4987 - down_write(&stp->st_rwsem); 4980 + mutex_lock(&stp->st_mutex); 4988 4981 status = check_stateid_generation(stateid, &stp->st_stid.sc_stateid, nfsd4_has_session(cstate)); 4989 4982 if (status == nfs_ok) 4990 4983 status = nfs4_check_fh(current_fh, &stp->st_stid); 4991 4984 if (status != nfs_ok) 4992 - up_write(&stp->st_rwsem); 4985 + mutex_unlock(&stp->st_mutex); 4993 4986 return status; 4994 4987 } 4995 4988 ··· 5037 5030 return status; 5038 5031 oo = openowner(stp->st_stateowner); 5039 5032 if (!(oo->oo_flags & NFS4_OO_CONFIRMED)) { 5040 - up_write(&stp->st_rwsem); 5033 + mutex_unlock(&stp->st_mutex); 5041 5034 nfs4_put_stid(&stp->st_stid); 5042 5035 return nfserr_bad_stateid; 5043 5036 } ··· 5069 5062 oo = openowner(stp->st_stateowner); 5070 5063 status = nfserr_bad_stateid; 5071 5064 if (oo->oo_flags & NFS4_OO_CONFIRMED) { 5072 - up_write(&stp->st_rwsem); 5065 + mutex_unlock(&stp->st_mutex); 5073 5066 goto put_stateid; 5074 5067 } 5075 5068 oo->oo_flags |= NFS4_OO_CONFIRMED; 5076 5069 nfs4_inc_and_copy_stateid(&oc->oc_resp_stateid, &stp->st_stid); 5077 - up_write(&stp->st_rwsem); 5070 + mutex_unlock(&stp->st_mutex); 5078 5071 dprintk("NFSD: %s: success, seqid=%d stateid=" STATEID_FMT "\n", 5079 5072 __func__, oc->oc_seqid, STATEID_VAL(&stp->st_stid.sc_stateid)); 5080 5073 ··· 5150 5143 nfs4_inc_and_copy_stateid(&od->od_stateid, &stp->st_stid); 5151 5144 status = nfs_ok; 5152 5145 put_stateid: 5153 - up_write(&stp->st_rwsem); 5146 + mutex_unlock(&stp->st_mutex); 5154 5147 nfs4_put_stid(&stp->st_stid); 5155 5148 out: 5156 5149 nfsd4_bump_seqid(cstate, status); ··· 5203 5196 if (status) 5204 5197 goto out; 5205 5198 nfs4_inc_and_copy_stateid(&close->cl_stateid, &stp->st_stid); 5206 - up_write(&stp->st_rwsem); 5199 + mutex_unlock(&stp->st_mutex); 5207 5200 5208 5201 nfsd4_close_open_stateid(stp); 5209 5202 ··· 5429 5422 stp->st_access_bmap = 0; 5430 5423 stp->st_deny_bmap = open_stp->st_deny_bmap; 5431 5424 stp->st_openstp = open_stp; 5432 - init_rwsem(&stp->st_rwsem); 5425 + mutex_init(&stp->st_mutex); 5433 5426 list_add(&stp->st_locks, &open_stp->st_locks); 5434 5427 list_add(&stp->st_perstateowner, &lo->lo_owner.so_stateids); 5435 5428 spin_lock(&fp->fi_lock); ··· 5598 5591 &open_stp, nn); 5599 5592 if (status) 5600 5593 goto out; 5601 - up_write(&open_stp->st_rwsem); 5594 + mutex_unlock(&open_stp->st_mutex); 5602 5595 open_sop = openowner(open_stp->st_stateowner); 5603 5596 status = nfserr_bad_stateid; 5604 5597 if 
(!same_clid(&open_sop->oo_owner.so_client->cl_clientid, ··· 5607 5600 status = lookup_or_create_lock_state(cstate, open_stp, lock, 5608 5601 &lock_stp, &new); 5609 5602 if (status == nfs_ok) 5610 - down_write(&lock_stp->st_rwsem); 5603 + mutex_lock(&lock_stp->st_mutex); 5611 5604 } else { 5612 5605 status = nfs4_preprocess_seqid_op(cstate, 5613 5606 lock->lk_old_lock_seqid, ··· 5711 5704 seqid_mutating_err(ntohl(status))) 5712 5705 lock_sop->lo_owner.so_seqid++; 5713 5706 5714 - up_write(&lock_stp->st_rwsem); 5707 + mutex_unlock(&lock_stp->st_mutex); 5715 5708 5716 5709 /* 5717 5710 * If this is a new, never-before-used stateid, and we are ··· 5881 5874 fput: 5882 5875 fput(filp); 5883 5876 put_stateid: 5884 - up_write(&stp->st_rwsem); 5877 + mutex_unlock(&stp->st_mutex); 5885 5878 nfs4_put_stid(&stp->st_stid); 5886 5879 out: 5887 5880 nfsd4_bump_seqid(cstate, status);
+1 -1
fs/nfsd/state.h
··· 535 535 unsigned char st_access_bmap; 536 536 unsigned char st_deny_bmap; 537 537 struct nfs4_ol_stateid *st_openstp; 538 - struct rw_semaphore st_rwsem; 538 + struct mutex st_mutex; 539 539 }; 540 540 541 541 static inline struct nfs4_ol_stateid *openlockstateid(struct nfs4_stid *s)
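Worth spelling out the locking pattern the two nfsd hunks above converge on, since it is what lets a plain mutex replace st_rwsem safely: initialize and lock the new stateid's mutex before it is published on any list, and if the lookup under the spinlocks finds a competing open already published, hand back that one with its mutex held instead. A minimal, runnable userspace sketch of the shape (hypothetical names; pthreads stand in for the kernel primitives):

#include <pthread.h>
#include <stdio.h>

struct stateid {
        pthread_mutex_t mu;                     /* st_mutex analogue */
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;  /* cl_lock/fi_lock analogue */
static struct stateid *table_slot;              /* one-entry stand-in for the per-file list */

/* Always returns a stateid with its mutex held, win or lose. */
static struct stateid *init_open_stateid_like(struct stateid *stp)
{
        struct stateid *existing;

        /* Init and lock before publishing, outside the table lock,
         * mirroring the "moving these outside of the spinlocks" comment. */
        pthread_mutex_init(&stp->mu, NULL);
        pthread_mutex_lock(&stp->mu);

        pthread_mutex_lock(&table_lock);
        existing = table_slot;                  /* nfsd4_find_existing_open() analogue */
        if (!existing)
                table_slot = stp;               /* publish: now visible to others */
        pthread_mutex_unlock(&table_lock);

        if (existing) {
                /* Lost the race: return the winner, holding its mutex. */
                pthread_mutex_lock(&existing->mu);
                pthread_mutex_unlock(&stp->mu);
                return existing;
        }
        return stp;
}

int main(void)
{
        struct stateid a, b;
        struct stateid *first = init_open_stateid_like(&a);
        struct stateid *second;

        pthread_mutex_unlock(&first->mu);       /* open finished */
        second = init_open_stateid_like(&b);    /* loses the race, gets &a */
        printf("second call returned the %s stateid\n",
               second == &a ? "existing" : "new");
        pthread_mutex_unlock(&second->mu);
        return 0;
}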
+11 -2
fs/overlayfs/dir.c
··· 405 405 err = ovl_create_upper(dentry, inode, &stat, link, hardlink); 406 406 } else { 407 407 const struct cred *old_cred; 408 + struct cred *override_cred; 408 409 409 410 old_cred = ovl_override_creds(dentry->d_sb); 410 411 411 - err = ovl_create_over_whiteout(dentry, inode, &stat, link, 412 - hardlink); 412 + err = -ENOMEM; 413 + override_cred = prepare_creds(); 414 + if (override_cred) { 415 + override_cred->fsuid = old_cred->fsuid; 416 + override_cred->fsgid = old_cred->fsgid; 417 + put_cred(override_creds(override_cred)); 418 + put_cred(override_cred); 413 419 420 + err = ovl_create_over_whiteout(dentry, inode, &stat, 421 + link, hardlink); 422 + } 414 423 revert_creds(old_cred); 415 424 } 416 425
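The dir.c fix applies the kernel's usual save/override/act/revert credential shape: clone a cred set, force its fsuid/fsgid to the caller's, install it just for the whiteout creation, then drop the references. As a rough, runnable userspace analogue (hypothetical names; POSIX effective UIDs stand in for prepare_creds()/override_creds()/revert_creds()):

#include <stdio.h>
#include <unistd.h>

/* Run op() under a different effective UID, then restore the old one.
 * Actually switching requires privilege; the point is the shape:
 * save -> override -> do one operation -> revert. */
static int run_as(uid_t uid, int (*op)(void))
{
        uid_t saved = geteuid();
        int err;

        if (seteuid(uid) != 0)
                return -1;              /* could not override */
        err = op();                     /* ovl_create_over_whiteout() analogue */
        if (seteuid(saved) != 0)
                perror("revert");
        return err;
}

static int make_whiteout(void)
{
        printf("creating whiteout as euid %d\n", (int)geteuid());
        return 0;
}

int main(void)
{
        /* Overriding to our own euid keeps this runnable unprivileged. */
        return run_as(geteuid(), make_whiteout);
}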
+6 -20
fs/overlayfs/inode.c
··· 238 238 return err; 239 239 } 240 240 241 - static bool ovl_need_xattr_filter(struct dentry *dentry, 242 - enum ovl_path_type type) 243 - { 244 - if ((type & (__OVL_PATH_PURE | __OVL_PATH_UPPER)) == __OVL_PATH_UPPER) 245 - return S_ISDIR(dentry->d_inode->i_mode); 246 - else 247 - return false; 248 - } 249 - 250 241 ssize_t ovl_getxattr(struct dentry *dentry, struct inode *inode, 251 242 const char *name, void *value, size_t size) 252 243 { 253 - struct path realpath; 254 - enum ovl_path_type type = ovl_path_real(dentry, &realpath); 244 + struct dentry *realdentry = ovl_dentry_real(dentry); 255 245 256 - if (ovl_need_xattr_filter(dentry, type) && ovl_is_private_xattr(name)) 246 + if (ovl_is_private_xattr(name)) 257 247 return -ENODATA; 258 248 259 - return vfs_getxattr(realpath.dentry, name, value, size); 249 + return vfs_getxattr(realdentry, name, value, size); 260 250 } 261 251 262 252 ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size) 263 253 { 264 - struct path realpath; 265 - enum ovl_path_type type = ovl_path_real(dentry, &realpath); 254 + struct dentry *realdentry = ovl_dentry_real(dentry); 266 255 ssize_t res; 267 256 int off; 268 257 269 - res = vfs_listxattr(realpath.dentry, list, size); 258 + res = vfs_listxattr(realdentry, list, size); 270 259 if (res <= 0 || size == 0) 271 - return res; 272 - 273 - if (!ovl_need_xattr_filter(dentry, type)) 274 260 return res; 275 261 276 262 /* filter out private xattrs */ ··· 288 302 goto out; 289 303 290 304 err = -ENODATA; 291 - if (ovl_need_xattr_filter(dentry, type) && ovl_is_private_xattr(name)) 305 + if (ovl_is_private_xattr(name)) 292 306 goto out_drop_write; 293 307 294 308 if (!OVL_TYPE_UPPER(type)) {
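The inode.c change drops the path-type heuristic and filters unconditionally: overlayfs bookkeeping xattrs must never leak to userspace, whatever the dentry's type. A sketch of the filter itself (the prefix value is an assumption based on the "trusted.overlay.*" namespace overlayfs uses; the real check lives in ovl_is_private_xattr()):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define OVL_XATTR_PREFIX "trusted.overlay."     /* assumed prefix */

static bool is_private_xattr(const char *name)
{
        return strncmp(name, OVL_XATTR_PREFIX,
                       sizeof(OVL_XATTR_PREFIX) - 1) == 0;
}

int main(void)
{
        printf("%d %d\n", is_private_xattr("trusted.overlay.opaque"),
               is_private_xattr("user.comment"));       /* prints: 1 0 */
        return 0;
}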
+7 -2
fs/reiserfs/super.c
··· 1393 1393 unsigned long safe_mask = 0; 1394 1394 unsigned int commit_max_age = (unsigned int)-1; 1395 1395 struct reiserfs_journal *journal = SB_JOURNAL(s); 1396 - char *new_opts = kstrdup(arg, GFP_KERNEL); 1396 + char *new_opts; 1397 1397 int err; 1398 1398 char *qf_names[REISERFS_MAXQUOTAS]; 1399 1399 unsigned int qfmt = 0; 1400 1400 #ifdef CONFIG_QUOTA 1401 1401 int i; 1402 1402 #endif 1403 + 1404 + new_opts = kstrdup(arg, GFP_KERNEL); 1405 + if (arg && !new_opts) 1406 + return -ENOMEM; 1403 1407 1404 1408 sync_filesystem(s); 1405 1409 reiserfs_write_lock(s); ··· 1550 1546 } 1551 1547 1552 1548 out_ok_unlocked: 1553 - replace_mount_options(s, new_opts); 1549 + if (new_opts) 1550 + replace_mount_options(s, new_opts); 1554 1551 return 0; 1555 1552 1556 1553 out_err_unlock:
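The reiserfs hunk guards two distinct NULL cases, which is why the check reads "arg && !new_opts": kstrdup() returns NULL both when handed a NULL pointer (legitimate on a remount with no options) and when the allocation fails. A small runnable sketch of the distinction (kstrdup_like() is a hypothetical userspace stand-in):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in: like kstrdup(), NULL in gives NULL out. */
static char *kstrdup_like(const char *s)
{
        return s ? strdup(s) : NULL;
}

int main(void)
{
        const char *arg = NULL;                 /* remount with no options */
        char *new_opts = kstrdup_like(arg);

        /* Wrong: would report ENOMEM for the legitimate NULL-input case. */
        /* if (!new_opts) return ENOMEM; */

        /* Right, as in the fix: fail only if a copy was requested and lost. */
        if (arg && !new_opts)
                return ENOMEM;

        puts("remount proceeds");
        free(new_opts);
        return 0;
}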
+9 -4
fs/udf/partition.c
··· 295 295 map = &UDF_SB(sb)->s_partmaps[partition]; 296 296 /* map to sparable/physical partition desc */ 297 297 phyblock = udf_get_pblock(sb, eloc.logicalBlockNum, 298 - map->s_partition_num, ext_offset + offset); 298 + map->s_type_specific.s_metadata.s_phys_partition_ref, 299 + ext_offset + offset); 299 300 } 300 301 301 302 brelse(epos.bh); ··· 318 317 mdata = &map->s_type_specific.s_metadata; 319 318 inode = mdata->s_metadata_fe ? : mdata->s_mirror_fe; 320 319 321 - /* We shouldn't mount such media... */ 322 - BUG_ON(!inode); 320 + if (!inode) 321 + return 0xFFFFFFFF; 322 + 323 323 retblk = udf_try_read_meta(inode, block, partition, offset); 324 324 if (retblk == 0xFFFFFFFF && mdata->s_metadata_fe) { 325 325 udf_warn(sb, "error reading from METADATA, trying to read from MIRROR\n"); 326 326 if (!(mdata->s_flags & MF_MIRROR_FE_LOADED)) { 327 327 mdata->s_mirror_fe = udf_find_metadata_inode_efe(sb, 328 - mdata->s_mirror_file_loc, map->s_partition_num); 328 + mdata->s_mirror_file_loc, 329 + mdata->s_phys_partition_ref); 330 + if (IS_ERR(mdata->s_mirror_fe)) 331 + mdata->s_mirror_fe = NULL; 329 332 mdata->s_flags |= MF_MIRROR_FE_LOADED; 330 333 } 331 334
+12 -10
fs/udf/super.c
··· 951 951 } 952 952 953 953 struct inode *udf_find_metadata_inode_efe(struct super_block *sb, 954 - u32 meta_file_loc, u32 partition_num) 954 + u32 meta_file_loc, u32 partition_ref) 955 955 { 956 956 struct kernel_lb_addr addr; 957 957 struct inode *metadata_fe; 958 958 959 959 addr.logicalBlockNum = meta_file_loc; 960 - addr.partitionReferenceNum = partition_num; 960 + addr.partitionReferenceNum = partition_ref; 961 961 962 962 metadata_fe = udf_iget_special(sb, &addr); 963 963 ··· 974 974 return metadata_fe; 975 975 } 976 976 977 - static int udf_load_metadata_files(struct super_block *sb, int partition) 977 + static int udf_load_metadata_files(struct super_block *sb, int partition, 978 + int type1_index) 978 979 { 979 980 struct udf_sb_info *sbi = UDF_SB(sb); 980 981 struct udf_part_map *map; ··· 985 984 986 985 map = &sbi->s_partmaps[partition]; 987 986 mdata = &map->s_type_specific.s_metadata; 987 + mdata->s_phys_partition_ref = type1_index; 988 988 989 989 /* metadata address */ 990 990 udf_debug("Metadata file location: block = %d part = %d\n", 991 - mdata->s_meta_file_loc, map->s_partition_num); 991 + mdata->s_meta_file_loc, mdata->s_phys_partition_ref); 992 992 993 993 fe = udf_find_metadata_inode_efe(sb, mdata->s_meta_file_loc, 994 - map->s_partition_num); 994 + mdata->s_phys_partition_ref); 995 995 if (IS_ERR(fe)) { 996 996 /* mirror file entry */ 997 997 udf_debug("Mirror metadata file location: block = %d part = %d\n", 998 - mdata->s_mirror_file_loc, map->s_partition_num); 998 + mdata->s_mirror_file_loc, mdata->s_phys_partition_ref); 999 999 1000 1000 fe = udf_find_metadata_inode_efe(sb, mdata->s_mirror_file_loc, 1001 - map->s_partition_num); 1001 + mdata->s_phys_partition_ref); 1002 1002 1003 1003 if (IS_ERR(fe)) { 1004 1004 udf_err(sb, "Both metadata and mirror metadata inode efe can not found\n"); ··· 1017 1015 */ 1018 1016 if (mdata->s_bitmap_file_loc != 0xFFFFFFFF) { 1019 1017 addr.logicalBlockNum = mdata->s_bitmap_file_loc; 1020 - addr.partitionReferenceNum = map->s_partition_num; 1018 + addr.partitionReferenceNum = mdata->s_phys_partition_ref; 1021 1019 1022 1020 udf_debug("Bitmap file location: block = %d part = %d\n", 1023 1021 addr.logicalBlockNum, addr.partitionReferenceNum); ··· 1285 1283 p = (struct partitionDesc *)bh->b_data; 1286 1284 partitionNumber = le16_to_cpu(p->partitionNumber); 1287 1285 1288 - /* First scan for TYPE1, SPARABLE and METADATA partitions */ 1286 + /* First scan for TYPE1 and SPARABLE partitions */ 1289 1287 for (i = 0; i < sbi->s_partitions; i++) { 1290 1288 map = &sbi->s_partmaps[i]; 1291 1289 udf_debug("Searching map: (%d == %d)\n", ··· 1335 1333 goto out_bh; 1336 1334 1337 1335 if (map->s_partition_type == UDF_METADATA_MAP25) { 1338 - ret = udf_load_metadata_files(sb, i); 1336 + ret = udf_load_metadata_files(sb, i, type1_idx); 1339 1337 if (ret < 0) { 1340 1338 udf_err(sb, "error loading MetaData partition map %d\n", 1341 1339 i);
+5
fs/udf/udf_sb.h
··· 61 61 __u32 s_bitmap_file_loc; 62 62 __u32 s_alloc_unit_size; 63 63 __u16 s_align_unit_size; 64 + /* 65 + * Partition Reference Number of the associated physical / sparable 66 + * partition 67 + */ 68 + __u16 s_phys_partition_ref; 64 69 int s_flags; 65 70 struct inode *s_metadata_fe; 66 71 struct inode *s_mirror_fe;
+12
include/linux/dcache.h
··· 575 575 return inode; 576 576 } 577 577 578 + /** 579 + * d_real_inode - Return the real inode 580 + * @dentry: The dentry to query 581 + * 582 + * If dentry is on an union/overlay, then return the underlying, real inode. 583 + * Otherwise return d_inode(). 584 + */ 585 + static inline struct inode *d_real_inode(struct dentry *dentry) 586 + { 587 + return d_backing_inode(d_real(dentry)); 588 + } 589 + 578 590 579 591 #endif /* __LINUX_DCACHE_H */
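Callers convert mechanically, and the af_unix hunk later in this series is the pattern: compare and hash the inode the user actually reaches through the overlay rather than the top dentry's own backing inode.

        inode = d_real_inode(path.dentry);      /* was: d_backing_inode(path.dentry) */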
+8 -1
include/linux/iio/common/st_sensors.h
··· 223 223 * @get_irq_data_ready: Function to get the IRQ used for data ready signal. 224 224 * @tf: Transfer function structure used by I/O operations. 225 225 * @tb: Transfer buffers and mutex used by I/O operations. 226 + * @hw_irq_trigger: if we're using the hardware interrupt on the sensor. 227 + * @hw_timestamp: Latest timestamp from the interrupt handler, when in use. 226 228 */ 227 229 struct st_sensor_data { 228 230 struct device *dev; ··· 249 247 250 248 const struct st_sensor_transfer_function *tf; 251 249 struct st_sensor_transfer_buffer tb; 250 + 251 + bool hw_irq_trigger; 252 + s64 hw_timestamp; 252 253 }; 253 254 254 255 #ifdef CONFIG_IIO_BUFFER ··· 265 260 const struct iio_trigger_ops *trigger_ops); 266 261 267 262 void st_sensors_deallocate_trigger(struct iio_dev *indio_dev); 268 - 263 + int st_sensors_validate_device(struct iio_trigger *trig, 264 + struct iio_dev *indio_dev); 269 265 #else 270 266 static inline int st_sensors_allocate_trigger(struct iio_dev *indio_dev, 271 267 const struct iio_trigger_ops *trigger_ops) ··· 277 271 { 278 272 return; 279 273 } 274 + #define st_sensors_validate_device NULL 280 275 #endif 281 276 282 277 int st_sensors_init_sensor(struct iio_dev *indio_dev,
+3 -2
include/linux/isa.h
··· 6 6 #define __LINUX_ISA_H 7 7 8 8 #include <linux/device.h> 9 + #include <linux/errno.h> 9 10 #include <linux/kernel.h> 10 11 11 12 struct isa_driver { ··· 23 22 24 23 #define to_isa_driver(x) container_of((x), struct isa_driver, driver) 25 24 26 - #ifdef CONFIG_ISA 25 + #ifdef CONFIG_ISA_BUS_API 27 26 int isa_register_driver(struct isa_driver *, unsigned int); 28 27 void isa_unregister_driver(struct isa_driver *); 29 28 #else 30 29 static inline int isa_register_driver(struct isa_driver *d, unsigned int i) 31 30 { 32 - return 0; 31 + return -ENODEV; 33 32 } 34 33 35 34 static inline void isa_unregister_driver(struct isa_driver *d)
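Two things change in isa.h: the guard now matches the Kconfig symbol that actually builds the bus code (CONFIG_ISA_BUS_API), and the stub fails with -ENODEV instead of pretending success, so a driver probing without ISA support sees the failure at registration time. A tiny runnable sketch of that stub convention (HAVE_ISA_BUS and the mini-driver are hypothetical):

#include <errno.h>
#include <stdio.h>

/* A compiled-out facility must fail loudly, not fake success. */
#ifdef HAVE_ISA_BUS
static int isa_register_stub(void) { return 0; }        /* real registration */
#else
static int isa_register_stub(void) { return -ENODEV; }
#endif

int main(void)
{
        int err = isa_register_stub();

        if (err) {
                fprintf(stderr, "probe fails cleanly: %d\n", err);
                return 1;
        }
        puts("registered");
        return 0;
}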
+12 -11
include/linux/leds.h
··· 42 42 #define LED_UNREGISTERING (1 << 1) 43 43 /* Upper 16 bits reflect control information */ 44 44 #define LED_CORE_SUSPENDRESUME (1 << 16) 45 - #define LED_BLINK_ONESHOT (1 << 17) 46 - #define LED_BLINK_ONESHOT_STOP (1 << 18) 47 - #define LED_BLINK_INVERT (1 << 19) 48 - #define LED_BLINK_BRIGHTNESS_CHANGE (1 << 20) 49 - #define LED_BLINK_DISABLE (1 << 21) 50 - #define LED_SYSFS_DISABLE (1 << 22) 51 - #define LED_DEV_CAP_FLASH (1 << 23) 52 - #define LED_HW_PLUGGABLE (1 << 24) 53 - #define LED_PANIC_INDICATOR (1 << 25) 45 + #define LED_BLINK_SW (1 << 17) 46 + #define LED_BLINK_ONESHOT (1 << 18) 47 + #define LED_BLINK_ONESHOT_STOP (1 << 19) 48 + #define LED_BLINK_INVERT (1 << 20) 49 + #define LED_BLINK_BRIGHTNESS_CHANGE (1 << 21) 50 + #define LED_BLINK_DISABLE (1 << 22) 51 + #define LED_SYSFS_DISABLE (1 << 23) 52 + #define LED_DEV_CAP_FLASH (1 << 24) 53 + #define LED_HW_PLUGGABLE (1 << 25) 54 + #define LED_PANIC_INDICATOR (1 << 26) 54 55 55 56 /* Set LED brightness level 56 57 * Must not sleep. Use brightness_set_blocking for drivers ··· 73 72 * and if both are zero then a sensible default should be chosen. 74 73 * The call should adjust the timings in that case and if it can't 75 74 * match the values specified exactly. 76 - * Deactivate blinking again when the brightness is set to a fixed 77 - * value via the brightness_set() callback. 75 + * Deactivate blinking again when the brightness is set to LED_OFF 76 + * via the brightness_set() callback. 78 77 */ 79 78 int (*blink_set)(struct led_classdev *led_cdev, 80 79 unsigned long *delay_on,
+1 -1
include/linux/of.h
··· 614 614 return NULL; 615 615 } 616 616 617 - static inline int of_parse_phandle_with_args(struct device_node *np, 617 + static inline int of_parse_phandle_with_args(const struct device_node *np, 618 618 const char *list_name, 619 619 const char *cells_name, 620 620 int index,
+1 -1
include/linux/of_pci.h
··· 8 8 struct of_phandle_args; 9 9 struct device_node; 10 10 11 - #ifdef CONFIG_OF 11 + #ifdef CONFIG_OF_PCI 12 12 int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq); 13 13 struct device_node *of_pci_find_child_device(struct device_node *parent, 14 14 unsigned int devfn);
+7
include/linux/of_reserved_mem.h
··· 31 31 int of_reserved_mem_device_init(struct device *dev); 32 32 void of_reserved_mem_device_release(struct device *dev); 33 33 34 + int early_init_dt_alloc_reserved_memory_arch(phys_addr_t size, 35 + phys_addr_t align, 36 + phys_addr_t start, 37 + phys_addr_t end, 38 + bool nomap, 39 + phys_addr_t *res_base); 40 + 34 41 void fdt_init_reserved_mem(void); 35 42 void fdt_reserved_mem_save_node(unsigned long node, const char *uname, 36 43 phys_addr_t base, phys_addr_t size);
+3
include/linux/pwm.h
··· 235 235 if (!pwm) 236 236 return -EINVAL; 237 237 238 + if (duty_ns < 0 || period_ns < 0) 239 + return -EINVAL; 240 + 238 241 pwm_get_state(pwm, &state); 239 242 if (state.duty_cycle == duty_ns && state.period == period_ns) 240 243 return 0;
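The new bounds check matters because duty_ns and period_ns arrive as signed ints but get compared against, and stored into, unsigned state fields; without it, a caller passing -1 would sail through as a huge unsigned value. A short runnable illustration of the hazard:

#include <stdio.h>

int main(void)
{
        int duty_ns = -1;                       /* bogus caller input */
        unsigned int stored = duty_ns;          /* what an unsigned field sees */

        printf("%d becomes %u\n", duty_ns, stored);     /* -1 becomes 4294967295 */
        return 0;
}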
-2
include/linux/sunrpc/clnt.h
··· 137 137 #define RPC_CLNT_CREATE_NO_RETRANS_TIMEOUT (1UL << 9) 138 138 139 139 struct rpc_clnt *rpc_create(struct rpc_create_args *args); 140 - struct rpc_clnt *rpc_create_xprt(struct rpc_create_args *args, 141 - struct rpc_xprt *xprt); 142 140 struct rpc_clnt *rpc_bind_new_program(struct rpc_clnt *, 143 141 const struct rpc_program *, u32); 144 142 struct rpc_clnt *rpc_clone_client(struct rpc_clnt *);
+1
include/linux/sunrpc/svc_xprt.h
··· 84 84 85 85 struct net *xpt_net; 86 86 struct rpc_xprt *xpt_bc_xprt; /* NFSv4.1 backchannel */ 87 + struct rpc_xprt_switch *xpt_bc_xps; /* NFSv4.1 backchannel */ 87 88 }; 88 89 89 90 static inline void unregister_xpt_user(struct svc_xprt *xpt, struct svc_xpt_user *u)
+1
include/linux/sunrpc/xprt.h
··· 297 297 size_t addrlen; 298 298 const char *servername; 299 299 struct svc_xprt *bc_xprt; /* NFSv4.1 backchannel */ 300 + struct rpc_xprt_switch *bc_xps; 300 301 unsigned int flags; 301 302 }; 302 303
+2
include/linux/thermal.h
··· 335 335 * @get_trend: a pointer to a function that reads the sensor temperature trend. 336 336 * @set_emul_temp: a pointer to a function that sets sensor emulated 337 337 * temperature. 338 + * @set_trip_temp: a pointer to a function that sets the trip temperature on 339 + * hardware. 338 340 */ 339 341 struct thermal_zone_of_device_ops { 340 342 int (*get_temp)(void *, int *);
+3
include/linux/usb/gadget.h
··· 1034 1034 * @udc_name: A name of UDC this driver should be bound to. If udc_name is NULL, 1035 1035 * this driver will be bound to any available UDC. 1036 1036 * @pending: UDC core private data used for deferred probe of this driver. 1037 + * @match_existing_only: If udc is not found, return an error and don't add this 1038 + * gadget driver to the list of pending drivers 1037 1039 * 1038 1040 * Devices are disabled till a gadget driver successfully bind()s, which 1039 1041 * means the driver will handle setup() requests needed to enumerate (and ··· 1099 1097 1100 1098 char *udc_name; 1101 1099 struct list_head pending; 1100 + unsigned match_existing_only:1; 1102 1101 }; 1103 1104 1103
+3 -2
include/linux/usb/musb.h
··· 142 142 }; 143 143 144 144 #if IS_ENABLED(CONFIG_USB_MUSB_HDRC) 145 - void musb_mailbox(enum musb_vbus_id_status status); 145 + int musb_mailbox(enum musb_vbus_id_status status); 146 146 #else 147 - static inline void musb_mailbox(enum musb_vbus_id_status status) 147 + static inline int musb_mailbox(enum musb_vbus_id_status status) 148 148 { 149 + return 0; 149 150 } 150 151 #endif 151 152
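With the signature returning int (and the disabled-config stub returning 0, so callers see success where there is simply nothing to notify), PHY glue can finally observe delivery failures. A hypothetical caller, assuming MUSB_VBUS_VALID is one of the enum musb_vbus_id_status values, with dev and the error handling purely illustrative:

        int err = musb_mailbox(MUSB_VBUS_VALID);
        if (err)
                dev_err(dev, "VBUS notification failed: %d\n", err);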
+1 -1
include/media/v4l2-mc.h
··· 1 1 /* 2 2 * v4l2-mc.h - Media Controller V4L2 types and prototypes 3 3 * 4 - * Copyright (C) 2016 Mauro Carvalho Chehab <mchehab@osg.samsung.com> 4 + * Copyright (C) 2016 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 * Copyright (C) 2006-2010 Nokia Corporation 6 6 * Copyright (c) 2016 Intel Corporation. 7 7 *
+6 -1
kernel/kcov.c
··· 264 264 265 265 static int __init kcov_init(void) 266 266 { 267 - if (!debugfs_create_file("kcov", 0600, NULL, NULL, &kcov_fops)) { 267 + /* 268 + * The kcov debugfs file won't ever get removed and thus, 269 + * there is no need to protect it against removal races. The 270 + * use of debugfs_create_file_unsafe() is actually safe here. 271 + */ 272 + if (!debugfs_create_file_unsafe("kcov", 0600, NULL, NULL, &kcov_fops)) { 268 273 pr_err("failed to create kcov in debugfs\n"); 269 274 return -ENOMEM; 270 275 }
+12 -9
mm/page-writeback.c
··· 373 373 struct dirty_throttle_control *gdtc = mdtc_gdtc(dtc); 374 374 unsigned long bytes = vm_dirty_bytes; 375 375 unsigned long bg_bytes = dirty_background_bytes; 376 - unsigned long ratio = vm_dirty_ratio; 377 - unsigned long bg_ratio = dirty_background_ratio; 376 + /* convert ratios to per-PAGE_SIZE for higher precision */ 377 + unsigned long ratio = (vm_dirty_ratio * PAGE_SIZE) / 100; 378 + unsigned long bg_ratio = (dirty_background_ratio * PAGE_SIZE) / 100; 378 379 unsigned long thresh; 379 380 unsigned long bg_thresh; 380 381 struct task_struct *tsk; ··· 387 386 /* 388 387 * The byte settings can't be applied directly to memcg 389 388 * domains. Convert them to ratios by scaling against 390 - * globally available memory. 389 + * globally available memory. As the ratios are in 390 + * per-PAGE_SIZE, they can be obtained by dividing bytes by 391 + * number of pages. 391 392 */ 392 393 if (bytes) 393 - ratio = min(DIV_ROUND_UP(bytes, PAGE_SIZE) * 100 / 394 - global_avail, 100UL); 394 + ratio = min(DIV_ROUND_UP(bytes, global_avail), 395 + PAGE_SIZE); 395 396 if (bg_bytes) 396 - bg_ratio = min(DIV_ROUND_UP(bg_bytes, PAGE_SIZE) * 100 / 397 - global_avail, 100UL); 397 + bg_ratio = min(DIV_ROUND_UP(bg_bytes, global_avail), 398 + PAGE_SIZE); 398 399 bytes = bg_bytes = 0; 399 400 } 400 401 401 402 if (bytes) 402 403 thresh = DIV_ROUND_UP(bytes, PAGE_SIZE); 403 404 else 404 - thresh = (ratio * available_memory) / 100; 405 + thresh = (ratio * available_memory) / PAGE_SIZE; 405 406 406 407 if (bg_bytes) 407 408 bg_thresh = DIV_ROUND_UP(bg_bytes, PAGE_SIZE); 408 409 else 409 - bg_thresh = (bg_ratio * available_memory) / 100; 410 + bg_thresh = (bg_ratio * available_memory) / PAGE_SIZE; 410 411 411 412 if (bg_thresh >= thresh) 412 413 bg_thresh = thresh / 2;
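A worked example of the precision this buys, assuming PAGE_SIZE = 4096 and a memcg domain whose available memory equals the 1048576-page (4 GiB) global_avail, with dirty bytes set to 16 MiB (4096 pages): the old integer math computed ratio = 4096 * 100 / 1048576 = 0, collapsing the threshold to zero pages; the new math computes ratio = DIV_ROUND_UP(16777216, 1048576) = 16 per-PAGE_SIZE units, so thresh = 16 * 1048576 / 4096 = 4096 pages, i.e. the intended 16 MiB.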
+44 -29
mm/percpu.c
··· 112 112 int map_used; /* # of map entries used before the sentry */ 113 113 int map_alloc; /* # of map entries allocated */ 114 114 int *map; /* allocation map */ 115 - struct work_struct map_extend_work;/* async ->map[] extension */ 115 + struct list_head map_extend_list;/* on pcpu_map_extend_chunks */ 116 116 117 117 void *data; /* chunk data */ 118 118 int first_free; /* no free below this */ ··· 162 162 static int pcpu_reserved_chunk_limit; 163 163 164 164 static DEFINE_SPINLOCK(pcpu_lock); /* all internal data structures */ 165 - static DEFINE_MUTEX(pcpu_alloc_mutex); /* chunk create/destroy, [de]pop */ 165 + static DEFINE_MUTEX(pcpu_alloc_mutex); /* chunk create/destroy, [de]pop, map ext */ 166 166 167 167 static struct list_head *pcpu_slot __read_mostly; /* chunk list slots */ 168 + 169 + /* chunks which need their map areas extended, protected by pcpu_lock */ 170 + static LIST_HEAD(pcpu_map_extend_chunks); 168 171 169 172 /* 170 173 * The number of empty populated pages, protected by pcpu_lock. The ··· 398 395 { 399 396 int margin, new_alloc; 400 397 398 + lockdep_assert_held(&pcpu_lock); 399 + 401 400 if (is_atomic) { 402 401 margin = 3; 403 402 404 403 if (chunk->map_alloc < 405 - chunk->map_used + PCPU_ATOMIC_MAP_MARGIN_LOW && 406 - pcpu_async_enabled) 407 - schedule_work(&chunk->map_extend_work); 404 + chunk->map_used + PCPU_ATOMIC_MAP_MARGIN_LOW) { 405 + if (list_empty(&chunk->map_extend_list)) { 406 + list_add_tail(&chunk->map_extend_list, 407 + &pcpu_map_extend_chunks); 408 + pcpu_schedule_balance_work(); 409 + } 410 + } 408 411 } else { 409 412 margin = PCPU_ATOMIC_MAP_MARGIN_HIGH; 410 413 } ··· 444 435 size_t old_size = 0, new_size = new_alloc * sizeof(new[0]); 445 436 unsigned long flags; 446 437 438 + lockdep_assert_held(&pcpu_alloc_mutex); 439 + 447 440 new = pcpu_mem_zalloc(new_size); 448 441 if (!new) 449 442 return -ENOMEM; ··· 476 465 pcpu_mem_free(new); 477 466 478 467 return 0; 479 - } 480 - 481 - static void pcpu_map_extend_workfn(struct work_struct *work) 482 - { 483 - struct pcpu_chunk *chunk = container_of(work, struct pcpu_chunk, 484 - map_extend_work); 485 - int new_alloc; 486 - 487 - spin_lock_irq(&pcpu_lock); 488 - new_alloc = pcpu_need_to_extend(chunk, false); 489 - spin_unlock_irq(&pcpu_lock); 490 - 491 - if (new_alloc) 492 - pcpu_extend_area_map(chunk, new_alloc); 493 468 } 494 469 495 470 /** ··· 737 740 chunk->map_used = 1; 738 741 739 742 INIT_LIST_HEAD(&chunk->list); 740 - INIT_WORK(&chunk->map_extend_work, pcpu_map_extend_workfn); 743 + INIT_LIST_HEAD(&chunk->map_extend_list); 741 744 chunk->free_size = pcpu_unit_size; 742 745 chunk->contig_hint = pcpu_unit_size; 743 746 ··· 892 895 return NULL; 893 896 } 894 897 898 + if (!is_atomic) 899 + mutex_lock(&pcpu_alloc_mutex); 900 + 895 901 spin_lock_irqsave(&pcpu_lock, flags); 896 902 897 903 /* serve reserved allocations from the reserved chunk if available */ ··· 967 967 if (is_atomic) 968 968 goto fail; 969 969 970 - mutex_lock(&pcpu_alloc_mutex); 971 - 972 970 if (list_empty(&pcpu_slot[pcpu_nr_slots - 1])) { 973 971 chunk = pcpu_create_chunk(); 974 972 if (!chunk) { 975 - mutex_unlock(&pcpu_alloc_mutex); 976 973 err = "failed to allocate new chunk"; 977 974 goto fail; 978 975 } ··· 980 983 spin_lock_irqsave(&pcpu_lock, flags); 981 984 } 982 985 983 - mutex_unlock(&pcpu_alloc_mutex); 984 986 goto restart; 985 987 986 988 area_found: ··· 988 992 /* populate if not all pages are already there */ 989 993 if (!is_atomic) { 990 994 int page_start, page_end, rs, re; 991 - 992 - 
mutex_lock(&pcpu_alloc_mutex); 993 995 994 996 page_start = PFN_DOWN(off); 995 997 page_end = PFN_UP(off + size); ··· 999 1005 1000 1006 spin_lock_irqsave(&pcpu_lock, flags); 1001 1007 if (ret) { 1002 - mutex_unlock(&pcpu_alloc_mutex); 1003 1008 pcpu_free_area(chunk, off, &occ_pages); 1004 1009 err = "failed to populate"; 1005 1010 goto fail_unlock; ··· 1038 1045 /* see the flag handling in pcpu_blance_workfn() */ 1039 1046 pcpu_atomic_alloc_failed = true; 1040 1047 pcpu_schedule_balance_work(); 1048 + } else { 1049 + mutex_unlock(&pcpu_alloc_mutex); 1041 1050 } 1042 1051 return NULL; 1043 1052 } ··· 1124 1129 if (chunk == list_first_entry(free_head, struct pcpu_chunk, list)) 1125 1130 continue; 1126 1131 1132 + list_del_init(&chunk->map_extend_list); 1127 1133 list_move(&chunk->list, &to_free); 1128 1134 } 1129 1135 ··· 1141 1145 } 1142 1146 pcpu_destroy_chunk(chunk); 1143 1147 } 1148 + 1149 + /* service chunks which requested async area map extension */ 1150 + do { 1151 + int new_alloc = 0; 1152 + 1153 + spin_lock_irq(&pcpu_lock); 1154 + 1155 + chunk = list_first_entry_or_null(&pcpu_map_extend_chunks, 1156 + struct pcpu_chunk, map_extend_list); 1157 + if (chunk) { 1158 + list_del_init(&chunk->map_extend_list); 1159 + new_alloc = pcpu_need_to_extend(chunk, false); 1160 + } 1161 + 1162 + spin_unlock_irq(&pcpu_lock); 1163 + 1164 + if (new_alloc) 1165 + pcpu_extend_area_map(chunk, new_alloc); 1166 + } while (chunk); 1144 1167 1145 1168 /* 1146 1169 * Ensure there are certain number of free populated pages for ··· 1659 1644 */ 1660 1645 schunk = memblock_virt_alloc(pcpu_chunk_struct_size, 0); 1661 1646 INIT_LIST_HEAD(&schunk->list); 1662 - INIT_WORK(&schunk->map_extend_work, pcpu_map_extend_workfn); 1647 + INIT_LIST_HEAD(&schunk->map_extend_list); 1663 1648 schunk->base_addr = base_addr; 1664 1649 schunk->map = smap; 1665 1650 schunk->map_alloc = ARRAY_SIZE(smap); ··· 1688 1673 if (dyn_size) { 1689 1674 dchunk = memblock_virt_alloc(pcpu_chunk_struct_size, 0); 1690 1675 INIT_LIST_HEAD(&dchunk->list); 1691 - INIT_WORK(&dchunk->map_extend_work, pcpu_map_extend_workfn); 1676 + INIT_LIST_HEAD(&dchunk->map_extend_list); 1692 1677 dchunk->base_addr = base_addr; 1693 1678 dchunk->map = dmap; 1694 1679 dchunk->map_alloc = ARRAY_SIZE(dmap);
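The restructuring above swaps a work_struct embedded in every chunk for one global, spinlock-protected list drained by the existing balance worker, which is what lets the slow pcpu_extend_area_map() step run under pcpu_alloc_mutex. A runnable userspace sketch of that queue-once, drain-in-one-worker shape (hypothetical names; a plain mutex stands in for pcpu_lock):

#include <pthread.h>
#include <stdio.h>

struct chunk {
        const char *name;
        int enqueued;                   /* stands in for list_empty(&map_extend_list) */
        struct chunk *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;        /* pcpu_lock analogue */
static struct chunk *extend_list;       /* pcpu_map_extend_chunks analogue */

/* Allocation path: cheap, queues the chunk at most once. */
static void request_extend(struct chunk *c)
{
        pthread_mutex_lock(&lock);
        if (!c->enqueued) {
                c->enqueued = 1;
                c->next = extend_list;
                extend_list = c;
        }
        pthread_mutex_unlock(&lock);
}

/* Single balance worker: pop under the lock, do the slow part outside it. */
static void balance_work(void)
{
        for (;;) {
                struct chunk *c;

                pthread_mutex_lock(&lock);
                c = extend_list;
                if (c) {
                        extend_list = c->next;
                        c->enqueued = 0;
                }
                pthread_mutex_unlock(&lock);
                if (!c)
                        break;
                printf("extending area map of chunk %s\n", c->name);
        }
}

int main(void)
{
        struct chunk a = { "a", 0, NULL }, b = { "b", 0, NULL };

        request_extend(&a);
        request_extend(&b);
        request_extend(&a);             /* duplicate request is absorbed */
        balance_work();                 /* prints b, then a */
        return 0;
}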
+25 -6
net/sunrpc/clnt.c
··· 446 446 return ERR_PTR(err); 447 447 } 448 448 449 - struct rpc_clnt *rpc_create_xprt(struct rpc_create_args *args, 449 + static struct rpc_clnt *rpc_create_xprt(struct rpc_create_args *args, 450 450 struct rpc_xprt *xprt) 451 451 { 452 452 struct rpc_clnt *clnt = NULL; 453 453 struct rpc_xprt_switch *xps; 454 454 455 - xps = xprt_switch_alloc(xprt, GFP_KERNEL); 456 - if (xps == NULL) 457 - return ERR_PTR(-ENOMEM); 458 - 455 + if (args->bc_xprt && args->bc_xprt->xpt_bc_xps) { 456 + WARN_ON(args->protocol != XPRT_TRANSPORT_BC_TCP); 457 + xps = args->bc_xprt->xpt_bc_xps; 458 + xprt_switch_get(xps); 459 + } else { 460 + xps = xprt_switch_alloc(xprt, GFP_KERNEL); 461 + if (xps == NULL) { 462 + xprt_put(xprt); 463 + return ERR_PTR(-ENOMEM); 464 + } 465 + if (xprt->bc_xprt) { 466 + xprt_switch_get(xps); 467 + xprt->bc_xprt->xpt_bc_xps = xps; 468 + } 469 + } 459 470 clnt = rpc_new_client(args, xps, xprt, NULL); 460 471 if (IS_ERR(clnt)) 461 472 return clnt; ··· 494 483 495 484 return clnt; 496 485 } 497 - EXPORT_SYMBOL_GPL(rpc_create_xprt); 498 486 499 487 /** 500 488 * rpc_create - create an RPC client and transport with one call ··· 518 508 .bc_xprt = args->bc_xprt, 519 509 }; 520 510 char servername[48]; 511 + 512 + if (args->bc_xprt) { 513 + WARN_ON(args->protocol != XPRT_TRANSPORT_BC_TCP); 514 + xprt = args->bc_xprt->xpt_bc_xprt; 515 + if (xprt) { 516 + xprt_get(xprt); 517 + return rpc_create_xprt(args, xprt); 518 + } 519 + } 521 520 522 521 if (args->flags & RPC_CLNT_CREATE_INFINITE_SLOTS) 523 522 xprtargs.flags |= XPRT_CREATE_INFINITE_SLOTS;
+2
net/sunrpc/svc_xprt.c
··· 136 136 /* See comment on corresponding get in xs_setup_bc_tcp(): */ 137 137 if (xprt->xpt_bc_xprt) 138 138 xprt_put(xprt->xpt_bc_xprt); 139 + if (xprt->xpt_bc_xps) 140 + xprt_switch_put(xprt->xpt_bc_xps); 139 141 xprt->xpt_ops->xpo_free(xprt); 140 142 module_put(owner); 141 143 }
+1
net/sunrpc/xprtsock.c
··· 3057 3057 return xprt; 3058 3058 3059 3059 args->bc_xprt->xpt_bc_xprt = NULL; 3060 + args->bc_xprt->xpt_bc_xps = NULL; 3060 3061 xprt_put(xprt); 3061 3062 ret = ERR_PTR(-EINVAL); 3062 3063 out_err:
+3 -3
net/unix/af_unix.c
··· 315 315 &unix_socket_table[i->i_ino & (UNIX_HASH_SIZE - 1)]) { 316 316 struct dentry *dentry = unix_sk(s)->path.dentry; 317 317 318 - if (dentry && d_backing_inode(dentry) == i) { 318 + if (dentry && d_real_inode(dentry) == i) { 319 319 sock_hold(s); 320 320 goto found; 321 321 } ··· 911 911 err = kern_path(sunname->sun_path, LOOKUP_FOLLOW, &path); 912 912 if (err) 913 913 goto fail; 914 - inode = d_backing_inode(path.dentry); 914 + inode = d_real_inode(path.dentry); 915 915 err = inode_permission(inode, MAY_WRITE); 916 916 if (err) 917 917 goto put_fail; ··· 1048 1048 goto out_up; 1049 1049 } 1050 1050 addr->hash = UNIX_HASH_SIZE; 1051 - hash = d_backing_inode(dentry)->i_ino & (UNIX_HASH_SIZE - 1); 1051 + hash = d_real_inode(dentry)->i_ino & (UNIX_HASH_SIZE - 1); 1052 1052 spin_lock(&unix_table_lock); 1053 1053 u->path = u_path; 1054 1054 list = &unix_socket_table[hash];
+1 -1
security/keys/key.c
··· 597 597 598 598 mutex_unlock(&key_construction_mutex); 599 599 600 - if (keyring) 600 + if (keyring && link_ret == 0) 601 601 __key_link_end(keyring, &key->index_key, edit); 602 602 603 603 /* wake up anyone waiting for a key to be constructed */
+3 -1
tools/virtio/ringtest/Makefile
··· 1 1 all: 2 2 3 - all: ring virtio_ring_0_9 virtio_ring_poll virtio_ring_inorder 3 + all: ring virtio_ring_0_9 virtio_ring_poll virtio_ring_inorder noring 4 4 5 5 CFLAGS += -Wall 6 6 CFLAGS += -pthread -O2 -ggdb ··· 15 15 virtio_ring_0_9: virtio_ring_0_9.o main.o 16 16 virtio_ring_poll: virtio_ring_poll.o main.o 17 17 virtio_ring_inorder: virtio_ring_inorder.o main.o 18 + noring: noring.o main.o 18 19 clean: 19 20 -rm main.o 20 21 -rm ring.o ring 21 22 -rm virtio_ring_0_9.o virtio_ring_0_9 22 23 -rm virtio_ring_poll.o virtio_ring_poll 23 24 -rm virtio_ring_inorder.o virtio_ring_inorder 25 + -rm noring.o noring 24 26 25 27 .PHONY: all clean
+4
tools/virtio/ringtest/README
··· 1 1 Partial implementation of various ring layouts, useful to tune virtio design. 2 2 Uses shared memory heavily. 3 + 4 + Typical use: 5 + 6 + # sh run-on-all.sh perf stat -r 10 --log-fd 1 -- ./ring
+69
tools/virtio/ringtest/noring.c
··· 1 + #define _GNU_SOURCE 2 + #include "main.h" 3 + #include <assert.h> 4 + 5 + /* stub implementation: useful for measuring overhead */ 6 + void alloc_ring(void) 7 + { 8 + } 9 + 10 + /* guest side */ 11 + int add_inbuf(unsigned len, void *buf, void *datap) 12 + { 13 + return 0; 14 + } 15 + 16 + /* 17 + * skb_array API provides no way for producer to find out whether a given 18 + * buffer was consumed. Our tests merely require that a successful get_buf 19 + * implies that add_inbuf succeeded in the past, and that add_inbuf will succeed; 20 + * fake it accordingly. 21 + */ 22 + void *get_buf(unsigned *lenp, void **bufp) 23 + { 24 + return "Buffer"; 25 + } 26 + 27 + void poll_used(void) 28 + { 29 + } 30 + 31 + void disable_call() 32 + { 33 + assert(0); 34 + } 35 + 36 + bool enable_call() 37 + { 38 + assert(0); 39 + } 40 + 41 + void kick_available(void) 42 + { 43 + assert(0); 44 + } 45 + 46 + /* host side */ 47 + void disable_kick() 48 + { 49 + assert(0); 50 + } 51 + 52 + bool enable_kick() 53 + { 54 + assert(0); 55 + } 56 + 57 + void poll_avail(void) 58 + { 59 + } 60 + 61 + bool use_buf(unsigned *lenp, void **bufp) 62 + { 63 + return true; 64 + } 65 + 66 + void call_used(void) 67 + { 68 + assert(0); 69 + }
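With the Makefile hook above, the stub is driven exactly like the real layouts, per the README: sh run-on-all.sh perf stat -r 10 --log-fd 1 -- ./noring. Since noring does no ring work at all, the difference between its numbers and those of, say, ./ring approximates the cost of the ring layout itself.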
+2 -2
tools/virtio/ringtest/run-on-all.sh
··· 3 3 #use last CPU for host. Why not the first? 4 4 #many devices tend to use cpu0 by default so 5 5 #it tends to be busier 6 - HOST_AFFINITY=$(cd /dev/cpu; ls|grep -v '[a-z]'|sort -n|tail -1) 6 + HOST_AFFINITY=$(lscpu -p=cpu | tail -1) 7 7 8 8 #run command on all cpus 9 - for cpu in $(cd /dev/cpu; ls|grep -v '[a-z]'|sort -n); 9 + for cpu in $(seq 0 $HOST_AFFINITY) 10 10 do 11 11 #Don't run guest and host on same CPU 12 12 #It actually works ok if using signalling
+1 -1
virt/kvm/kvm_main.c
··· 2941 2941 if (copy_from_user(&routing, argp, sizeof(routing))) 2942 2942 goto out; 2943 2943 r = -EINVAL; 2944 - if (routing.nr >= KVM_MAX_IRQ_ROUTES) 2944 + if (routing.nr > KVM_MAX_IRQ_ROUTES) 2945 2945 goto out; 2946 2946 if (routing.flags) 2947 2947 goto out;
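The operative detail in this one-character fix: KVM_MAX_IRQ_ROUTES is the largest permitted entry count, not the first invalid one, so userspace setting routing.nr to exactly the limit is describing a legal, full-sized table. The old ">=" rejected that table; ">" still rejects everything past the limit.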