Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.2-rc3 into staging-next

We need the staging fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+9967 -4860
+4 -1
.mailmap
··· 116 116 Simon Kelley <simon@thekelleys.org.uk> 117 117 Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr> 118 118 Stephen Hemminger <shemminger@osdl.org> 119 + Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> 119 120 Sumit Semwal <sumit.semwal@ti.com> 120 121 Tejun Heo <htejun@gmail.com> 121 122 Thomas Graf <tgraf@suug.ch> ··· 126 125 Uwe Kleine-König <ukl@pengutronix.de> 127 126 Uwe Kleine-König <Uwe.Kleine-Koenig@digi.com> 128 127 Valdis Kletnieks <Valdis.Kletnieks@vt.edu> 129 - Viresh Kumar <viresh.linux@gmail.com> <viresh.kumar@st.com> 128 + Viresh Kumar <vireshk@kernel.org> <viresh.kumar@st.com> 129 + Viresh Kumar <vireshk@kernel.org> <viresh.linux@gmail.com> 130 + Viresh Kumar <vireshk@kernel.org> <viresh.kumar2@arm.com> 130 131 Takashi YOSHII <takashi.yoshii.zj@renesas.com> 131 132 Yusuke Goda <goda.yusuke@renesas.com> 132 133 Gustavo Padovan <gustavo@las.ic.unicamp.br>
+2 -4
Documentation/ABI/testing/sysfs-bus-iio
··· 1239 1239 object is near the sensor, usually be observing 1240 1240 reflectivity of infrared or ultrasound emitted. 1241 1241 Often these sensors are unit less and as such conversion 1242 - to SI units is not possible. Where it is, the units should 1243 - be meters. If such a conversion is not possible, the reported 1244 - values should behave in the same way as a distance, i.e. lower 1245 - values indicate something is closer to the sensor. 1242 + to SI units is not possible. Higher proximity measurements 1243 + indicate closer objects, and vice versa. 1246 1244 1247 1245 What: /sys/.../iio:deviceX/in_illuminance_input 1248 1246 What: /sys/.../iio:deviceX/in_illuminance_raw
+1 -1
Documentation/DocBook/drm.tmpl
··· 3383 3383 <td valign="top" >TBD</td> 3384 3384 </tr> 3385 3385 <tr> 3386 - <td rowspan="2" valign="top" >omap</td> 3386 + <td valign="top" >omap</td> 3387 3387 <td valign="top" >Generic</td> 3388 3388 <td valign="top" >“zorder”</td> 3389 3389 <td valign="top" >RANGE</td>
+1 -1
Documentation/arm/SPEAr/overview.txt
··· 60 60 Document Author 61 61 --------------- 62 62 63 - Viresh Kumar <viresh.linux@gmail.com>, (c) 2010-2012 ST Microelectronics 63 + Viresh Kumar <vireshk@kernel.org>, (c) 2010-2012 ST Microelectronics
+17 -1
Documentation/arm/sunxi/README
··· 36 36 + User Manual 37 37 http://dl.linux-sunxi.org/A20/A20%20User%20Manual%202013-03-22.pdf 38 38 39 - - Allwinner A23 39 + - Allwinner A23 (sun8i) 40 40 + Datasheet 41 41 http://dl.linux-sunxi.org/A23/A23%20Datasheet%20V1.0%2020130830.pdf 42 42 + User Manual ··· 55 55 + User Manual 56 56 http://dl.linux-sunxi.org/A31/A3x_release_document/A31s/IC/A31s%20User%20Manual%20%20V1.0%2020130322.pdf 57 57 58 + - Allwinner A33 (sun8i) 59 + + Datasheet 60 + http://dl.linux-sunxi.org/A33/A33%20Datasheet%20release%201.1.pdf 61 + + User Manual 62 + http://dl.linux-sunxi.org/A33/A33%20user%20manual%20release%201.1.pdf 63 + 64 + - Allwinner H3 (sun8i) 65 + + Datasheet 66 + http://dl.linux-sunxi.org/H3/Allwinner_H3_Datasheet_V1.0.pdf 67 + 58 68 * Quad ARM Cortex-A15, Quad ARM Cortex-A7 based SoCs 59 69 - Allwinner A80 60 70 + Datasheet 61 71 http://dl.linux-sunxi.org/A80/A80_Datasheet_Revision_1.0_0404.pdf 72 + 73 + * Octa ARM Cortex-A7 based SoCs 74 + - Allwinner A83T 75 + + Not Supported 76 + + Datasheet 77 + http://dl.linux-sunxi.org/A83T/A83T_datasheet_Revision_1.1.pdf
+6
Documentation/device-mapper/cache.txt
··· 258 258 no further I/O will be permitted and the status will just 259 259 contain the string 'Fail'. The userspace recovery tools 260 260 should then be used. 261 + needs_check : 'needs_check' if set, '-' if not set 262 + A metadata operation has failed, resulting in the needs_check 263 + flag being set in the metadata's superblock. The metadata 264 + device must be deactivated and checked/repaired before the 265 + cache can be made fully operational again. '-' indicates 266 + needs_check is not set. 261 267 262 268 Messages 263 269 --------
+8 -1
Documentation/device-mapper/thin-provisioning.txt
··· 296 296 underlying device. When this is enabled when loading the table, 297 297 it can get disabled if the underlying device doesn't support it. 298 298 299 - ro|rw 299 + ro|rw|out_of_data_space 300 300 If the pool encounters certain types of device failures it will 301 301 drop into a read-only metadata mode in which no changes to 302 302 the pool metadata (like allocating new blocks) are permitted. ··· 313 313 'no_space_timeout' expires. The 'no_space_timeout' dm-thin-pool 314 314 module parameter can be used to change this timeout -- it 315 315 defaults to 60 seconds but may be disabled using a value of 0. 316 + 317 + needs_check 318 + A metadata operation has failed, resulting in the needs_check 319 + flag being set in the metadata's superblock. The metadata 320 + device must be deactivated and checked/repaired before the 321 + thin-pool can be made fully operational again. '-' indicates 322 + needs_check is not set. 316 323 317 324 iii) Messages 318 325
+2
Documentation/devicetree/bindings/arm/sunxi.txt
··· 9 9 allwinner,sun6i-a31 10 10 allwinner,sun7i-a20 11 11 allwinner,sun8i-a23 12 + allwinner,sun8i-a33 13 + allwinner,sun8i-h3 12 14 allwinner,sun9i-a80
+24 -2
Documentation/devicetree/bindings/drm/imx/fsl-imx-drm.txt
··· 65 65 - edid: verbatim EDID data block describing attached display. 66 66 - ddc: phandle describing the i2c bus handling the display data 67 67 channel 68 - - port: A port node with endpoint definitions as defined in 68 + - port@[0-1]: Port nodes with endpoint definitions as defined in 69 69 Documentation/devicetree/bindings/media/video-interfaces.txt. 70 + Port 0 is the input port connected to the IPU display interface, 71 + port 1 is the output port connected to a panel. 70 72 71 73 example: 72 74 ··· 77 75 edid = [edid-data]; 78 76 interface-pix-fmt = "rgb24"; 79 77 80 - port { 78 + port@0 { 79 + reg = <0>; 80 + 81 81 display_in: endpoint { 82 82 remote-endpoint = <&ipu_di0_disp0>; 83 + }; 84 + }; 85 + 86 + port@1 { 87 + reg = <1>; 88 + 89 + display_out: endpoint { 90 + remote-endpoint = <&panel_in>; 91 + }; 92 + }; 93 + }; 94 + 95 + panel { 96 + ... 97 + 98 + port { 99 + panel_in: endpoint { 100 + remote-endpoint = <&display_out>; 83 101 }; 84 102 }; 85 103 };
+1
Documentation/devicetree/bindings/memory-controllers/ti/emif.txt
··· 8 8 Required properties: 9 9 - compatible : Should be of the form "ti,emif-<ip-rev>" where <ip-rev> 10 10 is the IP revision of the specific EMIF instance. 11 + For am437x should be ti,emif-am4372. 11 12 12 13 - phy-type : <u32> indicating the DDR phy type. Following are the 13 14 allowed values
+8
Documentation/kbuild/makefiles.txt
··· 952 952 $(KBUILD_ARFLAGS) set by the top level Makefile to "D" (deterministic 953 953 mode) if this option is supported by $(AR). 954 954 955 + ARCH_CPPFLAGS, ARCH_AFLAGS, ARCH_CFLAGS Overrides the kbuild defaults 956 + 957 + These variables are appended to the KBUILD_CPPFLAGS, 958 + KBUILD_AFLAGS, and KBUILD_CFLAGS, respectively, after the 959 + top-level Makefile has set any other flags. This provides a 960 + means for an architecture to override the defaults. 961 + 962 + 955 963 --- 6.2 Add prerequisites to archheaders: 956 964 957 965 The archheaders: rule is used to generate header files that
+11 -2
Documentation/power/swsusp.txt
··· 410 410 411 411 Q: Can I suspend-to-disk using a swap partition under LVM? 412 412 413 - A: No. You can suspend successfully, but you'll not be able to 414 - resume. uswsusp should be able to work with LVM. See suspend.sf.net. 413 + A: Yes and No. You can suspend successfully, but the kernel will not be able 414 + to resume on its own. You need an initramfs that can recognize the resume 415 + situation, activate the logical volume containing the swap volume (but not 416 + touch any filesystems!), and eventually call 417 + 418 + echo -n "$major:$minor" > /sys/power/resume 419 + 420 + where $major and $minor are the respective major and minor device numbers of 421 + the swap volume. 422 + 423 + uswsusp works with LVM, too. See http://suspend.sourceforge.net/ 415 424 416 425 Q: I upgraded the kernel from 2.6.15 to 2.6.16. Both kernels were 417 426 compiled with the similar configuration files. Anyway I found that
+77 -59
MAINTAINERS
··· 361 361 F: drivers/input/touchscreen/ad7879.c 362 362 363 363 ADDRESS SPACE LAYOUT RANDOMIZATION (ASLR) 364 - M: Jiri Kosina <jkosina@suse.cz> 364 + M: Jiri Kosina <jkosina@suse.com> 365 365 S: Maintained 366 366 367 367 ADM1025 HARDWARE MONITOR DRIVER 368 - M: Jean Delvare <jdelvare@suse.de> 368 + M: Jean Delvare <jdelvare@suse.com> 369 369 L: lm-sensors@lm-sensors.org 370 370 S: Maintained 371 371 F: Documentation/hwmon/adm1025 ··· 430 430 F: drivers/macintosh/therm_adt746x.c 431 431 432 432 ADT7475 HARDWARE MONITOR DRIVER 433 - M: Jean Delvare <jdelvare@suse.de> 433 + M: Jean Delvare <jdelvare@suse.com> 434 434 L: lm-sensors@lm-sensors.org 435 435 S: Maintained 436 436 F: Documentation/hwmon/adt7475 ··· 445 445 446 446 ADVANSYS SCSI DRIVER 447 447 M: Matthew Wilcox <matthew@wil.cx> 448 - M: Hannes Reinecke <hare@suse.de> 448 + M: Hannes Reinecke <hare@suse.com> 449 449 L: linux-scsi@vger.kernel.org 450 450 S: Maintained 451 451 F: Documentation/scsi/advansys.txt ··· 506 506 F: drivers/scsi/pcmcia/aha152x* 507 507 508 508 AIC7XXX / AIC79XX SCSI DRIVER 509 - M: Hannes Reinecke <hare@suse.de> 509 + M: Hannes Reinecke <hare@suse.com> 510 510 L: linux-scsi@vger.kernel.org 511 511 S: Maintained 512 512 F: drivers/scsi/aic7xxx/ ··· 746 746 F: sound/aoa/ 747 747 748 748 APM DRIVER 749 - M: Jiri Kosina <jkosina@suse.cz> 749 + M: Jiri Kosina <jkosina@suse.com> 750 750 S: Odd fixes 751 751 F: arch/x86/kernel/apm_32.c 752 752 F: include/linux/apm_bios.h ··· 1001 1001 M: Baruch Siach <baruch@tkos.co.il> 1002 1002 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1003 1003 S: Maintained 1004 + F: arch/arm/boot/dts/cx92755* 1004 1005 N: digicolor 1005 1006 1006 1007 ARM/EBSA110 MACHINE SUPPORT ··· 1325 1324 F: arch/arm/mach-pxa/palmtc.c 1326 1325 1327 1326 ARM/PALM TREO SUPPORT 1328 - M: Tomas Cech <sleep_walker@suse.cz> 1327 + M: Tomas Cech <sleep_walker@suse.com> 1329 1328 L: linux-arm-kernel@lists.infradead.org 1330 1329 W: http://hackndev.com 1331 
1330 S: Maintained ··· 1615 1614 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1616 1615 S: Maintained 1617 1616 F: arch/arm/boot/dts/vexpress* 1617 + F: arch/arm64/boot/dts/arm/vexpress* 1618 1618 F: arch/arm/mach-vexpress/ 1619 1619 F: */*/vexpress* 1620 1620 F: */*/*/vexpress* ··· 2406 2404 BTRFS FILE SYSTEM 2407 2405 M: Chris Mason <clm@fb.com> 2408 2406 M: Josef Bacik <jbacik@fb.com> 2409 - M: David Sterba <dsterba@suse.cz> 2407 + M: David Sterba <dsterba@suse.com> 2410 2408 L: linux-btrfs@vger.kernel.org 2411 2409 W: http://btrfs.wiki.kernel.org/ 2412 2410 Q: http://patchwork.kernel.org/project/linux-btrfs/list/ ··· 2564 2562 F: arch/powerpc/oprofile/*cell* 2565 2563 F: arch/powerpc/platforms/cell/ 2566 2564 2567 - CEPH DISTRIBUTED FILE SYSTEM CLIENT 2565 + CEPH COMMON CODE (LIBCEPH) 2566 + M: Ilya Dryomov <idryomov@gmail.com> 2568 2567 M: "Yan, Zheng" <zyan@redhat.com> 2569 2568 M: Sage Weil <sage@redhat.com> 2570 2569 L: ceph-devel@vger.kernel.org 2571 2570 W: http://ceph.com/ 2572 2571 T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git 2572 + T: git git://github.com/ceph/ceph-client.git 2573 2573 S: Supported 2574 - F: Documentation/filesystems/ceph.txt 2575 - F: fs/ceph/ 2576 2574 F: net/ceph/ 2577 2575 F: include/linux/ceph/ 2578 2576 F: include/linux/crush/ 2577 + 2578 + CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH) 2579 + M: "Yan, Zheng" <zyan@redhat.com> 2580 + M: Sage Weil <sage@redhat.com> 2581 + M: Ilya Dryomov <idryomov@gmail.com> 2582 + L: ceph-devel@vger.kernel.org 2583 + W: http://ceph.com/ 2584 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git 2585 + T: git git://github.com/ceph/ceph-client.git 2586 + S: Supported 2587 + F: Documentation/filesystems/ceph.txt 2588 + F: fs/ceph/ 2579 2589 2580 2590 CERTIFIED WIRELESS USB (WUSB) SUBSYSTEM: 2581 2591 L: linux-usb@vger.kernel.org ··· 2749 2735 M: Julia Lawall <Julia.Lawall@lip6.fr> 2750 2736 M: Gilles Muller 
<Gilles.Muller@lip6.fr> 2751 2737 M: Nicolas Palix <nicolas.palix@imag.fr> 2752 - M: Michal Marek <mmarek@suse.cz> 2738 + M: Michal Marek <mmarek@suse.com> 2753 2739 L: cocci@systeme.lip6.fr (moderated for non-subscribers) 2754 2740 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git misc 2755 2741 W: http://coccinelle.lip6.fr/ ··· 2865 2851 2866 2852 CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG) 2867 2853 M: Johannes Weiner <hannes@cmpxchg.org> 2868 - M: Michal Hocko <mhocko@suse.cz> 2854 + M: Michal Hocko <mhocko@kernel.org> 2869 2855 L: cgroups@vger.kernel.org 2870 2856 L: linux-mm@kvack.org 2871 2857 S: Maintained ··· 2946 2932 F: arch/x86/kernel/msr.c 2947 2933 2948 2934 CPU POWER MONITORING SUBSYSTEM 2949 - M: Thomas Renninger <trenn@suse.de> 2935 + M: Thomas Renninger <trenn@suse.com> 2950 2936 L: linux-pm@vger.kernel.org 2951 2937 S: Maintained 2952 2938 F: tools/power/cpupower/ ··· 3176 3162 F: drivers/net/ethernet/dec/tulip/dmfe.c 3177 3163 3178 3164 DC390/AM53C974 SCSI driver 3179 - M: Hannes Reinecke <hare@suse.de> 3165 + M: Hannes Reinecke <hare@suse.com> 3180 3166 L: linux-scsi@vger.kernel.org 3181 3167 S: Maintained 3182 3168 F: drivers/scsi/am53c974.c ··· 3380 3366 S: Maintained 3381 3367 3382 3368 DISKQUOTA 3383 - M: Jan Kara <jack@suse.cz> 3369 + M: Jan Kara <jack@suse.com> 3384 3370 S: Maintained 3385 3371 F: Documentation/filesystems/quota.txt 3386 3372 F: fs/quota/ ··· 3436 3422 F: drivers/hwmon/dme1737.c 3437 3423 3438 3424 DMI/SMBIOS SUPPORT 3439 - M: Jean Delvare <jdelvare@suse.de> 3425 + M: Jean Delvare <jdelvare@suse.com> 3440 3426 S: Maintained 3441 3427 T: quilt http://jdelvare.nerim.net/devel/linux/jdelvare-dmi/ 3442 3428 F: Documentation/ABI/testing/sysfs-firmware-dmi-tables ··· 4052 4038 F: drivers/of/of_net.c 4053 4039 4054 4040 EXT2 FILE SYSTEM 4055 - M: Jan Kara <jack@suse.cz> 4041 + M: Jan Kara <jack@suse.com> 4056 4042 L: linux-ext4@vger.kernel.org 4057 4043 S: Maintained 4058 4044 F: 
Documentation/filesystems/ext2.txt ··· 4060 4046 F: include/linux/ext2* 4061 4047 4062 4048 EXT3 FILE SYSTEM 4063 - M: Jan Kara <jack@suse.cz> 4049 + M: Jan Kara <jack@suse.com> 4064 4050 M: Andrew Morton <akpm@linux-foundation.org> 4065 4051 M: Andreas Dilger <adilger.kernel@dilger.ca> 4066 4052 L: linux-ext4@vger.kernel.org ··· 4110 4096 F: include/video/exynos_mipi* 4111 4097 4112 4098 F71805F HARDWARE MONITORING DRIVER 4113 - M: Jean Delvare <jdelvare@suse.de> 4099 + M: Jean Delvare <jdelvare@suse.com> 4114 4100 L: lm-sensors@lm-sensors.org 4115 4101 S: Maintained 4116 4102 F: Documentation/hwmon/f71805f ··· 4245 4231 F: drivers/block/rsxx/ 4246 4232 4247 4233 FLOPPY DRIVER 4248 - M: Jiri Kosina <jkosina@suse.cz> 4234 + M: Jiri Kosina <jkosina@suse.com> 4249 4235 T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/floppy.git 4250 4236 S: Odd fixes 4251 4237 F: drivers/block/floppy.c ··· 4666 4652 4667 4653 H8/300 ARCHITECTURE 4668 4654 M: Yoshinori Sato <ysato@users.sourceforge.jp> 4669 - L: uclinux-h8-devel@lists.sourceforge.jp 4655 + L: uclinux-h8-devel@lists.sourceforge.jp (moderated for non-subscribers) 4670 4656 W: http://uclinux-h8.sourceforge.jp 4671 4657 T: git git://git.sourceforge.jp/gitroot/uclinux-h8/linux.git 4672 4658 S: Maintained ··· 4713 4699 F: drivers/media/usb/hackrf/ 4714 4700 4715 4701 HARDWARE MONITORING 4716 - M: Jean Delvare <jdelvare@suse.de> 4702 + M: Jean Delvare <jdelvare@suse.com> 4717 4703 M: Guenter Roeck <linux@roeck-us.net> 4718 4704 L: lm-sensors@lm-sensors.org 4719 4705 W: http://www.lm-sensors.org/ ··· 4816 4802 F: arch/*/include/asm/suspend*.h 4817 4803 4818 4804 HID CORE LAYER 4819 - M: Jiri Kosina <jkosina@suse.cz> 4805 + M: Jiri Kosina <jkosina@suse.com> 4820 4806 L: linux-input@vger.kernel.org 4821 4807 T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git 4822 4808 S: Maintained ··· 4825 4811 F: include/uapi/linux/hid* 4826 4812 4827 4813 HID SENSOR HUB DRIVERS 4828 - M: Jiri Kosina 
<jkosina@suse.cz> 4814 + M: Jiri Kosina <jkosina@suse.com> 4829 4815 M: Jonathan Cameron <jic23@kernel.org> 4830 4816 M: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> 4831 4817 L: linux-input@vger.kernel.org ··· 4959 4945 F: tools/hv/ 4960 4946 4961 4947 I2C OVER PARALLEL PORT 4962 - M: Jean Delvare <jdelvare@suse.de> 4948 + M: Jean Delvare <jdelvare@suse.com> 4963 4949 L: linux-i2c@vger.kernel.org 4964 4950 S: Maintained 4965 4951 F: Documentation/i2c/busses/i2c-parport ··· 4968 4954 F: drivers/i2c/busses/i2c-parport-light.c 4969 4955 4970 4956 I2C/SMBUS CONTROLLER DRIVERS FOR PC 4971 - M: Jean Delvare <jdelvare@suse.de> 4957 + M: Jean Delvare <jdelvare@suse.com> 4972 4958 L: linux-i2c@vger.kernel.org 4973 4959 S: Maintained 4974 4960 F: Documentation/i2c/busses/i2c-ali1535 ··· 5009 4995 F: Documentation/i2c/busses/i2c-ismt 5010 4996 5011 4997 I2C/SMBUS STUB DRIVER 5012 - M: Jean Delvare <jdelvare@suse.de> 4998 + M: Jean Delvare <jdelvare@suse.com> 5013 4999 L: linux-i2c@vger.kernel.org 5014 5000 S: Maintained 5015 5001 F: drivers/i2c/i2c-stub.c ··· 5036 5022 S: Maintained 5037 5023 5038 5024 I2C-TAOS-EVM DRIVER 5039 - M: Jean Delvare <jdelvare@suse.de> 5025 + M: Jean Delvare <jdelvare@suse.com> 5040 5026 L: linux-i2c@vger.kernel.org 5041 5027 S: Maintained 5042 5028 F: Documentation/i2c/busses/i2c-taos-evm ··· 5565 5551 F: net/netfilter/ipvs/ 5566 5552 5567 5553 IPWIRELESS DRIVER 5568 - M: Jiri Kosina <jkosina@suse.cz> 5569 - M: David Sterba <dsterba@suse.cz> 5554 + M: Jiri Kosina <jkosina@suse.com> 5555 + M: David Sterba <dsterba@suse.com> 5570 5556 S: Odd Fixes 5571 5557 F: drivers/tty/ipwireless/ 5572 5558 ··· 5686 5672 F: drivers/isdn/hardware/eicon/ 5687 5673 5688 5674 IT87 HARDWARE MONITORING DRIVER 5689 - M: Jean Delvare <jdelvare@suse.de> 5675 + M: Jean Delvare <jdelvare@suse.com> 5690 5676 L: lm-sensors@lm-sensors.org 5691 5677 S: Maintained 5692 5678 F: Documentation/hwmon/it87 ··· 5753 5739 5754 5740 JOURNALLING LAYER FOR BLOCK DEVICES 
(JBD) 5755 5741 M: Andrew Morton <akpm@linux-foundation.org> 5756 - M: Jan Kara <jack@suse.cz> 5742 + M: Jan Kara <jack@suse.com> 5757 5743 L: linux-ext4@vger.kernel.org 5758 5744 S: Maintained 5759 5745 F: fs/jbd/ ··· 5817 5803 F: fs/autofs4/ 5818 5804 5819 5805 KERNEL BUILD + files below scripts/ (unless maintained elsewhere) 5820 - M: Michal Marek <mmarek@suse.cz> 5806 + M: Michal Marek <mmarek@suse.com> 5821 5807 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git for-next 5822 5808 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git rc-fixes 5823 5809 L: linux-kbuild@vger.kernel.org ··· 5881 5867 F: arch/x86/kvm/svm.c 5882 5868 5883 5869 KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC 5884 - M: Alexander Graf <agraf@suse.de> 5870 + M: Alexander Graf <agraf@suse.com> 5885 5871 L: kvm-ppc@vger.kernel.org 5886 5872 W: http://kvm.qumranet.com 5887 5873 T: git git://github.com/agraf/linux-2.6.git ··· 6038 6024 F: include/linux/leds.h 6039 6025 6040 6026 LEGACY EEPROM DRIVER 6041 - M: Jean Delvare <jdelvare@suse.de> 6027 + M: Jean Delvare <jdelvare@suse.com> 6042 6028 S: Maintained 6043 6029 F: Documentation/misc-devices/eeprom 6044 6030 F: drivers/misc/eeprom/eeprom.c ··· 6091 6077 F: include/linux/libata.h 6092 6078 6093 6079 LIBATA PATA ARASAN COMPACT FLASH CONTROLLER 6094 - M: Viresh Kumar <viresh.linux@gmail.com> 6080 + M: Viresh Kumar <vireshk@kernel.org> 6095 6081 L: linux-ide@vger.kernel.org 6096 6082 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git 6097 6083 S: Maintained ··· 6161 6147 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 6162 6148 S: Supported 6163 6149 F: drivers/nvdimm/pmem.c 6150 + F: include/linux/pmem.h 6164 6151 6165 6152 LINUX FOR IBM pSERIES (RS/6000) 6166 6153 M: Paul Mackerras <paulus@au.ibm.com> ··· 6176 6161 W: http://www.penguinppc.org/ 6177 6162 L: linuxppc-dev@lists.ozlabs.org 6178 6163 Q: http://patchwork.ozlabs.org/project/linuxppc-dev/list/ 6179 - T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc.git 6164 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git 6180 6165 S: Supported 6181 6166 F: Documentation/powerpc/ 6182 6167 F: arch/powerpc/ ··· 6252 6237 LIVE PATCHING 6253 6238 M: Josh Poimboeuf <jpoimboe@redhat.com> 6254 6239 M: Seth Jennings <sjenning@redhat.com> 6255 - M: Jiri Kosina <jkosina@suse.cz> 6256 - M: Vojtech Pavlik <vojtech@suse.cz> 6240 + M: Jiri Kosina <jkosina@suse.com> 6241 + M: Vojtech Pavlik <vojtech@suse.com> 6257 6242 S: Maintained 6258 6243 F: kernel/livepatch/ 6259 6244 F: include/linux/livepatch.h ··· 6279 6264 F: drivers/hwmon/lm73.c 6280 6265 6281 6266 LM78 HARDWARE MONITOR DRIVER 6282 - M: Jean Delvare <jdelvare@suse.de> 6267 + M: Jean Delvare <jdelvare@suse.com> 6283 6268 L: lm-sensors@lm-sensors.org 6284 6269 S: Maintained 6285 6270 F: Documentation/hwmon/lm78 6286 6271 F: drivers/hwmon/lm78.c 6287 6272 6288 6273 LM83 HARDWARE MONITOR DRIVER 6289 - M: Jean Delvare <jdelvare@suse.de> 6274 + M: Jean Delvare <jdelvare@suse.com> 6290 6275 L: lm-sensors@lm-sensors.org 6291 6276 S: Maintained 6292 6277 F: Documentation/hwmon/lm83 6293 6278 F: drivers/hwmon/lm83.c 6294 6279 6295 6280 LM90 HARDWARE MONITOR DRIVER 6296 - M: Jean Delvare <jdelvare@suse.de> 6281 + M: Jean Delvare <jdelvare@suse.com> 6297 6282 L: lm-sensors@lm-sensors.org 6298 6283 S: Maintained 6299 6284 F: Documentation/hwmon/lm90 ··· 7020 7005 F: net/*/netfilter.c 7021 7006 F: net/*/netfilter/ 7022 7007 F: net/netfilter/ 7008 + F: net/bridge/br_netfilter*.c 7023 7009 7024 7010 NETLABEL 7025 7011 M: Paul Moore <paul@paul-moore.com> ··· 7720 7704 F: drivers/char/pc8736x_gpio.c 7721 7705 7722 7706 PC87427 HARDWARE MONITORING DRIVER 7723 - M: Jean Delvare <jdelvare@suse.de> 7707 + M: Jean Delvare <jdelvare@suse.com> 7724 7708 L: lm-sensors@lm-sensors.org 7725 7709 S: Maintained 7726 7710 F: Documentation/hwmon/pc87427 ··· 7997 7981 F: drivers/pinctrl/samsung/ 7998 7982 7999 7983 PIN CONTROLLER - 
ST SPEAR 8000 - M: Viresh Kumar <viresh.linux@gmail.com> 7984 + M: Viresh Kumar <vireshk@kernel.org> 8001 7985 L: spear-devel@list.st.com 8002 7986 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 8003 7987 W: http://www.st.com/spear ··· 8005 7989 F: drivers/pinctrl/spear/ 8006 7990 8007 7991 PKTCDVD DRIVER 8008 - M: Jiri Kosina <jkosina@suse.cz> 7992 + M: Jiri Kosina <jkosina@suse.com> 8009 7993 S: Maintained 8010 7994 F: drivers/block/pktcdvd.c 8011 7995 F: include/linux/pktcdvd.h ··· 8382 8366 M: Ilya Dryomov <idryomov@gmail.com> 8383 8367 M: Sage Weil <sage@redhat.com> 8384 8368 M: Alex Elder <elder@kernel.org> 8385 - M: ceph-devel@vger.kernel.org 8369 + L: ceph-devel@vger.kernel.org 8386 8370 W: http://ceph.com/ 8387 8371 T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git 8372 + T: git git://github.com/ceph/ceph-client.git 8388 8373 S: Supported 8374 + F: Documentation/ABI/testing/sysfs-bus-rbd 8389 8375 F: drivers/block/rbd.c 8390 8376 F: drivers/block/rbd_types.h 8391 8377 ··· 8896 8878 F: drivers/tty/serial/ 8897 8879 8898 8880 SYNOPSYS DESIGNWARE DMAC DRIVER 8899 - M: Viresh Kumar <viresh.linux@gmail.com> 8881 + M: Viresh Kumar <vireshk@kernel.org> 8900 8882 M: Andy Shevchenko <andriy.shevchenko@linux.intel.com> 8901 8883 S: Maintained 8902 8884 F: include/linux/dma/dw.h ··· 9063 9045 F: drivers/mmc/host/sdhci-s3c* 9064 9046 9065 9047 SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) ST SPEAR DRIVER 9066 - M: Viresh Kumar <viresh.linux@gmail.com> 9048 + M: Viresh Kumar <vireshk@kernel.org> 9067 9049 L: spear-devel@list.st.com 9068 9050 L: linux-mmc@vger.kernel.org 9069 9051 S: Maintained ··· 9425 9407 F: drivers/hwmon/sch5627.c 9426 9408 9427 9409 SMSC47B397 HARDWARE MONITOR DRIVER 9428 - M: Jean Delvare <jdelvare@suse.de> 9410 + M: Jean Delvare <jdelvare@suse.com> 9429 9411 L: lm-sensors@lm-sensors.org 9430 9412 S: Maintained 9431 9413 F: Documentation/hwmon/smsc47b397 ··· 9474 9456 F: 
drivers/media/pci/solo6x10/ 9475 9457 9476 9458 SOFTWARE RAID (Multiple Disks) SUPPORT 9477 - M: Neil Brown <neilb@suse.de> 9459 + M: Neil Brown <neilb@suse.com> 9478 9460 L: linux-raid@vger.kernel.org 9479 9461 S: Supported 9480 9462 F: drivers/md/ ··· 9517 9499 9518 9500 SOUND 9519 9501 M: Jaroslav Kysela <perex@perex.cz> 9520 - M: Takashi Iwai <tiwai@suse.de> 9502 + M: Takashi Iwai <tiwai@suse.com> 9521 9503 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 9522 9504 W: http://www.alsa-project.org/ 9523 9505 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git ··· 9601 9583 F: include/linux/compiler.h 9602 9584 9603 9585 SPEAR PLATFORM SUPPORT 9604 - M: Viresh Kumar <viresh.linux@gmail.com> 9586 + M: Viresh Kumar <vireshk@kernel.org> 9605 9587 M: Shiraz Hashim <shiraz.linux.kernel@gmail.com> 9606 9588 L: spear-devel@list.st.com 9607 9589 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) ··· 9610 9592 F: arch/arm/mach-spear/ 9611 9593 9612 9594 SPEAR CLOCK FRAMEWORK SUPPORT 9613 - M: Viresh Kumar <viresh.linux@gmail.com> 9595 + M: Viresh Kumar <vireshk@kernel.org> 9614 9596 L: spear-devel@list.st.com 9615 9597 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 9616 9598 W: http://www.st.com/spear ··· 10400 10382 10401 10383 TTY LAYER 10402 10384 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 10403 - M: Jiri Slaby <jslaby@suse.cz> 10385 + M: Jiri Slaby <jslaby@suse.com> 10404 10386 S: Supported 10405 10387 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty.git 10406 10388 F: Documentation/serial/ ··· 10474 10456 F: arch/m68k/include/asm/*_no.* 10475 10457 10476 10458 UDF FILESYSTEM 10477 - M: Jan Kara <jack@suse.cz> 10459 + M: Jan Kara <jack@suse.com> 10478 10460 S: Maintained 10479 10461 F: Documentation/filesystems/udf.txt 10480 10462 F: fs/udf/ ··· 10617 10599 F: include/linux/usb/gadget* 10618 10600 10619 10601 USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, 
...) 10620 - M: Jiri Kosina <jkosina@suse.cz> 10602 + M: Jiri Kosina <jkosina@suse.com> 10621 10603 L: linux-usb@vger.kernel.org 10622 10604 T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git 10623 10605 S: Maintained ··· 10742 10724 F: drivers/usb/host/uhci* 10743 10725 10744 10726 USB "USBNET" DRIVER FRAMEWORK 10745 - M: Oliver Neukum <oneukum@suse.de> 10727 + M: Oliver Neukum <oneukum@suse.com> 10746 10728 L: netdev@vger.kernel.org 10747 10729 W: http://www.linux-usb.org/usbnet 10748 10730 S: Maintained ··· 11069 11051 F: drivers/hwmon/w83793.c 11070 11052 11071 11053 W83795 HARDWARE MONITORING DRIVER 11072 - M: Jean Delvare <jdelvare@suse.de> 11054 + M: Jean Delvare <jdelvare@suse.com> 11073 11055 L: lm-sensors@lm-sensors.org 11074 11056 S: Maintained 11075 11057 F: drivers/hwmon/w83795.c
+6 -5
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 2 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc1 4 + EXTRAVERSION = -rc3 5 5 NAME = Hurr durr I'ma sheep 6 6 7 7 # *DOCUMENTATION* ··· 780 780 include scripts/Makefile.kasan 781 781 include scripts/Makefile.extrawarn 782 782 783 - # Add user supplied CPPFLAGS, AFLAGS and CFLAGS as the last assignments 784 - KBUILD_CPPFLAGS += $(KCPPFLAGS) 785 - KBUILD_AFLAGS += $(KAFLAGS) 786 - KBUILD_CFLAGS += $(KCFLAGS) 783 + # Add any arch overrides and user supplied CPPFLAGS, AFLAGS and CFLAGS as the 784 + # last assignments 785 + KBUILD_CPPFLAGS += $(ARCH_CPPFLAGS) $(KCPPFLAGS) 786 + KBUILD_AFLAGS += $(ARCH_AFLAGS) $(KAFLAGS) 787 + KBUILD_CFLAGS += $(ARCH_CFLAGS) $(KCFLAGS) 787 788 788 789 # Use --build-id when available. 789 790 LDFLAGS_BUILD_ID = $(patsubst -Wl$(comma)%,%,\
+4
arch/Kconfig
··· 221 221 config ARCH_THREAD_INFO_ALLOCATOR 222 222 bool 223 223 224 + # Select if arch wants to size task_struct dynamically via arch_task_struct_size: 225 + config ARCH_WANTS_DYNAMIC_TASK_STRUCT 226 + bool 227 + 224 228 config HAVE_REGS_AND_STACK_ACCESS_API 225 229 bool 226 230 help
+1
arch/alpha/include/asm/Kbuild
··· 5 5 generic-y += exec.h 6 6 generic-y += irq_work.h 7 7 generic-y += mcs_spinlock.h 8 + generic-y += mm-arch-hooks.h 8 9 generic-y += preempt.h 9 10 generic-y += sections.h 10 11 generic-y += trace_clock.h
-15
arch/alpha/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_ALPHA_MM_ARCH_HOOKS_H 13 - #define _ASM_ALPHA_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_ALPHA_MM_ARCH_HOOKS_H */
+2 -1
arch/arc/Kconfig
··· 115 115 116 116 config ARC_CPU_750D 117 117 bool "ARC750D" 118 + select ARC_CANT_LLSC 118 119 help 119 120 Support for ARC750 core 120 121 ··· 363 362 config ARC_HAS_LLSC 364 363 bool "Insn: LLOCK/SCOND (efficient atomic ops)" 365 364 default y 366 - depends on !ARC_CPU_750D && !ARC_CANT_LLSC 365 + depends on !ARC_CANT_LLSC 367 366 368 367 config ARC_HAS_SWAPE 369 368 bool "Insn: SWAPE (endian-swap)"
+2 -1
arch/arc/Makefile
··· 49 49 50 50 ifndef CONFIG_CC_OPTIMIZE_FOR_SIZE 51 51 # Generic build system uses -O2, we want -O3 52 - cflags-y += -O3 52 + # Note: No need to add to cflags-y as that happens anyways 53 + ARCH_CFLAGS += -O3 53 54 endif 54 55 55 56 # small data is default for elf32 tool-chain. If not usable, disable it
+1 -1
arch/arc/boot/dts/axc003.dtsi
··· 12 12 13 13 / { 14 14 compatible = "snps,arc"; 15 - clock-frequency = <75000000>; 15 + clock-frequency = <90000000>; 16 16 #address-cells = <1>; 17 17 #size-cells = <1>; 18 18
+1 -1
arch/arc/boot/dts/axc003_idu.dtsi
··· 12 12 13 13 / { 14 14 compatible = "snps,arc"; 15 - clock-frequency = <75000000>; 15 + clock-frequency = <90000000>; 16 16 #address-cells = <1>; 17 17 #size-cells = <1>; 18 18
+1
arch/arc/include/asm/Kbuild
··· 22 22 generic-y += local.h 23 23 generic-y += local64.h 24 24 generic-y += mcs_spinlock.h 25 + generic-y += mm-arch-hooks.h 25 26 generic-y += mman.h 26 27 generic-y += msgbuf.h 27 28 generic-y += param.h
+9 -26
arch/arc/include/asm/bitops.h
··· 50 50 * done for const @nr, but no code is generated due to gcc \ 51 51 * const prop. \ 52 52 */ \ 53 - if (__builtin_constant_p(nr)) \ 54 - nr &= 0x1f; \ 53 + nr &= 0x1f; \ 55 54 \ 56 55 __asm__ __volatile__( \ 57 56 "1: llock %0, [%1] \n" \ ··· 81 82 \ 82 83 m += nr >> 5; \ 83 84 \ 84 - if (__builtin_constant_p(nr)) \ 85 - nr &= 0x1f; \ 85 + nr &= 0x1f; \ 86 86 \ 87 87 /* \ 88 88 * Explicit full memory barrier needed before/after as \ ··· 127 129 unsigned long temp, flags; \ 128 130 m += nr >> 5; \ 129 131 \ 130 - if (__builtin_constant_p(nr)) \ 131 - nr &= 0x1f; \ 132 - \ 133 132 /* \ 134 133 * spin lock/unlock provide the needed smp_mb() before/after \ 135 134 */ \ 136 135 bitops_lock(flags); \ 137 136 \ 138 137 temp = *m; \ 139 - *m = temp c_op (1UL << nr); \ 138 + *m = temp c_op (1UL << (nr & 0x1f)); \ 140 139 \ 141 140 bitops_unlock(flags); \ 142 141 } ··· 144 149 unsigned long old, flags; \ 145 150 m += nr >> 5; \ 146 151 \ 147 - if (__builtin_constant_p(nr)) \ 148 - nr &= 0x1f; \ 149 - \ 150 152 bitops_lock(flags); \ 151 153 \ 152 154 old = *m; \ 153 - *m = old c_op (1 << nr); \ 155 + *m = old c_op (1UL << (nr & 0x1f)); \ 154 156 \ 155 157 bitops_unlock(flags); \ 156 158 \ 157 - return (old & (1 << nr)) != 0; \ 159 + return (old & (1UL << (nr & 0x1f))) != 0; \ 158 160 } 159 161 160 162 #endif /* CONFIG_ARC_HAS_LLSC */ ··· 166 174 unsigned long temp; \ 167 175 m += nr >> 5; \ 168 176 \ 169 - if (__builtin_constant_p(nr)) \ 170 - nr &= 0x1f; \ 171 - \ 172 177 temp = *m; \ 173 - *m = temp c_op (1UL << nr); \ 178 + *m = temp c_op (1UL << (nr & 0x1f)); \ 174 179 } 175 180 176 181 #define __TEST_N_BIT_OP(op, c_op, asm_op) \ ··· 176 187 unsigned long old; \ 177 188 m += nr >> 5; \ 178 189 \ 179 - if (__builtin_constant_p(nr)) \ 180 - nr &= 0x1f; \ 181 - \ 182 190 old = *m; \ 183 - *m = old c_op (1 << nr); \ 191 + *m = old c_op (1UL << (nr & 0x1f)); \ 184 192 \ 185 - return (old & (1 << nr)) != 0; \ 193 + return (old & (1UL << (nr & 0x1f))) != 0; \ 186 194 } 
187 195 188 196 #define BIT_OPS(op, c_op, asm_op) \ ··· 210 224 211 225 addr += nr >> 5; 212 226 213 - if (__builtin_constant_p(nr)) 214 - nr &= 0x1f; 215 - 216 - mask = 1 << nr; 227 + mask = 1UL << (nr & 0x1f); 217 228 218 229 return ((mask & *addr) != 0); 219 230 }
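The bitops hunk above drops the `__builtin_constant_p` guard and masks `nr` unconditionally: the C fallbacks shift by `nr` directly, and a shift by 32 or more is undefined behaviour. A minimal user-space sketch of the patched indexing/masking logic (the `sketch_test_bit` helper and the explicit 32-bit word type are illustrative assumptions, not kernel code):

```c
#include <stdint.h>

/* Sketch of the patched test_bit() logic: the word index comes from
 * nr >> 5 and the bit position from nr & 0x1f.  The old code only
 * masked a constant nr; shifting by an unmasked nr >= 32 is undefined
 * behaviour in C, which is what the unconditional mask fixes. */
static int sketch_test_bit(unsigned int nr, const uint32_t *addr)
{
    uint32_t mask;

    addr += nr >> 5;            /* select the 32-bit word */
    mask = 1u << (nr & 0x1f);   /* mask keeps the shift in range */

    return (mask & *addr) != 0;
}
```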
+42 -6
arch/arc/include/asm/futex.h
··· 16 16 #include <linux/uaccess.h> 17 17 #include <asm/errno.h> 18 18 19 + #ifdef CONFIG_ARC_HAS_LLSC 20 + 19 21 #define __futex_atomic_op(insn, ret, oldval, uaddr, oparg)\ 20 22 \ 21 23 __asm__ __volatile__( \ 22 - "1: ld %1, [%2] \n" \ 24 + "1: llock %1, [%2] \n" \ 23 25 insn "\n" \ 24 - "2: st %0, [%2] \n" \ 26 + "2: scond %0, [%2] \n" \ 27 + " bnz 1b \n" \ 25 28 " mov %0, 0 \n" \ 26 29 "3: \n" \ 27 30 " .section .fixup,\"ax\" \n" \ ··· 41 38 : "=&r" (ret), "=&r" (oldval) \ 42 39 : "r" (uaddr), "r" (oparg), "ir" (-EFAULT) \ 43 40 : "cc", "memory") 41 + 42 + #else /* !CONFIG_ARC_HAS_LLSC */ 43 + 44 + #define __futex_atomic_op(insn, ret, oldval, uaddr, oparg)\ 45 + \ 46 + __asm__ __volatile__( \ 47 + "1: ld %1, [%2] \n" \ 48 + insn "\n" \ 49 + "2: st %0, [%2] \n" \ 50 + " mov %0, 0 \n" \ 51 + "3: \n" \ 52 + " .section .fixup,\"ax\" \n" \ 53 + " .align 4 \n" \ 54 + "4: mov %0, %4 \n" \ 55 + " b 3b \n" \ 56 + " .previous \n" \ 57 + " .section __ex_table,\"a\" \n" \ 58 + " .align 4 \n" \ 59 + " .word 1b, 4b \n" \ 60 + " .word 2b, 4b \n" \ 61 + " .previous \n" \ 62 + \ 63 + : "=&r" (ret), "=&r" (oldval) \ 64 + : "r" (uaddr), "r" (oparg), "ir" (-EFAULT) \ 65 + : "cc", "memory") 66 + 67 + #endif 44 68 45 69 static inline int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr) 46 70 { ··· 153 123 154 124 pagefault_disable(); 155 125 156 - /* TBD : can use llock/scond */ 157 126 __asm__ __volatile__( 158 - "1: ld %0, [%3] \n" 159 - " brne %0, %1, 3f \n" 160 - "2: st %2, [%3] \n" 127 + #ifdef CONFIG_ARC_HAS_LLSC 128 + "1: llock %0, [%3] \n" 129 + " brne %0, %1, 3f \n" 130 + "2: scond %2, [%3] \n" 131 + " bnz 1b \n" 132 + #else 133 + "1: ld %0, [%3] \n" 134 + " brne %0, %1, 3f \n" 135 + "2: st %2, [%3] \n" 136 + #endif 161 137 "3: \n" 162 138 " .section .fixup,\"ax\" \n" 163 139 "4: mov %0, %4 \n"
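The futex hunk above switches the user-access paths to `llock`/`scond` retry loops so the read-modify-write is actually atomic on LLSC-capable cores. A hedged stand-in for the cmpxchg semantics those loops implement, using GCC's `__atomic` builtin in place of ARC assembly (`sketch_futex_cmpxchg` is a hypothetical user-space analogue, not kernel API):

```c
#include <stdint.h>

/* Stand-in for the semantics of the llock/scond loop in
 * futex_atomic_cmpxchg_inatomic(): atomically store newval only if
 * *uaddr still equals oldval, and return the value actually observed. */
static uint32_t sketch_futex_cmpxchg(uint32_t *uaddr, uint32_t oldval,
                                     uint32_t newval)
{
    uint32_t expected = oldval;

    __atomic_compare_exchange_n(uaddr, &expected, newval, 0,
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return expected;   /* on failure this is the conflicting value */
}
```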
-15
arch/arc/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_ARC_MM_ARCH_HOOKS_H 13 - #define _ASM_ARC_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_ARC_MM_ARCH_HOOKS_H */
+1 -1
arch/arc/include/asm/ptrace.h
··· 106 106 long r25, r24, r23, r22, r21, r20, r19, r18, r17, r16, r15, r14, r13; 107 107 }; 108 108 109 - #define instruction_pointer(regs) ((regs)->ret) 109 + #define instruction_pointer(regs) (unsigned long)((regs)->ret) 110 110 #define profile_pc(regs) instruction_pointer(regs) 111 111 112 112 /* return 1 if user mode or 0 if kernel mode */
-1
arch/arc/kernel/intc-arcv2.c
··· 12 12 #include <linux/of.h> 13 13 #include <linux/irqdomain.h> 14 14 #include <linux/irqchip.h> 15 - #include "../../drivers/irqchip/irqchip.h" 16 15 #include <asm/irq.h> 17 16 18 17 /*
-1
arch/arc/kernel/intc-compact.c
··· 12 12 #include <linux/of.h> 13 13 #include <linux/irqdomain.h> 14 14 #include <linux/irqchip.h> 15 - #include "../../drivers/irqchip/irqchip.h" 16 15 #include <asm/irq.h> 17 16 18 17 /*
+19 -4
arch/arc/kernel/mcip.c
··· 175 175 #include <linux/irqchip.h> 176 176 #include <linux/of.h> 177 177 #include <linux/of_irq.h> 178 - #include "../../drivers/irqchip/irqchip.h" 179 178 180 179 /* 181 180 * Set the DEST for @cmn_irq to @cpu_mask (1 bit per core) ··· 217 218 raw_spin_unlock_irqrestore(&mcip_lock, flags); 218 219 } 219 220 221 + #ifdef CONFIG_SMP 220 222 static int 221 - idu_irq_set_affinity(struct irq_data *d, const struct cpumask *cpumask, bool f) 223 + idu_irq_set_affinity(struct irq_data *data, const struct cpumask *cpumask, 224 + bool force) 222 225 { 226 + unsigned long flags; 227 + cpumask_t online; 228 + 229 + /* errout if no online cpu per @cpumask */ 230 + if (!cpumask_and(&online, cpumask, cpu_online_mask)) 231 + return -EINVAL; 232 + 233 + raw_spin_lock_irqsave(&mcip_lock, flags); 234 + 235 + idu_set_dest(data->hwirq, cpumask_bits(&online)[0]); 236 + idu_set_mode(data->hwirq, IDU_M_TRIG_LEVEL, IDU_M_DISTRI_RR); 237 + 238 + raw_spin_unlock_irqrestore(&mcip_lock, flags); 239 + 223 240 return IRQ_SET_MASK_OK; 224 241 } 242 + #endif 225 243 226 244 static struct irq_chip idu_irq_chip = { 227 245 .name = "MCIP IDU Intc", ··· 346 330 if (!i) 347 331 idu_first_irq = irq; 348 332 349 - irq_set_handler_data(irq, domain); 350 - irq_set_chained_handler(irq, idu_cascade_isr); 333 + irq_set_chained_handler_and_data(irq, idu_cascade_isr, domain); 351 334 } 352 335 353 336 __mcip_cmd(CMD_IDU_ENABLE, 0);
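The new `idu_irq_set_affinity` body intersects the requested mask with the online CPUs and errors out when nothing remains. That check can be sketched with plain 32-bit masks standing in for `cpumask_t` (all names below are illustrative, not kernel API):

```c
#include <stdint.h>

/* Illustrative version of the online-CPU check added to
 * idu_irq_set_affinity(): intersect the requested mask with the online
 * mask, fail with -EINVAL if the result is empty, otherwise program
 * only the online subset. */
static int sketch_set_affinity(uint32_t requested, uint32_t online_mask,
                               uint32_t *dest)
{
    uint32_t online = requested & online_mask;

    if (!online)
        return -22;     /* -EINVAL: no online CPU in @requested */

    *dest = online;     /* as idu_set_dest() does with the real mask */
    return 0;
}
```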
+10 -5
arch/arc/kernel/setup.c
··· 142 142 } 143 143 144 144 static const struct cpuinfo_data arc_cpu_tbl[] = { 145 + #ifdef CONFIG_ISA_ARCOMPACT 145 146 { {0x20, "ARC 600" }, 0x2F}, 146 147 { {0x30, "ARC 700" }, 0x33}, 147 148 { {0x34, "ARC 700 R4.10"}, 0x34}, 148 149 { {0x35, "ARC 700 R4.11"}, 0x35}, 149 - { {0x50, "ARC HS38" }, 0x51}, 150 + #else 151 + { {0x50, "ARC HS38 R2.0"}, 0x51}, 152 + { {0x52, "ARC HS38 R2.1"}, 0x52}, 153 + #endif 150 154 { {0x00, NULL } } 151 155 }; 152 156 153 - #define IS_AVAIL1(v, str) ((v) ? str : "") 154 - #define IS_USED(cfg) (IS_ENABLED(cfg) ? "" : "(not used) ") 155 - #define IS_AVAIL2(v, str, cfg) IS_AVAIL1(v, str), IS_AVAIL1(v, IS_USED(cfg)) 157 + #define IS_AVAIL1(v, s) ((v) ? s : "") 158 + #define IS_USED_RUN(v) ((v) ? "" : "(not used) ") 159 + #define IS_USED_CFG(cfg) IS_USED_RUN(IS_ENABLED(cfg)) 160 + #define IS_AVAIL2(v, s, cfg) IS_AVAIL1(v, s), IS_AVAIL1(v, IS_USED_CFG(cfg)) 156 161 157 162 static char *arc_cpu_mumbojumbo(int cpu_id, char *buf, int len) 158 163 { ··· 231 226 n += scnprintf(buf + n, len - n, "mpy[opt %d] ", opt); 232 227 } 233 228 n += scnprintf(buf + n, len - n, "%s", 234 - IS_USED(CONFIG_ARC_HAS_HW_MPY)); 229 + IS_USED_CFG(CONFIG_ARC_HAS_HW_MPY)); 235 230 } 236 231 237 232 n += scnprintf(buf + n, len - n, "%s%s%s%s%s%s%s%s\n",
-1
arch/arc/kernel/troubleshoot.c
··· 58 58 59 59 static void print_task_path_n_nm(struct task_struct *tsk, char *buf) 60 60 { 61 - struct path path; 62 61 char *path_nm = NULL; 63 62 struct mm_struct *mm; 64 63 struct file *exe_file;
+10 -2
arch/arc/mm/cache.c
··· 468 468 noinline void slc_op(unsigned long paddr, unsigned long sz, const int op) 469 469 { 470 470 #ifdef CONFIG_ISA_ARCV2 471 + /* 472 + * SLC is shared between all cores and concurrent aux operations from 473 + * multiple cores need to be serialized using a spinlock 474 + * A concurrent operation can be silently ignored and/or the old/new 475 + * operation can remain incomplete forever (lockup in SLC_CTRL_BUSY loop 476 + * below) 477 + */ 478 + static DEFINE_SPINLOCK(lock); 471 479 unsigned long flags; 472 480 unsigned int ctrl; 473 481 474 - local_irq_save(flags); 482 + spin_lock_irqsave(&lock, flags); 475 483 476 484 /* 477 485 * The Region Flush operation is specified by CTRL.RGN_OP[11..9] ··· 512 504 513 505 while (read_aux_reg(ARC_REG_SLC_CTRL) & SLC_CTRL_BUSY); 514 506 515 - local_irq_restore(flags); 507 + spin_unlock_irqrestore(&lock, flags); 516 508 #endif 517 509 } 518 510
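The cache.c hunk replaces `local_irq_save()` with a function-local static spinlock, because masking interrupts on one core cannot serialize aux-register sequences issued by other cores against the shared SLC. A user-space analogue of the pattern, with `pthread_mutex_t` standing in for the kernel spinlock and a counter for the aux-register sequence (both hypothetical):

```c
#include <pthread.h>

/* User-space analogue of the slc_op() fix: a function-local static
 * lock serializes every caller, which per-core IRQ masking cannot do. */
static long shared_counter;

static void slc_op_sketch(long delta)
{
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    pthread_mutex_lock(&lock);   /* serialize concurrent callers */
    shared_counter += delta;     /* critical section */
    pthread_mutex_unlock(&lock);
}
```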
+2 -2
arch/arc/mm/dma.c
··· 60 60 61 61 /* This is kernel Virtual address (0x7000_0000 based) */ 62 62 kvaddr = ioremap_nocache((unsigned long)paddr, size); 63 - if (kvaddr != NULL) 64 - memset(kvaddr, 0, size); 63 + if (kvaddr == NULL) 64 + return NULL; 65 65 66 66 /* This is bus address, platform dependent */ 67 67 *dma_handle = (dma_addr_t)paddr;
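The dma.c hunk turns a guarded `memset` into an early return, so a failed `ioremap_nocache` can no longer fall through to publishing a NULL mapping. The corrected control flow, sketched with `malloc` standing in for the remap (the function name is illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* The corrected control flow: bail out as soon as the mapping fails
 * instead of guarding only the memset and then handing back a NULL
 * "mapping" anyway. */
static void *alloc_coherent_sketch(size_t size)
{
    void *kvaddr = malloc(size);

    if (kvaddr == NULL)
        return NULL;        /* propagate failure to the caller */

    memset(kvaddr, 0, size);
    return kvaddr;
}
```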
+6
arch/arm/Kconfig
··· 1693 1693 config HIGHPTE 1694 1694 bool "Allocate 2nd-level pagetables from highmem" 1695 1695 depends on HIGHMEM 1696 + help 1697 + The VM uses one page of physical memory for each page table. 1698 + For systems with a lot of processes, this can use a lot of 1699 + precious low memory, eventually leading to low memory being 1700 + consumed by page tables. Setting this option will allow 1701 + user-space 2nd level page tables to reside in high memory. 1696 1702 1697 1703 config HW_PERF_EVENTS 1698 1704 bool "Enable hardware performance counter support for perf events"
+1 -1
arch/arm/Kconfig.debug
··· 1635 1635 1636 1636 config DEBUG_SET_MODULE_RONX 1637 1637 bool "Set loadable kernel module data as NX and text as RO" 1638 - depends on MODULES 1638 + depends on MODULES && MMU 1639 1639 ---help--- 1640 1640 This option helps catch unintended modifications to loadable 1641 1641 kernel module's text and read-only data. It also prevents execution
+4
arch/arm/boot/dts/am335x-boneblack.dts
··· 80 80 status = "okay"; 81 81 }; 82 82 }; 83 + 84 + &rtc { 85 + system-power-controller; 86 + };
+13 -3
arch/arm/boot/dts/am335x-pepper.dts
··· 74 74 audio_codec: tlv320aic3106@1b { 75 75 compatible = "ti,tlv320aic3106"; 76 76 reg = <0x1b>; 77 + ai3x-micbias-vg = <0x2>; 77 78 }; 78 79 79 80 accel: lis331dlh@1d { ··· 154 153 ti,audio-routing = 155 154 "Headphone Jack", "HPLOUT", 156 155 "Headphone Jack", "HPROUT", 157 - "LINE1L", "Line In"; 156 + "MIC3L", "Mic3L Switch"; 158 157 }; 159 158 160 159 &mcasp0 { ··· 439 438 regulators { 440 439 dcdc1_reg: regulator@0 { 441 440 /* VDD_1V8 system supply */ 441 + regulator-always-on; 442 442 }; 443 443 444 444 dcdc2_reg: regulator@1 { 445 445 /* VDD_CORE voltage limits 0.95V - 1.26V with +/-4% tolerance */ 446 446 regulator-name = "vdd_core"; 447 447 regulator-min-microvolt = <925000>; 448 - regulator-max-microvolt = <1325000>; 448 + regulator-max-microvolt = <1150000>; 449 449 regulator-boot-on; 450 + regulator-always-on; 450 451 }; 451 452 452 453 dcdc3_reg: regulator@2 { 453 454 /* VDD_MPU voltage limits 0.95V - 1.1V with +/-4% tolerance */ 454 455 regulator-name = "vdd_mpu"; 455 456 regulator-min-microvolt = <925000>; 456 - regulator-max-microvolt = <1150000>; 457 + regulator-max-microvolt = <1325000>; 457 458 regulator-boot-on; 459 + regulator-always-on; 458 460 }; 459 461 460 462 ldo1_reg: regulator@3 { 461 463 /* VRTC 1.8V always-on supply */ 464 + regulator-name = "vrtc,vdds"; 462 465 regulator-always-on; 463 466 }; 464 467 465 468 ldo2_reg: regulator@4 { 466 469 /* 3.3V rail */ 470 + regulator-name = "vdd_3v3aux"; 471 + regulator-always-on; 467 472 }; 468 473 469 474 ldo3_reg: regulator@5 { 470 475 /* VDD_3V3A 3.3V rail */ 476 + regulator-name = "vdd_3v3a"; 471 477 regulator-min-microvolt = <3300000>; 472 478 regulator-max-microvolt = <3300000>; 473 479 }; 474 480 475 481 ldo4_reg: regulator@6 { 476 482 /* VDD_3V3B 3.3V rail */ 483 + regulator-name = "vdd_3v3b"; 484 + regulator-always-on; 477 485 }; 478 486 }; 479 487 };
+7
arch/arm/boot/dts/am4372.dtsi
··· 132 132 }; 133 133 }; 134 134 135 + emif: emif@4c000000 { 136 + compatible = "ti,emif-am4372"; 137 + reg = <0x4c000000 0x1000000>; 138 + ti,hwmods = "emif"; 139 + }; 140 + 135 141 edma: edma@49000000 { 136 142 compatible = "ti,edma3"; 137 143 ti,hwmods = "tpcc", "tptc0", "tptc1", "tptc2"; ··· 947 941 ti,hwmods = "dss_rfbi"; 948 942 clocks = <&disp_clk>; 949 943 clock-names = "fck"; 944 + status = "disabled"; 950 945 }; 951 946 }; 952 947
+4
arch/arm/boot/dts/am57xx-beagle-x15.dts
··· 605 605 phy-supply = <&ldousb_reg>; 606 606 }; 607 607 608 + &usb2_phy2 { 609 + phy-supply = <&ldousb_reg>; 610 + }; 611 + 608 612 &usb1 { 609 613 dr_mode = "host"; 610 614 pinctrl-names = "default";
+1041 -1
arch/arm/boot/dts/atlas7.dtsi
··· 135 135 compatible = "sirf,atlas7-ioc"; 136 136 reg = <0x18880000 0x1000>, 137 137 <0x10E40000 0x1000>; 138 + 139 + audio_ac97_pmx: audio_ac97@0 { 140 + audio_ac97 { 141 + groups = "audio_ac97_grp"; 142 + function = "audio_ac97"; 143 + }; 144 + }; 145 + 146 + audio_func_dbg_pmx: audio_func_dbg@0 { 147 + audio_func_dbg { 148 + groups = "audio_func_dbg_grp"; 149 + function = "audio_func_dbg"; 150 + }; 151 + }; 152 + 153 + audio_i2s_pmx: audio_i2s@0 { 154 + audio_i2s { 155 + groups = "audio_i2s_grp"; 156 + function = "audio_i2s"; 157 + }; 158 + }; 159 + 160 + audio_i2s_2ch_pmx: audio_i2s_2ch@0 { 161 + audio_i2s_2ch { 162 + groups = "audio_i2s_2ch_grp"; 163 + function = "audio_i2s_2ch"; 164 + }; 165 + }; 166 + 167 + audio_i2s_extclk_pmx: audio_i2s_extclk@0 { 168 + audio_i2s_extclk { 169 + groups = "audio_i2s_extclk_grp"; 170 + function = "audio_i2s_extclk"; 171 + }; 172 + }; 173 + 174 + audio_uart0_pmx: audio_uart0@0 { 175 + audio_uart0 { 176 + groups = "audio_uart0_grp"; 177 + function = "audio_uart0"; 178 + }; 179 + }; 180 + 181 + audio_uart1_pmx: audio_uart1@0 { 182 + audio_uart1 { 183 + groups = "audio_uart1_grp"; 184 + function = "audio_uart1"; 185 + }; 186 + }; 187 + 188 + audio_uart2_pmx0: audio_uart2@0 { 189 + audio_uart2_0 { 190 + groups = "audio_uart2_grp0"; 191 + function = "audio_uart2_m0"; 192 + }; 193 + }; 194 + 195 + audio_uart2_pmx1: audio_uart2@1 { 196 + audio_uart2_1 { 197 + groups = "audio_uart2_grp1"; 198 + function = "audio_uart2_m1"; 199 + }; 200 + }; 201 + 202 + c_can_trnsvr_pmx: c_can_trnsvr@0 { 203 + c_can_trnsvr { 204 + groups = "c_can_trnsvr_grp"; 205 + function = "c_can_trnsvr"; 206 + }; 207 + }; 208 + 209 + c0_can_pmx0: c0_can@0 { 210 + c0_can_0 { 211 + groups = "c0_can_grp0"; 212 + function = "c0_can_m0"; 213 + }; 214 + }; 215 + 216 + c0_can_pmx1: c0_can@1 { 217 + c0_can_1 { 218 + groups = "c0_can_grp1"; 219 + function = "c0_can_m1"; 220 + }; 221 + }; 222 + 223 + c1_can_pmx0: c1_can@0 { 224 + c1_can_0 { 225 + groups = "c1_can_grp0"; 
226 + function = "c1_can_m0"; 227 + }; 228 + }; 229 + 230 + c1_can_pmx1: c1_can@1 { 231 + c1_can_1 { 232 + groups = "c1_can_grp1"; 233 + function = "c1_can_m1"; 234 + }; 235 + }; 236 + 237 + c1_can_pmx2: c1_can@2 { 238 + c1_can_2 { 239 + groups = "c1_can_grp2"; 240 + function = "c1_can_m2"; 241 + }; 242 + }; 243 + 244 + ca_audio_lpc_pmx: ca_audio_lpc@0 { 245 + ca_audio_lpc { 246 + groups = "ca_audio_lpc_grp"; 247 + function = "ca_audio_lpc"; 248 + }; 249 + }; 250 + 251 + ca_bt_lpc_pmx: ca_bt_lpc@0 { 252 + ca_bt_lpc { 253 + groups = "ca_bt_lpc_grp"; 254 + function = "ca_bt_lpc"; 255 + }; 256 + }; 257 + 258 + ca_coex_pmx: ca_coex@0 { 259 + ca_coex { 260 + groups = "ca_coex_grp"; 261 + function = "ca_coex"; 262 + }; 263 + }; 264 + 265 + ca_curator_lpc_pmx: ca_curator_lpc@0 { 266 + ca_curator_lpc { 267 + groups = "ca_curator_lpc_grp"; 268 + function = "ca_curator_lpc"; 269 + }; 270 + }; 271 + 272 + ca_pcm_debug_pmx: ca_pcm_debug@0 { 273 + ca_pcm_debug { 274 + groups = "ca_pcm_debug_grp"; 275 + function = "ca_pcm_debug"; 276 + }; 277 + }; 278 + 279 + ca_pio_pmx: ca_pio@0 { 280 + ca_pio { 281 + groups = "ca_pio_grp"; 282 + function = "ca_pio"; 283 + }; 284 + }; 285 + 286 + ca_sdio_debug_pmx: ca_sdio_debug@0 { 287 + ca_sdio_debug { 288 + groups = "ca_sdio_debug_grp"; 289 + function = "ca_sdio_debug"; 290 + }; 291 + }; 292 + 293 + ca_spi_pmx: ca_spi@0 { 294 + ca_spi { 295 + groups = "ca_spi_grp"; 296 + function = "ca_spi"; 297 + }; 298 + }; 299 + 300 + ca_trb_pmx: ca_trb@0 { 301 + ca_trb { 302 + groups = "ca_trb_grp"; 303 + function = "ca_trb"; 304 + }; 305 + }; 306 + 307 + ca_uart_debug_pmx: ca_uart_debug@0 { 308 + ca_uart_debug { 309 + groups = "ca_uart_debug_grp"; 310 + function = "ca_uart_debug"; 311 + }; 312 + }; 313 + 314 + clkc_pmx0: clkc@0 { 315 + clkc_0 { 316 + groups = "clkc_grp0"; 317 + function = "clkc_m0"; 318 + }; 319 + }; 320 + 321 + clkc_pmx1: clkc@1 { 322 + clkc_1 { 323 + groups = "clkc_grp1"; 324 + function = "clkc_m1"; 325 + }; 326 + }; 327 + 328 + 
gn_gnss_i2c_pmx: gn_gnss_i2c@0 { 329 + gn_gnss_i2c { 330 + groups = "gn_gnss_i2c_grp"; 331 + function = "gn_gnss_i2c"; 332 + }; 333 + }; 334 + 335 + gn_gnss_uart_nopause_pmx: gn_gnss_uart_nopause@0 { 336 + gn_gnss_uart_nopause { 337 + groups = "gn_gnss_uart_nopause_grp"; 338 + function = "gn_gnss_uart_nopause"; 339 + }; 340 + }; 341 + 342 + gn_gnss_uart_pmx: gn_gnss_uart@0 { 343 + gn_gnss_uart { 344 + groups = "gn_gnss_uart_grp"; 345 + function = "gn_gnss_uart"; 346 + }; 347 + }; 348 + 349 + gn_trg_spi_pmx0: gn_trg_spi@0 { 350 + gn_trg_spi_0 { 351 + groups = "gn_trg_spi_grp0"; 352 + function = "gn_trg_spi_m0"; 353 + }; 354 + }; 355 + 356 + gn_trg_spi_pmx1: gn_trg_spi@1 { 357 + gn_trg_spi_1 { 358 + groups = "gn_trg_spi_grp1"; 359 + function = "gn_trg_spi_m1"; 360 + }; 361 + }; 362 + 363 + cvbs_dbg_pmx: cvbs_dbg@0 { 364 + cvbs_dbg { 365 + groups = "cvbs_dbg_grp"; 366 + function = "cvbs_dbg"; 367 + }; 368 + }; 369 + 370 + cvbs_dbg_test_pmx0: cvbs_dbg_test@0 { 371 + cvbs_dbg_test_0 { 372 + groups = "cvbs_dbg_test_grp0"; 373 + function = "cvbs_dbg_test_m0"; 374 + }; 375 + }; 376 + 377 + cvbs_dbg_test_pmx1: cvbs_dbg_test@1 { 378 + cvbs_dbg_test_1 { 379 + groups = "cvbs_dbg_test_grp1"; 380 + function = "cvbs_dbg_test_m1"; 381 + }; 382 + }; 383 + 384 + cvbs_dbg_test_pmx2: cvbs_dbg_test@2 { 385 + cvbs_dbg_test_2 { 386 + groups = "cvbs_dbg_test_grp2"; 387 + function = "cvbs_dbg_test_m2"; 388 + }; 389 + }; 390 + 391 + cvbs_dbg_test_pmx3: cvbs_dbg_test@3 { 392 + cvbs_dbg_test_3 { 393 + groups = "cvbs_dbg_test_grp3"; 394 + function = "cvbs_dbg_test_m3"; 395 + }; 396 + }; 397 + 398 + cvbs_dbg_test_pmx4: cvbs_dbg_test@4 { 399 + cvbs_dbg_test_4 { 400 + groups = "cvbs_dbg_test_grp4"; 401 + function = "cvbs_dbg_test_m4"; 402 + }; 403 + }; 404 + 405 + cvbs_dbg_test_pmx5: cvbs_dbg_test@5 { 406 + cvbs_dbg_test_5 { 407 + groups = "cvbs_dbg_test_grp5"; 408 + function = "cvbs_dbg_test_m5"; 409 + }; 410 + }; 411 + 412 + cvbs_dbg_test_pmx6: cvbs_dbg_test@6 { 413 + cvbs_dbg_test_6 { 414 + 
groups = "cvbs_dbg_test_grp6"; 415 + function = "cvbs_dbg_test_m6"; 416 + }; 417 + }; 418 + 419 + cvbs_dbg_test_pmx7: cvbs_dbg_test@7 { 420 + cvbs_dbg_test_7 { 421 + groups = "cvbs_dbg_test_grp7"; 422 + function = "cvbs_dbg_test_m7"; 423 + }; 424 + }; 425 + 426 + cvbs_dbg_test_pmx8: cvbs_dbg_test@8 { 427 + cvbs_dbg_test_8 { 428 + groups = "cvbs_dbg_test_grp8"; 429 + function = "cvbs_dbg_test_m8"; 430 + }; 431 + }; 432 + 433 + cvbs_dbg_test_pmx9: cvbs_dbg_test@9 { 434 + cvbs_dbg_test_9 { 435 + groups = "cvbs_dbg_test_grp9"; 436 + function = "cvbs_dbg_test_m9"; 437 + }; 438 + }; 439 + 440 + cvbs_dbg_test_pmx10: cvbs_dbg_test@10 { 441 + cvbs_dbg_test_10 { 442 + groups = "cvbs_dbg_test_grp10"; 443 + function = "cvbs_dbg_test_m10"; 444 + }; 445 + }; 446 + 447 + cvbs_dbg_test_pmx11: cvbs_dbg_test@11 { 448 + cvbs_dbg_test_11 { 449 + groups = "cvbs_dbg_test_grp11"; 450 + function = "cvbs_dbg_test_m11"; 451 + }; 452 + }; 453 + 454 + cvbs_dbg_test_pmx12: cvbs_dbg_test@12 { 455 + cvbs_dbg_test_12 { 456 + groups = "cvbs_dbg_test_grp12"; 457 + function = "cvbs_dbg_test_m12"; 458 + }; 459 + }; 460 + 461 + cvbs_dbg_test_pmx13: cvbs_dbg_test@13 { 462 + cvbs_dbg_test_13 { 463 + groups = "cvbs_dbg_test_grp13"; 464 + function = "cvbs_dbg_test_m13"; 465 + }; 466 + }; 467 + 468 + cvbs_dbg_test_pmx14: cvbs_dbg_test@14 { 469 + cvbs_dbg_test_14 { 470 + groups = "cvbs_dbg_test_grp14"; 471 + function = "cvbs_dbg_test_m14"; 472 + }; 473 + }; 474 + 475 + cvbs_dbg_test_pmx15: cvbs_dbg_test@15 { 476 + cvbs_dbg_test_15 { 477 + groups = "cvbs_dbg_test_grp15"; 478 + function = "cvbs_dbg_test_m15"; 479 + }; 480 + }; 481 + 482 + gn_gnss_power_pmx: gn_gnss_power@0 { 483 + gn_gnss_power { 484 + groups = "gn_gnss_power_grp"; 485 + function = "gn_gnss_power"; 486 + }; 487 + }; 488 + 489 + gn_gnss_sw_status_pmx: gn_gnss_sw_status@0 { 490 + gn_gnss_sw_status { 491 + groups = "gn_gnss_sw_status_grp"; 492 + function = "gn_gnss_sw_status"; 493 + }; 494 + }; 495 + 496 + gn_gnss_eclk_pmx: gn_gnss_eclk@0 { 497 
+ gn_gnss_eclk { 498 + groups = "gn_gnss_eclk_grp"; 499 + function = "gn_gnss_eclk"; 500 + }; 501 + }; 502 + 503 + gn_gnss_irq1_pmx0: gn_gnss_irq1@0 { 504 + gn_gnss_irq1_0 { 505 + groups = "gn_gnss_irq1_grp0"; 506 + function = "gn_gnss_irq1_m0"; 507 + }; 508 + }; 509 + 510 + gn_gnss_irq2_pmx0: gn_gnss_irq2@0 { 511 + gn_gnss_irq2_0 { 512 + groups = "gn_gnss_irq2_grp0"; 513 + function = "gn_gnss_irq2_m0"; 514 + }; 515 + }; 516 + 517 + gn_gnss_tm_pmx: gn_gnss_tm@0 { 518 + gn_gnss_tm { 519 + groups = "gn_gnss_tm_grp"; 520 + function = "gn_gnss_tm"; 521 + }; 522 + }; 523 + 524 + gn_gnss_tsync_pmx: gn_gnss_tsync@0 { 525 + gn_gnss_tsync { 526 + groups = "gn_gnss_tsync_grp"; 527 + function = "gn_gnss_tsync"; 528 + }; 529 + }; 530 + 531 + gn_io_gnsssys_sw_cfg_pmx: gn_io_gnsssys_sw_cfg@0 { 532 + gn_io_gnsssys_sw_cfg { 533 + groups = "gn_io_gnsssys_sw_cfg_grp"; 534 + function = "gn_io_gnsssys_sw_cfg"; 535 + }; 536 + }; 537 + 538 + gn_trg_pmx0: gn_trg@0 { 539 + gn_trg_0 { 540 + groups = "gn_trg_grp0"; 541 + function = "gn_trg_m0"; 542 + }; 543 + }; 544 + 545 + gn_trg_pmx1: gn_trg@1 { 546 + gn_trg_1 { 547 + groups = "gn_trg_grp1"; 548 + function = "gn_trg_m1"; 549 + }; 550 + }; 551 + 552 + gn_trg_shutdown_pmx0: gn_trg_shutdown@0 { 553 + gn_trg_shutdown_0 { 554 + groups = "gn_trg_shutdown_grp0"; 555 + function = "gn_trg_shutdown_m0"; 556 + }; 557 + }; 558 + 559 + gn_trg_shutdown_pmx1: gn_trg_shutdown@1 { 560 + gn_trg_shutdown_1 { 561 + groups = "gn_trg_shutdown_grp1"; 562 + function = "gn_trg_shutdown_m1"; 563 + }; 564 + }; 565 + 566 + gn_trg_shutdown_pmx2: gn_trg_shutdown@2 { 567 + gn_trg_shutdown_2 { 568 + groups = "gn_trg_shutdown_grp2"; 569 + function = "gn_trg_shutdown_m2"; 570 + }; 571 + }; 572 + 573 + gn_trg_shutdown_pmx3: gn_trg_shutdown@3 { 574 + gn_trg_shutdown_3 { 575 + groups = "gn_trg_shutdown_grp3"; 576 + function = "gn_trg_shutdown_m3"; 577 + }; 578 + }; 579 + 580 + i2c0_pmx: i2c0@0 { 581 + i2c0 { 582 + groups = "i2c0_grp"; 583 + function = "i2c0"; 584 + }; 585 + 
}; 586 + 587 + i2c1_pmx: i2c1@0 { 588 + i2c1 { 589 + groups = "i2c1_grp"; 590 + function = "i2c1"; 591 + }; 592 + }; 593 + 594 + jtag_pmx0: jtag@0 { 595 + jtag_0 { 596 + groups = "jtag_grp0"; 597 + function = "jtag_m0"; 598 + }; 599 + }; 600 + 601 + ks_kas_spi_pmx0: ks_kas_spi@0 { 602 + ks_kas_spi_0 { 603 + groups = "ks_kas_spi_grp0"; 604 + function = "ks_kas_spi_m0"; 605 + }; 606 + }; 607 + 608 + ld_ldd_pmx: ld_ldd@0 { 609 + ld_ldd { 610 + groups = "ld_ldd_grp"; 611 + function = "ld_ldd"; 612 + }; 613 + }; 614 + 615 + ld_ldd_16bit_pmx: ld_ldd_16bit@0 { 616 + ld_ldd_16bit { 617 + groups = "ld_ldd_16bit_grp"; 618 + function = "ld_ldd_16bit"; 619 + }; 620 + }; 621 + 622 + ld_ldd_fck_pmx: ld_ldd_fck@0 { 623 + ld_ldd_fck { 624 + groups = "ld_ldd_fck_grp"; 625 + function = "ld_ldd_fck"; 626 + }; 627 + }; 628 + 629 + ld_ldd_lck_pmx: ld_ldd_lck@0 { 630 + ld_ldd_lck { 631 + groups = "ld_ldd_lck_grp"; 632 + function = "ld_ldd_lck"; 633 + }; 634 + }; 635 + 636 + lr_lcdrom_pmx: lr_lcdrom@0 { 637 + lr_lcdrom { 638 + groups = "lr_lcdrom_grp"; 639 + function = "lr_lcdrom"; 640 + }; 641 + }; 642 + 643 + lvds_analog_pmx: lvds_analog@0 { 644 + lvds_analog { 645 + groups = "lvds_analog_grp"; 646 + function = "lvds_analog"; 647 + }; 648 + }; 649 + 650 + nd_df_pmx: nd_df@0 { 651 + nd_df { 652 + groups = "nd_df_grp"; 653 + function = "nd_df"; 654 + }; 655 + }; 656 + 657 + nd_df_nowp_pmx: nd_df_nowp@0 { 658 + nd_df_nowp { 659 + groups = "nd_df_nowp_grp"; 660 + function = "nd_df_nowp"; 661 + }; 662 + }; 663 + 664 + ps_pmx: ps@0 { 665 + ps { 666 + groups = "ps_grp"; 667 + function = "ps"; 668 + }; 669 + }; 670 + 671 + pwc_core_on_pmx: pwc_core_on@0 { 672 + pwc_core_on { 673 + groups = "pwc_core_on_grp"; 674 + function = "pwc_core_on"; 675 + }; 676 + }; 677 + 678 + pwc_ext_on_pmx: pwc_ext_on@0 { 679 + pwc_ext_on { 680 + groups = "pwc_ext_on_grp"; 681 + function = "pwc_ext_on"; 682 + }; 683 + }; 684 + 685 + pwc_gpio3_clk_pmx: pwc_gpio3_clk@0 { 686 + pwc_gpio3_clk { 687 + groups = 
"pwc_gpio3_clk_grp"; 688 + function = "pwc_gpio3_clk"; 689 + }; 690 + }; 691 + 692 + pwc_io_on_pmx: pwc_io_on@0 { 693 + pwc_io_on { 694 + groups = "pwc_io_on_grp"; 695 + function = "pwc_io_on"; 696 + }; 697 + }; 698 + 699 + pwc_lowbatt_b_pmx0: pwc_lowbatt_b@0 { 700 + pwc_lowbatt_b_0 { 701 + groups = "pwc_lowbatt_b_grp0"; 702 + function = "pwc_lowbatt_b_m0"; 703 + }; 704 + }; 705 + 706 + pwc_mem_on_pmx: pwc_mem_on@0 { 707 + pwc_mem_on { 708 + groups = "pwc_mem_on_grp"; 709 + function = "pwc_mem_on"; 710 + }; 711 + }; 712 + 713 + pwc_on_key_b_pmx0: pwc_on_key_b@0 { 714 + pwc_on_key_b_0 { 715 + groups = "pwc_on_key_b_grp0"; 716 + function = "pwc_on_key_b_m0"; 717 + }; 718 + }; 719 + 720 + pwc_wakeup_src0_pmx: pwc_wakeup_src0@0 { 721 + pwc_wakeup_src0 { 722 + groups = "pwc_wakeup_src0_grp"; 723 + function = "pwc_wakeup_src0"; 724 + }; 725 + }; 726 + 727 + pwc_wakeup_src1_pmx: pwc_wakeup_src1@0 { 728 + pwc_wakeup_src1 { 729 + groups = "pwc_wakeup_src1_grp"; 730 + function = "pwc_wakeup_src1"; 731 + }; 732 + }; 733 + 734 + pwc_wakeup_src2_pmx: pwc_wakeup_src2@0 { 735 + pwc_wakeup_src2 { 736 + groups = "pwc_wakeup_src2_grp"; 737 + function = "pwc_wakeup_src2"; 738 + }; 739 + }; 740 + 741 + pwc_wakeup_src3_pmx: pwc_wakeup_src3@0 { 742 + pwc_wakeup_src3 { 743 + groups = "pwc_wakeup_src3_grp"; 744 + function = "pwc_wakeup_src3"; 745 + }; 746 + }; 747 + 748 + pw_cko0_pmx0: pw_cko0@0 { 749 + pw_cko0_0 { 750 + groups = "pw_cko0_grp0"; 751 + function = "pw_cko0_m0"; 752 + }; 753 + }; 754 + 755 + pw_cko0_pmx1: pw_cko0@1 { 756 + pw_cko0_1 { 757 + groups = "pw_cko0_grp1"; 758 + function = "pw_cko0_m1"; 759 + }; 760 + }; 761 + 762 + pw_cko0_pmx2: pw_cko0@2 { 763 + pw_cko0_2 { 764 + groups = "pw_cko0_grp2"; 765 + function = "pw_cko0_m2"; 766 + }; 767 + }; 768 + 769 + pw_cko1_pmx0: pw_cko1@0 { 770 + pw_cko1_0 { 771 + groups = "pw_cko1_grp0"; 772 + function = "pw_cko1_m0"; 773 + }; 774 + }; 775 + 776 + pw_cko1_pmx1: pw_cko1@1 { 777 + pw_cko1_1 { 778 + groups = "pw_cko1_grp1"; 779 + 
function = "pw_cko1_m1"; 780 + }; 781 + }; 782 + 783 + pw_i2s01_clk_pmx0: pw_i2s01_clk@0 { 784 + pw_i2s01_clk_0 { 785 + groups = "pw_i2s01_clk_grp0"; 786 + function = "pw_i2s01_clk_m0"; 787 + }; 788 + }; 789 + 790 + pw_i2s01_clk_pmx1: pw_i2s01_clk@1 { 791 + pw_i2s01_clk_1 { 792 + groups = "pw_i2s01_clk_grp1"; 793 + function = "pw_i2s01_clk_m1"; 794 + }; 795 + }; 796 + 797 + pw_pwm0_pmx: pw_pwm0@0 { 798 + pw_pwm0 { 799 + groups = "pw_pwm0_grp"; 800 + function = "pw_pwm0"; 801 + }; 802 + }; 803 + 804 + pw_pwm1_pmx: pw_pwm1@0 { 805 + pw_pwm1 { 806 + groups = "pw_pwm1_grp"; 807 + function = "pw_pwm1"; 808 + }; 809 + }; 810 + 811 + pw_pwm2_pmx0: pw_pwm2@0 { 812 + pw_pwm2_0 { 813 + groups = "pw_pwm2_grp0"; 814 + function = "pw_pwm2_m0"; 815 + }; 816 + }; 817 + 818 + pw_pwm2_pmx1: pw_pwm2@1 { 819 + pw_pwm2_1 { 820 + groups = "pw_pwm2_grp1"; 821 + function = "pw_pwm2_m1"; 822 + }; 823 + }; 824 + 825 + pw_pwm3_pmx0: pw_pwm3@0 { 826 + pw_pwm3_0 { 827 + groups = "pw_pwm3_grp0"; 828 + function = "pw_pwm3_m0"; 829 + }; 830 + }; 831 + 832 + pw_pwm3_pmx1: pw_pwm3@1 { 833 + pw_pwm3_1 { 834 + groups = "pw_pwm3_grp1"; 835 + function = "pw_pwm3_m1"; 836 + }; 837 + }; 838 + 839 + pw_pwm_cpu_vol_pmx0: pw_pwm_cpu_vol@0 { 840 + pw_pwm_cpu_vol_0 { 841 + groups = "pw_pwm_cpu_vol_grp0"; 842 + function = "pw_pwm_cpu_vol_m0"; 843 + }; 844 + }; 845 + 846 + pw_pwm_cpu_vol_pmx1: pw_pwm_cpu_vol@1 { 847 + pw_pwm_cpu_vol_1 { 848 + groups = "pw_pwm_cpu_vol_grp1"; 849 + function = "pw_pwm_cpu_vol_m1"; 850 + }; 851 + }; 852 + 853 + pw_backlight_pmx0: pw_backlight@0 { 854 + pw_backlight_0 { 855 + groups = "pw_backlight_grp0"; 856 + function = "pw_backlight_m0"; 857 + }; 858 + }; 859 + 860 + pw_backlight_pmx1: pw_backlight@1 { 861 + pw_backlight_1 { 862 + groups = "pw_backlight_grp1"; 863 + function = "pw_backlight_m1"; 864 + }; 865 + }; 866 + 867 + rg_eth_mac_pmx: rg_eth_mac@0 { 868 + rg_eth_mac { 869 + groups = "rg_eth_mac_grp"; 870 + function = "rg_eth_mac"; 871 + }; 872 + }; 873 + 874 + 
rg_gmac_phy_intr_n_pmx: rg_gmac_phy_intr_n@0 { 875 + rg_gmac_phy_intr_n { 876 + groups = "rg_gmac_phy_intr_n_grp"; 877 + function = "rg_gmac_phy_intr_n"; 878 + }; 879 + }; 880 + 881 + rg_rgmii_mac_pmx: rg_rgmii_mac@0 { 882 + rg_rgmii_mac { 883 + groups = "rg_rgmii_mac_grp"; 884 + function = "rg_rgmii_mac"; 885 + }; 886 + }; 887 + 888 + rg_rgmii_phy_ref_clk_pmx0: rg_rgmii_phy_ref_clk@0 { 889 + rg_rgmii_phy_ref_clk_0 { 890 + groups = 891 + "rg_rgmii_phy_ref_clk_grp0"; 892 + function = 893 + "rg_rgmii_phy_ref_clk_m0"; 894 + }; 895 + }; 896 + 897 + rg_rgmii_phy_ref_clk_pmx1: rg_rgmii_phy_ref_clk@1 { 898 + rg_rgmii_phy_ref_clk_1 { 899 + groups = 900 + "rg_rgmii_phy_ref_clk_grp1"; 901 + function = 902 + "rg_rgmii_phy_ref_clk_m1"; 903 + }; 904 + }; 905 + 906 + sd0_pmx: sd0@0 { 907 + sd0 { 908 + groups = "sd0_grp"; 909 + function = "sd0"; 910 + }; 911 + }; 912 + 913 + sd0_4bit_pmx: sd0_4bit@0 { 914 + sd0_4bit { 915 + groups = "sd0_4bit_grp"; 916 + function = "sd0_4bit"; 917 + }; 918 + }; 919 + 920 + sd1_pmx: sd1@0 { 921 + sd1 { 922 + groups = "sd1_grp"; 923 + function = "sd1"; 924 + }; 925 + }; 926 + 927 + sd1_4bit_pmx0: sd1_4bit@0 { 928 + sd1_4bit_0 { 929 + groups = "sd1_4bit_grp0"; 930 + function = "sd1_4bit_m0"; 931 + }; 932 + }; 933 + 934 + sd1_4bit_pmx1: sd1_4bit@1 { 935 + sd1_4bit_1 { 936 + groups = "sd1_4bit_grp1"; 937 + function = "sd1_4bit_m1"; 938 + }; 939 + }; 940 + 941 + sd2_pmx0: sd2@0 { 942 + sd2_0 { 943 + groups = "sd2_grp0"; 944 + function = "sd2_m0"; 945 + }; 946 + }; 947 + 948 + sd2_no_cdb_pmx0: sd2_no_cdb@0 { 949 + sd2_no_cdb_0 { 950 + groups = "sd2_no_cdb_grp0"; 951 + function = "sd2_no_cdb_m0"; 952 + }; 953 + }; 954 + 955 + sd3_pmx: sd3@0 { 956 + sd3 { 957 + groups = "sd3_grp"; 958 + function = "sd3"; 959 + }; 960 + }; 961 + 962 + sd5_pmx: sd5@0 { 963 + sd5 { 964 + groups = "sd5_grp"; 965 + function = "sd5"; 966 + }; 967 + }; 968 + 969 + sd6_pmx0: sd6@0 { 970 + sd6_0 { 971 + groups = "sd6_grp0"; 972 + function = "sd6_m0"; 973 + }; 974 + }; 975 + 976 + 
sd6_pmx1: sd6@1 { 977 + sd6_1 { 978 + groups = "sd6_grp1"; 979 + function = "sd6_m1"; 980 + }; 981 + }; 982 + 983 + sp0_ext_ldo_on_pmx: sp0_ext_ldo_on@0 { 984 + sp0_ext_ldo_on { 985 + groups = "sp0_ext_ldo_on_grp"; 986 + function = "sp0_ext_ldo_on"; 987 + }; 988 + }; 989 + 990 + sp0_qspi_pmx: sp0_qspi@0 { 991 + sp0_qspi { 992 + groups = "sp0_qspi_grp"; 993 + function = "sp0_qspi"; 994 + }; 995 + }; 996 + 997 + sp1_spi_pmx: sp1_spi@0 { 998 + sp1_spi { 999 + groups = "sp1_spi_grp"; 1000 + function = "sp1_spi"; 1001 + }; 1002 + }; 1003 + 1004 + tpiu_trace_pmx: tpiu_trace@0 { 1005 + tpiu_trace { 1006 + groups = "tpiu_trace_grp"; 1007 + function = "tpiu_trace"; 1008 + }; 1009 + }; 1010 + 1011 + uart0_pmx: uart0@0 { 1012 + uart0 { 1013 + groups = "uart0_grp"; 1014 + function = "uart0"; 1015 + }; 1016 + }; 1017 + 1018 + uart0_nopause_pmx: uart0_nopause@0 { 1019 + uart0_nopause { 1020 + groups = "uart0_nopause_grp"; 1021 + function = "uart0_nopause"; 1022 + }; 1023 + }; 1024 + 1025 + uart1_pmx: uart1@0 { 1026 + uart1 { 1027 + groups = "uart1_grp"; 1028 + function = "uart1"; 1029 + }; 1030 + }; 1031 + 1032 + uart2_pmx: uart2@0 { 1033 + uart2 { 1034 + groups = "uart2_grp"; 1035 + function = "uart2"; 1036 + }; 1037 + }; 1038 + 1039 + uart3_pmx0: uart3@0 { 1040 + uart3_0 { 1041 + groups = "uart3_grp0"; 1042 + function = "uart3_m0"; 1043 + }; 1044 + }; 1045 + 1046 + uart3_pmx1: uart3@1 { 1047 + uart3_1 { 1048 + groups = "uart3_grp1"; 1049 + function = "uart3_m1"; 1050 + }; 1051 + }; 1052 + 1053 + uart3_pmx2: uart3@2 { 1054 + uart3_2 { 1055 + groups = "uart3_grp2"; 1056 + function = "uart3_m2"; 1057 + }; 1058 + }; 1059 + 1060 + uart3_pmx3: uart3@3 { 1061 + uart3_3 { 1062 + groups = "uart3_grp3"; 1063 + function = "uart3_m3"; 1064 + }; 1065 + }; 1066 + 1067 + uart3_nopause_pmx0: uart3_nopause@0 { 1068 + uart3_nopause_0 { 1069 + groups = "uart3_nopause_grp0"; 1070 + function = "uart3_nopause_m0"; 1071 + }; 1072 + }; 1073 + 1074 + uart3_nopause_pmx1: uart3_nopause@1 { 1075 + 
uart3_nopause_1 { 1076 + groups = "uart3_nopause_grp1"; 1077 + function = "uart3_nopause_m1"; 1078 + }; 1079 + }; 1080 + 1081 + uart4_pmx0: uart4@0 { 1082 + uart4_0 { 1083 + groups = "uart4_grp0"; 1084 + function = "uart4_m0"; 1085 + }; 1086 + }; 1087 + 1088 + uart4_pmx1: uart4@1 { 1089 + uart4_1 { 1090 + groups = "uart4_grp1"; 1091 + function = "uart4_m1"; 1092 + }; 1093 + }; 1094 + 1095 + uart4_pmx2: uart4@2 { 1096 + uart4_2 { 1097 + groups = "uart4_grp2"; 1098 + function = "uart4_m2"; 1099 + }; 1100 + }; 1101 + 1102 + uart4_nopause_pmx: uart4_nopause@0 { 1103 + uart4_nopause { 1104 + groups = "uart4_nopause_grp"; 1105 + function = "uart4_nopause"; 1106 + }; 1107 + }; 1108 + 1109 + usb0_drvvbus_pmx: usb0_drvvbus@0 { 1110 + usb0_drvvbus { 1111 + groups = "usb0_drvvbus_grp"; 1112 + function = "usb0_drvvbus"; 1113 + }; 1114 + }; 1115 + 1116 + usb1_drvvbus_pmx: usb1_drvvbus@0 { 1117 + usb1_drvvbus { 1118 + groups = "usb1_drvvbus_grp"; 1119 + function = "usb1_drvvbus"; 1120 + }; 1121 + }; 1122 + 1123 + visbus_dout_pmx: visbus_dout@0 { 1124 + visbus_dout { 1125 + groups = "visbus_dout_grp"; 1126 + function = "visbus_dout"; 1127 + }; 1128 + }; 1129 + 1130 + vi_vip1_pmx: vi_vip1@0 { 1131 + vi_vip1 { 1132 + groups = "vi_vip1_grp"; 1133 + function = "vi_vip1"; 1134 + }; 1135 + }; 1136 + 1137 + vi_vip1_ext_pmx: vi_vip1_ext@0 { 1138 + vi_vip1_ext { 1139 + groups = "vi_vip1_ext_grp"; 1140 + function = "vi_vip1_ext"; 1141 + }; 1142 + }; 1143 + 1144 + vi_vip1_low8bit_pmx: vi_vip1_low8bit@0 { 1145 + vi_vip1_low8bit { 1146 + groups = "vi_vip1_low8bit_grp"; 1147 + function = "vi_vip1_low8bit"; 1148 + }; 1149 + }; 1150 + 1151 + vi_vip1_high8bit_pmx: vi_vip1_high8bit@0 { 1152 + vi_vip1_high8bit { 1153 + groups = "vi_vip1_high8bit_grp"; 1154 + function = "vi_vip1_high8bit"; 1155 + }; 1156 + }; 138 1157 }; 139 1158 140 1159 pmipc { ··· 1375 356 clock-names = "gpio0_io"; 1376 357 gpio-controller; 1377 358 interrupt-controller; 359 + 360 + gpio-banks = <2>; 361 + gpio-ranges = <&pinctrl 
0 0 0>, 362 + <&pinctrl 32 0 0>; 363 + gpio-ranges-group-names = "lvds_gpio_grp", 364 + "uart_nand_gpio_grp"; 1378 365 }; 1379 366 1380 367 nand@17050000 { ··· 1486 461 #interrupt-cells = <2>; 1487 462 compatible = "sirf,atlas7-gpio"; 1488 463 reg = <0x13300000 0x1000>; 1489 - interrupts = <0 43 0>, <0 44 0>, <0 45 0>; 464 + interrupts = <0 43 0>, <0 44 0>, 465 + <0 45 0>, <0 46 0>; 1490 466 clocks = <&car 84>; 1491 467 clock-names = "gpio1_io"; 1492 468 gpio-controller; 1493 469 interrupt-controller; 470 + 471 + gpio-banks = <4>; 472 + gpio-ranges = <&pinctrl 0 0 0>, 473 + <&pinctrl 32 0 0>, 474 + <&pinctrl 64 0 0>, 475 + <&pinctrl 96 0 0>; 476 + gpio-ranges-group-names = "gnss_gpio_grp", 477 + "lcd_vip_gpio_grp", 478 + "sdio_i2s_gpio_grp", 479 + "sp_rgmii_gpio_grp"; 1494 480 }; 1495 481 1496 482 sd2: sdhci@14200000 { ··· 1780 744 interrupts = <0 47 0>; 1781 745 gpio-controller; 1782 746 interrupt-controller; 747 + 748 + gpio-banks = <1>; 749 + gpio-ranges = <&pinctrl 0 0 0>; 750 + gpio-ranges-group-names = "rtc_gpio_grp"; 1783 751 }; 1784 752 1785 753 rtc-iobg@18840000 {
+4
arch/arm/boot/dts/cros-ec-keyboard.dtsi
··· 22 22 MATRIX_KEY(0x00, 0x02, KEY_F1) 23 23 MATRIX_KEY(0x00, 0x03, KEY_B) 24 24 MATRIX_KEY(0x00, 0x04, KEY_F10) 25 + MATRIX_KEY(0x00, 0x05, KEY_RO) 25 26 MATRIX_KEY(0x00, 0x06, KEY_N) 26 27 MATRIX_KEY(0x00, 0x08, KEY_EQUAL) 27 28 MATRIX_KEY(0x00, 0x0a, KEY_RIGHTALT) ··· 35 34 MATRIX_KEY(0x01, 0x08, KEY_APOSTROPHE) 36 35 MATRIX_KEY(0x01, 0x09, KEY_F9) 37 36 MATRIX_KEY(0x01, 0x0b, KEY_BACKSPACE) 37 + MATRIX_KEY(0x01, 0x0c, KEY_HENKAN) 38 38 39 39 MATRIX_KEY(0x02, 0x00, KEY_LEFTCTRL) 40 40 MATRIX_KEY(0x02, 0x01, KEY_TAB) ··· 47 45 MATRIX_KEY(0x02, 0x07, KEY_102ND) 48 46 MATRIX_KEY(0x02, 0x08, KEY_LEFTBRACE) 49 47 MATRIX_KEY(0x02, 0x09, KEY_F8) 48 + MATRIX_KEY(0x02, 0x0a, KEY_YEN) 50 49 51 50 MATRIX_KEY(0x03, 0x01, KEY_GRAVE) 52 51 MATRIX_KEY(0x03, 0x02, KEY_F2) ··· 56 53 MATRIX_KEY(0x03, 0x06, KEY_6) 57 54 MATRIX_KEY(0x03, 0x08, KEY_MINUS) 58 55 MATRIX_KEY(0x03, 0x0b, KEY_BACKSLASH) 56 + MATRIX_KEY(0x03, 0x0c, KEY_MUHENKAN) 59 57 60 58 MATRIX_KEY(0x04, 0x00, KEY_RIGHTCTRL) 61 59 MATRIX_KEY(0x04, 0x01, KEY_A)
+3 -2
arch/arm/boot/dts/dra7-evm.dts
··· 686 686 687 687 &dcan1 { 688 688 status = "ok"; 689 - pinctrl-names = "default", "sleep"; 690 - pinctrl-0 = <&dcan1_pins_default>; 689 + pinctrl-names = "default", "sleep", "active"; 690 + pinctrl-0 = <&dcan1_pins_sleep>; 691 691 pinctrl-1 = <&dcan1_pins_sleep>; 692 + pinctrl-2 = <&dcan1_pins_default>; 692 693 };
+3 -2
arch/arm/boot/dts/dra72-evm.dts
··· 587 587 588 588 &dcan1 { 589 589 status = "ok"; 590 - pinctrl-names = "default", "sleep"; 591 - pinctrl-0 = <&dcan1_pins_default>; 590 + pinctrl-names = "default", "sleep", "active"; 591 + pinctrl-0 = <&dcan1_pins_sleep>; 592 592 pinctrl-1 = <&dcan1_pins_sleep>; 593 + pinctrl-2 = <&dcan1_pins_default>; 593 594 }; 594 595 595 596 &qspi {
+1
arch/arm/boot/dts/imx23.dtsi
··· 468 468 interrupts = <36 37 38 39 40 41 42 43 44>; 469 469 status = "disabled"; 470 470 clocks = <&clks 26>; 471 + #io-channel-cells = <1>; 471 472 }; 472 473 473 474 spdif@80054000 {
+6 -6
arch/arm/boot/dts/imx27.dtsi
··· 108 108 }; 109 109 110 110 gpt1: timer@10003000 { 111 - compatible = "fsl,imx27-gpt", "fsl,imx1-gpt"; 111 + compatible = "fsl,imx27-gpt", "fsl,imx21-gpt"; 112 112 reg = <0x10003000 0x1000>; 113 113 interrupts = <26>; 114 114 clocks = <&clks IMX27_CLK_GPT1_IPG_GATE>, ··· 117 117 }; 118 118 119 119 gpt2: timer@10004000 { 120 - compatible = "fsl,imx27-gpt", "fsl,imx1-gpt"; 120 + compatible = "fsl,imx27-gpt", "fsl,imx21-gpt"; 121 121 reg = <0x10004000 0x1000>; 122 122 interrupts = <25>; 123 123 clocks = <&clks IMX27_CLK_GPT2_IPG_GATE>, ··· 126 126 }; 127 127 128 128 gpt3: timer@10005000 { 129 - compatible = "fsl,imx27-gpt", "fsl,imx1-gpt"; 129 + compatible = "fsl,imx27-gpt", "fsl,imx21-gpt"; 130 130 reg = <0x10005000 0x1000>; 131 131 interrupts = <24>; 132 132 clocks = <&clks IMX27_CLK_GPT3_IPG_GATE>, ··· 376 376 }; 377 377 378 378 gpt4: timer@10019000 { 379 - compatible = "fsl,imx27-gpt", "fsl,imx1-gpt"; 379 + compatible = "fsl,imx27-gpt", "fsl,imx21-gpt"; 380 380 reg = <0x10019000 0x1000>; 381 381 interrupts = <4>; 382 382 clocks = <&clks IMX27_CLK_GPT4_IPG_GATE>, ··· 385 385 }; 386 386 387 387 gpt5: timer@1001a000 { 388 - compatible = "fsl,imx27-gpt", "fsl,imx1-gpt"; 388 + compatible = "fsl,imx27-gpt", "fsl,imx21-gpt"; 389 389 reg = <0x1001a000 0x1000>; 390 390 interrupts = <3>; 391 391 clocks = <&clks IMX27_CLK_GPT5_IPG_GATE>, ··· 436 436 }; 437 437 438 438 gpt6: timer@1001f000 { 439 - compatible = "fsl,imx27-gpt", "fsl,imx1-gpt"; 439 + compatible = "fsl,imx27-gpt", "fsl,imx21-gpt"; 440 440 reg = <0x1001f000 0x1000>; 441 441 interrupts = <2>; 442 442 clocks = <&clks IMX27_CLK_GPT6_IPG_GATE>,
+3 -2
arch/arm/boot/dts/imx53-qsb-common.dtsi
··· 295 295 &tve { 296 296 pinctrl-names = "default"; 297 297 pinctrl-0 = <&pinctrl_vga_sync>; 298 + ddc-i2c-bus = <&i2c2>; 298 299 fsl,tve-mode = "vga"; 299 - fsl,hsync-pin = <4>; 300 - fsl,vsync-pin = <6>; 300 + fsl,hsync-pin = <7>; /* IPU DI1 PIN7 via EIM_OE */ 301 + fsl,vsync-pin = <8>; /* IPU DI1 PIN8 via EIM_RW */ 301 302 status = "okay"; 302 303 }; 303 304
+2 -1
arch/arm/boot/dts/k2e.dtsi
··· 86 86 gpio,syscon-dev = <&devctrl 0x240>; 87 87 }; 88 88 89 - pcie@21020000 { 89 + pcie1: pcie@21020000 { 90 90 compatible = "ti,keystone-pcie","snps,dw-pcie"; 91 91 clocks = <&clkpcie1>; 92 92 clock-names = "pcie"; ··· 96 96 ranges = <0x81000000 0 0 0x23260000 0x4000 0x4000 97 97 0x82000000 0 0x60000000 0x60000000 0 0x10000000>; 98 98 99 + status = "disabled"; 99 100 device_type = "pci"; 100 101 num-lanes = <2>; 101 102
+2 -1
arch/arm/boot/dts/keystone.dtsi
··· 286 286 ti,syscon-dev = <&devctrl 0x2a0>; 287 287 }; 288 288 289 - pcie@21800000 { 289 + pcie0: pcie@21800000 { 290 290 compatible = "ti,keystone-pcie", "snps,dw-pcie"; 291 291 clocks = <&clkpcie>; 292 292 clock-names = "pcie"; ··· 296 296 ranges = <0x81000000 0 0 0x23250000 0 0x4000 297 297 0x82000000 0 0x50000000 0x50000000 0 0x10000000>; 298 298 299 + status = "disabled"; 299 300 device_type = "pci"; 300 301 num-lanes = <2>; 301 302
+1 -1
arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi
··· 120 120 121 121 lcd0: display@0 { 122 122 compatible = "lgphilips,lb035q02"; 123 - label = "lcd"; 123 + label = "lcd35"; 124 124 125 125 reg = <1>; /* CS1 */ 126 126 spi-max-frequency = <10000000>;
+1 -1
arch/arm/boot/dts/omap3-overo-common-lcd43.dtsi
··· 98 98 99 99 lcd0: display@0 { 100 100 compatible = "samsung,lte430wq-f0c", "panel-dpi"; 101 - label = "lcd"; 101 + label = "lcd43"; 102 102 103 103 pinctrl-names = "default"; 104 104 pinctrl-0 = <&lte430_pins>;
+2
arch/arm/boot/dts/omap4.dtsi
··· 551 551 reg = <0x4a066000 0x100>; 552 552 interrupts = <GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>; 553 553 ti,hwmods = "mmu_dsp"; 554 + #iommu-cells = <0>; 554 555 }; 555 556 556 557 mmu_ipu: mmu@55082000 { ··· 559 558 reg = <0x55082000 0x100>; 560 559 interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>; 561 560 ti,hwmods = "mmu_ipu"; 561 + #iommu-cells = <0>; 562 562 ti,iommu-bus-err-back; 563 563 }; 564 564
+2
arch/arm/boot/dts/omap5.dtsi
··· 612 612 reg = <0x4a066000 0x100>; 613 613 interrupts = <GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>; 614 614 ti,hwmods = "mmu_dsp"; 615 + #iommu-cells = <0>; 615 616 }; 616 617 617 618 mmu_ipu: mmu@55082000 { ··· 620 619 reg = <0x55082000 0x100>; 621 620 interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>; 622 621 ti,hwmods = "mmu_ipu"; 622 + #iommu-cells = <0>; 623 623 ti,iommu-bus-err-back; 624 624 }; 625 625
+16 -16
arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
··· 60 60 rxc-skew-ps = <2000>; 61 61 }; 62 62 63 + &gpio2 { 64 + status = "okay"; 65 + }; 66 + 67 + &i2c1 { 68 + status = "okay"; 69 + 70 + accel1: accelerometer@53 { 71 + compatible = "adi,adxl345"; 72 + reg = <0x53>; 73 + 74 + interrupt-parent = <&portc>; 75 + interrupts = <3 2>; 76 + }; 77 + }; 78 + 63 79 &mmc0 { 64 80 vmmc-supply = <&regulator_3_3v>; 65 81 vqmmc-supply = <&regulator_3_3v>; ··· 83 67 84 68 &usb1 { 85 69 status = "okay"; 86 - }; 87 - 88 - &gpio2 { 89 - status = "okay"; 90 - }; 91 - 92 - &i2c1{ 93 - status = "okay"; 94 - 95 - accel1: accel1@53{ 96 - compatible = "adxl34x"; 97 - reg = <0x53>; 98 - 99 - interrupt-parent = < &portc >; 100 - interrupts = <3 2>; 101 - }; 102 70 };
+1 -1
arch/arm/boot/dts/spear1310-evb.dts
··· 1 1 /* 2 2 * DTS file for SPEAr1310 Evaluation Baord 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear1310.dtsi
··· 1 1 /* 2 2 * DTS file for all SPEAr1310 SoCs 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear1340-evb.dts
··· 1 1 /* 2 2 * DTS file for SPEAr1340 Evaluation Baord 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear1340.dtsi
··· 1 1 /* 2 2 * DTS file for all SPEAr1340 SoCs 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear13xx.dtsi
··· 1 1 /* 2 2 * DTS file for all SPEAr13xx SoCs 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear300-evb.dts
··· 1 1 /* 2 2 * DTS file for SPEAr300 Evaluation Baord 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear300.dtsi
··· 1 1 /* 2 2 * DTS file for SPEAr300 SoC 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear310-evb.dts
··· 1 1 /* 2 2 * DTS file for SPEAr310 Evaluation Baord 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear310.dtsi
··· 1 1 /* 2 2 * DTS file for SPEAr310 SoC 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear320-evb.dts
··· 1 1 /* 2 2 * DTS file for SPEAr320 Evaluation Baord 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear320.dtsi
··· 1 1 /* 2 2 * DTS file for SPEAr320 SoC 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear3xx.dtsi
··· 1 1 /* 2 2 * DTS file for all SPEAr3xx SoCs 3 3 * 4 - * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com> 4 + * Copyright 2012 Viresh Kumar <vireshk@kernel.org> 5 5 * 6 6 * The code contained herein is licensed under the GNU General Public 7 7 * License. You may obtain a copy of the GNU General Public License
+7
arch/arm/boot/dts/ste-ccu8540.dts
··· 17 17 model = "ST-Ericsson U8540 platform with Device Tree"; 18 18 compatible = "st-ericsson,ccu8540", "st-ericsson,u8540"; 19 19 20 + /* This stablilizes the serial port enumeration */ 21 + aliases { 22 + serial0 = &ux500_serial0; 23 + serial1 = &ux500_serial1; 24 + serial2 = &ux500_serial2; 25 + }; 26 + 20 27 memory@0 { 21 28 device_type = "memory"; 22 29 reg = <0x20000000 0x1f000000>, <0xc0000000 0x3f000000>;
+7
arch/arm/boot/dts/ste-ccu9540.dts
··· 16 16 model = "ST-Ericsson CCU9540 platform with Device Tree"; 17 17 compatible = "st-ericsson,ccu9540", "st-ericsson,u9540"; 18 18 19 + /* This stablilizes the serial port enumeration */ 20 + aliases { 21 + serial0 = &ux500_serial0; 22 + serial1 = &ux500_serial1; 23 + serial2 = &ux500_serial2; 24 + }; 25 + 19 26 memory { 20 27 reg = <0x00000000 0x20000000>; 21 28 };
+3 -3
arch/arm/boot/dts/ste-dbx5x0.dtsi
··· 971 971 power-domains = <&pm_domains DOMAIN_VAPE>; 972 972 }; 973 973 974 - uart@80120000 { 974 + ux500_serial0: uart@80120000 { 975 975 compatible = "arm,pl011", "arm,primecell"; 976 976 reg = <0x80120000 0x1000>; 977 977 interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>; ··· 986 986 status = "disabled"; 987 987 }; 988 988 989 - uart@80121000 { 989 + ux500_serial1: uart@80121000 { 990 990 compatible = "arm,pl011", "arm,primecell"; 991 991 reg = <0x80121000 0x1000>; 992 992 interrupts = <0 19 IRQ_TYPE_LEVEL_HIGH>; ··· 1001 1001 status = "disabled"; 1002 1002 }; 1003 1003 1004 - uart@80007000 { 1004 + ux500_serial2: uart@80007000 { 1005 1005 compatible = "arm,pl011", "arm,primecell"; 1006 1006 reg = <0x80007000 0x1000>; 1007 1007 interrupts = <0 26 IRQ_TYPE_LEVEL_HIGH>;
+1 -1
arch/arm/boot/dts/ste-href.dtsi
··· 32 32 status = "okay"; 33 33 }; 34 34 35 + /* This UART is unused and thus left disabled */ 35 36 uart@80121000 { 36 37 pinctrl-names = "default", "sleep"; 37 38 pinctrl-0 = <&uart1_default_mode>; 38 39 pinctrl-1 = <&uart1_sleep_mode>; 39 - status = "okay"; 40 40 }; 41 41 42 42 uart@80007000 {
+7
arch/arm/boot/dts/ste-hrefprev60-stuib.dts
··· 17 17 model = "ST-Ericsson HREF (pre-v60) and ST UIB"; 18 18 compatible = "st-ericsson,mop500", "st-ericsson,u8500"; 19 19 20 + /* This stablilizes the serial port enumeration */ 21 + aliases { 22 + serial0 = &ux500_serial0; 23 + serial1 = &ux500_serial1; 24 + serial2 = &ux500_serial2; 25 + }; 26 + 20 27 soc { 21 28 /* Reset line for the BU21013 touchscreen */ 22 29 i2c@80110000 {
+7
arch/arm/boot/dts/ste-hrefprev60-tvk.dts
··· 16 16 / { 17 17 model = "ST-Ericsson HREF (pre-v60) and TVK1281618 UIB"; 18 18 compatible = "st-ericsson,mop500", "st-ericsson,u8500"; 19 + 20 + /* This stablilizes the serial port enumeration */ 21 + aliases { 22 + serial0 = &ux500_serial0; 23 + serial1 = &ux500_serial1; 24 + serial2 = &ux500_serial2; 25 + }; 19 26 };
+5
arch/arm/boot/dts/ste-hrefprev60.dtsi
··· 23 23 }; 24 24 25 25 soc { 26 + /* Enable UART1 on this board */ 27 + uart@80121000 { 28 + status = "okay"; 29 + }; 30 + 26 31 i2c@80004000 { 27 32 tps61052@33 { 28 33 compatible = "tps61052";
+7
arch/arm/boot/dts/ste-hrefv60plus-stuib.dts
··· 19 19 model = "ST-Ericsson HREF (v60+) and ST UIB"; 20 20 compatible = "st-ericsson,hrefv60+", "st-ericsson,u8500"; 21 21 22 + /* This stablilizes the serial port enumeration */ 23 + aliases { 24 + serial0 = &ux500_serial0; 25 + serial1 = &ux500_serial1; 26 + serial2 = &ux500_serial2; 27 + }; 28 + 22 29 soc { 23 30 /* Reset line for the BU21013 touchscreen */ 24 31 i2c@80110000 {
+7
arch/arm/boot/dts/ste-hrefv60plus-tvk.dts
··· 18 18 / { 19 19 model = "ST-Ericsson HREF (v60+) and TVK1281618 UIB"; 20 20 compatible = "st-ericsson,hrefv60+", "st-ericsson,u8500"; 21 + 22 + /* This stablilizes the serial port enumeration */ 23 + aliases { 24 + serial0 = &ux500_serial0; 25 + serial1 = &ux500_serial1; 26 + serial2 = &ux500_serial2; 27 + }; 21 28 };
+23 -2
arch/arm/boot/dts/ste-hrefv60plus.dtsi
··· 43 43 <&vaudio_hf_hrefv60_mode>, 44 44 <&gbf_hrefv60_mode>, 45 45 <&hdtv_hrefv60_mode>, 46 - <&touch_hrefv60_mode>; 46 + <&touch_hrefv60_mode>, 47 + <&gpios_hrefv60_mode>; 47 48 48 49 sdi0 { 49 - /* SD card detect GPIO pin, extend default state */ 50 50 sdi0_default_mode: sdi0_default { 51 + /* SD card detect GPIO pin, extend default state */ 51 52 default_hrefv60_cfg1 { 52 53 pins = "GPIO95_E8"; 53 54 ste,config = <&gpio_in_pu>; 55 + }; 56 + /* VMMCI level-shifter enable */ 57 + default_hrefv60_cfg2 { 58 + pins = "GPIO169_D22"; 59 + ste,config = <&gpio_out_lo>; 60 + }; 61 + /* VMMCI level-shifter voltage select */ 62 + default_hrefv60_cfg3 { 63 + pins = "GPIO5_AG6"; 64 + ste,config = <&gpio_out_hi>; 54 65 }; 55 66 }; 56 67 }; ··· 221 210 hrefv60_cfg2 { 222 211 pins ="GPIO66_G3"; 223 212 ste,config = <&gpio_out_lo>; 213 + }; 214 + }; 215 + }; 216 + gpios { 217 + /* Dangling GPIO pins */ 218 + gpios_hrefv60_mode: gpios_hrefv60 { 219 + default_cfg1 { 220 + /* Normally UART1 RXD, now dangling */ 221 + pins = "GPIO4_AH6"; 222 + ste,config = <&in_pu>; 224 223 }; 225 224 }; 226 225 };
+23 -2
arch/arm/boot/dts/ste-snowball.dts
··· 18 18 model = "Calao Systems Snowball platform with device tree"; 19 19 compatible = "calaosystems,snowball-a9500", "st-ericsson,u9500"; 20 20 21 + /* This stablilizes the serial port enumeration */ 22 + aliases { 23 + serial0 = &ux500_serial0; 24 + serial1 = &ux500_serial1; 25 + serial2 = &ux500_serial2; 26 + }; 27 + 21 28 memory { 22 29 reg = <0x00000000 0x20000000>; 23 30 }; ··· 230 223 status = "okay"; 231 224 }; 232 225 226 + /* This UART is unused and thus left disabled */ 233 227 uart@80121000 { 234 228 pinctrl-names = "default", "sleep"; 235 229 pinctrl-0 = <&uart1_default_mode>; 236 230 pinctrl-1 = <&uart1_sleep_mode>; 237 - status = "okay"; 238 231 }; 239 232 240 233 uart@80007000 { ··· 459 452 pins = "GPIO21_AB3"; /* DAT31DIR */ 460 453 ste,config = <&out_hi>; 461 454 }; 462 - 455 + /* SD card detect GPIO pin, extend default state */ 456 + snowball_cfg2 { 457 + pins = "GPIO218_AH11"; 458 + ste,config = <&gpio_in_pu>; 459 + }; 460 + /* VMMCI level-shifter enable */ 461 + snowball_cfg3 { 462 + pins = "GPIO217_AH12"; 463 + ste,config = <&gpio_out_lo>; 464 + }; 465 + /* VMMCI level-shifter voltage select */ 466 + snowball_cfg4 { 467 + pins = "GPIO228_AJ6"; 468 + ste,config = <&gpio_out_hi>; 469 + }; 463 470 }; 464 471 }; 465 472 ssp0 {
+23 -2
arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
··· 150 150 interface-type = "ace"; 151 151 reg = <0x5000 0x1000>; 152 152 }; 153 + 154 + pmu@9000 { 155 + compatible = "arm,cci-400-pmu,r0"; 156 + reg = <0x9000 0x5000>; 157 + interrupts = <0 105 4>, 158 + <0 101 4>, 159 + <0 102 4>, 160 + <0 103 4>, 161 + <0 104 4>; 162 + }; 153 163 }; 154 164 155 165 memory-controller@7ffd0000 { ··· 197 187 <1 10 0xf08>; 198 188 }; 199 189 200 - pmu { 190 + pmu_a15 { 201 191 compatible = "arm,cortex-a15-pmu"; 202 192 interrupts = <0 68 4>, 203 193 <0 69 4>; 204 - interrupt-affinity = <&cpu0>, <&cpu1>; 194 + interrupt-affinity = <&cpu0>, 195 + <&cpu1>; 196 + }; 197 + 198 + pmu_a7 { 199 + compatible = "arm,cortex-a7-pmu"; 200 + interrupts = <0 128 4>, 201 + <0 129 4>, 202 + <0 130 4>; 203 + interrupt-affinity = <&cpu2>, 204 + <&cpu3>, 205 + <&cpu4>; 205 206 }; 206 207 207 208 oscclk6a: oscclk6a {
-1
arch/arm/configs/multi_v7_defconfig
··· 353 353 CONFIG_POWER_RESET_GPIO=y 354 354 CONFIG_POWER_RESET_GPIO_RESTART=y 355 355 CONFIG_POWER_RESET_KEYSTONE=y 356 - CONFIG_POWER_RESET_SUN6I=y 357 356 CONFIG_POWER_RESET_RMOBILE=y 358 357 CONFIG_SENSORS_LM90=y 359 358 CONFIG_SENSORS_LM95245=y
+5 -1
arch/arm/configs/sunxi_defconfig
··· 2 2 CONFIG_HIGH_RES_TIMERS=y 3 3 CONFIG_BLK_DEV_INITRD=y 4 4 CONFIG_PERF_EVENTS=y 5 + CONFIG_MODULES=y 5 6 CONFIG_ARCH_SUNXI=y 6 7 CONFIG_SMP=y 7 8 CONFIG_NR_CPUS=8 ··· 78 77 CONFIG_GPIO_SYSFS=y 79 78 CONFIG_POWER_SUPPLY=y 80 79 CONFIG_POWER_RESET=y 81 - CONFIG_POWER_RESET_SUN6I=y 82 80 CONFIG_THERMAL=y 83 81 CONFIG_CPU_THERMAL=y 84 82 CONFIG_WATCHDOG=y ··· 87 87 CONFIG_REGULATOR_FIXED_VOLTAGE=y 88 88 CONFIG_REGULATOR_AXP20X=y 89 89 CONFIG_REGULATOR_GPIO=y 90 + CONFIG_FB=y 91 + CONFIG_FB_SIMPLE=y 92 + CONFIG_FRAMEBUFFER_CONSOLE=y 93 + CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y 90 94 CONFIG_USB=y 91 95 CONFIG_USB_EHCI_HCD=y 92 96 CONFIG_USB_EHCI_HCD_PLATFORM=y
+1
arch/arm/include/asm/Kbuild
··· 13 13 generic-y += local.h 14 14 generic-y += local64.h 15 15 generic-y += mcs_spinlock.h 16 + generic-y += mm-arch-hooks.h 16 17 generic-y += msgbuf.h 17 18 generic-y += param.h 18 19 generic-y += parport.h
+58 -17
arch/arm/include/asm/io.h
··· 140 140 * The _caller variety takes a __builtin_return_address(0) value for 141 141 * /proc/vmalloc to use - and should only be used in non-inline functions. 142 142 */ 143 - extern void __iomem *__arm_ioremap_pfn_caller(unsigned long, unsigned long, 144 - size_t, unsigned int, void *); 145 143 extern void __iomem *__arm_ioremap_caller(phys_addr_t, size_t, unsigned int, 146 144 void *); 147 - 148 145 extern void __iomem *__arm_ioremap_pfn(unsigned long, unsigned long, size_t, unsigned int); 149 - extern void __iomem *__arm_ioremap(phys_addr_t, size_t, unsigned int); 150 146 extern void __iomem *__arm_ioremap_exec(phys_addr_t, size_t, bool cached); 151 147 extern void __iounmap(volatile void __iomem *addr); 152 - extern void __arm_iounmap(volatile void __iomem *addr); 153 148 154 149 extern void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t, 155 150 unsigned int, void *); ··· 316 321 static inline void memset_io(volatile void __iomem *dst, unsigned c, 317 322 size_t count) 318 323 { 319 - memset((void __force *)dst, c, count); 324 + extern void mmioset(void *, unsigned int, size_t); 325 + mmioset((void __force *)dst, c, count); 320 326 } 321 327 #define memset_io(dst,c,count) memset_io(dst,c,count) 322 328 323 329 static inline void memcpy_fromio(void *to, const volatile void __iomem *from, 324 330 size_t count) 325 331 { 326 - memcpy(to, (const void __force *)from, count); 332 + extern void mmiocpy(void *, const void *, size_t); 333 + mmiocpy(to, (const void __force *)from, count); 327 334 } 328 335 #define memcpy_fromio(to,from,count) memcpy_fromio(to,from,count) 329 336 330 337 static inline void memcpy_toio(volatile void __iomem *to, const void *from, 331 338 size_t count) 332 339 { 333 - memcpy((void __force *)to, from, count); 340 + extern void mmiocpy(void *, const void *, size_t); 341 + mmiocpy((void __force *)to, from, count); 334 342 } 335 343 #define memcpy_toio(to,from,count) memcpy_toio(to,from,count) 336 344 ··· 346 348 #endif /* readl */ 
347 349 348 350 /* 349 - * ioremap and friends. 351 + * ioremap() and friends. 350 352 * 351 - * ioremap takes a PCI memory address, as specified in 352 - * Documentation/io-mapping.txt. 353 + * ioremap() takes a resource address, and size. Due to the ARM memory 354 + * types, it is important to use the correct ioremap() function as each 355 + * mapping has specific properties. 353 356 * 357 + * Function Memory type Cacheability Cache hint 358 + * ioremap() Device n/a n/a 359 + * ioremap_nocache() Device n/a n/a 360 + * ioremap_cache() Normal Writeback Read allocate 361 + * ioremap_wc() Normal Non-cacheable n/a 362 + * ioremap_wt() Normal Non-cacheable n/a 363 + * 364 + * All device mappings have the following properties: 365 + * - no access speculation 366 + * - no repetition (eg, on return from an exception) 367 + * - number, order and size of accesses are maintained 368 + * - unaligned accesses are "unpredictable" 369 + * - writes may be delayed before they hit the endpoint device 370 + * 371 + * ioremap_nocache() is the same as ioremap() as there are too many device 372 + * drivers using this for device registers, and documentation which tells 373 + * people to use it for such for this to be any different. This is not a 374 + * safe fallback for memory-like mappings, or memory regions where the 375 + * compiler may generate unaligned accesses - eg, via inlining its own 376 + * memcpy. 
377 + * 378 + * All normal memory mappings have the following properties: 379 + * - reads can be repeated with no side effects 380 + * - repeated reads return the last value written 381 + * - reads can fetch additional locations without side effects 382 + * - writes can be repeated (in certain cases) with no side effects 383 + * - writes can be merged before accessing the target 384 + * - unaligned accesses can be supported 385 + * - ordering is not guaranteed without explicit dependencies or barrier 386 + * instructions 387 + * - writes may be delayed before they hit the endpoint memory 388 + * 389 + * The cache hint is only a performance hint: CPUs may alias these hints. 390 + * Eg, a CPU not implementing read allocate but implementing write allocate 391 + * will provide a write allocate mapping instead. 354 392 */ 355 - #define ioremap(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE) 356 - #define ioremap_nocache(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE) 357 - #define ioremap_cache(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_CACHED) 358 - #define ioremap_wc(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE_WC) 359 - #define ioremap_wt(cookie,size) __arm_ioremap((cookie), (size), MT_DEVICE) 360 - #define iounmap __arm_iounmap 393 + void __iomem *ioremap(resource_size_t res_cookie, size_t size); 394 + #define ioremap ioremap 395 + #define ioremap_nocache ioremap 396 + 397 + void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size); 398 + #define ioremap_cache ioremap_cache 399 + 400 + void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size); 401 + #define ioremap_wc ioremap_wc 402 + #define ioremap_wt ioremap_wc 403 + 404 + void iounmap(volatile void __iomem *iomem_cookie); 405 + #define iounmap iounmap 361 406 362 407 /* 363 408 * io{read,write}{16,32}be() macros
+2 -2
arch/arm/include/asm/memory.h
··· 275 275 */ 276 276 #define __pa(x) __virt_to_phys((unsigned long)(x)) 277 277 #define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x))) 278 - #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT) 278 + #define pfn_to_kaddr(pfn) __va((phys_addr_t)(pfn) << PAGE_SHIFT) 279 279 280 280 extern phys_addr_t (*arch_virt_to_idmap)(unsigned long x); 281 281 ··· 286 286 */ 287 287 static inline phys_addr_t __virt_to_idmap(unsigned long x) 288 288 { 289 - if (arch_virt_to_idmap) 289 + if (IS_ENABLED(CONFIG_MMU) && arch_virt_to_idmap) 290 290 return arch_virt_to_idmap(x); 291 291 else 292 292 return __virt_to_phys(x);
-15
arch/arm/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_ARM_MM_ARCH_HOOKS_H 13 - #define _ASM_ARM_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_ARM_MM_ARCH_HOOKS_H */
+30 -1
arch/arm/include/asm/pgtable-2level.h
··· 129 129 130 130 /* 131 131 * These are the memory types, defined to be compatible with 132 - * pre-ARMv6 CPUs cacheable and bufferable bits: XXCB 132 + * pre-ARMv6 CPUs cacheable and bufferable bits: n/a,n/a,C,B 133 + * ARMv6+ without TEX remapping, they are a table index. 134 + * ARMv6+ with TEX remapping, they correspond to n/a,TEX(0),C,B 135 + * 136 + * MT type Pre-ARMv6 ARMv6+ type / cacheable status 137 + * UNCACHED Uncached Strongly ordered 138 + * BUFFERABLE Bufferable Normal memory / non-cacheable 139 + * WRITETHROUGH Writethrough Normal memory / write through 140 + * WRITEBACK Writeback Normal memory / write back, read alloc 141 + * MINICACHE Minicache N/A 142 + * WRITEALLOC Writeback Normal memory / write back, write alloc 143 + * DEV_SHARED Uncached Device memory (shared) 144 + * DEV_NONSHARED Uncached Device memory (non-shared) 145 + * DEV_WC Bufferable Normal memory / non-cacheable 146 + * DEV_CACHED Writeback Normal memory / write back, read alloc 147 + * VECTORS Variable Normal memory / variable 148 + * 149 + * All normal memory mappings have the following properties: 150 + * - reads can be repeated with no side effects 151 + * - repeated reads return the last value written 152 + * - reads can fetch additional locations without side effects 153 + * - writes can be repeated (in certain cases) with no side effects 154 + * - writes can be merged before accessing the target 155 + * - unaligned accesses can be supported 156 + * 157 + * All device mappings have the following properties: 158 + * - no access speculation 159 + * - no repetition (eg, on return from an exception) 160 + * - number, order and size of accesses are maintained 161 + * - unaligned accesses are "unpredictable" 133 162 */ 134 163 #define L_PTE_MT_UNCACHED (_AT(pteval_t, 0x00) << 2) /* 0000 */ 135 164 #define L_PTE_MT_BUFFERABLE (_AT(pteval_t, 0x01) << 2) /* 0001 */
+6
arch/arm/kernel/armksyms.c
··· 50 50 51 51 extern void fpundefinstr(void); 52 52 53 + void mmioset(void *, unsigned int, size_t); 54 + void mmiocpy(void *, const void *, size_t); 55 + 53 56 /* platform dependent support */ 54 57 EXPORT_SYMBOL(arm_delay_ops); 55 58 ··· 90 87 EXPORT_SYMBOL(memmove); 91 88 EXPORT_SYMBOL(memchr); 92 89 EXPORT_SYMBOL(__memzero); 90 + 91 + EXPORT_SYMBOL(mmioset); 92 + EXPORT_SYMBOL(mmiocpy); 93 93 94 94 #ifdef CONFIG_MMU 95 95 EXPORT_SYMBOL(copy_page);
+1 -1
arch/arm/kernel/entry-armv.S
··· 410 410 zero_fp 411 411 412 412 .if \trace 413 - #ifdef CONFIG_IRQSOFF_TRACER 413 + #ifdef CONFIG_TRACE_IRQFLAGS 414 414 bl trace_hardirqs_off 415 415 #endif 416 416 ct_user_exit save = 0
+2 -1
arch/arm/kernel/perf_event.c
··· 818 818 if (arch_find_n_match_cpu_physical_id(dn, cpu, NULL)) 819 819 break; 820 820 821 - of_node_put(dn); 822 821 if (cpu >= nr_cpu_ids) { 823 822 pr_warn("Failed to find logical CPU for %s\n", 824 823 dn->name); 824 + of_node_put(dn); 825 825 break; 826 826 } 827 + of_node_put(dn); 827 828 828 829 irqs[i] = cpu; 829 830 cpumask_set_cpu(cpu, &pmu->supported_cpus);
+1 -1
arch/arm/kernel/reboot.c
··· 50 50 flush_cache_all(); 51 51 52 52 /* Switch to the identity mapping. */ 53 - phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); 53 + phys_reset = (phys_reset_t)(unsigned long)virt_to_idmap(cpu_reset); 54 54 phys_reset((unsigned long)addr); 55 55 56 56 /* Should never get here. */
+2 -2
arch/arm/kernel/smp.c
··· 578 578 struct pt_regs *old_regs = set_irq_regs(regs); 579 579 580 580 if ((unsigned)ipinr < NR_IPI) { 581 - trace_ipi_entry(ipi_types[ipinr]); 581 + trace_ipi_entry_rcuidle(ipi_types[ipinr]); 582 582 __inc_irq_stat(cpu, ipi_irqs[ipinr]); 583 583 } 584 584 ··· 637 637 } 638 638 639 639 if ((unsigned)ipinr < NR_IPI) 640 - trace_ipi_exit(ipi_types[ipinr]); 640 + trace_ipi_exit_rcuidle(ipi_types[ipinr]); 641 641 set_irq_regs(old_regs); 642 642 } 643 643
+2
arch/arm/lib/memcpy.S
··· 61 61 62 62 /* Prototype: void *memcpy(void *dest, const void *src, size_t n); */ 63 63 64 + ENTRY(mmiocpy) 64 65 ENTRY(memcpy) 65 66 66 67 #include "copy_template.S" 67 68 68 69 ENDPROC(memcpy) 70 + ENDPROC(mmiocpy)
+2
arch/arm/lib/memset.S
··· 16 16 .text 17 17 .align 5 18 18 19 + ENTRY(mmioset) 19 20 ENTRY(memset) 20 21 UNWIND( .fnstart ) 21 22 ands r3, r0, #3 @ 1 unaligned? ··· 134 133 b 1b 135 134 UNWIND( .fnend ) 136 135 ENDPROC(memset) 136 + ENDPROC(mmioset)
+6 -21
arch/arm/mach-imx/gpc.c
··· 291 291 } 292 292 } 293 293 294 - #ifdef CONFIG_PM_GENERIC_DOMAINS 295 - 296 294 static void _imx6q_pm_pu_power_off(struct generic_pm_domain *genpd) 297 295 { 298 296 int iso, iso2sw; ··· 397 399 static int imx_gpc_genpd_init(struct device *dev, struct regulator *pu_reg) 398 400 { 399 401 struct clk *clk; 400 - bool is_off; 401 402 int i; 402 403 403 404 imx6q_pu_domain.reg = pu_reg; ··· 413 416 } 414 417 imx6q_pu_domain.num_clks = i; 415 418 416 - is_off = IS_ENABLED(CONFIG_PM); 417 - if (is_off) { 418 - _imx6q_pm_pu_power_off(&imx6q_pu_domain.base); 419 - } else { 420 - /* 421 - * Enable power if compiled without CONFIG_PM in case the 422 - * bootloader disabled it. 423 - */ 424 - imx6q_pm_pu_power_on(&imx6q_pu_domain.base); 425 - } 419 + /* Enable power always in case bootloader disabled it. */ 420 + imx6q_pm_pu_power_on(&imx6q_pu_domain.base); 426 421 427 - pm_genpd_init(&imx6q_pu_domain.base, NULL, is_off); 422 + if (!IS_ENABLED(CONFIG_PM_GENERIC_DOMAINS)) 423 + return 0; 424 + 425 + pm_genpd_init(&imx6q_pu_domain.base, NULL, false); 428 426 return of_genpd_add_provider_onecell(dev->of_node, 429 427 &imx_gpc_onecell_data); 430 428 ··· 428 436 clk_put(imx6q_pu_domain.clk[i]); 429 437 return -EINVAL; 430 438 } 431 - 432 - #else 433 - static inline int imx_gpc_genpd_init(struct device *dev, struct regulator *reg) 434 - { 435 - return 0; 436 - } 437 - #endif /* CONFIG_PM_GENERIC_DOMAINS */ 438 439 439 440 static int imx_gpc_probe(struct platform_device *pdev) 440 441 {
+1
arch/arm/mach-omap2/Kconfig
··· 60 60 select ARM_GIC 61 61 select MACH_OMAP_GENERIC 62 62 select MIGHT_HAVE_CACHE_L2X0 63 + select HAVE_ARM_SCU 63 64 64 65 config SOC_DRA7XX 65 66 bool "TI DRA7XX"
-1
arch/arm/mach-omap2/dma.c
··· 117 117 u8 revision = dma_read(REVISION, 0) & 0xff; 118 118 printk(KERN_INFO "OMAP DMA hardware revision %d.%d\n", 119 119 revision >> 4, revision & 0xf); 120 - return; 121 120 } 122 121 123 122 static unsigned configure_dma_errata(void)
+1
arch/arm/mach-prima2/Kconfig
··· 4 4 select ARCH_REQUIRE_GPIOLIB 5 5 select GENERIC_IRQ_CHIP 6 6 select NO_IOPORT_MAP 7 + select REGMAP 7 8 select PINCTRL 8 9 select PINCTRL_SIRF 9 10 help
+45 -3
arch/arm/mach-prima2/rtciobrg.c
··· 1 1 /* 2 - * RTC I/O Bridge interfaces for CSR SiRFprimaII 2 + * RTC I/O Bridge interfaces for CSR SiRFprimaII/atlas7 3 3 * ARM access the registers of SYSRTC, GPSRTC and PWRC through this module 4 4 * 5 5 * Copyright (c) 2011 Cambridge Silicon Radio Limited, a CSR plc group company. ··· 10 10 #include <linux/kernel.h> 11 11 #include <linux/module.h> 12 12 #include <linux/io.h> 13 + #include <linux/regmap.h> 13 14 #include <linux/of.h> 14 15 #include <linux/of_address.h> 15 16 #include <linux/of_device.h> ··· 67 66 { 68 67 unsigned long flags, val; 69 68 69 + /* TODO: add hwspinlock to sync with M3 */ 70 70 spin_lock_irqsave(&rtciobrg_lock, flags); 71 71 72 72 val = __sirfsoc_rtc_iobrg_readl(addr); ··· 92 90 { 93 91 unsigned long flags; 94 92 93 + /* TODO: add hwspinlock to sync with M3 */ 95 94 spin_lock_irqsave(&rtciobrg_lock, flags); 96 95 97 96 sirfsoc_rtc_iobrg_pre_writel(val, addr); ··· 104 101 spin_unlock_irqrestore(&rtciobrg_lock, flags); 105 102 } 106 103 EXPORT_SYMBOL_GPL(sirfsoc_rtc_iobrg_writel); 104 + 105 + 106 + static int regmap_iobg_regwrite(void *context, unsigned int reg, 107 + unsigned int val) 108 + { 109 + sirfsoc_rtc_iobrg_writel(val, reg); 110 + return 0; 111 + } 112 + 113 + static int regmap_iobg_regread(void *context, unsigned int reg, 114 + unsigned int *val) 115 + { 116 + *val = (u32)sirfsoc_rtc_iobrg_readl(reg); 117 + return 0; 118 + } 119 + 120 + static struct regmap_bus regmap_iobg = { 121 + .reg_write = regmap_iobg_regwrite, 122 + .reg_read = regmap_iobg_regread, 123 + }; 124 + 125 + /** 126 + * devm_regmap_init_iobg(): Initialise managed register map 127 + * 128 + * @iobg: Device that will be interacted with 129 + * @config: Configuration for register map 130 + * 131 + * The return value will be an ERR_PTR() on error or a valid pointer 132 + * to a struct regmap. The regmap will be automatically freed by the 133 + * device management code. 
134 + */ 135 + struct regmap *devm_regmap_init_iobg(struct device *dev, 136 + const struct regmap_config *config) 137 + { 138 + const struct regmap_bus *bus = &regmap_iobg; 139 + 140 + return devm_regmap_init(dev, bus, dev, config); 141 + } 142 + EXPORT_SYMBOL_GPL(devm_regmap_init_iobg); 107 143 108 144 static const struct of_device_id rtciobrg_ids[] = { 109 145 { .compatible = "sirf,prima2-rtciobg" }, ··· 174 132 } 175 133 postcore_initcall(sirfsoc_rtciobrg_init); 176 134 177 - MODULE_AUTHOR("Zhiwu Song <zhiwu.song@csr.com>, " 178 - "Barry Song <baohua.song@csr.com>"); 135 + MODULE_AUTHOR("Zhiwu Song <zhiwu.song@csr.com>"); 136 + MODULE_AUTHOR("Barry Song <baohua.song@csr.com>"); 179 137 MODULE_DESCRIPTION("CSR SiRFprimaII rtc io bridge"); 180 138 MODULE_LICENSE("GPL v2");
+3
arch/arm/mach-pxa/capc7117.c
··· 24 24 #include <linux/ata_platform.h> 25 25 #include <linux/serial_8250.h> 26 26 #include <linux/gpio.h> 27 + #include <linux/regulator/machine.h> 27 28 28 29 #include <asm/mach-types.h> 29 30 #include <asm/mach/arch.h> ··· 145 144 146 145 capc7117_uarts_init(); 147 146 capc7117_ide_init(); 147 + 148 + regulator_has_full_constraints(); 148 149 } 149 150 150 151 MACHINE_START(CAPC7117,
+3
arch/arm/mach-pxa/cm-x2xx.c
··· 13 13 #include <linux/syscore_ops.h> 14 14 #include <linux/irq.h> 15 15 #include <linux/gpio.h> 16 + #include <linux/regulator/machine.h> 16 17 17 18 #include <linux/dm9000.h> 18 19 #include <linux/leds.h> ··· 467 466 cmx2xx_init_ac97(); 468 467 cmx2xx_init_touchscreen(); 469 468 cmx2xx_init_leds(); 469 + 470 + regulator_has_full_constraints(); 470 471 } 471 472 472 473 static void __init cmx2xx_init_irq(void)
+2
arch/arm/mach-pxa/cm-x300.c
··· 835 835 cm_x300_init_ac97(); 836 836 cm_x300_init_wi2wi(); 837 837 cm_x300_init_bl(); 838 + 839 + regulator_has_full_constraints(); 838 840 } 839 841 840 842 static void __init cm_x300_fixup(struct tag *tags, char **cmdline)
+3
arch/arm/mach-pxa/colibri-pxa270.c
··· 18 18 #include <linux/mtd/partitions.h> 19 19 #include <linux/mtd/physmap.h> 20 20 #include <linux/platform_device.h> 21 + #include <linux/regulator/machine.h> 21 22 #include <linux/ucb1400.h> 22 23 23 24 #include <asm/mach/arch.h> ··· 295 294 printk(KERN_ERR "Illegal colibri_pxa270_baseboard type %d\n", 296 295 colibri_pxa270_baseboard); 297 296 } 297 + 298 + regulator_has_full_constraints(); 298 299 } 299 300 300 301 /* The "Income s.r.o. SH-Dmaster PXA270 SBC" board can be booted either
+2
arch/arm/mach-pxa/em-x270.c
··· 1306 1306 em_x270_init_i2c(); 1307 1307 em_x270_init_camera(); 1308 1308 em_x270_userspace_consumers_init(); 1309 + 1310 + regulator_has_full_constraints(); 1309 1311 } 1310 1312 1311 1313 MACHINE_START(EM_X270, "Compulab EM-X270")
+3
arch/arm/mach-pxa/icontrol.c
··· 26 26 #include <linux/spi/spi.h> 27 27 #include <linux/spi/pxa2xx_spi.h> 28 28 #include <linux/can/platform/mcp251x.h> 29 + #include <linux/regulator/machine.h> 29 30 30 31 #include "generic.h" 31 32 ··· 186 185 mxm_8x10_mmc_init(); 187 186 188 187 icontrol_can_init(); 188 + 189 + regulator_has_full_constraints(); 189 190 } 190 191 191 192 MACHINE_START(ICONTROL, "iControl/SafeTcam boards using Embedian MXM-8x10 CoM")
+3
arch/arm/mach-pxa/trizeps4.c
··· 26 26 #include <linux/dm9000.h> 27 27 #include <linux/mtd/physmap.h> 28 28 #include <linux/mtd/partitions.h> 29 + #include <linux/regulator/machine.h> 29 30 #include <linux/i2c/pxa-i2c.h> 30 31 31 32 #include <asm/types.h> ··· 535 534 536 535 BCR_writew(trizeps_conxs_bcr); 537 536 board_backlight_power(1); 537 + 538 + regulator_has_full_constraints(); 538 539 } 539 540 540 541 static void __init trizeps4_map_io(void)
+3
arch/arm/mach-pxa/vpac270.c
··· 24 24 #include <linux/dm9000.h> 25 25 #include <linux/ucb1400.h> 26 26 #include <linux/ata_platform.h> 27 + #include <linux/regulator/machine.h> 27 28 #include <linux/regulator/max1586.h> 28 29 #include <linux/i2c/pxa-i2c.h> 29 30 ··· 712 711 vpac270_ts_init(); 713 712 vpac270_rtc_init(); 714 713 vpac270_ide_init(); 714 + 715 + regulator_has_full_constraints(); 715 716 } 716 717 717 718 MACHINE_START(VPAC270, "Voipac PXA270")
+2
arch/arm/mach-pxa/zeus.c
··· 868 868 i2c_register_board_info(0, ARRAY_AND_SIZE(zeus_i2c_devices)); 869 869 pxa2xx_set_spi_info(3, &pxa2xx_spi_ssp3_master_info); 870 870 spi_register_board_info(zeus_spi_board_info, ARRAY_SIZE(zeus_spi_board_info)); 871 + 872 + regulator_has_full_constraints(); 871 873 } 872 874 873 875 static struct map_desc zeus_io_desc[] __initdata = {
+1 -1
arch/arm/mach-spear/generic.h
··· 3 3 * 4 4 * Copyright (C) 2009-2012 ST Microelectronics 5 5 * Rajeev Kumar <rajeev-dlh.kumar@st.com> 6 - * Viresh Kumar <viresh.linux@gmail.com> 6 + * Viresh Kumar <vireshk@kernel.org> 7 7 * 8 8 * This file is licensed under the terms of the GNU General Public 9 9 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/include/mach/irqs.h
··· 3 3 * 4 4 * Copyright (C) 2009-2012 ST Microelectronics 5 5 * Rajeev Kumar <rajeev-dlh.kumar@st.com> 6 - * Viresh Kumar <viresh.linux@gmail.com> 6 + * Viresh Kumar <vireshk@kernel.org> 7 7 * 8 8 * This file is licensed under the terms of the GNU General Public 9 9 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/include/mach/misc_regs.h
··· 4 4 * Miscellaneous registers definitions for SPEAr3xx machine family 5 5 * 6 6 * Copyright (C) 2009 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/include/mach/spear.h
··· 3 3 * 4 4 * Copyright (C) 2009,2012 ST Microelectronics 5 5 * Rajeev Kumar<rajeev-dlh.kumar@st.com> 6 - * Viresh Kumar <viresh.linux@gmail.com> 6 + * Viresh Kumar <vireshk@kernel.org> 7 7 * 8 8 * This file is licensed under the terms of the GNU General Public 9 9 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/include/mach/uncompress.h
··· 4 4 * Serial port stubs for kernel decompress status messages 5 5 * 6 6 * Copyright (C) 2009 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/pl080.c
··· 4 4 * DMAC pl080 definitions for SPEAr platform 5 5 * 6 6 * Copyright (C) 2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/pl080.h
··· 4 4 * DMAC pl080 definitions for SPEAr platform 5 5 * 6 6 * Copyright (C) 2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/restart.c
··· 4 4 * SPEAr platform specific restart functions 5 5 * 6 6 * Copyright (C) 2009 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/spear1310.c
··· 4 4 * SPEAr1310 machine source file 5 5 * 6 6 * Copyright (C) 2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/spear1340.c
··· 4 4 * SPEAr1340 machine source file 5 5 * 6 6 * Copyright (C) 2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/spear13xx.c
··· 4 4 * SPEAr13XX machines common source file 5 5 * 6 6 * Copyright (C) 2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/spear300.c
··· 4 4 * SPEAr300 machine source file 5 5 * 6 6 * Copyright (C) 2009-2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/spear310.c
··· 4 4 * SPEAr310 machine source file 5 5 * 6 6 * Copyright (C) 2009-2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/spear320.c
··· 4 4 * SPEAr320 machine source file 5 5 * 6 6 * Copyright (C) 2009-2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-spear/spear3xx.c
··· 4 4 * SPEAr3XX machines common source file 5 5 * 6 6 * Copyright (C) 2009-2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
arch/arm/mach-sunxi/Kconfig
··· 35 35 select SUN5I_HSTIMER 36 36 37 37 config MACH_SUN8I 38 - bool "Allwinner A23 (sun8i) SoCs support" 38 + bool "Allwinner sun8i Family SoCs support" 39 39 default ARCH_SUNXI 40 40 select ARM_GIC 41 41 select MFD_SUN6I_PRCM
+4 -1
arch/arm/mach-sunxi/sunxi.c
··· 67 67 68 68 static const char * const sun8i_board_dt_compat[] = { 69 69 "allwinner,sun8i-a23", 70 + "allwinner,sun8i-a33", 71 + "allwinner,sun8i-h3", 70 72 NULL, 71 73 }; 72 74 73 - DT_MACHINE_START(SUN8I_DT, "Allwinner sun8i (A23) Family") 75 + DT_MACHINE_START(SUN8I_DT, "Allwinner sun8i Family") 76 + .init_time = sun6i_timer_init, 74 77 .dt_compat = sun8i_board_dt_compat, 75 78 .init_late = sunxi_dt_cpufreq_init, 76 79 MACHINE_END
+1 -1
arch/arm/mm/dma-mapping.c
··· 1971 1971 { 1972 1972 int next_bitmap; 1973 1973 1974 - if (mapping->nr_bitmaps > mapping->extensions) 1974 + if (mapping->nr_bitmaps >= mapping->extensions) 1975 1975 return -EINVAL; 1976 1976 1977 1977 next_bitmap = mapping->nr_bitmaps;
+23 -10
arch/arm/mm/ioremap.c
··· 255 255 }
256 256 #endif
257 257
258 - void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
258 + static void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
259 259 unsigned long offset, size_t size, unsigned int mtype, void *caller)
260 260 {
261 261 const struct mem_type *type;
··· 363 363 unsigned int mtype)
364 364 {
365 365 return __arm_ioremap_pfn_caller(pfn, offset, size, mtype,
366 - __builtin_return_address(0));
366 + __builtin_return_address(0));
367 367 }
368 368 EXPORT_SYMBOL(__arm_ioremap_pfn);
369 369
··· 371 371 unsigned int, void *) =
372 372 __arm_ioremap_caller;
373 373
374 - void __iomem *
375 - __arm_ioremap(phys_addr_t phys_addr, size_t size, unsigned int mtype)
374 + void __iomem *ioremap(resource_size_t res_cookie, size_t size)
376 375 {
377 - return arch_ioremap_caller(phys_addr, size, mtype,
378 - __builtin_return_address(0));
376 + return arch_ioremap_caller(res_cookie, size, MT_DEVICE,
377 + __builtin_return_address(0));
379 378 }
380 - EXPORT_SYMBOL(__arm_ioremap);
379 + EXPORT_SYMBOL(ioremap);
380 +
381 + void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size)
382 + {
383 + return arch_ioremap_caller(res_cookie, size, MT_DEVICE_CACHED,
384 + __builtin_return_address(0));
385 + }
386 + EXPORT_SYMBOL(ioremap_cache);
387 +
388 + void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size)
389 + {
390 + return arch_ioremap_caller(res_cookie, size, MT_DEVICE_WC,
391 + __builtin_return_address(0));
392 + }
393 + EXPORT_SYMBOL(ioremap_wc);
381 394
382 395 /*
383 396 * Remap an arbitrary physical address space into the kernel virtual
··· 444 431
445 432 void (*arch_iounmap)(volatile void __iomem *) = __iounmap;
446 433
447 - void __arm_iounmap(volatile void __iomem *io_addr)
434 + void iounmap(volatile void __iomem *cookie)
448 435 {
449 - arch_iounmap(io_addr);
436 + arch_iounmap(cookie);
450 437 }
451 - EXPORT_SYMBOL(__arm_iounmap);
438 + EXPORT_SYMBOL(iounmap);
452 439
453 440 #ifdef CONFIG_PCI
454 441 static int pci_ioremap_mem_type = MT_DEVICE;
+7
arch/arm/mm/mmu.c
··· 1072 1072 int highmem = 0; 1073 1073 phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1; 1074 1074 struct memblock_region *reg; 1075 + bool should_use_highmem = false; 1075 1076 1076 1077 for_each_memblock(memory, reg) { 1077 1078 phys_addr_t block_start = reg->base; ··· 1091 1090 pr_notice("Ignoring RAM at %pa-%pa (!CONFIG_HIGHMEM)\n", 1092 1091 &block_start, &block_end); 1093 1092 memblock_remove(reg->base, reg->size); 1093 + should_use_highmem = true; 1094 1094 continue; 1095 1095 } 1096 1096 ··· 1102 1100 &block_start, &block_end, &vmalloc_limit); 1103 1101 memblock_remove(vmalloc_limit, overlap_size); 1104 1102 block_end = vmalloc_limit; 1103 + should_use_highmem = true; 1105 1104 } 1106 1105 } 1107 1106 ··· 1136 1133 1137 1134 } 1138 1135 } 1136 + 1137 + if (should_use_highmem) 1138 + pr_notice("Consider using a HIGHMEM enabled kernel.\n"); 1139 1139 1140 1140 high_memory = __va(arm_lowmem_limit - 1) + 1; 1141 1141 ··· 1500 1494 build_mem_type_table(); 1501 1495 prepare_page_table(); 1502 1496 map_lowmem(); 1497 + memblock_set_current_limit(arm_lowmem_limit); 1503 1498 dma_contiguous_remap(); 1504 1499 devicemaps_init(mdesc); 1505 1500 kmap_init();
+31 -18
arch/arm/mm/nommu.c
··· 351 351 } 352 352 EXPORT_SYMBOL(__arm_ioremap_pfn); 353 353 354 - void __iomem *__arm_ioremap_pfn_caller(unsigned long pfn, unsigned long offset, 355 - size_t size, unsigned int mtype, void *caller) 356 - { 357 - return __arm_ioremap_pfn(pfn, offset, size, mtype); 358 - } 359 - 360 - void __iomem *__arm_ioremap(phys_addr_t phys_addr, size_t size, 361 - unsigned int mtype) 362 - { 363 - return (void __iomem *)phys_addr; 364 - } 365 - EXPORT_SYMBOL(__arm_ioremap); 366 - 367 - void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t, unsigned int, void *); 368 - 369 354 void __iomem *__arm_ioremap_caller(phys_addr_t phys_addr, size_t size, 370 355 unsigned int mtype, void *caller) 371 356 { 372 - return __arm_ioremap(phys_addr, size, mtype); 357 + return (void __iomem *)phys_addr; 373 358 } 359 + 360 + void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t, unsigned int, void *); 361 + 362 + void __iomem *ioremap(resource_size_t res_cookie, size_t size) 363 + { 364 + return __arm_ioremap_caller(res_cookie, size, MT_DEVICE, 365 + __builtin_return_address(0)); 366 + } 367 + EXPORT_SYMBOL(ioremap); 368 + 369 + void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size) 370 + { 371 + return __arm_ioremap_caller(res_cookie, size, MT_DEVICE_CACHED, 372 + __builtin_return_address(0)); 373 + } 374 + EXPORT_SYMBOL(ioremap_cache); 375 + 376 + void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size) 377 + { 378 + return __arm_ioremap_caller(res_cookie, size, MT_DEVICE_WC, 379 + __builtin_return_address(0)); 380 + } 381 + EXPORT_SYMBOL(ioremap_wc); 382 + 383 + void __iounmap(volatile void __iomem *addr) 384 + { 385 + } 386 + EXPORT_SYMBOL(__iounmap); 374 387 375 388 void (*arch_iounmap)(volatile void __iomem *); 376 389 377 - void __arm_iounmap(volatile void __iomem *addr) 390 + void iounmap(volatile void __iomem *addr) 378 391 { 379 392 } 380 - EXPORT_SYMBOL(__arm_iounmap); 393 + EXPORT_SYMBOL(iounmap);
+9 -5
arch/arm/mm/proc-v7.S
··· 274 274 __v7_b15mp_setup: 275 275 __v7_ca17mp_setup: 276 276 mov r10, #0 277 - 1: 277 + 1: adr r12, __v7_setup_stack @ the local stack 278 + stmia r12, {r0-r5, lr} @ v7_invalidate_l1 touches r0-r6 279 + bl v7_invalidate_l1 280 + ldmia r12, {r0-r5, lr} 278 281 #ifdef CONFIG_SMP 279 282 ALT_SMP(mrc p15, 0, r0, c1, c0, 1) 280 283 ALT_UP(mov r0, #(1 << 6)) @ fake it for UP ··· 286 283 orreq r0, r0, r10 @ Enable CPU-specific SMP bits 287 284 mcreq p15, 0, r0, c1, c0, 1 288 285 #endif 289 - b __v7_setup 286 + b __v7_setup_cont 290 287 291 288 /* 292 289 * Errata: ··· 416 413 417 414 __v7_setup: 418 415 adr r12, __v7_setup_stack @ the local stack 419 - stmia r12, {r0-r5, r7, r9, r11, lr} 416 + stmia r12, {r0-r5, lr} @ v7_invalidate_l1 touches r0-r6 420 417 bl v7_invalidate_l1 421 - ldmia r12, {r0-r5, r7, r9, r11, lr} 418 + ldmia r12, {r0-r5, lr} 422 419 420 + __v7_setup_cont: 423 421 and r0, r9, #0xff000000 @ ARM? 424 422 teq r0, #0x41000000 425 423 bne __errata_finish ··· 484 480 485 481 .align 2 486 482 __v7_setup_stack: 487 - .space 4 * 11 @ 11 registers 483 + .space 4 * 7 @ 12 registers 488 484 489 485 __INITDATA 490 486
+33 -23
arch/arm/vdso/vdsomunge.c
··· 45 45 * it does.
46 46 */
47 47
48 - #define _GNU_SOURCE
49 -
50 48 #include <byteswap.h>
51 49 #include <elf.h>
52 50 #include <errno.h>
53 - #include <error.h>
54 51 #include <fcntl.h>
52 + #include <stdarg.h>
55 53 #include <stdbool.h>
56 54 #include <stdio.h>
57 55 #include <stdlib.h>
··· 80 82 #define EF_ARM_ABI_FLOAT_HARD 0x400
81 83 #endif
82 84
85 + static int failed;
86 + static const char *argv0;
83 87 static const char *outfile;
88 +
89 + static void fail(const char *fmt, ...)
90 + {
91 + va_list ap;
92 +
93 + failed = 1;
94 + fprintf(stderr, "%s: ", argv0);
95 + va_start(ap, fmt);
96 + vfprintf(stderr, fmt, ap);
97 + va_end(ap);
98 + exit(EXIT_FAILURE);
99 + }
84 100
85 101 static void cleanup(void)
86 102 {
87 - if (error_message_count > 0 && outfile != NULL)
103 + if (failed && outfile != NULL)
88 104 unlink(outfile);
89 105 }
90 106
··· 131 119 int infd;
132 120
133 121 atexit(cleanup);
122 + argv0 = argv[0];
134 123
135 124 if (argc != 3)
136 - error(EXIT_FAILURE, 0, "Usage: %s [infile] [outfile]", argv[0]);
125 + fail("Usage: %s [infile] [outfile]\n", argv[0]);
137 126
138 127 infile = argv[1];
139 128 outfile = argv[2];
140 129
141 130 infd = open(infile, O_RDONLY);
142 131 if (infd < 0)
143 - error(EXIT_FAILURE, errno, "Cannot open %s", infile);
132 + fail("Cannot open %s: %s\n", infile, strerror(errno));
144 133
145 134 if (fstat(infd, &stat) != 0)
146 - error(EXIT_FAILURE, errno, "Failed stat for %s", infile);
135 + fail("Failed stat for %s: %s\n", infile, strerror(errno));
147 136
148 137 inbuf = mmap(NULL, stat.st_size, PROT_READ, MAP_PRIVATE, infd, 0);
149 138 if (inbuf == MAP_FAILED)
150 - error(EXIT_FAILURE, errno, "Failed to map %s", infile);
139 + fail("Failed to map %s: %s\n", infile, strerror(errno));
151 140
152 141 close(infd);
153 142
154 143 inhdr = inbuf;
155 144
156 145 if (memcmp(&inhdr->e_ident, ELFMAG, SELFMAG) != 0)
157 - error(EXIT_FAILURE, 0, "Not an ELF file");
146 + fail("Not an ELF file\n");
158 147
159 148 if (inhdr->e_ident[EI_CLASS] != ELFCLASS32)
160 - error(EXIT_FAILURE, 0, "Unsupported ELF class");
149 + fail("Unsupported ELF class\n");
161 150
162 151 swap = inhdr->e_ident[EI_DATA] != HOST_ORDER;
163 152
164 153 if (read_elf_half(inhdr->e_type, swap) != ET_DYN)
165 - error(EXIT_FAILURE, 0, "Not a shared object");
154 + fail("Not a shared object\n");
166 155
167 - if (read_elf_half(inhdr->e_machine, swap) != EM_ARM) {
168 - error(EXIT_FAILURE, 0, "Unsupported architecture %#x",
169 - inhdr->e_machine);
170 - }
156 + if (read_elf_half(inhdr->e_machine, swap) != EM_ARM)
157 + fail("Unsupported architecture %#x\n", inhdr->e_machine);
171 158
172 159 e_flags = read_elf_word(inhdr->e_flags, swap);
173 160
174 161 if (EF_ARM_EABI_VERSION(e_flags) != EF_ARM_EABI_VER5) {
175 - error(EXIT_FAILURE, 0, "Unsupported EABI version %#x",
176 - EF_ARM_EABI_VERSION(e_flags));
162 + fail("Unsupported EABI version %#x\n",
163 + EF_ARM_EABI_VERSION(e_flags));
177 164 }
178 165
179 166 if (e_flags & EF_ARM_ABI_FLOAT_HARD)
180 - error(EXIT_FAILURE, 0,
181 - "Unexpected hard-float flag set in e_flags");
167 + fail("Unexpected hard-float flag set in e_flags\n");
182 168
183 169 clear_soft_float = !!(e_flags & EF_ARM_ABI_FLOAT_SOFT);
184 170
185 171 outfd = open(outfile, O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
186 172 if (outfd < 0)
187 - error(EXIT_FAILURE, errno, "Cannot open %s", outfile);
173 + fail("Cannot open %s: %s\n", outfile, strerror(errno));
188 174
189 175 if (ftruncate(outfd, stat.st_size) != 0)
190 - error(EXIT_FAILURE, errno, "Cannot truncate %s", outfile);
176 + fail("Cannot truncate %s: %s\n", outfile, strerror(errno));
191 177
192 178 outbuf = mmap(NULL, stat.st_size, PROT_READ | PROT_WRITE, MAP_SHARED,
193 179 outfd, 0);
194 180 if (outbuf == MAP_FAILED)
195 - error(EXIT_FAILURE, errno, "Failed to map %s", outfile);
181 + fail("Failed to map %s: %s\n", outfile, strerror(errno));
196 182
197 183 close(outfd);
198 184
··· 205 195 }
206 196
207 197 if (msync(outbuf, stat.st_size, MS_SYNC) != 0)
208 - error(EXIT_FAILURE, errno, "Failed to sync %s", outfile);
198 + fail("Failed to sync %s: %s\n", outfile, strerror(errno));
209 199
210 200 return EXIT_SUCCESS;
211 201 }
+1 -1
arch/arm64/Kconfig
··· 23 23 select BUILDTIME_EXTABLE_SORT 24 24 select CLONE_BACKWARDS 25 25 select COMMON_CLK 26 - select EDAC_SUPPORT 27 26 select CPU_PM if (SUSPEND || CPU_IDLE) 28 27 select DCACHE_WORD_ACCESS 28 + select EDAC_SUPPORT 29 29 select GENERIC_ALLOCATOR 30 30 select GENERIC_CLOCKEVENTS 31 31 select GENERIC_CLOCKEVENTS_BROADCAST if SMP
+10
arch/arm64/boot/dts/apm/apm-mustang.dts
··· 23 23 device_type = "memory"; 24 24 reg = < 0x1 0x00000000 0x0 0x80000000 >; /* Updated by bootloader */ 25 25 }; 26 + 27 + gpio-keys { 28 + compatible = "gpio-keys"; 29 + button@1 { 30 + label = "POWER"; 31 + linux,code = <116>; 32 + linux,input-type = <0x1>; 33 + interrupts = <0x0 0x2d 0x1>; 34 + }; 35 + }; 26 36 }; 27 37 28 38 &pcie0clk {
+1
arch/arm64/boot/dts/arm/Makefile
··· 1 1 dtb-$(CONFIG_ARCH_VEXPRESS) += foundation-v8.dtb 2 2 dtb-$(CONFIG_ARCH_VEXPRESS) += juno.dtb juno-r1.dtb 3 3 dtb-$(CONFIG_ARCH_VEXPRESS) += rtsm_ve-aemv8a.dtb 4 + dtb-$(CONFIG_ARCH_VEXPRESS) += vexpress-v2f-1xv7-ca53x2.dtb 4 5 5 6 always := $(dtb-y) 6 7 subdir-y := $(dts-dirs)
+191
arch/arm64/boot/dts/arm/vexpress-v2f-1xv7-ca53x2.dts
··· 1 + /*
2 + * ARM Ltd. Versatile Express
3 + *
4 + * LogicTile Express 20MG
5 + * V2F-1XV7
6 + *
7 + * Cortex-A53 (2 cores) Soft Macrocell Model
8 + *
9 + * HBI-0247C
10 + */
11 +
12 + /dts-v1/;
13 +
14 + #include <dt-bindings/interrupt-controller/arm-gic.h>
15 +
16 + / {
17 + model = "V2F-1XV7 Cortex-A53x2 SMM";
18 + arm,hbi = <0x247>;
19 + arm,vexpress,site = <0xf>;
20 + compatible = "arm,vexpress,v2f-1xv7,ca53x2", "arm,vexpress,v2f-1xv7", "arm,vexpress";
21 + interrupt-parent = <&gic>;
22 + #address-cells = <2>;
23 + #size-cells = <2>;
24 +
25 + chosen {
26 + stdout-path = "serial0:38400n8";
27 + };
28 +
29 + aliases {
30 + serial0 = &v2m_serial0;
31 + serial1 = &v2m_serial1;
32 + serial2 = &v2m_serial2;
33 + serial3 = &v2m_serial3;
34 + i2c0 = &v2m_i2c_dvi;
35 + i2c1 = &v2m_i2c_pcie;
36 + };
37 +
38 + cpus {
39 + #address-cells = <2>;
40 + #size-cells = <0>;
41 +
42 + cpu@0 {
43 + device_type = "cpu";
44 + compatible = "arm,cortex-a53", "arm,armv8";
45 + reg = <0 0>;
46 + next-level-cache = <&L2_0>;
47 + };
48 +
49 + cpu@1 {
50 + device_type = "cpu";
51 + compatible = "arm,cortex-a53", "arm,armv8";
52 + reg = <0 1>;
53 + next-level-cache = <&L2_0>;
54 + };
55 +
56 + L2_0: l2-cache0 {
57 + compatible = "cache";
58 + };
59 + };
60 +
61 + memory@80000000 {
62 + device_type = "memory";
63 + reg = <0 0x80000000 0 0x80000000>; /* 2GB @ 2GB */
64 + };
65 +
66 + gic: interrupt-controller@2c001000 {
67 + compatible = "arm,gic-400";
68 + #interrupt-cells = <3>;
69 + #address-cells = <0>;
70 + interrupt-controller;
71 + reg = <0 0x2c001000 0 0x1000>,
72 + <0 0x2c002000 0 0x2000>,
73 + <0 0x2c004000 0 0x2000>,
74 + <0 0x2c006000 0 0x2000>;
75 + interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>;
76 + };
77 +
78 + timer {
79 + compatible = "arm,armv8-timer";
80 + interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
81 + <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
82 + <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
83 + <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
84 + };
85 +
86 + pmu {
87 + compatible = "arm,armv8-pmuv3";
88 + interrupts = <GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
89 + <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
90 + };
91 +
92 + dcc {
93 + compatible = "arm,vexpress,config-bus";
94 + arm,vexpress,config-bridge = <&v2m_sysreg>;
95 +
96 + smbclk: osc@4 {
97 + /* SMC clock */
98 + compatible = "arm,vexpress-osc";
99 + arm,vexpress-sysreg,func = <1 4>;
100 + freq-range = <40000000 40000000>;
101 + #clock-cells = <0>;
102 + clock-output-names = "smclk";
103 + };
104 +
105 + volt@0 {
106 + /* VIO to expansion board above */
107 + compatible = "arm,vexpress-volt";
108 + arm,vexpress-sysreg,func = <2 0>;
109 + regulator-name = "VIO_UP";
110 + regulator-min-microvolt = <800000>;
111 + regulator-max-microvolt = <1800000>;
112 + regulator-always-on;
113 + };
114 +
115 + volt@1 {
116 + /* 12V from power connector J6 */
117 + compatible = "arm,vexpress-volt";
118 + arm,vexpress-sysreg,func = <2 1>;
119 + regulator-name = "12";
120 + regulator-always-on;
121 + };
122 +
123 + temp@0 {
124 + /* FPGA temperature */
125 + compatible = "arm,vexpress-temp";
126 + arm,vexpress-sysreg,func = <4 0>;
127 + label = "FPGA";
128 + };
129 + };
130 +
131 + smb {
132 + compatible = "simple-bus";
133 +
134 + #address-cells = <2>;
135 + #size-cells = <1>;
136 + ranges = <0 0 0 0x08000000 0x04000000>,
137 + <1 0 0 0x14000000 0x04000000>,
138 + <2 0 0 0x18000000 0x04000000>,
139 + <3 0 0 0x1c000000 0x04000000>,
140 + <4 0 0 0x0c000000 0x04000000>,
141 + <5 0 0 0x10000000 0x04000000>;
142 +
143 + #interrupt-cells = <1>;
144 + interrupt-map-mask = <0 0 63>;
145 + interrupt-map = <0 0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
146 + <0 0 1 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
147 + <0 0 2 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
148 + <0 0 3 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
149 + <0 0 4 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
150 + <0 0 5 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
151 + <0 0 6 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>,
152 + <0 0 7 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
153 + <0 0 8 &gic GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
154 + <0 0 9 &gic GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,
155 + <0 0 10 &gic GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>,
156 + <0 0 11 &gic GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>,
157 + <0 0 12 &gic GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
158 + <0 0 13 &gic GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>,
159 + <0 0 14 &gic GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
160 + <0 0 15 &gic GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>,
161 + <0 0 16 &gic GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
162 + <0 0 17 &gic GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>,
163 + <0 0 18 &gic GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>,
164 + <0 0 19 &gic GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
165 + <0 0 20 &gic GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
166 + <0 0 21 &gic GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
167 + <0 0 22 &gic GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
168 + <0 0 23 &gic GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
169 + <0 0 24 &gic GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
170 + <0 0 25 &gic GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
171 + <0 0 26 &gic GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
172 + <0 0 27 &gic GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
173 + <0 0 28 &gic GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
174 + <0 0 29 &gic GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
175 + <0 0 30 &gic GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
176 + <0 0 31 &gic GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
177 + <0 0 32 &gic GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
178 + <0 0 33 &gic GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
179 + <0 0 34 &gic GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
180 + <0 0 35 &gic GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
181 + <0 0 36 &gic GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
182 + <0 0 37 &gic GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
183 + <0 0 38 &gic GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
184 + <0 0 39 &gic GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
185 + <0 0 40 &gic GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
186 + <0 0 41 &gic GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
187 + <0 0 42 &gic GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
188 +
189 + /include/ "../../../../arm/boot/dts/vexpress-v2m-rs1.dtsi"
190 + };
191 + };
+9
arch/arm64/boot/dts/cavium/thunder-88xx.dtsi
··· 376 376 gic0: interrupt-controller@8010,00000000 { 377 377 compatible = "arm,gic-v3"; 378 378 #interrupt-cells = <3>; 379 + #address-cells = <2>; 380 + #size-cells = <2>; 381 + ranges; 379 382 interrupt-controller; 380 383 reg = <0x8010 0x00000000 0x0 0x010000>, /* GICD */ 381 384 <0x8010 0x80000000 0x0 0x600000>; /* GICR */ 382 385 interrupts = <1 9 0xf04>; 386 + 387 + its: gic-its@8010,00020000 { 388 + compatible = "arm,gic-v3-its"; 389 + msi-controller; 390 + reg = <0x8010 0x20000 0x0 0x200000>; 391 + }; 383 392 }; 384 393 385 394 uaa0: serial@87e0,24000000 {
+1
arch/arm64/configs/defconfig
··· 83 83 CONFIG_ATA=y 84 84 CONFIG_SATA_AHCI=y 85 85 CONFIG_SATA_AHCI_PLATFORM=y 86 + CONFIG_AHCI_CEVA=y 86 87 CONFIG_AHCI_XGENE=y 87 88 CONFIG_PATA_PLATFORM=y 88 89 CONFIG_PATA_OF_PLATFORM=y
+1
arch/arm64/include/asm/Kbuild
··· 25 25 generic-y += local.h 26 26 generic-y += local64.h 27 27 generic-y += mcs_spinlock.h 28 + generic-y += mm-arch-hooks.h 28 29 generic-y += mman.h 29 30 generic-y += msgbuf.h 30 31 generic-y += msi.h
+8
arch/arm64/include/asm/acpi.h
··· 19 19 #include <asm/psci.h> 20 20 #include <asm/smp_plat.h> 21 21 22 + /* Macros for consistency checks of the GICC subtable of MADT */ 23 + #define ACPI_MADT_GICC_LENGTH \ 24 + (acpi_gbl_FADT.header.revision < 6 ? 76 : 80) 25 + 26 + #define BAD_MADT_GICC_ENTRY(entry, end) \ 27 + (!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) || \ 28 + (entry)->header.length != ACPI_MADT_GICC_LENGTH) 29 + 22 30 /* Basic configuration for ACPI */ 23 31 #ifdef CONFIG_ACPI 24 32 /* ACPI table mapping after acpi_gbl_permanent_mmap is set */
-15
arch/arm64/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_ARM64_MM_ARCH_HOOKS_H 13 - #define _ASM_ARM64_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_ARM64_MM_ARCH_HOOKS_H */
+2 -2
arch/arm64/kernel/entry.S
··· 352 352 // TODO: add support for undefined instructions in kernel mode 353 353 enable_dbg 354 354 mov x0, sp 355 + mov x2, x1 355 356 mov x1, #BAD_SYNC 356 - mrs x2, esr_el1 357 357 b bad_mode 358 358 ENDPROC(el1_sync) 359 359 ··· 553 553 ct_user_exit 554 554 mov x0, sp 555 555 mov x1, #BAD_SYNC 556 - mrs x2, esr_el1 556 + mov x2, x25 557 557 bl bad_mode 558 558 b ret_to_user 559 559 ENDPROC(el0_sync)
-2
arch/arm64/kernel/entry32.S
··· 32 32 33 33 ENTRY(compat_sys_sigreturn_wrapper) 34 34 mov x0, sp 35 - mov x27, #0 // prevent syscall restart handling (why) 36 35 b compat_sys_sigreturn 37 36 ENDPROC(compat_sys_sigreturn_wrapper) 38 37 39 38 ENTRY(compat_sys_rt_sigreturn_wrapper) 40 39 mov x0, sp 41 - mov x27, #0 // prevent syscall restart handling (why) 42 40 b compat_sys_rt_sigreturn 43 41 ENDPROC(compat_sys_rt_sigreturn_wrapper) 44 42
+1 -1
arch/arm64/kernel/smp.c
··· 438 438 struct acpi_madt_generic_interrupt *processor; 439 439 440 440 processor = (struct acpi_madt_generic_interrupt *)header; 441 - if (BAD_MADT_ENTRY(processor, end)) 441 + if (BAD_MADT_GICC_ENTRY(processor, end)) 442 442 return -EINVAL; 443 443 444 444 acpi_table_print_madt_entry(header);
-2
arch/arm64/mm/Makefile
··· 4 4 context.o proc.o pageattr.o 5 5 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o 6 6 obj-$(CONFIG_ARM64_PTDUMP) += dump.o 7 - 8 - CFLAGS_mmu.o := -I$(srctree)/scripts/dtc/libfdt/
+1
arch/avr32/include/asm/Kbuild
··· 12 12 generic-y += local.h 13 13 generic-y += local64.h 14 14 generic-y += mcs_spinlock.h 15 + generic-y += mm-arch-hooks.h 15 16 generic-y += param.h 16 17 generic-y += percpu.h 17 18 generic-y += preempt.h
-15
arch/avr32/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_AVR32_MM_ARCH_HOOKS_H 13 - #define _ASM_AVR32_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_AVR32_MM_ARCH_HOOKS_H */
+1
arch/blackfin/include/asm/Kbuild
··· 21 21 generic-y += local.h 22 22 generic-y += local64.h 23 23 generic-y += mcs_spinlock.h 24 + generic-y += mm-arch-hooks.h 24 25 generic-y += mman.h 25 26 generic-y += msgbuf.h 26 27 generic-y += mutex.h
-15
arch/blackfin/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_BLACKFIN_MM_ARCH_HOOKS_H 13 - #define _ASM_BLACKFIN_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_BLACKFIN_MM_ARCH_HOOKS_H */
+1
arch/c6x/include/asm/Kbuild
··· 26 26 generic-y += kmap_types.h 27 27 generic-y += local.h 28 28 generic-y += mcs_spinlock.h 29 + generic-y += mm-arch-hooks.h 29 30 generic-y += mman.h 30 31 generic-y += mmu.h 31 32 generic-y += mmu_context.h
-15
arch/c6x/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_C6X_MM_ARCH_HOOKS_H 13 - #define _ASM_C6X_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_C6X_MM_ARCH_HOOKS_H */
+1 -1
arch/cris/arch-v32/drivers/sync_serial.c
··· 1464 1464 if (port->write_ts_idx == NBR_IN_DESCR) 1465 1465 port->write_ts_idx = 0; 1466 1466 idx = port->write_ts_idx++; 1467 - do_posix_clock_monotonic_gettime(&port->timestamp[idx]); 1467 + ktime_get_ts(&port->timestamp[idx]); 1468 1468 port->in_buffer_len += port->inbufchunk; 1469 1469 } 1470 1470 spin_unlock_irqrestore(&port->lock, flags);
+1
arch/cris/include/asm/Kbuild
··· 18 18 generic-y += local.h 19 19 generic-y += local64.h 20 20 generic-y += mcs_spinlock.h 21 + generic-y += mm-arch-hooks.h 21 22 generic-y += module.h 22 23 generic-y += percpu.h 23 24 generic-y += preempt.h
-15
arch/cris/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_CRIS_MM_ARCH_HOOKS_H 13 - #define _ASM_CRIS_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_CRIS_MM_ARCH_HOOKS_H */
+1
arch/frv/include/asm/Kbuild
··· 4 4 generic-y += exec.h 5 5 generic-y += irq_work.h 6 6 generic-y += mcs_spinlock.h 7 + generic-y += mm-arch-hooks.h 7 8 generic-y += preempt.h 8 9 generic-y += trace_clock.h
-15
arch/frv/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_FRV_MM_ARCH_HOOKS_H 13 - #define _ASM_FRV_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_FRV_MM_ARCH_HOOKS_H */
+1
arch/h8300/include/asm/Kbuild
··· 33 33 generic-y += local.h 34 34 generic-y += local64.h 35 35 generic-y += mcs_spinlock.h 36 + generic-y += mm-arch-hooks.h 36 37 generic-y += mman.h 37 38 generic-y += mmu.h 38 39 generic-y += mmu_context.h
+1
arch/hexagon/include/asm/Kbuild
··· 28 28 generic-y += local.h 29 29 generic-y += local64.h 30 30 generic-y += mcs_spinlock.h 31 + generic-y += mm-arch-hooks.h 31 32 generic-y += mman.h 32 33 generic-y += msgbuf.h 33 34 generic-y += pci.h
-15
arch/hexagon/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_HEXAGON_MM_ARCH_HOOKS_H 13 - #define _ASM_HEXAGON_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_HEXAGON_MM_ARCH_HOOKS_H */
+1
arch/ia64/include/asm/Kbuild
··· 4 4 generic-y += irq_work.h 5 5 generic-y += kvm_para.h 6 6 generic-y += mcs_spinlock.h 7 + generic-y += mm-arch-hooks.h 7 8 generic-y += preempt.h 8 9 generic-y += trace_clock.h 9 10 generic-y += vtime.h
-15
arch/ia64/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_IA64_MM_ARCH_HOOKS_H 13 - #define _ASM_IA64_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_IA64_MM_ARCH_HOOKS_H */
+1
arch/m32r/include/asm/Kbuild
··· 4 4 generic-y += exec.h 5 5 generic-y += irq_work.h 6 6 generic-y += mcs_spinlock.h 7 + generic-y += mm-arch-hooks.h 7 8 generic-y += module.h 8 9 generic-y += preempt.h 9 10 generic-y += sections.h
-15
arch/m32r/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_M32R_MM_ARCH_HOOKS_H 13 - #define _ASM_M32R_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_M32R_MM_ARCH_HOOKS_H */
+27 -22
arch/m68k/Kconfig.cpu
··· 125 125 126 126 if COLDFIRE 127 127 128 + choice 129 + prompt "ColdFire SoC type" 130 + default M520x 131 + help 132 + Select the type of ColdFire System-on-Chip (SoC) that you want 133 + to build for. 134 + 128 135 config M5206 129 136 bool "MCF5206" 130 137 depends on !MMU ··· 181 174 help 182 175 Freescale (Motorola) Coldfire 5251/5253 processor support. 183 176 184 - config M527x 185 - bool 186 - 187 177 config M5271 188 178 bool "MCF5271" 189 179 depends on !MMU ··· 227 223 help 228 224 Motorola ColdFire 5307 processor support. 229 225 230 - config M53xx 231 - bool 232 - 233 226 config M532x 234 227 bool "MCF532x" 235 228 depends on !MMU ··· 251 250 select HAVE_MBAR 252 251 help 253 252 Motorola ColdFire 5407 processor support. 254 - 255 - config M54xx 256 - bool 257 253 258 254 config M547x 259 255 bool "MCF547x" ··· 277 279 select HAVE_CACHE_CB 278 280 help 279 281 Freescale Coldfire 54410/54415/54416/54417/54418 processor support. 282 + 283 + endchoice 284 + 285 + config M527x 286 + bool 287 + 288 + config M53xx 289 + bool 290 + 291 + config M54xx 292 + bool 280 293 281 294 endif # COLDFIRE 282 295 ··· 425 416 config HAVE_IPSBAR 426 417 bool 427 418 428 - config CLOCK_SET 429 - bool "Enable setting the CPU clock frequency" 430 - depends on COLDFIRE 431 - default n 432 - help 433 - On some CPU's you do not need to know what the core CPU clock 434 - frequency is. On these you can disable clock setting. On some 435 - traditional 68K parts, and on all ColdFire parts you need to set 436 - the appropriate CPU clock frequency. On these devices many of the 437 - onboard peripherals derive their timing from the master CPU clock 438 - frequency.
439 - 440 419 config CLOCK_FREQ 441 420 int "Set the core clock frequency" 421 + default "25000000" if M5206 422 + default "54000000" if M5206e 423 + default "166666666" if M520x 424 + default "140000000" if M5249 425 + default "150000000" if M527x || M523x 426 + default "90000000" if M5307 427 + default "50000000" if M5407 428 + default "266000000" if M54xx 442 429 default "66666666" 443 - depends on CLOCK_SET 430 + depends on COLDFIRE 444 431 help 445 432 Define the CPU clock frequency in use. This is the core clock 446 433 frequency, it may or may not be the same as the external clock
+3 -19
arch/m68k/configs/m5208evb_defconfig
··· 1 - # CONFIG_MMU is not set 2 - CONFIG_EXPERIMENTAL=y 3 1 CONFIG_LOG_BUF_SHIFT=14 4 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 5 2 CONFIG_EXPERT=y 6 3 # CONFIG_KALLSYMS is not set 7 - # CONFIG_HOTPLUG is not set 8 4 # CONFIG_FUTEX is not set 9 5 # CONFIG_EPOLL is not set 10 6 # CONFIG_SIGNALFD is not set ··· 12 16 # CONFIG_BLK_DEV_BSG is not set 13 17 # CONFIG_IOSCHED_DEADLINE is not set 14 18 # CONFIG_IOSCHED_CFQ is not set 15 - CONFIG_M520x=y 16 - CONFIG_CLOCK_SET=y 17 - CONFIG_CLOCK_FREQ=166666666 18 - CONFIG_CLOCK_DIV=2 19 - CONFIG_M5208EVB=y 19 + # CONFIG_MMU is not set 20 20 # CONFIG_4KSTACKS is not set 21 21 CONFIG_RAMBASE=0x40000000 22 22 CONFIG_RAMSIZE=0x2000000 23 23 CONFIG_VECTORBASE=0x40000000 24 24 CONFIG_KERNELBASE=0x40020000 25 - CONFIG_RAM16BIT=y 26 25 CONFIG_BINFMT_FLAT=y 27 26 CONFIG_NET=y 28 27 CONFIG_PACKET=y ··· 31 40 # CONFIG_IPV6 is not set 32 41 # CONFIG_FW_LOADER is not set 33 42 CONFIG_MTD=y 34 - CONFIG_MTD_CHAR=y 35 43 CONFIG_MTD_BLOCK=y 36 44 CONFIG_MTD_RAM=y 37 45 CONFIG_MTD_UCLINUX=y 38 46 CONFIG_BLK_DEV_RAM=y 39 - # CONFIG_MISC_DEVICES is not set 40 47 CONFIG_NETDEVICES=y 41 - CONFIG_NET_ETHERNET=y 42 48 CONFIG_FEC=y 43 - # CONFIG_NETDEV_1000 is not set 44 - # CONFIG_NETDEV_10000 is not set 45 49 # CONFIG_INPUT is not set 46 50 # CONFIG_SERIO is not set 47 51 # CONFIG_VT is not set 52 + # CONFIG_UNIX98_PTYS is not set 48 53 CONFIG_SERIAL_MCF=y 49 54 CONFIG_SERIAL_MCF_BAUDRATE=115200 50 55 CONFIG_SERIAL_MCF_CONSOLE=y 51 - # CONFIG_UNIX98_PTYS is not set 52 56 # CONFIG_HW_RANDOM is not set 53 57 # CONFIG_HWMON is not set 54 58 # CONFIG_USB_SUPPORT is not set ··· 54 68 CONFIG_ROMFS_FS=y 55 69 CONFIG_ROMFS_BACKED_BY_MTD=y 56 70 # CONFIG_NETWORK_FILESYSTEMS is not set 57 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 58 - CONFIG_SYSCTL_SYSCALL_CHECK=y 59 - CONFIG_FULLDEBUG=y 60 71 CONFIG_BOOTPARAM=y 61 72 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0" 73 + CONFIG_FULLDEBUG=y
+2 -15
arch/m68k/configs/m5249evb_defconfig
··· 1 - # CONFIG_MMU is not set 2 - CONFIG_EXPERIMENTAL=y 3 1 CONFIG_LOG_BUF_SHIFT=14 4 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 5 2 CONFIG_EXPERT=y 6 3 # CONFIG_KALLSYMS is not set 7 - # CONFIG_HOTPLUG is not set 8 4 # CONFIG_FUTEX is not set 9 5 # CONFIG_EPOLL is not set 10 6 # CONFIG_SIGNALFD is not set ··· 12 16 # CONFIG_BLK_DEV_BSG is not set 13 17 # CONFIG_IOSCHED_DEADLINE is not set 14 18 # CONFIG_IOSCHED_CFQ is not set 19 + # CONFIG_MMU is not set 15 20 CONFIG_M5249=y 16 - CONFIG_CLOCK_SET=y 17 - CONFIG_CLOCK_FREQ=140000000 18 - CONFIG_CLOCK_DIV=2 19 21 CONFIG_M5249C3=y 20 22 CONFIG_RAMBASE=0x00000000 21 23 CONFIG_RAMSIZE=0x00800000 ··· 32 38 # CONFIG_IPV6 is not set 33 39 # CONFIG_FW_LOADER is not set 34 40 CONFIG_MTD=y 35 - CONFIG_MTD_CHAR=y 36 41 CONFIG_MTD_BLOCK=y 37 42 CONFIG_MTD_RAM=y 38 43 CONFIG_MTD_UCLINUX=y 39 44 CONFIG_BLK_DEV_RAM=y 40 - # CONFIG_MISC_DEVICES is not set 41 45 CONFIG_NETDEVICES=y 42 - CONFIG_NET_ETHERNET=y 43 - # CONFIG_NETDEV_1000 is not set 44 - # CONFIG_NETDEV_10000 is not set 45 46 CONFIG_PPP=y 46 47 # CONFIG_INPUT is not set 47 48 # CONFIG_SERIO is not set 48 49 # CONFIG_VT is not set 50 + # CONFIG_UNIX98_PTYS is not set 49 51 CONFIG_SERIAL_MCF=y 50 52 CONFIG_SERIAL_MCF_CONSOLE=y 51 - # CONFIG_UNIX98_PTYS is not set 52 53 # CONFIG_HWMON is not set 53 54 # CONFIG_USB_SUPPORT is not set 54 55 CONFIG_EXT2_FS=y ··· 51 62 CONFIG_ROMFS_FS=y 52 63 CONFIG_ROMFS_BACKED_BY_MTD=y 53 64 # CONFIG_NETWORK_FILESYSTEMS is not set 54 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 55 65 CONFIG_BOOTPARAM=y 56 66 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0" 57 - # CONFIG_CRC32 is not set
+2 -12
arch/m68k/configs/m5272c3_defconfig
··· 1 - # CONFIG_MMU is not set 2 - CONFIG_EXPERIMENTAL=y 3 1 CONFIG_LOG_BUF_SHIFT=14 4 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 5 2 CONFIG_EXPERT=y 6 3 # CONFIG_KALLSYMS is not set 7 - # CONFIG_HOTPLUG is not set 8 4 # CONFIG_FUTEX is not set 9 5 # CONFIG_EPOLL is not set 10 6 # CONFIG_SIGNALFD is not set ··· 12 16 # CONFIG_BLK_DEV_BSG is not set 13 17 # CONFIG_IOSCHED_DEADLINE is not set 14 18 # CONFIG_IOSCHED_CFQ is not set 19 + # CONFIG_MMU is not set 15 20 CONFIG_M5272=y 16 - CONFIG_CLOCK_SET=y 17 21 CONFIG_M5272C3=y 18 22 CONFIG_RAMBASE=0x00000000 19 23 CONFIG_RAMSIZE=0x00800000 ··· 32 36 # CONFIG_IPV6 is not set 33 37 # CONFIG_FW_LOADER is not set 34 38 CONFIG_MTD=y 35 - CONFIG_MTD_CHAR=y 36 39 CONFIG_MTD_BLOCK=y 37 40 CONFIG_MTD_RAM=y 38 41 CONFIG_MTD_UCLINUX=y 39 42 CONFIG_BLK_DEV_RAM=y 40 - # CONFIG_MISC_DEVICES is not set 41 43 CONFIG_NETDEVICES=y 42 - CONFIG_NET_ETHERNET=y 43 44 CONFIG_FEC=y 44 - # CONFIG_NETDEV_1000 is not set 45 - # CONFIG_NETDEV_10000 is not set 46 45 # CONFIG_INPUT is not set 47 46 # CONFIG_SERIO is not set 48 47 # CONFIG_VT is not set 48 + # CONFIG_UNIX98_PTYS is not set 49 49 CONFIG_SERIAL_MCF=y 50 50 CONFIG_SERIAL_MCF_CONSOLE=y 51 - # CONFIG_UNIX98_PTYS is not set 52 51 # CONFIG_HWMON is not set 53 52 # CONFIG_USB_SUPPORT is not set 54 53 CONFIG_EXT2_FS=y ··· 52 61 CONFIG_ROMFS_FS=y 53 62 CONFIG_ROMFS_BACKED_BY_MTD=y 54 63 # CONFIG_NETWORK_FILESYSTEMS is not set 55 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 56 64 CONFIG_BOOTPARAM=y 57 65 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0"
+2 -17
arch/m68k/configs/m5275evb_defconfig
··· 1 - # CONFIG_MMU is not set 2 - CONFIG_EXPERIMENTAL=y 3 1 CONFIG_LOG_BUF_SHIFT=14 4 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 5 2 CONFIG_EXPERT=y 6 3 # CONFIG_KALLSYMS is not set 7 - # CONFIG_HOTPLUG is not set 8 4 # CONFIG_FUTEX is not set 9 5 # CONFIG_EPOLL is not set 10 6 # CONFIG_SIGNALFD is not set ··· 12 16 # CONFIG_BLK_DEV_BSG is not set 13 17 # CONFIG_IOSCHED_DEADLINE is not set 14 18 # CONFIG_IOSCHED_CFQ is not set 19 + # CONFIG_MMU is not set 15 20 CONFIG_M5275=y 16 - CONFIG_CLOCK_SET=y 17 - CONFIG_CLOCK_FREQ=150000000 18 - CONFIG_CLOCK_DIV=2 19 - CONFIG_M5275EVB=y 20 21 # CONFIG_4KSTACKS is not set 21 22 CONFIG_RAMBASE=0x00000000 22 23 CONFIG_RAMSIZE=0x00000000 ··· 32 39 # CONFIG_IPV6 is not set 33 40 # CONFIG_FW_LOADER is not set 34 41 CONFIG_MTD=y 35 - CONFIG_MTD_CHAR=y 36 42 CONFIG_MTD_BLOCK=y 37 43 CONFIG_MTD_RAM=y 38 44 CONFIG_MTD_UCLINUX=y 39 45 CONFIG_BLK_DEV_RAM=y 40 - # CONFIG_MISC_DEVICES is not set 41 46 CONFIG_NETDEVICES=y 42 - CONFIG_NET_ETHERNET=y 43 47 CONFIG_FEC=y 44 - # CONFIG_NETDEV_1000 is not set 45 - # CONFIG_NETDEV_10000 is not set 46 48 CONFIG_PPP=y 47 49 # CONFIG_INPUT is not set 48 50 # CONFIG_SERIO is not set 49 51 # CONFIG_VT is not set 52 + # CONFIG_UNIX98_PTYS is not set 50 53 CONFIG_SERIAL_MCF=y 51 54 CONFIG_SERIAL_MCF_CONSOLE=y 52 - # CONFIG_UNIX98_PTYS is not set 53 55 # CONFIG_HWMON is not set 54 56 # CONFIG_USB_SUPPORT is not set 55 57 CONFIG_EXT2_FS=y ··· 53 65 CONFIG_ROMFS_FS=y 54 66 CONFIG_ROMFS_BACKED_BY_MTD=y 55 67 # CONFIG_NETWORK_FILESYSTEMS is not set 56 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 57 - CONFIG_SYSCTL_SYSCALL_CHECK=y 58 68 CONFIG_BOOTPARAM=y 59 69 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0" 60 - # CONFIG_CRC32 is not set
+3 -18
arch/m68k/configs/m5307c3_defconfig
··· 1 - # CONFIG_MMU is not set 2 - CONFIG_EXPERIMENTAL=y 3 1 CONFIG_LOG_BUF_SHIFT=14 4 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 5 2 CONFIG_EXPERT=y 6 3 # CONFIG_KALLSYMS is not set 7 - # CONFIG_HOTPLUG is not set 8 4 # CONFIG_FUTEX is not set 9 5 # CONFIG_EPOLL is not set 10 6 # CONFIG_SIGNALFD is not set ··· 12 16 # CONFIG_BLK_DEV_BSG is not set 13 17 # CONFIG_IOSCHED_DEADLINE is not set 14 18 # CONFIG_IOSCHED_CFQ is not set 19 + # CONFIG_MMU is not set 15 20 CONFIG_M5307=y 16 - CONFIG_CLOCK_SET=y 17 - CONFIG_CLOCK_FREQ=90000000 18 - CONFIG_CLOCK_DIV=2 19 21 CONFIG_M5307C3=y 20 22 CONFIG_RAMBASE=0x00000000 21 23 CONFIG_RAMSIZE=0x00800000 ··· 32 38 # CONFIG_IPV6 is not set 33 39 # CONFIG_FW_LOADER is not set 34 40 CONFIG_MTD=y 35 - CONFIG_MTD_CHAR=y 36 41 CONFIG_MTD_BLOCK=y 37 42 CONFIG_MTD_RAM=y 38 43 CONFIG_MTD_UCLINUX=y 39 44 CONFIG_BLK_DEV_RAM=y 40 - # CONFIG_MISC_DEVICES is not set 41 45 CONFIG_NETDEVICES=y 42 - CONFIG_NET_ETHERNET=y 43 - # CONFIG_NETDEV_1000 is not set 44 - # CONFIG_NETDEV_10000 is not set 45 46 CONFIG_PPP=y 46 47 CONFIG_SLIP=y 47 48 CONFIG_SLIP_COMPRESSED=y ··· 45 56 # CONFIG_INPUT_MOUSE is not set 46 57 # CONFIG_SERIO is not set 47 58 # CONFIG_VT is not set 59 + # CONFIG_LEGACY_PTYS is not set 48 60 CONFIG_SERIAL_MCF=y 49 61 CONFIG_SERIAL_MCF_CONSOLE=y 50 - # CONFIG_LEGACY_PTYS is not set 51 62 # CONFIG_HW_RANDOM is not set 52 63 # CONFIG_HWMON is not set 53 - # CONFIG_HID_SUPPORT is not set 54 64 # CONFIG_USB_SUPPORT is not set 55 65 CONFIG_EXT2_FS=y 56 66 # CONFIG_DNOTIFY is not set 57 67 CONFIG_ROMFS_FS=y 58 68 CONFIG_ROMFS_BACKED_BY_MTD=y 59 69 # CONFIG_NETWORK_FILESYSTEMS is not set 60 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 61 - CONFIG_SYSCTL_SYSCALL_CHECK=y 62 - CONFIG_FULLDEBUG=y 63 70 CONFIG_BOOTPARAM=y 64 71 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0" 65 - # CONFIG_CRC32 is not set 72 + CONFIG_FULLDEBUG=y
+2 -15
arch/m68k/configs/m5407c3_defconfig
··· 1 - # CONFIG_MMU is not set 2 - CONFIG_EXPERIMENTAL=y 3 1 CONFIG_LOG_BUF_SHIFT=14 4 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 5 2 CONFIG_EXPERT=y 6 3 # CONFIG_KALLSYMS is not set 7 - # CONFIG_HOTPLUG is not set 8 4 # CONFIG_FUTEX is not set 9 5 # CONFIG_EPOLL is not set 10 6 # CONFIG_SIGNALFD is not set ··· 13 17 # CONFIG_BLK_DEV_BSG is not set 14 18 # CONFIG_IOSCHED_DEADLINE is not set 15 19 # CONFIG_IOSCHED_CFQ is not set 20 + # CONFIG_MMU is not set 16 21 CONFIG_M5407=y 17 - CONFIG_CLOCK_SET=y 18 - CONFIG_CLOCK_FREQ=50000000 19 22 CONFIG_M5407C3=y 20 23 CONFIG_RAMBASE=0x00000000 21 24 CONFIG_RAMSIZE=0x00000000 ··· 33 38 # CONFIG_IPV6 is not set 34 39 # CONFIG_FW_LOADER is not set 35 40 CONFIG_MTD=y 36 - CONFIG_MTD_CHAR=y 37 41 CONFIG_MTD_BLOCK=y 38 42 CONFIG_MTD_RAM=y 39 43 CONFIG_MTD_UCLINUX=y 40 44 CONFIG_BLK_DEV_RAM=y 41 - # CONFIG_MISC_DEVICES is not set 42 45 CONFIG_NETDEVICES=y 43 - CONFIG_NET_ETHERNET=y 44 - # CONFIG_NETDEV_1000 is not set 45 - # CONFIG_NETDEV_10000 is not set 46 46 CONFIG_PPP=y 47 47 # CONFIG_INPUT is not set 48 48 # CONFIG_VT is not set 49 + # CONFIG_UNIX98_PTYS is not set 49 50 CONFIG_SERIAL_MCF=y 50 51 CONFIG_SERIAL_MCF_CONSOLE=y 51 - # CONFIG_UNIX98_PTYS is not set 52 52 # CONFIG_HW_RANDOM is not set 53 53 # CONFIG_HWMON is not set 54 54 # CONFIG_USB_SUPPORT is not set ··· 53 63 CONFIG_ROMFS_FS=y 54 64 CONFIG_ROMFS_BACKED_BY_MTD=y 55 65 # CONFIG_NETWORK_FILESYSTEMS is not set 56 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 57 - CONFIG_SYSCTL_SYSCALL_CHECK=y 58 66 CONFIG_BOOTPARAM=y 59 67 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0" 60 - # CONFIG_CRC32 is not set
+1 -8
arch/m68k/configs/m5475evb_defconfig
··· 1 - CONFIG_EXPERIMENTAL=y 2 1 # CONFIG_SWAP is not set 3 2 CONFIG_LOG_BUF_SHIFT=14 4 - CONFIG_SYSFS_DEPRECATED=y 5 - CONFIG_SYSFS_DEPRECATED_V2=y 6 3 CONFIG_SYSCTL_SYSCALL=y 7 4 # CONFIG_KALLSYMS is not set 8 - # CONFIG_HOTPLUG is not set 9 5 # CONFIG_FUTEX is not set 10 6 # CONFIG_EPOLL is not set 11 7 # CONFIG_SIGNALFD is not set ··· 16 20 # CONFIG_IOSCHED_DEADLINE is not set 17 21 # CONFIG_IOSCHED_CFQ is not set 18 22 CONFIG_COLDFIRE=y 19 - CONFIG_M547x=y 20 - CONFIG_CLOCK_SET=y 21 - CONFIG_CLOCK_FREQ=266000000 22 23 # CONFIG_4KSTACKS is not set 23 24 CONFIG_RAMBASE=0x0 24 25 CONFIG_RAMSIZE=0x2000000 25 26 CONFIG_VECTORBASE=0x0 26 27 CONFIG_MBAR=0xff000000 27 28 CONFIG_KERNELBASE=0x20000 29 + CONFIG_PCI=y 28 30 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 29 31 # CONFIG_FW_LOADER is not set 30 32 CONFIG_MTD=y 31 - CONFIG_MTD_CHAR=y 32 33 CONFIG_MTD_BLOCK=y 33 34 CONFIG_MTD_CFI=y 34 35 CONFIG_MTD_JEDECPROBE=y
+1
arch/m68k/include/asm/Kbuild
··· 18 18 generic-y += local.h 19 19 generic-y += local64.h 20 20 generic-y += mcs_spinlock.h 21 + generic-y += mm-arch-hooks.h 21 22 generic-y += mman.h 22 23 generic-y += mutex.h 23 24 generic-y += percpu.h
+1 -1
arch/m68k/include/asm/coldfire.h
··· 19 19 * in any case new boards come along from time to time that have yet 20 20 * another different clocking frequency. 21 21 */ 22 - #ifdef CONFIG_CLOCK_SET 22 + #ifdef CONFIG_CLOCK_FREQ 23 23 #define MCF_CLK CONFIG_CLOCK_FREQ 24 24 #else 25 25 #error "Don't know what your ColdFire CPU clock frequency is??"
+2 -1
arch/m68k/include/asm/io_mm.h
··· 413 413 #define writew(val, addr) out_le16((addr), (val)) 414 414 #endif /* CONFIG_ATARI_ROM_ISA */ 415 415 416 - #if !defined(CONFIG_ISA) && !defined(CONFIG_ATARI_ROM_ISA) 416 + #if !defined(CONFIG_ISA) && !defined(CONFIG_ATARI_ROM_ISA) && \ 417 + !(defined(CONFIG_PCI) && defined(CONFIG_COLDFIRE)) 417 418 /* 418 419 * We need to define dummy functions for GENERIC_IOMAP support. 419 420 */
-15
arch/m68k/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_M68K_MM_ARCH_HOOKS_H 13 - #define _ASM_M68K_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_M68K_MM_ARCH_HOOKS_H */
+1
arch/metag/include/asm/Kbuild
··· 25 25 generic-y += local.h 26 26 generic-y += local64.h 27 27 generic-y += mcs_spinlock.h 28 + generic-y += mm-arch-hooks.h 28 29 generic-y += msgbuf.h 29 30 generic-y += mutex.h 30 31 generic-y += param.h
-15
arch/metag/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_METAG_MM_ARCH_HOOKS_H 13 - #define _ASM_METAG_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_METAG_MM_ARCH_HOOKS_H */
+1
arch/microblaze/include/asm/Kbuild
··· 6 6 generic-y += exec.h 7 7 generic-y += irq_work.h 8 8 generic-y += mcs_spinlock.h 9 + generic-y += mm-arch-hooks.h 9 10 generic-y += preempt.h 10 11 generic-y += syscalls.h 11 12 generic-y += trace_clock.h
-15
arch/microblaze/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_MICROBLAZE_MM_ARCH_HOOKS_H 13 - #define _ASM_MICROBLAZE_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_MICROBLAZE_MM_ARCH_HOOKS_H */
+2 -6
arch/mips/Kconfig
··· 1427 1427 select CPU_SUPPORTS_HIGHMEM 1428 1428 select CPU_SUPPORTS_MSA 1429 1429 select GENERIC_CSUM 1430 + select MIPS_O32_FP64_SUPPORT if MIPS32_O32 1430 1431 help 1431 1432 Choose this option to build a kernel for release 6 or later of the 1432 1433 MIPS64 architecture. New MIPS processors, starting with the Warrior ··· 2232 2231 2233 2232 config MIPS_CPS 2234 2233 bool "MIPS Coherent Processing System support" 2235 - depends on SYS_SUPPORTS_MIPS_CPS && !64BIT 2234 + depends on SYS_SUPPORTS_MIPS_CPS 2236 2235 select MIPS_CM 2237 2236 select MIPS_CPC 2238 2237 select MIPS_CPS_PM if HOTPLUG_CPU ··· 2262 2261 2263 2262 config MIPS_CPC 2264 2263 bool 2265 - 2266 - config SB1_PASS_1_WORKAROUNDS 2267 - bool 2268 - depends on CPU_SB1_PASS_1 2269 - default y 2270 2264 2271 2265 config SB1_PASS_2_WORKAROUNDS 2272 2266 bool
-7
arch/mips/Makefile
··· 181 181 cflags-$(CONFIG_CPU_R4400_WORKAROUNDS) += $(call cc-option,-mfix-r4400,) 182 182 cflags-$(CONFIG_CPU_DADDI_WORKAROUNDS) += $(call cc-option,-mno-daddi,) 183 183 184 - ifdef CONFIG_CPU_SB1 185 - ifdef CONFIG_SB1_PASS_1_WORKAROUNDS 186 - KBUILD_AFLAGS_MODULE += -msb1-pass1-workarounds 187 - KBUILD_CFLAGS_MODULE += -msb1-pass1-workarounds 188 - endif 189 - endif 190 - 191 184 # For smartmips configurations, there are hundreds of warnings due to ISA overrides 192 185 # in assembly and header files. smartmips is only supported for MIPS32r1 onwards 193 186 # and there is no support for 64-bit. Various '.set mips2' or '.set mips3' or
+1
arch/mips/include/asm/Kbuild
··· 7 7 generic-y += irq_work.h 8 8 generic-y += local64.h 9 9 generic-y += mcs_spinlock.h 10 + generic-y += mm-arch-hooks.h 10 11 generic-y += mutex.h 11 12 generic-y += parport.h 12 13 generic-y += percpu.h
+1 -1
arch/mips/include/asm/fpu.h
··· 74 74 goto fr_common; 75 75 76 76 case FPU_64BIT: 77 - #if !(defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_CPU_MIPS32_R6) \ 77 + #if !(defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6) \ 78 78 || defined(CONFIG_64BIT)) 79 79 /* we only have a 32-bit FPU */ 80 80 return SIGFPE;
+1 -1
arch/mips/include/asm/mach-loongson64/mmzone.h
··· 1 1 /* 2 2 * Copyright (C) 2010 Loongson Inc. & Lemote Inc. & 3 - * Insititute of Computing Technology 3 + * Institute of Computing Technology 4 4 * Author: Xiang Gao, gaoxiang@ict.ac.cn 5 5 * Huacai Chen, chenhc@lemote.com 6 6 * Xiaofu Meng, Shuangshuang Zhang
+1 -2
arch/mips/include/asm/mach-sibyte/war.h
··· 13 13 #define R4600_V2_HIT_CACHEOP_WAR 0 14 14 #define R5432_CP0_INTERRUPT_WAR 0 15 15 16 - #if defined(CONFIG_SB1_PASS_1_WORKAROUNDS) || \ 17 - defined(CONFIG_SB1_PASS_2_WORKAROUNDS) 16 + #if defined(CONFIG_SB1_PASS_2_WORKAROUNDS) 18 17 19 18 #ifndef __ASSEMBLY__ 20 19 extern int sb1250_m3_workaround_needed(void);
-15
arch/mips/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_MIPS_MM_ARCH_HOOKS_H 13 - #define _ASM_MIPS_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_MIPS_MM_ARCH_HOOKS_H */
+1
arch/mips/include/asm/smp.h
··· 23 23 extern int smp_num_siblings; 24 24 extern cpumask_t cpu_sibling_map[]; 25 25 extern cpumask_t cpu_core_map[]; 26 + extern cpumask_t cpu_foreign_map; 26 27 27 28 #define raw_smp_processor_id() (current_thread_info()->cpu) 28 29
+2 -2
arch/mips/include/uapi/asm/sigcontext.h
··· 16 16 17 17 /* 18 18 * Keep this struct definition in sync with the sigcontext fragment 19 - * in arch/mips/tools/offset.c 19 + * in arch/mips/kernel/asm-offsets.c 20 20 */ 21 21 struct sigcontext { 22 22 unsigned int sc_regmask; /* Unused */ ··· 46 46 #include <linux/posix_types.h> 47 47 /* 48 48 * Keep this struct definition in sync with the sigcontext fragment 49 - * in arch/mips/tools/offset.c 49 + * in arch/mips/kernel/asm-offsets.c 50 50 * 51 51 * Warning: this structure illdefined with sc_badvaddr being just an unsigned 52 52 * int so it was changed to unsigned long in 2.6.0-test1. This may break
+1 -1
arch/mips/kernel/asm-offsets.c
··· 1 1 /* 2 - * offset.c: Calculate pt_regs and task_struct offsets. 2 + * asm-offsets.c: Calculate pt_regs and task_struct offsets. 3 3 * 4 4 * Copyright (C) 1996 David S. Miller 5 5 * Copyright (C) 1997, 1998, 1999, 2000, 2001, 2002, 2003 Ralf Baechle
+2 -2
arch/mips/kernel/branch.c
··· 600 600 break; 601 601 602 602 case blezl_op: /* not really i_format */ 603 - if (NO_R6EMU) 603 + if (!insn.i_format.rt && NO_R6EMU) 604 604 goto sigill_r6; 605 605 case blez_op: 606 606 /* ··· 635 635 break; 636 636 637 637 case bgtzl_op: 638 - if (NO_R6EMU) 638 + if (!insn.i_format.rt && NO_R6EMU) 639 639 goto sigill_r6; 640 640 case bgtz_op: 641 641 /*
+48 -48
arch/mips/kernel/cps-vec.S
··· 60 60 nop 61 61 62 62 /* This is an NMI */ 63 - la k0, nmi_handler 63 + PTR_LA k0, nmi_handler 64 64 jr k0 65 65 nop 66 66 ··· 107 107 mul t1, t1, t0 108 108 mul t1, t1, t2 109 109 110 - li a0, KSEG0 111 - add a1, a0, t1 110 + li a0, CKSEG0 111 + PTR_ADD a1, a0, t1 112 112 1: cache Index_Store_Tag_I, 0(a0) 113 - add a0, a0, t0 113 + PTR_ADD a0, a0, t0 114 114 bne a0, a1, 1b 115 115 nop 116 116 icache_done: ··· 134 134 mul t1, t1, t0 135 135 mul t1, t1, t2 136 136 137 - li a0, KSEG0 138 - addu a1, a0, t1 139 - subu a1, a1, t0 137 + li a0, CKSEG0 138 + PTR_ADDU a1, a0, t1 139 + PTR_SUBU a1, a1, t0 140 140 1: cache Index_Store_Tag_D, 0(a0) 141 141 bne a0, a1, 1b 142 - add a0, a0, t0 142 + PTR_ADD a0, a0, t0 143 143 dcache_done: 144 144 145 145 /* Set Kseg0 CCA to that in s0 */ ··· 152 152 153 153 /* Enter the coherent domain */ 154 154 li t0, 0xff 155 - sw t0, GCR_CL_COHERENCE_OFS(v1) 155 + PTR_S t0, GCR_CL_COHERENCE_OFS(v1) 156 156 ehb 157 157 158 158 /* Jump to kseg0 */ 159 - la t0, 1f 159 + PTR_LA t0, 1f 160 160 jr t0 161 161 nop 162 162 ··· 178 178 nop 179 179 180 180 /* Off we go! */ 181 - lw t1, VPEBOOTCFG_PC(v0) 182 - lw gp, VPEBOOTCFG_GP(v0) 183 - lw sp, VPEBOOTCFG_SP(v0) 181 + PTR_L t1, VPEBOOTCFG_PC(v0) 182 + PTR_L gp, VPEBOOTCFG_GP(v0) 183 + PTR_L sp, VPEBOOTCFG_SP(v0) 184 184 jr t1 185 185 nop 186 186 END(mips_cps_core_entry) ··· 217 217 218 218 .org 0x480 219 219 LEAF(excep_ejtag) 220 - la k0, ejtag_debug_handler 220 + PTR_LA k0, ejtag_debug_handler 221 221 jr k0 222 222 nop 223 223 END(excep_ejtag) ··· 229 229 nop 230 230 231 231 .set push 232 - .set mips32r2 232 + .set mips64r2 233 233 .set mt 234 234 235 235 /* Only allow 1 TC per VPE to execute... */ ··· 237 237 238 238 /* ...and for the moment only 1 VPE */ 239 239 dvpe 240 - la t1, 1f 240 + PTR_LA t1, 1f 241 241 jr.hb t1 242 242 nop 243 243 ··· 250 250 mfc0 t0, CP0_MVPCONF0 251 251 srl t0, t0, MVPCONF0_PVPE_SHIFT 252 252 andi t0, t0, (MVPCONF0_PVPE >> MVPCONF0_PVPE_SHIFT) 253 - addiu t7, t0, 1 253 + addiu ta3, t0, 1 254 254 255 255 /* If there's only 1, we're done */ 256 256 beqz t0, 2f 257 257 nop 258 258 259 259 /* Loop through each VPE within this core */ 260 - li t5, 1 260 + li ta1, 1 261 261 262 262 1: /* Operate on the appropriate TC */ 263 - mtc0 t5, CP0_VPECONTROL 263 + mtc0 ta1, CP0_VPECONTROL 264 264 ehb 265 265 266 266 /* Bind TC to VPE (1:1 TC:VPE mapping) */ 267 - mttc0 t5, CP0_TCBIND 267 + mttc0 ta1, CP0_TCBIND 268 268 269 269 /* Set exclusive TC, non-active, master */ 270 270 li t0, VPECONF0_MVP 271 - sll t1, t5, VPECONF0_XTC_SHIFT 271 + sll t1, ta1, VPECONF0_XTC_SHIFT 272 272 or t0, t0, t1 273 273 mttc0 t0, CP0_VPECONF0 274 274 ··· 280 280 mttc0 t0, CP0_TCHALT 281 281 282 282 /* Next VPE */ 283 - addiu t5, t5, 1 284 - slt t0, t5, t7 283 + addiu ta1, ta1, 1 284 + slt t0, ta1, ta3 285 285 bnez t0, 1b 286 286 nop 287 287 ··· 298 298 299 299 LEAF(mips_cps_boot_vpes) 300 300 /* Retrieve CM base address */ 301 - la t0, mips_cm_base 302 - lw t0, 0(t0) 301 + PTR_LA t0, mips_cm_base 302 + PTR_L t0, 0(t0) 303 303 304 304 /* Calculate a pointer to this cores struct core_boot_config */ 305 - lw t0, GCR_CL_ID_OFS(t0) 305 + PTR_L t0, GCR_CL_ID_OFS(t0) 306 306 li t1, COREBOOTCFG_SIZE 307 307 mul t0, t0, t1 308 - la t1, mips_cps_core_bootcfg 309 - lw t1, 0(t1) 310 - addu t0, t0, t1 308 + PTR_LA t1, mips_cps_core_bootcfg 309 + PTR_L t1, 0(t1) 310 + PTR_ADDU t0, t0, t1 311 311 312 312 /* Calculate this VPEs ID. If the core doesn't support MT use 0 */ 313 - has_mt t6, 1f 313 + has_mt ta2, 1f 314 314 li t9, 0 315 315 316 316 /* Find the number of VPEs present in the core */ ··· 334 334 1: /* Calculate a pointer to this VPEs struct vpe_boot_config */ 335 335 li t1, VPEBOOTCFG_SIZE 336 336 mul v0, t9, t1 337 - lw t7, COREBOOTCFG_VPECONFIG(t0) 338 - addu v0, v0, t7 337 + PTR_L ta3, COREBOOTCFG_VPECONFIG(t0) 338 + PTR_ADDU v0, v0, ta3 339 339 340 340 #ifdef CONFIG_MIPS_MT 341 341 342 342 /* If the core doesn't support MT then return */ 343 - bnez t6, 1f 343 + bnez ta2, 1f 344 344 nop 345 345 jr ra 346 346 nop 347 347 348 348 .set push 349 - .set mips32r2 349 + .set mips64r2 350 350 .set mt 351 351 352 352 1: /* Enter VPE configuration state */ 353 353 dvpe 354 - la t1, 1f 354 + PTR_LA t1, 1f 355 355 jr.hb t1 356 356 nop 357 357 1: mfc0 t1, CP0_MVPCONTROL ··· 360 360 ehb 361 361 362 362 /* Loop through each VPE */ 363 - lw t6, COREBOOTCFG_VPEMASK(t0) 364 - move t8, t6 365 - li t5, 0 363 + PTR_L ta2, COREBOOTCFG_VPEMASK(t0) 364 + move t8, ta2 365 + li ta1, 0 366 366 367 367 /* Check whether the VPE should be running.
If not, skip it */ 368 - 1: andi t0, t6, 1 368 + 1: andi t0, ta2, 1 369 369 beqz t0, 2f 370 370 nop 371 371 ··· 373 373 mfc0 t0, CP0_VPECONTROL 374 374 ori t0, t0, VPECONTROL_TARGTC 375 375 xori t0, t0, VPECONTROL_TARGTC 376 - or t0, t0, t5 376 + or t0, t0, ta1 377 377 mtc0 t0, CP0_VPECONTROL 378 378 ehb 379 379 ··· 384 384 385 385 /* Calculate a pointer to the VPEs struct vpe_boot_config */ 386 386 li t0, VPEBOOTCFG_SIZE 387 - mul t0, t0, t5 388 - addu t0, t0, t7 387 + mul t0, t0, ta1 388 + addu t0, t0, ta3 389 389 390 390 /* Set the TC restart PC */ 391 391 lw t1, VPEBOOTCFG_PC(t0) ··· 423 423 mttc0 t0, CP0_VPECONF0 424 424 425 425 /* Next VPE */ 426 - 2: srl t6, t6, 1 427 - addiu t5, t5, 1 428 - bnez t6, 1b 426 + 2: srl ta2, ta2, 1 427 + addiu ta1, ta1, 1 428 + bnez ta2, 1b 429 429 nop 430 430 431 431 /* Leave VPE configuration state */ ··· 445 445 /* This VPE should be offline, halt the TC */ 446 446 li t0, TCHALT_H 447 447 mtc0 t0, CP0_TCHALT 448 - la t0, 1f 448 + PTR_LA t0, 1f 449 449 1: jr.hb t0 450 450 nop 451 451 ··· 466 466 .set noat 467 467 lw $1, TI_CPU(gp) 468 468 sll $1, $1, LONGLOG 469 - la \dest, __per_cpu_offset 469 + PTR_LA \dest, __per_cpu_offset 470 470 addu $1, $1, \dest 471 471 lw $1, 0($1) 472 - la \dest, cps_cpu_state 472 + PTR_LA \dest, cps_cpu_state 473 473 addu \dest, \dest, $1 474 474 .set pop 475 475 .endm
+27 -10
arch/mips/kernel/scall32-o32.S
··· 73 73 .set noreorder 74 74 .set nomacro 75 75 76 - 1: user_lw(t5, 16(t0)) # argument #5 from usp 77 - 4: user_lw(t6, 20(t0)) # argument #6 from usp 78 - 3: user_lw(t7, 24(t0)) # argument #7 from usp 79 - 2: user_lw(t8, 28(t0)) # argument #8 from usp 76 + load_a4: user_lw(t5, 16(t0)) # argument #5 from usp 77 + load_a5: user_lw(t6, 20(t0)) # argument #6 from usp 78 + load_a6: user_lw(t7, 24(t0)) # argument #7 from usp 79 + load_a7: user_lw(t8, 28(t0)) # argument #8 from usp 80 + loads_done: 80 81 81 82 sw t5, 16(sp) # argument #5 to ksp 82 83 sw t6, 20(sp) # argument #6 to ksp ··· 86 85 .set pop 87 86 88 87 .section __ex_table,"a" 89 - PTR 1b,bad_stack 90 - PTR 2b,bad_stack 91 - PTR 3b,bad_stack 92 - PTR 4b,bad_stack 88 + PTR load_a4, bad_stack_a4 89 + PTR load_a5, bad_stack_a5 90 + PTR load_a6, bad_stack_a6 91 + PTR load_a7, bad_stack_a7 93 92 .previous 94 93 95 94 lw t0, TI_FLAGS($28) # syscall tracing enabled? ··· 154 153 /* ------------------------------------------------------------------------ */ 155 154 156 155 /* 157 - * The stackpointer for a call with more than 4 arguments is bad. 158 - * We probably should handle this case a bit more drastic. 156 + * Our open-coded access area sanity test for the stack pointer 157 + * failed. We probably should handle this case a bit more drastic. 159 158 */ 160 159 bad_stack: 161 160 li v0, EFAULT ··· 163 162 li t0, 1 # set error flag 164 163 sw t0, PT_R7(sp) 165 164 j o32_syscall_exit 165 + 166 + bad_stack_a4: 167 + li t5, 0 168 + b load_a5 169 + 170 + bad_stack_a5: 171 + li t6, 0 172 + b load_a6 173 + 174 + bad_stack_a6: 175 + li t7, 0 176 + b load_a7 177 + 178 + bad_stack_a7: 179 + li t8, 0 180 + b loads_done 166 181 167 182 /* 168 183 * The system call does not exist in this kernel
+26 -9
arch/mips/kernel/scall64-o32.S
··· 69 69 daddu t1, t0, 32 70 70 bltz t1, bad_stack 71 71 72 - 1: lw a4, 16(t0) # argument #5 from usp 73 - 2: lw a5, 20(t0) # argument #6 from usp 74 - 3: lw a6, 24(t0) # argument #7 from usp 75 - 4: lw a7, 28(t0) # argument #8 from usp (for indirect syscalls) 72 + load_a4: lw a4, 16(t0) # argument #5 from usp 73 + load_a5: lw a5, 20(t0) # argument #6 from usp 74 + load_a6: lw a6, 24(t0) # argument #7 from usp 75 + load_a7: lw a7, 28(t0) # argument #8 from usp 76 + loads_done: 76 77 77 78 .section __ex_table,"a" 78 - PTR 1b, bad_stack 79 - PTR 2b, bad_stack 80 - PTR 3b, bad_stack 81 - PTR 4b, bad_stack 79 + PTR load_a4, bad_stack_a4 80 + PTR load_a5, bad_stack_a5 81 + PTR load_a6, bad_stack_a6 82 + PTR load_a7, bad_stack_a7 82 83 .previous 83 84 84 85 li t1, _TIF_WORK_SYSCALL_ENTRY ··· 167 166 li t0, 1 # set error flag 168 167 sd t0, PT_R7(sp) 169 168 j o32_syscall_exit 169 + 170 + bad_stack_a4: 171 + li a4, 0 172 + b load_a5 173 + 174 + bad_stack_a5: 175 + li a5, 0 176 + b load_a6 177 + 178 + bad_stack_a6: 179 + li a6, 0 180 + b load_a7 181 + 182 + bad_stack_a7: 183 + li a7, 0 184 + b loads_done 170 185 171 186 not_o32_scall: 172 187 /* ··· 400 383 PTR sys_connect /* 4170 */ 401 384 PTR sys_getpeername 402 385 PTR sys_getsockname 403 - PTR sys_getsockopt 386 + PTR compat_sys_getsockopt 404 387 PTR sys_listen 405 388 PTR compat_sys_recv /* 4175 */ 406 389 PTR compat_sys_recvfrom
+5 -8
arch/mips/kernel/setup.c
··· 337 337 min_low_pfn = start; 338 338 if (end <= reserved_end) 339 339 continue; 340 + #ifdef CONFIG_BLK_DEV_INITRD 341 + /* mapstart should be after initrd_end */ 342 + if (initrd_end && end <= (unsigned long)PFN_UP(__pa(initrd_end))) 343 + continue; 344 + #endif 340 345 if (start >= mapstart) 341 346 continue; 342 347 mapstart = max(reserved_end, start); ··· 370 365 #endif 371 366 max_low_pfn = PFN_DOWN(HIGHMEM_START); 372 367 } 373 - 374 - #ifdef CONFIG_BLK_DEV_INITRD 375 - /* 376 - * mapstart should be after initrd_end 377 - */ 378 - if (initrd_end) 379 - mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end))); 380 - #endif 381 368 382 369 /* 383 370 * Initialize the boot-time allocator with low memory only.
+3 -3
arch/mips/kernel/smp-cps.c
··· 133 133 /* 134 134 * Patch the start of mips_cps_core_entry to provide: 135 135 * 136 - * v0 = CM base address 136 + * v1 = CM base address 137 137 * s0 = kseg0 CCA 138 138 */ 139 139 entry_code = (u32 *)&mips_cps_core_entry; ··· 369 369 370 370 static void wait_for_sibling_halt(void *ptr_cpu) 371 371 { 372 - unsigned cpu = (unsigned)ptr_cpu; 372 + unsigned cpu = (unsigned long)ptr_cpu; 373 373 unsigned vpe_id = cpu_vpe_id(&cpu_data[cpu]); 374 374 unsigned halted; 375 375 unsigned long flags; ··· 430 430 */ 431 431 err = smp_call_function_single(cpu_death_sibling, 432 432 wait_for_sibling_halt, 433 - (void *)cpu, 1); 433 + (void *)(unsigned long)cpu, 1); 434 434 if (err) 435 435 panic("Failed to call remote sibling CPU\n"); 436 436 }
+43 -1
arch/mips/kernel/smp.c
··· 63 63 cpumask_t cpu_core_map[NR_CPUS] __read_mostly; 64 64 EXPORT_SYMBOL(cpu_core_map); 65 65 66 + /* 67 + * A logcal cpu mask containing only one VPE per core to 68 + * reduce the number of IPIs on large MT systems. 69 + */ 70 + cpumask_t cpu_foreign_map __read_mostly; 71 + EXPORT_SYMBOL(cpu_foreign_map); 72 + 66 73 /* representing cpus for which sibling maps can be computed */ 67 74 static cpumask_t cpu_sibling_setup_map; 68 75 ··· 108 101 cpumask_set_cpu(cpu, &cpu_core_map[i]); 109 102 } 110 103 } 104 + } 105 + 106 + /* 107 + * Calculate a new cpu_foreign_map mask whenever a 108 + * new cpu appears or disappears. 109 + */ 110 + static inline void calculate_cpu_foreign_map(void) 111 + { 112 + int i, k, core_present; 113 + cpumask_t temp_foreign_map; 114 + 115 + /* Re-calculate the mask */ 116 + for_each_online_cpu(i) { 117 + core_present = 0; 118 + for_each_cpu(k, &temp_foreign_map) 119 + if (cpu_data[i].package == cpu_data[k].package && 120 + cpu_data[i].core == cpu_data[k].core) 121 + core_present = 1; 122 + if (!core_present) 123 + cpumask_set_cpu(i, &temp_foreign_map); 124 + } 125 + 126 + cpumask_copy(&cpu_foreign_map, &temp_foreign_map); 111 127 } 112 128 113 129 struct plat_smp_ops *mp_ops; ··· 176 146 set_cpu_sibling_map(cpu); 177 147 set_cpu_core_map(cpu); 178 148 149 + calculate_cpu_foreign_map(); 150 + 179 151 cpumask_set_cpu(cpu, &cpu_callin_map); 180 152 181 153 synchronise_count_slave(cpu); ··· 205 173 static void stop_this_cpu(void *dummy) 206 174 { 207 175 /* 208 - * Remove this CPU: 176 + * Remove this CPU. Be a bit slow here and 177 + * set the bits for every online CPU so we don't miss 178 + * any IPI whilst taking this VPE down. 
209 179 */ 180 + 181 + cpumask_copy(&cpu_foreign_map, cpu_online_mask); 182 + 183 + /* Make it visible to every other CPU */ 184 + smp_mb(); 185 + 210 186 set_cpu_online(smp_processor_id(), false); 187 + calculate_cpu_foreign_map(); 211 188 local_irq_disable(); 212 189 while (1); 213 190 } ··· 238 197 mp_ops->prepare_cpus(max_cpus); 239 198 set_cpu_sibling_map(0); 240 199 set_cpu_core_map(0); 200 + calculate_cpu_foreign_map(); 241 201 #ifndef CONFIG_HOTPLUG_CPU 242 202 init_cpu_present(cpu_possible_mask); 243 203 #endif
+4 -4
arch/mips/kernel/traps.c
··· 2130 2130 BUG_ON(current->mm); 2131 2131 enter_lazy_tlb(&init_mm, current); 2132 2132 2133 - /* Boot CPU's cache setup in setup_arch(). */ 2134 - if (!is_boot_cpu) 2135 - cpu_cache_init(); 2136 - tlb_init(); 2133 + /* Boot CPU's cache setup in setup_arch(). */ 2134 + if (!is_boot_cpu) 2135 + cpu_cache_init(); 2136 + tlb_init(); 2137 2137 TLBMISS_HANDLER_SETUP(); 2138 2138 } 2139 2139
+1 -1
arch/mips/loongson64/common/bonito-irq.c
··· 3 3 * Author: Jun Sun, jsun@mvista.com or jsun@junsun.net 4 4 * Copyright (C) 2000, 2001 Ralf Baechle (ralf@gnu.org) 5 5 * 6 - * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology 6 + * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology 7 7 * Author: Fuxin Zhang, zhangfx@lemote.com 8 8 * 9 9 * This program is free software; you can redistribute it and/or modify it
+1 -1
arch/mips/loongson64/common/cmdline.c
··· 6 6 * Copyright 2003 ICT CAS 7 7 * Author: Michael Guo <guoyi@ict.ac.cn> 8 8 * 9 - * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology 9 + * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology 10 10 * Author: Fuxin Zhang, zhangfx@lemote.com 11 11 * 12 12 * Copyright (C) 2009 Lemote Inc.
+1 -1
arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c
··· 1 1 /* 2 2 * CS5536 General timer functions 3 3 * 4 - * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology 4 + * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology 5 5 * Author: Yanhua, yanh@lemote.com 6 6 * 7 7 * Copyright (C) 2009 Lemote Inc.
+1 -1
arch/mips/loongson64/common/env.c
··· 6 6 * Copyright 2003 ICT CAS 7 7 * Author: Michael Guo <guoyi@ict.ac.cn> 8 8 * 9 - * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology 9 + * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology 10 10 * Author: Fuxin Zhang, zhangfx@lemote.com 11 11 * 12 12 * Copyright (C) 2009 Lemote Inc.
+1 -1
arch/mips/loongson64/common/irq.c
··· 1 1 /* 2 - * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology 2 + * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology 3 3 * Author: Fuxin Zhang, zhangfx@lemote.com 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify it
+1 -1
arch/mips/loongson64/common/setup.c
··· 1 1 /* 2 - * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology 2 + * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology 3 3 * Author: Fuxin Zhang, zhangfx@lemote.com 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify it
+1 -1
arch/mips/loongson64/fuloong-2e/irq.c
··· 1 1 /* 2 - * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology 2 + * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology 3 3 * Author: Fuxin Zhang, zhangfx@lemote.com 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify it
+2 -2
arch/mips/loongson64/lemote-2f/clock.c
··· 1 1 /* 2 - * Copyright (C) 2006 - 2008 Lemote Inc. & Insititute of Computing Technology 2 + * Copyright (C) 2006 - 2008 Lemote Inc. & Institute of Computing Technology 3 3 * Author: Yanhua, yanh@lemote.com 4 4 * 5 5 * This file is subject to the terms and conditions of the GNU General Public ··· 15 15 #include <linux/spinlock.h> 16 16 17 17 #include <asm/clock.h> 18 - #include <asm/mach-loongson/loongson.h> 18 + #include <asm/mach-loongson64/loongson.h> 19 19 20 20 static LIST_HEAD(clock_list); 21 21 static DEFINE_SPINLOCK(clock_lock);
+1 -1
arch/mips/loongson64/loongson-3/numa.c
··· 1 1 /* 2 2 * Copyright (C) 2010 Loongson Inc. & Lemote Inc. & 3 - * Insititute of Computing Technology 3 + * Institute of Computing Technology 4 4 * Author: Xiang Gao, gaoxiang@ict.ac.cn 5 5 * Huacai Chen, chenhc@lemote.com 6 6 * Xiaofu Meng, Shuangshuang Zhang
+3 -3
arch/mips/math-emu/cp1emu.c
··· 451 451 /* Fall through */ 452 452 case jr_op: 453 453 /* For R6, JR already emulated in jalr_op */ 454 - if (NO_R6EMU && insn.r_format.opcode == jr_op) 454 + if (NO_R6EMU && insn.r_format.func == jr_op) 455 455 break; 456 456 *contpc = regs->regs[insn.r_format.rs]; 457 457 return 1; ··· 551 551 dec_insn.next_pc_inc; 552 552 return 1; 553 553 case blezl_op: 554 - if (NO_R6EMU) 554 + if (!insn.i_format.rt && NO_R6EMU) 555 555 break; 556 556 case blez_op: 557 557 ··· 588 588 dec_insn.next_pc_inc; 589 589 return 1; 590 590 case bgtzl_op: 591 - if (NO_R6EMU) 591 + if (!insn.i_format.rt && NO_R6EMU) 592 592 break; 593 593 case bgtz_op: 594 594 /*
+14 -4
arch/mips/mm/c-r4k.c
··· 37 37 #include <asm/cacheflush.h> /* for run_uncached() */ 38 38 #include <asm/traps.h> 39 39 #include <asm/dma-coherence.h> 40 + #include <asm/mips-cm.h> 40 41 41 42 /* 42 43 * Special Variant of smp_call_function for use by cache functions: ··· 52 51 { 53 52 preempt_disable(); 54 53 55 - #ifndef CONFIG_MIPS_MT_SMP 56 - smp_call_function(func, info, 1); 57 - #endif 54 + /* 55 + * The Coherent Manager propagates address-based cache ops to other 56 + * cores but not index-based ops. However, r4k_on_each_cpu is used 57 + * in both cases so there is no easy way to tell what kind of op is 58 + * executed to the other cores. The best we can probably do is 59 + * to restrict that call when a CM is not present because both 60 + * CM-based SMP protocols (CMP & CPS) restrict index-based cache ops. 61 + */ 62 + if (!mips_cm_present()) 63 + smp_call_function_many(&cpu_foreign_map, func, info, 1); 58 64 func(info); 59 65 preempt_enable(); 60 66 } ··· 945 937 } 946 938 947 939 static char *way_string[] = { NULL, "direct mapped", "2-way", 948 - "3-way", "4-way", "5-way", "6-way", "7-way", "8-way" 940 + "3-way", "4-way", "5-way", "6-way", "7-way", "8-way", 941 + "9-way", "10-way", "11-way", "12-way", 942 + "13-way", "14-way", "15-way", "16-way", 949 943 }; 950 944 951 945 static void probe_pcache(void)
+13 -7
arch/mips/mti-malta/malta-time.c
··· 119 119 120 120 int get_c0_fdc_int(void) 121 121 { 122 - int mips_cpu_fdc_irq; 122 + /* 123 + * Some cores claim the FDC is routable through the GIC, but it doesn't 124 + * actually seem to be connected for those Malta bitstreams. 125 + */ 126 + switch (current_cpu_type()) { 127 + case CPU_INTERAPTIV: 128 + case CPU_PROAPTIV: 129 + return -1; 130 + }; 123 131 124 132 if (cpu_has_veic) 125 - mips_cpu_fdc_irq = -1; 133 + return -1; 126 134 else if (gic_present) 127 - mips_cpu_fdc_irq = gic_get_c0_fdc_int(); 135 + return gic_get_c0_fdc_int(); 128 136 else if (cp0_fdc_irq >= 0) 129 - mips_cpu_fdc_irq = MIPS_CPU_IRQ_BASE + cp0_fdc_irq; 137 + return MIPS_CPU_IRQ_BASE + cp0_fdc_irq; 130 138 else 131 - mips_cpu_fdc_irq = -1; 132 - 133 - return mips_cpu_fdc_irq; 139 + return -1; 134 140 } 135 141 136 142 int get_c0_perfcount_int(void)
+7 -1
arch/mips/pistachio/init.c
··· 63 63 plat_setup_iocoherency(); 64 64 } 65 65 66 - #define DEFAULT_CPC_BASE_ADDR 0x1bde0000 66 + #define DEFAULT_CPC_BASE_ADDR 0x1bde0000 67 + #define DEFAULT_CDMM_BASE_ADDR 0x1bdd0000 67 68 68 69 phys_addr_t mips_cpc_default_phys_base(void) 69 70 { 70 71 return DEFAULT_CPC_BASE_ADDR; 72 + } 73 + 74 + phys_addr_t mips_cdmm_phys_base(void) 75 + { 76 + return DEFAULT_CDMM_BASE_ADDR; 71 77 } 72 78 73 79 static void __init mips_nmi_setup(void)
+5
arch/mips/pistachio/time.c
··· 27 27 return gic_get_c0_perfcount_int(); 28 28 } 29 29 30 + int get_c0_fdc_int(void) 31 + { 32 + return gic_get_c0_fdc_int(); 33 + } 34 + 30 35 void __init plat_time_init(void) 31 36 { 32 37 struct device_node *np;
-5
arch/mips/sibyte/Kconfig
··· 81 81 prompt "SiByte SOC Stepping" 82 82 depends on SIBYTE_SB1xxx_SOC 83 83 84 - config CPU_SB1_PASS_1 85 - bool "1250 Pass1" 86 - depends on SIBYTE_SB1250 87 - select CPU_HAS_PREFETCH 88 - 89 84 config CPU_SB1_PASS_2_1250 90 85 bool "1250 An" 91 86 depends on SIBYTE_SB1250
+1 -4
arch/mips/sibyte/common/bus_watcher.c
··· 81 81 { 82 82 u32 status, l2_err, memio_err; 83 83 84 - #ifdef CONFIG_SB1_PASS_1_WORKAROUNDS 85 - /* Destructive read, clears register and interrupt */ 86 - status = csr_in32(IOADDR(A_SCD_BUS_ERR_STATUS)); 87 - #elif defined(CONFIG_SIBYTE_BCM112X) || defined(CONFIG_SIBYTE_SB1250) 84 + #if defined(CONFIG_SIBYTE_BCM112X) || defined(CONFIG_SIBYTE_SB1250) 88 85 /* Use non-destructive register */ 89 86 status = csr_in32(IOADDR(A_SCD_BUS_ERR_STATUS_DEBUG)); 90 87 #elif defined(CONFIG_SIBYTE_BCM1x55) || defined(CONFIG_SIBYTE_BCM1x80)
-2
arch/mips/sibyte/sb1250/setup.c
··· 202 202 203 203 switch (war_pass) { 204 204 case K_SYS_REVISION_BCM1250_PASS1: 205 - #ifndef CONFIG_SB1_PASS_1_WORKAROUNDS 206 205 printk("@@@@ This is a BCM1250 A0-A2 (Pass 1) board, " 207 206 "and the kernel doesn't have the proper " 208 207 "workarounds compiled in. @@@@\n"); 209 208 bad_config = 1; 210 - #endif 211 209 break; 212 210 case K_SYS_REVISION_BCM1250_PASS2: 213 211 /* Pass 2 - easiest as default for now - so many numbers */
+1
arch/mn10300/include/asm/Kbuild
··· 5 5 generic-y += exec.h 6 6 generic-y += irq_work.h 7 7 generic-y += mcs_spinlock.h 8 + generic-y += mm-arch-hooks.h 8 9 generic-y += preempt.h 9 10 generic-y += sections.h 10 11 generic-y += trace_clock.h
-15
arch/mn10300/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_MN10300_MM_ARCH_HOOKS_H 13 - #define _ASM_MN10300_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_MN10300_MM_ARCH_HOOKS_H */
+1
arch/nios2/include/asm/Kbuild
··· 30 30 generic-y += kvm_para.h 31 31 generic-y += local.h 32 32 generic-y += mcs_spinlock.h 33 + generic-y += mm-arch-hooks.h 33 34 generic-y += mman.h 34 35 generic-y += module.h 35 36 generic-y += msgbuf.h
-15
arch/nios2/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_NIOS2_MM_ARCH_HOOKS_H 13 - #define _ASM_NIOS2_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_NIOS2_MM_ARCH_HOOKS_H */
+1 -3
arch/openrisc/Kconfig
··· 17 17 select GENERIC_IRQ_SHOW 18 18 select GENERIC_IOMAP 19 19 select GENERIC_CPU_DEVICES 20 + select HAVE_UID16 20 21 select GENERIC_ATOMIC64 21 22 select GENERIC_CLOCKEVENTS 22 23 select GENERIC_STRNCPY_FROM_USER ··· 30 29 def_bool y 31 30 32 31 config HAVE_DMA_ATTRS 33 - def_bool y 34 - 35 - config UID16 36 32 def_bool y 37 33 38 34 config RWSEM_GENERIC_SPINLOCK
+1
arch/openrisc/include/asm/Kbuild
··· 36 36 generic-y += kvm_para.h 37 37 generic-y += local.h 38 38 generic-y += mcs_spinlock.h 39 + generic-y += mm-arch-hooks.h 39 40 generic-y += mman.h 40 41 generic-y += module.h 41 42 generic-y += msgbuf.h
-15
arch/openrisc/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_OPENRISC_MM_ARCH_HOOKS_H 13 - #define _ASM_OPENRISC_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_OPENRISC_MM_ARCH_HOOKS_H */
+1
arch/parisc/include/asm/Kbuild
··· 15 15 generic-y += local.h 16 16 generic-y += local64.h 17 17 generic-y += mcs_spinlock.h 18 + generic-y += mm-arch-hooks.h 18 19 generic-y += mutex.h 19 20 generic-y += param.h 20 21 generic-y += percpu.h
-15
arch/parisc/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_PARISC_MM_ARCH_HOOKS_H 13 - #define _ASM_PARISC_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_PARISC_MM_ARCH_HOOKS_H */
+2 -1
arch/parisc/include/asm/pgalloc.h
··· 72 72 73 73 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) 74 74 { 75 - if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED) 75 + if (pmd_flag(*pmd) & PxD_FLAG_ATTACHED) { 76 76 /* 77 77 * This is the permanent pmd attached to the pgd; 78 78 * cannot free it. ··· 81 81 */ 82 82 mm_inc_nr_pmds(mm); 83 83 return; 84 + } 84 85 free_pages((unsigned long)pmd, PMD_ORDER); 85 86 } 86 87
+37 -18
arch/parisc/include/asm/pgtable.h
··· 16 16 #include <asm/processor.h> 17 17 #include <asm/cache.h> 18 18 19 - extern spinlock_t pa_dbit_lock; 19 + extern spinlock_t pa_tlb_lock; 20 20 21 21 /* 22 22 * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel ··· 33 33 */ 34 34 #define kern_addr_valid(addr) (1) 35 35 36 + /* Purge data and instruction TLB entries. Must be called holding 37 + * the pa_tlb_lock. The TLB purge instructions are slow on SMP 38 + * machines since the purge must be broadcast to all CPUs. 39 + */ 40 + 41 + static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr) 42 + { 43 + mtsp(mm->context, 1); 44 + pdtlb(addr); 45 + if (unlikely(split_tlb)) 46 + pitlb(addr); 47 + } 48 + 36 49 /* Certain architectures need to do special things when PTEs 37 50 * within a page table are directly modified. Thus, the following 38 51 * hook is made available. ··· 55 42 *(pteptr) = (pteval); \ 56 43 } while(0) 57 44 58 - extern void purge_tlb_entries(struct mm_struct *, unsigned long); 45 + #define pte_inserted(x) \ 46 + ((pte_val(x) & (_PAGE_PRESENT|_PAGE_ACCESSED)) \ 47 + == (_PAGE_PRESENT|_PAGE_ACCESSED)) 59 48 60 - #define set_pte_at(mm, addr, ptep, pteval) \ 61 - do { \ 49 + #define set_pte_at(mm, addr, ptep, pteval) \ 50 + do { \ 51 + pte_t old_pte; \ 62 52 unsigned long flags; \ 63 - spin_lock_irqsave(&pa_dbit_lock, flags); \ 64 - set_pte(ptep, pteval); \ 65 - purge_tlb_entries(mm, addr); \ 66 - spin_unlock_irqrestore(&pa_dbit_lock, flags); \ 53 + spin_lock_irqsave(&pa_tlb_lock, flags); \ 54 + old_pte = *ptep; \ 55 + set_pte(ptep, pteval); \ 56 + if (pte_inserted(old_pte)) \ 57 + purge_tlb_entries(mm, addr); \ 58 + spin_unlock_irqrestore(&pa_tlb_lock, flags); \ 67 59 } while (0) 68 60 69 61 #endif /* !__ASSEMBLY__ */ ··· 286 268 287 269 #define pte_none(x) (pte_val(x) == 0) 288 270 #define pte_present(x) (pte_val(x) & _PAGE_PRESENT) 289 - #define pte_clear(mm,addr,xp) do { pte_val(*(xp)) = 0; } while (0) 271 + #define pte_clear(mm, addr, xp) set_pte_at(mm, 
addr, xp, __pte(0)) 290 272 291 273 #define pmd_flag(x) (pmd_val(x) & PxD_FLAG_MASK) 292 274 #define pmd_address(x) ((unsigned long)(pmd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT) ··· 453 435 if (!pte_young(*ptep)) 454 436 return 0; 455 437 456 - spin_lock_irqsave(&pa_dbit_lock, flags); 438 + spin_lock_irqsave(&pa_tlb_lock, flags); 457 439 pte = *ptep; 458 440 if (!pte_young(pte)) { 459 - spin_unlock_irqrestore(&pa_dbit_lock, flags); 441 + spin_unlock_irqrestore(&pa_tlb_lock, flags); 460 442 return 0; 461 443 } 462 444 set_pte(ptep, pte_mkold(pte)); 463 445 purge_tlb_entries(vma->vm_mm, addr); 464 - spin_unlock_irqrestore(&pa_dbit_lock, flags); 446 + spin_unlock_irqrestore(&pa_tlb_lock, flags); 465 447 return 1; 466 448 } 467 449 ··· 471 453 pte_t old_pte; 472 454 unsigned long flags; 473 455 474 - spin_lock_irqsave(&pa_dbit_lock, flags); 456 + spin_lock_irqsave(&pa_tlb_lock, flags); 475 457 old_pte = *ptep; 476 - pte_clear(mm,addr,ptep); 477 - purge_tlb_entries(mm, addr); 478 - spin_unlock_irqrestore(&pa_dbit_lock, flags); 458 + set_pte(ptep, __pte(0)); 459 + if (pte_inserted(old_pte)) 460 + purge_tlb_entries(mm, addr); 461 + spin_unlock_irqrestore(&pa_tlb_lock, flags); 479 462 480 463 return old_pte; 481 464 } ··· 484 465 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) 485 466 { 486 467 unsigned long flags; 487 - spin_lock_irqsave(&pa_dbit_lock, flags); 468 + spin_lock_irqsave(&pa_tlb_lock, flags); 488 469 set_pte(ptep, pte_wrprotect(*ptep)); 489 470 purge_tlb_entries(mm, addr); 490 - spin_unlock_irqrestore(&pa_dbit_lock, flags); 471 + spin_unlock_irqrestore(&pa_tlb_lock, flags); 491 472 } 492 473 493 474 #define pte_same(A,B) (pte_val(A) == pte_val(B))
+29 -24
arch/parisc/include/asm/tlbflush.h
··· 13 13 * active at any one time on the Merced bus. This tlb purge 14 14 * synchronisation is fairly lightweight and harmless so we activate 15 15 * it on all systems not just the N class. 16 + 17 + * It is also used to ensure PTE updates are atomic and consistent 18 + * with the TLB. 16 19 */ 17 20 extern spinlock_t pa_tlb_lock; 18 21 ··· 27 24 28 25 #define smp_flush_tlb_all() flush_tlb_all() 29 26 27 + int __flush_tlb_range(unsigned long sid, 28 + unsigned long start, unsigned long end); 29 + 30 + #define flush_tlb_range(vma, start, end) \ 31 + __flush_tlb_range((vma)->vm_mm->context, start, end) 32 + 33 + #define flush_tlb_kernel_range(start, end) \ 34 + __flush_tlb_range(0, start, end) 35 + 30 36 /* 31 37 * flush_tlb_mm() 32 38 * 33 - * XXX This code is NOT valid for HP-UX compatibility processes, 34 - * (although it will probably work 99% of the time). HP-UX 35 - * processes are free to play with the space id's and save them 36 - * over long periods of time, etc. so we have to preserve the 37 - * space and just flush the entire tlb. We need to check the 38 - * personality in order to do that, but the personality is not 39 - * currently being set correctly. 40 - * 41 - * Of course, Linux processes could do the same thing, but 42 - * we don't support that (and the compilers, dynamic linker, 43 - * etc. do not do that). 39 + * The code to switch to a new context is NOT valid for processes 40 + * which play with the space id's. Thus, we have to preserve the 41 + * space and just flush the entire tlb. However, the compilers, 42 + * dynamic linker, etc, do not manipulate space id's, so there 43 + * could be a significant performance benefit in switching contexts 44 + * and not flushing the whole tlb. 
44 45 */ 45 46 46 47 static inline void flush_tlb_mm(struct mm_struct *mm) ··· 52 45 BUG_ON(mm == &init_mm); /* Should never happen */ 53 46 54 47 #if 1 || defined(CONFIG_SMP) 48 + /* Except for very small threads, flushing the whole TLB is 49 + * faster than using __flush_tlb_range. The pdtlb and pitlb 50 + * instructions are very slow because of the TLB broadcast. 51 + * It might be faster to do local range flushes on all CPUs 52 + * on PA 2.0 systems. 53 + */ 55 54 flush_tlb_all(); 56 55 #else 57 56 /* FIXME: currently broken, causing space id and protection ids 58 - * to go out of sync, resulting in faults on userspace accesses. 57 + * to go out of sync, resulting in faults on userspace accesses. 58 + * This approach needs further investigation since running many 59 + * small applications (e.g., GCC testsuite) is faster on HP-UX. 59 60 */ 60 61 if (mm) { 61 62 if (mm->context != 0) ··· 80 65 { 81 66 unsigned long flags, sid; 82 67 83 - /* For one page, it's not worth testing the split_tlb variable */ 84 - 85 - mb(); 86 68 sid = vma->vm_mm->context; 87 69 purge_tlb_start(flags); 88 70 mtsp(sid, 1); 89 71 pdtlb(addr); 90 - pitlb(addr); 72 + if (unlikely(split_tlb)) 73 + pitlb(addr); 91 74 purge_tlb_end(flags); 92 75 } 93 - 94 - void __flush_tlb_range(unsigned long sid, 95 - unsigned long start, unsigned long end); 96 - 97 - #define flush_tlb_range(vma,start,end) __flush_tlb_range((vma)->vm_mm->context,start,end) 98 - 99 - #define flush_tlb_kernel_range(start, end) __flush_tlb_range(0,start,end) 100 - 101 76 #endif
+67 -38
arch/parisc/kernel/cache.c
··· 342 342 EXPORT_SYMBOL(flush_kernel_icache_range_asm);
343 343
344 344 #define FLUSH_THRESHOLD 0x80000 /* 0.5MB */
345 - int parisc_cache_flush_threshold __read_mostly = FLUSH_THRESHOLD;
345 + static unsigned long parisc_cache_flush_threshold __read_mostly = FLUSH_THRESHOLD;
346 +
347 + #define FLUSH_TLB_THRESHOLD (2*1024*1024) /* 2MB initial TLB threshold */
348 + static unsigned long parisc_tlb_flush_threshold __read_mostly = FLUSH_TLB_THRESHOLD;
346 349
347 350 void __init parisc_setup_cache_timing(void)
348 351 {
349 352 unsigned long rangetime, alltime;
350 - unsigned long size;
353 + unsigned long size, start;
351 354
352 355 alltime = mfctl(16);
353 356 flush_data_cache();
··· 367 364 /* Racy, but if we see an intermediate value, it's ok too... */
368 365 parisc_cache_flush_threshold = size * alltime / rangetime;
369 366
370 - parisc_cache_flush_threshold = (parisc_cache_flush_threshold + L1_CACHE_BYTES - 1) &~ (L1_CACHE_BYTES - 1);
367 + parisc_cache_flush_threshold = L1_CACHE_ALIGN(parisc_cache_flush_threshold);
371 368 if (!parisc_cache_flush_threshold)
372 369 parisc_cache_flush_threshold = FLUSH_THRESHOLD;
373 370
374 371 if (parisc_cache_flush_threshold > cache_info.dc_size)
375 372 parisc_cache_flush_threshold = cache_info.dc_size;
376 373
377 - printk(KERN_INFO "Setting cache flush threshold to %x (%d CPUs online)\n", parisc_cache_flush_threshold, num_online_cpus());
374 + printk(KERN_INFO "Setting cache flush threshold to %lu kB\n",
375 + parisc_cache_flush_threshold/1024);
376 +
377 + /* calculate TLB flush threshold */
378 +
379 + alltime = mfctl(16);
380 + flush_tlb_all();
381 + alltime = mfctl(16) - alltime;
382 +
383 + size = PAGE_SIZE;
384 + start = (unsigned long) _text;
385 + rangetime = mfctl(16);
386 + while (start < (unsigned long) _end) {
387 + flush_tlb_kernel_range(start, start + PAGE_SIZE);
388 + start += PAGE_SIZE;
389 + size += PAGE_SIZE;
390 + }
391 + rangetime = mfctl(16) - rangetime;
392 +
393 + printk(KERN_DEBUG "Whole TLB flush %lu cycles, flushing %lu bytes %lu cycles\n",
394 + alltime, size, rangetime);
395 +
396 + parisc_tlb_flush_threshold = size * alltime / rangetime;
397 + parisc_tlb_flush_threshold *= num_online_cpus();
398 + parisc_tlb_flush_threshold = PAGE_ALIGN(parisc_tlb_flush_threshold);
399 + if (!parisc_tlb_flush_threshold)
400 + parisc_tlb_flush_threshold = FLUSH_TLB_THRESHOLD;
401 +
402 + printk(KERN_INFO "Setting TLB flush threshold to %lu kB\n",
403 + parisc_tlb_flush_threshold/1024);
378 404 }
379 405
380 406 extern void purge_kernel_dcache_page_asm(unsigned long);
··· 435 403 }
436 404 EXPORT_SYMBOL(copy_user_page);
437 405
438 - void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
406 + /* __flush_tlb_range()
407 + *
408 + * returns 1 if all TLBs were flushed.
409 + */
410 + int __flush_tlb_range(unsigned long sid, unsigned long start,
411 + unsigned long end)
439 412 {
440 - unsigned long flags;
413 + unsigned long flags, size;
441 414
442 - /* Note: purge_tlb_entries can be called at startup with
443 - no context. */
444 -
445 - purge_tlb_start(flags);
446 - mtsp(mm->context, 1);
447 - pdtlb(addr);
448 - pitlb(addr);
449 - purge_tlb_end(flags);
450 - }
451 - EXPORT_SYMBOL(purge_tlb_entries);
452 -
453 - void __flush_tlb_range(unsigned long sid, unsigned long start,
454 - unsigned long end)
455 - {
456 - unsigned long npages;
457 -
458 - npages = ((end - (start & PAGE_MASK)) + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
459 - if (npages >= 512) /* 2MB of space: arbitrary, should be tuned */
415 + size = (end - start);
416 + if (size >= parisc_tlb_flush_threshold) {
460 417 flush_tlb_all();
461 - else {
462 - unsigned long flags;
418 + return 1;
419 + }
463 420
421 + /* Purge TLB entries for small ranges using the pdtlb and
422 + pitlb instructions. These instructions execute locally
423 + but cause a purge request to be broadcast to other TLBs. */
424 + if (likely(!split_tlb)) {
425 + while (start < end) {
426 + purge_tlb_start(flags);
427 + mtsp(sid, 1);
428 + pdtlb(start);
429 + purge_tlb_end(flags);
430 + start += PAGE_SIZE;
431 + }
432 + return 0;
433 + }
434 +
435 + /* split TLB case */
436 + while (start < end) {
464 437 purge_tlb_start(flags);
465 438 mtsp(sid, 1);
466 - if (split_tlb) {
467 - while (npages--) {
468 - pdtlb(start);
469 - pitlb(start);
470 - start += PAGE_SIZE;
471 - }
472 - } else {
473 - while (npages--) {
474 - pdtlb(start);
475 - start += PAGE_SIZE;
476 - }
477 - }
439 + pdtlb(start);
440 + pitlb(start);
478 441 purge_tlb_end(flags);
442 + start += PAGE_SIZE;
479 443 }
444 + return 0;
480 445 }
481 446
482 447 static void cacheflush_h_tmp_function(void *dummy)
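The TLB-threshold calibration added in the hunk above is plain integer arithmetic: scale the measured page-by-page flush cost against the whole-TLB flush cost, multiply by the number of online CPUs, page-align the result, and fall back to the 2MB default if the measurement degenerates to zero. A userspace C sketch of just that arithmetic (the cycle counts here stand in for the kernel's `mfctl(16)` readings, and `page_align` mimics `PAGE_ALIGN`; both are illustrative, not kernel code):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL
#define FLUSH_TLB_THRESHOLD (2 * 1024 * 1024UL) /* 2MB fallback, as in the hunk */

/* Round up to a page boundary, like the kernel's PAGE_ALIGN(). */
static unsigned long page_align(unsigned long x)
{
    return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/*
 * Mirror of the calibration in parisc_setup_cache_timing():
 *   alltime   - cycles for one flush_tlb_all()
 *   rangetime - cycles spent flushing `size` bytes page by page
 * Below the returned threshold, per-page purges are cheaper than a
 * full TLB flush; above it, flush_tlb_all() wins.
 */
static unsigned long tlb_threshold(unsigned long size,
                                   unsigned long alltime,
                                   unsigned long rangetime,
                                   unsigned int cpus)
{
    unsigned long t = size * alltime / rangetime;

    t *= cpus;      /* broadcast purges get costlier with more CPUs */
    t = page_align(t);
    if (!t)         /* degenerate measurement: keep the default */
        t = FLUSH_TLB_THRESHOLD;
    return t;
}
```

For example, if flushing 1 MiB page by page costs 8000 cycles while a whole-TLB flush costs 1000 cycles on a 4-CPU box, the break-even point is 1 MiB * 1000 / 8000 * 4 = 512 KiB.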
+79 -84
arch/parisc/kernel/entry.S
··· 45 45 .level 2.0
46 46 #endif
47 47
48 - .import pa_dbit_lock,data
48 + .import pa_tlb_lock,data
49 49
50 50 /* space_to_prot macro creates a prot id from a space id */
51 51
··· 420 420 SHLREG %r9,PxD_VALUE_SHIFT,\pmd
421 421 extru \va,31-PAGE_SHIFT,ASM_BITS_PER_PTE,\index
422 422 dep %r0,31,PAGE_SHIFT,\pmd /* clear offset */
423 - shladd \index,BITS_PER_PTE_ENTRY,\pmd,\pmd
424 - LDREG %r0(\pmd),\pte /* pmd is now pte */
423 + shladd \index,BITS_PER_PTE_ENTRY,\pmd,\pmd /* pmd is now pte */
424 + LDREG %r0(\pmd),\pte
425 425 bb,>=,n \pte,_PAGE_PRESENT_BIT,\fault
426 426 .endm
427 427
··· 453 453 L2_ptep \pgd,\pte,\index,\va,\fault
454 454 .endm
455 455
456 - /* Acquire pa_dbit_lock lock. */
457 - .macro dbit_lock spc,tmp,tmp1
456 + /* Acquire pa_tlb_lock lock and recheck page is still present. */
457 + .macro tlb_lock spc,ptp,pte,tmp,tmp1,fault
458 458 #ifdef CONFIG_SMP
459 459 cmpib,COND(=),n 0,\spc,2f
460 - load32 PA(pa_dbit_lock),\tmp
460 + load32 PA(pa_tlb_lock),\tmp
461 461 1: LDCW 0(\tmp),\tmp1
462 462 cmpib,COND(=) 0,\tmp1,1b
463 463 nop
464 + LDREG 0(\ptp),\pte
465 + bb,<,n \pte,_PAGE_PRESENT_BIT,2f
466 + b \fault
467 + stw \spc,0(\tmp)
464 468 2:
465 469 #endif
466 470 .endm
467 471
468 - /* Release pa_dbit_lock lock without reloading lock address. */
469 - .macro dbit_unlock0 spc,tmp
472 + /* Release pa_tlb_lock lock without reloading lock address. */
473 + .macro tlb_unlock0 spc,tmp
470 474 #ifdef CONFIG_SMP
471 475 or,COND(=) %r0,\spc,%r0
472 476 stw \spc,0(\tmp)
473 477 #endif
474 478 .endm
475 479
476 - /* Release pa_dbit_lock lock. */
477 - .macro dbit_unlock1 spc,tmp
480 + /* Release pa_tlb_lock lock. */
481 + .macro tlb_unlock1 spc,tmp
478 482 #ifdef CONFIG_SMP
479 - load32 PA(pa_dbit_lock),\tmp
480 - dbit_unlock0 \spc,\tmp
483 + load32 PA(pa_tlb_lock),\tmp
484 + tlb_unlock0 \spc,\tmp
481 485 #endif
482 486 .endm
483 487
484 488 /* Set the _PAGE_ACCESSED bit of the PTE. Be clever and
485 489 * don't needlessly dirty the cache line if it was already set */
486 - .macro update_ptep spc,ptep,pte,tmp,tmp1
487 - #ifdef CONFIG_SMP
488 - or,COND(=) %r0,\spc,%r0
489 - LDREG 0(\ptep),\pte
490 - #endif
490 + .macro update_accessed ptp,pte,tmp,tmp1
491 491 ldi _PAGE_ACCESSED,\tmp1
492 492 or \tmp1,\pte,\tmp
493 493 and,COND(<>) \tmp1,\pte,%r0
494 - STREG \tmp,0(\ptep)
494 + STREG \tmp,0(\ptp)
495 495 .endm
496 496
497 497 /* Set the dirty bit (and accessed bit). No need to be
498 498 * clever, this is only used from the dirty fault */
499 - .macro update_dirty spc,ptep,pte,tmp
500 - #ifdef CONFIG_SMP
501 - or,COND(=) %r0,\spc,%r0
502 - LDREG 0(\ptep),\pte
503 - #endif
499 + .macro update_dirty ptp,pte,tmp
504 500 ldi _PAGE_ACCESSED|_PAGE_DIRTY,\tmp
505 501 or \tmp,\pte,\pte
506 - STREG \pte,0(\ptep)
502 + STREG \pte,0(\ptp)
507 503 .endm
508 504
509 505 /* bitshift difference between a PFN (based on kernel's PAGE_SIZE)
··· 1144 1148
1145 1149 L3_ptep ptp,pte,t0,va,dtlb_check_alias_20w
1146 1150
1147 - dbit_lock spc,t0,t1
1148 - update_ptep spc,ptp,pte,t0,t1
1151 + tlb_lock spc,ptp,pte,t0,t1,dtlb_check_alias_20w
1152 + update_accessed ptp,pte,t0,t1
1149 1153
1150 1154 make_insert_tlb spc,pte,prot
1151 1155
1152 1156 idtlbt pte,prot
1153 - dbit_unlock1 spc,t0
1154 1157
1158 + tlb_unlock1 spc,t0
1155 1159 rfir
1156 1160 nop
1157 1161
··· 1170 1174
1171 1175 L3_ptep ptp,pte,t0,va,nadtlb_check_alias_20w
1172 1176
1173 - dbit_lock spc,t0,t1
1174 - update_ptep spc,ptp,pte,t0,t1
1177 + tlb_lock spc,ptp,pte,t0,t1,nadtlb_check_alias_20w
1178 + update_accessed ptp,pte,t0,t1
1175 1179
1176 1180 make_insert_tlb spc,pte,prot
1177 1181
1178 1182 idtlbt pte,prot
1179 - dbit_unlock1 spc,t0
1180 1183
1184 + tlb_unlock1 spc,t0
1181 1185 rfir
1182 1186 nop
1183 1187
··· 1198 1202
1199 1203 L2_ptep ptp,pte,t0,va,dtlb_check_alias_11
1200 1204
1201 - dbit_lock spc,t0,t1
1202 - update_ptep spc,ptp,pte,t0,t1
1205 + tlb_lock spc,ptp,pte,t0,t1,dtlb_check_alias_11
1206 + update_accessed ptp,pte,t0,t1
1203 1207
1204 1208 make_insert_tlb_11 spc,pte,prot
1205 1209
1206 - mfsp %sr1,t0 /* Save sr1 so we can use it in tlb inserts */
1210 + mfsp %sr1,t1 /* Save sr1 so we can use it in tlb inserts */
1207 1211 mtsp spc,%sr1
1208 1212
1209 1213 idtlba pte,(%sr1,va)
1210 1214 idtlbp prot,(%sr1,va)
1211 1215
1212 - mtsp t0, %sr1 /* Restore sr1 */
1213 - dbit_unlock1 spc,t0
1216 + mtsp t1, %sr1 /* Restore sr1 */
1214 1217
1218 + tlb_unlock1 spc,t0
1215 1219 rfir
1216 1220 nop
1217 1221
··· 1231 1235
1232 1236 L2_ptep ptp,pte,t0,va,nadtlb_check_alias_11
1233 1237
1234 - dbit_lock spc,t0,t1
1235 - update_ptep spc,ptp,pte,t0,t1
1238 + tlb_lock spc,ptp,pte,t0,t1,nadtlb_check_alias_11
1239 + update_accessed ptp,pte,t0,t1
1236 1240
1237 1241 make_insert_tlb_11 spc,pte,prot
1238 1242
1239 -
1240 - mfsp %sr1,t0 /* Save sr1 so we can use it in tlb inserts */
1243 + mfsp %sr1,t1 /* Save sr1 so we can use it in tlb inserts */
1241 1244 mtsp spc,%sr1
1242 1245
1243 1246 idtlba pte,(%sr1,va)
1244 1247 idtlbp prot,(%sr1,va)
1245 1248
1246 - mtsp t0, %sr1 /* Restore sr1 */
1247 - dbit_unlock1 spc,t0
1249 + mtsp t1, %sr1 /* Restore sr1 */
1248 1250
1251 + tlb_unlock1 spc,t0
1249 1252 rfir
1250 1253 nop
1251 1254
··· 1264 1269
1265 1270
1266 1271 L2_ptep ptp,pte,t0,va,dtlb_check_alias_20
1267 1272
1268 - dbit_lock spc,t0,t1
1269 - update_ptep spc,ptp,pte,t0,t1
1273 + tlb_lock spc,ptp,pte,t0,t1,dtlb_check_alias_20
1274 + update_accessed ptp,pte,t0,t1
1270 1275
1271 1276 make_insert_tlb spc,pte,prot
1272 1277
1273 - f_extend pte,t0
1278 + f_extend pte,t1
1274 1279
1275 1280 idtlbt pte,prot
1276 - dbit_unlock1 spc,t0
1277 1281
1282 + tlb_unlock1 spc,t0
1278 1283 rfir
1279 1284 nop
1280 1285
··· 1292 1297
1293 1298 L2_ptep ptp,pte,t0,va,nadtlb_check_alias_20
1294 1299
1295 - dbit_lock spc,t0,t1
1296 - update_ptep spc,ptp,pte,t0,t1
1300 + tlb_lock spc,ptp,pte,t0,t1,nadtlb_check_alias_20
1301 + update_accessed ptp,pte,t0,t1
1297 1302
1298 1303 make_insert_tlb spc,pte,prot
1299 1304
1300 - f_extend pte,t0
1305 + f_extend pte,t1
1301 1306
1302 - idtlbt pte,prot
1303 - dbit_unlock1 spc,t0
1307 + idtlbt pte,prot
1304 1308
1309 + tlb_unlock1 spc,t0
1305 1310 rfir
1306 1311 nop
1307 1312
··· 1401 1406
1402 1407 L3_ptep ptp,pte,t0,va,itlb_fault
1403 1408
1404 - dbit_lock spc,t0,t1
1405 - update_ptep spc,ptp,pte,t0,t1
1409 + tlb_lock spc,ptp,pte,t0,t1,itlb_fault
1410 + update_accessed ptp,pte,t0,t1
1406 1411
1407 1412 make_insert_tlb spc,pte,prot
1408 1413
1409 1414 iitlbt pte,prot
1410 - dbit_unlock1 spc,t0
1411 1415
1416 + tlb_unlock1 spc,t0
1412 1417 rfir
1413 1418 nop
1414 1419
··· 1425 1430
1426 1431 L3_ptep ptp,pte,t0,va,naitlb_check_alias_20w
1427 1432
1428 - dbit_lock spc,t0,t1
1429 - update_ptep spc,ptp,pte,t0,t1
1433 + tlb_lock spc,ptp,pte,t0,t1,naitlb_check_alias_20w
1434 + update_accessed ptp,pte,t0,t1
1430 1435
1431 1436 make_insert_tlb spc,pte,prot
1432 1437
1433 1438 iitlbt pte,prot
1434 - dbit_unlock1 spc,t0
1435 1439
1440 + tlb_unlock1 spc,t0
1436 1441 rfir
1437 1442 nop
1438 1443
··· 1453 1458
1454 1459 L2_ptep ptp,pte,t0,va,itlb_fault
1455 1460
1456 - dbit_lock spc,t0,t1
1457 - update_ptep spc,ptp,pte,t0,t1
1461 + tlb_lock spc,ptp,pte,t0,t1,itlb_fault
1462 + update_accessed ptp,pte,t0,t1
1458 1463
1459 1464 make_insert_tlb_11 spc,pte,prot
1460 1465
1461 - mfsp %sr1,t0 /* Save sr1 so we can use it in tlb inserts */
1466 + mfsp %sr1,t1 /* Save sr1 so we can use it in tlb inserts */
1462 1467 mtsp spc,%sr1
1463 1468
1464 1469 iitlba pte,(%sr1,va)
1465 1470 iitlbp prot,(%sr1,va)
1466 1471
1467 - mtsp t0, %sr1 /* Restore sr1 */
1468 - dbit_unlock1 spc,t0
1472 + mtsp t1, %sr1 /* Restore sr1 */
1469 1473
1474 + tlb_unlock1 spc,t0
1470 1475 rfir
1471 1476 nop
1472 1477
··· 1477 1482
1478 1483 L2_ptep ptp,pte,t0,va,naitlb_check_alias_11
1479 1484
1480 - dbit_lock spc,t0,t1
1481 - update_ptep spc,ptp,pte,t0,t1
1485 + tlb_lock spc,ptp,pte,t0,t1,naitlb_check_alias_11
1486 + update_accessed ptp,pte,t0,t1
1482 1487
1483 1488 make_insert_tlb_11 spc,pte,prot
1484 1489
1485 - mfsp %sr1,t0 /* Save sr1 so we can use it in tlb inserts */
1490 + mfsp %sr1,t1 /* Save sr1 so we can use it in tlb inserts */
1486 1491 mtsp spc,%sr1
1487 1492
1488 1493 iitlba pte,(%sr1,va)
1489 1494 iitlbp prot,(%sr1,va)
1490 1495
1491 - mtsp t0, %sr1 /* Restore sr1 */
1492 - dbit_unlock1 spc,t0
1496 + mtsp t1, %sr1 /* Restore sr1 */
1493 1497
1498 + tlb_unlock1 spc,t0
1494 1499 rfir
1495 1500 nop
1496 1501
··· 1511 1516
1512 1517 L2_ptep ptp,pte,t0,va,itlb_fault
1513 1518
1514 - dbit_lock spc,t0,t1
1515 - update_ptep spc,ptp,pte,t0,t1
1519 + tlb_lock spc,ptp,pte,t0,t1,itlb_fault
1520 + update_accessed ptp,pte,t0,t1
1516 1521
1517 1522 make_insert_tlb spc,pte,prot
1518 1523
1519 - f_extend pte,t0
1524 + f_extend pte,t1
1520 1525
1521 1526 iitlbt pte,prot
1522 - dbit_unlock1 spc,t0
1523 1527
1528 + tlb_unlock1 spc,t0
1524 1529 rfir
1525 1530 nop
1526 1531
··· 1531 1536
1532 1537 L2_ptep ptp,pte,t0,va,naitlb_check_alias_20
1533 1538
1534 - dbit_lock spc,t0,t1
1535 - update_ptep spc,ptp,pte,t0,t1
1539 + tlb_lock spc,ptp,pte,t0,t1,naitlb_check_alias_20
1540 + update_accessed ptp,pte,t0,t1
1536 1541
1537 1542 make_insert_tlb spc,pte,prot
1538 1543
1539 - f_extend pte,t0
1544 + f_extend pte,t1
1540 1545
1541 1546 iitlbt pte,prot
1542 - dbit_unlock1 spc,t0
1543 1547
1548 + tlb_unlock1 spc,t0
1544 1549 rfir
1545 1550 nop
1546 1551
··· 1563 1568
1564 1569 L3_ptep ptp,pte,t0,va,dbit_fault
1565 1570
1566 - dbit_lock spc,t0,t1
1567 - update_dirty spc,ptp,pte,t1
1571 + tlb_lock spc,ptp,pte,t0,t1,dbit_fault
1572 + update_dirty ptp,pte,t1
1568 1573
1569 1574 make_insert_tlb spc,pte,prot
1570 1575
1571 1576 idtlbt pte,prot
1572 - dbit_unlock0 spc,t0
1573 1577
1578 + tlb_unlock0 spc,t0
1574 1579 rfir
1575 1580 nop
1576 1581 #else
··· 1583 1588
1584 1589
1585 1590 L2_ptep ptp,pte,t0,va,dbit_fault
1586 1591
1587 - dbit_lock spc,t0,t1
1588 - update_dirty spc,ptp,pte,t1
1591 + tlb_lock spc,ptp,pte,t0,t1,dbit_fault
1592 + update_dirty ptp,pte,t1
1588 1593
1589 1594 make_insert_tlb_11 spc,pte,prot
1590 1595
··· 1595 1600 idtlbp prot,(%sr1,va)
1596 1601
1597 1602 mtsp t1, %sr1 /* Restore sr1 */
1598 - dbit_unlock0 spc,t0
1599 1603
1604 + tlb_unlock0 spc,t0
1600 1605 rfir
1601 1606 nop
1602 1607
··· 1607 1612
1608 1613 L2_ptep ptp,pte,t0,va,dbit_fault
1609 1614
1610 - dbit_lock spc,t0,t1
1611 - update_dirty spc,ptp,pte,t1
1615 + tlb_lock spc,ptp,pte,t0,t1,dbit_fault
1616 + update_dirty ptp,pte,t1
1612 1617
1613 1618 make_insert_tlb spc,pte,prot
1614 1619
1615 1620 f_extend pte,t1
1616 1621
1617 - idtlbt pte,prot
1618 - dbit_unlock0 spc,t0
1622 + idtlbt pte,prot
1619 1623
1624 + tlb_unlock0 spc,t0
1620 1625 rfir
1621 1626 nop
1622 1627 #endif
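The dbit_lock to tlb_lock conversion in the entry.S hunks above does more than rename the lock: on SMP, the new macro re-reads the PTE *after* the spinlock is held and branches to the fault handler (dropping the lock on the way out) if the present bit was cleared while the CPU spun. A minimal single-threaded C model of that lock-then-recheck shape (the lock word, PTE encoding, and bit name are illustrative stand-ins for `pa_tlb_lock` and the real PTE layout, and the spin is not atomic here):

```c
#include <assert.h>
#include <stdbool.h>

#define _PAGE_PRESENT 0x1UL /* illustrative bit, not the parisc encoding */

struct tlb_lock_model {
    int lock;           /* stands in for the LDCW-based pa_tlb_lock word */
    unsigned long pte;  /* stands in for the PTE about to be inserted */
};

/*
 * Model of tlb_lock: take the lock, then re-read the PTE and give up
 * (releasing the lock, as the `b \fault` / `stw \spc,0(\tmp)` pair
 * does) if the page went non-present while we were waiting.
 * Returns true if the lock is held and the PTE is still usable.
 */
static bool tlb_lock_and_recheck(struct tlb_lock_model *m,
                                 unsigned long *pte_out)
{
    while (m->lock)     /* the LDCW spin loop; single-threaded model */
        ;
    m->lock = 1;

    *pte_out = m->pte;  /* LDREG 0(\ptp),\pte after the lock is held */
    if (!(*pte_out & _PAGE_PRESENT)) {
        m->lock = 0;    /* drop the lock and take the fault path */
        return false;
    }
    return true;        /* caller inserts the TLB entry, then unlocks */
}
```

The point of the recheck is that a concurrent `__flush_tlb_range()` holding the same lock may have purged the page between the fault and the lock acquisition; inserting a stale translation at that point would undo the purge.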
-4
arch/parisc/kernel/traps.c
··· 43 43 44 44 #include "../math-emu/math-emu.h" /* for handle_fpe() */ 45 45 46 - #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) 47 - DEFINE_SPINLOCK(pa_dbit_lock); 48 - #endif 49 - 50 46 static void parisc_show_stack(struct task_struct *task, unsigned long *sp, 51 47 struct pt_regs *regs); 52 48
+21 -10
arch/powerpc/kernel/idle_power7.S
··· 52 52 .text 53 53 54 54 /* 55 + * Used by threads when the lock bit of core_idle_state is set. 56 + * Threads will spin in HMT_LOW until the lock bit is cleared. 57 + * r14 - pointer to core_idle_state 58 + * r15 - used to load contents of core_idle_state 59 + */ 60 + 61 + core_idle_lock_held: 62 + HMT_LOW 63 + 3: lwz r15,0(r14) 64 + andi. r15,r15,PNV_CORE_IDLE_LOCK_BIT 65 + bne 3b 66 + HMT_MEDIUM 67 + lwarx r15,0,r14 68 + blr 69 + 70 + /* 55 71 * Pass requested state in r3: 56 72 * r3 - PNV_THREAD_NAP/SLEEP/WINKLE 57 73 * ··· 166 150 ld r14,PACA_CORE_IDLE_STATE_PTR(r13) 167 151 lwarx_loop1: 168 152 lwarx r15,0,r14 153 + 154 + andi. r9,r15,PNV_CORE_IDLE_LOCK_BIT 155 + bnel core_idle_lock_held 156 + 169 157 andc r15,r15,r7 /* Clear thread bit */ 170 158 171 159 andi. r15,r15,PNV_CORE_IDLE_THREAD_BITS ··· 314 294 * workaround undo code or resyncing timebase or restoring context 315 295 * In either case loop until the lock bit is cleared. 316 296 */ 317 - bne core_idle_lock_held 297 + bnel core_idle_lock_held 318 298 319 299 cmpwi cr2,r15,0 320 300 lbz r4,PACA_SUBCORE_SIBLING_MASK(r13) ··· 338 318 bne- lwarx_loop2 339 319 isync 340 320 b common_exit 341 - 342 - core_idle_lock_held: 343 - HMT_LOW 344 - core_idle_lock_loop: 345 - lwz r15,0(14) 346 - andi. r9,r15,PNV_CORE_IDLE_LOCK_BIT 347 - bne core_idle_lock_loop 348 - HMT_MEDIUM 349 - b lwarx_loop2 350 321 351 322 first_thread_in_subcore: 352 323 /* First thread in subcore to wakeup */
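The idle_power7.S hunk above turns core_idle_lock_held into a proper subroutine (called with `bnel`, returning via `blr`): when an `lwarx` observes the lock bit of core_idle_state set, the thread drops to low SMT priority, spins until the bit clears, reloads the state, and returns so the caller can retry its store-conditional. It also fixes the old spin loop's `lwz r15,0(14)` (a literal 14 instead of register r14). A single-threaded C model of that wait-then-retry shape (the bit value and state word are illustrative; the real code uses lwarx/stwcx. for atomicity):

```c
#include <assert.h>

#define PNV_CORE_IDLE_LOCK_BIT 0x100UL /* illustrative bit value */

/* Model of core_idle_lock_held: wait for the lock bit to clear, then
 * return a fresh copy of the state for the caller to retry with. */
static unsigned long wait_for_core_idle_lock(const unsigned long *state)
{
    unsigned long v;

    do {                /* HMT_LOW spin in the real code */
        v = *state;
    } while (v & PNV_CORE_IDLE_LOCK_BIT);
    return v;           /* the fresh lwarx value */
}

/* Caller pattern from the nap entry path: clear this thread's bit in
 * the per-core idle state, waiting out the lock holder if needed. */
static unsigned long clear_thread_bit(unsigned long *state,
                                      unsigned long thread_bit)
{
    unsigned long v = *state;               /* lwarx */

    if (v & PNV_CORE_IDLE_LOCK_BIT)         /* andi. + bnel */
        v = wait_for_core_idle_lock(state);
    v &= ~thread_bit;                       /* andc r15,r15,r7 */
    *state = v;                             /* stwcx. (not modeled) */
    return v;
}
```

The refactor matters because the helper is now reachable from both the idle-entry and wakeup paths, instead of being hard-wired to jump back to one specific `lwarx_loop2` label.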
+2
arch/powerpc/kernel/traps.c
··· 297 297 298 298 __this_cpu_inc(irq_stat.mce_exceptions); 299 299 300 + add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE); 301 + 300 302 if (cur_cpu_spec && cur_cpu_spec->machine_check_early) 301 303 handled = cur_cpu_spec->machine_check_early(regs); 302 304 return handled;
+4
arch/powerpc/mm/fault.c
··· 529 529 printk(KERN_ALERT "Unable to handle kernel paging request for " 530 530 "instruction fetch\n"); 531 531 break; 532 + case 0x600: 533 + printk(KERN_ALERT "Unable to handle kernel paging request for " 534 + "unaligned access at address 0x%08lx\n", regs->dar); 535 + break; 532 536 default: 533 537 printk(KERN_ALERT "Unable to handle kernel paging request for " 534 538 "unknown fault\n");
+2
arch/powerpc/perf/hv-24x7.c
··· 320 320 if (!attr) 321 321 return NULL; 322 322 323 + sysfs_attr_init(&attr->attr.attr); 324 + 323 325 attr->var = str; 324 326 attr->attr.attr.name = name; 325 327 attr->attr.attr.mode = 0444;
+5 -11
arch/powerpc/platforms/powernv/opal-elog.c
··· 237 237 return elog; 238 238 } 239 239 240 - static void elog_work_fn(struct work_struct *work) 240 + static irqreturn_t elog_event(int irq, void *data) 241 241 { 242 242 __be64 size; 243 243 __be64 id; ··· 251 251 rc = opal_get_elog_size(&id, &size, &type); 252 252 if (rc != OPAL_SUCCESS) { 253 253 pr_err("ELOG: OPAL log info read failed\n"); 254 - return; 254 + return IRQ_HANDLED; 255 255 } 256 256 257 257 elog_size = be64_to_cpu(size); ··· 270 270 * entries. 271 271 */ 272 272 if (kset_find_obj(elog_kset, name)) 273 - return; 273 + return IRQ_HANDLED; 274 274 275 275 create_elog_obj(log_id, elog_size, elog_type); 276 - } 277 276 278 - static DECLARE_WORK(elog_work, elog_work_fn); 279 - 280 - static irqreturn_t elog_event(int irq, void *data) 281 - { 282 - schedule_work(&elog_work); 283 277 return IRQ_HANDLED; 284 278 } 285 279 ··· 298 304 return irq; 299 305 } 300 306 301 - rc = request_irq(irq, elog_event, 302 - IRQ_TYPE_LEVEL_HIGH, "opal-elog", NULL); 307 + rc = request_threaded_irq(irq, NULL, elog_event, 308 + IRQF_TRIGGER_HIGH | IRQF_ONESHOT, "opal-elog", NULL); 303 309 if (rc) { 304 310 pr_err("%s: Can't request OPAL event irq (%d)\n", 305 311 __func__, rc);
+4 -5
arch/powerpc/platforms/powernv/opal-prd.c
··· 112 112 static int opal_prd_mmap(struct file *file, struct vm_area_struct *vma) 113 113 { 114 114 size_t addr, size; 115 + pgprot_t page_prot; 115 116 int rc; 116 117 117 118 pr_devel("opal_prd_mmap(0x%016lx, 0x%016lx, 0x%lx, 0x%lx)\n", ··· 126 125 if (!opal_prd_range_is_valid(addr, size)) 127 126 return -EINVAL; 128 127 129 - vma->vm_page_prot = __pgprot(pgprot_val(phys_mem_access_prot(file, 130 - vma->vm_pgoff, 131 - size, vma->vm_page_prot)) 132 - | _PAGE_SPECIAL); 128 + page_prot = phys_mem_access_prot(file, vma->vm_pgoff, 129 + size, vma->vm_page_prot); 133 130 134 131 rc = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, size, 135 - vma->vm_page_prot); 132 + page_prot); 136 133 137 134 return rc; 138 135 }
+1
arch/powerpc/sysdev/ppc4xx_hsta_msi.c
··· 18 18 #include <linux/pci.h> 19 19 #include <linux/semaphore.h> 20 20 #include <asm/msi_bitmap.h> 21 + #include <asm/ppc-pci.h> 21 22 22 23 struct ppc4xx_hsta_msi { 23 24 struct device *dev;
+1
arch/s390/include/asm/Kbuild
··· 3 3 generic-y += clkdev.h 4 4 generic-y += irq_work.h 5 5 generic-y += mcs_spinlock.h 6 + generic-y += mm-arch-hooks.h 6 7 generic-y += preempt.h 7 8 generic-y += trace_clock.h
+4 -1
arch/s390/include/asm/ctl_reg.h
··· 57 57 unsigned long lap : 1; /* Low-address-protection control */ 58 58 unsigned long : 4; 59 59 unsigned long edat : 1; /* Enhanced-DAT-enablement control */ 60 - unsigned long : 23; 60 + unsigned long : 4; 61 + unsigned long afp : 1; /* AFP-register control */ 62 + unsigned long vx : 1; /* Vector enablement control */ 63 + unsigned long : 17; 61 64 }; 62 65 }; 63 66
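The ctl_reg.h hunk above splits a 23-bit anonymous filler into `:4`, `afp:1`, `vx:1`, `:17`, which is only safe if the new widths sum to exactly 23 bits so no neighbouring field shifts. A compile-time check of that bookkeeping (widths only; the surrounding union is abridged in the diff, so this fragment is illustrative rather than the kernel's full control-register layout):

```c
#include <assert.h>

/* Widths of the replaced region, per the new bitfield definition. */
enum {
    OLD_FILLER = 23, /* the single filler being replaced */
    NEW_PAD_LO = 4,  /* unsigned long : 4 */
    AFP_BITS   = 1,  /* afp: AFP-register control */
    VX_BITS    = 1,  /* vx: Vector enablement control */
    NEW_PAD_HI = 17, /* unsigned long : 17 */
};

/* The new fields must exactly cover the old 23-bit filler. */
_Static_assert(NEW_PAD_LO + AFP_BITS + VX_BITS + NEW_PAD_HI == OLD_FILLER,
               "ctl_reg bitfield split would change the register layout");
```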
+1
arch/s390/include/asm/hugetlb.h
··· 14 14 15 15 #define is_hugepage_only_range(mm, addr, len) 0 16 16 #define hugetlb_free_pgd_range free_pgd_range 17 + #define hugepages_supported() (MACHINE_HAS_HPAGE) 17 18 18 19 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, 19 20 pte_t *ptep, pte_t pte);
-15
arch/s390/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_S390_MM_ARCH_HOOKS_H 13 - #define _ASM_S390_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_S390_MM_ARCH_HOOKS_H */
+4 -4
arch/s390/include/asm/page.h
··· 17 17 #define PAGE_DEFAULT_ACC 0 18 18 #define PAGE_DEFAULT_KEY (PAGE_DEFAULT_ACC << 4) 19 19 20 - #include <asm/setup.h> 21 - #ifndef __ASSEMBLY__ 22 - 23 - extern int HPAGE_SHIFT; 20 + #define HPAGE_SHIFT 20 24 21 #define HPAGE_SIZE (1UL << HPAGE_SHIFT) 25 22 #define HPAGE_MASK (~(HPAGE_SIZE - 1)) 26 23 #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) ··· 26 29 #define ARCH_HAS_HUGE_PTE_TYPE 27 30 #define ARCH_HAS_PREPARE_HUGEPAGE 28 31 #define ARCH_HAS_HUGEPAGE_CLEAR_FLUSH 32 + 33 + #include <asm/setup.h> 34 + #ifndef __ASSEMBLY__ 29 35 30 36 static inline void storage_key_init_range(unsigned long start, unsigned long end) 31 37 {
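With the page.h hunk above, `HPAGE_SHIFT` on s390 becomes a compile-time constant (20, i.e. 1 MiB huge pages) instead of a runtime variable, so the derived constants can be checked statically. A sketch of those derivations, assuming the usual s390 `PAGE_SHIFT` of 12 (4 KiB base pages; that value is not shown in the diff and is an assumption here):

```c
#include <assert.h>

#define PAGE_SHIFT 12                     /* assumed: 4 KiB base pages */
#define HPAGE_SHIFT 20                    /* now a constant, per the hunk */
#define HPAGE_SIZE (1UL << HPAGE_SHIFT)   /* 1 MiB huge pages */
#define HPAGE_MASK (~(HPAGE_SIZE - 1))    /* mask off the in-hugepage offset */
#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) /* 2^8 base pages */
```

Making the shift constant is what allows the companion hugetlb.h hunk to express huge-page availability as `hugepages_supported() == MACHINE_HAS_HPAGE` rather than keying off a zeroed `HPAGE_SHIFT`.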
+8
arch/s390/include/asm/perf_event.h
··· 87 87 } __packed; 88 88 89 89 /* Perf hardware reserve and release functions */ 90 + #ifdef CONFIG_PERF_EVENTS 90 91 int perf_reserve_sampling(void); 91 92 void perf_release_sampling(void); 93 + #else /* CONFIG_PERF_EVENTS */ 94 + static inline int perf_reserve_sampling(void) 95 + { 96 + return 0; 97 + } 98 + static inline void perf_release_sampling(void) {} 99 + #endif /* CONFIG_PERF_EVENTS */ 92 100 93 101 #endif /* _ASM_S390_PERF_EVENT_H */
+30 -21
arch/s390/kernel/nmi.c
··· 21 21 #include <asm/nmi.h> 22 22 #include <asm/crw.h> 23 23 #include <asm/switch_to.h> 24 + #include <asm/ctl_reg.h> 24 25 25 26 struct mcck_struct { 26 27 int kill_task; ··· 130 129 } else 131 130 asm volatile("lfpc 0(%0)" : : "a" (fpt_creg_save_area)); 132 131 133 - asm volatile( 134 - " ld 0,0(%0)\n" 135 - " ld 1,8(%0)\n" 136 - " ld 2,16(%0)\n" 137 - " ld 3,24(%0)\n" 138 - " ld 4,32(%0)\n" 139 - " ld 5,40(%0)\n" 140 - " ld 6,48(%0)\n" 141 - " ld 7,56(%0)\n" 142 - " ld 8,64(%0)\n" 143 - " ld 9,72(%0)\n" 144 - " ld 10,80(%0)\n" 145 - " ld 11,88(%0)\n" 146 - " ld 12,96(%0)\n" 147 - " ld 13,104(%0)\n" 148 - " ld 14,112(%0)\n" 149 - " ld 15,120(%0)\n" 150 - : : "a" (fpt_save_area)); 151 - /* Revalidate vector registers */ 152 - if (MACHINE_HAS_VX && current->thread.vxrs) { 132 + if (!MACHINE_HAS_VX) { 133 + /* Revalidate floating point registers */ 134 + asm volatile( 135 + " ld 0,0(%0)\n" 136 + " ld 1,8(%0)\n" 137 + " ld 2,16(%0)\n" 138 + " ld 3,24(%0)\n" 139 + " ld 4,32(%0)\n" 140 + " ld 5,40(%0)\n" 141 + " ld 6,48(%0)\n" 142 + " ld 7,56(%0)\n" 143 + " ld 8,64(%0)\n" 144 + " ld 9,72(%0)\n" 145 + " ld 10,80(%0)\n" 146 + " ld 11,88(%0)\n" 147 + " ld 12,96(%0)\n" 148 + " ld 13,104(%0)\n" 149 + " ld 14,112(%0)\n" 150 + " ld 15,120(%0)\n" 151 + : : "a" (fpt_save_area)); 152 + } else { 153 + /* Revalidate vector registers */ 154 + union ctlreg0 cr0; 155 + 153 156 if (!mci->vr) { 154 157 /* 155 158 * Vector registers can't be restored and therefore ··· 161 156 */ 162 157 kill_task = 1; 163 158 } 159 + cr0.val = S390_lowcore.cregs_save_area[0]; 160 + cr0.afp = cr0.vx = 1; 161 + __ctl_load(cr0.val, 0, 0); 164 162 restore_vx_regs((__vector128 *) 165 - S390_lowcore.vector_save_area_addr); 163 + &S390_lowcore.vector_save_area); 164 + __ctl_load(S390_lowcore.cregs_save_area[0], 0, 0); 166 165 } 167 166 /* Revalidate access registers */ 168 167 asm volatile(
+1 -1
arch/s390/kernel/process.c
··· 163 163 asmlinkage void execve_tail(void) 164 164 { 165 165 current->thread.fp_regs.fpc = 0; 166 - asm volatile("sfpc %0,%0" : : "d" (0)); 166 + asm volatile("sfpc %0" : : "d" (0)); 167 167 } 168 168 169 169 /*
+4
arch/s390/kernel/sclp.S
··· 270 270 jno .Lesa2 271 271 ahi %r15,-80 272 272 stmh %r6,%r15,96(%r15) # store upper register halves 273 + basr %r13,0 274 + lmh %r0,%r15,.Lzeroes-.(%r13) # clear upper register halves 273 275 .Lesa2: 274 276 lr %r10,%r2 # save string pointer 275 277 lhi %r2,0 ··· 293 291 .Lesa3: 294 292 lm %r6,%r15,120(%r15) # restore registers 295 293 br %r14 294 + .Lzeroes: 295 + .fill 64,4,0 296 296 297 297 .LwritedataS4: 298 298 .long 0x00760005 # SCLP command for write data
-2
arch/s390/kernel/setup.c
··· 885 885 */ 886 886 setup_hwcaps(); 887 887 888 - HPAGE_SHIFT = MACHINE_HAS_HPAGE ? 20 : 0; 889 - 890 888 /* 891 889 * Create kernel page tables and switch to virtual addressing. 892 890 */
-2
arch/s390/mm/pgtable.c
··· 31 31 #define ALLOC_ORDER 2 32 32 #define FRAG_MASK 0x03 33 33 34 - int HPAGE_SHIFT; 35 - 36 34 unsigned long *crst_table_alloc(struct mm_struct *mm) 37 35 { 38 36 struct page *page = alloc_pages(GFP_KERNEL, ALLOC_ORDER);
+1
arch/s390/oprofile/init.c
··· 16 16 #include <linux/fs.h> 17 17 #include <linux/module.h> 18 18 #include <asm/processor.h> 19 + #include <asm/perf_event.h> 19 20 20 21 #include "../../../drivers/oprofile/oprof.h" 21 22
+1
arch/score/include/asm/Kbuild
··· 7 7 generic-y += cputime.h 8 8 generic-y += irq_work.h 9 9 generic-y += mcs_spinlock.h 10 + generic-y += mm-arch-hooks.h 10 11 generic-y += preempt.h 11 12 generic-y += sections.h 12 13 generic-y += trace_clock.h
-15
arch/score/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_SCORE_MM_ARCH_HOOKS_H 13 - #define _ASM_SCORE_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_SCORE_MM_ARCH_HOOKS_H */
+1
arch/sh/include/asm/Kbuild
··· 16 16 generic-y += local.h 17 17 generic-y += local64.h 18 18 generic-y += mcs_spinlock.h 19 + generic-y += mm-arch-hooks.h 19 20 generic-y += mman.h 20 21 generic-y += msgbuf.h 21 22 generic-y += param.h
-15
arch/sh/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_SH_MM_ARCH_HOOKS_H 13 - #define _ASM_SH_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_SH_MM_ARCH_HOOKS_H */
+1
arch/sparc/include/asm/Kbuild
··· 12 12 generic-y += local.h 13 13 generic-y += local64.h 14 14 generic-y += mcs_spinlock.h 15 + generic-y += mm-arch-hooks.h 15 16 generic-y += module.h 16 17 generic-y += mutex.h 17 18 generic-y += preempt.h
-15
arch/sparc/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_SPARC_MM_ARCH_HOOKS_H 13 - #define _ASM_SPARC_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_SPARC_MM_ARCH_HOOKS_H */
+1
arch/tile/include/asm/Kbuild
··· 19 19 generic-y += local.h 20 20 generic-y += local64.h 21 21 generic-y += mcs_spinlock.h 22 + generic-y += mm-arch-hooks.h 22 23 generic-y += msgbuf.h 23 24 generic-y += mutex.h 24 25 generic-y += param.h
-15
arch/tile/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_TILE_MM_ARCH_HOOKS_H 13 - #define _ASM_TILE_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_TILE_MM_ARCH_HOOKS_H */
+2 -2
arch/tile/lib/memcpy_user_64.c
··· 28 28 #define _ST(p, inst, v) \ 29 29 ({ \ 30 30 asm("1: " #inst " %0, %1;" \ 31 - ".pushsection .coldtext.memcpy,\"ax\";" \ 31 + ".pushsection .coldtext,\"ax\";" \ 32 32 "2: { move r0, %2; jrp lr };" \ 33 33 ".section __ex_table,\"a\";" \ 34 34 ".align 8;" \ ··· 41 41 ({ \ 42 42 unsigned long __v; \ 43 43 asm("1: " #inst " %0, %1;" \ 44 - ".pushsection .coldtext.memcpy,\"ax\";" \ 44 + ".pushsection .coldtext,\"ax\";" \ 45 45 "2: { move r0, %2; jrp lr };" \ 46 46 ".section __ex_table,\"a\";" \ 47 47 ".align 8;" \
+1
arch/um/include/asm/Kbuild
··· 16 16 generic-y += irq_work.h 17 17 generic-y += kdebug.h 18 18 generic-y += mcs_spinlock.h 19 + generic-y += mm-arch-hooks.h 19 20 generic-y += mutex.h 20 21 generic-y += param.h 21 22 generic-y += pci.h
-15
arch/um/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_UM_MM_ARCH_HOOKS_H 13 - #define _ASM_UM_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_UM_MM_ARCH_HOOKS_H */
+1
arch/unicore32/include/asm/Kbuild
··· 26 26 generic-y += kmap_types.h 27 27 generic-y += local.h 28 28 generic-y += mcs_spinlock.h 29 + generic-y += mm-arch-hooks.h 29 30 generic-y += mman.h 30 31 generic-y += module.h 31 32 generic-y += msgbuf.h
-15
arch/unicore32/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_UNICORE32_MM_ARCH_HOOKS_H 13 - #define _ASM_UNICORE32_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_UNICORE32_MM_ARCH_HOOKS_H */
+7 -1
arch/x86/Kconfig
··· 41 41 select ARCH_USE_CMPXCHG_LOCKREF if X86_64 42 42 select ARCH_USE_QUEUED_RWLOCKS 43 43 select ARCH_USE_QUEUED_SPINLOCKS 44 + select ARCH_WANTS_DYNAMIC_TASK_STRUCT 44 45 select ARCH_WANT_FRAME_POINTERS 45 46 select ARCH_WANT_IPC_PARSE_VERSION if X86_32 46 47 select ARCH_WANT_OPTIONAL_GPIOLIB ··· 254 253 255 254 config ARCH_SUPPORTS_DEBUG_PAGEALLOC 256 255 def_bool y 256 + 257 + config KASAN_SHADOW_OFFSET 258 + hex 259 + depends on KASAN 260 + default 0xdffffc0000000000 257 261 258 262 config HAVE_INTEL_TXT 259 263 def_bool y ··· 2021 2015 2022 2016 To compile command line arguments into the kernel, 2023 2017 set this option to 'Y', then fill in the 2024 - the boot arguments in CONFIG_CMDLINE. 2018 + boot arguments in CONFIG_CMDLINE. 2025 2019 2026 2020 Systems with fully functional boot loaders (i.e. non-embedded) 2027 2021 should leave this option set to 'N'.
+12
arch/x86/Kconfig.debug
··· 297 297 298 298 If unsure, say N. 299 299 300 + config DEBUG_ENTRY 301 + bool "Debug low-level entry code" 302 + depends on DEBUG_KERNEL 303 + ---help--- 304 + This option enables sanity checks in x86's low-level entry code. 305 + Some of these sanity checks may slow down kernel entries and 306 + exits or otherwise impact performance. 307 + 308 + This is currently used to help test NMI code. 309 + 310 + If unsure, say N. 311 + 300 312 config DEBUG_NMI_SELFTEST 301 313 bool "NMI Selftest" 302 314 depends on DEBUG_KERNEL && X86_LOCAL_APIC
+202 -103
arch/x86/entry/entry_64.S
··· 1237 1237 * If the variable is not set and the stack is not the NMI
1238 1238 * stack then:
1239 1239 * o Set the special variable on the stack
1240 - * o Copy the interrupt frame into a "saved" location on the stack
1241 - * o Copy the interrupt frame into a "copy" location on the stack
1240 + * o Copy the interrupt frame into an "outermost" location on the
1241 + * stack
1242 + * o Copy the interrupt frame into an "iret" location on the stack
1242 1243 * o Continue processing the NMI
1243 1244 * If the variable is set or the previous stack is the NMI stack:
1244 - * o Modify the "copy" location to jump to the repeate_nmi
1245 + * o Modify the "iret" location to jump to the repeat_nmi
1245 1246 * o return back to the first NMI
1246 1247 *
1247 1248 * Now on exit of the first NMI, we first clear the stack variable
··· 1251 1250 * a nested NMI that updated the copy interrupt stack frame, a
1252 1251 * jump will be made to the repeat_nmi code that will handle the second
1253 1252 * NMI.
1253 + *
1254 + * However, espfix prevents us from directly returning to userspace
1255 + * with a single IRET instruction. Similarly, IRET to user mode
1256 + * can fault. We therefore handle NMIs from user space like
1257 + * other IST entries.
1254 1258 */
1255 1259
1256 1260 /* Use %rdx as our temp variable throughout */
1257 1261 pushq %rdx
1258 1262
1259 - /*
1260 - * If %cs was not the kernel segment, then the NMI triggered in user
1261 - * space, which means it is definitely not nested.
1262 - */
1263 - cmpl $__KERNEL_CS, 16(%rsp)
1264 - jne first_nmi
1263 + testb $3, CS-RIP+8(%rsp)
1264 + jz .Lnmi_from_kernel
1265 1265
1266 1266 /*
1267 - * Check the special variable on the stack to see if NMIs are
1268 - * executing.
1267 + * NMI from user mode. We need to run on the thread stack, but we
1268 + * can't go through the normal entry paths: NMIs are masked, and
1269 + * we don't want to enable interrupts, because then we'll end
1270 + * up in an awkward situation in which IRQs are on but NMIs
1271 + * are off.
1272 + */
1273 +
1274 + SWAPGS
1275 + cld
1276 + movq %rsp, %rdx
1277 + movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
1278 + pushq 5*8(%rdx) /* pt_regs->ss */
1279 + pushq 4*8(%rdx) /* pt_regs->rsp */
1280 + pushq 3*8(%rdx) /* pt_regs->flags */
1281 + pushq 2*8(%rdx) /* pt_regs->cs */
1282 + pushq 1*8(%rdx) /* pt_regs->rip */
1283 + pushq $-1 /* pt_regs->orig_ax */
1284 + pushq %rdi /* pt_regs->di */
1285 + pushq %rsi /* pt_regs->si */
1286 + pushq (%rdx) /* pt_regs->dx */
1287 + pushq %rcx /* pt_regs->cx */
1288 + pushq %rax /* pt_regs->ax */
1289 + pushq %r8 /* pt_regs->r8 */
1290 + pushq %r9 /* pt_regs->r9 */
1291 + pushq %r10 /* pt_regs->r10 */
1292 + pushq %r11 /* pt_regs->r11 */
1293 + pushq %rbx /* pt_regs->rbx */
1294 + pushq %rbp /* pt_regs->rbp */
1295 + pushq %r12 /* pt_regs->r12 */
1296 + pushq %r13 /* pt_regs->r13 */
1297 + pushq %r14 /* pt_regs->r14 */
1298 + pushq %r15 /* pt_regs->r15 */
1299 +
1300 + /*
1301 + * At this point we no longer need to worry about stack damage
1302 + * due to nesting -- we're on the normal thread stack and we're
1303 + * done with the NMI stack.
1304 + */
1305 +
1306 + movq %rsp, %rdi
1307 + movq $-1, %rsi
1308 + call do_nmi
1309 +
1310 + /*
1311 + * Return back to user mode. We must *not* do the normal exit
1312 + * work, because we don't want to enable interrupts. Fortunately,
1313 + * do_nmi doesn't modify pt_regs.
1314 + */
1315 + SWAPGS
1316 + jmp restore_c_regs_and_iret
1317 +
1318 + .Lnmi_from_kernel:
1319 + /*
1320 + * Here's what our stack frame will look like:
1321 + * +---------------------------------------------------------+
1322 + * | original SS |
1323 + * | original Return RSP |
1324 + * | original RFLAGS |
1325 + * | original CS |
1326 + * | original RIP |
1327 + * +---------------------------------------------------------+
1328 + * | temp storage for rdx |
1329 + * +---------------------------------------------------------+
1330 + * | "NMI executing" variable |
1331 + * +---------------------------------------------------------+
1332 + * | iret SS } Copied from "outermost" frame |
1333 + * | iret Return RSP } on each loop iteration; overwritten |
1334 + * | iret RFLAGS } by a nested NMI to force another |
1335 + * | iret CS } iteration if needed. |
1336 + * | iret RIP } |
1337 + * +---------------------------------------------------------+
1338 + * | outermost SS } initialized in first_nmi; |
1339 + * | outermost Return RSP } will not be changed before |
1340 + * | outermost RFLAGS } NMI processing is done. |
1341 + * | outermost CS } Copied to "iret" frame on each |
1342 + * | outermost RIP } iteration. |
1343 + * +---------------------------------------------------------+
1344 + * | pt_regs |
1345 + * +---------------------------------------------------------+
1346 + *
1347 + * The "original" frame is used by hardware. Before re-enabling
1348 + * NMIs, we need to be done with it, and we need to leave enough
1349 + * space for the asm code here.
1350 + *
1351 + * We return by executing IRET while RSP points to the "iret" frame.
1352 + * That will either return for real or it will loop back into NMI
1353 + * processing.
1354 + *
1355 + * The "outermost" frame is copied to the "iret" frame on each
1356 + * iteration of the loop, so each iteration starts with the "iret"
1357 + * frame pointing to the final return target.
1358 + */ 1359 + 1360 + /* 1361 + * Determine whether we're a nested NMI. 1362 + * 1363 + * If we interrupted kernel code between repeat_nmi and 1364 + * end_repeat_nmi, then we are a nested NMI. We must not 1365 + * modify the "iret" frame because it's being written by 1366 + * the outer NMI. That's okay; the outer NMI handler is 1367 + * about to call do_nmi anyway, so we can just 1368 + * resume the outer NMI. 1369 + */ 1370 + 1371 + movq $repeat_nmi, %rdx 1372 + cmpq 8(%rsp), %rdx 1373 + ja 1f 1374 + movq $end_repeat_nmi, %rdx 1375 + cmpq 8(%rsp), %rdx 1376 + ja nested_nmi_out 1377 + 1: 1378 + 1379 + /* 1380 + * Now check "NMI executing". If it's set, then we're nested. 1381 + * This will not detect if we interrupted an outer NMI just 1382 + * before IRET. 1269 1383 */ 1270 1384 cmpl $1, -8(%rsp) 1271 1385 je nested_nmi 1272 1386 1273 1387 /* 1274 - * Now test if the previous stack was an NMI stack. 1275 - * We need the double check. We check the NMI stack to satisfy the 1276 - * race when the first NMI clears the variable before returning. 1277 - * We check the variable because the first NMI could be in a 1278 - * breakpoint routine using a breakpoint stack. 1388 + * Now test if the previous stack was an NMI stack. This covers 1389 + * the case where we interrupt an outer NMI after it clears 1390 + * "NMI executing" but before IRET. We need to be careful, though: 1391 + * there is one case in which RSP could point to the NMI stack 1392 + * despite there being no NMI active: naughty userspace controls 1393 + * RSP at the very beginning of the SYSCALL targets. We can 1394 + * pull a fast one on naughty userspace, though: we program 1395 + * SYSCALL to mask DF, so userspace cannot cause DF to be set 1396 + * if it controls the kernel's RSP. We set DF before we clear 1397 + * "NMI executing". 
1279 1398 */ 1280 1399 lea 6*8(%rsp), %rdx 1281 1400 /* Compare the NMI stack (rdx) with the stack we came from (4*8(%rsp)) */ ··· 1407 1286 cmpq %rdx, 4*8(%rsp) 1408 1287 /* If it is below the NMI stack, it is a normal NMI */ 1409 1288 jb first_nmi 1410 - /* Ah, it is within the NMI stack, treat it as nested */ 1289 + 1290 + /* Ah, it is within the NMI stack. */ 1291 + 1292 + testb $(X86_EFLAGS_DF >> 8), (3*8 + 1)(%rsp) 1293 + jz first_nmi /* RSP was user controlled. */ 1294 + 1295 + /* This is a nested NMI. */ 1411 1296 1412 1297 nested_nmi: 1413 1298 /* 1414 - * Do nothing if we interrupted the fixup in repeat_nmi. 1415 - * It's about to repeat the NMI handler, so we are fine 1416 - * with ignoring this one. 1299 + * Modify the "iret" frame to point to repeat_nmi, forcing another 1300 + * iteration of NMI handling. 1417 1301 */ 1418 - movq $repeat_nmi, %rdx 1419 - cmpq 8(%rsp), %rdx 1420 - ja 1f 1421 - movq $end_repeat_nmi, %rdx 1422 - cmpq 8(%rsp), %rdx 1423 - ja nested_nmi_out 1424 - 1425 - 1: 1426 - /* Set up the interrupted NMIs stack to jump to repeat_nmi */ 1427 - leaq -1*8(%rsp), %rdx 1428 - movq %rdx, %rsp 1302 + subq $8, %rsp 1429 1303 leaq -10*8(%rsp), %rdx 1430 1304 pushq $__KERNEL_DS 1431 1305 pushq %rdx ··· 1434 1318 nested_nmi_out: 1435 1319 popq %rdx 1436 1320 1437 - /* No need to check faults here */ 1321 + /* We are returning to kernel mode, so this cannot result in a fault. */ 1438 1322 INTERRUPT_RETURN 1439 1323 1440 1324 first_nmi: 1441 - /* 1442 - * Because nested NMIs will use the pushed location that we 1443 - * stored in rdx, we must keep that space available. 
1444 - * Here's what our stack frame will look like: 1445 - * +-------------------------+ 1446 - * | original SS | 1447 - * | original Return RSP | 1448 - * | original RFLAGS | 1449 - * | original CS | 1450 - * | original RIP | 1451 - * +-------------------------+ 1452 - * | temp storage for rdx | 1453 - * +-------------------------+ 1454 - * | NMI executing variable | 1455 - * +-------------------------+ 1456 - * | copied SS | 1457 - * | copied Return RSP | 1458 - * | copied RFLAGS | 1459 - * | copied CS | 1460 - * | copied RIP | 1461 - * +-------------------------+ 1462 - * | Saved SS | 1463 - * | Saved Return RSP | 1464 - * | Saved RFLAGS | 1465 - * | Saved CS | 1466 - * | Saved RIP | 1467 - * +-------------------------+ 1468 - * | pt_regs | 1469 - * +-------------------------+ 1470 - * 1471 - * The saved stack frame is used to fix up the copied stack frame 1472 - * that a nested NMI may change to make the interrupted NMI iret jump 1473 - * to the repeat_nmi. The original stack frame and the temp storage 1474 - * is also used by nested NMIs and can not be trusted on exit. 1475 - */ 1476 - /* Do not pop rdx, nested NMIs will corrupt that part of the stack */ 1325 + /* Restore rdx. */ 1477 1326 movq (%rsp), %rdx 1478 1327 1479 - /* Set the NMI executing variable on the stack. */ 1480 - pushq $1 1328 + /* Make room for "NMI executing". */ 1329 + pushq $0 1481 1330 1482 - /* Leave room for the "copied" frame */ 1331 + /* Leave room for the "iret" frame */ 1483 1332 subq $(5*8), %rsp 1484 1333 1485 - /* Copy the stack frame to the Saved frame */ 1334 + /* Copy the "original" frame to the "outermost" frame */ 1486 1335 .rept 5 1487 1336 pushq 11*8(%rsp) 1488 1337 .endr 1489 1338 1490 1339 /* Everything up to here is safe from nested NMIs */ 1491 1340 1341 + #ifdef CONFIG_DEBUG_ENTRY 1342 + /* 1343 + * For ease of testing, unmask NMIs right away. Disabled by 1344 + * default because IRET is very expensive. 
1345 + */ 1346 + pushq $0 /* SS */ 1347 + pushq %rsp /* RSP (minus 8 because of the previous push) */ 1348 + addq $8, (%rsp) /* Fix up RSP */ 1349 + pushfq /* RFLAGS */ 1350 + pushq $__KERNEL_CS /* CS */ 1351 + pushq $1f /* RIP */ 1352 + INTERRUPT_RETURN /* continues at repeat_nmi below */ 1353 + 1: 1354 + #endif 1355 + 1356 + repeat_nmi: 1492 1357 /* 1493 1358 * If there was a nested NMI, the first NMI's iret will return 1494 1359 * here. But NMIs are still enabled and we can take another ··· 1478 1381 * it will just return, as we are about to repeat an NMI anyway. 1479 1382 * This makes it safe to copy to the stack frame that a nested 1480 1383 * NMI will update. 1384 + * 1385 + * RSP is pointing to "outermost RIP". gsbase is unknown, but, if 1386 + * we're repeating an NMI, gsbase has the same value that it had on 1387 + * the first iteration. paranoid_entry will load the kernel 1388 + * gsbase if needed before we call do_nmi. "NMI executing" 1389 + * is zero. 1481 1390 */ 1482 - repeat_nmi: 1483 - /* 1484 - * Update the stack variable to say we are still in NMI (the update 1485 - * is benign for the non-repeat case, where 1 was pushed just above 1486 - * to this very stack slot). 1487 - */ 1488 - movq $1, 10*8(%rsp) 1391 + movq $1, 10*8(%rsp) /* Set "NMI executing". */ 1489 1392 1490 - /* Make another copy, this one may be modified by nested NMIs */ 1393 + /* 1394 + * Copy the "outermost" frame to the "iret" frame. NMIs that nest 1395 + * here must not modify the "iret" frame while we're writing to 1396 + * it or it will end up containing garbage. 1397 + */ 1491 1398 addq $(10*8), %rsp 1492 1399 .rept 5 1493 1400 pushq -6*8(%rsp) ··· 1500 1399 end_repeat_nmi: 1501 1400 1502 1401 /* 1503 - * Everything below this point can be preempted by a nested 1504 - * NMI if the first NMI took an exception and reset our iret stack 1505 - * so that we repeat another NMI. 1402 + * Everything below this point can be preempted by a nested NMI. 
1403 + * If this happens, then the inner NMI will change the "iret" 1404 + * frame to point back to repeat_nmi. 1506 1405 */ 1507 1406 pushq $-1 /* ORIG_RAX: no syscall to restart */ 1508 1407 ALLOC_PT_GPREGS_ON_STACK ··· 1516 1415 */ 1517 1416 call paranoid_entry 1518 1417 1519 - /* 1520 - * Save off the CR2 register. If we take a page fault in the NMI then 1521 - * it could corrupt the CR2 value. If the NMI preempts a page fault 1522 - * handler before it was able to read the CR2 register, and then the 1523 - * NMI itself takes a page fault, the page fault that was preempted 1524 - * will read the information from the NMI page fault and not the 1525 - * origin fault. Save it off and restore it if it changes. 1526 - * Use the r12 callee-saved register. 1527 - */ 1528 - movq %cr2, %r12 1529 - 1530 1418 /* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */ 1531 1419 movq %rsp, %rdi 1532 1420 movq $-1, %rsi 1533 1421 call do_nmi 1534 1422 1535 - /* Did the NMI take a page fault? Restore cr2 if it did */ 1536 - movq %cr2, %rcx 1537 - cmpq %rcx, %r12 1538 - je 1f 1539 - movq %r12, %cr2 1540 - 1: 1541 1423 testl %ebx, %ebx /* swapgs needed? */ 1542 1424 jnz nmi_restore 1543 1425 nmi_swapgs: ··· 1528 1444 nmi_restore: 1529 1445 RESTORE_EXTRA_REGS 1530 1446 RESTORE_C_REGS 1531 - /* Pop the extra iret frame at once */ 1447 + 1448 + /* Point RSP at the "iret" frame. */ 1532 1449 REMOVE_PT_GPREGS_FROM_STACK 6*8 1533 1450 1534 - /* Clear the NMI executing stack variable */ 1535 - movq $0, 5*8(%rsp) 1451 + /* 1452 + * Clear "NMI executing". Set DF first so that we can easily 1453 + * distinguish the remaining code between here and IRET from 1454 + * the SYSCALL entry and exit paths. On a native kernel, we 1455 + * could just inspect RIP, but, on paravirt kernels, 1456 + * INTERRUPT_RETURN can translate into a jump into a 1457 + * hypercall page. 
1458 + */ 1459 + std 1460 + movq $0, 5*8(%rsp) /* clear "NMI executing" */ 1461 + 1462 + /* 1463 + * INTERRUPT_RETURN reads the "iret" frame and exits the NMI 1464 + * stack in a single instruction. We are returning to kernel 1465 + * mode, so this cannot result in a fault. 1466 + */ 1536 1467 INTERRUPT_RETURN 1537 1468 END(nmi) 1538 1469
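The nested-NMI handling above is subtle enough that a userspace model helps. The sketch below is plain C, not kernel code, and every name in it is a stand-in: it simulates the "outermost"/"iret" frame pair, where an NMI that nests while "NMI executing" is set may only rewrite the "iret" frame to point at repeat_nmi, forcing one more trip through the handler before the final IRET returns to the original target.

```c
#include <assert.h>
#include <string.h>

/* Userspace model (not kernel code) of the x86_64 nested-NMI replay
 * loop. The "outermost" frame holds the real return target and is
 * never modified once set up; the "iret" frame is what IRET actually
 * consumes and is the only thing a nested NMI may touch. */

enum { RIP, CS, RFLAGS, RSP, SS, FRAME_WORDS };

#define REPEAT_NMI  0x1000UL  /* stand-in address of repeat_nmi        */
#define REAL_RETURN 0x2000UL  /* stand-in interrupted RIP              */

static unsigned long outermost[FRAME_WORDS];
static unsigned long iret_frame[FRAME_WORDS];
static int nmi_executing;

/* A nested NMI may only rewrite the "iret" frame, and only while the
 * outer NMI is marked as executing. */
static void nested_nmi(void)
{
	if (nmi_executing)
		iret_frame[RIP] = REPEAT_NMI;
}

/* Run the outermost NMI; 'pending' nested NMIs arrive one per
 * iteration while "NMI executing" is set. Returns how many times the
 * do_nmi stand-in ran. */
static int run_outermost_nmi(int pending)
{
	int iterations = 0;

	outermost[RIP] = REAL_RETURN;
	do {
		/* repeat_nmi: copy "outermost" -> "iret", set the flag */
		memcpy(iret_frame, outermost, sizeof(iret_frame));
		nmi_executing = 1;

		iterations++;			/* do_nmi() would run here */

		if (pending > 0) {		/* a nested NMI hits here */
			nested_nmi();
			pending--;
		}
		nmi_executing = 0;
		/* IRET through the (possibly rewritten) "iret" frame */
	} while (iret_frame[RIP] == REPEAT_NMI);

	return iterations;
}
```

Each iteration re-copies the "outermost" frame before handling, so a nested rewrite is always undone on the next pass; once no nested NMI arrives, the "iret" frame still points at the real return target and the loop exits.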
+1
arch/x86/include/asm/Kbuild
··· 9 9 generic-y += dma-contiguous.h 10 10 generic-y += early_ioremap.h 11 11 generic-y += mcs_spinlock.h 12 + generic-y += mm-arch-hooks.h
+1 -1
arch/x86/include/asm/espfix.h
··· 9 9 DECLARE_PER_CPU_READ_MOSTLY(unsigned long, espfix_waddr); 10 10 11 11 extern void init_espfix_bsp(void); 12 - extern void init_espfix_ap(void); 12 + extern void init_espfix_ap(int cpu); 13 13 14 14 #endif /* CONFIG_X86_64 */ 15 15
+38 -34
arch/x86/include/asm/fpu/types.h
··· 189 189 struct fxregs_state fxsave; 190 190 struct swregs_state soft; 191 191 struct xregs_state xsave; 192 + u8 __padding[PAGE_SIZE]; 192 193 }; 193 194 194 195 /* ··· 198 197 * state fields: 199 198 */ 200 199 struct fpu { 201 - /* 202 - * @state: 203 - * 204 - * In-memory copy of all FPU registers that we save/restore 205 - * over context switches. If the task is using the FPU then 206 - * the registers in the FPU are more recent than this state 207 - * copy. If the task context-switches away then they get 208 - * saved here and represent the FPU state. 209 - * 210 - * After context switches there may be a (short) time period 211 - * during which the in-FPU hardware registers are unchanged 212 - * and still perfectly match this state, if the tasks 213 - * scheduled afterwards are not using the FPU. 214 - * 215 - * This is the 'lazy restore' window of optimization, which 216 - * we track though 'fpu_fpregs_owner_ctx' and 'fpu->last_cpu'. 217 - * 218 - * We detect whether a subsequent task uses the FPU via setting 219 - * CR0::TS to 1, which causes any FPU use to raise a #NM fault. 220 - * 221 - * During this window, if the task gets scheduled again, we 222 - * might be able to skip having to do a restore from this 223 - * memory buffer to the hardware registers - at the cost of 224 - * incurring the overhead of #NM fault traps. 225 - * 226 - * Note that on modern CPUs that support the XSAVEOPT (or other 227 - * optimized XSAVE instructions), we don't use #NM traps anymore, 228 - * as the hardware can track whether FPU registers need saving 229 - * or not. On such CPUs we activate the non-lazy ('eagerfpu') 230 - * logic, which unconditionally saves/restores all FPU state 231 - * across context switches. (if FPU state exists.) 
232 - */ 233 - union fpregs_state state; 234 - 235 200 /* 236 201 * @last_cpu: 237 202 * ··· 255 288 * deal with bursty apps that only use the FPU for a short time: 256 289 */ 257 290 unsigned char counter; 291 + /* 292 + * @state: 293 + * 294 + * In-memory copy of all FPU registers that we save/restore 295 + * over context switches. If the task is using the FPU then 296 + * the registers in the FPU are more recent than this state 297 + * copy. If the task context-switches away then they get 298 + * saved here and represent the FPU state. 299 + * 300 + * After context switches there may be a (short) time period 301 + * during which the in-FPU hardware registers are unchanged 302 + * and still perfectly match this state, if the tasks 303 + * scheduled afterwards are not using the FPU. 304 + * 305 + * This is the 'lazy restore' window of optimization, which 306 + * we track though 'fpu_fpregs_owner_ctx' and 'fpu->last_cpu'. 307 + * 308 + * We detect whether a subsequent task uses the FPU via setting 309 + * CR0::TS to 1, which causes any FPU use to raise a #NM fault. 310 + * 311 + * During this window, if the task gets scheduled again, we 312 + * might be able to skip having to do a restore from this 313 + * memory buffer to the hardware registers - at the cost of 314 + * incurring the overhead of #NM fault traps. 315 + * 316 + * Note that on modern CPUs that support the XSAVEOPT (or other 317 + * optimized XSAVE instructions), we don't use #NM traps anymore, 318 + * as the hardware can track whether FPU registers need saving 319 + * or not. On such CPUs we activate the non-lazy ('eagerfpu') 320 + * logic, which unconditionally saves/restores all FPU state 321 + * across context switches. (if FPU state exists.) 322 + */ 323 + union fpregs_state state; 324 + /* 325 + * WARNING: 'state' is dynamically-sized. Do not put 326 + * anything after it here. 327 + */ 258 328 }; 259 329 260 330 #endif /* _ASM_X86_FPU_H */
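The reason 'state' moves to the end of 'struct fpu' (and, in processor.h, 'fpu' to the end of 'thread_struct') is that the XSAVE area can be larger at runtime than the declared union, so the overhang must spill into deliberately allocated tail space rather than into sibling fields. A minimal userspace illustration of the invariant and the sizing arithmetic, assuming only standard C11; 'struct fpu_like' and 'fpu_alloc_size' are made-up names, and 'offsetofend' mirrors the kernel helper of the same name:

```c
#include <assert.h>
#include <stddef.h>

/* Same definition as the kernel's offsetofend(): first byte past
 * MEMBER inside TYPE. */
#define offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

struct fpu_like {
	unsigned int  last_cpu;
	unsigned char counter;
	unsigned long state[1];	/* dynamically sized: must be last */
};

/* Compile-time check in the spirit of CHECK_MEMBER_AT_END_OF():
 * 'state' ends exactly where the struct ends, so nothing can sit in
 * the region the oversized runtime state will overwrite. */
_Static_assert(sizeof(struct fpu_like) ==
	       offsetofend(struct fpu_like, state),
	       "state must be the last member");

/* Allocation size once the real runtime state size is known:
 * subtract the static placeholder, add the dynamic size. */
static size_t fpu_alloc_size(size_t xstate_size)
{
	return sizeof(struct fpu_like)
	       - sizeof(((struct fpu_like *)0)->state)
	       + xstate_size;
}
```

This is exactly the arithmetic fpu__init_task_struct_size() in fpu/init.c (further down in this commit) performs on task_struct.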
-27
arch/x86/include/asm/intel_pmc_ipc.h
··· 25 25 26 26 #if IS_ENABLED(CONFIG_INTEL_PMC_IPC) 27 27 28 - /* 29 - * intel_pmc_ipc_simple_command 30 - * @cmd: command 31 - * @sub: sub type 32 - */ 33 28 int intel_pmc_ipc_simple_command(int cmd, int sub); 34 - 35 - /* 36 - * intel_pmc_ipc_raw_cmd 37 - * @cmd: command 38 - * @sub: sub type 39 - * @in: input data 40 - * @inlen: input length in bytes 41 - * @out: output data 42 - * @outlen: output length in dwords 43 - * @sptr: data writing to SPTR register 44 - * @dptr: data writing to DPTR register 45 - */ 46 29 int intel_pmc_ipc_raw_cmd(u32 cmd, u32 sub, u8 *in, u32 inlen, 47 30 u32 *out, u32 outlen, u32 dptr, u32 sptr); 48 - 49 - /* 50 - * intel_pmc_ipc_command 51 - * @cmd: command 52 - * @sub: sub type 53 - * @in: input data 54 - * @inlen: input length in bytes 55 - * @out: output data 56 - * @outlen: output length in dwords 57 - */ 58 31 int intel_pmc_ipc_command(u32 cmd, u32 sub, u8 *in, u32 inlen, 59 32 u32 *out, u32 outlen); 60 33
+2 -6
arch/x86/include/asm/kasan.h
··· 14 14 15 15 #ifndef __ASSEMBLY__ 16 16 17 - extern pte_t kasan_zero_pte[]; 18 - extern pte_t kasan_zero_pmd[]; 19 - extern pte_t kasan_zero_pud[]; 20 - 21 17 #ifdef CONFIG_KASAN 22 - void __init kasan_map_early_shadow(pgd_t *pgd); 18 + void __init kasan_early_init(void); 23 19 void __init kasan_init(void); 24 20 #else 25 - static inline void kasan_map_early_shadow(pgd_t *pgd) { } 21 + static inline void kasan_early_init(void) { } 26 22 static inline void kasan_init(void) { } 27 23 #endif 28 24
+2
arch/x86/include/asm/kvm_host.h
··· 604 604 bool iommu_noncoherent; 605 605 #define __KVM_HAVE_ARCH_NONCOHERENT_DMA 606 606 atomic_t noncoherent_dma_count; 607 + #define __KVM_HAVE_ARCH_ASSIGNED_DEVICE 608 + atomic_t assigned_device_count; 607 609 struct kvm_pic *vpic; 608 610 struct kvm_ioapic *vioapic; 609 611 struct kvm_pit *vpit;
-15
arch/x86/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_X86_MM_ARCH_HOOKS_H 13 - #define _ASM_X86_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_X86_MM_ARCH_HOOKS_H */
+1 -1
arch/x86/include/asm/mmu_context.h
··· 23 23 24 24 static inline void load_mm_cr4(struct mm_struct *mm) 25 25 { 26 - if (static_key_true(&rdpmc_always_available) || 26 + if (static_key_false(&rdpmc_always_available) || 27 27 atomic_read(&mm->context.perf_rdpmc_allowed)) 28 28 cr4_set_bits(X86_CR4_PCE); 29 29 else
+7 -3
arch/x86/include/asm/processor.h
··· 390 390 #endif 391 391 unsigned long gs; 392 392 393 - /* Floating point and extended processor state */ 394 - struct fpu fpu; 395 - 396 393 /* Save middle states of ptrace breakpoints */ 397 394 struct perf_event *ptrace_bps[HBP_NUM]; 398 395 /* Debug status used for traps, single steps, etc... */ ··· 415 418 unsigned long iopl; 416 419 /* Max allowed port in the bitmap, in bytes: */ 417 420 unsigned io_bitmap_max; 421 + 422 + /* Floating point and extended processor state */ 423 + struct fpu fpu; 424 + /* 425 + * WARNING: 'fpu' is dynamically-sized. It *MUST* be at 426 + * the end. 427 + */ 418 428 }; 419 429 420 430 /*
+2
arch/x86/include/uapi/asm/hyperv.h
··· 108 108 #define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE (1 << 4) 109 109 /* Support for a virtual guest idle state is available */ 110 110 #define HV_X64_GUEST_IDLE_STATE_AVAILABLE (1 << 5) 111 + /* Guest crash data handler available */ 112 + #define HV_X64_GUEST_CRASH_MSR_AVAILABLE (1 << 10) 111 113 112 114 /* 113 115 * Implementation recommendations. Indicates which behaviors the hypervisor
+2 -8
arch/x86/kernel/apic/vector.c
··· 409 409 int irq, vector; 410 410 struct apic_chip_data *data; 411 411 412 - /* 413 - * vector_lock will make sure that we don't run into irq vector 414 - * assignments that might be happening on another cpu in parallel, 415 - * while we setup our initial vector to irq mappings. 416 - */ 417 - raw_spin_lock(&vector_lock); 418 412 /* Mark the inuse vectors */ 419 413 for_each_active_irq(irq) { 420 414 data = apic_chip_data(irq_get_irq_data(irq)); ··· 430 436 if (!cpumask_test_cpu(cpu, data->domain)) 431 437 per_cpu(vector_irq, cpu)[vector] = VECTOR_UNDEFINED; 432 438 } 433 - raw_spin_unlock(&vector_lock); 434 439 } 435 440 436 441 /* 437 - * Setup the vector to irq mappings. 442 + * Setup the vector to irq mappings. Must be called with vector_lock held. 438 443 */ 439 444 void setup_vector_irq(int cpu) 440 445 { 441 446 int irq; 442 447 448 + lockdep_assert_held(&vector_lock); 443 449 /* 444 450 * On most of the platforms, legacy PIC delivers the interrupts on the 445 451 * boot cpu. But there are certain platforms where PIC interrupts are
+3 -1
arch/x86/kernel/early_printk.c
··· 175 175 } 176 176 177 177 if (*s) { 178 - if (kstrtoul(s, 0, &baud) < 0 || baud == 0) 178 + baud = simple_strtoull(s, &e, 0); 179 + 180 + if (baud == 0 || s == e) 179 181 baud = DEFAULT_BAUD; 180 182 } 181 183
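The kstrtoul() → simple_strtoull() switch matters because early console options such as "115200n8" carry trailing characters after the number: kstrtoul() rejects the whole string if anything follows the digits, while simple_strtoull() parses the leading digits and reports where it stopped, exactly like C's strtoull(). A userspace sketch of the fixed parse (the DEFAULT_BAUD value here is a stand-in):

```c
#include <assert.h>
#include <stdlib.h>

#define DEFAULT_BAUD 9600UL	/* stand-in for the kernel constant */

/* Mirror of the fixed early_printk logic: take the leading number,
 * fall back to the default when no digits were consumed or an
 * explicit 0 was given. strtoull() behaves like the kernel's
 * simple_strtoull() for this purpose. */
static unsigned long parse_baud(const char *s)
{
	char *e;
	unsigned long baud = strtoull(s, &e, 0);

	if (baud == 0 || s == e)
		baud = DEFAULT_BAUD;
	return baud;
}
```

With this, "115200n8" yields 115200 (parsing stops at 'n'), whereas the old kstrtoul()-based code would have fallen back to the default for any such suffixed string.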
+16 -12
arch/x86/kernel/espfix_64.c
··· 131 131 init_espfix_random(); 132 132 133 133 /* The rest is the same as for any other processor */ 134 - init_espfix_ap(); 134 + init_espfix_ap(0); 135 135 } 136 136 137 - void init_espfix_ap(void) 137 + void init_espfix_ap(int cpu) 138 138 { 139 - unsigned int cpu, page; 139 + unsigned int page; 140 140 unsigned long addr; 141 141 pud_t pud, *pud_p; 142 142 pmd_t pmd, *pmd_p; 143 143 pte_t pte, *pte_p; 144 - int n; 144 + int n, node; 145 145 void *stack_page; 146 146 pteval_t ptemask; 147 147 148 148 /* We only have to do this once... */ 149 - if (likely(this_cpu_read(espfix_stack))) 149 + if (likely(per_cpu(espfix_stack, cpu))) 150 150 return; /* Already initialized */ 151 151 152 - cpu = smp_processor_id(); 153 152 addr = espfix_base_addr(cpu); 154 153 page = cpu/ESPFIX_STACKS_PER_PAGE; 155 154 ··· 164 165 if (stack_page) 165 166 goto unlock_done; 166 167 168 + node = cpu_to_node(cpu); 167 169 ptemask = __supported_pte_mask; 168 170 169 171 pud_p = &espfix_pud_page[pud_index(addr)]; 170 172 pud = *pud_p; 171 173 if (!pud_present(pud)) { 172 - pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP); 174 + struct page *page = alloc_pages_node(node, PGALLOC_GFP, 0); 175 + 176 + pmd_p = (pmd_t *)page_address(page); 173 177 pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask)); 174 178 paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); 175 179 for (n = 0; n < ESPFIX_PUD_CLONES; n++) ··· 182 180 pmd_p = pmd_offset(&pud, addr); 183 181 pmd = *pmd_p; 184 182 if (!pmd_present(pmd)) { 185 - pte_p = (pte_t *)__get_free_page(PGALLOC_GFP); 183 + struct page *page = alloc_pages_node(node, PGALLOC_GFP, 0); 184 + 185 + pte_p = (pte_t *)page_address(page); 186 186 pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask)); 187 187 paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT); 188 188 for (n = 0; n < ESPFIX_PMD_CLONES; n++) ··· 192 188 } 193 189 194 190 pte_p = pte_offset_kernel(&pmd, addr); 195 - stack_page = (void *)__get_free_page(GFP_KERNEL); 191 + stack_page = 
page_address(alloc_pages_node(node, GFP_KERNEL, 0)); 196 192 pte = __pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask)); 197 193 for (n = 0; n < ESPFIX_PTE_CLONES; n++) 198 194 set_pte(&pte_p[n*PTE_STRIDE], pte); ··· 203 199 unlock_done: 204 200 mutex_unlock(&espfix_init_mutex); 205 201 done: 206 - this_cpu_write(espfix_stack, addr); 207 - this_cpu_write(espfix_waddr, (unsigned long)stack_page 208 - + (addr & ~PAGE_MASK)); 202 + per_cpu(espfix_stack, cpu) = addr; 203 + per_cpu(espfix_waddr, cpu) = (unsigned long)stack_page 204 + + (addr & ~PAGE_MASK); 209 205 }
+40
arch/x86/kernel/fpu/init.c
··· 4 4 #include <asm/fpu/internal.h> 5 5 #include <asm/tlbflush.h> 6 6 7 + #include <linux/sched.h> 8 + 7 9 /* 8 10 * Initialize the TS bit in CR0 according to the style of context-switches 9 11 * we are using: ··· 137 135 */ 138 136 unsigned int xstate_size; 139 137 EXPORT_SYMBOL_GPL(xstate_size); 138 + 139 + /* Enforce that 'MEMBER' is the last field of 'TYPE': */ 140 + #define CHECK_MEMBER_AT_END_OF(TYPE, MEMBER) \ 141 + BUILD_BUG_ON(sizeof(TYPE) != offsetofend(TYPE, MEMBER)) 142 + 143 + /* 144 + * We append the 'struct fpu' to the task_struct: 145 + */ 146 + static void __init fpu__init_task_struct_size(void) 147 + { 148 + int task_size = sizeof(struct task_struct); 149 + 150 + /* 151 + * Subtract off the static size of the register state. 152 + * It potentially has a bunch of padding. 153 + */ 154 + task_size -= sizeof(((struct task_struct *)0)->thread.fpu.state); 155 + 156 + /* 157 + * Add back the dynamically-calculated register state 158 + * size. 159 + */ 160 + task_size += xstate_size; 161 + 162 + /* 163 + * We dynamically size 'struct fpu', so we require that 164 + * it be at the end of 'thread_struct' and that 165 + * 'thread_struct' be at the end of 'task_struct'. If 166 + * you hit a compile error here, check the structure to 167 + * see if something got added to the end. 168 + */ 169 + CHECK_MEMBER_AT_END_OF(struct fpu, state); 170 + CHECK_MEMBER_AT_END_OF(struct thread_struct, fpu); 171 + CHECK_MEMBER_AT_END_OF(struct task_struct, thread); 172 + 173 + arch_task_struct_size = task_size; 174 + } 140 175 141 176 /* 142 177 * Set up the xstate_size based on the legacy FPU context size. ··· 326 287 fpu__init_system_generic(); 327 288 fpu__init_system_xstate_size_legacy(); 328 289 fpu__init_system_xstate(); 290 + fpu__init_task_struct_size(); 329 291 330 292 fpu__init_system_ctx_switch(); 331 293 }
+4 -6
arch/x86/kernel/head64.c
··· 161 161 /* Kill off the identity-map trampoline */ 162 162 reset_early_page_tables(); 163 163 164 - kasan_map_early_shadow(early_level4_pgt); 165 - 166 - /* clear bss before set_intr_gate with early_idt_handler */ 167 164 clear_bss(); 165 + 166 + clear_page(init_level4_pgt); 167 + 168 + kasan_early_init(); 168 169 169 170 for (i = 0; i < NUM_EXCEPTION_VECTORS; i++) 170 171 set_intr_gate(i, early_idt_handler_array[i]); ··· 178 177 */ 179 178 load_ucode_bsp(); 180 179 181 - clear_page(init_level4_pgt); 182 180 /* set init_level4_pgt kernel high mapping*/ 183 181 init_level4_pgt[511] = early_level4_pgt[511]; 184 - 185 - kasan_map_early_shadow(init_level4_pgt); 186 182 187 183 x86_64_start_reservations(real_mode_data); 188 184 }
-29
arch/x86/kernel/head_64.S
··· 516 516 /* This must match the first entry in level2_kernel_pgt */ 517 517 .quad 0x0000000000000000 518 518 519 - #ifdef CONFIG_KASAN 520 - #define FILL(VAL, COUNT) \ 521 - .rept (COUNT) ; \ 522 - .quad (VAL) ; \ 523 - .endr 524 - 525 - NEXT_PAGE(kasan_zero_pte) 526 - FILL(kasan_zero_page - __START_KERNEL_map + _KERNPG_TABLE, 512) 527 - NEXT_PAGE(kasan_zero_pmd) 528 - FILL(kasan_zero_pte - __START_KERNEL_map + _KERNPG_TABLE, 512) 529 - NEXT_PAGE(kasan_zero_pud) 530 - FILL(kasan_zero_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512) 531 - 532 - #undef FILL 533 - #endif 534 - 535 - 536 519 #include "../../x86/xen/xen-head.S" 537 520 538 521 __PAGE_ALIGNED_BSS 539 522 NEXT_PAGE(empty_zero_page) 540 523 .skip PAGE_SIZE 541 524 542 - #ifdef CONFIG_KASAN 543 - /* 544 - * This page used as early shadow. We don't use empty_zero_page 545 - * at early stages, stack instrumentation could write some garbage 546 - * to this page. 547 - * Latter we reuse it as zero shadow for large ranges of memory 548 - * that allowed to access, but not instrumented by kasan 549 - * (vmalloc/vmemmap ...). 550 - */ 551 - NEXT_PAGE(kasan_zero_page) 552 - .skip PAGE_SIZE 553 - #endif
+18 -2
arch/x86/kernel/irq.c
··· 347 347 if (!desc) 348 348 continue; 349 349 350 + /* 351 + * Protect against concurrent action removal, 352 + * affinity changes etc. 353 + */ 354 + raw_spin_lock(&desc->lock); 350 355 data = irq_desc_get_irq_data(desc); 351 356 cpumask_copy(&affinity_new, data->affinity); 352 357 cpumask_clear_cpu(this_cpu, &affinity_new); 353 358 354 359 /* Do not count inactive or per-cpu irqs. */ 355 - if (!irq_has_action(irq) || irqd_is_per_cpu(data)) 360 + if (!irq_has_action(irq) || irqd_is_per_cpu(data)) { 361 + raw_spin_unlock(&desc->lock); 356 362 continue; 363 + } 357 364 365 + raw_spin_unlock(&desc->lock); 358 366 /* 359 367 * A single irq may be mapped to multiple 360 368 * cpu's vector_irq[] (for example IOAPIC cluster ··· 393 385 * vector. If the vector is marked in the used vectors 394 386 * bitmap or an irq is assigned to it, we don't count 395 387 * it as available. 388 + * 389 + * As this is an inaccurate snapshot anyway, we can do 390 + * this w/o holding vector_lock. 396 391 */ 397 392 for (vector = FIRST_EXTERNAL_VECTOR; 398 393 vector < first_system_vector; vector++) { ··· 497 486 */ 498 487 mdelay(1); 499 488 489 + /* 490 + * We can walk the vector array of this cpu without holding 491 + * vector_lock because the cpu is already marked !online, so 492 + * nothing else will touch it. 493 + */ 500 494 for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) { 501 495 unsigned int irr; 502 496 ··· 513 497 irq = __this_cpu_read(vector_irq[vector]); 514 498 515 499 desc = irq_to_desc(irq); 500 + raw_spin_lock(&desc->lock); 516 501 data = irq_desc_get_irq_data(desc); 517 502 chip = irq_data_get_irq_chip(data); 518 - raw_spin_lock(&desc->lock); 519 503 if (chip->irq_retrigger) { 520 504 chip->irq_retrigger(data); 521 505 __this_cpu_write(vector_irq[vector], VECTOR_RETRIGGERED);
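Taking desc->lock around the irq_data reads ensures the affinity snapshot cannot race with a concurrent action removal or affinity change. The same pattern in a self-contained userspace sketch (not kernel code: a pthread mutex stands in for the raw spinlock, and a plain bitmask stands in for the cpumask):

```c
#include <assert.h>
#include <pthread.h>

struct irq_desc_like {
	pthread_mutex_t lock;
	unsigned long   affinity;	/* toy cpumask: bit N = CPU N */
};

/* Snapshot the affinity under the descriptor lock, then clear the
 * outgoing CPU's bit -- mirroring the cpumask_copy() +
 * cpumask_clear_cpu() sequence in check_irq_vectors_for_cpu_disable(). */
static unsigned long snapshot_affinity(struct irq_desc_like *desc,
				       int this_cpu)
{
	unsigned long mask;

	pthread_mutex_lock(&desc->lock);
	mask = desc->affinity;		/* consistent snapshot */
	pthread_mutex_unlock(&desc->lock);

	return mask & ~(1UL << this_cpu);
}

/* Helper so the behavior is easy to exercise in one call. */
static unsigned long demo(unsigned long affinity, int this_cpu)
{
	struct irq_desc_like d = { PTHREAD_MUTEX_INITIALIZER, affinity };

	return snapshot_affinity(&d, this_cpu);
}
```

The point of the kernel fix is the same as here: the read of the mask happens entirely inside the critical section, so a writer holding the lock can never be observed half-way through an update.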
+52 -71
arch/x86/kernel/nmi.c
··· 408 408 NOKPROBE_SYMBOL(default_do_nmi); 409 409 410 410 /* 411 - * NMIs can hit breakpoints which will cause it to lose its 412 - * NMI context with the CPU when the breakpoint does an iret. 413 - */ 414 - #ifdef CONFIG_X86_32 415 - /* 416 - * For i386, NMIs use the same stack as the kernel, and we can 417 - * add a workaround to the iret problem in C (preventing nested 418 - * NMIs if an NMI takes a trap). Simply have 3 states the NMI 419 - * can be in: 411 + * NMIs can page fault or hit breakpoints, which will cause them to lose 412 + * their NMI context with the CPU when the breakpoint or page fault does an IRET. 413 + * 414 + * As a result, NMIs can nest if NMIs get unmasked due to an IRET during 415 + * NMI processing. On x86_64, the asm glue protects us from nested NMIs 416 + * if the outer NMI came from kernel mode, but we can still nest if the 417 + * outer NMI came from user mode. 418 + * 419 + * To handle these nested NMIs, we have three states: 420 420 * 421 421 * 1) not running 422 422 * 2) executing ··· 430 430 * (Note, the latch is binary, thus multiple NMIs triggering, 431 431 * when one is running, are ignored. Only one NMI is restarted.) 432 432 * 433 - * If an NMI hits a breakpoint that executes an iret, another 434 - * NMI can preempt it. We do not want to allow this new NMI 435 - * to run, but we want to execute it when the first one finishes. 436 - * We set the state to "latched", and the exit of the first NMI will 437 - * perform a dec_return, if the result is zero (NOT_RUNNING), then 438 - * it will simply exit the NMI handler. If not, the dec_return 439 - * would have set the state to NMI_EXECUTING (what we want it to 440 - * be when we are running). In this case, we simply jump back 441 - * to rerun the NMI handler again, and restart the 'latched' NMI. 433 + * If an NMI executes an iret, another NMI can preempt it. We do not 434 + * want to allow this new NMI to run, but we want to execute it when the 435 + * first one finishes. 
We set the state to "latched", and the exit of 436 + * the first NMI will perform a dec_return, if the result is zero 437 + * (NOT_RUNNING), then it will simply exit the NMI handler. If not, the 438 + * dec_return would have set the state to NMI_EXECUTING (what we want it 439 + * to be when we are running). In this case, we simply jump back to 440 + * rerun the NMI handler again, and restart the 'latched' NMI. 442 441 * 443 442 * No trap (breakpoint or page fault) should be hit before nmi_restart, 444 443 * thus there is no race between the first check of state for NOT_RUNNING ··· 460 461 static DEFINE_PER_CPU(enum nmi_states, nmi_state); 461 462 static DEFINE_PER_CPU(unsigned long, nmi_cr2); 462 463 463 - #define nmi_nesting_preprocess(regs) \ 464 - do { \ 465 - if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) { \ 466 - this_cpu_write(nmi_state, NMI_LATCHED); \ 467 - return; \ 468 - } \ 469 - this_cpu_write(nmi_state, NMI_EXECUTING); \ 470 - this_cpu_write(nmi_cr2, read_cr2()); \ 471 - } while (0); \ 472 - nmi_restart: 473 - 474 - #define nmi_nesting_postprocess() \ 475 - do { \ 476 - if (unlikely(this_cpu_read(nmi_cr2) != read_cr2())) \ 477 - write_cr2(this_cpu_read(nmi_cr2)); \ 478 - if (this_cpu_dec_return(nmi_state)) \ 479 - goto nmi_restart; \ 480 - } while (0) 481 - #else /* x86_64 */ 464 + #ifdef CONFIG_X86_64 482 465 /* 483 - * In x86_64 things are a bit more difficult. This has the same problem 484 - * where an NMI hitting a breakpoint that calls iret will remove the 485 - * NMI context, allowing a nested NMI to enter. What makes this more 486 - * difficult is that both NMIs and breakpoints have their own stack. 487 - * When a new NMI or breakpoint is executed, the stack is set to a fixed 488 - * point. If an NMI is nested, it will have its stack set at that same 489 - * fixed address that the first NMI had, and will start corrupting the 490 - * stack. This is handled in entry_64.S, but the same problem exists with 491 - * the breakpoint stack. 
466 + * In x86_64, we need to handle breakpoint -> NMI -> breakpoint. Without 467 + * some care, the inner breakpoint will clobber the outer breakpoint's 468 + * stack. 492 469 * 493 - * If a breakpoint is being processed, and the debug stack is being used, 494 - * if an NMI comes in and also hits a breakpoint, the stack pointer 495 - * will be set to the same fixed address as the breakpoint that was 496 - * interrupted, causing that stack to be corrupted. To handle this case, 497 - * check if the stack that was interrupted is the debug stack, and if 498 - * so, change the IDT so that new breakpoints will use the current stack 499 - * and not switch to the fixed address. On return of the NMI, switch back 500 - * to the original IDT. 470 + * If a breakpoint is being processed, and the debug stack is being 471 + * used, if an NMI comes in and also hits a breakpoint, the stack 472 + * pointer will be set to the same fixed address as the breakpoint that 473 + * was interrupted, causing that stack to be corrupted. To handle this 474 + * case, check if the stack that was interrupted is the debug stack, and 475 + * if so, change the IDT so that new breakpoints will use the current 476 + * stack and not switch to the fixed address. On return of the NMI, 477 + * switch back to the original IDT. 501 478 */ 502 479 static DEFINE_PER_CPU(int, update_debug_stack); 480 + #endif 503 481 504 - static inline void nmi_nesting_preprocess(struct pt_regs *regs) 482 + dotraplinkage notrace void 483 + do_nmi(struct pt_regs *regs, long error_code) 505 484 { 485 + if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) { 486 + this_cpu_write(nmi_state, NMI_LATCHED); 487 + return; 488 + } 489 + this_cpu_write(nmi_state, NMI_EXECUTING); 490 + this_cpu_write(nmi_cr2, read_cr2()); 491 + nmi_restart: 492 + 493 + #ifdef CONFIG_X86_64 506 494 /* 507 495 * If we interrupted a breakpoint, it is possible that 508 496 * the nmi handler will have breakpoints too. We need to
··· 500 514 debug_stack_set_zero(); 501 515 this_cpu_write(update_debug_stack, 1); 502 516 } 503 - } 504 - 505 - static inline void nmi_nesting_postprocess(void) 506 - { 507 - if (unlikely(this_cpu_read(update_debug_stack))) { 508 - debug_stack_reset(); 509 - this_cpu_write(update_debug_stack, 0); 510 - } 511 - } 512 517 #endif 513 - 514 - dotraplinkage notrace void 515 - do_nmi(struct pt_regs *regs, long error_code) 516 - { 517 - nmi_nesting_preprocess(regs); 518 518 519 519 nmi_enter(); 520 520 ··· 511 539 512 540 nmi_exit(); 513 541 514 - /* On i386, may loop back to preprocess */ 515 - nmi_nesting_postprocess(); 542 + #ifdef CONFIG_X86_64 543 + if (unlikely(this_cpu_read(update_debug_stack))) { 544 + debug_stack_reset(); 545 + this_cpu_write(update_debug_stack, 0); 546 + } 547 + #endif 548 + 549 + if (unlikely(this_cpu_read(nmi_cr2) != read_cr2())) 550 + write_cr2(this_cpu_read(nmi_cr2)); 551 + if (this_cpu_dec_return(nmi_state)) 552 + goto nmi_restart; 516 553 } 517 554 NOKPROBE_SYMBOL(do_nmi); 518 555
+1 -1
arch/x86/kernel/process.c
··· 81 81 */ 82 82 int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) 83 83 { 84 - *dst = *src; 84 + memcpy(dst, src, arch_task_struct_size); 85 85 86 86 return fpu__copy(&dst->thread.fpu, &src->thread.fpu); 87 87 }
+23 -15
arch/x86/kernel/smpboot.c
··· 171 171 apic_ap_setup(); 172 172 173 173 /* 174 - * Need to setup vector mappings before we enable interrupts. 175 - */ 176 - setup_vector_irq(smp_processor_id()); 177 - 178 - /* 179 174 * Save our processor parameters. Note: this information 180 175 * is needed for clock calibration. 181 176 */ ··· 234 239 check_tsc_sync_target(); 235 240 236 241 /* 237 - * Enable the espfix hack for this CPU 238 - */ 239 - #ifdef CONFIG_X86_ESPFIX64 240 - init_espfix_ap(); 241 - #endif 242 - 243 - /* 244 - * We need to hold vector_lock so there the set of online cpus 245 - * does not change while we are assigning vectors to cpus. Holding 246 - * this lock ensures we don't half assign or remove an irq from a cpu. 242 + * Lock vector_lock and initialize the vectors on this cpu 243 + * before setting the cpu online. We must set it online with 244 + * vector_lock held to prevent a concurrent setup/teardown 245 + * from seeing a half valid vector space. 247 246 */ 248 247 lock_vector_lock(); 248 + setup_vector_irq(smp_processor_id()); 249 249 set_cpu_online(smp_processor_id(), true); 250 250 unlock_vector_lock(); 251 251 cpu_set_state_online(smp_processor_id()); ··· 844 854 initial_code = (unsigned long)start_secondary; 845 855 stack_start = idle->thread.sp; 846 856 857 + /* 858 + * Enable the espfix hack for this CPU 859 + */ 860 + #ifdef CONFIG_X86_ESPFIX64 861 + init_espfix_ap(cpu); 862 + #endif 863 + 847 864 /* So we see what's up */ 848 865 announce_cpu(cpu, apicid); 849 866 ··· 992 995 993 996 common_cpu_up(cpu, tidle); 994 997 998 + /* 999 + * We have to walk the irq descriptors to setup the vector 1000 + * space for the cpu which comes online. Prevent irq 1001 + * alloc/free across the bringup. 
1002 + */ 1003 + irq_lock_sparse(); 1004 + 995 1005 err = do_boot_cpu(apicid, cpu, tidle); 1006 + 996 1007 if (err) { 1008 + irq_unlock_sparse(); 997 1009 pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu); 998 1010 return -EIO; 999 1011 } ··· 1019 1013 cpu_relax(); 1020 1014 touch_nmi_watchdog(); 1021 1015 } 1016 + 1017 + irq_unlock_sparse(); 1022 1018 1023 1019 return 0; 1024 1020 }
+10 -1
arch/x86/kernel/tsc.c
··· 598 598 if (!pit_expect_msb(0xff-i, &delta, &d2)) 599 599 break; 600 600 601 + delta -= tsc; 602 + 603 + /* 604 + * Extrapolate the error and fail fast if the error will 605 + * never be below 500 ppm. 606 + */ 607 + if (i == 1 && 608 + d1 + d2 >= (delta * MAX_QUICK_PIT_ITERATIONS) >> 11) 609 + return 0; 610 + 601 611 /* 602 612 * Iterate until the error is less than 500 ppm 603 613 */ 604 - delta -= tsc; 605 614 if (d1+d2 >= delta >> 11) 606 615 continue; 607 616
+2
arch/x86/kvm/cpuid.c
··· 98 98 best->ebx = xstate_required_size(vcpu->arch.xcr0, true); 99 99 100 100 vcpu->arch.eager_fpu = use_eager_fpu() || guest_cpuid_has_mpx(vcpu); 101 + if (vcpu->arch.eager_fpu) 102 + kvm_x86_ops->fpu_activate(vcpu); 101 103 102 104 /* 103 105 * The existing code assumes virtual address is 48-bit in the canonical
+2
arch/x86/kvm/iommu.c
··· 200 200 goto out_unmap; 201 201 } 202 202 203 + kvm_arch_start_assignment(kvm); 203 204 pci_set_dev_assigned(pdev); 204 205 205 206 dev_info(&pdev->dev, "kvm assign device\n"); ··· 225 224 iommu_detach_device(domain, &pdev->dev); 226 225 227 226 pci_clear_dev_assigned(pdev); 227 + kvm_arch_end_assignment(kvm); 228 228 229 229 dev_info(&pdev->dev, "kvm deassign device\n"); 230 230
+9 -1
arch/x86/kvm/mmu.c
··· 2479 2479 return 0; 2480 2480 } 2481 2481 2482 + static bool kvm_is_mmio_pfn(pfn_t pfn) 2483 + { 2484 + if (pfn_valid(pfn)) 2485 + return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)); 2486 + 2487 + return true; 2488 + } 2489 + 2482 2490 static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep, 2483 2491 unsigned pte_access, int level, 2484 2492 gfn_t gfn, pfn_t pfn, bool speculative, ··· 2514 2506 spte |= PT_PAGE_SIZE_MASK; 2515 2507 if (tdp_enabled) 2516 2508 spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn, 2517 - kvm_is_reserved_pfn(pfn)); 2509 + kvm_is_mmio_pfn(pfn)); 2518 2510 2519 2511 if (host_writable) 2520 2512 spte |= SPTE_HOST_WRITEABLE;
+103 -5
arch/x86/kvm/svm.c
··· 865 865 set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0); 866 866 } 867 867 868 + #define MTRR_TYPE_UC_MINUS 7 869 + #define MTRR2PROTVAL_INVALID 0xff 870 + 871 + static u8 mtrr2protval[8]; 872 + 873 + static u8 fallback_mtrr_type(int mtrr) 874 + { 875 + /* 876 + * WT and WP aren't always available in the host PAT. Treat 877 + * them as UC and UC- respectively. Everything else should be 878 + * there. 879 + */ 880 + switch (mtrr) 881 + { 882 + case MTRR_TYPE_WRTHROUGH: 883 + return MTRR_TYPE_UNCACHABLE; 884 + case MTRR_TYPE_WRPROT: 885 + return MTRR_TYPE_UC_MINUS; 886 + default: 887 + BUG(); 888 + } 889 + } 890 + 891 + static void build_mtrr2protval(void) 892 + { 893 + int i; 894 + u64 pat; 895 + 896 + for (i = 0; i < 8; i++) 897 + mtrr2protval[i] = MTRR2PROTVAL_INVALID; 898 + 899 + /* Ignore the invalid MTRR types. */ 900 + mtrr2protval[2] = 0; 901 + mtrr2protval[3] = 0; 902 + 903 + /* 904 + * Use host PAT value to figure out the mapping from guest MTRR 905 + * values to nested page table PAT/PCD/PWT values. We do not 906 + * want to change the host PAT value every time we enter the 907 + * guest. 
908 + */ 909 + rdmsrl(MSR_IA32_CR_PAT, pat); 910 + for (i = 0; i < 8; i++) { 911 + u8 mtrr = pat >> (8 * i); 912 + 913 + if (mtrr2protval[mtrr] == MTRR2PROTVAL_INVALID) 914 + mtrr2protval[mtrr] = __cm_idx2pte(i); 915 + } 916 + 917 + for (i = 0; i < 8; i++) { 918 + if (mtrr2protval[i] == MTRR2PROTVAL_INVALID) { 919 + u8 fallback = fallback_mtrr_type(i); 920 + mtrr2protval[i] = mtrr2protval[fallback]; 921 + BUG_ON(mtrr2protval[i] == MTRR2PROTVAL_INVALID); 922 + } 923 + } 924 + } 925 + 868 926 static __init int svm_hardware_setup(void) 869 927 { 870 928 int cpu; ··· 989 931 } else 990 932 kvm_disable_tdp(); 991 933 934 + build_mtrr2protval(); 992 935 return 0; 993 936 994 937 err: ··· 1144 1085 return target_tsc - tsc; 1145 1086 } 1146 1087 1088 + static void svm_set_guest_pat(struct vcpu_svm *svm, u64 *g_pat) 1089 + { 1090 + struct kvm_vcpu *vcpu = &svm->vcpu; 1091 + 1092 + /* Unlike Intel, AMD takes the guest's CR0.CD into account. 1093 + * 1094 + * AMD does not have IPAT. To emulate it for the case of guests 1095 + * with no assigned devices, just set everything to WB. If guests 1096 + * have assigned devices, however, we cannot force WB for RAM 1097 + * pages only, so use the guest PAT directly. 1098 + */ 1099 + if (!kvm_arch_has_assigned_device(vcpu->kvm)) 1100 + *g_pat = 0x0606060606060606; 1101 + else 1102 + *g_pat = vcpu->arch.pat; 1103 + } 1104 + 1105 + static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) 1106 + { 1107 + u8 mtrr; 1108 + 1109 + /* 1110 + * 1. MMIO: trust guest MTRR, so same as item 3. 1111 + * 2. No passthrough: always map as WB, and force guest PAT to WB as well 1112 + * 3. Passthrough: can't guarantee the result, try to trust guest. 
1113 + */ 1114 + if (!is_mmio && !kvm_arch_has_assigned_device(vcpu->kvm)) 1115 + return 0; 1116 + 1117 + mtrr = kvm_mtrr_get_guest_memory_type(vcpu, gfn); 1118 + return mtrr2protval[mtrr]; 1119 + } 1120 + 1147 1121 static void init_vmcb(struct vcpu_svm *svm, bool init_event) 1148 1122 { 1149 1123 struct vmcb_control_area *control = &svm->vmcb->control; ··· 1272 1180 clr_cr_intercept(svm, INTERCEPT_CR3_READ); 1273 1181 clr_cr_intercept(svm, INTERCEPT_CR3_WRITE); 1274 1182 save->g_pat = svm->vcpu.arch.pat; 1183 + svm_set_guest_pat(svm, &save->g_pat); 1275 1184 save->cr3 = 0; 1276 1185 save->cr4 = 0; 1277 1186 } ··· 3347 3254 case MSR_VM_IGNNE: 3348 3255 vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data); 3349 3256 break; 3257 + case MSR_IA32_CR_PAT: 3258 + if (npt_enabled) { 3259 + if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data)) 3260 + return 1; 3261 + vcpu->arch.pat = data; 3262 + svm_set_guest_pat(svm, &svm->vmcb->save.g_pat); 3263 + mark_dirty(svm->vmcb, VMCB_NPT); 3264 + break; 3265 + } 3266 + /* fall through */ 3350 3267 default: 3351 3268 return kvm_set_msr_common(vcpu, msr); 3352 3269 } ··· 4189 4086 static bool svm_has_high_real_mode_segbase(void) 4190 4087 { 4191 4088 return true; 4192 - } 4193 - 4194 - static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) 4195 - { 4196 - return 0; 4197 4089 } 4198 4090 4199 4091 static void svm_cpuid_update(struct kvm_vcpu *vcpu)
+3 -8
arch/x86/kvm/vmx.c
··· 8632 8632 u64 ipat = 0; 8633 8633 8634 8634 /* For VT-d and EPT combination 8635 - * 1. MMIO: always map as UC 8635 + * 1. MMIO: guest may want to apply WC, trust it. 8636 8636 * 2. EPT with VT-d: 8637 8637 * a. VT-d without snooping control feature: can't guarantee the 8638 - * result, try to trust guest. 8638 + * result, try to trust guest. So the same as item 1. 8639 8639 * b. VT-d with snooping control feature: snooping control feature of 8640 8640 * VT-d engine can guarantee the cache correctness. Just set it 8641 8641 * to WB to keep consistent with host. So the same as item 3. 8642 8642 * 3. EPT without VT-d: always map as WB and set IPAT=1 to keep 8643 8643 * consistent with host MTRR 8644 8644 */ 8645 - if (is_mmio) { 8646 - cache = MTRR_TYPE_UNCACHABLE; 8647 - goto exit; 8648 - } 8649 - 8650 - if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) { 8645 + if (!is_mmio && !kvm_arch_has_noncoherent_dma(vcpu->kvm)) { 8651 8646 ipat = VMX_EPT_IPAT_BIT; 8652 8647 cache = MTRR_TYPE_WRBACK; 8653 8648 goto exit;
+19 -7
arch/x86/kvm/x86.c
··· 3157 3157 cpuid_count(XSTATE_CPUID, index, 3158 3158 &size, &offset, &ecx, &edx); 3159 3159 memcpy(dest, src + offset, size); 3160 - } else 3161 - WARN_ON_ONCE(1); 3160 + } 3162 3161 3163 3162 valid -= feature; 3164 3163 } ··· 7314 7315 7315 7316 vcpu = kvm_x86_ops->vcpu_create(kvm, id); 7316 7317 7317 - /* 7318 - * Activate fpu unconditionally in case the guest needs eager FPU. It will be 7319 - * deactivated soon if it doesn't. 7320 - */ 7321 - kvm_x86_ops->fpu_activate(vcpu); 7322 7318 return vcpu; 7323 7319 } 7324 7320 ··· 8211 8217 return !kvm_event_needs_reinjection(vcpu) && 8212 8218 kvm_x86_ops->interrupt_allowed(vcpu); 8213 8219 } 8220 + 8221 + void kvm_arch_start_assignment(struct kvm *kvm) 8222 + { 8223 + atomic_inc(&kvm->arch.assigned_device_count); 8224 + } 8225 + EXPORT_SYMBOL_GPL(kvm_arch_start_assignment); 8226 + 8227 + void kvm_arch_end_assignment(struct kvm *kvm) 8228 + { 8229 + atomic_dec(&kvm->arch.assigned_device_count); 8230 + } 8231 + EXPORT_SYMBOL_GPL(kvm_arch_end_assignment); 8232 + 8233 + bool kvm_arch_has_assigned_device(struct kvm *kvm) 8234 + { 8235 + return atomic_read(&kvm->arch.assigned_device_count); 8236 + } 8237 + EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device); 8214 8238 8215 8239 void kvm_arch_register_noncoherent_dma(struct kvm *kvm) 8216 8240 {
+1 -1
arch/x86/lib/usercopy.c
··· 20 20 unsigned long ret; 21 21 22 22 if (__range_not_ok(from, n, TASK_SIZE)) 23 - return 0; 23 + return n; 24 24 25 25 /* 26 26 * Even though this function is typically called from NMI/IRQ context
+42 -5
arch/x86/mm/kasan_init_64.c
··· 1 + #define pr_fmt(fmt) "kasan: " fmt 1 2 #include <linux/bootmem.h> 2 3 #include <linux/kasan.h> 3 4 #include <linux/kdebug.h> ··· 12 11 extern pgd_t early_level4_pgt[PTRS_PER_PGD]; 13 12 extern struct range pfn_mapped[E820_X_MAX]; 14 13 15 - extern unsigned char kasan_zero_page[PAGE_SIZE]; 14 + static pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss; 15 + static pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss; 16 + static pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss; 17 + 18 + /* 19 + * This page used as early shadow. We don't use empty_zero_page 20 + * at early stages, stack instrumentation could write some garbage 21 + * to this page. 22 + * Latter we reuse it as zero shadow for large ranges of memory 23 + * that allowed to access, but not instrumented by kasan 24 + * (vmalloc/vmemmap ...). 25 + */ 26 + static unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss; 16 27 17 28 static int __init map_range(struct range *range) 18 29 { ··· 49 36 pgd_clear(pgd_offset_k(start)); 50 37 } 51 38 52 - void __init kasan_map_early_shadow(pgd_t *pgd) 39 + static void __init kasan_map_early_shadow(pgd_t *pgd) 53 40 { 54 41 int i; 55 42 unsigned long start = KASAN_SHADOW_START; ··· 86 73 while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) { 87 74 WARN_ON(!pmd_none(*pmd)); 88 75 set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte) 89 - | __PAGE_KERNEL_RO)); 76 + | _KERNPG_TABLE)); 90 77 addr += PMD_SIZE; 91 78 pmd = pmd_offset(pud, addr); 92 79 } ··· 112 99 while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) { 113 100 WARN_ON(!pud_none(*pud)); 114 101 set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd) 115 - | __PAGE_KERNEL_RO)); 102 + | _KERNPG_TABLE)); 116 103 addr += PUD_SIZE; 117 104 pud = pud_offset(pgd, addr); 118 105 } ··· 137 124 while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) { 138 125 WARN_ON(!pgd_none(*pgd)); 139 126 set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud) 140 - | __PAGE_KERNEL_RO)); 127 + | _KERNPG_TABLE));
141 128 addr += PGDIR_SIZE; 142 129 pgd = pgd_offset_k(addr); 143 130 } ··· 179 166 }; 180 167 #endif 181 168 169 + void __init kasan_early_init(void) 170 + { 171 + int i; 172 + pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL; 173 + pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE; 174 + pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE; 175 + 176 + for (i = 0; i < PTRS_PER_PTE; i++) 177 + kasan_zero_pte[i] = __pte(pte_val); 178 + 179 + for (i = 0; i < PTRS_PER_PMD; i++) 180 + kasan_zero_pmd[i] = __pmd(pmd_val); 181 + 182 + for (i = 0; i < PTRS_PER_PUD; i++) 183 + kasan_zero_pud[i] = __pud(pud_val); 184 + 185 + kasan_map_early_shadow(early_level4_pgt); 186 + kasan_map_early_shadow(init_level4_pgt); 187 + } 188 + 182 189 void __init kasan_init(void) 183 190 { 184 191 int i; ··· 209 176 210 177 memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt)); 211 178 load_cr3(early_level4_pgt); 179 + __flush_tlb_all(); 212 180 213 181 clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END); 214 182 ··· 236 202 memset(kasan_zero_page, 0, PAGE_SIZE); 237 203 238 204 load_cr3(init_level4_pgt); 205 + __flush_tlb_all(); 239 206 init_task.kasan_depth = 0; 207 + 208 + pr_info("Kernel address sanitizer initialized\n"); 240 209 }
+1
arch/xtensa/include/asm/Kbuild
··· 19 19 generic-y += local.h 20 20 generic-y += local64.h 21 21 generic-y += mcs_spinlock.h 22 + generic-y += mm-arch-hooks.h 22 23 generic-y += percpu.h 23 24 generic-y += preempt.h 24 25 generic-y += resource.h
-15
arch/xtensa/include/asm/mm-arch-hooks.h
··· 1 - /* 2 - * Architecture specific mm hooks 3 - * 4 - * Copyright (C) 2015, IBM Corporation 5 - * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef _ASM_XTENSA_MM_ARCH_HOOKS_H 13 - #define _ASM_XTENSA_MM_ARCH_HOOKS_H 14 - 15 - #endif /* _ASM_XTENSA_MM_ARCH_HOOKS_H */
+2 -2
block/bio-integrity.c
··· 51 51 unsigned long idx = BIO_POOL_NONE; 52 52 unsigned inline_vecs; 53 53 54 - if (!bs) { 54 + if (!bs || !bs->bio_integrity_pool) { 55 55 bip = kmalloc(sizeof(struct bio_integrity_payload) + 56 56 sizeof(struct bio_vec) * nr_vecs, gfp_mask); 57 57 inline_vecs = nr_vecs; ··· 104 104 kfree(page_address(bip->bip_vec->bv_page) + 105 105 bip->bip_vec->bv_offset); 106 106 107 - if (bs) { 107 + if (bs && bs->bio_integrity_pool) { 108 108 if (bip->bip_slab != BIO_POOL_NONE) 109 109 bvec_free(bs->bvec_integrity_pool, bip->bip_vec, 110 110 bip->bip_slab);
+81 -59
block/blk-cgroup.c
··· 29 29 30 30 #define MAX_KEY_LEN 100 31 31 32 + /* 33 + * blkcg_pol_mutex protects blkcg_policy[] and policy [de]activation. 34 + * blkcg_pol_register_mutex nests outside of it and synchronizes entire 35 + * policy [un]register operations including cgroup file additions / 36 + * removals. Putting cgroup file registration outside blkcg_pol_mutex 37 + * allows grabbing it from cgroup callbacks. 38 + */ 39 + static DEFINE_MUTEX(blkcg_pol_register_mutex); 32 40 static DEFINE_MUTEX(blkcg_pol_mutex); 33 41 34 42 struct blkcg blkcg_root; ··· 45 37 struct cgroup_subsys_state * const blkcg_root_css = &blkcg_root.css; 46 38 47 39 static struct blkcg_policy *blkcg_policy[BLKCG_MAX_POLS]; 40 + 41 + static LIST_HEAD(all_blkcgs); /* protected by blkcg_pol_mutex */ 48 42 49 43 static bool blkcg_policy_enabled(struct request_queue *q, 50 44 const struct blkcg_policy *pol) ··· 463 453 struct blkcg_gq *blkg; 464 454 int i; 465 455 466 - /* 467 - * XXX: We invoke cgroup_add/rm_cftypes() under blkcg_pol_mutex 468 - * which ends up putting cgroup's internal cgroup_tree_mutex under 469 - * it; however, cgroup_tree_mutex is nested above cgroup file 470 - * active protection and grabbing blkcg_pol_mutex from a cgroup 471 - * file operation creates a possible circular dependency. cgroup 472 - * internal locking is planned to go through further simplification 473 - * and this issue should go away soon. For now, let's trylock 474 - * blkcg_pol_mutex and restart the write on failure. 
475 - * 476 - * http://lkml.kernel.org/g/5363C04B.4010400@oracle.com 477 - */ 478 - if (!mutex_trylock(&blkcg_pol_mutex)) 479 - return restart_syscall(); 456 + mutex_lock(&blkcg_pol_mutex); 480 457 spin_lock_irq(&blkcg->lock); 481 458 482 459 /* ··· 819 822 { 820 823 struct blkcg *blkcg = css_to_blkcg(css); 821 824 822 - if (blkcg != &blkcg_root) 825 + mutex_lock(&blkcg_pol_mutex); 826 + list_del(&blkcg->all_blkcgs_node); 827 + mutex_unlock(&blkcg_pol_mutex); 828 + 829 + if (blkcg != &blkcg_root) { 830 + int i; 831 + 832 + for (i = 0; i < BLKCG_MAX_POLS; i++) 833 + kfree(blkcg->pd[i]); 823 834 kfree(blkcg); 835 + } 824 836 } 825 837 826 838 static struct cgroup_subsys_state * ··· 838 832 struct blkcg *blkcg; 839 833 struct cgroup_subsys_state *ret; 840 834 int i; 835 + 836 + mutex_lock(&blkcg_pol_mutex); 841 837 842 838 if (!parent_css) { 843 839 blkcg = &blkcg_root; ··· 883 875 #ifdef CONFIG_CGROUP_WRITEBACK 884 876 INIT_LIST_HEAD(&blkcg->cgwb_list); 885 877 #endif 878 + list_add_tail(&blkcg->all_blkcgs_node, &all_blkcgs); 879 + 880 + mutex_unlock(&blkcg_pol_mutex); 886 881 return &blkcg->css; 887 882 888 883 free_pd_blkcg: 889 884 for (i--; i >= 0; i--) 890 885 kfree(blkcg->pd[i]); 891 - 892 886 free_blkcg: 893 887 kfree(blkcg); 888 + mutex_unlock(&blkcg_pol_mutex); 894 889 return ret; 895 890 } 896 891 ··· 1048 1037 const struct blkcg_policy *pol) 1049 1038 { 1050 1039 LIST_HEAD(pds); 1051 - LIST_HEAD(cpds); 1052 1040 struct blkcg_gq *blkg; 1053 1041 struct blkg_policy_data *pd, *nd; 1054 - struct blkcg_policy_data *cpd, *cnd; 1055 1042 int cnt = 0, ret; 1056 1043 1057 1044 if (blkcg_policy_enabled(q, pol)) ··· 1062 1053 cnt++; 1063 1054 spin_unlock_irq(q->queue_lock); 1064 1055 1065 - /* 1066 - * Allocate per-blkg and per-blkcg policy data 1067 - * for all existing blkgs. 
1068 - */ 1056 + /* allocate per-blkg policy data for all existing blkgs */ 1069 1057 while (cnt--) { 1070 1058 pd = kzalloc_node(pol->pd_size, GFP_KERNEL, q->node); 1071 1059 if (!pd) { ··· 1070 1064 goto out_free; 1071 1065 } 1072 1066 list_add_tail(&pd->alloc_node, &pds); 1073 - 1074 - if (!pol->cpd_size) 1075 - continue; 1076 - cpd = kzalloc_node(pol->cpd_size, GFP_KERNEL, q->node); 1077 - if (!cpd) { 1078 - ret = -ENOMEM; 1079 - goto out_free; 1080 - } 1081 - list_add_tail(&cpd->alloc_node, &cpds); 1082 1067 } 1083 1068 1084 1069 /* ··· 1079 1082 spin_lock_irq(q->queue_lock); 1080 1083 1081 1084 list_for_each_entry(blkg, &q->blkg_list, q_node) { 1082 - if (WARN_ON(list_empty(&pds)) || 1083 - WARN_ON(pol->cpd_size && list_empty(&cpds))) { 1085 + if (WARN_ON(list_empty(&pds))) { 1084 1086 /* umm... this shouldn't happen, just abort */ 1085 1087 ret = -ENOMEM; 1086 1088 goto out_unlock; 1087 1089 } 1088 - cpd = list_first_entry(&cpds, struct blkcg_policy_data, 1089 - alloc_node); 1090 - list_del_init(&cpd->alloc_node); 1091 1090 pd = list_first_entry(&pds, struct blkg_policy_data, alloc_node); 1092 1091 list_del_init(&pd->alloc_node); 1093 1092 1094 1093 /* grab blkcg lock too while installing @pd on @blkg */ 1095 1094 spin_lock(&blkg->blkcg->lock); 1096 1095 1097 - if (!pol->cpd_size) 1098 - goto no_cpd; 1099 - if (!blkg->blkcg->pd[pol->plid]) { 1100 - /* Per-policy per-blkcg data */ 1101 - blkg->blkcg->pd[pol->plid] = cpd; 1102 - cpd->plid = pol->plid; 1103 - pol->cpd_init_fn(blkg->blkcg); 1104 - } else { /* must free it as it has already been extracted */ 1105 - kfree(cpd); 1106 - } 1107 - no_cpd: 1108 1096 blkg->pd[pol->plid] = pd; 1109 1097 pd->blkg = blkg; 1110 1098 pd->plid = pol->plid; ··· 1106 1124 blk_queue_bypass_end(q); 1107 1125 list_for_each_entry_safe(pd, nd, &pds, alloc_node) 1108 1126 kfree(pd); 1109 - list_for_each_entry_safe(cpd, cnd, &cpds, alloc_node) 1110 - kfree(cpd); 1111 1127 return ret; 1112 1128 } 1113 1129 
EXPORT_SYMBOL_GPL(blkcg_activate_policy); ··· 1142 1162 1143 1163 kfree(blkg->pd[pol->plid]); 1144 1164 blkg->pd[pol->plid] = NULL; 1145 - kfree(blkg->blkcg->pd[pol->plid]); 1146 - blkg->blkcg->pd[pol->plid] = NULL; 1147 1165 1148 1166 spin_unlock(&blkg->blkcg->lock); 1149 1167 } ··· 1160 1182 */ 1161 1183 int blkcg_policy_register(struct blkcg_policy *pol) 1162 1184 { 1185 + struct blkcg *blkcg; 1163 1186 int i, ret; 1164 1187 1165 1188 if (WARN_ON(pol->pd_size < sizeof(struct blkg_policy_data))) 1166 1189 return -EINVAL; 1167 1190 1191 + mutex_lock(&blkcg_pol_register_mutex); 1168 1192 mutex_lock(&blkcg_pol_mutex); 1169 1193 1170 1194 /* find an empty slot */ ··· 1175 1195 if (!blkcg_policy[i]) 1176 1196 break; 1177 1197 if (i >= BLKCG_MAX_POLS) 1178 - goto out_unlock; 1198 + goto err_unlock; 1179 1199 1180 - /* register and update blkgs */ 1200 + /* register @pol */ 1181 1201 pol->plid = i; 1182 - blkcg_policy[i] = pol; 1202 + blkcg_policy[pol->plid] = pol; 1203 + 1204 + /* allocate and install cpd's */ 1205 + if (pol->cpd_size) { 1206 + list_for_each_entry(blkcg, &all_blkcgs, all_blkcgs_node) { 1207 + struct blkcg_policy_data *cpd; 1208 + 1209 + cpd = kzalloc(pol->cpd_size, GFP_KERNEL); 1210 + if (!cpd) { 1211 + mutex_unlock(&blkcg_pol_mutex); 1212 + goto err_free_cpds; 1213 + } 1214 + 1215 + blkcg->pd[pol->plid] = cpd; 1216 + cpd->plid = pol->plid; 1217 + pol->cpd_init_fn(blkcg); 1218 + } 1219 + } 1220 + 1221 + mutex_unlock(&blkcg_pol_mutex); 1183 1222 1184 1223 /* everything is in place, add intf files for the new policy */ 1185 1224 if (pol->cftypes) 1186 1225 WARN_ON(cgroup_add_legacy_cftypes(&blkio_cgrp_subsys, 1187 1226 pol->cftypes)); 1188 - ret = 0; 1189 - out_unlock: 1227 + mutex_unlock(&blkcg_pol_register_mutex); 1228 + return 0; 1229 + 1230 + err_free_cpds: 1231 + if (pol->cpd_size) { 1232 + list_for_each_entry(blkcg, &all_blkcgs, all_blkcgs_node) { 1233 + kfree(blkcg->pd[pol->plid]); 1234 + blkcg->pd[pol->plid] = NULL; 1235 + } 1236 + } 1237 + blkcg_policy[pol->plid] = NULL;
1238 + err_unlock: 1190 1239 mutex_unlock(&blkcg_pol_mutex); 1240 + mutex_unlock(&blkcg_pol_register_mutex); 1191 1241 return ret; 1192 1242 } 1193 1243 EXPORT_SYMBOL_GPL(blkcg_policy_register); ··· 1230 1220 */ 1231 1221 void blkcg_policy_unregister(struct blkcg_policy *pol) 1232 1222 { 1233 - mutex_lock(&blkcg_pol_mutex); 1223 + struct blkcg *blkcg; 1224 + 1225 + mutex_lock(&blkcg_pol_register_mutex); 1234 1226 1235 1227 if (WARN_ON(blkcg_policy[pol->plid] != pol)) 1236 1228 goto out_unlock; ··· 1241 1229 if (pol->cftypes) 1242 1230 cgroup_rm_cftypes(pol->cftypes); 1243 1231 1244 - /* unregister and update blkgs */ 1232 + /* remove cpds and unregister */ 1233 + mutex_lock(&blkcg_pol_mutex); 1234 + 1235 + if (pol->cpd_size) { 1236 + list_for_each_entry(blkcg, &all_blkcgs, all_blkcgs_node) { 1237 + kfree(blkcg->pd[pol->plid]); 1238 + blkcg->pd[pol->plid] = NULL; 1239 + } 1240 + } 1245 1241 blkcg_policy[pol->plid] = NULL; 1246 - out_unlock: 1242 + 1247 1243 mutex_unlock(&blkcg_pol_mutex); 1244 + out_unlock: 1245 + mutex_unlock(&blkcg_pol_register_mutex); 1248 1246 } 1249 1247 EXPORT_SYMBOL_GPL(blkcg_policy_unregister);
+1 -1
block/blk-core.c
··· 3370 3370 int __init blk_dev_init(void) 3371 3371 { 3372 3372 BUILD_BUG_ON(__REQ_NR_BITS > 8 * 3373 - sizeof(((struct request *)0)->cmd_flags)); 3373 + FIELD_SIZEOF(struct request, cmd_flags)); 3374 3374 3375 3375 /* used for unplugging and affects IO latency/throughput - HIGHPRI */ 3376 3376 kblockd_workqueue = alloc_workqueue("kblockd",
+1 -1
block/blk-mq.c
··· 1998 1998 goto err_hctxs; 1999 1999 2000 2000 setup_timer(&q->timeout, blk_mq_rq_timer, (unsigned long) q); 2001 - blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30000); 2001 + blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ); 2002 2002 2003 2003 q->nr_queues = nr_cpu_ids; 2004 2004 q->nr_hw_queues = set->nr_hw_queues;
+5 -2
drivers/acpi/acpi_lpss.c
··· 352 352 pdata->mmio_size = resource_size(rentry->res); 353 353 pdata->mmio_base = ioremap(rentry->res->start, 354 354 pdata->mmio_size); 355 - if (!pdata->mmio_base) 356 - goto err_out; 357 355 break; 358 356 } 359 357 360 358 acpi_dev_free_resource_list(&resource_list); 359 + 360 + if (!pdata->mmio_base) { 361 + ret = -ENOMEM; 362 + goto err_out; 363 + } 361 364 362 365 pdata->dev_desc = dev_desc; 363 366
+120 -14
drivers/acpi/nfit.c
··· 18 18 #include <linux/list.h> 19 19 #include <linux/acpi.h> 20 20 #include <linux/sort.h> 21 + #include <linux/pmem.h> 21 22 #include <linux/io.h> 22 23 #include "nfit.h" 23 24 ··· 306 305 return true; 307 306 } 308 307 308 + static bool add_flush(struct acpi_nfit_desc *acpi_desc, 309 + struct acpi_nfit_flush_address *flush) 310 + { 311 + struct device *dev = acpi_desc->dev; 312 + struct nfit_flush *nfit_flush = devm_kzalloc(dev, sizeof(*nfit_flush), 313 + GFP_KERNEL); 314 + 315 + if (!nfit_flush) 316 + return false; 317 + INIT_LIST_HEAD(&nfit_flush->list); 318 + nfit_flush->flush = flush; 319 + list_add_tail(&nfit_flush->list, &acpi_desc->flushes); 320 + dev_dbg(dev, "%s: nfit_flush handle: %d hint_count: %d\n", __func__, 321 + flush->device_handle, flush->hint_count); 322 + return true; 323 + } 324 + 309 325 static void *add_table(struct acpi_nfit_desc *acpi_desc, void *table, 310 326 const void *end) 311 327 { ··· 356 338 return err; 357 339 break; 358 340 case ACPI_NFIT_TYPE_FLUSH_ADDRESS: 359 - dev_dbg(dev, "%s: flush\n", __func__); 341 + if (!add_flush(acpi_desc, table)) 342 + return err; 360 343 break; 361 344 case ACPI_NFIT_TYPE_SMBIOS: 362 345 dev_dbg(dev, "%s: smbios\n", __func__); ··· 408 389 { 409 390 u16 dcr = __to_nfit_memdev(nfit_mem)->region_index; 410 391 struct nfit_memdev *nfit_memdev; 392 + struct nfit_flush *nfit_flush; 411 393 struct nfit_dcr *nfit_dcr; 412 394 struct nfit_bdw *nfit_bdw; 413 395 struct nfit_idt *nfit_idt; ··· 460 440 if (nfit_idt->idt->interleave_index != idt_idx) 461 441 continue; 462 442 nfit_mem->idt_bdw = nfit_idt->idt; 443 + break; 444 + } 445 + 446 + list_for_each_entry(nfit_flush, &acpi_desc->flushes, list) { 447 + if (nfit_flush->flush->device_handle != 448 + nfit_memdev->memdev->device_handle) 449 + continue; 450 + nfit_mem->nfit_flush = nfit_flush; 463 451 break; 464 452 } 465 453 break; ··· 1006 978 return mmio->base_offset + line_offset + table_offset + sub_line_offset; 1007 979 } 1008 980 981 + static void wmb_blk(struct nfit_blk *nfit_blk)
982 + { 983 + 984 + if (nfit_blk->nvdimm_flush) { 985 + /* 986 + * The first wmb() is needed to 'sfence' all previous writes 987 + * such that they are architecturally visible for the platform 988 + * buffer flush. Note that we've already arranged for pmem 989 + * writes to avoid the cache via arch_memcpy_to_pmem(). The 990 + * final wmb() ensures ordering for the NVDIMM flush write. 991 + */ 992 + wmb(); 993 + writeq(1, nfit_blk->nvdimm_flush); 994 + wmb(); 995 + } else 996 + wmb_pmem(); 997 + } 998 + 1009 999 static u64 read_blk_stat(struct nfit_blk *nfit_blk, unsigned int bw) 1010 1000 { 1011 1001 struct nfit_blk_mmio *mmio = &nfit_blk->mmio[DCR]; ··· 1058 1012 offset = to_interleave_offset(offset, mmio); 1059 1013 1060 1014 writeq(cmd, mmio->base + offset); 1061 - /* FIXME: conditionally perform read-back if mandated by firmware */ 1015 + wmb_blk(nfit_blk); 1016 + 1017 + if (nfit_blk->dimm_flags & ND_BLK_DCR_LATCH) 1018 + readq(mmio->base + offset); 1062 1019 } 1063 1020 1064 1021 static int acpi_nfit_blk_single_io(struct nfit_blk *nfit_blk, ··· 1075 1026 1076 1027 base_offset = nfit_blk->bdw_offset + dpa % L1_CACHE_BYTES 1077 1028 + lane * mmio->size; 1078 - /* TODO: non-temporal access, flush hints, cache management etc... */ 1079 1029 write_blk_ctl(nfit_blk, lane, dpa, len, rw); 1080 1030 while (len) { 1081 1031 unsigned int c; ··· 1093 1045 } 1094 1046 1095 1047 if (rw) 1096 - memcpy(mmio->aperture + offset, iobuf + copied, c); 1048 + memcpy_to_pmem(mmio->aperture + offset, 1049 + iobuf + copied, c); 1097 1050 else 1098 - memcpy(iobuf + copied, mmio->aperture + offset, c); 1051 + memcpy_from_pmem(iobuf + copied, 1052 + mmio->aperture + offset, c); 1099 1053 1100 1054 copied += c; 1101 1055 len -= c; 1102 1056 } 1057 + 1058 + if (rw) 1059 + wmb_blk(nfit_blk); 1060 + 1103 1061 rc = read_blk_stat(nfit_blk, lane) ? -EIO : 0;
1104 1062 return rc; 1105 1063 } ··· 1178 1124 } 1179 1125 1180 1126 static void __iomem *__nfit_spa_map(struct acpi_nfit_desc *acpi_desc, 1181 - struct acpi_nfit_system_address *spa) 1127 + struct acpi_nfit_system_address *spa, enum spa_map_type type) 1182 1128 { 1183 1129 resource_size_t start = spa->address; 1184 1130 resource_size_t n = spa->length; ··· 1206 1152 if (!res) 1207 1153 goto err_mem; 1208 1154 1209 - /* TODO: cacheability based on the spa type */ 1210 - spa_map->iomem = ioremap_nocache(start, n); 1155 + if (type == SPA_MAP_APERTURE) { 1156 + /* 1157 + * TODO: memremap_pmem() support, but that requires cache 1158 + * flushing when the aperture is moved. 1159 + */ 1160 + spa_map->iomem = ioremap_wc(start, n); 1161 + } else 1162 + spa_map->iomem = ioremap_nocache(start, n); 1163 + 1211 1164 if (!spa_map->iomem) 1212 1165 goto err_map; 1213 1166 ··· 1232 1171 * nfit_spa_map - interleave-aware managed-mappings of acpi_nfit_system_address ranges 1233 1172 * @nvdimm_bus: NFIT-bus that provided the spa table entry 1234 1173 * @nfit_spa: spa table to map 1174 + * @type: aperture or control region 1235 1175 * 1236 1176 * In the case where block-data-window apertures and 1237 1177 * dimm-control-regions are interleaved they will end up sharing a ··· 1242 1180 * unbound.
1243 1181 */ 1244 1182 static void __iomem *nfit_spa_map(struct acpi_nfit_desc *acpi_desc, 1245 - struct acpi_nfit_system_address *spa) 1183 + struct acpi_nfit_system_address *spa, enum spa_map_type type) 1246 1184 { 1247 1185 void __iomem *iomem; 1248 1186 1249 1187 mutex_lock(&acpi_desc->spa_map_mutex); 1250 - iomem = __nfit_spa_map(acpi_desc, spa); 1188 + iomem = __nfit_spa_map(acpi_desc, spa, type); 1251 1189 mutex_unlock(&acpi_desc->spa_map_mutex); 1252 1190 1253 1191 return iomem; ··· 1268 1206 return 0; 1269 1207 } 1270 1208 1209 + static int acpi_nfit_blk_get_flags(struct nvdimm_bus_descriptor *nd_desc, 1210 + struct nvdimm *nvdimm, struct nfit_blk *nfit_blk) 1211 + { 1212 + struct nd_cmd_dimm_flags flags; 1213 + int rc; 1214 + 1215 + memset(&flags, 0, sizeof(flags)); 1216 + rc = nd_desc->ndctl(nd_desc, nvdimm, ND_CMD_DIMM_FLAGS, &flags, 1217 + sizeof(flags)); 1218 + 1219 + if (rc >= 0 && flags.status == 0) 1220 + nfit_blk->dimm_flags = flags.flags; 1221 + else if (rc == -ENOTTY) { 1222 + /* fall back to a conservative default */ 1223 + nfit_blk->dimm_flags = ND_BLK_DCR_LATCH; 1224 + rc = 0; 1225 + } else 1226 + rc = -ENXIO; 1227 + 1228 + return rc; 1229 + } 1230 + 1271 1231 static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus, 1272 1232 struct device *dev) 1273 1233 { 1274 1234 struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus); 1275 1235 struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc); 1276 1236 struct nd_blk_region *ndbr = to_nd_blk_region(dev); 1237 + struct nfit_flush *nfit_flush; 1277 1238 struct nfit_blk_mmio *mmio; 1278 1239 struct nfit_blk *nfit_blk; 1279 1240 struct nfit_mem *nfit_mem; ··· 1308 1223 if (!nfit_mem || !nfit_mem->dcr || !nfit_mem->bdw) { 1309 1224 dev_dbg(dev, "%s: missing%s%s%s\n", __func__, 1310 1225 nfit_mem ? "" : " nfit_mem", 1311 - nfit_mem->dcr ? "" : " dcr", 1312 - nfit_mem->bdw ? "" : " bdw"); 1226 + (nfit_mem && nfit_mem->dcr) ? "" : " dcr", 1227 + (nfit_mem && nfit_mem->bdw) ? 
"" : " bdw"); 1313 1228 return -ENXIO; 1314 1229 } 1315 1230 ··· 1322 1237 /* map block aperture memory */ 1323 1238 nfit_blk->bdw_offset = nfit_mem->bdw->offset; 1324 1239 mmio = &nfit_blk->mmio[BDW]; 1325 - mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_bdw); 1240 + mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_bdw, 1241 + SPA_MAP_APERTURE); 1326 1242 if (!mmio->base) { 1327 1243 dev_dbg(dev, "%s: %s failed to map bdw\n", __func__, 1328 1244 nvdimm_name(nvdimm)); ··· 1345 1259 nfit_blk->cmd_offset = nfit_mem->dcr->command_offset; 1346 1260 nfit_blk->stat_offset = nfit_mem->dcr->status_offset; 1347 1261 mmio = &nfit_blk->mmio[DCR]; 1348 - mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_dcr); 1262 + mmio->base = nfit_spa_map(acpi_desc, nfit_mem->spa_dcr, 1263 + SPA_MAP_CONTROL); 1349 1264 if (!mmio->base) { 1350 1265 dev_dbg(dev, "%s: %s failed to map dcr\n", __func__, 1351 1266 nvdimm_name(nvdimm)); ··· 1363 1276 __func__, nvdimm_name(nvdimm)); 1364 1277 return rc; 1365 1278 } 1279 + 1280 + rc = acpi_nfit_blk_get_flags(nd_desc, nvdimm, nfit_blk); 1281 + if (rc < 0) { 1282 + dev_dbg(dev, "%s: %s failed get DIMM flags\n", 1283 + __func__, nvdimm_name(nvdimm)); 1284 + return rc; 1285 + } 1286 + 1287 + nfit_flush = nfit_mem->nfit_flush; 1288 + if (nfit_flush && nfit_flush->flush->hint_count != 0) { 1289 + nfit_blk->nvdimm_flush = devm_ioremap_nocache(dev, 1290 + nfit_flush->flush->hint_address[0], 8); 1291 + if (!nfit_blk->nvdimm_flush) 1292 + return -ENOMEM; 1293 + } 1294 + 1295 + if (!arch_has_pmem_api() && !nfit_blk->nvdimm_flush) 1296 + dev_warn(dev, "unable to guarantee persistence of writes\n"); 1366 1297 1367 1298 if (mmio->line_size == 0) 1368 1299 return 0; ··· 1564 1459 INIT_LIST_HEAD(&acpi_desc->dcrs); 1565 1460 INIT_LIST_HEAD(&acpi_desc->bdws); 1566 1461 INIT_LIST_HEAD(&acpi_desc->idts); 1462 + INIT_LIST_HEAD(&acpi_desc->flushes); 1567 1463 INIT_LIST_HEAD(&acpi_desc->memdevs); 1568 1464 INIT_LIST_HEAD(&acpi_desc->dimms); 1569 1465 
mutex_init(&acpi_desc->spa_map_mutex);
+19 -1
drivers/acpi/nfit.h
··· 40 40 NFIT_UUID_MAX, 41 41 }; 42 42 43 + enum { 44 + ND_BLK_DCR_LATCH = 2, 45 + }; 46 + 43 47 struct nfit_spa { 44 48 struct acpi_nfit_system_address *spa; 45 49 struct list_head list; ··· 64 60 struct list_head list; 65 61 }; 66 62 63 + struct nfit_flush { 64 + struct acpi_nfit_flush_address *flush; 65 + struct list_head list; 66 + }; 67 + 67 68 struct nfit_memdev { 68 69 struct acpi_nfit_memory_map *memdev; 69 70 struct list_head list; ··· 86 77 struct acpi_nfit_system_address *spa_bdw; 87 78 struct acpi_nfit_interleave *idt_dcr; 88 79 struct acpi_nfit_interleave *idt_bdw; 80 + struct nfit_flush *nfit_flush; 89 81 struct list_head list; 90 82 struct acpi_device *adev; 91 83 unsigned long dsm_mask; ··· 98 88 struct mutex spa_map_mutex; 99 89 struct list_head spa_maps; 100 90 struct list_head memdevs; 91 + struct list_head flushes; 101 92 struct list_head dimms; 102 93 struct list_head spas; 103 94 struct list_head dcrs; ··· 120 109 struct nfit_blk_mmio { 121 110 union { 122 111 void __iomem *base; 123 - void *aperture; 112 + void __pmem *aperture; 124 113 }; 125 114 u64 size; 126 115 u64 base_offset; ··· 134 123 u64 bdw_offset; /* post interleave offset */ 135 124 u64 stat_offset; 136 125 u64 cmd_offset; 126 + void __iomem *nvdimm_flush; 127 + u32 dimm_flags; 128 + }; 129 + 130 + enum spa_map_type { 131 + SPA_MAP_CONTROL, 132 + SPA_MAP_APERTURE, 137 133 }; 138 134 139 135 struct nfit_spa_mapping {
+9 -3
drivers/acpi/osl.c
··· 175 175 if (!addr || !length) 176 176 return; 177 177 178 - acpi_reserve_region(addr, length, gas->space_id, 0, desc); 178 + /* Resources are never freed */ 179 + if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO) 180 + request_region(addr, length, desc); 181 + else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) 182 + request_mem_region(addr, length, desc); 179 183 } 180 184 181 - static void __init acpi_reserve_resources(void) 185 + static int __init acpi_reserve_resources(void) 182 186 { 183 187 acpi_request_region(&acpi_gbl_FADT.xpm1a_event_block, acpi_gbl_FADT.pm1_event_length, 184 188 "ACPI PM1a_EVT_BLK"); ··· 211 207 if (!(acpi_gbl_FADT.gpe1_block_length & 0x1)) 212 208 acpi_request_region(&acpi_gbl_FADT.xgpe1_block, 213 209 acpi_gbl_FADT.gpe1_block_length, "ACPI GPE1_BLK"); 210 + 211 + return 0; 214 212 } 213 + fs_initcall_sync(acpi_reserve_resources); 215 214 216 215 void acpi_os_printf(const char *fmt, ...) 217 216 { ··· 1869 1862 1870 1863 acpi_status __init acpi_os_initialize1(void) 1871 1864 { 1872 - acpi_reserve_resources(); 1873 1865 kacpid_wq = alloc_workqueue("kacpid", 0, 1); 1874 1866 kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1); 1875 1867 kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);
+15 -171
drivers/acpi/resource.c
··· 26 26 #include <linux/device.h> 27 27 #include <linux/export.h> 28 28 #include <linux/ioport.h> 29 - #include <linux/list.h> 30 29 #include <linux/slab.h> 31 30 32 31 #ifdef CONFIG_X86 ··· 193 194 u8 iodec = attr->granularity == 0xfff ? ACPI_DECODE_10 : ACPI_DECODE_16; 194 195 bool wp = addr->info.mem.write_protect; 195 196 u64 len = attr->address_length; 197 + u64 start, end, offset = 0; 196 198 struct resource *res = &win->res; 197 199 198 200 /* ··· 205 205 pr_debug("ACPI: Invalid address space min_addr_fix %d, max_addr_fix %d, len %llx\n", 206 206 addr->min_address_fixed, addr->max_address_fixed, len); 207 207 208 - res->start = attr->minimum; 209 - res->end = attr->maximum; 210 - 211 208 /* 212 209 * For bridges that translate addresses across the bridge, 213 210 * translation_offset is the offset that must be added to the ··· 212 215 * primary side. Non-bridge devices must list 0 for all Address 213 216 * Translation offset bits. 214 217 */ 215 - if (addr->producer_consumer == ACPI_PRODUCER) { 216 - res->start += attr->translation_offset; 217 - res->end += attr->translation_offset; 218 - } else if (attr->translation_offset) { 218 + if (addr->producer_consumer == ACPI_PRODUCER) 219 + offset = attr->translation_offset; 220 + else if (attr->translation_offset) 219 221 pr_debug("ACPI: translation_offset(%lld) is invalid for non-bridge device.\n", 220 222 attr->translation_offset); 223 + start = attr->minimum + offset; 224 + end = attr->maximum + offset; 225 + 226 + win->offset = offset; 227 + res->start = start; 228 + res->end = end; 229 + if (sizeof(resource_size_t) < sizeof(u64) && 230 + (offset != win->offset || start != res->start || end != res->end)) { 231 + pr_warn("acpi resource window ([%#llx-%#llx] ignored, not CPU addressable)\n", 232 + attr->minimum, attr->maximum); 233 + return false; 221 234 } 222 235 223 236 switch (addr->resource_type) { ··· 243 236 default: 244 237 return false; 245 238 } 246 - 247 - win->offset = attr->translation_offset; 248 
239 249 240 if (addr->producer_consumer == ACPI_PRODUCER) 250 241 res->flags |= IORESOURCE_WINDOW; ··· 627 622 return (type & types) ? 0 : 1; 628 623 } 629 624 EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type); 630 - 631 - struct reserved_region { 632 - struct list_head node; 633 - u64 start; 634 - u64 end; 635 - }; 636 - 637 - static LIST_HEAD(reserved_io_regions); 638 - static LIST_HEAD(reserved_mem_regions); 639 - 640 - static int request_range(u64 start, u64 end, u8 space_id, unsigned long flags, 641 - char *desc) 642 - { 643 - unsigned int length = end - start + 1; 644 - struct resource *res; 645 - 646 - res = space_id == ACPI_ADR_SPACE_SYSTEM_IO ? 647 - request_region(start, length, desc) : 648 - request_mem_region(start, length, desc); 649 - if (!res) 650 - return -EIO; 651 - 652 - res->flags &= ~flags; 653 - return 0; 654 - } 655 - 656 - static int add_region_before(u64 start, u64 end, u8 space_id, 657 - unsigned long flags, char *desc, 658 - struct list_head *head) 659 - { 660 - struct reserved_region *reg; 661 - int error; 662 - 663 - reg = kmalloc(sizeof(*reg), GFP_KERNEL); 664 - if (!reg) 665 - return -ENOMEM; 666 - 667 - error = request_range(start, end, space_id, flags, desc); 668 - if (error) { 669 - kfree(reg); 670 - return error; 671 - } 672 - 673 - reg->start = start; 674 - reg->end = end; 675 - list_add_tail(&reg->node, head); 676 - return 0; 677 - } 678 - 679 - /** 680 - * acpi_reserve_region - Reserve an I/O or memory region as a system resource. 681 - * @start: Starting address of the region. 682 - * @length: Length of the region. 683 - * @space_id: Identifier of address space to reserve the region from. 684 - * @flags: Resource flags to clear for the region after requesting it. 685 - * @desc: Region description (for messages). 686 - * 687 - * Reserve an I/O or memory region as a system resource to prevent others from 688 - * using it. 
If the new region overlaps with one of the regions (in the given 689 - * address space) already reserved by this routine, only the non-overlapping 690 - * parts of it will be reserved. 691 - * 692 - * Returned is either 0 (success) or a negative error code indicating a resource 693 - * reservation problem. It is the code of the first encountered error, but the 694 - * routine doesn't abort until it has attempted to request all of the parts of 695 - * the new region that don't overlap with other regions reserved previously. 696 - * 697 - * The resources requested by this routine are never released. 698 - */ 699 - int acpi_reserve_region(u64 start, unsigned int length, u8 space_id, 700 - unsigned long flags, char *desc) 701 - { 702 - struct list_head *regions; 703 - struct reserved_region *reg; 704 - u64 end = start + length - 1; 705 - int ret = 0, error = 0; 706 - 707 - if (space_id == ACPI_ADR_SPACE_SYSTEM_IO) 708 - regions = &reserved_io_regions; 709 - else if (space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) 710 - regions = &reserved_mem_regions; 711 - else 712 - return -EINVAL; 713 - 714 - if (list_empty(regions)) 715 - return add_region_before(start, end, space_id, flags, desc, regions); 716 - 717 - list_for_each_entry(reg, regions, node) 718 - if (reg->start == end + 1) { 719 - /* The new region can be prepended to this one. */ 720 - ret = request_range(start, end, space_id, flags, desc); 721 - if (!ret) 722 - reg->start = start; 723 - 724 - return ret; 725 - } else if (reg->start > end) { 726 - /* No overlap. Add the new region here and get out. */ 727 - return add_region_before(start, end, space_id, flags, 728 - desc, &reg->node); 729 - } else if (reg->end == start - 1) { 730 - goto combine; 731 - } else if (reg->end >= start) { 732 - goto overlap; 733 - } 734 - 735 - /* The new region goes after the last existing one. 
*/ 736 - return add_region_before(start, end, space_id, flags, desc, regions); 737 - 738 - overlap: 739 - /* 740 - * The new region overlaps an existing one. 741 - * 742 - * The head part of the new region immediately preceding the existing 743 - * overlapping one can be combined with it right away. 744 - */ 745 - if (reg->start > start) { 746 - error = request_range(start, reg->start - 1, space_id, flags, desc); 747 - if (error) 748 - ret = error; 749 - else 750 - reg->start = start; 751 - } 752 - 753 - combine: 754 - /* 755 - * The new region is adjacent to an existing one. If it extends beyond 756 - * that region all the way to the next one, it is possible to combine 757 - * all three of them. 758 - */ 759 - while (reg->end < end) { 760 - struct reserved_region *next = NULL; 761 - u64 a = reg->end + 1, b = end; 762 - 763 - if (!list_is_last(&reg->node, regions)) { 764 - next = list_next_entry(reg, node); 765 - if (next->start <= end) 766 - b = next->start - 1; 767 - } 768 - error = request_range(a, b, space_id, flags, desc); 769 - if (!error) { 770 - if (next && next->start == b + 1) { 771 - reg->end = next->end; 772 - list_del(&next->node); 773 - kfree(next); 774 - } else { 775 - reg->end = end; 776 - break; 777 - } 778 - } else if (next) { 779 - if (!ret) 780 - ret = error; 781 - 782 - reg = next; 783 - } else { 784 - break; 785 - } 786 - } 787 - 788 - return ret ? ret : error; 789 - } 790 - EXPORT_SYMBOL_GPL(acpi_reserve_region);
+30 -2
drivers/acpi/scan.c
··· 1019 1019 return false; 1020 1020 } 1021 1021 1022 + static bool __acpi_match_device_cls(const struct acpi_device_id *id, 1023 + struct acpi_hardware_id *hwid) 1024 + { 1025 + int i, msk, byte_shift; 1026 + char buf[3]; 1027 + 1028 + if (!id->cls) 1029 + return false; 1030 + 1031 + /* Apply class-code bitmask, before checking each class-code byte */ 1032 + for (i = 1; i <= 3; i++) { 1033 + byte_shift = 8 * (3 - i); 1034 + msk = (id->cls_msk >> byte_shift) & 0xFF; 1035 + if (!msk) 1036 + continue; 1037 + 1038 + sprintf(buf, "%02x", (id->cls >> byte_shift) & msk); 1039 + if (strncmp(buf, &hwid->id[(i - 1) * 2], 2)) 1040 + return false; 1041 + } 1042 + return true; 1043 + } 1044 + 1022 1045 static const struct acpi_device_id *__acpi_match_device( 1023 1046 struct acpi_device *device, 1024 1047 const struct acpi_device_id *ids, ··· 1059 1036 1060 1037 list_for_each_entry(hwid, &device->pnp.ids, list) { 1061 1038 /* First, check the ACPI/PNP IDs provided by the caller. */ 1062 - for (id = ids; id->id[0]; id++) 1063 - if (!strcmp((char *) id->id, hwid->id)) 1039 + for (id = ids; id->id[0] || id->cls; id++) { 1040 + if (id->id[0] && !strcmp((char *) id->id, hwid->id)) 1064 1041 return id; 1042 + else if (id->cls && __acpi_match_device_cls(id, hwid)) 1043 + return id; 1044 + } 1065 1045 1066 1046 /* 1067 1047 * Next, check ACPI_DT_NAMESPACE_HID and try to match the ··· 2127 2101 if (info->valid & ACPI_VALID_UID) 2128 2102 pnp->unique_id = kstrdup(info->unique_id.string, 2129 2103 GFP_KERNEL); 2104 + if (info->valid & ACPI_VALID_CLS) 2105 + acpi_add_id(pnp, info->class_code.string); 2130 2106 2131 2107 kfree(info); 2132 2108
+1 -1
drivers/ata/Kconfig
··· 48 48 49 49 config ATA_ACPI 50 50 bool "ATA ACPI Support" 51 - depends on ACPI && PCI 51 + depends on ACPI 52 52 default y 53 53 help 54 54 This option adds support for ATA-related ACPI objects.
+9
drivers/ata/ahci_platform.c
··· 20 20 #include <linux/platform_device.h> 21 21 #include <linux/libata.h> 22 22 #include <linux/ahci_platform.h> 23 + #include <linux/acpi.h> 24 + #include <linux/pci_ids.h> 23 25 #include "ahci.h" 24 26 25 27 #define DRV_NAME "ahci" ··· 81 79 }; 82 80 MODULE_DEVICE_TABLE(of, ahci_of_match); 83 81 82 + static const struct acpi_device_id ahci_acpi_match[] = { 83 + { ACPI_DEVICE_CLASS(PCI_CLASS_STORAGE_SATA_AHCI, 0xffffff) }, 84 + {}, 85 + }; 86 + MODULE_DEVICE_TABLE(acpi, ahci_acpi_match); 87 + 84 88 static struct platform_driver ahci_driver = { 85 89 .probe = ahci_probe, 86 90 .remove = ata_platform_remove_one, 87 91 .driver = { 88 92 .name = DRV_NAME, 89 93 .of_match_table = ahci_of_match, 94 + .acpi_match_table = ahci_acpi_match, 90 95 .pm = &ahci_pm_ops, 91 96 }, 92 97 };
+2 -2
drivers/ata/pata_arasan_cf.c
··· 4 4 * Arasan Compact Flash host controller source file 5 5 * 6 6 * Copyright (C) 2011 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any ··· 968 968 969 969 module_platform_driver(arasan_cf_driver); 970 970 971 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 971 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>"); 972 972 MODULE_DESCRIPTION("Arasan ATA Compact Flash driver"); 973 973 MODULE_LICENSE("GPL"); 974 974 MODULE_ALIAS("platform:" DRIVER_NAME);
+13 -3
drivers/base/firmware_class.c
··· 563 563 kfree(fw_priv); 564 564 } 565 565 566 - static int firmware_uevent(struct device *dev, struct kobj_uevent_env *env) 566 + static int do_firmware_uevent(struct firmware_priv *fw_priv, struct kobj_uevent_env *env) 567 567 { 568 - struct firmware_priv *fw_priv = to_firmware_priv(dev); 569 - 570 568 if (add_uevent_var(env, "FIRMWARE=%s", fw_priv->buf->fw_id)) 571 569 return -ENOMEM; 572 570 if (add_uevent_var(env, "TIMEOUT=%i", loading_timeout)) ··· 573 575 return -ENOMEM; 574 576 575 577 return 0; 578 + } 579 + 580 + static int firmware_uevent(struct device *dev, struct kobj_uevent_env *env) 581 + { 582 + struct firmware_priv *fw_priv = to_firmware_priv(dev); 583 + int err = 0; 584 + 585 + mutex_lock(&fw_lock); 586 + if (fw_priv->buf) 587 + err = do_firmware_uevent(fw_priv, env); 588 + mutex_unlock(&fw_lock); 589 + return err; 576 590 } 577 591 578 592 static struct class firmware_class = {
+11 -2
drivers/base/power/domain.c
··· 6 6 * This file is released under the GPLv2. 7 7 */ 8 8 9 + #include <linux/delay.h> 9 10 #include <linux/kernel.h> 10 11 #include <linux/io.h> 11 12 #include <linux/platform_device.h> ··· 19 18 #include <linux/sched.h> 20 19 #include <linux/suspend.h> 21 20 #include <linux/export.h> 21 + 22 + #define GENPD_RETRY_MAX_MS 250 /* Approximate */ 22 23 23 24 #define GENPD_DEV_CALLBACK(genpd, type, callback, dev) \ 24 25 ({ \ ··· 2134 2131 static void genpd_dev_pm_detach(struct device *dev, bool power_off) 2135 2132 { 2136 2133 struct generic_pm_domain *pd; 2134 + unsigned int i; 2137 2135 int ret = 0; 2138 2136 2139 2137 pd = pm_genpd_lookup_dev(dev); ··· 2143 2139 2144 2140 dev_dbg(dev, "removing from PM domain %s\n", pd->name); 2145 2141 2146 - while (1) { 2142 + for (i = 1; i < GENPD_RETRY_MAX_MS; i <<= 1) { 2147 2143 ret = pm_genpd_remove_device(pd, dev); 2148 2144 if (ret != -EAGAIN) 2149 2145 break; 2146 + 2147 + mdelay(i); 2150 2148 cond_resched(); 2151 2149 } 2152 2150 ··· 2189 2183 { 2190 2184 struct of_phandle_args pd_args; 2191 2185 struct generic_pm_domain *pd; 2186 + unsigned int i; 2192 2187 int ret; 2193 2188 2194 2189 if (!dev->of_node) ··· 2225 2218 2226 2219 dev_dbg(dev, "adding to PM domain %s\n", pd->name); 2227 2220 2228 - while (1) { 2221 + for (i = 1; i < GENPD_RETRY_MAX_MS; i <<= 1) { 2229 2222 ret = pm_genpd_add_device(pd, dev); 2230 2223 if (ret != -EAGAIN) 2231 2224 break; 2225 + 2226 + mdelay(i); 2232 2227 cond_resched(); 2233 2228 } 2234 2229
+5 -7
drivers/base/power/wakeirq.c
··· 45 45 return -EEXIST; 46 46 } 47 47 48 - dev->power.wakeirq = wirq; 49 - spin_unlock_irqrestore(&dev->power.lock, flags); 50 - 51 48 err = device_wakeup_attach_irq(dev, wirq); 52 - if (err) 53 - return err; 49 + if (!err) 50 + dev->power.wakeirq = wirq; 54 51 55 - return 0; 52 + spin_unlock_irqrestore(&dev->power.lock, flags); 53 + return err; 56 54 } 57 55 58 56 /** ··· 103 105 return; 104 106 105 107 spin_lock_irqsave(&dev->power.lock, flags); 108 + device_wakeup_detach_irq(dev); 106 109 dev->power.wakeirq = NULL; 107 110 spin_unlock_irqrestore(&dev->power.lock, flags); 108 111 109 - device_wakeup_detach_irq(dev); 110 112 if (wirq->dedicated_irq) 111 113 free_irq(wirq->irq, wirq); 112 114 kfree(wirq);
+10 -21
drivers/base/power/wakeup.c
··· 281 281 * Attach a device wakeirq to the wakeup source so the device 282 282 * wake IRQ can be configured automatically for suspend and 283 283 * resume. 284 + * 285 + * Call under the device's power.lock lock. 284 286 */ 285 287 int device_wakeup_attach_irq(struct device *dev, 286 288 struct wake_irq *wakeirq) 287 289 { 288 290 struct wakeup_source *ws; 289 - int ret = 0; 290 291 291 - spin_lock_irq(&dev->power.lock); 292 292 ws = dev->power.wakeup; 293 293 if (!ws) { 294 294 dev_err(dev, "forgot to call call device_init_wakeup?\n"); 295 - ret = -EINVAL; 296 - goto unlock; 295 + return -EINVAL; 297 296 } 298 297 299 - if (ws->wakeirq) { 300 - ret = -EEXIST; 301 - goto unlock; 302 - } 298 + if (ws->wakeirq) 299 + return -EEXIST; 303 300 304 301 ws->wakeirq = wakeirq; 305 - 306 - unlock: 307 - spin_unlock_irq(&dev->power.lock); 308 - 309 - return ret; 302 + return 0; 310 303 } 311 304 312 305 /** ··· 307 314 * @dev: Device to handle 308 315 * 309 316 * Removes a device wakeirq from the wakeup source. 317 + * 318 + * Call under the device's power.lock lock. 310 319 */ 311 320 void device_wakeup_detach_irq(struct device *dev) 312 321 { 313 322 struct wakeup_source *ws; 314 323 315 - spin_lock_irq(&dev->power.lock); 316 324 ws = dev->power.wakeup; 317 - if (!ws) 318 - goto unlock; 319 - 320 - ws->wakeirq = NULL; 321 - 322 - unlock: 323 - spin_unlock_irq(&dev->power.lock); 325 + if (ws) 326 + ws->wakeirq = NULL; 324 327 } 325 328 326 329 /**
+11 -2
drivers/block/nvme-core.c
··· 2108 2108 goto out_free_disk; 2109 2109 2110 2110 add_disk(ns->disk); 2111 - if (ns->ms) 2112 - revalidate_disk(ns->disk); 2111 + if (ns->ms) { 2112 + struct block_device *bd = bdget_disk(ns->disk, 0); 2113 + if (!bd) 2114 + return; 2115 + if (blkdev_get(bd, FMODE_READ, NULL)) { 2116 + bdput(bd); 2117 + return; 2118 + } 2119 + blkdev_reread_part(bd); 2120 + blkdev_put(bd, FMODE_READ); 2121 + } 2113 2122 return; 2114 2123 out_free_disk: 2115 2124 kfree(disk);
+2 -1
drivers/char/tpm/tpm-chip.c
··· 129 129 130 130 device_initialize(&chip->dev); 131 131 132 - chip->cdev.owner = chip->pdev->driver->owner; 133 132 cdev_init(&chip->cdev, &tpm_fops); 133 + chip->cdev.owner = chip->pdev->driver->owner; 134 + chip->cdev.kobj.parent = &chip->dev.kobj; 134 135 135 136 return chip; 136 137 }
+8
drivers/char/tpm/tpm_crb.c
··· 233 233 return -ENODEV; 234 234 } 235 235 236 + /* At least some versions of AMI BIOS have a bug that TPM2 table has 237 + * zero address for the control area and therefore we must fail. 238 + */ 239 + if (!buf->control_area_pa) { 240 + dev_err(dev, "TPM2 ACPI table has a zero address for the control area\n"); 241 + return -EINVAL; 242 + } 243 + 236 244 if (buf->hdr.length < sizeof(struct acpi_tpm2)) { 237 245 dev_err(dev, "TPM2 ACPI table has wrong size"); 238 246 return -EINVAL;
+3 -1
drivers/clk/at91/clk-h32mx.c
··· 116 116 h32mxclk->pmc = pmc; 117 117 118 118 clk = clk_register(NULL, &h32mxclk->hw); 119 - if (!clk) 119 + if (!clk) { 120 + kfree(h32mxclk); 120 121 return; 122 + } 121 123 122 124 of_clk_add_provider(np, of_clk_src_simple_get, clk); 123 125 }
+3 -1
drivers/clk/at91/clk-main.c
··· 171 171 irq_set_status_flags(osc->irq, IRQ_NOAUTOEN); 172 172 ret = request_irq(osc->irq, clk_main_osc_irq_handler, 173 173 IRQF_TRIGGER_HIGH, name, osc); 174 - if (ret) 174 + if (ret) { 175 + kfree(osc); 175 176 return ERR_PTR(ret); 177 + } 176 178 177 179 if (bypass) 178 180 pmc_write(pmc, AT91_CKGR_MOR,
+6 -2
drivers/clk/at91/clk-master.c
··· 165 165 irq_set_status_flags(master->irq, IRQ_NOAUTOEN); 166 166 ret = request_irq(master->irq, clk_master_irq_handler, 167 167 IRQF_TRIGGER_HIGH, "clk-master", master); 168 - if (ret) 168 + if (ret) { 169 + kfree(master); 169 170 return ERR_PTR(ret); 171 + } 170 172 171 173 clk = clk_register(NULL, &master->hw); 172 - if (IS_ERR(clk)) 174 + if (IS_ERR(clk)) { 175 + free_irq(master->irq, master); 173 176 kfree(master); 177 + } 174 178 175 179 return clk; 176 180 }
+6 -2
drivers/clk/at91/clk-pll.c
··· 346 346 irq_set_status_flags(pll->irq, IRQ_NOAUTOEN); 347 347 ret = request_irq(pll->irq, clk_pll_irq_handler, IRQF_TRIGGER_HIGH, 348 348 id ? "clk-pllb" : "clk-plla", pll); 349 - if (ret) 349 + if (ret) { 350 + kfree(pll); 350 351 return ERR_PTR(ret); 352 + } 351 353 352 354 clk = clk_register(NULL, &pll->hw); 353 - if (IS_ERR(clk)) 355 + if (IS_ERR(clk)) { 356 + free_irq(pll->irq, pll); 354 357 kfree(pll); 358 + } 355 359 356 360 return clk; 357 361 }
+6 -2
drivers/clk/at91/clk-system.c
··· 130 130 irq_set_status_flags(sys->irq, IRQ_NOAUTOEN); 131 131 ret = request_irq(sys->irq, clk_system_irq_handler, 132 132 IRQF_TRIGGER_HIGH, name, sys); 133 - if (ret) 133 + if (ret) { 134 + kfree(sys); 134 135 return ERR_PTR(ret); 136 + } 135 137 } 136 138 137 139 clk = clk_register(NULL, &sys->hw); 138 - if (IS_ERR(clk)) 140 + if (IS_ERR(clk)) { 141 + free_irq(sys->irq, sys); 139 142 kfree(sys); 143 + } 140 144 141 145 return clk; 142 146 }
+6 -2
drivers/clk/at91/clk-utmi.c
··· 118 118 irq_set_status_flags(utmi->irq, IRQ_NOAUTOEN); 119 119 ret = request_irq(utmi->irq, clk_utmi_irq_handler, 120 120 IRQF_TRIGGER_HIGH, "clk-utmi", utmi); 121 - if (ret) 121 + if (ret) { 122 + kfree(utmi); 122 123 return ERR_PTR(ret); 124 + } 123 125 124 126 clk = clk_register(NULL, &utmi->hw); 125 - if (IS_ERR(clk)) 127 + if (IS_ERR(clk)) { 128 + free_irq(utmi->irq, utmi); 126 129 kfree(utmi); 130 + } 127 131 128 132 return clk; 129 133 }
+1 -5
drivers/clk/bcm/clk-iproc-asiu.c
··· 222 222 struct iproc_asiu_clk *asiu_clk; 223 223 const char *clk_name; 224 224 225 - clk_name = kzalloc(IPROC_CLK_NAME_LEN, GFP_KERNEL); 226 - if (WARN_ON(!clk_name)) 227 - goto err_clk_register; 228 - 229 225 ret = of_property_read_string_index(node, "clock-output-names", 230 226 i, &clk_name); 231 227 if (WARN_ON(ret)) ··· 255 259 256 260 err_clk_register: 257 261 for (i = 0; i < num_clks; i++) 258 - kfree(asiu->clks[i].name); 262 + clk_unregister(asiu->clk_data.clks[i]); 259 263 iounmap(asiu->gate_base); 260 264 261 265 err_iomap_gate:
+4 -9
drivers/clk/bcm/clk-iproc-pll.c
··· 366 366 val = readl(pll->pll_base + ctrl->ndiv_int.offset); 367 367 ndiv_int = (val >> ctrl->ndiv_int.shift) & 368 368 bit_mask(ctrl->ndiv_int.width); 369 - ndiv = ndiv_int << ctrl->ndiv_int.shift; 369 + ndiv = (u64)ndiv_int << ctrl->ndiv_int.shift; 370 370 371 371 if (ctrl->flags & IPROC_CLK_PLL_HAS_NDIV_FRAC) { 372 372 val = readl(pll->pll_base + ctrl->ndiv_frac.offset); ··· 374 374 bit_mask(ctrl->ndiv_frac.width); 375 375 376 376 if (ndiv_frac != 0) 377 - ndiv = (ndiv_int << ctrl->ndiv_int.shift) | ndiv_frac; 377 + ndiv = ((u64)ndiv_int << ctrl->ndiv_int.shift) | 378 + ndiv_frac; 378 379 } 379 380 380 381 val = readl(pll->pll_base + ctrl->pdiv.offset); ··· 656 655 memset(&init, 0, sizeof(init)); 657 656 parent_name = node->name; 658 657 659 - clk_name = kzalloc(IPROC_CLK_NAME_LEN, GFP_KERNEL); 660 - if (WARN_ON(!clk_name)) 661 - goto err_clk_register; 662 - 663 658 ret = of_property_read_string_index(node, "clock-output-names", 664 659 i, &clk_name); 665 660 if (WARN_ON(ret)) ··· 687 690 return; 688 691 689 692 err_clk_register: 690 - for (i = 0; i < num_clks; i++) { 691 - kfree(pll->clks[i].name); 693 + for (i = 0; i < num_clks; i++) 692 694 clk_unregister(pll->clk_data.clks[i]); 693 - } 694 695 695 696 err_pll_register: 696 697 if (pll->asiu_base)
+1 -1
drivers/clk/clk-stm32f4.c
··· 268 268 memcpy(table, stm32f42xx_gate_map, sizeof(table)); 269 269 270 270 /* only bits set in table can be used as indices */ 271 - if (WARN_ON(secondary > 8 * sizeof(table) || 271 + if (WARN_ON(secondary >= BITS_PER_BYTE * sizeof(table) || 272 272 0 == (table[BIT_ULL_WORD(secondary)] & 273 273 BIT_ULL_MASK(secondary)))) 274 274 return -EINVAL;
+21 -5
drivers/clk/mediatek/clk-mt8173.c
··· 700 700 MUX(CLK_PERI_UART3_SEL, "uart3_ck_sel", uart_ck_sel_parents, 0x40c, 3, 1), 701 701 }; 702 702 703 + static struct clk_onecell_data *mt8173_top_clk_data __initdata; 704 + static struct clk_onecell_data *mt8173_pll_clk_data __initdata; 705 + 706 + static void __init mtk_clk_enable_critical(void) 707 + { 708 + if (!mt8173_top_clk_data || !mt8173_pll_clk_data) 709 + return; 710 + 711 + clk_prepare_enable(mt8173_pll_clk_data->clks[CLK_APMIXED_ARMCA15PLL]); 712 + clk_prepare_enable(mt8173_pll_clk_data->clks[CLK_APMIXED_ARMCA7PLL]); 713 + clk_prepare_enable(mt8173_top_clk_data->clks[CLK_TOP_MEM_SEL]); 714 + clk_prepare_enable(mt8173_top_clk_data->clks[CLK_TOP_DDRPHYCFG_SEL]); 715 + clk_prepare_enable(mt8173_top_clk_data->clks[CLK_TOP_CCI400_SEL]); 716 + clk_prepare_enable(mt8173_top_clk_data->clks[CLK_TOP_RTC_SEL]); 717 + } 718 + 703 719 static void __init mtk_topckgen_init(struct device_node *node) 704 720 { 705 721 struct clk_onecell_data *clk_data; ··· 728 712 return; 729 713 } 730 714 731 - clk_data = mtk_alloc_clk_data(CLK_TOP_NR_CLK); 715 + mt8173_top_clk_data = clk_data = mtk_alloc_clk_data(CLK_TOP_NR_CLK); 732 716 733 717 mtk_clk_register_factors(root_clk_alias, ARRAY_SIZE(root_clk_alias), clk_data); 734 718 mtk_clk_register_factors(top_divs, ARRAY_SIZE(top_divs), clk_data); 735 719 mtk_clk_register_composites(top_muxes, ARRAY_SIZE(top_muxes), base, 736 720 &mt8173_clk_lock, clk_data); 737 721 738 - clk_prepare_enable(clk_data->clks[CLK_TOP_CCI400_SEL]); 739 - 740 722 r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data); 741 723 if (r) 742 724 pr_err("%s(): could not register clock provider: %d\n", 743 725 __func__, r); 726 + 727 + mtk_clk_enable_critical(); 744 728 } 745 729 CLK_OF_DECLARE(mtk_topckgen, "mediatek,mt8173-topckgen", mtk_topckgen_init); 746 730 ··· 834 818 { 835 819 struct clk_onecell_data *clk_data; 836 820 837 - clk_data = mtk_alloc_clk_data(CLK_APMIXED_NR_CLK); 821 + mt8173_pll_clk_data = clk_data = mtk_alloc_clk_data(CLK_APMIXED_NR_CLK); 838 822 if (!clk_data) 839 823 return; 840 824 841 825 mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data); 842 826 843 - clk_prepare_enable(clk_data->clks[CLK_APMIXED_ARMCA15PLL]); 827 + mtk_clk_enable_critical(); 844 828 } 845 829 CLK_OF_DECLARE(mtk_apmixedsys, "mediatek,mt8173-apmixedsys", 846 830 mtk_apmixedsys_init);
+3 -6
drivers/clk/qcom/clk-rcg2.c
··· 530 530 struct clk_rcg2 *rcg = to_clk_rcg2(hw); 531 531 struct freq_tbl f = *rcg->freq_tbl; 532 532 const struct frac_entry *frac = frac_table_pixel; 533 - unsigned long request, src_rate; 533 + unsigned long request; 534 534 int delta = 100000; 535 535 u32 mask = BIT(rcg->hid_width) - 1; 536 536 u32 hid_div; 537 - int index = qcom_find_src_index(hw, rcg->parent_map, f.src); 538 - struct clk *parent = clk_get_parent_by_index(hw->clk, index); 539 537 540 538 for (; frac->num; frac++) { 541 539 request = (rate * frac->den) / frac->num; 542 540 543 - src_rate = __clk_round_rate(parent, request); 544 - if ((src_rate < (request - delta)) || 545 - (src_rate > (request + delta))) 541 + if ((parent_rate < (request - delta)) || 542 + (parent_rate > (request + delta))) 546 543 continue; 547 544 548 545 regmap_read(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG,
+1 -1
drivers/clk/spear/clk-aux-synth.c
··· 1 1 /* 2 2 * Copyright (C) 2012 ST Microelectronics 3 - * Viresh Kumar <viresh.linux@gmail.com> 3 + * Viresh Kumar <vireshk@kernel.org> 4 4 * 5 5 * This file is licensed under the terms of the GNU General Public 6 6 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/clk-frac-synth.c
··· 1 1 /* 2 2 * Copyright (C) 2012 ST Microelectronics 3 - * Viresh Kumar <viresh.linux@gmail.com> 3 + * Viresh Kumar <vireshk@kernel.org> 4 4 * 5 5 * This file is licensed under the terms of the GNU General Public 6 6 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/clk-gpt-synth.c
··· 1 1 /* 2 2 * Copyright (C) 2012 ST Microelectronics 3 - * Viresh Kumar <viresh.linux@gmail.com> 3 + * Viresh Kumar <vireshk@kernel.org> 4 4 * 5 5 * This file is licensed under the terms of the GNU General Public 6 6 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/clk-vco-pll.c
··· 1 1 /* 2 2 * Copyright (C) 2012 ST Microelectronics 3 - * Viresh Kumar <viresh.linux@gmail.com> 3 + * Viresh Kumar <vireshk@kernel.org> 4 4 * 5 5 * This file is licensed under the terms of the GNU General Public 6 6 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/clk.c
··· 1 1 /* 2 2 * Copyright (C) 2012 ST Microelectronics 3 - * Viresh Kumar <viresh.linux@gmail.com> 3 + * Viresh Kumar <vireshk@kernel.org> 4 4 * 5 5 * This file is licensed under the terms of the GNU General Public 6 6 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/clk.h
··· 2 2 * Clock framework definitions for SPEAr platform 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/spear1310_clock.c
··· 4 4 * SPEAr1310 machine clock framework source file 5 5 * 6 6 * Copyright (C) 2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/spear1340_clock.c
··· 4 4 * SPEAr1340 machine clock framework source file 5 5 * 6 6 * Copyright (C) 2012 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/spear3xx_clock.c
··· 2 2 * SPEAr3xx machines clock framework source file 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/clk/spear/spear6xx_clock.c
··· 2 2 * SPEAr6xx machines clock framework source file 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any
+3 -1
drivers/clk/st/clk-flexgen.c
··· 190 190 191 191 init.name = name; 192 192 init.ops = &flexgen_ops; 193 - init.flags = CLK_IS_BASIC | flexgen_flags; 193 + init.flags = CLK_IS_BASIC | CLK_GET_RATE_NOCACHE | flexgen_flags; 194 194 init.parent_names = parent_names; 195 195 init.num_parents = num_parents; 196 196 ··· 302 302 rlock = kzalloc(sizeof(spinlock_t), GFP_KERNEL); 303 303 if (!rlock) 304 304 goto err; 305 + 306 + spin_lock_init(rlock); 305 307 306 308 for (i = 0; i < clk_data->clk_num; i++) { 307 309 struct clk *clk;
+4 -8
drivers/clk/st/clkgen-fsyn.c
··· 340 340 CLKGEN_FIELD(0x30c, 0xf, 20), 341 341 CLKGEN_FIELD(0x310, 0xf, 20) }, 342 342 .lockstatus_present = true, 343 - .lock_status = CLKGEN_FIELD(0x2A0, 0x1, 24), 343 + .lock_status = CLKGEN_FIELD(0x2f0, 0x1, 24), 344 344 .powerup_polarity = 1, 345 345 .standby_polarity = 1, 346 346 .pll_ops = &st_quadfs_pll_c32_ops, ··· 489 489 struct st_clk_quadfs_pll *pll = to_quadfs_pll(hw); 490 490 u32 npda = CLKGEN_READ(pll, npda); 491 491 492 - return !!npda; 492 + return pll->data->powerup_polarity ? !npda : !!npda; 493 493 } 494 494 495 495 static int clk_fs660c32_vco_get_rate(unsigned long input, struct stm_fs *fs, ··· 635 635 636 636 init.name = name; 637 637 init.ops = quadfs->pll_ops; 638 - init.flags = CLK_IS_BASIC; 638 + init.flags = CLK_IS_BASIC | CLK_GET_RATE_NOCACHE; 639 639 init.parent_names = &parent_name; 640 640 init.num_parents = 1; 641 641 ··· 774 774 if (fs->lock) 775 775 spin_lock_irqsave(fs->lock, flags); 776 776 777 - CLKGEN_WRITE(fs, nsb[fs->chan], !fs->data->standby_polarity); 777 + CLKGEN_WRITE(fs, nsb[fs->chan], fs->data->standby_polarity); 778 778 779 779 if (fs->lock) 780 780 spin_unlock_irqrestore(fs->lock, flags); ··· 1081 1081 { 1082 1082 .compatible = "st,stih407-quadfs660-D", 1083 1083 .data = &st_fs660c32_D_407 1084 - }, 1085 - { 1086 - .compatible = "st,stih407-quadfs660-D", 1087 - .data = (void *)&st_fs660c32_D_407 1088 1084 }, 1089 1085 {} 1090 1086 };
+6 -4
drivers/clk/st/clkgen-mux.c
··· 237 237 238 238 init.name = name; 239 239 init.ops = &clkgena_divmux_ops; 240 - init.flags = CLK_IS_BASIC; 240 + init.flags = CLK_IS_BASIC | CLK_GET_RATE_NOCACHE; 241 241 init.parent_names = parent_names; 242 242 init.num_parents = num_parents; 243 243 ··· 513 513 0, &clk_name)) 514 514 return; 515 515 516 - clk = clk_register_divider_table(NULL, clk_name, parent_name, 0, 516 + clk = clk_register_divider_table(NULL, clk_name, parent_name, 517 + CLK_GET_RATE_NOCACHE, 517 518 reg + data->offset, data->shift, 1, 518 519 0, data->table, NULL); 519 520 if (IS_ERR(clk)) ··· 583 582 }; 584 583 static struct clkgen_mux_data stih407_a9_mux_data = { 585 584 .offset = 0x1a4, 586 - .shift = 1, 585 + .shift = 0, 587 586 .width = 2, 588 587 }; 589 588 ··· 787 786 &mux->hw, &clk_mux_ops, 788 787 &div->hw, &clk_divider_ops, 789 788 &gate->hw, &clk_gate_ops, 790 - data->clk_flags); 789 + data->clk_flags | 790 + CLK_GET_RATE_NOCACHE); 791 791 if (IS_ERR(clk)) { 792 792 kfree(gate); 793 793 kfree(div);
+1 -1
drivers/clk/st/clkgen-pll.c
··· 406 406 init.name = clk_name; 407 407 init.ops = pll_data->ops; 408 408 409 - init.flags = CLK_IS_BASIC; 409 + init.flags = CLK_IS_BASIC | CLK_GET_RATE_NOCACHE; 410 410 init.parent_names = &parent_name; 411 411 init.num_parents = 1; 412 412
+1
drivers/clk/sunxi/clk-sunxi.c
··· 1391 1391 CLK_OF_DECLARE(sun6i_a31_clk_init, "allwinner,sun6i-a31", sun6i_init_clocks); 1392 1392 CLK_OF_DECLARE(sun6i_a31s_clk_init, "allwinner,sun6i-a31s", sun6i_init_clocks); 1393 1393 CLK_OF_DECLARE(sun8i_a23_clk_init, "allwinner,sun8i-a23", sun6i_init_clocks); 1394 + CLK_OF_DECLARE(sun8i_a33_clk_init, "allwinner,sun8i-a33", sun6i_init_clocks); 1394 1395 1395 1396 static void __init sun9i_init_clocks(struct device_node *node) 1396 1397 {
+1
drivers/clocksource/timer-imx-gpt.c
··· 529 529 530 530 CLOCKSOURCE_OF_DECLARE(imx1_timer, "fsl,imx1-gpt", imx1_timer_init_dt); 531 531 CLOCKSOURCE_OF_DECLARE(imx21_timer, "fsl,imx21-gpt", imx21_timer_init_dt); 532 + CLOCKSOURCE_OF_DECLARE(imx27_timer, "fsl,imx27-gpt", imx21_timer_init_dt); 532 533 CLOCKSOURCE_OF_DECLARE(imx31_timer, "fsl,imx31-gpt", imx31_timer_init_dt); 533 534 CLOCKSOURCE_OF_DECLARE(imx25_timer, "fsl,imx25-gpt", imx31_timer_init_dt); 534 535 CLOCKSOURCE_OF_DECLARE(imx50_timer, "fsl,imx50-gpt", imx31_timer_init_dt);
+10
drivers/cpufreq/cpufreq.c
··· 169 169 } 170 170 EXPORT_SYMBOL_GPL(get_governor_parent_kobj); 171 171 172 + struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu) 173 + { 174 + struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu); 175 + 176 + return policy && !policy_is_inactive(policy) ? 177 + policy->freq_table : NULL; 178 + } 179 + EXPORT_SYMBOL_GPL(cpufreq_frequency_get_table); 180 + 172 181 static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall) 173 182 { 174 183 u64 idle_time; ··· 1141 1132 1142 1133 down_write(&policy->rwsem); 1143 1134 policy->cpu = cpu; 1135 + policy->governor = NULL; 1144 1136 up_write(&policy->rwsem); 1145 1137 } 1146 1138
-9
drivers/cpufreq/freq_table.c
··· 297 297 } 298 298 EXPORT_SYMBOL_GPL(cpufreq_table_validate_and_show); 299 299 300 - struct cpufreq_policy *cpufreq_cpu_get_raw(unsigned int cpu); 301 - 302 - struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu) 303 - { 304 - struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu); 305 - return policy ? policy->freq_table : NULL; 306 - } 307 - EXPORT_SYMBOL_GPL(cpufreq_frequency_get_table); 308 - 309 300 MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>"); 310 301 MODULE_DESCRIPTION("CPUfreq frequency table helpers"); 311 302 MODULE_LICENSE("GPL");
+1 -1
drivers/cpufreq/loongson2_cpufreq.c
··· 3 3 * 4 4 * The 2E revision of loongson processor not support this feature. 5 5 * 6 - * Copyright (C) 2006 - 2008 Lemote Inc. & Insititute of Computing Technology 6 + * Copyright (C) 2006 - 2008 Lemote Inc. & Institute of Computing Technology 7 7 * Author: Yanhua, yanh@lemote.com 8 8 * 9 9 * This file is subject to the terms and conditions of the GNU General Public
+7 -2
drivers/cpuidle/cpuidle.c
··· 112 112 static void enter_freeze_proper(struct cpuidle_driver *drv, 113 113 struct cpuidle_device *dev, int index) 114 114 { 115 - tick_freeze(); 115 + /* 116 + * trace_suspend_resume() called by tick_freeze() for the last CPU 117 + * executing it contains RCU usage regarded as invalid in the idle 118 + * context, so tell RCU about that. 119 + */ 120 + RCU_NONIDLE(tick_freeze()); 116 121 /* 117 122 * The state used here cannot be a "coupled" one, because the "coupled" 118 123 * cpuidle mechanism enables interrupts and doing that with timekeeping ··· 127 122 WARN_ON(!irqs_disabled()); 128 123 /* 129 124 * timekeeping_resume() that will be called by tick_unfreeze() for the 130 - * last CPU executing it calls functions containing RCU read-side 125 + * first CPU executing it calls functions containing RCU read-side 131 126 * critical sections, so tell RCU about that. 132 127 */ 133 128 RCU_NONIDLE(tick_unfreeze());
+4 -2
drivers/crypto/nx/nx-aes-ccm.c
··· 494 494 static int ccm4309_aes_nx_encrypt(struct aead_request *req) 495 495 { 496 496 struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm); 497 + struct nx_gcm_rctx *rctx = aead_request_ctx(req); 497 498 struct blkcipher_desc desc; 498 - u8 *iv = nx_ctx->priv.ccm.iv; 499 + u8 *iv = rctx->iv; 499 500 500 501 iv[0] = 3; 501 502 memcpy(iv + 1, nx_ctx->priv.ccm.nonce, 3); ··· 526 525 static int ccm4309_aes_nx_decrypt(struct aead_request *req) 527 526 { 528 527 struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm); 528 + struct nx_gcm_rctx *rctx = aead_request_ctx(req); 529 529 struct blkcipher_desc desc; 530 - u8 *iv = nx_ctx->priv.ccm.iv; 530 + u8 *iv = rctx->iv; 531 531 532 532 iv[0] = 3; 533 533 memcpy(iv + 1, nx_ctx->priv.ccm.nonce, 3);
+4 -3
drivers/crypto/nx/nx-aes-ctr.c
··· 72 72 if (key_len < CTR_RFC3686_NONCE_SIZE) 73 73 return -EINVAL; 74 74 75 - memcpy(nx_ctx->priv.ctr.iv, 75 + memcpy(nx_ctx->priv.ctr.nonce, 76 76 in_key + key_len - CTR_RFC3686_NONCE_SIZE, 77 77 CTR_RFC3686_NONCE_SIZE); 78 78 ··· 131 131 unsigned int nbytes) 132 132 { 133 133 struct nx_crypto_ctx *nx_ctx = crypto_blkcipher_ctx(desc->tfm); 134 - u8 *iv = nx_ctx->priv.ctr.iv; 134 + u8 iv[16]; 135 135 136 + memcpy(iv, nx_ctx->priv.ctr.nonce, CTR_RFC3686_IV_SIZE); 136 137 memcpy(iv + CTR_RFC3686_NONCE_SIZE, 137 138 desc->info, CTR_RFC3686_IV_SIZE); 138 139 iv[12] = iv[13] = iv[14] = 0; 139 140 iv[15] = 1; 140 141 141 - desc->info = nx_ctx->priv.ctr.iv; 142 + desc->info = iv; 142 143 143 144 return ctr_aes_nx_crypt(desc, dst, src, nbytes); 144 145 }
+10 -7
drivers/crypto/nx/nx-aes-gcm.c
··· 317 317 static int gcm_aes_nx_crypt(struct aead_request *req, int enc) 318 318 { 319 319 struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm); 320 + struct nx_gcm_rctx *rctx = aead_request_ctx(req); 320 321 struct nx_csbcpb *csbcpb = nx_ctx->csbcpb; 321 322 struct blkcipher_desc desc; 322 323 unsigned int nbytes = req->cryptlen; ··· 327 326 328 327 spin_lock_irqsave(&nx_ctx->lock, irq_flags); 329 328 330 - desc.info = nx_ctx->priv.gcm.iv; 329 + desc.info = rctx->iv; 331 330 /* initialize the counter */ 332 331 *(u32 *)(desc.info + NX_GCM_CTR_OFFSET) = 1; 333 332 ··· 425 424 426 425 static int gcm_aes_nx_encrypt(struct aead_request *req) 427 426 { 428 - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm); 429 - char *iv = nx_ctx->priv.gcm.iv; 427 + struct nx_gcm_rctx *rctx = aead_request_ctx(req); 428 + char *iv = rctx->iv; 430 429 431 430 memcpy(iv, req->iv, 12); 432 431 ··· 435 434 436 435 static int gcm_aes_nx_decrypt(struct aead_request *req) 437 436 { 438 - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm); 439 - char *iv = nx_ctx->priv.gcm.iv; 437 + struct nx_gcm_rctx *rctx = aead_request_ctx(req); 438 + char *iv = rctx->iv; 440 439 441 440 memcpy(iv, req->iv, 12); 442 441 ··· 446 445 static int gcm4106_aes_nx_encrypt(struct aead_request *req) 447 446 { 448 447 struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm); 449 - char *iv = nx_ctx->priv.gcm.iv; 448 + struct nx_gcm_rctx *rctx = aead_request_ctx(req); 449 + char *iv = rctx->iv; 450 450 char *nonce = nx_ctx->priv.gcm.nonce; 451 451 452 452 memcpy(iv, nonce, NX_GCM4106_NONCE_LEN); ··· 459 457 static int gcm4106_aes_nx_decrypt(struct aead_request *req) 460 458 { 461 459 struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm); 462 - char *iv = nx_ctx->priv.gcm.iv; 460 + struct nx_gcm_rctx *rctx = aead_request_ctx(req); 461 + char *iv = rctx->iv; 463 462 char *nonce = nx_ctx->priv.gcm.nonce; 464 463 465 464 memcpy(iv, nonce, NX_GCM4106_NONCE_LEN);
+44 -26
drivers/crypto/nx/nx-aes-xcbc.c
··· 42 42 unsigned int key_len) 43 43 { 44 44 struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(desc); 45 + struct nx_csbcpb *csbcpb = nx_ctx->csbcpb; 45 46 46 47 switch (key_len) { 47 48 case AES_KEYSIZE_128: ··· 52 51 return -EINVAL; 53 52 } 54 53 55 - memcpy(nx_ctx->priv.xcbc.key, in_key, key_len); 54 + memcpy(csbcpb->cpb.aes_xcbc.key, in_key, key_len); 56 55 57 56 return 0; 58 57 } ··· 149 148 return rc; 150 149 } 151 150 152 - static int nx_xcbc_init(struct shash_desc *desc) 151 + static int nx_crypto_ctx_aes_xcbc_init2(struct crypto_tfm *tfm) 153 152 { 154 - struct xcbc_state *sctx = shash_desc_ctx(desc); 155 - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); 153 + struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm); 156 154 struct nx_csbcpb *csbcpb = nx_ctx->csbcpb; 157 - struct nx_sg *out_sg; 158 - int len; 155 + int err; 156 + 157 + err = nx_crypto_ctx_aes_xcbc_init(tfm); 158 + if (err) 159 + return err; 159 160 160 161 nx_ctx_init(nx_ctx, HCOP_FC_AES); 161 - 162 - memset(sctx, 0, sizeof *sctx); 163 162 164 163 NX_CPB_SET_KEY_SIZE(csbcpb, NX_KS_AES_128); 165 164 csbcpb->cpb.hdr.mode = NX_MODE_AES_XCBC_MAC; 166 165 167 - memcpy(csbcpb->cpb.aes_xcbc.key, nx_ctx->priv.xcbc.key, AES_BLOCK_SIZE); 168 - memset(nx_ctx->priv.xcbc.key, 0, sizeof *nx_ctx->priv.xcbc.key); 166 + return 0; 167 + } 169 168 170 - len = AES_BLOCK_SIZE; 171 - out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state, 172 - &len, nx_ctx->ap->sglen); 169 + static int nx_xcbc_init(struct shash_desc *desc) 170 + { 171 + struct xcbc_state *sctx = shash_desc_ctx(desc); 173 172 174 - if (len != AES_BLOCK_SIZE) 175 - return -EINVAL; 176 - 177 - nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg); 173 + memset(sctx, 0, sizeof *sctx); 178 174 179 175 return 0; 180 176 } ··· 184 186 struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); 185 187 struct nx_csbcpb *csbcpb = nx_ctx->csbcpb; 186 188 struct nx_sg *in_sg; 189 + struct nx_sg *out_sg; 187 190 u32 to_process = 0, leftover, total; 188 191 unsigned int max_sg_len; 189 192 unsigned long irq_flags; ··· 212 213 max_sg_len = min_t(u64, max_sg_len, 213 214 nx_ctx->ap->databytelen/NX_PAGE_SIZE); 214 215 216 + data_len = AES_BLOCK_SIZE; 217 + out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state, 218 + &len, nx_ctx->ap->sglen); 219 + 220 + if (data_len != AES_BLOCK_SIZE) { 221 + rc = -EINVAL; 222 + goto out; 223 + } 224 + 225 + nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg); 226 + 215 227 do { 216 228 to_process = total - to_process; 217 229 to_process = to_process & ~(AES_BLOCK_SIZE - 1); ··· 245 235 (u8 *) sctx->buffer, 246 236 &data_len, 247 237 max_sg_len); 248 - if (data_len != sctx->count) 249 - return -EINVAL; 238 + if (data_len != sctx->count) { 239 + rc = -EINVAL; 240 + goto out; 241 + } 250 242 } 251 243 252 244 data_len = to_process - sctx->count; ··· 257 245 &data_len, 258 246 max_sg_len); 259 247 260 - if (data_len != to_process - sctx->count) 261 - return -EINVAL; 248 + if (data_len != to_process - sctx->count) { 249 + rc = -EINVAL; 250 + goto out; 251 + } 262 252 263 253 nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * 264 254 sizeof(struct nx_sg); ··· 339 325 in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buffer, 340 326 &len, nx_ctx->ap->sglen); 341 327 342 - if (len != sctx->count) 343 - return -EINVAL; 328 + if (len != sctx->count) { 329 + rc = -EINVAL; 330 + goto out; 331 + } 344 332 345 333 len = AES_BLOCK_SIZE; 346 334 out_sg = nx_build_sg_list(nx_ctx->out_sg, out, &len, 347 335 nx_ctx->ap->sglen); 348 336 349 - if (len != AES_BLOCK_SIZE) 350 - return -EINVAL; 337 + if (len != AES_BLOCK_SIZE) { 338 + rc = -EINVAL; 339 + goto out; 340 + } 351 341 352 342 nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg); 353 343 nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg); ··· 390 372 .cra_blocksize = AES_BLOCK_SIZE, 391 373 .cra_module = THIS_MODULE, 392 374 .cra_ctxsize = sizeof(struct nx_crypto_ctx), 393 - .cra_init = nx_crypto_ctx_aes_xcbc_init, 375 + .cra_init = nx_crypto_ctx_aes_xcbc_init2, 394 376 .cra_exit = nx_crypto_ctx_exit, 395 377 } 396 378 };
+24 -19
drivers/crypto/nx/nx-sha256.c
··· 29 29 #include "nx.h" 30 30 31 31 32 - static int nx_sha256_init(struct shash_desc *desc) 32 + static int nx_crypto_ctx_sha256_init(struct crypto_tfm *tfm) 33 33 { 34 - struct sha256_state *sctx = shash_desc_ctx(desc); 35 - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); 36 - struct nx_sg *out_sg; 37 - int len; 38 - u32 max_sg_len; 34 + struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm); 35 + int err; 36 + 37 + err = nx_crypto_ctx_sha_init(tfm); 38 + if (err) 39 + return err; 39 40 40 41 nx_ctx_init(nx_ctx, HCOP_FC_SHA); 41 - 42 - memset(sctx, 0, sizeof *sctx); 43 42 44 43 nx_ctx->ap = &nx_ctx->props[NX_PROPS_SHA256]; 45 44 46 45 NX_CPB_SET_DIGEST_SIZE(nx_ctx->csbcpb, NX_DS_SHA256); 47 46 48 - max_sg_len = min_t(u64, nx_ctx->ap->sglen, 49 - nx_driver.of.max_sg_len/sizeof(struct nx_sg)); 50 - max_sg_len = min_t(u64, max_sg_len, 51 - nx_ctx->ap->databytelen/NX_PAGE_SIZE); 47 + return 0; 48 + } 52 49 53 - len = SHA256_DIGEST_SIZE; 54 - out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state, 55 - &len, max_sg_len); 56 - nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg); 50 + static int nx_sha256_init(struct shash_desc *desc) { 51 + struct sha256_state *sctx = shash_desc_ctx(desc); 57 52 58 - if (len != SHA256_DIGEST_SIZE) 59 - return -EINVAL; 53 + memset(sctx, 0, sizeof *sctx); 60 54 61 55 sctx->state[0] = __cpu_to_be32(SHA256_H0); 62 56 sctx->state[1] = __cpu_to_be32(SHA256_H1); ··· 72 78 struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); 73 79 struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb; 74 80 struct nx_sg *in_sg; 81 + struct nx_sg *out_sg; 75 82 u64 to_process = 0, leftover, total; 76 83 unsigned long irq_flags; 77 84 int rc = 0; ··· 102 107 nx_driver.of.max_sg_len/sizeof(struct nx_sg)); 103 108 max_sg_len = min_t(u64, max_sg_len, 104 109 nx_ctx->ap->databytelen/NX_PAGE_SIZE); 110 + 111 + data_len = SHA256_DIGEST_SIZE; 112 + out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state, 113 + &data_len, max_sg_len); 114 + nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg); 115 + 116 + if (data_len != SHA256_DIGEST_SIZE) { 117 + rc = -EINVAL; 118 + goto out; 119 + } 105 120 106 121 do { 107 122 /* ··· 287 282 .cra_blocksize = SHA256_BLOCK_SIZE, 288 283 .cra_module = THIS_MODULE, 289 284 .cra_ctxsize = sizeof(struct nx_crypto_ctx), 290 - .cra_init = nx_crypto_ctx_sha_init, 285 + .cra_init = nx_crypto_ctx_sha256_init, 291 286 .cra_exit = nx_crypto_ctx_exit, 292 287 } 293 288 };
+25 -19
drivers/crypto/nx/nx-sha512.c
··· 28 28 #include "nx.h" 29 29 30 30 31 - static int nx_sha512_init(struct shash_desc *desc) 31 + static int nx_crypto_ctx_sha512_init(struct crypto_tfm *tfm) 32 32 { 33 - struct sha512_state *sctx = shash_desc_ctx(desc); 34 - struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); 35 - struct nx_sg *out_sg; 36 - int len; 37 - u32 max_sg_len; 33 + struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm); 34 + int err; 35 + 36 + err = nx_crypto_ctx_sha_init(tfm); 37 + if (err) 38 + return err; 38 39 39 40 nx_ctx_init(nx_ctx, HCOP_FC_SHA); 40 - 41 - memset(sctx, 0, sizeof *sctx); 42 41 43 42 nx_ctx->ap = &nx_ctx->props[NX_PROPS_SHA512]; 44 43 45 44 NX_CPB_SET_DIGEST_SIZE(nx_ctx->csbcpb, NX_DS_SHA512); 46 45 47 - max_sg_len = min_t(u64, nx_ctx->ap->sglen, 48 - nx_driver.of.max_sg_len/sizeof(struct nx_sg)); 49 - max_sg_len = min_t(u64, max_sg_len, 50 - nx_ctx->ap->databytelen/NX_PAGE_SIZE); 46 + return 0; 47 + } 51 48 52 - len = SHA512_DIGEST_SIZE; 53 - out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state, 54 - &len, max_sg_len); 55 - nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg); 49 + static int nx_sha512_init(struct shash_desc *desc) 50 + { 51 + struct sha512_state *sctx = shash_desc_ctx(desc); 56 52 57 - if (len != SHA512_DIGEST_SIZE) 58 - return -EINVAL; 53 + memset(sctx, 0, sizeof *sctx); 59 54 60 55 sctx->state[0] = __cpu_to_be64(SHA512_H0); 61 56 sctx->state[1] = __cpu_to_be64(SHA512_H1); ··· 72 77 struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base); 73 78 struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb; 74 79 struct nx_sg *in_sg; 80 + struct nx_sg *out_sg; 75 81 u64 to_process, leftover = 0, total; 76 82 unsigned long irq_flags; 77 83 int rc = 0; ··· 102 106 nx_driver.of.max_sg_len/sizeof(struct nx_sg)); 103 107 max_sg_len = min_t(u64, max_sg_len, 104 108 nx_ctx->ap->databytelen/NX_PAGE_SIZE); 109 + 110 + data_len = SHA512_DIGEST_SIZE; 111 + out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state, 112 + &data_len, max_sg_len); 113 + nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg); 114 + 115 + if (data_len != SHA512_DIGEST_SIZE) { 116 + rc = -EINVAL; 117 + goto out; 118 + } 105 119 106 120 do { 107 121 /* ··· 294 288 .cra_blocksize = SHA512_BLOCK_SIZE, 295 289 .cra_module = THIS_MODULE, 296 290 .cra_ctxsize = sizeof(struct nx_crypto_ctx), 297 - .cra_init = nx_crypto_ctx_sha_init, 291 + .cra_init = nx_crypto_ctx_sha512_init, 298 292 .cra_exit = nx_crypto_ctx_exit, 299 293 } 300 294 };
+3
drivers/crypto/nx/nx.c
··· 713 713 /* entry points from the crypto tfm initializers */ 714 714 int nx_crypto_ctx_aes_ccm_init(struct crypto_tfm *tfm) 715 715 { 716 + crypto_aead_set_reqsize(__crypto_aead_cast(tfm), 717 + sizeof(struct nx_ccm_rctx)); 716 718 return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_AES, 717 719 NX_MODE_AES_CCM); 718 720 } 719 721 720 722 int nx_crypto_ctx_aes_gcm_init(struct crypto_aead *tfm) 721 723 { 724 + crypto_aead_set_reqsize(tfm, sizeof(struct nx_gcm_rctx)); 722 725 return nx_crypto_ctx_init(crypto_aead_ctx(tfm), NX_FC_AES, 723 726 NX_MODE_AES_GCM); 724 727 }
+11 -3
drivers/crypto/nx/nx.h
··· 2 2 #ifndef __NX_H__ 3 3 #define __NX_H__ 4 4 5 + #include <crypto/ctr.h> 6 + 5 7 #define NX_NAME "nx-crypto" 6 8 #define NX_STRING "IBM Power7+ Nest Accelerator Crypto Driver" 7 9 #define NX_VERSION "1.0" ··· 93 91 94 92 #define NX_GCM4106_NONCE_LEN (4) 95 93 #define NX_GCM_CTR_OFFSET (12) 96 - struct nx_gcm_priv { 94 + struct nx_gcm_rctx { 97 95 u8 iv[16]; 96 + }; 97 + 98 + struct nx_gcm_priv { 98 99 u8 iauth_tag[16]; 99 100 u8 nonce[NX_GCM4106_NONCE_LEN]; 100 101 }; ··· 105 100 #define NX_CCM_AES_KEY_LEN (16) 106 101 #define NX_CCM4309_AES_KEY_LEN (19) 107 102 #define NX_CCM4309_NONCE_LEN (3) 108 - struct nx_ccm_priv { 103 + struct nx_ccm_rctx { 109 104 u8 iv[16]; 105 + }; 106 + 107 + struct nx_ccm_priv { 110 108 u8 b0[16]; 111 109 u8 iauth_tag[16]; 112 110 u8 oauth_tag[16]; ··· 121 113 }; 122 114 123 115 struct nx_ctr_priv { 124 - u8 iv[16]; 116 + u8 nonce[CTR_RFC3686_NONCE_SIZE]; 125 117 }; 126 118 127 119 struct nx_crypto_ctx {
-3
drivers/crypto/omap-des.c
··· 536 536 dmaengine_terminate_all(dd->dma_lch_in); 537 537 dmaengine_terminate_all(dd->dma_lch_out); 538 538 539 - dma_unmap_sg(dd->dev, dd->in_sg, dd->in_sg_len, DMA_TO_DEVICE); 540 - dma_unmap_sg(dd->dev, dd->out_sg, dd->out_sg_len, DMA_FROM_DEVICE); 541 - 542 539 return err; 543 540 } 544 541
+1 -1
drivers/dma/dw/core.c
··· 1746 1746 MODULE_LICENSE("GPL v2"); 1747 1747 MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller core driver"); 1748 1748 MODULE_AUTHOR("Haavard Skinnemoen (Atmel)"); 1749 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 1749 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
+11 -3
drivers/gpio/gpio-brcmstb.c
··· 87 87 struct brcmstb_gpio_bank *bank; 88 88 int ret = 0; 89 89 90 + if (!priv) { 91 + dev_err(&pdev->dev, "called %s without drvdata!\n", __func__); 92 + return -EFAULT; 93 + } 94 + 95 + /* 96 + * You can lose return values below, but we report all errors, and it's 97 + * more important to actually perform all of the steps. 98 + */ 90 99 list_for_each(pos, &priv->bank_list) { 91 100 bank = list_entry(pos, struct brcmstb_gpio_bank, node); 92 101 ret = bgpio_remove(&bank->bgc); ··· 152 143 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 153 144 if (!priv) 154 145 return -ENOMEM; 146 + platform_set_drvdata(pdev, priv); 147 + INIT_LIST_HEAD(&priv->bank_list); 155 148 156 149 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 157 150 reg_base = devm_ioremap_resource(dev, res); ··· 164 153 priv->reg_base = reg_base; 165 154 priv->pdev = pdev; 166 155 167 - INIT_LIST_HEAD(&priv->bank_list); 168 156 if (brcmstb_gpio_sanity_check_banks(dev, np, res)) 169 157 return -EINVAL; 170 158 ··· 230 220 231 221 dev_info(dev, "Registered %d banks (GPIO(s): %d-%d)\n", 232 222 priv->num_banks, priv->gpio_base, gpio_base - 1); 233 - 234 - platform_set_drvdata(pdev, priv); 235 223 236 224 return 0; 237 225
+2 -4
drivers/gpio/gpio-davinci.c
··· 578 578 writel_relaxed(~0, &g->clr_falling); 579 579 writel_relaxed(~0, &g->clr_rising); 580 580 581 - /* set up all irqs in this bank */ 582 - irq_set_chained_handler(bank_irq, gpio_irq_handler); 583 - 584 581 /* 585 582 * Each chip handles 32 gpios, and each irq bank consists of 16 586 583 * gpio irqs. Pass the irq bank's corresponding controller to 587 584 * the chained irq handler. 588 585 */ 589 - irq_set_handler_data(bank_irq, &chips[gpio / 32]); 586 + irq_set_chained_handler_and_data(bank_irq, gpio_irq_handler, 587 + &chips[gpio / 32]); 590 588 591 589 binten |= BIT(bank); 592 590 }
+1
drivers/gpio/gpio-max732x.c
··· 603 603 gc->base = gpio_start; 604 604 gc->ngpio = port; 605 605 gc->label = chip->client->name; 606 + gc->dev = &chip->client->dev; 606 607 gc->owner = THIS_MODULE; 607 608 608 609 return port;
+4 -1
drivers/gpio/gpio-omap.c
··· 500 500 501 501 spin_lock_irqsave(&bank->lock, flags); 502 502 retval = omap_set_gpio_triggering(bank, offset, type); 503 - if (retval) 503 + if (retval) { 504 + spin_unlock_irqrestore(&bank->lock, flags); 504 505 goto error; 506 + } 505 507 omap_gpio_init_irq(bank, offset); 506 508 if (!omap_gpio_is_input(bank, offset)) { 507 509 spin_unlock_irqrestore(&bank->lock, flags); ··· 1187 1185 bank->irq = res->start; 1188 1186 bank->dev = dev; 1189 1187 bank->chip.dev = dev; 1188 + bank->chip.owner = THIS_MODULE; 1190 1189 bank->dbck_flag = pdata->dbck_flag; 1191 1190 bank->stride = pdata->bank_stride; 1192 1191 bank->width = pdata->bank_width;
+4
drivers/gpio/gpio-pca953x.c
··· 570 570 "could not connect irqchip to gpiochip\n"); 571 571 return ret; 572 572 } 573 + 574 + gpiochip_set_chained_irqchip(&chip->gpio_chip, 575 + &pca953x_irq_chip, 576 + client->irq, NULL); 573 577 } 574 578 575 579 return 0;
+2 -2
drivers/gpio/gpio-xilinx.c
··· 220 220 if (!chip->gpio_width[1]) 221 221 return; 222 222 223 - xgpio_writereg(mm_gc->regs + XGPIO_DATA_OFFSET + XGPIO_TRI_OFFSET, 223 + xgpio_writereg(mm_gc->regs + XGPIO_DATA_OFFSET + XGPIO_CHANNEL_OFFSET, 224 224 chip->gpio_state[1]); 225 - xgpio_writereg(mm_gc->regs + XGPIO_TRI_OFFSET + XGPIO_TRI_OFFSET, 225 + xgpio_writereg(mm_gc->regs + XGPIO_TRI_OFFSET + XGPIO_CHANNEL_OFFSET, 226 226 chip->gpio_dir[1]); 227 227 } 228 228
+1
drivers/gpio/gpio-zynq.c
··· 757 757 gpiochip_remove(&gpio->chip); 758 758 clk_disable_unprepare(gpio->clk); 759 759 device_set_wakeup_capable(&pdev->dev, 0); 760 + pm_runtime_disable(&pdev->dev); 760 761 return 0; 761 762 } 762 763
+16 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 669 669 static int amdgpu_cs_dependencies(struct amdgpu_device *adev, 670 670 struct amdgpu_cs_parser *p) 671 671 { 672 + struct amdgpu_fpriv *fpriv = p->filp->driver_priv; 672 673 struct amdgpu_ib *ib; 673 674 int i, j, r; 674 675 ··· 695 694 for (j = 0; j < num_deps; ++j) { 696 695 struct amdgpu_fence *fence; 697 696 struct amdgpu_ring *ring; 697 + struct amdgpu_ctx *ctx; 698 698 699 699 r = amdgpu_cs_get_ring(adev, deps[j].ip_type, 700 700 deps[j].ip_instance, ··· 703 701 if (r) 704 702 return r; 705 703 704 + ctx = amdgpu_ctx_get(fpriv, deps[j].ctx_id); 705 + if (ctx == NULL) 706 + return -EINVAL; 707 + 706 708 r = amdgpu_fence_recreate(ring, p->filp, 707 709 deps[j].handle, 708 710 &fence); 709 - if (r) 711 + if (r) { 712 + amdgpu_ctx_put(ctx); 710 713 return r; 714 + } 711 715 712 716 amdgpu_sync_fence(&ib->sync, fence); 713 717 amdgpu_fence_unref(&fence); 718 + amdgpu_ctx_put(ctx); 714 719 } 715 720 } 716 721 ··· 817 808 818 809 r = amdgpu_cs_get_ring(adev, wait->in.ip_type, wait->in.ip_instance, 819 810 wait->in.ring, &ring); 820 - if (r) 811 + if (r) { 812 + amdgpu_ctx_put(ctx); 821 813 return r; 814 + } 822 815 823 816 r = amdgpu_fence_recreate(ring, filp, wait->in.handle, &fence); 824 - if (r) 817 + if (r) { 818 + amdgpu_ctx_put(ctx); 825 819 return r; 820 + } 826 821 827 822 r = fence_wait_timeout(&fence->base, true, timeout); 828 823 amdgpu_fence_unref(&fence);
+7 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 1207 1207 } else { 1208 1208 if (adev->ip_blocks[i].funcs->early_init) { 1209 1209 r = adev->ip_blocks[i].funcs->early_init((void *)adev); 1210 - if (r) 1210 + if (r == -ENOENT) 1211 + adev->ip_block_enabled[i] = false; 1212 + else if (r) 1211 1213 return r; 1214 + else 1215 + adev->ip_block_enabled[i] = true; 1216 + } else { 1217 + adev->ip_block_enabled[i] = true; 1212 1218 } 1213 - adev->ip_block_enabled[i] = true; 1214 1219 } 1215 1220 } 1216 1221
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 352 352 if (((int64_t)timeout_ns) < 0) 353 353 return MAX_SCHEDULE_TIMEOUT; 354 354 355 - timeout = ktime_sub_ns(ktime_get(), timeout_ns); 355 + timeout = ktime_sub(ns_to_ktime(timeout_ns), ktime_get()); 356 356 if (ktime_to_ns(timeout) < 0) 357 357 return 0; 358 358
+12 -4
drivers/gpu/drm/amd/amdgpu/cz_dpm.c
··· 1679 1679 if (ret) 1680 1680 return ret; 1681 1681 1682 - DRM_INFO("DPM unforce state min=%d, max=%d.\n", 1683 - pi->sclk_dpm.soft_min_clk, 1684 - pi->sclk_dpm.soft_max_clk); 1682 + DRM_DEBUG("DPM unforce state min=%d, max=%d.\n", 1683 + pi->sclk_dpm.soft_min_clk, 1684 + pi->sclk_dpm.soft_max_clk); 1685 1685 1686 1686 return 0; 1687 1687 } 1688 1688 1689 1689 static int cz_dpm_force_dpm_level(struct amdgpu_device *adev, 1690 - enum amdgpu_dpm_forced_level level) 1690 + enum amdgpu_dpm_forced_level level) 1691 1691 { 1692 1692 int ret = 0; 1693 1693 1694 1694 switch (level) { 1695 1695 case AMDGPU_DPM_FORCED_LEVEL_HIGH: 1696 + ret = cz_dpm_unforce_dpm_levels(adev); 1697 + if (ret) 1698 + return ret; 1696 1699 ret = cz_dpm_force_highest(adev); 1697 1700 if (ret) 1698 1701 return ret; 1699 1702 break; 1700 1703 case AMDGPU_DPM_FORCED_LEVEL_LOW: 1704 + ret = cz_dpm_unforce_dpm_levels(adev); 1705 + if (ret) 1706 + return ret; 1701 1707 ret = cz_dpm_force_lowest(adev); 1702 1708 if (ret) 1703 1709 return ret; ··· 1716 1710 default: 1717 1711 break; 1718 1712 } 1713 + 1714 + adev->pm.dpm.forced_level = level; 1719 1715 1720 1716 return ret; 1721 1717 }
+14 -8
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
··· 3403 3403 3404 3404 switch (entry->src_data) { 3405 3405 case 0: /* vblank */ 3406 - if (disp_int & interrupt_status_offsets[crtc].vblank) { 3406 + if (disp_int & interrupt_status_offsets[crtc].vblank) 3407 3407 dce_v10_0_crtc_vblank_int_ack(adev, crtc); 3408 - if (amdgpu_irq_enabled(adev, source, irq_type)) { 3409 - drm_handle_vblank(adev->ddev, crtc); 3410 - } 3411 - DRM_DEBUG("IH: D%d vblank\n", crtc + 1); 3408 + else 3409 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 3410 + 3411 + if (amdgpu_irq_enabled(adev, source, irq_type)) { 3412 + drm_handle_vblank(adev->ddev, crtc); 3412 3413 } 3414 + DRM_DEBUG("IH: D%d vblank\n", crtc + 1); 3415 + 3413 3416 break; 3414 3417 case 1: /* vline */ 3415 - if (disp_int & interrupt_status_offsets[crtc].vline) { 3418 + if (disp_int & interrupt_status_offsets[crtc].vline) 3416 3419 dce_v10_0_crtc_vline_int_ack(adev, crtc); 3417 - DRM_DEBUG("IH: D%d vline\n", crtc + 1); 3418 - } 3420 + else 3421 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 3422 + 3423 + DRM_DEBUG("IH: D%d vline\n", crtc + 1); 3424 + 3419 3425 break; 3420 3426 default: 3421 3427 DRM_DEBUG("Unhandled interrupt: %d %d\n", entry->src_id, entry->src_data);
+14 -8
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
··· 3402 3402 3403 3403 switch (entry->src_data) { 3404 3404 case 0: /* vblank */ 3405 - if (disp_int & interrupt_status_offsets[crtc].vblank) { 3405 + if (disp_int & interrupt_status_offsets[crtc].vblank) 3406 3406 dce_v11_0_crtc_vblank_int_ack(adev, crtc); 3407 - if (amdgpu_irq_enabled(adev, source, irq_type)) { 3408 - drm_handle_vblank(adev->ddev, crtc); 3409 - } 3410 - DRM_DEBUG("IH: D%d vblank\n", crtc + 1); 3407 + else 3408 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 3409 + 3410 + if (amdgpu_irq_enabled(adev, source, irq_type)) { 3411 + drm_handle_vblank(adev->ddev, crtc); 3411 3412 } 3413 + DRM_DEBUG("IH: D%d vblank\n", crtc + 1); 3414 + 3412 3415 break; 3413 3416 case 1: /* vline */ 3414 - if (disp_int & interrupt_status_offsets[crtc].vline) { 3417 + if (disp_int & interrupt_status_offsets[crtc].vline) 3415 3418 dce_v11_0_crtc_vline_int_ack(adev, crtc); 3416 - DRM_DEBUG("IH: D%d vline\n", crtc + 1); 3417 - } 3419 + else 3420 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 3421 + 3422 + DRM_DEBUG("IH: D%d vline\n", crtc + 1); 3423 + 3418 3424 break; 3419 3425 default: 3420 3426 DRM_DEBUG("Unhandled interrupt: %d %d\n", entry->src_id, entry->src_data);
+18 -8
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
··· 2566 2566 struct drm_device *dev = crtc->dev; 2567 2567 struct amdgpu_device *adev = dev->dev_private; 2568 2568 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 2569 + unsigned type; 2569 2570 2570 2571 switch (mode) { 2571 2572 case DRM_MODE_DPMS_ON: ··· 2575 2574 dce_v8_0_vga_enable(crtc, true); 2576 2575 amdgpu_atombios_crtc_blank(crtc, ATOM_DISABLE); 2577 2576 dce_v8_0_vga_enable(crtc, false); 2577 + /* Make sure VBLANK interrupt is still enabled */ 2578 + type = amdgpu_crtc_idx_to_irq_type(adev, amdgpu_crtc->crtc_id); 2579 + amdgpu_irq_update(adev, &adev->crtc_irq, type); 2578 2580 drm_vblank_post_modeset(dev, amdgpu_crtc->crtc_id); 2579 2581 dce_v8_0_crtc_load_lut(crtc); 2580 2582 break; ··· 3241 3237 3242 3238 switch (entry->src_data) { 3243 3239 case 0: /* vblank */ 3244 - if (disp_int & interrupt_status_offsets[crtc].vblank) { 3240 + if (disp_int & interrupt_status_offsets[crtc].vblank) 3245 3241 WREG32(mmLB_VBLANK_STATUS + crtc_offsets[crtc], LB_VBLANK_STATUS__VBLANK_ACK_MASK); 3246 - if (amdgpu_irq_enabled(adev, source, irq_type)) { 3247 - drm_handle_vblank(adev->ddev, crtc); 3248 - } 3249 - DRM_DEBUG("IH: D%d vblank\n", crtc + 1); 3242 + else 3243 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 3244 + 3245 + if (amdgpu_irq_enabled(adev, source, irq_type)) { 3246 + drm_handle_vblank(adev->ddev, crtc); 3250 3247 } 3248 + DRM_DEBUG("IH: D%d vblank\n", crtc + 1); 3249 + 3251 3250 break; 3252 3251 case 1: /* vline */ 3253 - if (disp_int & interrupt_status_offsets[crtc].vline) { 3252 + if (disp_int & interrupt_status_offsets[crtc].vline) 3254 3253 WREG32(mmLB_VLINE_STATUS + crtc_offsets[crtc], LB_VLINE_STATUS__VLINE_ACK_MASK); 3255 - DRM_DEBUG("IH: D%d vline\n", crtc + 1); 3256 - } 3254 + else 3255 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 3256 + 3257 + DRM_DEBUG("IH: D%d vline\n", crtc + 1); 3258 + 3257 3259 break; 3258 3260 default: 3259 3261 DRM_DEBUG("Unhandled interrupt: %d %d\n", entry->src_id, entry->src_data);
+1 -4
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
··· 1813 1813 u32 data, mask; 1814 1814 1815 1815 data = RREG32(mmCC_RB_BACKEND_DISABLE); 1816 - if (data & 1) 1817 - data &= CC_RB_BACKEND_DISABLE__BACKEND_DISABLE_MASK; 1818 - else 1819 - data = 0; 1816 + data &= CC_RB_BACKEND_DISABLE__BACKEND_DISABLE_MASK; 1820 1817 1821 1818 data |= RREG32(mmGC_USER_RB_BACKEND_DISABLE); 1822 1819
+33 -2
drivers/gpu/drm/amd/amdgpu/vi.c
··· 122 122 spin_unlock_irqrestore(&adev->smc_idx_lock, flags); 123 123 } 124 124 125 + /* smu_8_0_d.h */ 126 + #define mmMP0PUB_IND_INDEX 0x180 127 + #define mmMP0PUB_IND_DATA 0x181 128 + 129 + static u32 cz_smc_rreg(struct amdgpu_device *adev, u32 reg) 130 + { 131 + unsigned long flags; 132 + u32 r; 133 + 134 + spin_lock_irqsave(&adev->smc_idx_lock, flags); 135 + WREG32(mmMP0PUB_IND_INDEX, (reg)); 136 + r = RREG32(mmMP0PUB_IND_DATA); 137 + spin_unlock_irqrestore(&adev->smc_idx_lock, flags); 138 + return r; 139 + } 140 + 141 + static void cz_smc_wreg(struct amdgpu_device *adev, u32 reg, u32 v) 142 + { 143 + unsigned long flags; 144 + 145 + spin_lock_irqsave(&adev->smc_idx_lock, flags); 146 + WREG32(mmMP0PUB_IND_INDEX, (reg)); 147 + WREG32(mmMP0PUB_IND_DATA, (v)); 148 + spin_unlock_irqrestore(&adev->smc_idx_lock, flags); 149 + } 150 + 125 151 static u32 vi_uvd_ctx_rreg(struct amdgpu_device *adev, u32 reg) 126 152 { 127 153 unsigned long flags; ··· 1248 1222 bool smc_enabled = false; 1249 1223 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1250 1224 1251 - adev->smc_rreg = &vi_smc_rreg; 1252 - adev->smc_wreg = &vi_smc_wreg; 1225 + if (adev->flags & AMDGPU_IS_APU) { 1226 + adev->smc_rreg = &cz_smc_rreg; 1227 + adev->smc_wreg = &cz_smc_wreg; 1228 + } else { 1229 + adev->smc_rreg = &vi_smc_rreg; 1230 + adev->smc_wreg = &vi_smc_wreg; 1231 + } 1253 1232 adev->pcie_rreg = &vi_pcie_rreg; 1254 1233 adev->pcie_wreg = &vi_pcie_wreg; 1255 1234 adev->uvd_ctx_rreg = &vi_uvd_ctx_rreg;
+7 -2
drivers/gpu/drm/amd/amdkfd/kfd_process.c
··· 420 420 pqm_uninit(&p->pqm); 421 421 422 422 pdd = kfd_get_process_device_data(dev, p); 423 + 424 + if (!pdd) { 425 + mutex_unlock(&p->mutex); 426 + return; 427 + } 428 + 423 429 if (pdd->reset_wavefronts) { 424 430 dbgdev_wave_reset_wavefronts(pdd->dev, p); 425 431 pdd->reset_wavefronts = false; ··· 437 431 * We don't call amd_iommu_unbind_pasid() here 438 432 * because the IOMMU called us. 439 433 */ 440 - if (pdd) 441 - pdd->bound = false; 434 + pdd->bound = false; 442 435 443 436 mutex_unlock(&p->mutex); 444 437 }
-2
drivers/gpu/drm/armada/armada_crtc.c
··· 531 531 532 532 drm_crtc_vblank_off(crtc); 533 533 534 - crtc->mode = *adj; 535 - 536 534 val = dcrtc->dumb_ctrl & ~CFG_DUMB_ENA; 537 535 if (val != dcrtc->dumb_ctrl) { 538 536 dcrtc->dumb_ctrl = val;
+3 -2
drivers/gpu/drm/armada/armada_gem.c
··· 69 69 70 70 if (dobj->obj.import_attach) { 71 71 /* We only ever display imported data */ 72 - dma_buf_unmap_attachment(dobj->obj.import_attach, dobj->sgt, 73 - DMA_TO_DEVICE); 72 + if (dobj->sgt) 73 + dma_buf_unmap_attachment(dobj->obj.import_attach, 74 + dobj->sgt, DMA_TO_DEVICE); 74 75 drm_prime_gem_destroy(&dobj->obj, NULL); 75 76 } 76 77
+75 -46
drivers/gpu/drm/armada/armada_overlay.c
··· 7 7 * published by the Free Software Foundation. 8 8 */ 9 9 #include <drm/drmP.h> 10 + #include <drm/drm_plane_helper.h> 10 11 #include "armada_crtc.h" 11 12 #include "armada_drm.h" 12 13 #include "armada_fb.h" ··· 86 85 87 86 if (fb) 88 87 armada_drm_queue_unref_work(dcrtc->crtc.dev, fb); 89 - } 90 - 91 - static unsigned armada_limit(int start, unsigned size, unsigned max) 92 - { 93 - int end = start + size; 94 - if (end < 0) 95 - return 0; 96 - if (start < 0) 97 - start = 0; 98 - return (unsigned)end > max ? max - start : end - start; 89 + wake_up(&dplane->vbl.wait); 99 90 } 100 91 101 92 static int ··· 98 105 { 99 106 struct armada_plane *dplane = drm_to_armada_plane(plane); 100 107 struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc); 108 + struct drm_rect src = { 109 + .x1 = src_x, 110 + .y1 = src_y, 111 + .x2 = src_x + src_w, 112 + .y2 = src_y + src_h, 113 + }; 114 + struct drm_rect dest = { 115 + .x1 = crtc_x, 116 + .y1 = crtc_y, 117 + .x2 = crtc_x + crtc_w, 118 + .y2 = crtc_y + crtc_h, 119 + }; 120 + const struct drm_rect clip = { 121 + .x2 = crtc->mode.hdisplay, 122 + .y2 = crtc->mode.vdisplay, 123 + }; 101 124 uint32_t val, ctrl0; 102 125 unsigned idx = 0; 126 + bool visible; 103 127 int ret; 104 128 105 - crtc_w = armada_limit(crtc_x, crtc_w, dcrtc->crtc.mode.hdisplay); 106 - crtc_h = armada_limit(crtc_y, crtc_h, dcrtc->crtc.mode.vdisplay); 129 + ret = drm_plane_helper_check_update(plane, crtc, fb, &src, &dest, &clip, 130 + 0, INT_MAX, true, false, &visible); 131 + if (ret) 132 + return ret; 133 + 107 134 ctrl0 = CFG_DMA_FMT(drm_fb_to_armada_fb(fb)->fmt) | 108 135 CFG_DMA_MOD(drm_fb_to_armada_fb(fb)->mod) | 109 136 CFG_CBSH_ENA | CFG_DMA_HSMOOTH | CFG_DMA_ENA; 110 137 111 138 /* Does the position/size result in nothing to display? */ 112 - if (crtc_w == 0 || crtc_h == 0) { 139 + if (!visible) 113 140 ctrl0 &= ~CFG_DMA_ENA; 114 - } 115 - 116 - /* 117 - * FIXME: if the starting point is off screen, we need to 118 - * adjust src_x, src_y, src_w, src_h appropriately, and 119 - * according to the scale. 120 - */ 121 141 122 142 if (!dcrtc->plane) { 123 143 dcrtc->plane = plane; ··· 140 134 /* FIXME: overlay on an interlaced display */ 141 135 /* Just updating the position/size? */ 142 136 if (plane->fb == fb && dplane->ctrl0 == ctrl0) { 143 - val = (src_h & 0xffff0000) | src_w >> 16; 137 + val = (drm_rect_height(&src) & 0xffff0000) | 138 + drm_rect_width(&src) >> 16; 144 139 dplane->src_hw = val; 145 140 writel_relaxed(val, dcrtc->base + LCD_SPU_DMA_HPXL_VLN); 146 - val = crtc_h << 16 | crtc_w; 141 + 142 + val = drm_rect_height(&dest) << 16 | drm_rect_width(&dest); 147 143 dplane->dst_hw = val; 148 144 writel_relaxed(val, dcrtc->base + LCD_SPU_DZM_HPXL_VLN); 149 - val = crtc_y << 16 | crtc_x; 145 + 146 + val = dest.y1 << 16 | dest.x1; 150 147 dplane->dst_yx = val; 151 148 writel_relaxed(val, dcrtc->base + LCD_SPU_DMA_OVSA_HPXL_VLN); 149 + 152 150 return 0; 153 151 } else if (~dplane->ctrl0 & ctrl0 & CFG_DMA_ENA) { 154 152 /* Power up the Y/U/V FIFOs on ENA 0->1 transitions */ ··· 160 150 dcrtc->base + LCD_SPU_SRAM_PARA1); 161 151 } 162 152 163 - ret = wait_event_timeout(dplane->vbl.wait, 164 - list_empty(&dplane->vbl.update.node), 165 - HZ/25); 166 - if (ret < 0) 167 - return ret; 153 + wait_event_timeout(dplane->vbl.wait, 154 + list_empty(&dplane->vbl.update.node), 155 + HZ/25); 168 156 169 157 if (plane->fb != fb) { 170 158 struct armada_gem_object *obj = drm_fb_obj(fb); 171 - uint32_t sy, su, sv; 159 + uint32_t addr[3], pixel_format; 160 + int i, num_planes, hsub; 172 161 173 162 /* 174 163 * Take a reference on the new framebuffer - we want to ··· 187 178 older_fb); 188 179 } 189 180 190 - src_y >>= 16; 191 - src_x >>= 16; 192 - sy = obj->dev_addr + fb->offsets[0] + src_y * fb->pitches[0] + 193 - src_x * fb->bits_per_pixel / 8; 194 - su = obj->dev_addr + fb->offsets[1] + src_y * fb->pitches[1] + 195 - src_x; 196 - sv = obj->dev_addr + fb->offsets[2] + src_y * fb->pitches[2] + 197 - src_x; 181 + src_y = src.y1 >> 16; 182 + src_x = src.x1 >> 16; 198 183 199 - armada_reg_queue_set(dplane->vbl.regs, idx, sy, 184 + pixel_format = fb->pixel_format; 185 + hsub = drm_format_horz_chroma_subsampling(pixel_format); 186 + num_planes = drm_format_num_planes(pixel_format); 187 + 188 + /* 189 + * Annoyingly, shifting a YUYV-format image by one pixel 190 + * causes the U/V planes to toggle. Toggle the UV swap. 191 + * (Unfortunately, this causes momentary colour flickering.) 192 + */ 193 + if (src_x & (hsub - 1) && num_planes == 1) 194 + ctrl0 ^= CFG_DMA_MOD(CFG_SWAPUV); 195 + 196 + for (i = 0; i < num_planes; i++) 197 + addr[i] = obj->dev_addr + fb->offsets[i] + 198 + src_y * fb->pitches[i] + 199 + src_x * drm_format_plane_cpp(pixel_format, i); 200 + for (; i < ARRAY_SIZE(addr); i++) 201 + addr[i] = 0; 202 + 203 + armada_reg_queue_set(dplane->vbl.regs, idx, addr[0], 200 204 LCD_SPU_DMA_START_ADDR_Y0); 201 - armada_reg_queue_set(dplane->vbl.regs, idx, su, 205 + armada_reg_queue_set(dplane->vbl.regs, idx, addr[1], 202 206 LCD_SPU_DMA_START_ADDR_U0); 203 - armada_reg_queue_set(dplane->vbl.regs, idx, sv, 207 + armada_reg_queue_set(dplane->vbl.regs, idx, addr[2], 204 208 LCD_SPU_DMA_START_ADDR_V0); 205 - armada_reg_queue_set(dplane->vbl.regs, idx, sy, 209 + armada_reg_queue_set(dplane->vbl.regs, idx, addr[0], 206 210 LCD_SPU_DMA_START_ADDR_Y1); 207 - armada_reg_queue_set(dplane->vbl.regs, idx, su, 211 + armada_reg_queue_set(dplane->vbl.regs, idx, addr[1], 208 212 LCD_SPU_DMA_START_ADDR_U1); 209 - armada_reg_queue_set(dplane->vbl.regs, idx, sv, 213 + armada_reg_queue_set(dplane->vbl.regs, idx, addr[2], 210 214 LCD_SPU_DMA_START_ADDR_V1); 211 215 212 216 val = fb->pitches[0] << 16 | fb->pitches[0]; ··· 230 208 LCD_SPU_DMA_PITCH_UV); 231 209 } 232 210 233 - val = (src_h & 0xffff0000) | src_w >> 16; 211 + val = (drm_rect_height(&src) & 0xffff0000) | drm_rect_width(&src) >> 16; 234 212 if (dplane->src_hw != val) { 235 213 dplane->src_hw = val; 236 214 armada_reg_queue_set(dplane->vbl.regs, idx, val, 237 215 LCD_SPU_DMA_HPXL_VLN); 238 216 } 239 - val = crtc_h << 16 | crtc_w; 217 + 218 + val = drm_rect_height(&dest) << 16 | drm_rect_width(&dest); 240 219 if (dplane->dst_hw != val) { 241 220 dplane->dst_hw = val; 242 221 armada_reg_queue_set(dplane->vbl.regs, idx, val, 243 222 LCD_SPU_DZM_HPXL_VLN); 244 223 } 245 - val = crtc_y << 16 | crtc_x; 224 + 225 + val = dest.y1 << 16 | dest.x1; 246 226 if (dplane->dst_yx != val) { 247 227 dplane->dst_yx = val; 248 228 armada_reg_queue_set(dplane->vbl.regs, idx, val, 249 229 LCD_SPU_DMA_OVSA_HPXL_VLN); 250 230 } 231 + 251 232 if (dplane->ctrl0 != ctrl0) { 252 233 dplane->ctrl0 = ctrl0; 253 234 armada_reg_queue_mod(dplane->vbl.regs, idx, ctrl0, ··· 304 279 305 280 static void armada_plane_destroy(struct drm_plane *plane) 306 281 { 307 - kfree(plane); 282 + struct armada_plane *dplane = drm_to_armada_plane(plane); 283 + 284 + drm_plane_cleanup(plane); 285 + 286 + kfree(dplane); 308 287 } 309 288 310 289 static int armada_plane_set_property(struct drm_plane *plane,
+5 -2
drivers/gpu/drm/drm_crtc.c
··· 2706 2706 if (!drm_core_check_feature(dev, DRIVER_MODESET)) 2707 2707 return -EINVAL; 2708 2708 2709 - /* For some reason crtc x/y offsets are signed internally. */ 2710 - if (crtc_req->x > INT_MAX || crtc_req->y > INT_MAX) 2709 + /* 2710 + * Universal plane src offsets are only 16.16, prevent havoc for 2711 + * drivers using universal plane code internally. 2712 + */ 2713 + if (crtc_req->x & 0xffff0000 || crtc_req->y & 0xffff0000) 2711 2714 return -ERANGE; 2712 2715 2713 2716 drm_modeset_lock_all(dev);
+60
drivers/gpu/drm/drm_ioc32.c
··· 70 70 71 71 #define DRM_IOCTL_WAIT_VBLANK32 DRM_IOWR(0x3a, drm_wait_vblank32_t) 72 72 73 + #define DRM_IOCTL_MODE_ADDFB232 DRM_IOWR(0xb8, drm_mode_fb_cmd232_t) 74 + 73 75 typedef struct drm_version_32 { 74 76 int version_major; /**< Major version */ 75 77 int version_minor; /**< Minor version */ ··· 1018 1016 return 0; 1019 1017 } 1020 1018 1019 + typedef struct drm_mode_fb_cmd232 { 1020 + u32 fb_id; 1021 + u32 width; 1022 + u32 height; 1023 + u32 pixel_format; 1024 + u32 flags; 1025 + u32 handles[4]; 1026 + u32 pitches[4]; 1027 + u32 offsets[4]; 1028 + u64 modifier[4]; 1029 + } __attribute__((packed)) drm_mode_fb_cmd232_t; 1030 + 1031 + static int compat_drm_mode_addfb2(struct file *file, unsigned int cmd, 1032 + unsigned long arg) 1033 + { 1034 + struct drm_mode_fb_cmd232 __user *argp = (void __user *)arg; 1035 + struct drm_mode_fb_cmd232 req32; 1036 + struct drm_mode_fb_cmd2 __user *req64; 1037 + int i; 1038 + int err; 1039 + 1040 + if (copy_from_user(&req32, argp, sizeof(req32))) 1041 + return -EFAULT; 1042 + 1043 + req64 = compat_alloc_user_space(sizeof(*req64)); 1044 + 1045 + if (!access_ok(VERIFY_WRITE, req64, sizeof(*req64)) 1046 + || __put_user(req32.width, &req64->width) 1047 + || __put_user(req32.height, &req64->height) 1048 + || __put_user(req32.pixel_format, &req64->pixel_format) 1049 + || __put_user(req32.flags, &req64->flags)) 1050 + return -EFAULT; 1051 + 1052 + for (i = 0; i < 4; i++) { 1053 + if (__put_user(req32.handles[i], &req64->handles[i])) 1054 + return -EFAULT; 1055 + if (__put_user(req32.pitches[i], &req64->pitches[i])) 1056 + return -EFAULT; 1057 + if (__put_user(req32.offsets[i], &req64->offsets[i])) 1058 + return -EFAULT; 1059 + if (__put_user(req32.modifier[i], &req64->modifier[i])) 1060 + return -EFAULT; 1061 + } 1062 + 1063 + err = drm_ioctl(file, DRM_IOCTL_MODE_ADDFB2, (unsigned long)req64); 1064 + if (err) 1065 + return err; 1066 + 1067 + if (__get_user(req32.fb_id, &req64->fb_id)) 1068 + return -EFAULT; 1069 + 1070 + if (copy_to_user(argp, &req32, sizeof(req32))) 1071 + return -EFAULT; 1072 + 1073 + return 0; 1074 + } 1075 + 1021 1076 static drm_ioctl_compat_t *drm_compat_ioctls[] = { 1022 1077 [DRM_IOCTL_NR(DRM_IOCTL_VERSION32)] = compat_drm_version, 1023 1078 [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE32)] = compat_drm_getunique, ··· 1107 1048 [DRM_IOCTL_NR(DRM_IOCTL_UPDATE_DRAW32)] = compat_drm_update_draw, 1108 1049 #endif 1109 1050 [DRM_IOCTL_NR(DRM_IOCTL_WAIT_VBLANK32)] = compat_drm_wait_vblank, 1051 + [DRM_IOCTL_NR(DRM_IOCTL_MODE_ADDFB232)] = compat_drm_mode_addfb2, 1110 1052 }; 1111 1053 1112 1054 /**
+3 -3
drivers/gpu/drm/i915/i915_drv.h
··· 826 826 struct kref ref; 827 827 int user_handle; 828 828 uint8_t remap_slice; 829 + struct drm_i915_private *i915; 829 830 struct drm_i915_file_private *file_priv; 830 831 struct i915_ctx_hang_stats hang_stats; 831 832 struct i915_hw_ppgtt *ppgtt; ··· 2037 2036 unsigned int cache_level:3; 2038 2037 unsigned int cache_dirty:1; 2039 2038 2040 - unsigned int has_dma_mapping:1; 2041 - 2042 2039 unsigned int frontbuffer_bits:INTEL_FRONTBUFFER_BITS; 2043 2040 2044 2041 unsigned int pin_display; ··· 3115 3116 int i915_debugfs_connector_add(struct drm_connector *connector); 3116 3117 void intel_display_crc_init(struct drm_device *dev); 3117 3118 #else 3118 - static inline int i915_debugfs_connector_add(struct drm_connector *connector) {} 3119 + static inline int i915_debugfs_connector_add(struct drm_connector *connector) 3120 + { return 0; } 3119 3121 static inline void intel_display_crc_init(struct drm_device *dev) {} 3120 3122 #endif 3121 3123
+17 -18
drivers/gpu/drm/i915/i915_gem.c
··· 213 213 sg_dma_len(sg) = obj->base.size; 214 214 215 215 obj->pages = st; 216 - obj->has_dma_mapping = true; 217 216 return 0; 218 217 } 219 218 ··· 264 265 265 266 sg_free_table(obj->pages); 266 267 kfree(obj->pages); 267 - 268 - obj->has_dma_mapping = false; 269 268 } 270 269 271 270 static void ··· 2136 2139 obj->base.read_domains = obj->base.write_domain = I915_GEM_DOMAIN_CPU; 2137 2140 } 2138 2141 2142 + i915_gem_gtt_finish_object(obj); 2143 + 2139 2144 if (i915_gem_object_needs_bit17_swizzle(obj)) 2140 2145 i915_gem_object_save_bit_17_swizzle(obj); 2141 2146 ··· 2198 2199 struct sg_page_iter sg_iter; 2199 2200 struct page *page; 2200 2201 unsigned long last_pfn = 0; /* suppress gcc warning */ 2202 + int ret; 2201 2203 gfp_t gfp; 2202 2204 2203 2205 /* Assert that the object is not currently in any GPU domain. As it ··· 2246 2246 */ 2247 2247 i915_gem_shrink_all(dev_priv); 2248 2248 page = shmem_read_mapping_page(mapping, i); 2249 - if (IS_ERR(page)) 2249 + if (IS_ERR(page)) { 2250 + ret = PTR_ERR(page); 2250 2251 goto err_pages; 2252 + } 2251 2253 } 2252 2254 #ifdef CONFIG_SWIOTLB 2253 2255 if (swiotlb_nr_tbl()) { ··· 2278 2276 sg_mark_end(sg); 2279 2277 obj->pages = st; 2280 2278 2279 + ret = i915_gem_gtt_prepare_object(obj); 2280 + if (ret) 2281 + goto err_pages; 2282 + 2281 2283 if (i915_gem_object_needs_bit17_swizzle(obj)) 2282 2284 i915_gem_object_do_bit_17_swizzle(obj); 2283 2285 ··· 2306 2300 * space and so want to translate the error from shmemfs back to our 2307 2301 * usual understanding of ENOMEM. 
2308 2302 */ 2309 - if (PTR_ERR(page) == -ENOSPC) 2310 - return -ENOMEM; 2311 - else 2312 - return PTR_ERR(page); 2303 + if (ret == -ENOSPC) 2304 + ret = -ENOMEM; 2305 + 2306 + return ret; 2313 2307 } 2314 2308 2315 2309 /* Ensure that the associated pages are gathered from the backing storage ··· 2548 2542 } 2549 2543 2550 2544 request->emitted_jiffies = jiffies; 2545 + ring->last_submitted_seqno = request->seqno; 2551 2546 list_add_tail(&request->list, &ring->request_list); 2552 2547 request->file_priv = NULL; 2553 2548 ··· 3254 3247 3255 3248 /* Since the unbound list is global, only move to that list if 3256 3249 * no more VMAs exist. */ 3257 - if (list_empty(&obj->vma_list)) { 3258 - i915_gem_gtt_finish_object(obj); 3250 + if (list_empty(&obj->vma_list)) 3259 3251 list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list); 3260 - } 3261 3252 3262 3253 /* And finally now the object is completely decoupled from this vma, 3263 3254 * we can drop its hold on the backing storage and allow it to be ··· 3773 3768 goto err_remove_node; 3774 3769 } 3775 3770 3776 - ret = i915_gem_gtt_prepare_object(obj); 3777 - if (ret) 3778 - goto err_remove_node; 3779 - 3780 3771 trace_i915_vma_bind(vma, flags); 3781 3772 ret = i915_vma_bind(vma, obj->cache_level, flags); 3782 3773 if (ret) 3783 - goto err_finish_gtt; 3774 + goto err_remove_node; 3784 3775 3785 3776 list_move_tail(&obj->global_list, &dev_priv->mm.bound_list); 3786 3777 list_add_tail(&vma->mm_list, &vm->inactive_list); 3787 3778 3788 3779 return vma; 3789 3780 3790 - err_finish_gtt: 3791 - i915_gem_gtt_finish_object(obj); 3792 3781 err_remove_node: 3793 3782 drm_mm_remove_node(&vma->node); 3794 3783 err_free_vma:
+3 -5
drivers/gpu/drm/i915/i915_gem_context.c
··· 135 135 136 136 void i915_gem_context_free(struct kref *ctx_ref) 137 137 { 138 - struct intel_context *ctx = container_of(ctx_ref, 139 - typeof(*ctx), ref); 138 + struct intel_context *ctx = container_of(ctx_ref, typeof(*ctx), ref); 140 139 141 140 trace_i915_context_free(ctx); 142 141 ··· 156 157 struct drm_i915_gem_object *obj; 157 158 int ret; 158 159 159 - obj = i915_gem_object_create_stolen(dev, size); 160 - if (obj == NULL) 161 - obj = i915_gem_alloc_object(dev, size); 160 + obj = i915_gem_alloc_object(dev, size); 162 161 if (obj == NULL) 163 162 return ERR_PTR(-ENOMEM); 164 163 ··· 194 197 195 198 kref_init(&ctx->ref); 196 199 list_add_tail(&ctx->link, &dev_priv->context_list); 200 + ctx->i915 = dev_priv; 197 201 198 202 if (dev_priv->hw_context_size) { 199 203 struct drm_i915_gem_object *obj =
-2
drivers/gpu/drm/i915/i915_gem_dmabuf.c
··· 256 256 return PTR_ERR(sg); 257 257 258 258 obj->pages = sg; 259 - obj->has_dma_mapping = true; 260 259 return 0; 261 260 } 262 261 ··· 263 264 { 264 265 dma_buf_unmap_attachment(obj->base.import_attach, 265 266 obj->pages, DMA_BIDIRECTIONAL); 266 - obj->has_dma_mapping = false; 267 267 } 268 268 269 269 static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = {
+18 -14
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 1723 1723 1724 1724 int i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj) 1725 1725 { 1726 - if (obj->has_dma_mapping) 1727 - return 0; 1728 - 1729 1726 if (!dma_map_sg(&obj->base.dev->pdev->dev, 1730 1727 obj->pages->sgl, obj->pages->nents, 1731 1728 PCI_DMA_BIDIRECTIONAL)) ··· 1969 1972 1970 1973 interruptible = do_idling(dev_priv); 1971 1974 1972 - if (!obj->has_dma_mapping) 1973 - dma_unmap_sg(&dev->pdev->dev, 1974 - obj->pages->sgl, obj->pages->nents, 1975 - PCI_DMA_BIDIRECTIONAL); 1975 + dma_unmap_sg(&dev->pdev->dev, obj->pages->sgl, obj->pages->nents, 1976 + PCI_DMA_BIDIRECTIONAL); 1976 1977 1977 1978 undo_idling(dev_priv, interruptible); 1978 1979 } ··· 2541 2546 struct drm_i915_private *dev_priv = dev->dev_private; 2542 2547 struct drm_i915_gem_object *obj; 2543 2548 struct i915_address_space *vm; 2549 + struct i915_vma *vma; 2550 + bool flush; 2544 2551 2545 2552 i915_check_and_clear_faults(dev); 2546 2553 ··· 2552 2555 dev_priv->gtt.base.total, 2553 2556 true); 2554 2557 2558 + /* Cache flush objects bound into GGTT and rebind them. */ 2559 + vm = &dev_priv->gtt.base; 2555 2560 list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) { 2556 - struct i915_vma *vma = i915_gem_obj_to_vma(obj, 2557 - &dev_priv->gtt.base); 2558 - if (!vma) 2559 - continue; 2561 + flush = false; 2562 + list_for_each_entry(vma, &obj->vma_list, vma_link) { 2563 + if (vma->vm != vm) 2564 + continue; 2560 2565 2561 - i915_gem_clflush_object(obj, obj->pin_display); 2562 - WARN_ON(i915_vma_bind(vma, obj->cache_level, PIN_UPDATE)); 2566 + WARN_ON(i915_vma_bind(vma, obj->cache_level, 2567 + PIN_UPDATE)); 2568 + 2569 + flush = true; 2570 + } 2571 + 2572 + if (flush) 2573 + i915_gem_clflush_object(obj, obj->pin_display); 2563 2574 } 2564 - 2565 2575 2566 2576 if (INTEL_INFO(dev)->gen >= 8) { 2567 2577 if (IS_CHERRYVIEW(dev) || IS_BROXTON(dev))
-1
drivers/gpu/drm/i915/i915_gem_stolen.c
··· 416 416 if (obj->pages == NULL) 417 417 goto cleanup; 418 418 419 - obj->has_dma_mapping = true; 420 419 i915_gem_object_pin_pages(obj); 421 420 obj->stolen = stolen; 422 421
+27 -2
drivers/gpu/drm/i915/i915_gem_userptr.c
··· 545 545 return ret; 546 546 } 547 547 548 + static int 549 + __i915_gem_userptr_set_pages(struct drm_i915_gem_object *obj, 550 + struct page **pvec, int num_pages) 551 + { 552 + int ret; 553 + 554 + ret = st_set_pages(&obj->pages, pvec, num_pages); 555 + if (ret) 556 + return ret; 557 + 558 + ret = i915_gem_gtt_prepare_object(obj); 559 + if (ret) { 560 + sg_free_table(obj->pages); 561 + kfree(obj->pages); 562 + obj->pages = NULL; 563 + } 564 + 565 + return ret; 566 + } 567 + 548 568 static void 549 569 __i915_gem_userptr_get_pages_worker(struct work_struct *_work) 550 570 { ··· 604 584 if (obj->userptr.work != &work->work) { 605 585 ret = 0; 606 586 } else if (pinned == num_pages) { 607 - ret = st_set_pages(&obj->pages, pvec, num_pages); 587 + ret = __i915_gem_userptr_set_pages(obj, pvec, num_pages); 608 588 if (ret == 0) { 609 589 list_add_tail(&obj->global_list, &to_i915(dev)->mm.unbound_list); 590 + obj->get_page.sg = obj->pages->sgl; 591 + obj->get_page.last = 0; 592 + 610 593 pinned = 0; 611 594 } 612 595 } ··· 716 693 } 717 694 } 718 695 } else { 719 - ret = st_set_pages(&obj->pages, pvec, num_pages); 696 + ret = __i915_gem_userptr_set_pages(obj, pvec, num_pages); 720 697 if (ret == 0) { 721 698 obj->userptr.work = NULL; 722 699 pinned = 0; ··· 737 714 738 715 if (obj->madv != I915_MADV_WILLNEED) 739 716 obj->dirty = 0; 717 + 718 + i915_gem_gtt_finish_object(obj); 740 719 741 720 for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0) { 742 721 struct page *page = sg_page_iter_page(&sg_iter);
+1 -1
drivers/gpu/drm/i915/i915_ioc32.c
··· 204 204 drm_ioctl_compat_t *fn = NULL; 205 205 int ret; 206 206 207 - if (nr < DRM_COMMAND_BASE) 207 + if (nr < DRM_COMMAND_BASE || nr >= DRM_COMMAND_END) 208 208 return drm_compat_ioctl(filp, cmd, arg); 209 209 210 210 if (nr < DRM_COMMAND_BASE + ARRAY_SIZE(i915_compat_ioctls))
+3 -10
drivers/gpu/drm/i915/i915_irq.c
··· 2706 2706 spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags); 2707 2707 } 2708 2708 2709 - static struct drm_i915_gem_request * 2710 - ring_last_request(struct intel_engine_cs *ring) 2711 - { 2712 - return list_entry(ring->request_list.prev, 2713 - struct drm_i915_gem_request, list); 2714 - } 2715 - 2716 2709 static bool 2717 - ring_idle(struct intel_engine_cs *ring) 2710 + ring_idle(struct intel_engine_cs *ring, u32 seqno) 2718 2711 { 2719 2712 return (list_empty(&ring->request_list) || 2720 - i915_gem_request_completed(ring_last_request(ring), false)); 2713 + i915_seqno_passed(seqno, ring->last_submitted_seqno)); 2721 2714 } 2722 2715 2723 2716 static bool ··· 2932 2939 acthd = intel_ring_get_active_head(ring); 2933 2940 2934 2941 if (ring->hangcheck.seqno == seqno) { 2935 - if (ring_idle(ring)) { 2942 + if (ring_idle(ring, seqno)) { 2936 2943 ring->hangcheck.action = HANGCHECK_IDLE; 2937 2944 2938 2945 if (waitqueue_active(&ring->irq_queue)) {
+1 -1
drivers/gpu/drm/i915/i915_trace.h
··· 727 727 TP_fast_assign( 728 728 __entry->ctx = ctx; 729 729 __entry->vm = ctx->ppgtt ? &ctx->ppgtt->base : NULL; 730 - __entry->dev = ctx->file_priv->dev_priv->dev->primary->index; 730 + __entry->dev = ctx->i915->dev->primary->index; 731 731 ), 732 732 733 733 TP_printk("dev=%u, ctx=%p, ctx_vm=%p",
+11 -7
drivers/gpu/drm/i915/intel_display.c
··· 4854 4854 struct intel_plane *intel_plane; 4855 4855 int pipe = intel_crtc->pipe; 4856 4856 4857 + if (!intel_crtc->active) 4858 + return; 4859 + 4857 4860 intel_crtc_wait_for_pending_flips(crtc); 4858 4861 4859 4862 intel_pre_disable_primary(crtc);
··· 6314 6311 struct drm_device *dev = crtc->dev; 6315 6312 struct drm_connector *connector; 6316 6313 struct drm_i915_private *dev_priv = dev->dev_private; 6317 - 6318 - /* crtc should still be enabled when we disable it. */ 6319 - WARN_ON(!crtc->state->enable); 6320 6314 6321 6315 intel_crtc_disable_planes(crtc); 6322 6316 dev_priv->display.crtc_disable(crtc);
··· 7887 7887 int pipe = pipe_config->cpu_transcoder; 7888 7888 enum dpio_channel port = vlv_pipe_to_channel(pipe); 7889 7889 intel_clock_t clock; 7890 - u32 cmn_dw13, pll_dw0, pll_dw1, pll_dw2; 7890 + u32 cmn_dw13, pll_dw0, pll_dw1, pll_dw2, pll_dw3; 7891 7891 int refclk = 100000; 7892 7892 7893 7893 mutex_lock(&dev_priv->sb_lock);
··· 7895 7895 pll_dw0 = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW0(port)); 7896 7896 pll_dw1 = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW1(port)); 7897 7897 pll_dw2 = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW2(port)); 7898 + pll_dw3 = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW3(port)); 7898 7899 mutex_unlock(&dev_priv->sb_lock); 7899 7900 7900 7901 clock.m1 = (pll_dw1 & 0x7) == DPIO_CHV_M1_DIV_BY_2 ? 2 : 0; 7901 - clock.m2 = ((pll_dw0 & 0xff) << 22) | (pll_dw2 & 0x3fffff); 7902 + clock.m2 = (pll_dw0 & 0xff) << 22; 7903 + if (pll_dw3 & DPIO_CHV_FRAC_DIV_EN) 7904 + clock.m2 |= pll_dw2 & 0x3fffff; 7902 7905 clock.n = (pll_dw1 >> DPIO_CHV_N_DIV_SHIFT) & 0xf; 7903 7906 clock.p1 = (cmn_dw13 >> DPIO_CHV_P1_DIV_SHIFT) & 0x7; 7904 7907 clock.p2 = (cmn_dw13 >> DPIO_CHV_P2_DIV_SHIFT) & 0x1f;
··· 12588 12585 continue; 12589 12586 12590 12587 if (!crtc_state->enable) { 12591 - intel_crtc_disable(crtc); 12588 + if (crtc->state->enable) 12589 + intel_crtc_disable(crtc); 12592 12590 } else if (crtc->state->enable) { 12593 12591 intel_crtc_disable_planes(crtc); 12594 12592 dev_priv->display.crtc_disable(crtc);
··· 13274 13270 if (ret) 13275 13271 return ret; 13276 13272 13277 - if (intel_crtc->active) { 13273 + if (crtc_state ? crtc_state->base.active : intel_crtc->active) { 13278 13274 struct intel_plane_state *old_state = 13279 13275 to_intel_plane_state(plane->state); 13280 13276
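The CHV PLL readback change above only ORs the 22-bit fractional part into M2 when the fractional divider is actually enabled in PLL_DW3. A self-contained sketch of that reconstruction (the bit position of the enable flag here is hypothetical, not the real DPIO_CHV_FRAC_DIV_EN value):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for DPIO_CHV_FRAC_DIV_EN; the real bit
 * position lives in the i915 register headers. */
#define DPIO_CHV_FRAC_DIV_EN_SKETCH (1u << 16)

/* Reconstruct M2 from the CHV PLL dwords: the integer part comes from
 * pll_dw0, and the 22-bit fraction in pll_dw2 only counts when the
 * fractional divider is enabled in pll_dw3. */
static uint32_t chv_m2_sketch(uint32_t pll_dw0, uint32_t pll_dw2,
			      uint32_t pll_dw3)
{
	uint32_t m2 = (pll_dw0 & 0xff) << 22;

	if (pll_dw3 & DPIO_CHV_FRAC_DIV_EN_SKETCH)
		m2 |= pll_dw2 & 0x3fffff;
	return m2;
}
```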
+7
drivers/gpu/drm/i915/intel_ringbuffer.h
··· 275 275 * Do we have some not yet emitted requests outstanding? 276 276 */ 277 277 struct drm_i915_gem_request *outstanding_lazy_request; 278 + /** 279 + * Seqno of request most recently submitted to request_list. 280 + * Used exclusively by hang checker to avoid grabbing lock while 281 + * inspecting request list. 282 + */ 283 + u32 last_submitted_seqno; 284 + 278 285 bool gpu_caches_dirty; 279 286 280 287 wait_queue_head_t irq_queue;
+1 -1
drivers/gpu/drm/imx/imx-tve.c
··· 301 301 302 302 switch (tve->mode) { 303 303 case TVE_MODE_VGA: 304 - imx_drm_set_bus_format_pins(encoder, MEDIA_BUS_FMT_YUV8_1X24, 304 + imx_drm_set_bus_format_pins(encoder, MEDIA_BUS_FMT_GBR888_1X24, 305 305 tve->hsync_pin, tve->vsync_pin); 306 306 break; 307 307 case TVE_MODE_TVOUT:
+15 -6
drivers/gpu/drm/imx/parallel-display.c
··· 21 21 #include <drm/drm_panel.h> 22 22 #include <linux/videodev2.h> 23 23 #include <video/of_display_timing.h> 24 + #include <linux/of_graph.h> 24 25 25 26 #include "imx-drm.h" 26 27 ··· 209 208 { 210 209 struct drm_device *drm = data; 211 210 struct device_node *np = dev->of_node; 212 - struct device_node *panel_node; 211 + struct device_node *port; 213 212 const u8 *edidp; 214 213 struct imx_parallel_display *imxpd; 215 214 int ret; ··· 235 234 imxpd->bus_format = MEDIA_BUS_FMT_RGB666_1X24_CPADHI; 236 235 } 237 236 238 - panel_node = of_parse_phandle(np, "fsl,panel", 0); 239 - if (panel_node) { 240 - imxpd->panel = of_drm_find_panel(panel_node); 241 - if (!imxpd->panel) 242 - return -EPROBE_DEFER; 237 + /* port@1 is the output port */ 238 + port = of_graph_get_port_by_id(np, 1); 239 + if (port) { 240 + struct device_node *endpoint, *remote; 241 + 242 + endpoint = of_get_child_by_name(port, "endpoint"); 243 + if (endpoint) { 244 + remote = of_graph_get_remote_port_parent(endpoint); 245 + if (remote) 246 + imxpd->panel = of_drm_find_panel(remote); 247 + if (!imxpd->panel) 248 + return -EPROBE_DEFER; 249 + } 243 250 } 244 251 245 252 imxpd->dev = dev;
+1 -1
drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
··· 285 285 286 286 if (wait) { 287 287 if (!wait_for_completion_timeout(&engine->compl, 288 - msecs_to_jiffies(1))) { 288 + msecs_to_jiffies(100))) { 289 289 dev_err(dmm->dev, "timed out waiting for done\n"); 290 290 ret = -ETIMEDOUT; 291 291 }
+3 -3
drivers/gpu/drm/omapdrm/omap_drv.h
··· 177 177 struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos); 178 178 struct drm_gem_object *omap_framebuffer_bo(struct drm_framebuffer *fb, int p); 179 179 int omap_framebuffer_pin(struct drm_framebuffer *fb); 180 - int omap_framebuffer_unpin(struct drm_framebuffer *fb); 180 + void omap_framebuffer_unpin(struct drm_framebuffer *fb); 181 181 void omap_framebuffer_update_scanout(struct drm_framebuffer *fb, 182 182 struct omap_drm_window *win, struct omap_overlay_info *info); 183 183 struct drm_connector *omap_framebuffer_get_next_connector( ··· 211 211 enum dma_data_direction dir); 212 212 int omap_gem_get_paddr(struct drm_gem_object *obj, 213 213 dma_addr_t *paddr, bool remap); 214 - int omap_gem_put_paddr(struct drm_gem_object *obj); 214 + void omap_gem_put_paddr(struct drm_gem_object *obj); 215 215 int omap_gem_get_pages(struct drm_gem_object *obj, struct page ***pages, 216 216 bool remap); 217 217 int omap_gem_put_pages(struct drm_gem_object *obj); ··· 236 236 /* PVR needs alignment to 8 pixels.. right now that is the most 237 237 * restrictive stride requirement.. 238 238 */ 239 - return ALIGN(pitch, 8 * bytespp); 239 + return roundup(pitch, 8 * bytespp); 240 240 } 241 241 242 242 /* map crtc to vblank mask */
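The omap_drv.h change above swaps ALIGN() for roundup() when padding the pitch: ALIGN() uses mask arithmetic and is only correct for power-of-two alignments, while 8 * bytespp can be 24 for 3-byte-per-pixel formats. A small demonstration of the difference (the macro names are local stand-ins for the kernel helpers):

```c
#include <assert.h>

/* Kernel-style helpers: the mask-based form is only correct when the
 * alignment is a power of two; the divide-based form works for any
 * multiple (e.g. 8 pixels * 3 bytes = 24). */
#define ALIGN_POW2(x, a) (((x) + (a) - 1) & ~((a) - 1))
#define ROUNDUP(x, y)    ((((x) + (y) - 1) / (y)) * (y))
```

For a 100-byte pitch and a 24-byte requirement, ROUNDUP gives 120, while the mask-based ALIGN produces a value that is not a multiple of 24 at all, which is why the header now uses roundup().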
+4 -12
drivers/gpu/drm/omapdrm/omap_fb.c
··· 287 287 } 288 288 289 289 /* unpin, no longer being scanned out: */ 290 - int omap_framebuffer_unpin(struct drm_framebuffer *fb) 290 + void omap_framebuffer_unpin(struct drm_framebuffer *fb) 291 291 { 292 292 struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb); 293 - int ret, i, n = drm_format_num_planes(fb->pixel_format); 293 + int i, n = drm_format_num_planes(fb->pixel_format); 294 294 295 295 mutex_lock(&omap_fb->lock); 296 296 ··· 298 298 299 299 if (omap_fb->pin_count > 0) { 300 300 mutex_unlock(&omap_fb->lock); 301 - return 0; 301 + return; 302 302 } 303 303 304 304 for (i = 0; i < n; i++) { 305 305 struct plane *plane = &omap_fb->planes[i]; 306 - ret = omap_gem_put_paddr(plane->bo); 307 - if (ret) 308 - goto fail; 306 + omap_gem_put_paddr(plane->bo); 309 307 plane->paddr = 0; 310 308 } 311 309 312 310 mutex_unlock(&omap_fb->lock); 313 - 314 - return 0; 315 - 316 - fail: 317 - mutex_unlock(&omap_fb->lock); 318 - return ret; 319 311 } 320 312 321 313 struct drm_gem_object *omap_framebuffer_bo(struct drm_framebuffer *fb, int p)
+1 -1
drivers/gpu/drm/omapdrm/omap_fbdev.c
··· 135 135 fbdev->ywrap_enabled = priv->has_dmm && ywrap_enabled; 136 136 if (fbdev->ywrap_enabled) { 137 137 /* need to align pitch to page size if using DMM scrolling */ 138 - mode_cmd.pitches[0] = ALIGN(mode_cmd.pitches[0], PAGE_SIZE); 138 + mode_cmd.pitches[0] = PAGE_ALIGN(mode_cmd.pitches[0]); 139 139 } 140 140 141 141 /* allocate backing bo */
+14 -12
drivers/gpu/drm/omapdrm/omap_gem.c
··· 808 808 /* Release physical address, when DMA is no longer being performed.. this 809 809 * could potentially unpin and unmap buffers from TILER 810 810 */ 811 - int omap_gem_put_paddr(struct drm_gem_object *obj) 811 + void omap_gem_put_paddr(struct drm_gem_object *obj) 812 812 { 813 813 struct omap_gem_object *omap_obj = to_omap_bo(obj); 814 - int ret = 0; 814 + int ret; 815 815 816 816 mutex_lock(&obj->dev->struct_mutex); 817 817 if (omap_obj->paddr_cnt > 0) { ··· 821 821 if (ret) { 822 822 dev_err(obj->dev->dev, 823 823 "could not unpin pages: %d\n", ret); 824 - goto fail; 825 824 } 826 825 ret = tiler_release(omap_obj->block); 827 826 if (ret) { ··· 831 832 omap_obj->block = NULL; 832 833 } 833 834 } 834 - fail: 835 + 835 836 mutex_unlock(&obj->dev->struct_mutex); 836 - return ret; 837 837 } 838 838 839 839 /* Get rotated scanout address (only valid if already pinned), at the ··· 1376 1378 1377 1379 omap_obj = kzalloc(sizeof(*omap_obj), GFP_KERNEL); 1378 1380 if (!omap_obj) 1379 - goto fail; 1380 - 1381 - spin_lock(&priv->list_lock); 1382 - list_add(&omap_obj->mm_list, &priv->obj_list); 1383 - spin_unlock(&priv->list_lock); 1381 + return NULL; 1384 1382 1385 1383 obj = &omap_obj->base; 1386 1384 ··· 1386 1392 */ 1387 1393 omap_obj->vaddr = dma_alloc_writecombine(dev->dev, size, 1388 1394 &omap_obj->paddr, GFP_KERNEL); 1389 - if (omap_obj->vaddr) 1390 - flags |= OMAP_BO_DMA; 1395 + if (!omap_obj->vaddr) { 1396 + kfree(omap_obj); 1391 1397 1398 + return NULL; 1399 + } 1400 + 1401 + flags |= OMAP_BO_DMA; 1392 1402 } 1403 + 1404 + spin_lock(&priv->list_lock); 1405 + list_add(&omap_obj->mm_list, &priv->obj_list); 1406 + spin_unlock(&priv->list_lock); 1393 1407 1394 1408 omap_obj->flags = flags; 1395 1409
+26
drivers/gpu/drm/omapdrm/omap_plane.c
··· 17 17 * this program. If not, see <http://www.gnu.org/licenses/>. 18 18 */ 19 19 20 + #include <drm/drm_atomic.h> 20 21 #include <drm/drm_atomic_helper.h> 21 22 #include <drm/drm_plane_helper.h> 22 23 ··· 154 153 dispc_ovl_enable(omap_plane->id, false); 155 154 } 156 155 156 + static int omap_plane_atomic_check(struct drm_plane *plane, 157 + struct drm_plane_state *state) 158 + { 159 + struct drm_crtc_state *crtc_state; 160 + 161 + if (!state->crtc) 162 + return 0; 163 + 164 + crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc); 165 + if (IS_ERR(crtc_state)) 166 + return PTR_ERR(crtc_state); 167 + 168 + if (state->crtc_x < 0 || state->crtc_y < 0) 169 + return -EINVAL; 170 + 171 + if (state->crtc_x + state->crtc_w > crtc_state->adjusted_mode.hdisplay) 172 + return -EINVAL; 173 + 174 + if (state->crtc_y + state->crtc_h > crtc_state->adjusted_mode.vdisplay) 175 + return -EINVAL; 176 + 177 + return 0; 178 + } 179 + 157 180 static const struct drm_plane_helper_funcs omap_plane_helper_funcs = { 158 181 .prepare_fb = omap_plane_prepare_fb, 159 182 .cleanup_fb = omap_plane_cleanup_fb, 183 + .atomic_check = omap_plane_atomic_check, 160 184 .atomic_update = omap_plane_atomic_update, 161 185 .atomic_disable = omap_plane_atomic_disable, 162 186 };
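The new omap_plane_atomic_check() above rejects plane configurations that fall outside the CRTC's active area. The bounds test reduces to a few integer comparisons, sketched here with plain ints standing in for the drm_plane_state fields:

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the checks in the new omap_plane_atomic_check(): the plane
 * rectangle (x, y, w, h) must sit entirely inside the CRTC's
 * hdisplay x vdisplay active area. */
static bool plane_fits(int x, int y, int w, int h,
		       int hdisplay, int vdisplay)
{
	if (x < 0 || y < 0)
		return false;
	if (x + w > hdisplay)
		return false;
	if (y + h > vdisplay)
		return false;
	return true;
}
```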
+1 -1
drivers/gpu/drm/radeon/ci_dpm.c
··· 5818 5818 tmp |= DPM_ENABLED; 5819 5819 break; 5820 5820 default: 5821 - DRM_ERROR("Invalid PCC GPIO: %u!\n", gpio.shift); 5821 + DRM_DEBUG("Invalid PCC GPIO: %u!\n", gpio.shift); 5822 5822 break; 5823 5823 } 5824 5824 WREG32_SMC(CNB_PWRMGT_CNTL, tmp);
+192 -144
drivers/gpu/drm/radeon/cik.c
··· 7964 7964 case 1: /* D1 vblank/vline */ 7965 7965 switch (src_data) { 7966 7966 case 0: /* D1 vblank */ 7967 - if (rdev->irq.stat_regs.cik.disp_int & LB_D1_VBLANK_INTERRUPT) { 7968 - if (rdev->irq.crtc_vblank_int[0]) { 7969 - drm_handle_vblank(rdev->ddev, 0); 7970 - rdev->pm.vblank_sync = true; 7971 - wake_up(&rdev->irq.vblank_queue); 7972 - } 7973 - if (atomic_read(&rdev->irq.pflip[0])) 7974 - radeon_crtc_handle_vblank(rdev, 0); 7975 - rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VBLANK_INTERRUPT; 7976 - DRM_DEBUG("IH: D1 vblank\n"); 7967 + if (!(rdev->irq.stat_regs.cik.disp_int & LB_D1_VBLANK_INTERRUPT)) 7968 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 7969 + 7970 + if (rdev->irq.crtc_vblank_int[0]) { 7971 + drm_handle_vblank(rdev->ddev, 0); 7972 + rdev->pm.vblank_sync = true; 7973 + wake_up(&rdev->irq.vblank_queue); 7977 7974 } 7975 + if (atomic_read(&rdev->irq.pflip[0])) 7976 + radeon_crtc_handle_vblank(rdev, 0); 7977 + rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VBLANK_INTERRUPT; 7978 + DRM_DEBUG("IH: D1 vblank\n"); 7979 + 7978 7980 break; 7979 7981 case 1: /* D1 vline */ 7980 - if (rdev->irq.stat_regs.cik.disp_int & LB_D1_VLINE_INTERRUPT) { 7981 - rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VLINE_INTERRUPT; 7982 - DRM_DEBUG("IH: D1 vline\n"); 7983 - } 7982 + if (!(rdev->irq.stat_regs.cik.disp_int & LB_D1_VLINE_INTERRUPT)) 7983 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 7984 + 7985 + rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VLINE_INTERRUPT; 7986 + DRM_DEBUG("IH: D1 vline\n"); 7987 + 7984 7988 break; 7985 7989 default: 7986 7990 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 7994 7990 case 2: /* D2 vblank/vline */ 7995 7991 switch (src_data) { 7996 7992 case 0: /* D2 vblank */ 7997 - if (rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VBLANK_INTERRUPT) { 7998 - if (rdev->irq.crtc_vblank_int[1]) { 7999 - drm_handle_vblank(rdev->ddev, 1); 8000 - rdev->pm.vblank_sync = true; 8001 - wake_up(&rdev->irq.vblank_queue); 8002 - } 
8003 - if (atomic_read(&rdev->irq.pflip[1])) 8004 - radeon_crtc_handle_vblank(rdev, 1); 8005 - rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT; 8006 - DRM_DEBUG("IH: D2 vblank\n"); 7993 + if (!(rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VBLANK_INTERRUPT)) 7994 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 7995 + 7996 + if (rdev->irq.crtc_vblank_int[1]) { 7997 + drm_handle_vblank(rdev->ddev, 1); 7998 + rdev->pm.vblank_sync = true; 7999 + wake_up(&rdev->irq.vblank_queue); 8007 8000 } 8001 + if (atomic_read(&rdev->irq.pflip[1])) 8002 + radeon_crtc_handle_vblank(rdev, 1); 8003 + rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT; 8004 + DRM_DEBUG("IH: D2 vblank\n"); 8005 + 8008 8006 break; 8009 8007 case 1: /* D2 vline */ 8010 - if (rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VLINE_INTERRUPT) { 8011 - rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT; 8012 - DRM_DEBUG("IH: D2 vline\n"); 8013 - } 8008 + if (!(rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VLINE_INTERRUPT)) 8009 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8010 + 8011 + rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT; 8012 + DRM_DEBUG("IH: D2 vline\n"); 8013 + 8014 8014 break; 8015 8015 default: 8016 8016 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 8024 8016 case 3: /* D3 vblank/vline */ 8025 8017 switch (src_data) { 8026 8018 case 0: /* D3 vblank */ 8027 - if (rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) { 8028 - if (rdev->irq.crtc_vblank_int[2]) { 8029 - drm_handle_vblank(rdev->ddev, 2); 8030 - rdev->pm.vblank_sync = true; 8031 - wake_up(&rdev->irq.vblank_queue); 8032 - } 8033 - if (atomic_read(&rdev->irq.pflip[2])) 8034 - radeon_crtc_handle_vblank(rdev, 2); 8035 - rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT; 8036 - DRM_DEBUG("IH: D3 vblank\n"); 8019 + if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT)) 8020 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8021 + 8022 + if (rdev->irq.crtc_vblank_int[2]) { 8023 + drm_handle_vblank(rdev->ddev, 2); 8024 + rdev->pm.vblank_sync = true; 8025 + wake_up(&rdev->irq.vblank_queue); 8037 8026 } 8027 + if (atomic_read(&rdev->irq.pflip[2])) 8028 + radeon_crtc_handle_vblank(rdev, 2); 8029 + rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT; 8030 + DRM_DEBUG("IH: D3 vblank\n"); 8031 + 8038 8032 break; 8039 8033 case 1: /* D3 vline */ 8040 - if (rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VLINE_INTERRUPT) { 8041 - rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT; 8042 - DRM_DEBUG("IH: D3 vline\n"); 8043 - } 8034 + if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VLINE_INTERRUPT)) 8035 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8036 + 8037 + rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT; 8038 + DRM_DEBUG("IH: D3 vline\n"); 8039 + 8044 8040 break; 8045 8041 default: 8046 8042 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 8054 8042 case 4: /* D4 vblank/vline */ 8055 8043 switch (src_data) { 8056 8044 case 0: /* D4 vblank */ 8057 - if (rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) { 8058 - if (rdev->irq.crtc_vblank_int[3]) { 8059 - drm_handle_vblank(rdev->ddev, 3); 8060 - rdev->pm.vblank_sync = true; 8061 - wake_up(&rdev->irq.vblank_queue); 8062 - } 8063 - if (atomic_read(&rdev->irq.pflip[3])) 8064 - radeon_crtc_handle_vblank(rdev, 3); 8065 - rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT; 8066 - DRM_DEBUG("IH: D4 vblank\n"); 8045 + if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT)) 8046 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8047 + 8048 + if (rdev->irq.crtc_vblank_int[3]) { 8049 + drm_handle_vblank(rdev->ddev, 3); 8050 + rdev->pm.vblank_sync = true; 8051 + wake_up(&rdev->irq.vblank_queue); 8067 8052 } 8053 + if (atomic_read(&rdev->irq.pflip[3])) 8054 + radeon_crtc_handle_vblank(rdev, 3); 8055 + rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT; 8056 + DRM_DEBUG("IH: D4 vblank\n"); 8057 + 8068 8058 break; 8069 8059 case 1: /* D4 vline */ 8070 - if (rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VLINE_INTERRUPT) { 8071 - rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT; 8072 - DRM_DEBUG("IH: D4 vline\n"); 8073 - } 8060 + if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VLINE_INTERRUPT)) 8061 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8062 + 8063 + rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT; 8064 + DRM_DEBUG("IH: D4 vline\n"); 8065 + 8074 8066 break; 8075 8067 default: 8076 8068 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 8084 8068 case 5: /* D5 vblank/vline */ 8085 8069 switch (src_data) { 8086 8070 case 0: /* D5 vblank */ 8087 - if (rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) { 8088 - if (rdev->irq.crtc_vblank_int[4]) { 8089 - drm_handle_vblank(rdev->ddev, 4); 8090 - rdev->pm.vblank_sync = true; 8091 - wake_up(&rdev->irq.vblank_queue); 8092 - } 8093 - if (atomic_read(&rdev->irq.pflip[4])) 8094 - radeon_crtc_handle_vblank(rdev, 4); 8095 - rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT; 8096 - DRM_DEBUG("IH: D5 vblank\n"); 8071 + if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT)) 8072 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8073 + 8074 + if (rdev->irq.crtc_vblank_int[4]) { 8075 + drm_handle_vblank(rdev->ddev, 4); 8076 + rdev->pm.vblank_sync = true; 8077 + wake_up(&rdev->irq.vblank_queue); 8097 8078 } 8079 + if (atomic_read(&rdev->irq.pflip[4])) 8080 + radeon_crtc_handle_vblank(rdev, 4); 8081 + rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT; 8082 + DRM_DEBUG("IH: D5 vblank\n"); 8083 + 8098 8084 break; 8099 8085 case 1: /* D5 vline */ 8100 - if (rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VLINE_INTERRUPT) { 8101 - rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT; 8102 - DRM_DEBUG("IH: D5 vline\n"); 8103 - } 8086 + if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VLINE_INTERRUPT)) 8087 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8088 + 8089 + rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT; 8090 + DRM_DEBUG("IH: D5 vline\n"); 8091 + 8104 8092 break; 8105 8093 default: 8106 8094 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 8114 8094 case 6: /* D6 vblank/vline */ 8115 8095 switch (src_data) { 8116 8096 case 0: /* D6 vblank */ 8117 - if (rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) { 8118 - if (rdev->irq.crtc_vblank_int[5]) { 8119 - drm_handle_vblank(rdev->ddev, 5); 8120 - rdev->pm.vblank_sync = true; 8121 - wake_up(&rdev->irq.vblank_queue); 8122 - } 8123 - if (atomic_read(&rdev->irq.pflip[5])) 8124 - radeon_crtc_handle_vblank(rdev, 5); 8125 - rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT; 8126 - DRM_DEBUG("IH: D6 vblank\n"); 8097 + if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT)) 8098 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8099 + 8100 + if (rdev->irq.crtc_vblank_int[5]) { 8101 + drm_handle_vblank(rdev->ddev, 5); 8102 + rdev->pm.vblank_sync = true; 8103 + wake_up(&rdev->irq.vblank_queue); 8127 8104 } 8105 + if (atomic_read(&rdev->irq.pflip[5])) 8106 + radeon_crtc_handle_vblank(rdev, 5); 8107 + rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT; 8108 + DRM_DEBUG("IH: D6 vblank\n"); 8109 + 8128 8110 break; 8129 8111 case 1: /* D6 vline */ 8130 - if (rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VLINE_INTERRUPT) { 8131 - rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT; 8132 - DRM_DEBUG("IH: D6 vline\n"); 8133 - } 8112 + if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VLINE_INTERRUPT)) 8113 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8114 + 8115 + rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT; 8116 + DRM_DEBUG("IH: D6 vline\n"); 8117 + 8134 8118 break; 8135 8119 default: 8136 8120 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 8154 8130 case 42: /* HPD hotplug */ 8155 8131 switch (src_data) { 8156 8132 case 0: 8157 - if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_INTERRUPT) { 8158 - rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_INTERRUPT; 8159 - queue_hotplug = true; 8160 - DRM_DEBUG("IH: HPD1\n"); 8161 - } 8133 + if (!(rdev->irq.stat_regs.cik.disp_int & DC_HPD1_INTERRUPT)) 8134 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8135 + 8136 + rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_INTERRUPT; 8137 + queue_hotplug = true; 8138 + DRM_DEBUG("IH: HPD1\n"); 8139 + 8162 8140 break; 8163 8141 case 1: 8164 - if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_INTERRUPT) { 8165 - rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_INTERRUPT; 8166 - queue_hotplug = true; 8167 - DRM_DEBUG("IH: HPD2\n"); 8168 - } 8142 + if (!(rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_INTERRUPT)) 8143 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8144 + 8145 + rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_INTERRUPT; 8146 + queue_hotplug = true; 8147 + DRM_DEBUG("IH: HPD2\n"); 8148 + 8169 8149 break; 8170 8150 case 2: 8171 - if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_INTERRUPT) { 8172 - rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_INTERRUPT; 8173 - queue_hotplug = true; 8174 - DRM_DEBUG("IH: HPD3\n"); 8175 - } 8151 + if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_INTERRUPT)) 8152 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8153 + 8154 + rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_INTERRUPT; 8155 + queue_hotplug = true; 8156 + DRM_DEBUG("IH: HPD3\n"); 8157 + 8176 8158 break;
8177 8159 case 3: 8178 - if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_INTERRUPT) { 8179 - rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_INTERRUPT; 8180 - queue_hotplug = true; 8181 - DRM_DEBUG("IH: HPD4\n"); 8182 - } 8160 + if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_INTERRUPT)) 8161 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8162 + 8163 + rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_INTERRUPT; 8164 + queue_hotplug = true; 8165 + DRM_DEBUG("IH: HPD4\n"); 8166 + 8183 8167 break; 8184 8168 case 4: 8185 - if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_INTERRUPT) { 8186 - rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_INTERRUPT; 8187 - queue_hotplug = true; 8188 - DRM_DEBUG("IH: HPD5\n"); 8189 - } 8169 + if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_INTERRUPT)) 8170 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8171 + 8172 + rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_INTERRUPT; 8173 + queue_hotplug = true; 8174 + DRM_DEBUG("IH: HPD5\n"); 8175 + 8190 8176 break; 8191 8177 case 5: 8192 - if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_INTERRUPT) { 8193 - rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_INTERRUPT; 8194 - queue_hotplug = true; 8195 - DRM_DEBUG("IH: HPD6\n"); 8196 - } 8178 + if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_INTERRUPT)) 8179 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8180 + 8181 + rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_INTERRUPT; 8182 + queue_hotplug = true; 8183 + DRM_DEBUG("IH: HPD6\n"); 8184 + 8197 8185 break;
8198 8186 case 6: 8199 - if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT) { 8200 - rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_RX_INTERRUPT; 8201 - queue_dp = true; 8202 - DRM_DEBUG("IH: HPD_RX 1\n"); 8203 - } 8187 + if (!(rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT)) 8188 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8189 + 8190 + rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_RX_INTERRUPT; 8191 + queue_dp = true; 8192 + DRM_DEBUG("IH: HPD_RX 1\n"); 8193 + 8204 8194 break; 8205 8195 case 7: 8206 - if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT) { 8207 - rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT; 8208 - queue_dp = true; 8209 - DRM_DEBUG("IH: HPD_RX 2\n"); 8210 - } 8196 + if (!(rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT)) 8197 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8198 + 8199 + rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT; 8200 + queue_dp = true; 8201 + DRM_DEBUG("IH: HPD_RX 2\n"); 8202 + 8211 8203 break; 8212 8204 case 8: 8213 - if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) { 8214 - rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT; 8215 - queue_dp = true; 8216 - DRM_DEBUG("IH: HPD_RX 3\n"); 8217 - } 8205 + if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT)) 8206 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8207 + 8208 + rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT; 8209 + queue_dp = true; 8210 + DRM_DEBUG("IH: HPD_RX 3\n"); 8211 + 8218 8212 break;
8219 8213 case 9: 8220 - if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) { 8221 - rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT; 8222 - queue_dp = true; 8223 - DRM_DEBUG("IH: HPD_RX 4\n"); 8224 - } 8214 + if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT)) 8215 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8216 + 8217 + rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT; 8218 + queue_dp = true; 8219 + DRM_DEBUG("IH: HPD_RX 4\n"); 8220 + 8225 8221 break; 8226 8222 case 10: 8227 - if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) { 8228 - rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT; 8229 - queue_dp = true; 8230 - DRM_DEBUG("IH: HPD_RX 5\n"); 8231 - } 8223 + if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT)) 8224 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8225 + 8226 + rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT; 8227 + queue_dp = true; 8228 + DRM_DEBUG("IH: HPD_RX 5\n"); 8229 + 8232 8230 break; 8233 8231 case 11: 8234 - if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) { 8235 - rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT; 8236 - queue_dp = true; 8237 - DRM_DEBUG("IH: HPD_RX 6\n"); 8238 - } 8232 + if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT)) 8233 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 8234 + 8235 + rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT; 8236 + queue_dp = true; 8237 + DRM_DEBUG("IH: HPD_RX 6\n"); 8238 + 8239 8239 break; 8240 8240 default: 8241 8241 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
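The cik.c rework above (and the matching evergreen.c rework below) flattens each IH event handler: instead of wrapping the whole body in "if (status & bit)", it logs a debug note when the bit is unexpectedly clear, then processes and acks the event unconditionally, removing one indentation level per case. A standalone sketch of that pattern with a hypothetical status bit:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for an LB_*_VBLANK_INTERRUPT status bit. */
#define LB_D1_VBLANK_SKETCH (1u << 3)

/* Guard-clause form of the refactored handler: warn (debug only) when
 * the IH ring delivered an event whose status bit is not asserted,
 * then handle and ack the event regardless. */
static bool handle_d1_vblank_sketch(uint32_t *disp_int)
{
	if (!(*disp_int & LB_D1_VBLANK_SKETCH))
		fprintf(stderr, "IH: IH event w/o asserted irq bit?\n");

	*disp_int &= ~LB_D1_VBLANK_SKETCH;	/* ack the status bit */
	return true;				/* event was handled */
}
```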
+217 -175
drivers/gpu/drm/radeon/evergreen.c
··· 4924 4924 return IRQ_NONE; 4925 4925 4926 4926 rptr = rdev->ih.rptr; 4927 - DRM_DEBUG("r600_irq_process start: rptr %d, wptr %d\n", rptr, wptr); 4927 + DRM_DEBUG("evergreen_irq_process start: rptr %d, wptr %d\n", rptr, wptr); 4928 4928 4929 4929 /* Order reading of wptr vs. reading of IH ring data */ 4930 4930 rmb();
··· 4942 4942 case 1: /* D1 vblank/vline */ 4943 4943 switch (src_data) { 4944 4944 case 0: /* D1 vblank */ 4945 - if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT) { 4946 - if (rdev->irq.crtc_vblank_int[0]) { 4947 - drm_handle_vblank(rdev->ddev, 0); 4948 - rdev->pm.vblank_sync = true; 4949 - wake_up(&rdev->irq.vblank_queue); 4950 - } 4951 - if (atomic_read(&rdev->irq.pflip[0])) 4952 - radeon_crtc_handle_vblank(rdev, 0); 4953 - rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT; 4954 - DRM_DEBUG("IH: D1 vblank\n"); 4945 + if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT)) 4946 + DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n"); 4947 + 4948 + if (rdev->irq.crtc_vblank_int[0]) { 4949 + drm_handle_vblank(rdev->ddev, 0); 4950 + rdev->pm.vblank_sync = true; 4951 + wake_up(&rdev->irq.vblank_queue); 4955 4952 } 4953 + if (atomic_read(&rdev->irq.pflip[0])) 4954 + radeon_crtc_handle_vblank(rdev, 0); 4955 + rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT; 4956 + DRM_DEBUG("IH: D1 vblank\n"); 4957 + 4956 4958 break; 4957 4959 case 1: /* D1 vline */ 4958 - if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT) { 4959 - rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT; 4960 - DRM_DEBUG("IH: D1 vline\n"); 4961 - } 4960 + if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT)) 4961 + DRM_DEBUG("IH: D1 vline - IH event w/o asserted irq bit?\n"); 4962 + 4963 + rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT; 4964 + DRM_DEBUG("IH: D1 vline\n"); 4965 + 4962 4966 break; 4963 4967 default: 4964 4968 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 4972 4968 case 2: /* D2 vblank/vline */ 4973 4969 switch (src_data) { 4974 4970 case 0: /* D2 vblank */ 4975 - if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT) { 4976 - if (rdev->irq.crtc_vblank_int[1]) { 4977 - drm_handle_vblank(rdev->ddev, 1); 4978 - rdev->pm.vblank_sync = true; 4979 - wake_up(&rdev->irq.vblank_queue); 4980 - } 4981 - if (atomic_read(&rdev->irq.pflip[1])) 4982 - radeon_crtc_handle_vblank(rdev, 1); 4983 - rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT; 4984 - DRM_DEBUG("IH: D2 vblank\n"); 4971 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT)) 4972 + DRM_DEBUG("IH: D2 vblank - IH event w/o asserted irq bit?\n"); 4973 + 4974 + if (rdev->irq.crtc_vblank_int[1]) { 4975 + drm_handle_vblank(rdev->ddev, 1); 4976 + rdev->pm.vblank_sync = true; 4977 + wake_up(&rdev->irq.vblank_queue); 4985 4978 } 4979 + if (atomic_read(&rdev->irq.pflip[1])) 4980 + radeon_crtc_handle_vblank(rdev, 1); 4981 + rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT; 4982 + DRM_DEBUG("IH: D2 vblank\n"); 4983 + 4986 4984 break; 4987 4985 case 1: /* D2 vline */ 4988 - if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT) { 4989 - rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT; 4990 - DRM_DEBUG("IH: D2 vline\n"); 4991 - } 4986 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT)) 4987 + DRM_DEBUG("IH: D2 vline - IH event w/o asserted irq bit?\n"); 4988 + 4989 + rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT; 4990 + DRM_DEBUG("IH: D2 vline\n"); 4991 + 4992 4992 break; 4993 4993 default: 4994 4994 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 5002 4994 case 3: /* D3 vblank/vline */ 5003 4995 switch (src_data) { 5004 4996 case 0: /* D3 vblank */ 5005 - if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) { 5006 - if (rdev->irq.crtc_vblank_int[2]) { 5007 - drm_handle_vblank(rdev->ddev, 2); 5008 - rdev->pm.vblank_sync = true; 5009 - wake_up(&rdev->irq.vblank_queue); 5010 - } 5011 - if (atomic_read(&rdev->irq.pflip[2])) 5012 - radeon_crtc_handle_vblank(rdev, 2); 5013 - rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT; 5014 - DRM_DEBUG("IH: D3 vblank\n"); 4997 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT)) 4998 + DRM_DEBUG("IH: D3 vblank - IH event w/o asserted irq bit?\n"); 4999 + 5000 + if (rdev->irq.crtc_vblank_int[2]) { 5001 + drm_handle_vblank(rdev->ddev, 2); 5002 + rdev->pm.vblank_sync = true; 5003 + wake_up(&rdev->irq.vblank_queue); 5015 5004 } 5005 + if (atomic_read(&rdev->irq.pflip[2])) 5006 + radeon_crtc_handle_vblank(rdev, 2); 5007 + rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT; 5008 + DRM_DEBUG("IH: D3 vblank\n"); 5009 + 5016 5010 break; 5017 5011 case 1: /* D3 vline */ 5018 - if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT) { 5019 - rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT; 5020 - DRM_DEBUG("IH: D3 vline\n"); 5021 - } 5012 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT)) 5013 + DRM_DEBUG("IH: D3 vline - IH event w/o asserted irq bit?\n"); 5014 + 5015 + rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT; 5016 + DRM_DEBUG("IH: D3 vline\n"); 5017 + 5022 5018 break; 5023 5019 default: 5024 5020 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 5032 5020 case 4: /* D4 vblank/vline */ 5033 5021 switch (src_data) { 5034 5022 case 0: /* D4 vblank */ 5035 - if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) { 5036 - if (rdev->irq.crtc_vblank_int[3]) { 5037 - drm_handle_vblank(rdev->ddev, 3); 5038 - rdev->pm.vblank_sync = true; 5039 - wake_up(&rdev->irq.vblank_queue); 5040 - } 5041 - if (atomic_read(&rdev->irq.pflip[3])) 5042 - radeon_crtc_handle_vblank(rdev, 3); 5043 - rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT; 5044 - DRM_DEBUG("IH: D4 vblank\n"); 5023 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT)) 5024 + DRM_DEBUG("IH: D4 vblank - IH event w/o asserted irq bit?\n"); 5025 + 5026 + if (rdev->irq.crtc_vblank_int[3]) { 5027 + drm_handle_vblank(rdev->ddev, 3); 5028 + rdev->pm.vblank_sync = true; 5029 + wake_up(&rdev->irq.vblank_queue); 5045 5030 } 5031 + if (atomic_read(&rdev->irq.pflip[3])) 5032 + radeon_crtc_handle_vblank(rdev, 3); 5033 + rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT; 5034 + DRM_DEBUG("IH: D4 vblank\n"); 5035 + 5046 5036 break; 5047 5037 case 1: /* D4 vline */ 5048 - if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT) { 5049 - rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT; 5050 - DRM_DEBUG("IH: D4 vline\n"); 5051 - } 5038 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT)) 5039 + DRM_DEBUG("IH: D4 vline - IH event w/o asserted irq bit?\n"); 5040 + 5041 + rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT; 5042 + DRM_DEBUG("IH: D4 vline\n"); 5043 + 5052 5044 break; 5053 5045 default: 5054 5046 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
··· 5062 5046 case 5: /* D5 vblank/vline */ 5063 5047 switch (src_data) { 5064 5048 case 0: /* D5 vblank */ 5065 - if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) { 5066 - if (rdev->irq.crtc_vblank_int[4]) { 5067 - drm_handle_vblank(rdev->ddev, 4); 5068 - rdev->pm.vblank_sync = true; 5069 - wake_up(&rdev->irq.vblank_queue); 5070 - } 5071 - if (atomic_read(&rdev->irq.pflip[4])) 5072 - radeon_crtc_handle_vblank(rdev, 4); 5073 - rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT; 5074 - DRM_DEBUG("IH: D5 vblank\n"); 5049 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT))
5050 + DRM_DEBUG("IH: D5 vblank - IH event w/o asserted irq bit?\n"); 5051 + 5052 + if (rdev->irq.crtc_vblank_int[4]) { 5053 + drm_handle_vblank(rdev->ddev, 4); 5054 + rdev->pm.vblank_sync = true; 5055 + wake_up(&rdev->irq.vblank_queue); 5075 5056 } 5057 + if (atomic_read(&rdev->irq.pflip[4])) 5058 + radeon_crtc_handle_vblank(rdev, 4); 5059 + rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT; 5060 + DRM_DEBUG("IH: D5 vblank\n"); 5061 + 5076 5062 break; 5077 5063 case 1: /* D5 vline */ 5078 - if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT) { 5079 - rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT; 5080 - DRM_DEBUG("IH: D5 vline\n"); 5081 - } 5064 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT)) 5065 + DRM_DEBUG("IH: D5 vline - IH event w/o asserted irq bit?\n"); 5066 + 5067 + rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT; 5068 + DRM_DEBUG("IH: D5 vline\n"); 5069 + 5082 5070 break; 5083 5071 default: 5084 5072 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 5092 5072 case 6: /* D6 vblank/vline */ 5093 5073 switch (src_data) { 5094 5074 case 0: /* D6 vblank */ 5095 - if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) { 5096 - if (rdev->irq.crtc_vblank_int[5]) { 5097 - drm_handle_vblank(rdev->ddev, 5); 5098 - rdev->pm.vblank_sync = true; 5099 - wake_up(&rdev->irq.vblank_queue); 5100 - } 5101 - if (atomic_read(&rdev->irq.pflip[5])) 5102 - radeon_crtc_handle_vblank(rdev, 5); 5103 - rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT; 5104 - DRM_DEBUG("IH: D6 vblank\n"); 5075 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT)) 5076 + DRM_DEBUG("IH: D6 vblank - IH event w/o asserted irq bit?\n"); 5077 + 5078 + if (rdev->irq.crtc_vblank_int[5]) { 5079 + drm_handle_vblank(rdev->ddev, 5); 5080 + rdev->pm.vblank_sync = true; 5081 + wake_up(&rdev->irq.vblank_queue); 
5105 5082 } 5083 + if (atomic_read(&rdev->irq.pflip[5])) 5084 + radeon_crtc_handle_vblank(rdev, 5); 5085 + rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT; 5086 + DRM_DEBUG("IH: D6 vblank\n"); 5087 + 5106 5088 break; 5107 5089 case 1: /* D6 vline */ 5108 - if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT) { 5109 - rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT; 5110 - DRM_DEBUG("IH: D6 vline\n"); 5111 - } 5090 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT)) 5091 + DRM_DEBUG("IH: D6 vline - IH event w/o asserted irq bit?\n"); 5092 + 5093 + rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT; 5094 + DRM_DEBUG("IH: D6 vline\n"); 5095 + 5112 5096 break; 5113 5097 default: 5114 5098 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 5132 5108 case 42: /* HPD hotplug */ 5133 5109 switch (src_data) { 5134 5110 case 0: 5135 - if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT) { 5136 - rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT; 5137 - queue_hotplug = true; 5138 - DRM_DEBUG("IH: HPD1\n"); 5139 - } 5111 + if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT)) 5112 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5113 + 5114 + rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT; 5115 + queue_hotplug = true; 5116 + DRM_DEBUG("IH: HPD1\n"); 5140 5117 break; 5141 5118 case 1: 5142 - if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT) { 5143 - rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT; 5144 - queue_hotplug = true; 5145 - DRM_DEBUG("IH: HPD2\n"); 5146 - } 5119 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT)) 5120 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5121 + 5122 + rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT; 5123 + queue_hotplug = true; 5124 + DRM_DEBUG("IH: HPD2\n"); 5147 5125 break; 5148 
5126 case 2: 5149 - if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT) { 5150 - rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT; 5151 - queue_hotplug = true; 5152 - DRM_DEBUG("IH: HPD3\n"); 5153 - } 5127 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT)) 5128 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5129 + 5130 + rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT; 5131 + queue_hotplug = true; 5132 + DRM_DEBUG("IH: HPD3\n"); 5154 5133 break; 5155 5134 case 3: 5156 - if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT) { 5157 - rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT; 5158 - queue_hotplug = true; 5159 - DRM_DEBUG("IH: HPD4\n"); 5160 - } 5135 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT)) 5136 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5137 + 5138 + rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT; 5139 + queue_hotplug = true; 5140 + DRM_DEBUG("IH: HPD4\n"); 5161 5141 break; 5162 5142 case 4: 5163 - if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT) { 5164 - rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT; 5165 - queue_hotplug = true; 5166 - DRM_DEBUG("IH: HPD5\n"); 5167 - } 5143 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT)) 5144 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5145 + 5146 + rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT; 5147 + queue_hotplug = true; 5148 + DRM_DEBUG("IH: HPD5\n"); 5168 5149 break; 5169 5150 case 5: 5170 - if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT) { 5171 - rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT; 5172 - queue_hotplug = true; 5173 - DRM_DEBUG("IH: HPD6\n"); 5174 - } 5151 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT)) 5152 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5153 
+ 5154 + rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT; 5155 + queue_hotplug = true; 5156 + DRM_DEBUG("IH: HPD6\n"); 5175 5157 break; 5176 5158 case 6: 5177 - if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) { 5178 - rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT; 5179 - queue_dp = true; 5180 - DRM_DEBUG("IH: HPD_RX 1\n"); 5181 - } 5159 + if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT)) 5160 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5161 + 5162 + rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT; 5163 + queue_dp = true; 5164 + DRM_DEBUG("IH: HPD_RX 1\n"); 5182 5165 break; 5183 5166 case 7: 5184 - if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) { 5185 - rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT; 5186 - queue_dp = true; 5187 - DRM_DEBUG("IH: HPD_RX 2\n"); 5188 - } 5167 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT)) 5168 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5169 + 5170 + rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT; 5171 + queue_dp = true; 5172 + DRM_DEBUG("IH: HPD_RX 2\n"); 5189 5173 break; 5190 5174 case 8: 5191 - if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) { 5192 - rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT; 5193 - queue_dp = true; 5194 - DRM_DEBUG("IH: HPD_RX 3\n"); 5195 - } 5175 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT)) 5176 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5177 + 5178 + rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT; 5179 + queue_dp = true; 5180 + DRM_DEBUG("IH: HPD_RX 3\n"); 5196 5181 break; 5197 5182 case 9: 5198 - if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) { 5199 - rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT; 5200 - queue_dp = true; 5201 - 
DRM_DEBUG("IH: HPD_RX 4\n"); 5202 - } 5183 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT)) 5184 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5185 + 5186 + rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT; 5187 + queue_dp = true; 5188 + DRM_DEBUG("IH: HPD_RX 4\n"); 5203 5189 break; 5204 5190 case 10: 5205 - if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) { 5206 - rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT; 5207 - queue_dp = true; 5208 - DRM_DEBUG("IH: HPD_RX 5\n"); 5209 - } 5191 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT)) 5192 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5193 + 5194 + rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT; 5195 + queue_dp = true; 5196 + DRM_DEBUG("IH: HPD_RX 5\n"); 5210 5197 break; 5211 5198 case 11: 5212 - if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) { 5213 - rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT; 5214 - queue_dp = true; 5215 - DRM_DEBUG("IH: HPD_RX 6\n"); 5216 - } 5199 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT)) 5200 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5201 + 5202 + rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT; 5203 + queue_dp = true; 5204 + DRM_DEBUG("IH: HPD_RX 6\n"); 5217 5205 break; 5218 5206 default: 5219 5207 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 5235 5199 case 44: /* hdmi */ 5236 5200 switch (src_data) { 5237 5201 case 0: 5238 - if (rdev->irq.stat_regs.evergreen.afmt_status1 & AFMT_AZ_FORMAT_WTRIG) { 5239 - rdev->irq.stat_regs.evergreen.afmt_status1 &= ~AFMT_AZ_FORMAT_WTRIG; 5240 - queue_hdmi = true; 5241 - DRM_DEBUG("IH: HDMI0\n"); 5242 - } 5202 + if (!(rdev->irq.stat_regs.evergreen.afmt_status1 & AFMT_AZ_FORMAT_WTRIG)) 5203 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5204 + 5205 + 
rdev->irq.stat_regs.evergreen.afmt_status1 &= ~AFMT_AZ_FORMAT_WTRIG; 5206 + queue_hdmi = true; 5207 + DRM_DEBUG("IH: HDMI0\n"); 5243 5208 break; 5244 5209 case 1: 5245 - if (rdev->irq.stat_regs.evergreen.afmt_status2 & AFMT_AZ_FORMAT_WTRIG) { 5246 - rdev->irq.stat_regs.evergreen.afmt_status2 &= ~AFMT_AZ_FORMAT_WTRIG; 5247 - queue_hdmi = true; 5248 - DRM_DEBUG("IH: HDMI1\n"); 5249 - } 5210 + if (!(rdev->irq.stat_regs.evergreen.afmt_status2 & AFMT_AZ_FORMAT_WTRIG)) 5211 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5212 + 5213 + rdev->irq.stat_regs.evergreen.afmt_status2 &= ~AFMT_AZ_FORMAT_WTRIG; 5214 + queue_hdmi = true; 5215 + DRM_DEBUG("IH: HDMI1\n"); 5250 5216 break; 5251 5217 case 2: 5252 - if (rdev->irq.stat_regs.evergreen.afmt_status3 & AFMT_AZ_FORMAT_WTRIG) { 5253 - rdev->irq.stat_regs.evergreen.afmt_status3 &= ~AFMT_AZ_FORMAT_WTRIG; 5254 - queue_hdmi = true; 5255 - DRM_DEBUG("IH: HDMI2\n"); 5256 - } 5218 + if (!(rdev->irq.stat_regs.evergreen.afmt_status3 & AFMT_AZ_FORMAT_WTRIG)) 5219 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5220 + 5221 + rdev->irq.stat_regs.evergreen.afmt_status3 &= ~AFMT_AZ_FORMAT_WTRIG; 5222 + queue_hdmi = true; 5223 + DRM_DEBUG("IH: HDMI2\n"); 5257 5224 break; 5258 5225 case 3: 5259 - if (rdev->irq.stat_regs.evergreen.afmt_status4 & AFMT_AZ_FORMAT_WTRIG) { 5260 - rdev->irq.stat_regs.evergreen.afmt_status4 &= ~AFMT_AZ_FORMAT_WTRIG; 5261 - queue_hdmi = true; 5262 - DRM_DEBUG("IH: HDMI3\n"); 5263 - } 5226 + if (!(rdev->irq.stat_regs.evergreen.afmt_status4 & AFMT_AZ_FORMAT_WTRIG)) 5227 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5228 + 5229 + rdev->irq.stat_regs.evergreen.afmt_status4 &= ~AFMT_AZ_FORMAT_WTRIG; 5230 + queue_hdmi = true; 5231 + DRM_DEBUG("IH: HDMI3\n"); 5264 5232 break; 5265 5233 case 4: 5266 - if (rdev->irq.stat_regs.evergreen.afmt_status5 & AFMT_AZ_FORMAT_WTRIG) { 5267 - rdev->irq.stat_regs.evergreen.afmt_status5 &= ~AFMT_AZ_FORMAT_WTRIG; 5268 - queue_hdmi = true; 5269 - DRM_DEBUG("IH: HDMI4\n"); 
5270 - } 5234 + if (!(rdev->irq.stat_regs.evergreen.afmt_status5 & AFMT_AZ_FORMAT_WTRIG)) 5235 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5236 + 5237 + rdev->irq.stat_regs.evergreen.afmt_status5 &= ~AFMT_AZ_FORMAT_WTRIG; 5238 + queue_hdmi = true; 5239 + DRM_DEBUG("IH: HDMI4\n"); 5271 5240 break; 5272 5241 case 5: 5273 - if (rdev->irq.stat_regs.evergreen.afmt_status6 & AFMT_AZ_FORMAT_WTRIG) { 5274 - rdev->irq.stat_regs.evergreen.afmt_status6 &= ~AFMT_AZ_FORMAT_WTRIG; 5275 - queue_hdmi = true; 5276 - DRM_DEBUG("IH: HDMI5\n"); 5277 - } 5242 + if (!(rdev->irq.stat_regs.evergreen.afmt_status6 & AFMT_AZ_FORMAT_WTRIG)) 5243 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 5244 + 5245 + rdev->irq.stat_regs.evergreen.afmt_status6 &= ~AFMT_AZ_FORMAT_WTRIG; 5246 + queue_hdmi = true; 5247 + DRM_DEBUG("IH: HDMI5\n"); 5278 5248 break; 5279 5249 default: 5280 5250 DRM_ERROR("Unhandled interrupt: %d %d\n", src_id, src_data);
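Every evergreen.c hunk above applies the same transformation: the interrupt-status check no longer gates the handler body, it only emits a debug warning when the expected bit is missing, and the event is then processed and acknowledged unconditionally. A minimal userspace sketch of the pattern (bit name and handler are illustrative, not the driver's register layout):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define LB_VBLANK_INTERRUPT (1u << 3)  /* illustrative bit position */

static int vblanks_handled;

/* Old style: an event whose status bit was not set was silently dropped.
 * New style (as in the diff): warn about the inconsistency, then handle
 * and acknowledge the event anyway. */
static void handle_vblank(uint32_t *disp_int)
{
	if (!(*disp_int & LB_VBLANK_INTERRUPT))
		fprintf(stderr, "IH event w/o asserted irq bit?\n");

	vblanks_handled++;                  /* do the work unconditionally */
	*disp_int &= ~LB_VBLANK_INTERRUPT;  /* ack the status bit */
}
```

Because the check no longer wraps the body, each case loses one indentation level, which is why the hunks look large while the behavioural change is small.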
+14 -11
drivers/gpu/drm/radeon/ni.c
··· 2162 2162 DRM_ERROR("radeon: failed initializing UVD (%d).\n", r); 2163 2163 } 2164 2164 2165 - ring = &rdev->ring[TN_RING_TYPE_VCE1_INDEX]; 2166 - if (ring->ring_size) 2167 - r = radeon_ring_init(rdev, ring, ring->ring_size, 0, 0x0); 2165 + if (rdev->family == CHIP_ARUBA) { 2166 + ring = &rdev->ring[TN_RING_TYPE_VCE1_INDEX]; 2167 + if (ring->ring_size) 2168 + r = radeon_ring_init(rdev, ring, ring->ring_size, 0, 0x0); 2168 2169 2169 - ring = &rdev->ring[TN_RING_TYPE_VCE2_INDEX]; 2170 - if (ring->ring_size) 2171 - r = radeon_ring_init(rdev, ring, ring->ring_size, 0, 0x0); 2170 + ring = &rdev->ring[TN_RING_TYPE_VCE2_INDEX]; 2171 + if (ring->ring_size) 2172 + r = radeon_ring_init(rdev, ring, ring->ring_size, 0, 0x0); 2172 2173 2173 - if (!r) 2174 - r = vce_v1_0_init(rdev); 2175 - else if (r != -ENOENT) 2176 - DRM_ERROR("radeon: failed initializing VCE (%d).\n", r); 2174 + if (!r) 2175 + r = vce_v1_0_init(rdev); 2176 + if (r) 2177 + DRM_ERROR("radeon: failed initializing VCE (%d).\n", r); 2178 + } 2177 2179 2178 2180 r = radeon_ib_pool_init(rdev); 2179 2181 if (r) { ··· 2398 2396 radeon_irq_kms_fini(rdev); 2399 2397 uvd_v1_0_fini(rdev); 2400 2398 radeon_uvd_fini(rdev); 2401 - radeon_vce_fini(rdev); 2399 + if (rdev->family == CHIP_ARUBA) 2400 + radeon_vce_fini(rdev); 2402 2401 cayman_pcie_gart_fini(rdev); 2403 2402 r600_vram_scratch_fini(rdev); 2404 2403 radeon_gem_fini(rdev);
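The ni.c change only touches VCE ring setup and teardown when the ASIC actually has the engine (`rdev->family == CHIP_ARUBA`). A sketch of the family-gating pattern with invented names and counters standing in for the real init/fini calls:

```c
#include <assert.h>

enum family { CHIP_CAYMAN, CHIP_ARUBA };  /* illustrative subset */

static int vce_inits, vce_finis;

static int vce_init(void)  { vce_inits++; return 0; }
static void vce_fini(void) { vce_finis++; }

/* Only bring the optional engine up/down on families that have it;
 * mirrors the CHIP_ARUBA guards added in the diff. */
static int engine_startup(enum family f)
{
	if (f == CHIP_ARUBA)
		return vce_init();
	return 0;
}

static void engine_teardown(enum family f)
{
	if (f == CHIP_ARUBA)
		vce_fini();
}
```

The error path simplifies too: with the gate in place, any nonzero return really is a VCE failure, so the old `else if (r != -ENOENT)` special case collapses to a plain `if (r)`.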
+87 -68
drivers/gpu/drm/radeon/r600.c
··· 4086 4086 case 1: /* D1 vblank/vline */ 4087 4087 switch (src_data) { 4088 4088 case 0: /* D1 vblank */ 4089 - if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VBLANK_INTERRUPT) { 4090 - if (rdev->irq.crtc_vblank_int[0]) { 4091 - drm_handle_vblank(rdev->ddev, 0); 4092 - rdev->pm.vblank_sync = true; 4093 - wake_up(&rdev->irq.vblank_queue); 4094 - } 4095 - if (atomic_read(&rdev->irq.pflip[0])) 4096 - radeon_crtc_handle_vblank(rdev, 0); 4097 - rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VBLANK_INTERRUPT; 4098 - DRM_DEBUG("IH: D1 vblank\n"); 4089 + if (!(rdev->irq.stat_regs.r600.disp_int & LB_D1_VBLANK_INTERRUPT)) 4090 + DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n"); 4091 + 4092 + if (rdev->irq.crtc_vblank_int[0]) { 4093 + drm_handle_vblank(rdev->ddev, 0); 4094 + rdev->pm.vblank_sync = true; 4095 + wake_up(&rdev->irq.vblank_queue); 4099 4096 } 4097 + if (atomic_read(&rdev->irq.pflip[0])) 4098 + radeon_crtc_handle_vblank(rdev, 0); 4099 + rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VBLANK_INTERRUPT; 4100 + DRM_DEBUG("IH: D1 vblank\n"); 4101 + 4100 4102 break; 4101 4103 case 1: /* D1 vline */ 4102 - if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VLINE_INTERRUPT) { 4103 - rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VLINE_INTERRUPT; 4104 - DRM_DEBUG("IH: D1 vline\n"); 4105 - } 4104 + if (!(rdev->irq.stat_regs.r600.disp_int & LB_D1_VLINE_INTERRUPT)) 4105 + DRM_DEBUG("IH: D1 vline - IH event w/o asserted irq bit?\n"); 4106 + 4107 + rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VLINE_INTERRUPT; 4108 + DRM_DEBUG("IH: D1 vline\n"); 4109 + 4106 4110 break; 4107 4111 default: 4108 4112 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 4116 4112 case 5: /* D2 vblank/vline */ 4117 4113 switch (src_data) { 4118 4114 case 0: /* D2 vblank */ 4119 - if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VBLANK_INTERRUPT) { 4120 - if (rdev->irq.crtc_vblank_int[1]) { 4121 - drm_handle_vblank(rdev->ddev, 1); 4122 - rdev->pm.vblank_sync = true; 4123 - 
wake_up(&rdev->irq.vblank_queue); 4124 - } 4125 - if (atomic_read(&rdev->irq.pflip[1])) 4126 - radeon_crtc_handle_vblank(rdev, 1); 4127 - rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VBLANK_INTERRUPT; 4128 - DRM_DEBUG("IH: D2 vblank\n"); 4115 + if (!(rdev->irq.stat_regs.r600.disp_int & LB_D2_VBLANK_INTERRUPT)) 4116 + DRM_DEBUG("IH: D2 vblank - IH event w/o asserted irq bit?\n"); 4117 + 4118 + if (rdev->irq.crtc_vblank_int[1]) { 4119 + drm_handle_vblank(rdev->ddev, 1); 4120 + rdev->pm.vblank_sync = true; 4121 + wake_up(&rdev->irq.vblank_queue); 4129 4122 } 4123 + if (atomic_read(&rdev->irq.pflip[1])) 4124 + radeon_crtc_handle_vblank(rdev, 1); 4125 + rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VBLANK_INTERRUPT; 4126 + DRM_DEBUG("IH: D2 vblank\n"); 4127 + 4130 4128 break; 4131 4129 case 1: /* D1 vline */ 4132 - if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VLINE_INTERRUPT) { 4133 - rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VLINE_INTERRUPT; 4134 - DRM_DEBUG("IH: D2 vline\n"); 4135 - } 4130 + if (!(rdev->irq.stat_regs.r600.disp_int & LB_D2_VLINE_INTERRUPT)) 4131 + DRM_DEBUG("IH: D2 vline - IH event w/o asserted irq bit?\n"); 4132 + 4133 + rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VLINE_INTERRUPT; 4134 + DRM_DEBUG("IH: D2 vline\n"); 4135 + 4136 4136 break; 4137 4137 default: 4138 4138 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 4156 4148 case 19: /* HPD/DAC hotplug */ 4157 4149 switch (src_data) { 4158 4150 case 0: 4159 - if (rdev->irq.stat_regs.r600.disp_int & DC_HPD1_INTERRUPT) { 4160 - rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD1_INTERRUPT; 4161 - queue_hotplug = true; 4162 - DRM_DEBUG("IH: HPD1\n"); 4163 - } 4151 + if (!(rdev->irq.stat_regs.r600.disp_int & DC_HPD1_INTERRUPT)) 4152 + DRM_DEBUG("IH: HPD1 - IH event w/o asserted irq bit?\n"); 4153 + 4154 + rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD1_INTERRUPT; 4155 + queue_hotplug = true; 4156 + DRM_DEBUG("IH: HPD1\n"); 4164 4157 break; 4165 4158 case 1: 4166 - if 
(rdev->irq.stat_regs.r600.disp_int & DC_HPD2_INTERRUPT) { 4167 - rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD2_INTERRUPT; 4168 - queue_hotplug = true; 4169 - DRM_DEBUG("IH: HPD2\n"); 4170 - } 4159 + if (!(rdev->irq.stat_regs.r600.disp_int & DC_HPD2_INTERRUPT)) 4160 + DRM_DEBUG("IH: HPD2 - IH event w/o asserted irq bit?\n"); 4161 + 4162 + rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD2_INTERRUPT; 4163 + queue_hotplug = true; 4164 + DRM_DEBUG("IH: HPD2\n"); 4171 4165 break; 4172 4166 case 4: 4173 - if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD3_INTERRUPT) { 4174 - rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD3_INTERRUPT; 4175 - queue_hotplug = true; 4176 - DRM_DEBUG("IH: HPD3\n"); 4177 - } 4167 + if (!(rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD3_INTERRUPT)) 4168 + DRM_DEBUG("IH: HPD3 - IH event w/o asserted irq bit?\n"); 4169 + 4170 + rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD3_INTERRUPT; 4171 + queue_hotplug = true; 4172 + DRM_DEBUG("IH: HPD3\n"); 4178 4173 break; 4179 4174 case 5: 4180 - if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD4_INTERRUPT) { 4181 - rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD4_INTERRUPT; 4182 - queue_hotplug = true; 4183 - DRM_DEBUG("IH: HPD4\n"); 4184 - } 4175 + if (!(rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD4_INTERRUPT)) 4176 + DRM_DEBUG("IH: HPD4 - IH event w/o asserted irq bit?\n"); 4177 + 4178 + rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD4_INTERRUPT; 4179 + queue_hotplug = true; 4180 + DRM_DEBUG("IH: HPD4\n"); 4185 4181 break; 4186 4182 case 10: 4187 - if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD5_INTERRUPT) { 4188 - rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD5_INTERRUPT; 4189 - queue_hotplug = true; 4190 - DRM_DEBUG("IH: HPD5\n"); 4191 - } 4183 + if (!(rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD5_INTERRUPT)) 4184 + DRM_DEBUG("IH: HPD5 - IH event w/o asserted irq bit?\n"); 4185 + 4186 + rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD5_INTERRUPT; 4187 + queue_hotplug = true; 
4188 + DRM_DEBUG("IH: HPD5\n"); 4192 4189 break; 4193 4190 case 12: 4194 - if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT) { 4195 - rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD6_INTERRUPT; 4196 - queue_hotplug = true; 4197 - DRM_DEBUG("IH: HPD6\n"); 4198 - } 4191 + if (!(rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT)) 4192 + DRM_DEBUG("IH: HPD6 - IH event w/o asserted irq bit?\n"); 4193 + 4194 + rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD6_INTERRUPT; 4195 + queue_hotplug = true; 4196 + DRM_DEBUG("IH: HPD6\n"); 4197 + 4199 4198 break; 4200 4199 default: 4201 4200 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 4212 4197 case 21: /* hdmi */ 4213 4198 switch (src_data) { 4214 4199 case 4: 4215 - if (rdev->irq.stat_regs.r600.hdmi0_status & HDMI0_AZ_FORMAT_WTRIG) { 4216 - rdev->irq.stat_regs.r600.hdmi0_status &= ~HDMI0_AZ_FORMAT_WTRIG; 4217 - queue_hdmi = true; 4218 - DRM_DEBUG("IH: HDMI0\n"); 4219 - } 4200 + if (!(rdev->irq.stat_regs.r600.hdmi0_status & HDMI0_AZ_FORMAT_WTRIG)) 4201 + DRM_DEBUG("IH: HDMI0 - IH event w/o asserted irq bit?\n"); 4202 + 4203 + rdev->irq.stat_regs.r600.hdmi0_status &= ~HDMI0_AZ_FORMAT_WTRIG; 4204 + queue_hdmi = true; 4205 + DRM_DEBUG("IH: HDMI0\n"); 4206 + 4220 4207 break; 4221 4208 case 5: 4222 - if (rdev->irq.stat_regs.r600.hdmi1_status & HDMI0_AZ_FORMAT_WTRIG) { 4223 - rdev->irq.stat_regs.r600.hdmi1_status &= ~HDMI0_AZ_FORMAT_WTRIG; 4224 - queue_hdmi = true; 4225 - DRM_DEBUG("IH: HDMI1\n"); 4226 - } 4209 + if (!(rdev->irq.stat_regs.r600.hdmi1_status & HDMI0_AZ_FORMAT_WTRIG)) 4210 + DRM_DEBUG("IH: HDMI1 - IH event w/o asserted irq bit?\n"); 4211 + 4212 + rdev->irq.stat_regs.r600.hdmi1_status &= ~HDMI0_AZ_FORMAT_WTRIG; 4213 + queue_hdmi = true; 4214 + DRM_DEBUG("IH: HDMI1\n"); 4215 + 4227 4216 break; 4228 4217 default: 4229 4218 DRM_ERROR("Unhandled interrupt: %d %d\n", src_id, src_data);
+1 -1
drivers/gpu/drm/radeon/r600_cp.c
··· 2483 2483 struct drm_buf *buf; 2484 2484 u32 *buffer; 2485 2485 const u8 __user *data; 2486 - int size, pass_size; 2486 + unsigned int size, pass_size; 2487 2487 u64 src_offset, dst_offset; 2488 2488 2489 2489 if (!radeon_check_offset(dev_priv, tex->offset)) {
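The one-line r600_cp.c change switches `size` and `pass_size` to `unsigned int` because they derive from userspace-controlled values: with a signed type, a negative value can slip past an upper-bound comparison. A self-contained illustration (the limit is made up):

```c
#include <assert.h>

#define MAX_PASS 4096  /* illustrative limit, not the driver's */

/* With a signed size, -1 satisfies "size <= MAX_PASS". */
static int signed_check_passes(int size)
{
	return size <= MAX_PASS;
}

/* With an unsigned size, the same bit pattern becomes UINT_MAX and is
 * rejected, which is the point of the type change in the diff. */
static int unsigned_check_passes(unsigned int size)
{
	return size <= MAX_PASS;
}
```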
+44 -65
drivers/gpu/drm/radeon/radeon_cursor.c
··· 91 91 struct radeon_device *rdev = crtc->dev->dev_private; 92 92 93 93 if (ASIC_IS_DCE4(rdev)) { 94 + WREG32(EVERGREEN_CUR_SURFACE_ADDRESS_HIGH + radeon_crtc->crtc_offset, 95 + upper_32_bits(radeon_crtc->cursor_addr)); 96 + WREG32(EVERGREEN_CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset, 97 + lower_32_bits(radeon_crtc->cursor_addr)); 94 98 WREG32(RADEON_MM_INDEX, EVERGREEN_CUR_CONTROL + radeon_crtc->crtc_offset); 95 99 WREG32(RADEON_MM_DATA, EVERGREEN_CURSOR_EN | 96 100 EVERGREEN_CURSOR_MODE(EVERGREEN_CURSOR_24_8_PRE_MULT) | 97 101 EVERGREEN_CURSOR_URGENT_CONTROL(EVERGREEN_CURSOR_URGENT_1_2)); 98 102 } else if (ASIC_IS_AVIVO(rdev)) { 103 + if (rdev->family >= CHIP_RV770) { 104 + if (radeon_crtc->crtc_id) 105 + WREG32(R700_D2CUR_SURFACE_ADDRESS_HIGH, 106 + upper_32_bits(radeon_crtc->cursor_addr)); 107 + else 108 + WREG32(R700_D1CUR_SURFACE_ADDRESS_HIGH, 109 + upper_32_bits(radeon_crtc->cursor_addr)); 110 + } 111 + 112 + WREG32(AVIVO_D1CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset, 113 + lower_32_bits(radeon_crtc->cursor_addr)); 99 114 WREG32(RADEON_MM_INDEX, AVIVO_D1CUR_CONTROL + radeon_crtc->crtc_offset); 100 115 WREG32(RADEON_MM_DATA, AVIVO_D1CURSOR_EN | 101 116 (AVIVO_D1CURSOR_MODE_24BPP << AVIVO_D1CURSOR_MODE_SHIFT)); 102 117 } else { 118 + /* offset is from DISP(2)_BASE_ADDRESS */ 119 + WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, 120 + radeon_crtc->cursor_addr - radeon_crtc->legacy_display_base_addr); 121 + 103 122 switch (radeon_crtc->crtc_id) { 104 123 case 0: 105 124 WREG32(RADEON_MM_INDEX, RADEON_CRTC_GEN_CNTL); ··· 224 205 | (x << 16) 225 206 | y)); 226 207 /* offset is from DISP(2)_BASE_ADDRESS */ 227 - WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, (radeon_crtc->legacy_cursor_offset + 228 - (yorigin * 256))); 208 + WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, 209 + radeon_crtc->cursor_addr - radeon_crtc->legacy_display_base_addr + 210 + yorigin * 256); 229 211 } 230 212 231 213 radeon_crtc->cursor_x = x; ··· 247 227 return 
ret; 248 228 } 249 229 250 - static int radeon_set_cursor(struct drm_crtc *crtc, struct drm_gem_object *obj) 251 - { 252 - struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 253 - struct radeon_device *rdev = crtc->dev->dev_private; 254 - struct radeon_bo *robj = gem_to_radeon_bo(obj); 255 - uint64_t gpu_addr; 256 - int ret; 257 - 258 - ret = radeon_bo_reserve(robj, false); 259 - if (unlikely(ret != 0)) 260 - goto fail; 261 - /* Only 27 bit offset for legacy cursor */ 262 - ret = radeon_bo_pin_restricted(robj, RADEON_GEM_DOMAIN_VRAM, 263 - ASIC_IS_AVIVO(rdev) ? 0 : 1 << 27, 264 - &gpu_addr); 265 - radeon_bo_unreserve(robj); 266 - if (ret) 267 - goto fail; 268 - 269 - if (ASIC_IS_DCE4(rdev)) { 270 - WREG32(EVERGREEN_CUR_SURFACE_ADDRESS_HIGH + radeon_crtc->crtc_offset, 271 - upper_32_bits(gpu_addr)); 272 - WREG32(EVERGREEN_CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset, 273 - gpu_addr & 0xffffffff); 274 - } else if (ASIC_IS_AVIVO(rdev)) { 275 - if (rdev->family >= CHIP_RV770) { 276 - if (radeon_crtc->crtc_id) 277 - WREG32(R700_D2CUR_SURFACE_ADDRESS_HIGH, upper_32_bits(gpu_addr)); 278 - else 279 - WREG32(R700_D1CUR_SURFACE_ADDRESS_HIGH, upper_32_bits(gpu_addr)); 280 - } 281 - WREG32(AVIVO_D1CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset, 282 - gpu_addr & 0xffffffff); 283 - } else { 284 - radeon_crtc->legacy_cursor_offset = gpu_addr - radeon_crtc->legacy_display_base_addr; 285 - /* offset is from DISP(2)_BASE_ADDRESS */ 286 - WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, radeon_crtc->legacy_cursor_offset); 287 - } 288 - 289 - return 0; 290 - 291 - fail: 292 - drm_gem_object_unreference_unlocked(obj); 293 - 294 - return ret; 295 - } 296 - 297 230 int radeon_crtc_cursor_set2(struct drm_crtc *crtc, 298 231 struct drm_file *file_priv, 299 232 uint32_t handle, ··· 256 283 int32_t hot_y) 257 284 { 258 285 struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 286 + struct radeon_device *rdev = crtc->dev->dev_private; 259 287 struct drm_gem_object *obj; 288 + 
struct radeon_bo *robj; 260 289 int ret; 261 290 262 291 if (!handle) { ··· 280 305 return -ENOENT; 281 306 } 282 307 308 + robj = gem_to_radeon_bo(obj); 309 + ret = radeon_bo_reserve(robj, false); 310 + if (ret != 0) { 311 + drm_gem_object_unreference_unlocked(obj); 312 + return ret; 313 + } 314 + /* Only 27 bit offset for legacy cursor */ 315 + ret = radeon_bo_pin_restricted(robj, RADEON_GEM_DOMAIN_VRAM, 316 + ASIC_IS_AVIVO(rdev) ? 0 : 1 << 27, 317 + &radeon_crtc->cursor_addr); 318 + radeon_bo_unreserve(robj); 319 + if (ret) { 320 + DRM_ERROR("Failed to pin new cursor BO (%d)\n", ret); 321 + drm_gem_object_unreference_unlocked(obj); 322 + return ret; 323 + } 324 + 283 325 radeon_crtc->cursor_width = width; 284 326 radeon_crtc->cursor_height = height; 285 327 ··· 315 323 radeon_crtc->cursor_hot_y = hot_y; 316 324 } 317 325 318 - ret = radeon_set_cursor(crtc, obj); 319 - 320 - if (ret) 321 - DRM_ERROR("radeon_set_cursor returned %d, not changing cursor\n", 322 - ret); 323 - else 324 - radeon_show_cursor(crtc); 326 + radeon_show_cursor(crtc); 325 327 326 328 radeon_lock_cursor(crtc, false); 327 329 ··· 327 341 radeon_bo_unpin(robj); 328 342 radeon_bo_unreserve(robj); 329 343 } 330 - if (radeon_crtc->cursor_bo != obj) 331 - drm_gem_object_unreference_unlocked(radeon_crtc->cursor_bo); 344 + drm_gem_object_unreference_unlocked(radeon_crtc->cursor_bo); 332 345 } 333 346 334 347 radeon_crtc->cursor_bo = obj; ··· 345 360 void radeon_cursor_reset(struct drm_crtc *crtc) 346 361 { 347 362 struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 348 - int ret; 349 363 350 364 if (radeon_crtc->cursor_bo) { 351 365 radeon_lock_cursor(crtc, true); ··· 352 368 radeon_cursor_move_locked(crtc, radeon_crtc->cursor_x, 353 369 radeon_crtc->cursor_y); 354 370 355 - ret = radeon_set_cursor(crtc, radeon_crtc->cursor_bo); 356 - if (ret) 357 - DRM_ERROR("radeon_set_cursor returned %d, not showing " 358 - "cursor\n", ret); 359 - else 360 - radeon_show_cursor(crtc); 371 + 
radeon_show_cursor(crtc); 361 372 362 373 radeon_lock_cursor(crtc, false); 363 374 }
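The radeon_cursor.c rework removes the separate `radeon_set_cursor()` path: the BO is pinned once in `radeon_crtc_cursor_set2()`, its GPU address is cached in `radeon_crtc->cursor_addr`, and the show/move paths simply program registers from that cache, so they can no longer fail on a pin error. A sketch of the pin-once/cache pattern with simplified stand-in types:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the driver's crtc/BO state. */
struct crtc {
	uint64_t cursor_addr;  /* cached at pin time, as in the diff */
	int pinned;
};

static int pin_bo(uint64_t *addr_out)
{
	*addr_out = 0x100000;  /* pretend GPU address */
	return 0;
}

/* Pin once when the cursor is set; remember the address. */
static int cursor_set(struct crtc *c)
{
	int r = pin_bo(&c->cursor_addr);
	if (r)
		return r;
	c->pinned = 1;
	return 0;
}

/* Showing the cursor only reads the cached address; no re-pin,
 * no error path. */
static uint64_t cursor_show(const struct crtc *c)
{
	return c->cursor_addr;
}
```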
+52 -14
drivers/gpu/drm/radeon/radeon_device.c
··· 1080 1080 } 1081 1081 1082 1082 /** 1083 + * Determine a sensible default GART size according to ASIC family. 1084 + * 1085 + * @family ASIC family name 1086 + */ 1087 + static int radeon_gart_size_auto(enum radeon_family family) 1088 + { 1089 + /* default to a larger gart size on newer asics */ 1090 + if (family >= CHIP_TAHITI) 1091 + return 2048; 1092 + else if (family >= CHIP_RV770) 1093 + return 1024; 1094 + else 1095 + return 512; 1096 + } 1097 + 1098 + /** 1083 1099 * radeon_check_arguments - validate module params 1084 1100 * 1085 1101 * @rdev: radeon_device pointer ··· 1113 1097 } 1114 1098 1115 1099 if (radeon_gart_size == -1) { 1116 - /* default to a larger gart size on newer asics */ 1117 - if (rdev->family >= CHIP_RV770) 1118 - radeon_gart_size = 1024; 1119 - else 1120 - radeon_gart_size = 512; 1100 + radeon_gart_size = radeon_gart_size_auto(rdev->family); 1121 1101 } 1122 1102 /* gtt size must be power of two and greater or equal to 32M */ 1123 1103 if (radeon_gart_size < 32) { 1124 1104 dev_warn(rdev->dev, "gart size (%d) too small\n", 1125 1105 radeon_gart_size); 1126 - if (rdev->family >= CHIP_RV770) 1127 - radeon_gart_size = 1024; 1128 - else 1129 - radeon_gart_size = 512; 1106 + radeon_gart_size = radeon_gart_size_auto(rdev->family); 1130 1107 } else if (!radeon_check_pot_argument(radeon_gart_size)) { 1131 1108 dev_warn(rdev->dev, "gart size (%d) must be a power of 2\n", 1132 1109 radeon_gart_size); 1133 - if (rdev->family >= CHIP_RV770) 1134 - radeon_gart_size = 1024; 1135 - else 1136 - radeon_gart_size = 512; 1110 + radeon_gart_size = radeon_gart_size_auto(rdev->family); 1137 1111 } 1138 1112 rdev->mc.gtt_size = (uint64_t)radeon_gart_size << 20; 1139 1113 ··· 1578 1572 drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF); 1579 1573 } 1580 1574 1581 - /* unpin the front buffers */ 1575 + /* unpin the front buffers and cursors */ 1582 1576 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 1577 + struct radeon_crtc *radeon_crtc 
= to_radeon_crtc(crtc); 1583 1578 struct radeon_framebuffer *rfb = to_radeon_framebuffer(crtc->primary->fb); 1584 1579 struct radeon_bo *robj; 1580 + 1581 + if (radeon_crtc->cursor_bo) { 1582 + struct radeon_bo *robj = gem_to_radeon_bo(radeon_crtc->cursor_bo); 1583 + r = radeon_bo_reserve(robj, false); 1584 + if (r == 0) { 1585 + radeon_bo_unpin(robj); 1586 + radeon_bo_unreserve(robj); 1587 + } 1588 + } 1585 1589 1586 1590 if (rfb == NULL || rfb->obj == NULL) { 1587 1591 continue; ··· 1655 1639 { 1656 1640 struct drm_connector *connector; 1657 1641 struct radeon_device *rdev = dev->dev_private; 1642 + struct drm_crtc *crtc; 1658 1643 int r; 1659 1644 1660 1645 if (dev->switch_power_state == DRM_SWITCH_POWER_OFF) ··· 1694 1677 } 1695 1678 1696 1679 radeon_restore_bios_scratch_regs(rdev); 1680 + 1681 + /* pin cursors */ 1682 + list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 1683 + struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 1684 + 1685 + if (radeon_crtc->cursor_bo) { 1686 + struct radeon_bo *robj = gem_to_radeon_bo(radeon_crtc->cursor_bo); 1687 + r = radeon_bo_reserve(robj, false); 1688 + if (r == 0) { 1689 + /* Only 27 bit offset for legacy cursor */ 1690 + r = radeon_bo_pin_restricted(robj, 1691 + RADEON_GEM_DOMAIN_VRAM, 1692 + ASIC_IS_AVIVO(rdev) ? 1693 + 0 : 1 << 27, 1694 + &radeon_crtc->cursor_addr); 1695 + if (r != 0) 1696 + DRM_ERROR("Failed to pin cursor BO (%d)\n", r); 1697 + radeon_bo_unreserve(robj); 1698 + } 1699 + } 1700 + } 1697 1701 1698 1702 /* init dig PHYs, disp eng pll */ 1699 1703 if (rdev->is_atom_bios) {
+1
drivers/gpu/drm/radeon/radeon_fb.c
··· 257 257 } 258 258 259 259 info->par = rfbdev; 260 + info->skip_vt_switch = true; 260 261 261 262 ret = radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj); 262 263 if (ret) {
+8 -4
drivers/gpu/drm/radeon/radeon_gart.c
··· 260 260 } 261 261 } 262 262 } 263 - mb(); 264 - radeon_gart_tlb_flush(rdev); 263 + if (rdev->gart.ptr) { 264 + mb(); 265 + radeon_gart_tlb_flush(rdev); 266 + } 265 267 } 266 268 267 269 /** ··· 308 306 page_base += RADEON_GPU_PAGE_SIZE; 309 307 } 310 308 } 311 - mb(); 312 - radeon_gart_tlb_flush(rdev); 309 + if (rdev->gart.ptr) { 310 + mb(); 311 + radeon_gart_tlb_flush(rdev); 312 + } 313 313 return 0; 314 314 } 315 315
+10 -3
drivers/gpu/drm/radeon/radeon_gem.c
··· 36 36 if (robj) { 37 37 if (robj->gem_base.import_attach) 38 38 drm_prime_gem_destroy(&robj->gem_base, robj->tbo.sg); 39 + radeon_mn_unregister(robj); 39 40 radeon_bo_unref(&robj); 40 41 } 41 42 } ··· 429 428 int radeon_gem_busy_ioctl(struct drm_device *dev, void *data, 430 429 struct drm_file *filp) 431 430 { 432 - struct radeon_device *rdev = dev->dev_private; 433 431 struct drm_radeon_gem_busy *args = data; 434 432 struct drm_gem_object *gobj; 435 433 struct radeon_bo *robj; ··· 440 440 return -ENOENT; 441 441 } 442 442 robj = gem_to_radeon_bo(gobj); 443 - r = radeon_bo_wait(robj, &cur_placement, true); 443 + 444 + r = reservation_object_test_signaled_rcu(robj->tbo.resv, true); 445 + if (r == 0) 446 + r = -EBUSY; 447 + else 448 + r = 0; 449 + 450 + cur_placement = ACCESS_ONCE(robj->tbo.mem.mem_type); 444 451 args->domain = radeon_mem_type_to_domain(cur_placement); 445 452 drm_gem_object_unreference_unlocked(gobj); 446 - r = radeon_gem_handle_lockup(rdev, r); 447 453 return r; 448 454 } 449 455 ··· 477 471 r = ret; 478 472 479 473 /* Flush HDP cache via MMIO if necessary */ 474 + cur_placement = ACCESS_ONCE(robj->tbo.mem.mem_type); 480 475 if (rdev->asic->mmio_hdp_flush && 481 476 radeon_mem_type_to_domain(cur_placement) == RADEON_GEM_DOMAIN_VRAM) 482 477 robj->rdev->asic->mmio_hdp_flush(rdev);
-1
drivers/gpu/drm/radeon/radeon_mode.h
··· 343 343 int max_cursor_width; 344 344 int max_cursor_height; 345 345 uint32_t legacy_display_base_addr; 346 - uint32_t legacy_cursor_offset; 347 346 enum radeon_rmx_type rmx_type; 348 347 u8 h_border; 349 348 u8 v_border;
-1
drivers/gpu/drm/radeon/radeon_object.c
··· 75 75 bo = container_of(tbo, struct radeon_bo, tbo); 76 76 77 77 radeon_update_memory_usage(bo, bo->tbo.mem.mem_type, -1); 78 - radeon_mn_unregister(bo); 79 78 80 79 mutex_lock(&bo->rdev->gem.mutex); 81 80 list_del_init(&bo->list);
+19 -21
drivers/gpu/drm/radeon/radeon_vm.c
··· 493 493 } 494 494 495 495 if (bo_va->it.start || bo_va->it.last) { 496 - spin_lock(&vm->status_lock); 497 - if (list_empty(&bo_va->vm_status)) { 498 - /* add a clone of the bo_va to clear the old address */ 499 - struct radeon_bo_va *tmp; 500 - spin_unlock(&vm->status_lock); 501 - tmp = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL); 502 - if (!tmp) { 503 - mutex_unlock(&vm->mutex); 504 - r = -ENOMEM; 505 - goto error_unreserve; 506 - } 507 - tmp->it.start = bo_va->it.start; 508 - tmp->it.last = bo_va->it.last; 509 - tmp->vm = vm; 510 - tmp->bo = radeon_bo_ref(bo_va->bo); 511 - spin_lock(&vm->status_lock); 512 - list_add(&tmp->vm_status, &vm->freed); 496 + /* add a clone of the bo_va to clear the old address */ 497 + struct radeon_bo_va *tmp; 498 + tmp = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL); 499 + if (!tmp) { 500 + mutex_unlock(&vm->mutex); 501 + r = -ENOMEM; 502 + goto error_unreserve; 513 503 } 514 - spin_unlock(&vm->status_lock); 504 + tmp->it.start = bo_va->it.start; 505 + tmp->it.last = bo_va->it.last; 506 + tmp->vm = vm; 507 + tmp->bo = radeon_bo_ref(bo_va->bo); 515 508 516 509 interval_tree_remove(&bo_va->it, &vm->va); 510 + spin_lock(&vm->status_lock); 517 511 bo_va->it.start = 0; 518 512 bo_va->it.last = 0; 513 + list_del_init(&bo_va->vm_status); 514 + list_add(&tmp->vm_status, &vm->freed); 515 + spin_unlock(&vm->status_lock); 519 516 } 520 517 521 518 if (soffset || eoffset) { 519 + spin_lock(&vm->status_lock); 522 520 bo_va->it.start = soffset; 523 521 bo_va->it.last = eoffset - 1; 524 - interval_tree_insert(&bo_va->it, &vm->va); 525 - spin_lock(&vm->status_lock); 526 522 list_add(&bo_va->vm_status, &vm->cleared); 527 523 spin_unlock(&vm->status_lock); 524 + interval_tree_insert(&bo_va->it, &vm->va); 528 525 } 529 526 530 527 bo_va->flags = flags; ··· 1155 1158 1156 1159 list_for_each_entry(bo_va, &bo->va, bo_list) { 1157 1160 spin_lock(&bo_va->vm->status_lock); 1158 - if (list_empty(&bo_va->vm_status)) 1161 + if 
(list_empty(&bo_va->vm_status) && 1162 + (bo_va->it.start || bo_va->it.last)) 1159 1163 list_add(&bo_va->vm_status, &bo_va->vm->invalidated); 1160 1164 spin_unlock(&bo_va->vm->status_lock); 1161 1165 }
+192 -144
drivers/gpu/drm/radeon/si.c
··· 6466 6466 case 1: /* D1 vblank/vline */ 6467 6467 switch (src_data) { 6468 6468 case 0: /* D1 vblank */ 6469 - if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT) { 6470 - if (rdev->irq.crtc_vblank_int[0]) { 6471 - drm_handle_vblank(rdev->ddev, 0); 6472 - rdev->pm.vblank_sync = true; 6473 - wake_up(&rdev->irq.vblank_queue); 6474 - } 6475 - if (atomic_read(&rdev->irq.pflip[0])) 6476 - radeon_crtc_handle_vblank(rdev, 0); 6477 - rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT; 6478 - DRM_DEBUG("IH: D1 vblank\n"); 6469 + if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT)) 6470 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6471 + 6472 + if (rdev->irq.crtc_vblank_int[0]) { 6473 + drm_handle_vblank(rdev->ddev, 0); 6474 + rdev->pm.vblank_sync = true; 6475 + wake_up(&rdev->irq.vblank_queue); 6479 6476 } 6477 + if (atomic_read(&rdev->irq.pflip[0])) 6478 + radeon_crtc_handle_vblank(rdev, 0); 6479 + rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT; 6480 + DRM_DEBUG("IH: D1 vblank\n"); 6481 + 6480 6482 break; 6481 6483 case 1: /* D1 vline */ 6482 - if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT) { 6483 - rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT; 6484 - DRM_DEBUG("IH: D1 vline\n"); 6485 - } 6484 + if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT)) 6485 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6486 + 6487 + rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT; 6488 + DRM_DEBUG("IH: D1 vline\n"); 6489 + 6486 6490 break; 6487 6491 default: 6488 6492 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 6496 6492 case 2: /* D2 vblank/vline */ 6497 6493 switch (src_data) { 6498 6494 case 0: /* D2 vblank */ 6499 - if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT) { 6500 - if (rdev->irq.crtc_vblank_int[1]) { 6501 - drm_handle_vblank(rdev->ddev, 1); 6502 - rdev->pm.vblank_sync = true; 
6503 - wake_up(&rdev->irq.vblank_queue); 6504 - } 6505 - if (atomic_read(&rdev->irq.pflip[1])) 6506 - radeon_crtc_handle_vblank(rdev, 1); 6507 - rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT; 6508 - DRM_DEBUG("IH: D2 vblank\n"); 6495 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT)) 6496 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6497 + 6498 + if (rdev->irq.crtc_vblank_int[1]) { 6499 + drm_handle_vblank(rdev->ddev, 1); 6500 + rdev->pm.vblank_sync = true; 6501 + wake_up(&rdev->irq.vblank_queue); 6509 6502 } 6503 + if (atomic_read(&rdev->irq.pflip[1])) 6504 + radeon_crtc_handle_vblank(rdev, 1); 6505 + rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT; 6506 + DRM_DEBUG("IH: D2 vblank\n"); 6507 + 6510 6508 break; 6511 6509 case 1: /* D2 vline */ 6512 - if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT) { 6513 - rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT; 6514 - DRM_DEBUG("IH: D2 vline\n"); 6515 - } 6510 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT)) 6511 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6512 + 6513 + rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT; 6514 + DRM_DEBUG("IH: D2 vline\n"); 6515 + 6516 6516 break; 6517 6517 default: 6518 6518 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 6526 6518 case 3: /* D3 vblank/vline */ 6527 6519 switch (src_data) { 6528 6520 case 0: /* D3 vblank */ 6529 - if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) { 6530 - if (rdev->irq.crtc_vblank_int[2]) { 6531 - drm_handle_vblank(rdev->ddev, 2); 6532 - rdev->pm.vblank_sync = true; 6533 - wake_up(&rdev->irq.vblank_queue); 6534 - } 6535 - if (atomic_read(&rdev->irq.pflip[2])) 6536 - radeon_crtc_handle_vblank(rdev, 2); 6537 - rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT; 6538 - DRM_DEBUG("IH: D3 vblank\n"); 6521 + if 
(!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT)) 6522 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6523 + 6524 + if (rdev->irq.crtc_vblank_int[2]) { 6525 + drm_handle_vblank(rdev->ddev, 2); 6526 + rdev->pm.vblank_sync = true; 6527 + wake_up(&rdev->irq.vblank_queue); 6539 6528 } 6529 + if (atomic_read(&rdev->irq.pflip[2])) 6530 + radeon_crtc_handle_vblank(rdev, 2); 6531 + rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT; 6532 + DRM_DEBUG("IH: D3 vblank\n"); 6533 + 6540 6534 break; 6541 6535 case 1: /* D3 vline */ 6542 - if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT) { 6543 - rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT; 6544 - DRM_DEBUG("IH: D3 vline\n"); 6545 - } 6536 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT)) 6537 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6538 + 6539 + rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT; 6540 + DRM_DEBUG("IH: D3 vline\n"); 6541 + 6546 6542 break; 6547 6543 default: 6548 6544 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 6556 6544 case 4: /* D4 vblank/vline */ 6557 6545 switch (src_data) { 6558 6546 case 0: /* D4 vblank */ 6559 - if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) { 6560 - if (rdev->irq.crtc_vblank_int[3]) { 6561 - drm_handle_vblank(rdev->ddev, 3); 6562 - rdev->pm.vblank_sync = true; 6563 - wake_up(&rdev->irq.vblank_queue); 6564 - } 6565 - if (atomic_read(&rdev->irq.pflip[3])) 6566 - radeon_crtc_handle_vblank(rdev, 3); 6567 - rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT; 6568 - DRM_DEBUG("IH: D4 vblank\n"); 6547 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT)) 6548 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6549 + 6550 + if (rdev->irq.crtc_vblank_int[3]) { 6551 + drm_handle_vblank(rdev->ddev, 3); 6552 + rdev->pm.vblank_sync = true; 
6553 + wake_up(&rdev->irq.vblank_queue); 6569 6554 } 6555 + if (atomic_read(&rdev->irq.pflip[3])) 6556 + radeon_crtc_handle_vblank(rdev, 3); 6557 + rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT; 6558 + DRM_DEBUG("IH: D4 vblank\n"); 6559 + 6570 6560 break; 6571 6561 case 1: /* D4 vline */ 6572 - if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT) { 6573 - rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT; 6574 - DRM_DEBUG("IH: D4 vline\n"); 6575 - } 6562 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT)) 6563 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6564 + 6565 + rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT; 6566 + DRM_DEBUG("IH: D4 vline\n"); 6567 + 6576 6568 break; 6577 6569 default: 6578 6570 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 6586 6570 case 5: /* D5 vblank/vline */ 6587 6571 switch (src_data) { 6588 6572 case 0: /* D5 vblank */ 6589 - if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) { 6590 - if (rdev->irq.crtc_vblank_int[4]) { 6591 - drm_handle_vblank(rdev->ddev, 4); 6592 - rdev->pm.vblank_sync = true; 6593 - wake_up(&rdev->irq.vblank_queue); 6594 - } 6595 - if (atomic_read(&rdev->irq.pflip[4])) 6596 - radeon_crtc_handle_vblank(rdev, 4); 6597 - rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT; 6598 - DRM_DEBUG("IH: D5 vblank\n"); 6573 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT)) 6574 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6575 + 6576 + if (rdev->irq.crtc_vblank_int[4]) { 6577 + drm_handle_vblank(rdev->ddev, 4); 6578 + rdev->pm.vblank_sync = true; 6579 + wake_up(&rdev->irq.vblank_queue); 6599 6580 } 6581 + if (atomic_read(&rdev->irq.pflip[4])) 6582 + radeon_crtc_handle_vblank(rdev, 4); 6583 + rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT; 6584 + DRM_DEBUG("IH: D5 vblank\n"); 6585 
+ 6600 6586 break; 6601 6587 case 1: /* D5 vline */ 6602 - if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT) { 6603 - rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT; 6604 - DRM_DEBUG("IH: D5 vline\n"); 6605 - } 6588 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT)) 6589 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6590 + 6591 + rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT; 6592 + DRM_DEBUG("IH: D5 vline\n"); 6593 + 6606 6594 break; 6607 6595 default: 6608 6596 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 6616 6596 case 6: /* D6 vblank/vline */ 6617 6597 switch (src_data) { 6618 6598 case 0: /* D6 vblank */ 6619 - if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) { 6620 - if (rdev->irq.crtc_vblank_int[5]) { 6621 - drm_handle_vblank(rdev->ddev, 5); 6622 - rdev->pm.vblank_sync = true; 6623 - wake_up(&rdev->irq.vblank_queue); 6624 - } 6625 - if (atomic_read(&rdev->irq.pflip[5])) 6626 - radeon_crtc_handle_vblank(rdev, 5); 6627 - rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT; 6628 - DRM_DEBUG("IH: D6 vblank\n"); 6599 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT)) 6600 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6601 + 6602 + if (rdev->irq.crtc_vblank_int[5]) { 6603 + drm_handle_vblank(rdev->ddev, 5); 6604 + rdev->pm.vblank_sync = true; 6605 + wake_up(&rdev->irq.vblank_queue); 6629 6606 } 6607 + if (atomic_read(&rdev->irq.pflip[5])) 6608 + radeon_crtc_handle_vblank(rdev, 5); 6609 + rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT; 6610 + DRM_DEBUG("IH: D6 vblank\n"); 6611 + 6630 6612 break; 6631 6613 case 1: /* D6 vline */ 6632 - if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT) { 6633 - rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT; 6634 - DRM_DEBUG("IH: D6 vline\n"); 6635 - } 
6614 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT)) 6615 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6616 + 6617 + rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT; 6618 + DRM_DEBUG("IH: D6 vline\n"); 6619 + 6636 6620 break; 6637 6621 default: 6638 6622 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data); ··· 6656 6632 case 42: /* HPD hotplug */ 6657 6633 switch (src_data) { 6658 6634 case 0: 6659 - if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT) { 6660 - rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT; 6661 - queue_hotplug = true; 6662 - DRM_DEBUG("IH: HPD1\n"); 6663 - } 6635 + if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT)) 6636 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6637 + 6638 + rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT; 6639 + queue_hotplug = true; 6640 + DRM_DEBUG("IH: HPD1\n"); 6641 + 6664 6642 break; 6665 6643 case 1: 6666 - if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT) { 6667 - rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT; 6668 - queue_hotplug = true; 6669 - DRM_DEBUG("IH: HPD2\n"); 6670 - } 6644 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT)) 6645 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6646 + 6647 + rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT; 6648 + queue_hotplug = true; 6649 + DRM_DEBUG("IH: HPD2\n"); 6650 + 6671 6651 break; 6672 6652 case 2: 6673 - if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT) { 6674 - rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT; 6675 - queue_hotplug = true; 6676 - DRM_DEBUG("IH: HPD3\n"); 6677 - } 6653 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT)) 6654 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6655 + 6656 + rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT; 6657 + queue_hotplug = 
true; 6658 + DRM_DEBUG("IH: HPD3\n"); 6659 + 6678 6660 break; 6679 6661 case 3: 6680 - if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT) { 6681 - rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT; 6682 - queue_hotplug = true; 6683 - DRM_DEBUG("IH: HPD4\n"); 6684 - } 6662 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT)) 6663 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6664 + 6665 + rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT; 6666 + queue_hotplug = true; 6667 + DRM_DEBUG("IH: HPD4\n"); 6668 + 6685 6669 break; 6686 6670 case 4: 6687 - if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT) { 6688 - rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT; 6689 - queue_hotplug = true; 6690 - DRM_DEBUG("IH: HPD5\n"); 6691 - } 6671 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT)) 6672 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6673 + 6674 + rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT; 6675 + queue_hotplug = true; 6676 + DRM_DEBUG("IH: HPD5\n"); 6677 + 6692 6678 break; 6693 6679 case 5: 6694 - if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT) { 6695 - rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT; 6696 - queue_hotplug = true; 6697 - DRM_DEBUG("IH: HPD6\n"); 6698 - } 6680 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT)) 6681 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6682 + 6683 + rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT; 6684 + queue_hotplug = true; 6685 + DRM_DEBUG("IH: HPD6\n"); 6686 + 6699 6687 break; 6700 6688 case 6: 6701 - if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) { 6702 - rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT; 6703 - queue_dp = true; 6704 - DRM_DEBUG("IH: HPD_RX 1\n"); 6705 - } 6689 + if (!(rdev->irq.stat_regs.evergreen.disp_int & 
DC_HPD1_RX_INTERRUPT)) 6690 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6691 + 6692 + rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT; 6693 + queue_dp = true; 6694 + DRM_DEBUG("IH: HPD_RX 1\n"); 6695 + 6706 6696 break; 6707 6697 case 7: 6708 - if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) { 6709 - rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT; 6710 - queue_dp = true; 6711 - DRM_DEBUG("IH: HPD_RX 2\n"); 6712 - } 6698 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT)) 6699 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6700 + 6701 + rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT; 6702 + queue_dp = true; 6703 + DRM_DEBUG("IH: HPD_RX 2\n"); 6704 + 6713 6705 break; 6714 6706 case 8: 6715 - if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) { 6716 - rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT; 6717 - queue_dp = true; 6718 - DRM_DEBUG("IH: HPD_RX 3\n"); 6719 - } 6707 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT)) 6708 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6709 + 6710 + rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT; 6711 + queue_dp = true; 6712 + DRM_DEBUG("IH: HPD_RX 3\n"); 6713 + 6720 6714 break; 6721 6715 case 9: 6722 - if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) { 6723 - rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT; 6724 - queue_dp = true; 6725 - DRM_DEBUG("IH: HPD_RX 4\n"); 6726 - } 6716 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT)) 6717 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6718 + 6719 + rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT; 6720 + queue_dp = true; 6721 + DRM_DEBUG("IH: HPD_RX 4\n"); 6722 + 6727 6723 break; 6728 6724 case 10: 6729 - if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & 
DC_HPD5_RX_INTERRUPT) { 6730 - rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT; 6731 - queue_dp = true; 6732 - DRM_DEBUG("IH: HPD_RX 5\n"); 6733 - } 6725 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT)) 6726 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6727 + 6728 + rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT; 6729 + queue_dp = true; 6730 + DRM_DEBUG("IH: HPD_RX 5\n"); 6731 + 6734 6732 break; 6735 6733 case 11: 6736 - if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) { 6737 - rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT; 6738 - queue_dp = true; 6739 - DRM_DEBUG("IH: HPD_RX 6\n"); 6740 - } 6734 + if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT)) 6735 + DRM_DEBUG("IH: IH event w/o asserted irq bit?\n"); 6736 + 6737 + rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT; 6738 + queue_dp = true; 6739 + DRM_DEBUG("IH: HPD_RX 6\n"); 6740 + 6741 6741 break; 6742 6742 default: 6743 6743 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+1
drivers/gpu/drm/radeon/si_dpm.c
··· 2926 2926 /* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */ 2927 2927 { PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 }, 2928 2928 { PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 }, 2929 + { PCI_VENDOR_ID_ATI, 0x6810, 0x174b, 0xe271, 85000, 90000 }, 2929 2930 { 0, 0, 0, 0 }, 2930 2931 }; 2931 2932
-1
drivers/gpu/drm/rockchip/rockchip_drm_drv.c
··· 555 555 .probe = rockchip_drm_platform_probe, 556 556 .remove = rockchip_drm_platform_remove, 557 557 .driver = { 558 - .owner = THIS_MODULE, 559 558 .name = "rockchip-drm", 560 559 .of_match_table = rockchip_drm_dt_ids, 561 560 .pm = &rockchip_drm_pm_ops,
+2 -1
drivers/gpu/drm/rockchip/rockchip_drm_fb.c
··· 162 162 struct rockchip_drm_private *private = dev->dev_private; 163 163 struct drm_fb_helper *fb_helper = &private->fbdev_helper; 164 164 165 - drm_fb_helper_hotplug_event(fb_helper); 165 + if (fb_helper) 166 + drm_fb_helper_hotplug_event(fb_helper); 166 167 } 167 168 168 169 static const struct drm_mode_config_funcs rockchip_drm_mode_config_funcs = {
+34 -33
drivers/gpu/drm/rockchip/rockchip_drm_gem.c
··· 54 54 &rk_obj->dma_attrs); 55 55 } 56 56 57 + static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj, 58 + struct vm_area_struct *vma) 59 + 60 + { 61 + int ret; 62 + struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); 63 + struct drm_device *drm = obj->dev; 64 + 65 + /* 66 + * dma_alloc_attrs() allocated a struct page table for rk_obj, so clear 67 + * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap(). 68 + */ 69 + vma->vm_flags &= ~VM_PFNMAP; 70 + 71 + ret = dma_mmap_attrs(drm->dev, vma, rk_obj->kvaddr, rk_obj->dma_addr, 72 + obj->size, &rk_obj->dma_attrs); 73 + if (ret) 74 + drm_gem_vm_close(vma); 75 + 76 + return ret; 77 + } 78 + 57 79 int rockchip_gem_mmap_buf(struct drm_gem_object *obj, 58 80 struct vm_area_struct *vma) 59 81 { 60 - struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); 61 82 struct drm_device *drm = obj->dev; 62 - unsigned long vm_size; 83 + int ret; 63 84 64 - vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP; 65 - vm_size = vma->vm_end - vma->vm_start; 85 + mutex_lock(&drm->struct_mutex); 86 + ret = drm_gem_mmap_obj(obj, obj->size, vma); 87 + mutex_unlock(&drm->struct_mutex); 88 + if (ret) 89 + return ret; 66 90 67 - if (vm_size > obj->size) 68 - return -EINVAL; 69 - 70 - return dma_mmap_attrs(drm->dev, vma, rk_obj->kvaddr, rk_obj->dma_addr, 71 - obj->size, &rk_obj->dma_attrs); 91 + return rockchip_drm_gem_object_mmap(obj, vma); 72 92 } 73 93 74 94 /* drm driver mmap file operations */ 75 95 int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma) 76 96 { 77 - struct drm_file *priv = filp->private_data; 78 - struct drm_device *dev = priv->minor->dev; 79 97 struct drm_gem_object *obj; 80 - struct drm_vma_offset_node *node; 81 98 int ret; 82 99 83 - if (drm_device_is_unplugged(dev)) 84 - return -ENODEV; 100 + ret = drm_gem_mmap(filp, vma); 101 + if (ret) 102 + return ret; 85 103 86 - mutex_lock(&dev->struct_mutex); 104 + obj = vma->vm_private_data; 87 105 88 - node = 
drm_vma_offset_exact_lookup(dev->vma_offset_manager, 89 - vma->vm_pgoff, 90 - vma_pages(vma)); 91 - if (!node) { 92 - mutex_unlock(&dev->struct_mutex); 93 - DRM_ERROR("failed to find vma node.\n"); 94 - return -EINVAL; 95 - } else if (!drm_vma_node_is_allowed(node, filp)) { 96 - mutex_unlock(&dev->struct_mutex); 97 - return -EACCES; 98 - } 99 - 100 - obj = container_of(node, struct drm_gem_object, vma_node); 101 - ret = rockchip_gem_mmap_buf(obj, vma); 102 - 103 - mutex_unlock(&dev->struct_mutex); 104 - 105 - return ret; 106 + return rockchip_drm_gem_object_mmap(obj, vma); 106 107 } 107 108 108 109 struct rockchip_gem_object *
+36 -13
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 170 170 171 171 struct vop_reg enable; 172 172 struct vop_reg format; 173 + struct vop_reg rb_swap; 173 174 struct vop_reg act_info; 174 175 struct vop_reg dsp_info; 175 176 struct vop_reg dsp_st; ··· 200 199 static const uint32_t formats_01[] = { 201 200 DRM_FORMAT_XRGB8888, 202 201 DRM_FORMAT_ARGB8888, 202 + DRM_FORMAT_XBGR8888, 203 + DRM_FORMAT_ABGR8888, 203 204 DRM_FORMAT_RGB888, 205 + DRM_FORMAT_BGR888, 204 206 DRM_FORMAT_RGB565, 207 + DRM_FORMAT_BGR565, 205 208 DRM_FORMAT_NV12, 206 209 DRM_FORMAT_NV16, 207 210 DRM_FORMAT_NV24, ··· 214 209 static const uint32_t formats_234[] = { 215 210 DRM_FORMAT_XRGB8888, 216 211 DRM_FORMAT_ARGB8888, 212 + DRM_FORMAT_XBGR8888, 213 + DRM_FORMAT_ABGR8888, 217 214 DRM_FORMAT_RGB888, 215 + DRM_FORMAT_BGR888, 218 216 DRM_FORMAT_RGB565, 217 + DRM_FORMAT_BGR565, 219 218 }; 220 219 221 220 static const struct vop_win_phy win01_data = { ··· 227 218 .nformats = ARRAY_SIZE(formats_01), 228 219 .enable = VOP_REG(WIN0_CTRL0, 0x1, 0), 229 220 .format = VOP_REG(WIN0_CTRL0, 0x7, 1), 221 + .rb_swap = VOP_REG(WIN0_CTRL0, 0x1, 12), 230 222 .act_info = VOP_REG(WIN0_ACT_INFO, 0x1fff1fff, 0), 231 223 .dsp_info = VOP_REG(WIN0_DSP_INFO, 0x0fff0fff, 0), 232 224 .dsp_st = VOP_REG(WIN0_DSP_ST, 0x1fff1fff, 0), ··· 244 234 .nformats = ARRAY_SIZE(formats_234), 245 235 .enable = VOP_REG(WIN2_CTRL0, 0x1, 0), 246 236 .format = VOP_REG(WIN2_CTRL0, 0x7, 1), 237 + .rb_swap = VOP_REG(WIN2_CTRL0, 0x1, 12), 247 238 .dsp_info = VOP_REG(WIN2_DSP_INFO0, 0x0fff0fff, 0), 248 239 .dsp_st = VOP_REG(WIN2_DSP_ST0, 0x1fff1fff, 0), 249 240 .yrgb_mst = VOP_REG(WIN2_MST0, 0xffffffff, 0), 250 241 .yrgb_vir = VOP_REG(WIN2_VIR0_1, 0x1fff, 0), 251 242 .src_alpha_ctl = VOP_REG(WIN2_SRC_ALPHA_CTRL, 0xff, 0), 252 243 .dst_alpha_ctl = VOP_REG(WIN2_DST_ALPHA_CTRL, 0xff, 0), 253 - }; 254 - 255 - static const struct vop_win_phy cursor_data = { 256 - .data_formats = formats_234, 257 - .nformats = ARRAY_SIZE(formats_234), 258 - .enable = VOP_REG(HWC_CTRL0, 0x1, 0), 259 - .format = 
VOP_REG(HWC_CTRL0, 0x7, 1), 260 - .dsp_st = VOP_REG(HWC_DSP_ST, 0x1fff1fff, 0), 261 - .yrgb_mst = VOP_REG(HWC_MST, 0xffffffff, 0), 262 244 }; 263 245 264 246 static const struct vop_ctrl ctrl_data = { ··· 284 282 /* 285 283 * Note: rk3288 has a dedicated 'cursor' window, however, that window requires 286 284 * special support to get alpha blending working. For now, just use overlay 287 - * window 1 for the drm cursor. 285 + * window 3 for the drm cursor. 286 + * 288 287 */ 289 288 static const struct vop_win_data rk3288_vop_win_data[] = { 290 289 { .base = 0x00, .phy = &win01_data, .type = DRM_PLANE_TYPE_PRIMARY }, 291 - { .base = 0x40, .phy = &win01_data, .type = DRM_PLANE_TYPE_CURSOR }, 290 + { .base = 0x40, .phy = &win01_data, .type = DRM_PLANE_TYPE_OVERLAY }, 292 291 { .base = 0x00, .phy = &win23_data, .type = DRM_PLANE_TYPE_OVERLAY }, 293 - { .base = 0x50, .phy = &win23_data, .type = DRM_PLANE_TYPE_OVERLAY }, 294 - { .base = 0x00, .phy = &cursor_data, .type = DRM_PLANE_TYPE_OVERLAY }, 292 + { .base = 0x50, .phy = &win23_data, .type = DRM_PLANE_TYPE_CURSOR }, 295 293 }; 296 294 297 295 static const struct vop_data rk3288_vop = { ··· 354 352 } 355 353 } 356 354 355 + static bool has_rb_swapped(uint32_t format) 356 + { 357 + switch (format) { 358 + case DRM_FORMAT_XBGR8888: 359 + case DRM_FORMAT_ABGR8888: 360 + case DRM_FORMAT_BGR888: 361 + case DRM_FORMAT_BGR565: 362 + return true; 363 + default: 364 + return false; 365 + } 366 + } 367 + 357 368 static enum vop_data_format vop_convert_format(uint32_t format) 358 369 { 359 370 switch (format) { 360 371 case DRM_FORMAT_XRGB8888: 361 372 case DRM_FORMAT_ARGB8888: 373 + case DRM_FORMAT_XBGR8888: 374 + case DRM_FORMAT_ABGR8888: 362 375 return VOP_FMT_ARGB8888; 363 376 case DRM_FORMAT_RGB888: 377 + case DRM_FORMAT_BGR888: 364 378 return VOP_FMT_RGB888; 365 379 case DRM_FORMAT_RGB565: 380 + case DRM_FORMAT_BGR565: 366 381 return VOP_FMT_RGB565; 367 382 case DRM_FORMAT_NV12: 368 383 return VOP_FMT_YUV420SP; ··· 397 378 
{ 398 379 switch (format) { 399 380 case DRM_FORMAT_ARGB8888: 381 + case DRM_FORMAT_ABGR8888: 400 382 return true; 401 383 default: 402 384 return false; ··· 608 588 enum vop_data_format format; 609 589 uint32_t val; 610 590 bool is_alpha; 591 + bool rb_swap; 611 592 bool visible; 612 593 int ret; 613 594 struct drm_rect dest = { ··· 642 621 return 0; 643 622 644 623 is_alpha = is_alpha_support(fb->pixel_format); 624 + rb_swap = has_rb_swapped(fb->pixel_format); 645 625 format = vop_convert_format(fb->pixel_format); 646 626 if (format < 0) 647 627 return format; ··· 711 689 val = (dsp_sty - 1) << 16; 712 690 val |= (dsp_stx - 1) & 0xffff; 713 691 VOP_WIN_SET(vop, win, dsp_st, val); 692 + VOP_WIN_SET(vop, win, rb_swap, rb_swap); 714 693 715 694 if (is_alpha) { 716 695 VOP_WIN_SET(vop, win, dst_alpha_ctl,
+6 -7
drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
··· 963 963 } else { 964 964 pool->npages_free += count; 965 965 list_splice(&ttm_dma->pages_list, &pool->free_list); 966 - npages = count; 967 - if (pool->npages_free > _manager->options.max_size) { 966 + /* 967 + * Wait to have at least NUM_PAGES_TO_ALLOC number of pages 968 + * to free in order to minimize calls to set_memory_wb(). 969 + */ 970 + if (pool->npages_free >= (_manager->options.max_size + 971 + NUM_PAGES_TO_ALLOC)) 968 972 npages = pool->npages_free - _manager->options.max_size; 969 - /* free at least NUM_PAGES_TO_ALLOC number of pages 970 - * to reduce calls to set_memory_wb */ 971 - if (npages < NUM_PAGES_TO_ALLOC) 972 - npages = NUM_PAGES_TO_ALLOC; 973 - } 974 973 } 975 974 spin_unlock_irqrestore(&pool->lock, irq_flags); 976 975
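The rewritten branch frees pages only once the pool sits at least NUM_PAGES_TO_ALLOC above `max_size`, and then trims all the way back down to `max_size`. A minimal sketch of that threshold arithmetic (helper name and the constant's value invented for illustration):

```c
#include <stddef.h>

#define NUM_PAGES_TO_ALLOC 64	/* illustrative batch size */

/* How many pages to release: nothing until the pool exceeds max_size
 * by at least NUM_PAGES_TO_ALLOC, then trim back down to max_size. */
static size_t pages_to_trim(size_t npages_free, size_t max_size)
{
	if (npages_free >= max_size + NUM_PAGES_TO_ALLOC)
		return npages_free - max_size;
	return 0;
}
```

Batching the trim this way keeps each `set_memory_wb()` call worth its overhead, which is what the updated comment states.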
+3
drivers/gpu/ipu-v3/ipu-common.c
··· 1107 1107 return ret; 1108 1108 } 1109 1109 1110 + for (i = 0; i < IPU_NUM_IRQS; i += 32) 1111 + ipu_cm_write(ipu, 0, IPU_INT_CTRL(i / 32)); 1112 + 1110 1113 for (i = 0; i < IPU_NUM_IRQS; i += 32) { 1111 1114 gc = irq_get_domain_generic_chip(ipu->domain, i); 1112 1115 gc->reg_base = ipu->cm_reg;
+1
drivers/i2c/busses/Kconfig
··· 633 633 config I2C_MT65XX 634 634 tristate "MediaTek I2C adapter" 635 635 depends on ARCH_MEDIATEK || COMPILE_TEST 636 + depends on HAS_DMA 636 637 help 637 638 This selects the MediaTek(R) Integrated Inter Circuit bus driver 638 639 for MT65xx and MT81xx.
+8 -7
drivers/i2c/busses/i2c-jz4780.c
··· 764 764 if (IS_ERR(i2c->clk)) 765 765 return PTR_ERR(i2c->clk); 766 766 767 - clk_prepare_enable(i2c->clk); 767 + ret = clk_prepare_enable(i2c->clk); 768 + if (ret) 769 + return ret; 768 770 769 - if (of_property_read_u32(pdev->dev.of_node, "clock-frequency", 770 - &clk_freq)) { 771 + ret = of_property_read_u32(pdev->dev.of_node, "clock-frequency", 772 + &clk_freq); 773 + if (ret) { 771 774 dev_err(&pdev->dev, "clock-frequency not specified in DT"); 772 - return clk_freq; 775 + goto err; 773 776 } 774 777 775 778 i2c->speed = clk_freq / 1000; ··· 793 790 i2c->irq = platform_get_irq(pdev, 0); 794 791 ret = devm_request_irq(&pdev->dev, i2c->irq, jz4780_i2c_irq, 0, 795 792 dev_name(&pdev->dev), i2c); 796 - if (ret) { 797 - ret = -ENODEV; 793 + if (ret) 798 794 goto err; 799 - } 800 795 801 796 ret = i2c_add_adapter(&i2c->adap); 802 797 if (ret < 0) {
+1
drivers/i2c/busses/i2c-xgene-slimpro.c
··· 419 419 rc = i2c_add_adapter(adapter); 420 420 if (rc) { 421 421 dev_err(&pdev->dev, "Adapter registration failed\n"); 422 + mbox_free_channel(ctx->mbox_chan); 422 423 return rc; 423 424 }
+15 -1
drivers/i2c/i2c-core.c
··· 1012 1012 */ 1013 1013 void i2c_unregister_device(struct i2c_client *client) 1014 1014 { 1015 + if (client->dev.of_node) 1016 + of_node_clear_flag(client->dev.of_node, OF_POPULATED); 1015 1017 device_unregister(&client->dev); 1016 1018 } 1017 1019 EXPORT_SYMBOL_GPL(i2c_unregister_device); ··· 1322 1320 1323 1321 dev_dbg(&adap->dev, "of_i2c: walking child nodes\n"); 1324 1322 1325 - for_each_available_child_of_node(adap->dev.of_node, node) 1323 + for_each_available_child_of_node(adap->dev.of_node, node) { 1324 + if (of_node_test_and_set_flag(node, OF_POPULATED)) 1325 + continue; 1326 1326 of_i2c_register_device(adap, node); 1327 + } 1327 1328 } 1328 1329 1329 1330 static int of_dev_node_match(struct device *dev, void *data) ··· 1858 1853 if (adap == NULL) 1859 1854 return NOTIFY_OK; /* not for us */ 1860 1855 1856 + if (of_node_test_and_set_flag(rd->dn, OF_POPULATED)) { 1857 + put_device(&adap->dev); 1858 + return NOTIFY_OK; 1859 + } 1860 + 1861 1861 client = of_i2c_register_device(adap, rd->dn); 1862 1862 put_device(&adap->dev); 1863 1863 ··· 1873 1863 } 1874 1864 break; 1875 1865 case OF_RECONFIG_CHANGE_REMOVE: 1866 + /* already depopulated? */ 1867 + if (!of_node_check_flag(rd->dn, OF_POPULATED)) 1868 + return NOTIFY_OK; 1869 + 1876 1870 /* find our device by node */ 1877 1871 client = of_find_i2c_device_by_node(rd->dn); 1878 1872 if (client == NULL)
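The OF_POPULATED handling above makes i2c device registration idempotent: whichever path sees the node first (adapter scan or the OF reconfig notifier) wins, and the other backs off. A single-threaded sketch of the test-and-set guard (plain bitops standing in for `of_node_test_and_set_flag()`, which the kernel implements atomically):

```c
#include <stdbool.h>

#define OF_POPULATED_BIT (1u << 0)

/* Return the previous populated state and mark the node populated,
 * so only the first caller goes on to register the device. */
static bool test_and_set_populated(unsigned int *flags)
{
	bool was_set = *flags & OF_POPULATED_BIT;

	*flags |= OF_POPULATED_BIT;
	return was_set;
}

/* Two registration paths hitting one node: only one succeeds. */
static int register_node_twice(void)
{
	unsigned int node_flags = 0;
	int registered = 0;

	if (!test_and_set_populated(&node_flags))
		registered++;		/* adapter-scan path */
	if (!test_and_set_populated(&node_flags))
		registered++;		/* reconfig-notifier path */
	return registered;
}
```

The removal side mirrors this: `of_node_check_flag()` skips nodes that were never populated, and `i2c_unregister_device()` clears the flag so a later re-add works.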
+1 -1
drivers/iio/accel/bmc150-accel.c
··· 1471 1471 { 1472 1472 int i; 1473 1473 1474 - for (i = from; i >= 0; i++) { 1474 + for (i = from; i >= 0; i--) { 1475 1475 if (data->triggers[i].indio_trig) { 1476 1476 iio_trigger_unregister(data->triggers[i].indio_trig); 1477 1477 data->triggers[i].indio_trig = NULL;
+1 -2
drivers/iio/adc/Kconfig
··· 163 163 164 164 config CC10001_ADC 165 165 tristate "Cosmic Circuits 10001 ADC driver" 166 - depends on HAVE_CLK || REGULATOR 167 - depends on HAS_IOMEM 166 + depends on HAS_IOMEM && HAVE_CLK && REGULATOR 168 167 select IIO_BUFFER 169 168 select IIO_TRIGGERED_BUFFER 170 169 help
+4 -4
drivers/iio/adc/at91_adc.c
··· 182 182 u8 ts_pen_detect_sensitivity; 183 183 184 184 /* startup time calculate function */ 185 - u32 (*calc_startup_ticks)(u8 startup_time, u32 adc_clk_khz); 185 + u32 (*calc_startup_ticks)(u32 startup_time, u32 adc_clk_khz); 186 186 187 187 u8 num_channels; 188 188 struct at91_adc_reg_desc registers; ··· 201 201 u8 num_channels; 202 202 void __iomem *reg_base; 203 203 struct at91_adc_reg_desc *registers; 204 - u8 startup_time; 204 + u32 startup_time; 205 205 u8 sample_hold_time; 206 206 bool sleep_mode; 207 207 struct iio_trigger **trig; ··· 779 779 return ret; 780 780 } 781 781 782 - static u32 calc_startup_ticks_9260(u8 startup_time, u32 adc_clk_khz) 782 + static u32 calc_startup_ticks_9260(u32 startup_time, u32 adc_clk_khz) 783 783 { 784 784 /* 785 785 * Number of ticks needed to cover the startup time of the ADC ··· 790 790 return round_up((startup_time * adc_clk_khz / 1000) - 1, 8) / 8; 791 791 } 792 792 793 - static u32 calc_startup_ticks_9x5(u8 startup_time, u32 adc_clk_khz) 793 + static u32 calc_startup_ticks_9x5(u32 startup_time, u32 adc_clk_khz) 794 794 { 795 795 /* 796 796 * For sama5d3x and at91sam9x5, the formula changes to:
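Widening `startup_time` from u8 to u32 matters because a DT-supplied startup time above 255 µs would silently wrap before reaching the tick formula. A sketch of the 9260 formula using the kernel-style power-of-two `round_up()`, with an invented clock rate to show the truncation:

```c
#include <stdint.h>

/* Kernel-style round_up; valid when y is a power of two. */
#define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)

/* Tick count from calc_startup_ticks_9260(); startup_time in us. */
static uint32_t startup_ticks_9260(uint32_t startup_time, uint32_t adc_clk_khz)
{
	return round_up((startup_time * adc_clk_khz / 1000) - 1, 8) / 8;
}
```

With a u8 parameter, a 296 µs startup time would arrive as 296 mod 256 = 40 and yield far too few ticks, which is exactly the class of bug the type change closes.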
+4
drivers/iio/adc/rockchip_saradc.c
··· 349 349 }; 350 350 351 351 module_platform_driver(rockchip_saradc_driver); 352 + 353 + MODULE_AUTHOR("Heiko Stuebner <heiko@sntech.de>"); 354 + MODULE_DESCRIPTION("Rockchip SARADC driver"); 355 + MODULE_LICENSE("GPL v2");
+2 -1
drivers/iio/adc/twl4030-madc.c
··· 833 833 irq = platform_get_irq(pdev, 0); 834 834 ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, 835 835 twl4030_madc_threaded_irq_handler, 836 - IRQF_TRIGGER_RISING, "twl4030_madc", madc); 836 + IRQF_TRIGGER_RISING | IRQF_ONESHOT, 837 + "twl4030_madc", madc); 837 838 if (ret) { 838 839 dev_err(&pdev->dev, "could not request irq\n"); 839 840 goto err_i2c;
+10 -1
drivers/iio/common/hid-sensors/hid-sensor-trigger.c
··· 36 36 s32 poll_value = 0; 37 37 38 38 if (state) { 39 + if (!atomic_read(&st->user_requested_state)) 40 + return 0; 39 41 if (sensor_hub_device_open(st->hsdev)) 40 42 return -EIO; 41 43 ··· 54 52 55 53 poll_value = hid_sensor_read_poll_value(st); 56 54 } else { 57 - if (!atomic_dec_and_test(&st->data_ready)) 55 + int val; 56 + 57 + val = atomic_dec_if_positive(&st->data_ready); 58 + if (val < 0) 58 59 return 0; 60 + 59 61 sensor_hub_device_close(st->hsdev); 60 62 state_val = hid_sensor_get_usage_index(st->hsdev, 61 63 st->power_state.report_id, ··· 98 92 99 93 int hid_sensor_power_state(struct hid_sensor_common *st, bool state) 100 94 { 95 + 101 96 #ifdef CONFIG_PM 102 97 int ret; 103 98 99 + atomic_set(&st->user_requested_state, state); 104 100 if (state) 105 101 ret = pm_runtime_get_sync(&st->pdev->dev); 106 102 else { ··· 117 109 118 110 return 0; 119 111 #else 112 + atomic_set(&st->user_requested_state, state); 120 113 return _hid_sensor_power_state(st, state); 121 114 #endif 122 115 }
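The switch from `atomic_dec_and_test()` to `atomic_dec_if_positive()` keeps an already-zero `data_ready` counter from being driven negative when a close races a state change. Single-threaded stand-ins for the two semantics (plain ints, no atomicity, names mirror the kernel API):

```c
/* dec_and_test: always decrements; true when the result hits zero.
 * Note it happily underflows a counter that is already zero. */
static int dec_and_test(int *v)
{
	return --(*v) == 0;
}

/* dec_if_positive: returns *v - 1, storing it only when non-negative,
 * so a zero counter stays zero instead of underflowing. */
static int dec_if_positive(int *v)
{
	int new = *v - 1;

	if (new >= 0)
		*v = new;
	return new;
}
```

The driver treats a negative return as "nothing to close" and bails out, matching the `if (val < 0) return 0;` in the hunk.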
+2 -2
drivers/iio/dac/ad5624r_spi.c
··· 22 22 #include "ad5624r.h" 23 23 24 24 static int ad5624r_spi_write(struct spi_device *spi, 25 - u8 cmd, u8 addr, u16 val, u8 len) 25 + u8 cmd, u8 addr, u16 val, u8 shift) 26 26 { 27 27 u32 data; 28 28 u8 msg[3]; ··· 35 35 * 14-, 12-bit input code followed by 0, 2, or 4 don't care bits, 36 36 * for the AD5664R, AD5644R, and AD5624R, respectively. 37 37 */ 38 - data = (0 << 22) | (cmd << 19) | (addr << 16) | (val << (16 - len)); 38 + data = (0 << 22) | (cmd << 19) | (addr << 16) | (val << shift); 39 39 msg[0] = data >> 16; 40 40 msg[1] = data >> 8; 41 41 msg[2] = data;
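Renaming `len` to `shift` reflects what the argument actually is: the left-shift that aligns a 16-, 14- or 12-bit code (shift 0, 2 or 4) inside the 24-bit shift-register word. A sketch mirroring the `data = ...` expression (example values invented):

```c
#include <stdint.h>

/* Pack cmd/addr/value into the 24-bit word clocked out over SPI;
 * 'shift' left-aligns the DAC code for the 16/14/12-bit parts. */
static uint32_t ad5624r_pack(uint8_t cmd, uint8_t addr, uint16_t val,
			     uint8_t shift)
{
	return ((uint32_t)cmd << 19) | ((uint32_t)addr << 16) |
	       ((uint32_t)val << shift);
}
```

Passing the shift directly avoids re-deriving it as `16 - len` at every call site, which is what the old signature forced.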
+18
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
··· 431 431 return -EINVAL; 432 432 } 433 433 434 + static int inv_write_raw_get_fmt(struct iio_dev *indio_dev, 435 + struct iio_chan_spec const *chan, long mask) 436 + { 437 + switch (mask) { 438 + case IIO_CHAN_INFO_SCALE: 439 + switch (chan->type) { 440 + case IIO_ANGL_VEL: 441 + return IIO_VAL_INT_PLUS_NANO; 442 + default: 443 + return IIO_VAL_INT_PLUS_MICRO; 444 + } 445 + default: 446 + return IIO_VAL_INT_PLUS_MICRO; 447 + } 448 + 449 + return -EINVAL; 450 + } 434 451 static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val) 435 452 { 436 453 int result, i; ··· 719 702 .driver_module = THIS_MODULE, 720 703 .read_raw = &inv_mpu6050_read_raw, 721 704 .write_raw = &inv_mpu6050_write_raw, 705 + .write_raw_get_fmt = &inv_write_raw_get_fmt, 722 706 .attrs = &inv_attribute_group, 723 707 .validate_trigger = inv_mpu6050_validate_trigger, 724 708 };
+2
drivers/iio/light/Kconfig
··· 199 199 config LTR501 200 200 tristate "LTR-501ALS-01 light sensor" 201 201 depends on I2C 202 + select REGMAP_I2C 202 203 select IIO_BUFFER 203 204 select IIO_TRIGGERED_BUFFER 204 205 help ··· 213 212 config STK3310 214 213 tristate "STK3310 ALS and proximity sensor" 215 214 depends on I2C 215 + select REGMAP_I2C 216 216 help 217 217 Say yes here to get support for the Sensortek STK3310 ambient light 218 218 and proximity sensor. The STK3311 model is also supported by this
+1 -1
drivers/iio/light/cm3323.c
··· 123 123 for (i = 0; i < ARRAY_SIZE(cm3323_int_time); i++) { 124 124 if (val == cm3323_int_time[i].val && 125 125 val2 == cm3323_int_time[i].val2) { 126 - reg_conf = data->reg_conf; 126 + reg_conf = data->reg_conf & ~CM3323_CONF_IT_MASK; 127 127 reg_conf |= i << CM3323_CONF_IT_SHIFT; 128 128 129 129 ret = i2c_smbus_write_word_data(data->client,
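The cm3323 fix clears the integration-time field before OR-ing in the new index; without the `& ~CM3323_CONF_IT_MASK`, bits from the previous setting would linger and corrupt the field. A generic read-modify-write sketch (the field layout here is invented, not the CM3323's real register map):

```c
#include <stdint.h>

#define CONF_IT_SHIFT 2
#define CONF_IT_MASK  (0x7u << CONF_IT_SHIFT)	/* illustrative layout */

/* Read-modify-write of a register field: clear the field, then set it,
 * leaving all other bits of the register untouched. */
static uint16_t set_int_time(uint16_t reg_conf, uint16_t index)
{
	return (reg_conf & ~CONF_IT_MASK) | (index << CONF_IT_SHIFT);
}
```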
+1 -1
drivers/iio/light/ltr501.c
··· 1302 1302 if (ret < 0) 1303 1303 return ret; 1304 1304 1305 - data->als_contr = ret | data->chip_info->als_mode_active; 1305 + data->als_contr = status | data->chip_info->als_mode_active; 1306 1306 1307 1307 ret = regmap_read(data->regmap, LTR501_PS_CONTR, &status); 1308 1308 if (ret < 0)
+17 -36
drivers/iio/light/stk3310.c
··· 43 43 #define STK3311_CHIP_ID_VAL 0x1D 44 44 #define STK3310_PSINT_EN 0x01 45 45 #define STK3310_PS_MAX_VAL 0xFFFF 46 - #define STK3310_THRESH_MAX 0xFFFF 47 46 48 47 #define STK3310_DRIVER_NAME "stk3310" 49 48 #define STK3310_REGMAP_NAME "stk3310_regmap" ··· 83 84 REG_FIELD(STK3310_REG_FLAG, 4, 4); 84 85 static const struct reg_field stk3310_reg_field_flag_nf = 85 86 REG_FIELD(STK3310_REG_FLAG, 0, 0); 86 - /* 87 - * Maximum PS values with regard to scale. Used to export the 'inverse' 88 - * PS value (high values for far objects, low values for near objects). 89 - */ 87 + 88 + /* Estimate maximum proximity values with regard to measurement scale. */ 90 89 static const int stk3310_ps_max[4] = { 91 - STK3310_PS_MAX_VAL / 64, 92 - STK3310_PS_MAX_VAL / 16, 93 - STK3310_PS_MAX_VAL / 4, 94 - STK3310_PS_MAX_VAL, 90 + STK3310_PS_MAX_VAL / 640, 91 + STK3310_PS_MAX_VAL / 160, 92 + STK3310_PS_MAX_VAL / 40, 93 + STK3310_PS_MAX_VAL / 10 95 94 }; 96 95 97 96 static const int stk3310_scale_table[][2] = { ··· 125 128 /* Proximity event */ 126 129 { 127 130 .type = IIO_EV_TYPE_THRESH, 128 - .dir = IIO_EV_DIR_FALLING, 131 + .dir = IIO_EV_DIR_RISING, 129 132 .mask_separate = BIT(IIO_EV_INFO_VALUE) | 130 133 BIT(IIO_EV_INFO_ENABLE), 131 134 }, 132 135 /* Out-of-proximity event */ 133 136 { 134 137 .type = IIO_EV_TYPE_THRESH, 135 - .dir = IIO_EV_DIR_RISING, 138 + .dir = IIO_EV_DIR_FALLING, 136 139 .mask_separate = BIT(IIO_EV_INFO_VALUE) | 137 140 BIT(IIO_EV_INFO_ENABLE), 138 141 }, ··· 202 205 u8 reg; 203 206 u16 buf; 204 207 int ret; 205 - unsigned int index; 206 208 struct stk3310_data *data = iio_priv(indio_dev); 207 209 208 210 if (info != IIO_EV_INFO_VALUE) 209 211 return -EINVAL; 210 212 211 - /* 212 - * Only proximity interrupts are implemented at the moment. 213 - * Since we're inverting proximity values, the sensor's 'high' 214 - * threshold will become our 'low' threshold, associated with 215 - * 'near' events. 
Similarly, the sensor's 'low' threshold will 216 - * be our 'high' threshold, associated with 'far' events. 217 - */ 213 + /* Only proximity interrupts are implemented at the moment. */ 218 214 if (dir == IIO_EV_DIR_RISING) 219 - reg = STK3310_REG_THDL_PS; 220 - else if (dir == IIO_EV_DIR_FALLING) 221 215 reg = STK3310_REG_THDH_PS; 216 + else if (dir == IIO_EV_DIR_FALLING) 217 + reg = STK3310_REG_THDL_PS; 222 218 else 223 219 return -EINVAL; 224 220 ··· 222 232 dev_err(&data->client->dev, "register read failed\n"); 223 233 return ret; 224 234 } 225 - regmap_field_read(data->reg_ps_gain, &index); 226 - *val = swab16(stk3310_ps_max[index] - buf); 235 + *val = swab16(buf); 227 236 228 237 return IIO_VAL_INT; 229 238 } ··· 246 257 return -EINVAL; 247 258 248 259 if (dir == IIO_EV_DIR_RISING) 249 - reg = STK3310_REG_THDL_PS; 250 - else if (dir == IIO_EV_DIR_FALLING) 251 260 reg = STK3310_REG_THDH_PS; 261 + else if (dir == IIO_EV_DIR_FALLING) 262 + reg = STK3310_REG_THDL_PS; 252 263 else 253 264 return -EINVAL; 254 265 255 - buf = swab16(stk3310_ps_max[index] - val); 266 + buf = swab16(val); 256 267 ret = regmap_bulk_write(data->regmap, reg, &buf, 2); 257 268 if (ret < 0) 258 269 dev_err(&client->dev, "failed to set PS threshold!\n"); ··· 323 334 return ret; 324 335 } 325 336 *val = swab16(buf); 326 - if (chan->type == IIO_PROXIMITY) { 327 - /* 328 - * Invert the proximity data so we return low values 329 - * for close objects and high values for far ones. 330 - */ 331 - regmap_field_read(data->reg_ps_gain, &index); 332 - *val = stk3310_ps_max[index] - *val; 333 - } 334 337 mutex_unlock(&data->lock); 335 338 return IIO_VAL_INT; 336 339 case IIO_CHAN_INFO_INT_TIME: ··· 562 581 } 563 582 event = IIO_UNMOD_EVENT_CODE(IIO_PROXIMITY, 1, 564 583 IIO_EV_TYPE_THRESH, 565 - (dir ? IIO_EV_DIR_RISING : 566 - IIO_EV_DIR_FALLING)); 584 + (dir ? 
IIO_EV_DIR_FALLING : 585 + IIO_EV_DIR_RISING)); 567 586 iio_push_event(indio_dev, event, data->timestamp); 568 587 569 588 /* Reset the interrupt flag */
+1 -1
drivers/iio/light/tcs3414.c
··· 185 185 if (val != 0) 186 186 return -EINVAL; 187 187 for (i = 0; i < ARRAY_SIZE(tcs3414_times); i++) { 188 - if (val == tcs3414_times[i] * 1000) { 188 + if (val2 == tcs3414_times[i] * 1000) { 189 189 data->timing &= ~TCS3414_INTEG_MASK; 190 190 data->timing |= i; 191 191 return i2c_smbus_write_byte_data(
+21 -14
drivers/iio/magnetometer/mmc35240.c
··· 84 84 #define MMC35240_OTP_START_ADDR 0x1B 85 85 86 86 enum mmc35240_resolution { 87 - MMC35240_16_BITS_SLOW = 0, /* 100 Hz */ 88 - MMC35240_16_BITS_FAST, /* 200 Hz */ 89 - MMC35240_14_BITS, /* 333 Hz */ 90 - MMC35240_12_BITS, /* 666 Hz */ 87 + MMC35240_16_BITS_SLOW = 0, /* 7.92 ms */ 88 + MMC35240_16_BITS_FAST, /* 4.08 ms */ 89 + MMC35240_14_BITS, /* 2.16 ms */ 90 + MMC35240_12_BITS, /* 1.20 ms */ 91 91 }; 92 92 93 93 enum mmc35240_axis { ··· 100 100 int sens[3]; /* sensitivity per X, Y, Z axis */ 101 101 int nfo; /* null field output */ 102 102 } mmc35240_props_table[] = { 103 - /* 16 bits, 100Hz ODR */ 103 + /* 16 bits, 125Hz ODR */ 104 104 { 105 105 {1024, 1024, 1024}, 106 106 32768, 107 107 }, 108 - /* 16 bits, 200Hz ODR */ 108 + /* 16 bits, 250Hz ODR */ 109 109 { 110 110 {1024, 1024, 770}, 111 111 32768, 112 112 }, 113 - /* 14 bits, 333Hz ODR */ 113 + /* 14 bits, 450Hz ODR */ 114 114 { 115 115 {256, 256, 193}, 116 116 8192, 117 117 }, 118 - /* 12 bits, 666Hz ODR */ 118 + /* 12 bits, 800Hz ODR */ 119 119 { 120 120 {64, 64, 48}, 121 121 2048, ··· 133 133 int axis_scale[3]; 134 134 }; 135 135 136 - static const int mmc35240_samp_freq[] = {100, 200, 333, 666}; 136 + static const struct { 137 + int val; 138 + int val2; 139 + } mmc35240_samp_freq[] = { {1, 500000}, 140 + {13, 0}, 141 + {25, 0}, 142 + {50, 0} }; 137 143 138 - static IIO_CONST_ATTR_SAMP_FREQ_AVAIL("100 200 333 666"); 144 + static IIO_CONST_ATTR_SAMP_FREQ_AVAIL("1.5 13 25 50"); 139 145 140 146 #define MMC35240_CHANNEL(_axis) { \ 141 147 .type = IIO_MAGN, \ ··· 174 168 int i; 175 169 176 170 for (i = 0; i < ARRAY_SIZE(mmc35240_samp_freq); i++) 177 - if (mmc35240_samp_freq[i] == val) 171 + if (mmc35240_samp_freq[i].val == val && 172 + mmc35240_samp_freq[i].val2 == val2) 178 173 return i; 179 174 return -EINVAL; 180 175 } ··· 385 378 if (i < 0 || i >= ARRAY_SIZE(mmc35240_samp_freq)) 386 379 return -EINVAL; 387 380 388 - *val = mmc35240_samp_freq[i]; 389 - *val2 = 0; 390 - return IIO_VAL_INT; 381 + 
*val = mmc35240_samp_freq[i].val; 382 + *val2 = mmc35240_samp_freq[i].val2; 383 + return IIO_VAL_INT_PLUS_MICRO; 391 384 default: 392 385 return -EINVAL; 393 386 }
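Storing the sample frequencies as val/val2 pairs lets the table express 1.5 Hz, which a plain int array cannot; IIO then reports them with IIO_VAL_INT_PLUS_MICRO (val2 in micro-units). A sketch of matching a requested pair against such a table, shaped like the driver's lookup loop:

```c
#include <stddef.h>

struct samp_freq { int val; int val2; };	/* val2 in micro-units */

static const struct samp_freq mmc_freqs[] = {
	{1, 500000}, {13, 0}, {25, 0}, {50, 0}	/* 1.5, 13, 25, 50 Hz */
};

/* Return the table index whose val/val2 pair matches, or -1. */
static int freq_index(int val, int val2)
{
	size_t i;

	for (i = 0; i < sizeof(mmc_freqs) / sizeof(mmc_freqs[0]); i++)
		if (mmc_freqs[i].val == val && mmc_freqs[i].val2 == val2)
			return (int)i;
	return -1;
}
```

Requiring both halves to match is what keeps a write of "1 Hz" from silently selecting the 1.5 Hz entry.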
+16 -14
drivers/iio/proximity/sx9500.c
··· 80 80 #define SX9500_COMPSTAT_MASK GENMASK(3, 0) 81 81 82 82 #define SX9500_NUM_CHANNELS 4 83 + #define SX9500_CHAN_MASK GENMASK(SX9500_NUM_CHANNELS - 1, 0) 83 84 84 85 struct sx9500_data { 85 86 struct mutex mutex; ··· 282 281 if (ret < 0) 283 282 return ret; 284 283 285 - *val = 32767 - (s16)be16_to_cpu(regval); 284 + *val = be16_to_cpu(regval); 286 285 287 286 return IIO_VAL_INT; 288 287 } ··· 330 329 else 331 330 ret = sx9500_wait_for_sample(data); 332 331 333 - if (ret < 0) 334 - return ret; 335 - 336 332 mutex_lock(&data->mutex); 333 + 334 + if (ret < 0) 335 + goto out_dec_data_rdy; 337 336 338 337 ret = sx9500_read_prox_data(data, chan, val); 339 338 if (ret < 0) 340 - goto out; 341 - 342 - ret = sx9500_dec_chan_users(data, chan->channel); 343 - if (ret < 0) 344 - goto out; 339 + goto out_dec_data_rdy; 345 340 346 341 ret = sx9500_dec_data_rdy_users(data); 342 + if (ret < 0) 343 + goto out_dec_chan; 344 + 345 + ret = sx9500_dec_chan_users(data, chan->channel); 347 346 if (ret < 0) 348 347 goto out; 349 348 ··· 351 350 352 351 goto out; 353 352 353 + out_dec_data_rdy: 354 + sx9500_dec_data_rdy_users(data); 354 355 out_dec_chan: 355 356 sx9500_dec_chan_users(data, chan->channel); 356 357 out: ··· 682 679 static int sx9500_buffer_preenable(struct iio_dev *indio_dev) 683 680 { 684 681 struct sx9500_data *data = iio_priv(indio_dev); 685 - int ret, i; 682 + int ret = 0, i; 686 683 687 684 mutex_lock(&data->mutex); 688 685 ··· 706 703 static int sx9500_buffer_predisable(struct iio_dev *indio_dev) 707 704 { 708 705 struct sx9500_data *data = iio_priv(indio_dev); 709 - int ret, i; 706 + int ret = 0, i; 710 707 711 708 iio_triggered_buffer_predisable(indio_dev); 712 709 ··· 803 800 unsigned int val; 804 801 805 802 ret = regmap_update_bits(data->regmap, SX9500_REG_PROX_CTRL0, 806 - GENMASK(SX9500_NUM_CHANNELS, 0), 807 - GENMASK(SX9500_NUM_CHANNELS, 0)); 803 + SX9500_CHAN_MASK, SX9500_CHAN_MASK); 808 804 if (ret < 0) 809 805 return ret; 810 806 ··· 823 821 824 822 
out: 825 823 regmap_update_bits(data->regmap, SX9500_REG_PROX_CTRL0, 826 - GENMASK(SX9500_NUM_CHANNELS, 0), 0); 824 + SX9500_CHAN_MASK, 0); 827 825 return ret; 828 826 } 829 827
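The reordered error paths in the sx9500 hunk restore strict LIFO unwinding: each failure point releases exactly what was already acquired, newest first, via staggered goto labels. A generic sketch of the pattern (steps recorded as letters so the release order is checkable; the acquire/release names are invented):

```c
/* Classic kernel error-path shape: a failure at step N jumps to the
 * label that releases step N-1, and the labels fall through in
 * reverse acquisition order. */
static int do_sequence(int fail_at, char *events, int *n)
{
	*n = 0;
	events[(*n)++] = 'A';		/* acquire A */
	if (fail_at == 1)
		goto out_a;
	events[(*n)++] = 'B';		/* acquire B */
	if (fail_at == 2)
		goto out_b;
	return 0;

out_b:
	events[(*n)++] = 'b';		/* release B */
out_a:
	events[(*n)++] = 'a';		/* release A */
	return -1;
}
```

Jumping to the wrong label, as the pre-fix code did, either leaks a reference or drops one that was never taken.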
+3
drivers/iio/temperature/tmp006.c
··· 132 132 struct tmp006_data *data = iio_priv(indio_dev); 133 133 int i; 134 134 135 + if (mask != IIO_CHAN_INFO_SAMP_FREQ) 136 + return -EINVAL; 137 + 135 138 for (i = 0; i < ARRAY_SIZE(tmp006_freqs); i++) 136 139 if ((val == tmp006_freqs[i][0]) && 137 140 (val2 == tmp006_freqs[i][1])) {
+2 -2
drivers/infiniband/core/agent.c
··· 88 88 struct ib_ah *ah; 89 89 struct ib_mad_send_wr_private *mad_send_wr; 90 90 91 - if (device->node_type == RDMA_NODE_IB_SWITCH) 91 + if (rdma_cap_ib_switch(device)) 92 92 port_priv = ib_get_agent_port(device, 0); 93 93 else 94 94 port_priv = ib_get_agent_port(device, port_num); ··· 122 122 memcpy(send_buf->mad, mad_hdr, resp_mad_len); 123 123 send_buf->ah = ah; 124 124 125 - if (device->node_type == RDMA_NODE_IB_SWITCH) { 125 + if (rdma_cap_ib_switch(device)) { 126 126 mad_send_wr = container_of(send_buf, 127 127 struct ib_mad_send_wr_private, 128 128 send_buf);
+55 -6
drivers/infiniband/core/cm.c
··· 169 169 struct ib_device *ib_device; 170 170 struct device *device; 171 171 u8 ack_delay; 172 + int going_down; 172 173 struct cm_port *port[0]; 173 174 }; 174 175 ··· 806 805 { 807 806 int wait_time; 808 807 unsigned long flags; 808 + struct cm_device *cm_dev; 809 + 810 + cm_dev = ib_get_client_data(cm_id_priv->id.device, &cm_client); 811 + if (!cm_dev) 812 + return; 809 813 810 814 spin_lock_irqsave(&cm.lock, flags); 811 815 cm_cleanup_timewait(cm_id_priv->timewait_info); ··· 824 818 */ 825 819 cm_id_priv->id.state = IB_CM_TIMEWAIT; 826 820 wait_time = cm_convert_to_ms(cm_id_priv->av.timeout); 827 - queue_delayed_work(cm.wq, &cm_id_priv->timewait_info->work.work, 828 - msecs_to_jiffies(wait_time)); 821 + 822 + /* Check if the device started its remove_one */ 823 + spin_lock_irq(&cm.lock); 824 + if (!cm_dev->going_down) 825 + queue_delayed_work(cm.wq, &cm_id_priv->timewait_info->work.work, 826 + msecs_to_jiffies(wait_time)); 827 + spin_unlock_irq(&cm.lock); 828 + 829 829 cm_id_priv->timewait_info = NULL; 830 830 } 831 831 ··· 3317 3305 struct cm_work *work; 3318 3306 unsigned long flags; 3319 3307 int ret = 0; 3308 + struct cm_device *cm_dev; 3309 + 3310 + cm_dev = ib_get_client_data(cm_id->device, &cm_client); 3311 + if (!cm_dev) 3312 + return -ENODEV; 3320 3313 3321 3314 work = kmalloc(sizeof *work, GFP_ATOMIC); 3322 3315 if (!work) ··· 3360 3343 work->remote_id = cm_id->remote_id; 3361 3344 work->mad_recv_wc = NULL; 3362 3345 work->cm_event.event = IB_CM_USER_ESTABLISHED; 3363 - queue_delayed_work(cm.wq, &work->work, 0); 3346 + 3347 + /* Check if the device started its remove_one */ 3348 + spin_lock_irq(&cm.lock); 3349 + if (!cm_dev->going_down) { 3350 + queue_delayed_work(cm.wq, &work->work, 0); 3351 + } else { 3352 + kfree(work); 3353 + ret = -ENODEV; 3354 + } 3355 + spin_unlock_irq(&cm.lock); 3356 + 3364 3357 out: 3365 3358 return ret; 3366 3359 } ··· 3421 3394 enum ib_cm_event_type event; 3422 3395 u16 attr_id; 3423 3396 int paths = 0; 3397 + int 
going_down = 0; 3424 3398 3425 3399 switch (mad_recv_wc->recv_buf.mad->mad_hdr.attr_id) { 3426 3400 case CM_REQ_ATTR_ID: ··· 3480 3452 work->cm_event.event = event; 3481 3453 work->mad_recv_wc = mad_recv_wc; 3482 3454 work->port = port; 3483 - queue_delayed_work(cm.wq, &work->work, 0); 3455 + 3456 + /* Check if the device started its remove_one */ 3457 + spin_lock_irq(&cm.lock); 3458 + if (!port->cm_dev->going_down) 3459 + queue_delayed_work(cm.wq, &work->work, 0); 3460 + else 3461 + going_down = 1; 3462 + spin_unlock_irq(&cm.lock); 3463 + 3464 + if (going_down) { 3465 + kfree(work); 3466 + ib_free_recv_mad(mad_recv_wc); 3467 + } 3484 3468 } 3485 3469 3486 3470 static int cm_init_qp_init_attr(struct cm_id_private *cm_id_priv, ··· 3811 3771 3812 3772 cm_dev->ib_device = ib_device; 3813 3773 cm_get_ack_delay(cm_dev); 3814 - 3774 + cm_dev->going_down = 0; 3815 3775 cm_dev->device = device_create(&cm_class, &ib_device->dev, 3816 3776 MKDEV(0, 0), NULL, 3817 3777 "%s", ib_device->name); ··· 3904 3864 list_del(&cm_dev->list); 3905 3865 write_unlock_irqrestore(&cm.device_lock, flags); 3906 3866 3867 + spin_lock_irq(&cm.lock); 3868 + cm_dev->going_down = 1; 3869 + spin_unlock_irq(&cm.lock); 3870 + 3907 3871 for (i = 1; i <= ib_device->phys_port_cnt; i++) { 3908 3872 if (!rdma_cap_ib_cm(ib_device, i)) 3909 3873 continue; 3910 3874 3911 3875 port = cm_dev->port[i-1]; 3912 3876 ib_modify_port(ib_device, port->port_num, 0, &port_modify); 3913 - ib_unregister_mad_agent(port->mad_agent); 3877 + /* 3878 + * We flush the queue here after the going_down set, this 3879 + * verify that no new works will be queued in the recv handler, 3880 + * after that we can call the unregister_mad_agent 3881 + */ 3914 3882 flush_workqueue(cm.wq); 3883 + ib_unregister_mad_agent(port->mad_agent); 3915 3884 cm_remove_port_fs(port); 3916 3885 } 3917 3886 device_unregister(cm_dev->device);
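Setting `going_down` under `cm.lock` before flushing the workqueue lets every queueing site observe the flag and bail out instead of racing device removal. A single-threaded sketch of that control flow (the real code takes the spinlock around both the test and the enqueue; locking is elided here):

```c
#include <stdbool.h>

/* In cm.c the flag is read and written under cm.lock; this sketch
 * keeps only the shutdown handshake's control flow. */
static bool going_down;
static int queued;

/* Queue work only while the device is not being removed. */
static bool try_queue_work(void)
{
	if (going_down)
		return false;	/* remove_one started: drop the work */
	queued++;
	return true;
}

static void begin_remove(void)
{
	going_down = true;
	/* real code: flush the workqueue, then unregister the MAD agent */
}
```

Flushing after the flag is set is the key ordering: it guarantees no work item queued before the flip is still in flight when the MAD agent goes away.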
+16 -17
drivers/infiniband/core/iwpm_msg.c
··· 67 67 err_str = "Invalid port mapper client"; 68 68 goto pid_query_error; 69 69 } 70 - if (iwpm_registered_client(nl_client)) 70 + if (iwpm_check_registration(nl_client, IWPM_REG_VALID) || 71 + iwpm_user_pid == IWPM_PID_UNAVAILABLE) 71 72 return 0; 72 73 skb = iwpm_create_nlmsg(RDMA_NL_IWPM_REG_PID, &nlh, nl_client); 73 74 if (!skb) { ··· 107 106 ret = ibnl_multicast(skb, nlh, RDMA_NL_GROUP_IWPM, GFP_KERNEL); 108 107 if (ret) { 109 108 skb = NULL; /* skb is freed in the netlink send-op handling */ 110 - iwpm_set_registered(nl_client, 1); 111 109 iwpm_user_pid = IWPM_PID_UNAVAILABLE; 112 110 err_str = "Unable to send a nlmsg"; 113 111 goto pid_query_error; ··· 144 144 err_str = "Invalid port mapper client"; 145 145 goto add_mapping_error; 146 146 } 147 - if (!iwpm_registered_client(nl_client)) { 147 + if (!iwpm_valid_pid()) 148 + return 0; 149 + if (!iwpm_check_registration(nl_client, IWPM_REG_VALID)) { 148 150 err_str = "Unregistered port mapper client"; 149 151 goto add_mapping_error; 150 152 } 151 - if (!iwpm_valid_pid()) 152 - return 0; 153 153 skb = iwpm_create_nlmsg(RDMA_NL_IWPM_ADD_MAPPING, &nlh, nl_client); 154 154 if (!skb) { 155 155 err_str = "Unable to create a nlmsg"; ··· 214 214 err_str = "Invalid port mapper client"; 215 215 goto query_mapping_error; 216 216 } 217 - if (!iwpm_registered_client(nl_client)) { 217 + if (!iwpm_valid_pid()) 218 + return 0; 219 + if (!iwpm_check_registration(nl_client, IWPM_REG_VALID)) { 218 220 err_str = "Unregistered port mapper client"; 219 221 goto query_mapping_error; 220 222 } 221 - if (!iwpm_valid_pid()) 222 - return 0; 223 223 ret = -ENOMEM; 224 224 skb = iwpm_create_nlmsg(RDMA_NL_IWPM_QUERY_MAPPING, &nlh, nl_client); 225 225 if (!skb) { ··· 288 288 err_str = "Invalid port mapper client"; 289 289 goto remove_mapping_error; 290 290 } 291 - if (!iwpm_registered_client(nl_client)) { 291 + if (!iwpm_valid_pid()) 292 + return 0; 293 + if (iwpm_check_registration(nl_client, IWPM_REG_UNDEF)) { 292 294 err_str = 
"Unregistered port mapper client"; 293 295 goto remove_mapping_error; 294 296 } 295 - if (!iwpm_valid_pid()) 296 - return 0; 297 297 skb = iwpm_create_nlmsg(RDMA_NL_IWPM_REMOVE_MAPPING, &nlh, nl_client); 298 298 if (!skb) { 299 299 ret = -ENOMEM; ··· 388 388 pr_debug("%s: iWarp Port Mapper (pid = %d) is available!\n", 389 389 __func__, iwpm_user_pid); 390 390 if (iwpm_valid_client(nl_client)) 391 - iwpm_set_registered(nl_client, 1); 391 + iwpm_set_registration(nl_client, IWPM_REG_VALID); 392 392 register_pid_response_exit: 393 393 nlmsg_request->request_done = 1; 394 394 /* always for found nlmsg_request */ ··· 644 644 { 645 645 struct nlattr *nltb[IWPM_NLA_MAPINFO_REQ_MAX]; 646 646 const char *msg_type = "Mapping Info response"; 647 - int iwpm_pid; 648 647 u8 nl_client; 649 648 char *iwpm_name; 650 649 u16 iwpm_version; ··· 668 669 __func__, nl_client); 669 670 return ret; 670 671 } 671 - iwpm_set_registered(nl_client, 0); 672 + iwpm_set_registration(nl_client, IWPM_REG_INCOMPL); 672 673 atomic_set(&echo_nlmsg_seq, cb->nlh->nlmsg_seq); 674 + iwpm_user_pid = cb->nlh->nlmsg_pid; 673 675 if (!iwpm_mapinfo_available()) 674 676 return 0; 675 - iwpm_pid = cb->nlh->nlmsg_pid; 676 677 pr_debug("%s: iWarp Port Mapper (pid = %d) is available!\n", 677 - __func__, iwpm_pid); 678 - ret = iwpm_send_mapinfo(nl_client, iwpm_pid); 678 + __func__, iwpm_user_pid); 679 + ret = iwpm_send_mapinfo(nl_client, iwpm_user_pid); 679 680 return ret; 680 681 } 681 682 EXPORT_SYMBOL(iwpm_mapping_info_cb);
+10 -2
drivers/infiniband/core/iwpm_util.c
··· 78 78 mutex_unlock(&iwpm_admin_lock); 79 79 if (!ret) { 80 80 iwpm_set_valid(nl_client, 1); 81 + iwpm_set_registration(nl_client, IWPM_REG_UNDEF); 81 82 pr_debug("%s: Mapinfo and reminfo tables are created\n", 82 83 __func__); 83 84 } ··· 107 106 } 108 107 mutex_unlock(&iwpm_admin_lock); 109 108 iwpm_set_valid(nl_client, 0); 109 + iwpm_set_registration(nl_client, IWPM_REG_UNDEF); 110 110 return 0; 111 111 } 112 112 EXPORT_SYMBOL(iwpm_exit); ··· 399 397 } 400 398 401 399 /* valid client */ 402 - int iwpm_registered_client(u8 nl_client) 400 + u32 iwpm_get_registration(u8 nl_client) 403 401 { 404 402 return iwpm_admin.reg_list[nl_client]; 405 403 } 406 404 407 405 /* valid client */ 408 - void iwpm_set_registered(u8 nl_client, int reg) 406 + void iwpm_set_registration(u8 nl_client, u32 reg) 409 407 { 410 408 iwpm_admin.reg_list[nl_client] = reg; 409 + } 410 + 411 + /* valid client */ 412 + u32 iwpm_check_registration(u8 nl_client, u32 reg) 413 + { 414 + return (iwpm_get_registration(nl_client) & reg); 411 415 } 412 416 413 417 int iwpm_compare_sockaddr(struct sockaddr_storage *a_sockaddr,
+22 -6
drivers/infiniband/core/iwpm_util.h
··· 58 58 #define IWPM_PID_UNDEFINED -1 59 59 #define IWPM_PID_UNAVAILABLE -2 60 60 61 + #define IWPM_REG_UNDEF 0x01 62 + #define IWPM_REG_VALID 0x02 63 + #define IWPM_REG_INCOMPL 0x04 64 + 61 65 struct iwpm_nlmsg_request { 62 66 struct list_head inprocess_list; 63 67 __u32 nlmsg_seq; ··· 92 88 atomic_t refcount; 93 89 atomic_t nlmsg_seq; 94 90 int client_list[RDMA_NL_NUM_CLIENTS]; 95 - int reg_list[RDMA_NL_NUM_CLIENTS]; 91 + u32 reg_list[RDMA_NL_NUM_CLIENTS]; 96 92 }; 97 93 98 94 /** ··· 163 159 void iwpm_set_valid(u8 nl_client, int valid); 164 160 165 161 /** 166 - * iwpm_registered_client - Check if the port mapper client is registered 162 + * iwpm_check_registration - Check if the client registration 163 + * matches the given one 167 164 * @nl_client: The index of the netlink client 165 + * @reg: The given registration type to compare with 168 166 * 169 167 * Call iwpm_register_pid() to register a client 168 + * Returns true if the client registration matches reg, 169 + * otherwise returns false 170 170 */ 171 - int iwpm_registered_client(u8 nl_client); 171 + u32 iwpm_check_registration(u8 nl_client, u32 reg); 172 172 173 173 /** 174 - * iwpm_set_registered - Set the port mapper client to registered or not 174 + * iwpm_set_registration - Set the client registration 175 175 * @nl_client: The index of the netlink client 176 - * @reg: 1 if registered or 0 if not 176 + * @reg: Registration type to set 177 177 */ 178 - void iwpm_set_registered(u8 nl_client, int reg); 178 + void iwpm_set_registration(u8 nl_client, u32 reg); 179 + 180 + /** 181 + * iwpm_get_registration 182 + * @nl_client: The index of the netlink client 183 + * 184 + * Returns the client registration type 185 + */ 186 + u32 iwpm_get_registration(u8 nl_client); 179 187 180 188 /** 181 189 * iwpm_send_mapinfo - Send local and mapped IPv4/IPv6 address info of
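Turning the boolean registration into the IWPM_REG_* bitmask lets a caller accept several states in a single test, which is exactly what `iwpm_check_registration()` does by AND-ing the current state against the mask. A sketch (state transitions simplified to a single global):

```c
#include <stdint.h>

#define IWPM_REG_UNDEF   0x01
#define IWPM_REG_VALID   0x02
#define IWPM_REG_INCOMPL 0x04

static uint32_t reg_state = IWPM_REG_UNDEF;

static void set_registration(uint32_t reg)
{
	reg_state = reg;
}

/* Non-zero when the current state matches any bit set in 'reg'. */
static uint32_t check_registration(uint32_t reg)
{
	return reg_state & reg;
}
```

That is why remove_mapping can ask for IWPM_REG_UNDEF alone while add/query demand IWPM_REG_VALID: each call site picks the set of states it tolerates.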
+17 -30
drivers/infiniband/core/mad.c
··· 769 769 bool opa = rdma_cap_opa_mad(mad_agent_priv->qp_info->port_priv->device, 770 770 mad_agent_priv->qp_info->port_priv->port_num); 771 771 772 - if (device->node_type == RDMA_NODE_IB_SWITCH && 772 + if (rdma_cap_ib_switch(device) && 773 773 smp->mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) 774 774 port_num = send_wr->wr.ud.port_num; 775 775 else ··· 787 787 if ((opa_get_smp_direction(opa_smp) 788 788 ? opa_smp->route.dr.dr_dlid : opa_smp->route.dr.dr_slid) == 789 789 OPA_LID_PERMISSIVE && 790 - opa_smi_handle_dr_smp_send(opa_smp, device->node_type, 790 + opa_smi_handle_dr_smp_send(opa_smp, 791 + rdma_cap_ib_switch(device), 791 792 port_num) == IB_SMI_DISCARD) { 792 793 ret = -EINVAL; 793 794 dev_err(&device->dev, "OPA Invalid directed route\n"); 794 795 goto out; 795 796 } 796 797 opa_drslid = be32_to_cpu(opa_smp->route.dr.dr_slid); 797 - if (opa_drslid != OPA_LID_PERMISSIVE && 798 + if (opa_drslid != be32_to_cpu(OPA_LID_PERMISSIVE) && 798 799 opa_drslid & 0xffff0000) { 799 800 ret = -EINVAL; 800 801 dev_err(&device->dev, "OPA Invalid dr_slid 0x%x\n", ··· 811 810 } else { 812 811 if ((ib_get_smp_direction(smp) ? 
smp->dr_dlid : smp->dr_slid) == 813 812 IB_LID_PERMISSIVE && 814 - smi_handle_dr_smp_send(smp, device->node_type, port_num) == 813 + smi_handle_dr_smp_send(smp, rdma_cap_ib_switch(device), port_num) == 815 814 IB_SMI_DISCARD) { 816 815 ret = -EINVAL; 817 816 dev_err(&device->dev, "Invalid directed route\n"); ··· 2031 2030 struct ib_smp *smp = (struct ib_smp *)recv->mad; 2032 2031 2033 2032 if (smi_handle_dr_smp_recv(smp, 2034 - port_priv->device->node_type, 2033 + rdma_cap_ib_switch(port_priv->device), 2035 2034 port_num, 2036 2035 port_priv->device->phys_port_cnt) == 2037 2036 IB_SMI_DISCARD) ··· 2043 2042 2044 2043 if (retsmi == IB_SMI_SEND) { /* don't forward */ 2045 2044 if (smi_handle_dr_smp_send(smp, 2046 - port_priv->device->node_type, 2045 + rdma_cap_ib_switch(port_priv->device), 2047 2046 port_num) == IB_SMI_DISCARD) 2048 2047 return IB_SMI_DISCARD; 2049 2048 2050 2049 if (smi_check_local_smp(smp, port_priv->device) == IB_SMI_DISCARD) 2051 2050 return IB_SMI_DISCARD; 2052 - } else if (port_priv->device->node_type == RDMA_NODE_IB_SWITCH) { 2051 + } else if (rdma_cap_ib_switch(port_priv->device)) { 2053 2052 /* forward case for switches */ 2054 2053 memcpy(response, recv, mad_priv_size(response)); 2055 2054 response->header.recv_wc.wc = &response->header.wc; ··· 2116 2115 struct opa_smp *smp = (struct opa_smp *)recv->mad; 2117 2116 2118 2117 if (opa_smi_handle_dr_smp_recv(smp, 2119 - port_priv->device->node_type, 2118 + rdma_cap_ib_switch(port_priv->device), 2120 2119 port_num, 2121 2120 port_priv->device->phys_port_cnt) == 2122 2121 IB_SMI_DISCARD) ··· 2128 2127 2129 2128 if (retsmi == IB_SMI_SEND) { /* don't forward */ 2130 2129 if (opa_smi_handle_dr_smp_send(smp, 2131 - port_priv->device->node_type, 2130 + rdma_cap_ib_switch(port_priv->device), 2132 2131 port_num) == IB_SMI_DISCARD) 2133 2132 return IB_SMI_DISCARD; 2134 2133 ··· 2136 2135 IB_SMI_DISCARD) 2137 2136 return IB_SMI_DISCARD; 2138 2137 2139 - } else if (port_priv->device->node_type == 
RDMA_NODE_IB_SWITCH) { 2138 + } else if (rdma_cap_ib_switch(port_priv->device)) { 2140 2139 /* forward case for switches */ 2141 2140 memcpy(response, recv, mad_priv_size(response)); 2142 2141 response->header.recv_wc.wc = &response->header.wc; ··· 2236 2235 goto out; 2237 2236 } 2238 2237 2239 - if (port_priv->device->node_type == RDMA_NODE_IB_SWITCH) 2238 + if (rdma_cap_ib_switch(port_priv->device)) 2240 2239 port_num = wc->port_num; 2241 2240 else 2242 2241 port_num = port_priv->port_num; ··· 3298 3297 3299 3298 static void ib_mad_init_device(struct ib_device *device) 3300 3299 { 3301 - int start, end, i; 3300 + int start, i; 3302 3301 3303 - if (device->node_type == RDMA_NODE_IB_SWITCH) { 3304 - start = 0; 3305 - end = 0; 3306 - } else { 3307 - start = 1; 3308 - end = device->phys_port_cnt; 3309 - } 3302 + start = rdma_start_port(device); 3310 3303 3311 - for (i = start; i <= end; i++) { 3304 + for (i = start; i <= rdma_end_port(device); i++) { 3312 3305 if (!rdma_cap_ib_mad(device, i)) 3313 3306 continue; 3314 3307 ··· 3337 3342 3338 3343 static void ib_mad_remove_device(struct ib_device *device) 3339 3344 { 3340 - int start, end, i; 3345 + int i; 3341 3346 3342 - if (device->node_type == RDMA_NODE_IB_SWITCH) { 3343 - start = 0; 3344 - end = 0; 3345 - } else { 3346 - start = 1; 3347 - end = device->phys_port_cnt; 3348 - } 3349 - 3350 - for (i = start; i <= end; i++) { 3347 + for (i = rdma_start_port(device); i <= rdma_end_port(device); i++) { 3351 3348 if (!rdma_cap_ib_mad(device, i)) 3352 3349 continue; 3353 3350
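The mad.c hunks above collapse the repeated open-coded `node_type == RDMA_NODE_IB_SWITCH` checks into `rdma_cap_ib_switch()` plus the `rdma_start_port()`/`rdma_end_port()` pair: a switch exposes only port 0, while a CA exposes ports 1..phys_port_cnt. A minimal userspace sketch of that idiom (the struct and helper names here are illustrative stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* illustrative stand-in for struct ib_device and the rdma_* port helpers */
struct dev { bool is_switch; int phys_port_cnt; };

static int start_port(const struct dev *d) { return d->is_switch ? 0 : 1; }
static int end_port(const struct dev *d)
{
	return d->is_switch ? 0 : d->phys_port_cnt;
}

/* count the ports the old if/else blocks used to enumerate by hand */
static int port_count(const struct dev *d)
{
	int n = 0;
	for (int p = start_port(d); p <= end_port(d); ++p)
		++n;
	return n;
}
```

With the helpers, every caller iterates `for (p = start; p <= end; ++p)` uniformly instead of special-casing switches.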
+2 -6
drivers/infiniband/core/multicast.c
··· 812 812 if (!dev) 813 813 return; 814 814 815 - if (device->node_type == RDMA_NODE_IB_SWITCH) 816 - dev->start_port = dev->end_port = 0; 817 - else { 818 - dev->start_port = 1; 819 - dev->end_port = device->phys_port_cnt; 820 - } 815 + dev->start_port = rdma_start_port(device); 816 + dev->end_port = rdma_end_port(device); 821 817 822 818 for (i = 0; i <= dev->end_port - dev->start_port; i++) { 823 819 if (!rdma_cap_ib_mcast(device, dev->start_port + i))
+2 -2
drivers/infiniband/core/opa_smi.h
··· 39 39 40 40 #include "smi.h" 41 41 42 - enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, u8 node_type, 42 + enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, bool is_switch, 43 43 int port_num, int phys_port_cnt); 44 44 int opa_smi_get_fwd_port(struct opa_smp *smp); 45 45 extern enum smi_forward_action opa_smi_check_forward_dr_smp(struct opa_smp *smp); 46 46 extern enum smi_action opa_smi_handle_dr_smp_send(struct opa_smp *smp, 47 - u8 node_type, int port_num); 47 + bool is_switch, int port_num); 48 48 49 49 /* 50 50 * Return IB_SMI_HANDLE if the SMP should be handled by the local SMA/SM
+2 -6
drivers/infiniband/core/sa_query.c
··· 1156 1156 int s, e, i; 1157 1157 int count = 0; 1158 1158 1159 - if (device->node_type == RDMA_NODE_IB_SWITCH) 1160 - s = e = 0; 1161 - else { 1162 - s = 1; 1163 - e = device->phys_port_cnt; 1164 - } 1159 + s = rdma_start_port(device); 1160 + e = rdma_end_port(device); 1165 1161 1166 1162 sa_dev = kzalloc(sizeof *sa_dev + 1167 1163 (e - s + 1) * sizeof (struct ib_sa_port),
+18 -19
drivers/infiniband/core/smi.c
··· 41 41 #include "smi.h" 42 42 #include "opa_smi.h" 43 43 44 - static enum smi_action __smi_handle_dr_smp_send(u8 node_type, int port_num, 44 + static enum smi_action __smi_handle_dr_smp_send(bool is_switch, int port_num, 45 45 u8 *hop_ptr, u8 hop_cnt, 46 46 const u8 *initial_path, 47 47 const u8 *return_path, ··· 64 64 65 65 /* C14-9:2 */ 66 66 if (*hop_ptr && *hop_ptr < hop_cnt) { 67 - if (node_type != RDMA_NODE_IB_SWITCH) 67 + if (!is_switch) 68 68 return IB_SMI_DISCARD; 69 69 70 70 /* return_path set when received */ ··· 77 77 if (*hop_ptr == hop_cnt) { 78 78 /* return_path set when received */ 79 79 (*hop_ptr)++; 80 - return (node_type == RDMA_NODE_IB_SWITCH || 80 + return (is_switch || 81 81 dr_dlid_is_permissive ? 82 82 IB_SMI_HANDLE : IB_SMI_DISCARD); 83 83 } ··· 96 96 97 97 /* C14-13:2 */ 98 98 if (2 <= *hop_ptr && *hop_ptr <= hop_cnt) { 99 - if (node_type != RDMA_NODE_IB_SWITCH) 99 + if (!is_switch) 100 100 return IB_SMI_DISCARD; 101 101 102 102 (*hop_ptr)--; ··· 108 108 if (*hop_ptr == 1) { 109 109 (*hop_ptr)--; 110 110 /* C14-13:3 -- SMPs destined for SM shouldn't be here */ 111 - return (node_type == RDMA_NODE_IB_SWITCH || 111 + return (is_switch || 112 112 dr_slid_is_permissive ? 
113 113 IB_SMI_HANDLE : IB_SMI_DISCARD); 114 114 } ··· 127 127 * Return IB_SMI_DISCARD if the SMP should be discarded 128 128 */ 129 129 enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp, 130 - u8 node_type, int port_num) 130 + bool is_switch, int port_num) 131 131 { 132 - return __smi_handle_dr_smp_send(node_type, port_num, 132 + return __smi_handle_dr_smp_send(is_switch, port_num, 133 133 &smp->hop_ptr, smp->hop_cnt, 134 134 smp->initial_path, 135 135 smp->return_path, ··· 139 139 } 140 140 141 141 enum smi_action opa_smi_handle_dr_smp_send(struct opa_smp *smp, 142 - u8 node_type, int port_num) 142 + bool is_switch, int port_num) 143 143 { 144 - return __smi_handle_dr_smp_send(node_type, port_num, 144 + return __smi_handle_dr_smp_send(is_switch, port_num, 145 145 &smp->hop_ptr, smp->hop_cnt, 146 146 smp->route.dr.initial_path, 147 147 smp->route.dr.return_path, ··· 152 152 OPA_LID_PERMISSIVE); 153 153 } 154 154 155 - static enum smi_action __smi_handle_dr_smp_recv(u8 node_type, int port_num, 155 + static enum smi_action __smi_handle_dr_smp_recv(bool is_switch, int port_num, 156 156 int phys_port_cnt, 157 157 u8 *hop_ptr, u8 hop_cnt, 158 158 const u8 *initial_path, ··· 173 173 174 174 /* C14-9:2 -- intermediate hop */ 175 175 if (*hop_ptr && *hop_ptr < hop_cnt) { 176 - if (node_type != RDMA_NODE_IB_SWITCH) 176 + if (!is_switch) 177 177 return IB_SMI_DISCARD; 178 178 179 179 return_path[*hop_ptr] = port_num; ··· 188 188 return_path[*hop_ptr] = port_num; 189 189 /* hop_ptr updated when sending */ 190 190 191 - return (node_type == RDMA_NODE_IB_SWITCH || 191 + return (is_switch || 192 192 dr_dlid_is_permissive ? 
193 193 IB_SMI_HANDLE : IB_SMI_DISCARD); 194 194 } ··· 208 208 209 209 /* C14-13:2 */ 210 210 if (2 <= *hop_ptr && *hop_ptr <= hop_cnt) { 211 - if (node_type != RDMA_NODE_IB_SWITCH) 211 + if (!is_switch) 212 212 return IB_SMI_DISCARD; 213 213 214 214 /* hop_ptr updated when sending */ ··· 224 224 return IB_SMI_HANDLE; 225 225 } 226 226 /* hop_ptr updated when sending */ 227 - return (node_type == RDMA_NODE_IB_SWITCH ? 228 - IB_SMI_HANDLE : IB_SMI_DISCARD); 227 + return (is_switch ? IB_SMI_HANDLE : IB_SMI_DISCARD); 229 228 } 230 229 231 230 /* C14-13:4 -- hop_ptr = 0 -> give to SM */ ··· 237 238 * Adjust information for a received SMP 238 239 * Return IB_SMI_DISCARD if the SMP should be dropped 239 240 */ 240 - enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, u8 node_type, 241 + enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, bool is_switch, 241 242 int port_num, int phys_port_cnt) 242 243 { 243 - return __smi_handle_dr_smp_recv(node_type, port_num, phys_port_cnt, 244 + return __smi_handle_dr_smp_recv(is_switch, port_num, phys_port_cnt, 244 245 &smp->hop_ptr, smp->hop_cnt, 245 246 smp->initial_path, 246 247 smp->return_path, ··· 253 254 * Adjust information for a received SMP 254 255 * Return IB_SMI_DISCARD if the SMP should be dropped 255 256 */ 256 - enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, u8 node_type, 257 + enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, bool is_switch, 257 258 int port_num, int phys_port_cnt) 258 259 { 259 - return __smi_handle_dr_smp_recv(node_type, port_num, phys_port_cnt, 260 + return __smi_handle_dr_smp_recv(is_switch, port_num, phys_port_cnt, 260 261 &smp->hop_ptr, smp->hop_cnt, 261 262 smp->route.dr.initial_path, 262 263 smp->route.dr.return_path,
+2 -2
drivers/infiniband/core/smi.h
··· 51 51 IB_SMI_FORWARD /* SMP should be forwarded (for switches only) */ 52 52 }; 53 53 54 - enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, u8 node_type, 54 + enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, bool is_switch, 55 55 int port_num, int phys_port_cnt); 56 56 int smi_get_fwd_port(struct ib_smp *smp); 57 57 extern enum smi_forward_action smi_check_forward_dr_smp(struct ib_smp *smp); 58 58 extern enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp, 59 - u8 node_type, int port_num); 59 + bool is_switch, int port_num); 60 60 61 61 /* 62 62 * Return IB_SMI_HANDLE if the SMP should be handled by the local SMA/SM
+1 -1
drivers/infiniband/core/sysfs.c
··· 870 870 goto err_put; 871 871 } 872 872 873 - if (device->node_type == RDMA_NODE_IB_SWITCH) { 873 + if (rdma_cap_ib_switch(device)) { 874 874 ret = add_port(device, 0, port_callback); 875 875 if (ret) 876 876 goto err_put;
+2 -2
drivers/infiniband/core/ucm.c
··· 1193 1193 return 0; 1194 1194 } 1195 1195 1196 + static DECLARE_BITMAP(overflow_map, IB_UCM_MAX_DEVICES); 1196 1197 static void ib_ucm_release_dev(struct device *dev) 1197 1198 { 1198 1199 struct ib_ucm_device *ucm_dev; ··· 1203 1202 if (ucm_dev->devnum < IB_UCM_MAX_DEVICES) 1204 1203 clear_bit(ucm_dev->devnum, dev_map); 1205 1204 else 1206 - clear_bit(ucm_dev->devnum - IB_UCM_MAX_DEVICES, dev_map); 1205 + clear_bit(ucm_dev->devnum - IB_UCM_MAX_DEVICES, overflow_map); 1207 1206 kfree(ucm_dev); 1208 1207 } 1209 1208 ··· 1227 1226 static DEVICE_ATTR(ibdev, S_IRUGO, show_ibdev, NULL); 1228 1227 1229 1228 static dev_t overflow_maj; 1230 - static DECLARE_BITMAP(overflow_map, IB_UCM_MAX_DEVICES); 1231 1229 static int find_overflow_devnum(void) 1232 1230 { 1233 1231 int ret;
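The ucm.c hunk fixes a release path that cleared a high device number's bit in `dev_map` instead of `overflow_map`, silently leaking overflow slots. A sketch of the two-bitmap scheme the fix restores (names and sizes here are hypothetical, using plain bool arrays in place of the kernel's DECLARE_BITMAP):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_DEVS 32

static bool dev_map[MAX_DEVS];      /* devnums 0 .. MAX_DEVS-1           */
static bool overflow_map[MAX_DEVS]; /* devnums MAX_DEVS .. 2*MAX_DEVS-1  */

static void claim(int devnum)
{
	if (devnum < MAX_DEVS)
		dev_map[devnum] = true;
	else
		overflow_map[devnum - MAX_DEVS] = true;
}

/* the fix: release must clear the bit in the *matching* bitmap;
 * clearing dev_map for an overflow devnum leaked the overflow slot */
static void release(int devnum)
{
	if (devnum < MAX_DEVS)
		dev_map[devnum] = false;
	else
		overflow_map[devnum - MAX_DEVS] = false;
}

static bool claimed(int devnum)
{
	return devnum < MAX_DEVS ? dev_map[devnum]
				 : overflow_map[devnum - MAX_DEVS];
}
```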
+3 -2
drivers/infiniband/core/ucma.c
··· 1354 1354 /* Acquire mutex's based on pointer comparison to prevent deadlock. */ 1355 1355 if (file1 < file2) { 1356 1356 mutex_lock(&file1->mut); 1357 - mutex_lock(&file2->mut); 1357 + mutex_lock_nested(&file2->mut, SINGLE_DEPTH_NESTING); 1358 1358 } else { 1359 1359 mutex_lock(&file2->mut); 1360 - mutex_lock(&file1->mut); 1360 + mutex_lock_nested(&file1->mut, SINGLE_DEPTH_NESTING); 1361 1361 } 1362 1362 } 1363 1363 ··· 1616 1616 device_remove_file(ucma_misc.this_device, &dev_attr_abi_version); 1617 1617 misc_deregister(&ucma_misc); 1618 1618 idr_destroy(&ctx_idr); 1619 + idr_destroy(&multicast_idr); 1619 1620 } 1620 1621 1621 1622 module_init(ucma_init);
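The ucma.c hunk keeps the existing deadlock-avoidance scheme, acquiring the two file mutexes in pointer order, and adds `mutex_lock_nested(..., SINGLE_DEPTH_NESTING)` on the second acquisition so lockdep does not flag the same lock class being taken twice. The ordering decision itself is what prevents deadlock: any two threads locking the same pair agree on which comes first. A sketch of just that decision (note: comparing unrelated pointers with `<` is the kernel's own convention here):

```c
#include <assert.h>

/* pick a consistent global order for a pair of locks by address,
 * so two threads locking the same pair can never deadlock */
static void order_pair(void *x, void *y, void **first, void **second)
{
	if (x < y) {
		*first = x;
		*second = y;	/* kernel takes this one with
				 * mutex_lock_nested(SINGLE_DEPTH_NESTING) */
	} else {
		*first = y;
		*second = x;
	}
}
```

Both call orders yield the same acquisition sequence, which is the invariant lockdep's nesting annotation certifies rather than replaces.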
+3 -2
drivers/infiniband/hw/ehca/ehca_sqp.c
··· 226 226 const struct ib_mad *in_mad = (const struct ib_mad *)in; 227 227 struct ib_mad *out_mad = (struct ib_mad *)out; 228 228 229 - BUG_ON(in_mad_size != sizeof(*in_mad) || 230 - *out_mad_size != sizeof(*out_mad)); 229 + if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) || 230 + *out_mad_size != sizeof(*out_mad))) 231 + return IB_MAD_RESULT_FAILURE; 231 232 232 233 if (!port_num || port_num > ibdev->phys_port_cnt || !in_wc) 233 234 return IB_MAD_RESULT_FAILURE;
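This hunk, and the matching ones in ipath, mlx4, mlx5, mthca, ocrdma, and qib below, replace `BUG_ON` with a warn-and-fail guard: a malformed MAD buffer size now fails that one request instead of panicking the machine. A sketch of the guard's shape (the size and return values here are illustrative, not the real `IB_MAD_RESULT_*` constants):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define MAD_SIZE	256	/* illustrative fixed MAD buffer size */
#define RESULT_FAILURE	(-1)
#define RESULT_SUCCESS	0

/* warn once and fail the request, rather than BUG()ing the kernel */
static int process_mad(size_t in_size, size_t out_size)
{
	static int warned;

	if (in_size != MAD_SIZE || out_size != MAD_SIZE) {
		if (!warned++)
			fprintf(stderr, "bad MAD size %zu/%zu\n",
				in_size, out_size);
		return RESULT_FAILURE;
	}
	return RESULT_SUCCESS;
}
```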
+3 -2
drivers/infiniband/hw/ipath/ipath_mad.c
··· 1499 1499 const struct ib_mad *in_mad = (const struct ib_mad *)in; 1500 1500 struct ib_mad *out_mad = (struct ib_mad *)out; 1501 1501 1502 - BUG_ON(in_mad_size != sizeof(*in_mad) || 1503 - *out_mad_size != sizeof(*out_mad)); 1502 + if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) || 1503 + *out_mad_size != sizeof(*out_mad))) 1504 + return IB_MAD_RESULT_FAILURE; 1504 1505 1505 1506 switch (in_mad->mad_hdr.mgmt_class) { 1506 1507 case IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE:
+2 -2
drivers/infiniband/hw/ipath/ipath_verbs.c
··· 2044 2044 2045 2045 spin_lock_init(&idev->qp_table.lock); 2046 2046 spin_lock_init(&idev->lk_table.lock); 2047 - idev->sm_lid = __constant_be16_to_cpu(IB_LID_PERMISSIVE); 2047 + idev->sm_lid = be16_to_cpu(IB_LID_PERMISSIVE); 2048 2048 /* Set the prefix to the default value (see ch. 4.1.1) */ 2049 - idev->gid_prefix = __constant_cpu_to_be64(0xfe80000000000000ULL); 2049 + idev->gid_prefix = cpu_to_be64(0xfe80000000000000ULL); 2050 2050 2051 2051 ret = ipath_init_qp_table(idev, ib_ipath_qp_table_size); 2052 2052 if (ret)
+22 -12
drivers/infiniband/hw/mlx4/mad.c
··· 860 860 struct mlx4_ib_dev *dev = to_mdev(ibdev); 861 861 const struct ib_mad *in_mad = (const struct ib_mad *)in; 862 862 struct ib_mad *out_mad = (struct ib_mad *)out; 863 + enum rdma_link_layer link = rdma_port_get_link_layer(ibdev, port_num); 863 864 864 - BUG_ON(in_mad_size != sizeof(*in_mad) || 865 - *out_mad_size != sizeof(*out_mad)); 865 + if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) || 866 + *out_mad_size != sizeof(*out_mad))) 867 + return IB_MAD_RESULT_FAILURE; 866 868 867 - switch (rdma_port_get_link_layer(ibdev, port_num)) { 868 - case IB_LINK_LAYER_INFINIBAND: 869 - if (!mlx4_is_slave(dev->dev)) 870 - return ib_process_mad(ibdev, mad_flags, port_num, in_wc, 871 - in_grh, in_mad, out_mad); 872 - case IB_LINK_LAYER_ETHERNET: 873 - return iboe_process_mad(ibdev, mad_flags, port_num, in_wc, 874 - in_grh, in_mad, out_mad); 875 - default: 876 - return -EINVAL; 869 + /* iboe_process_mad() which uses the HCA flow-counters to implement IB PMA 870 + * queries, should be called only by VFs and for that specific purpose 871 + */ 872 + if (link == IB_LINK_LAYER_INFINIBAND) { 873 + if (mlx4_is_slave(dev->dev) && 874 + in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT && 875 + in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS) 876 + return iboe_process_mad(ibdev, mad_flags, port_num, in_wc, 877 + in_grh, in_mad, out_mad); 878 + 879 + return ib_process_mad(ibdev, mad_flags, port_num, in_wc, 880 + in_grh, in_mad, out_mad); 877 881 } 882 + 883 + if (link == IB_LINK_LAYER_ETHERNET) 884 + return iboe_process_mad(ibdev, mad_flags, port_num, in_wc, 885 + in_grh, in_mad, out_mad); 886 + 887 + return -EINVAL; 878 888 } 879 889 880 890 static void send_handler(struct ib_mad_agent *agent,
+18 -15
drivers/infiniband/hw/mlx4/main.c
··· 253 253 props->hca_core_clock = dev->dev->caps.hca_core_clock * 1000UL; 254 254 props->timestamp_mask = 0xFFFFFFFFFFFFULL; 255 255 256 - err = mlx4_get_internal_clock_params(dev->dev, &clock_params); 257 - if (err) 258 - goto out; 256 + if (!mlx4_is_slave(dev->dev)) 257 + err = mlx4_get_internal_clock_params(dev->dev, &clock_params); 259 258 260 259 if (uhw->outlen >= resp.response_length + sizeof(resp.hca_core_clock_offset)) { 261 - resp.hca_core_clock_offset = clock_params.offset % PAGE_SIZE; 262 260 resp.response_length += sizeof(resp.hca_core_clock_offset); 263 - resp.comp_mask |= QUERY_DEVICE_RESP_MASK_TIMESTAMP; 261 + if (!err && !mlx4_is_slave(dev->dev)) { 262 + resp.comp_mask |= QUERY_DEVICE_RESP_MASK_TIMESTAMP; 263 + resp.hca_core_clock_offset = clock_params.offset % PAGE_SIZE; 264 + } 264 265 } 265 266 266 267 if (uhw->outlen) { ··· 2670 2669 dm = kcalloc(ports, sizeof(*dm), GFP_ATOMIC); 2671 2670 if (!dm) { 2672 2671 pr_err("failed to allocate memory for tunneling qp update\n"); 2673 - goto out; 2672 + return; 2674 2673 } 2675 2674 2676 2675 for (i = 0; i < ports; i++) { 2677 2676 dm[i] = kmalloc(sizeof (struct mlx4_ib_demux_work), GFP_ATOMIC); 2678 2677 if (!dm[i]) { 2679 2678 pr_err("failed to allocate memory for tunneling qp update work struct\n"); 2680 - for (i = 0; i < dev->caps.num_ports; i++) { 2681 - if (dm[i]) 2682 - kfree(dm[i]); 2683 - } 2679 + while (--i >= 0) 2680 + kfree(dm[i]); 2684 2681 goto out; 2685 2682 } 2686 - } 2687 - /* initialize or tear down tunnel QPs for the slave */ 2688 - for (i = 0; i < ports; i++) { 2689 2683 INIT_WORK(&dm[i]->work, mlx4_ib_tunnels_update_work); 2690 2684 dm[i]->port = first_port + i + 1; 2691 2685 dm[i]->slave = slave; 2692 2686 dm[i]->do_init = do_init; 2693 2687 dm[i]->dev = ibdev; 2694 - spin_lock_irqsave(&ibdev->sriov.going_down_lock, flags); 2695 - if (!ibdev->sriov.is_going_down) 2688 + } 2689 + /* initialize or tear down tunnel QPs for the slave */ 2690 + 
spin_lock_irqsave(&ibdev->sriov.going_down_lock, flags); 2691 + if (!ibdev->sriov.is_going_down) { 2692 + for (i = 0; i < ports; i++) 2696 2693 queue_work(ibdev->sriov.demux[i].ud_wq, &dm[i]->work); 2697 2694 spin_unlock_irqrestore(&ibdev->sriov.going_down_lock, flags); 2695 + } else { 2696 + spin_unlock_irqrestore(&ibdev->sriov.going_down_lock, flags); 2697 + for (i = 0; i < ports; i++) 2698 + kfree(dm[i]); 2698 2699 } 2699 2700 out: 2700 2701 kfree(dm);
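The mlx4/main.c error path above adopts the standard partial-allocation unwind: on a mid-loop allocation failure, `while (--i >= 0) kfree(dm[i])` frees exactly the entries already allocated, instead of the old loop that re-scanned the whole array (and clobbered `i`). A sketch of that pattern with userspace malloc/free (the forced-failure index is a test hook, not part of the idiom):

```c
#include <assert.h>
#include <stdlib.h>

/* allocate n slots; on failure at slot i, unwind slots 0..i-1 only */
static int alloc_all(void **slots, int n, int fail_at)
{
	int i;

	for (i = 0; i < n; i++) {
		slots[i] = (i == fail_at) ? NULL : malloc(16);
		if (!slots[i]) {
			while (--i >= 0) {	/* the fixed unwind */
				free(slots[i]);
				slots[i] = NULL;
			}
			return -1;
		}
	}
	return 0;
}
```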
+3 -2
drivers/infiniband/hw/mlx5/mad.c
··· 68 68 const struct ib_mad *in_mad = (const struct ib_mad *)in; 69 69 struct ib_mad *out_mad = (struct ib_mad *)out; 70 70 71 - BUG_ON(in_mad_size != sizeof(*in_mad) || 72 - *out_mad_size != sizeof(*out_mad)); 71 + if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) || 72 + *out_mad_size != sizeof(*out_mad))) 73 + return IB_MAD_RESULT_FAILURE; 73 74 74 75 slid = in_wc ? in_wc->slid : be16_to_cpu(IB_LID_PERMISSIVE); 75 76
+3 -2
drivers/infiniband/hw/mthca/mthca_mad.c
··· 209 209 const struct ib_mad *in_mad = (const struct ib_mad *)in; 210 210 struct ib_mad *out_mad = (struct ib_mad *)out; 211 211 212 - BUG_ON(in_mad_size != sizeof(*in_mad) || 213 - *out_mad_size != sizeof(*out_mad)); 212 + if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) || 213 + *out_mad_size != sizeof(*out_mad))) 214 + return IB_MAD_RESULT_FAILURE; 214 215 215 216 /* Forward locally generated traps to the SM */ 216 217 if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP &&
+3 -2
drivers/infiniband/hw/nes/nes_cm.c
··· 1520 1520 int rc = arpindex; 1521 1521 struct net_device *netdev; 1522 1522 struct nes_adapter *nesadapter = nesvnic->nesdev->nesadapter; 1523 + __be32 dst_ipaddr = htonl(dst_ip); 1523 1524 1524 - rt = ip_route_output(&init_net, htonl(dst_ip), 0, 0, 0); 1525 + rt = ip_route_output(&init_net, dst_ipaddr, nesvnic->local_ipaddr, 0, 0); 1525 1526 if (IS_ERR(rt)) { 1526 1527 printk(KERN_ERR "%s: ip_route_output_key failed for 0x%08X\n", 1527 1528 __func__, dst_ip); ··· 1534 1533 else 1535 1534 netdev = nesvnic->netdev; 1536 1535 1537 - neigh = neigh_lookup(&arp_tbl, &rt->rt_gateway, netdev); 1536 + neigh = dst_neigh_lookup(&rt->dst, &dst_ipaddr); 1538 1537 1539 1538 rcu_read_lock(); 1540 1539 if (neigh) {
+1 -1
drivers/infiniband/hw/nes/nes_hw.c
··· 3861 3861 (((u32)mac_addr[2]) << 24) | (((u32)mac_addr[3]) << 16) | 3862 3862 (((u32)mac_addr[4]) << 8) | (u32)mac_addr[5]); 3863 3863 cqp_wqe->wqe_words[NES_CQP_ARP_WQE_MAC_HIGH_IDX] = cpu_to_le32( 3864 - (((u32)mac_addr[0]) << 16) | (u32)mac_addr[1]); 3864 + (((u32)mac_addr[0]) << 8) | (u32)mac_addr[1]); 3865 3865 } else { 3866 3866 cqp_wqe->wqe_words[NES_CQP_ARP_WQE_MAC_ADDR_LOW_IDX] = 0; 3867 3867 cqp_wqe->wqe_words[NES_CQP_ARP_WQE_MAC_HIGH_IDX] = 0;
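The nes_hw.c hunk corrects the packing of the first two MAC octets into the high CQP word: `mac_addr[0]` belongs at bit 8, not bit 16, so the two octets sit adjacent just as the remaining four do in the low word. A sketch of the fixed packing (function names invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* low word carries octets 2..5, big-endian style */
static uint32_t mac_low(const uint8_t mac[6])
{
	return ((uint32_t)mac[2] << 24) | ((uint32_t)mac[3] << 16) |
	       ((uint32_t)mac[4] << 8)  |  (uint32_t)mac[5];
}

/* high word carries octets 0..1; the fix shifts mac[0] by 8, not 16 */
static uint32_t mac_high(const uint8_t mac[6])
{
	return ((uint32_t)mac[0] << 8) | (uint32_t)mac[1];
}
```

With the old `<< 16` shift, a MAC like 02:1b:... produced 0x02001b in the high word, leaving a zero byte wedged between the two octets.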
+3 -2
drivers/infiniband/hw/ocrdma/ocrdma_ah.c
··· 215 215 const struct ib_mad *in_mad = (const struct ib_mad *)in; 216 216 struct ib_mad *out_mad = (struct ib_mad *)out; 217 217 218 - BUG_ON(in_mad_size != sizeof(*in_mad) || 219 - *out_mad_size != sizeof(*out_mad)); 218 + if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) || 219 + *out_mad_size != sizeof(*out_mad))) 220 + return IB_MAD_RESULT_FAILURE; 220 221 221 222 switch (in_mad->mad_hdr.mgmt_class) { 222 223 case IB_MGMT_CLASS_PERF_MGMT:
+1
drivers/infiniband/hw/ocrdma/ocrdma_main.c
··· 696 696 ocrdma_unregister_inet6addr_notifier(); 697 697 ocrdma_unregister_inetaddr_notifier(); 698 698 ocrdma_rem_debugfs(); 699 + idr_destroy(&ocrdma_dev_id); 699 700 } 700 701 701 702 module_init(ocrdma_init_module);
+3 -2
drivers/infiniband/hw/qib/qib_mad.c
··· 2412 2412 const struct ib_mad *in_mad = (const struct ib_mad *)in; 2413 2413 struct ib_mad *out_mad = (struct ib_mad *)out; 2414 2414 2415 - BUG_ON(in_mad_size != sizeof(*in_mad) || 2416 - *out_mad_size != sizeof(*out_mad)); 2415 + if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) || 2416 + *out_mad_size != sizeof(*out_mad))) 2417 + return IB_MAD_RESULT_FAILURE; 2417 2418 2418 2419 switch (in_mad->mad_hdr.mgmt_class) { 2419 2420 case IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE:
+28 -1
drivers/infiniband/ulp/ipoib/ipoib.h
··· 239 239 struct net_device *dev; 240 240 struct ipoib_neigh *neigh; 241 241 struct ipoib_path *path; 242 - struct ipoib_cm_tx_buf *tx_ring; 242 + struct ipoib_tx_buf *tx_ring; 243 243 unsigned tx_head; 244 244 unsigned tx_tail; 245 245 unsigned long flags; ··· 503 503 504 504 void ipoib_mcast_dev_down(struct net_device *dev); 505 505 void ipoib_mcast_dev_flush(struct net_device *dev); 506 + 507 + int ipoib_dma_map_tx(struct ib_device *ca, struct ipoib_tx_buf *tx_req); 508 + void ipoib_dma_unmap_tx(struct ipoib_dev_priv *priv, 509 + struct ipoib_tx_buf *tx_req); 510 + 511 + static inline void ipoib_build_sge(struct ipoib_dev_priv *priv, 512 + struct ipoib_tx_buf *tx_req) 513 + { 514 + int i, off; 515 + struct sk_buff *skb = tx_req->skb; 516 + skb_frag_t *frags = skb_shinfo(skb)->frags; 517 + int nr_frags = skb_shinfo(skb)->nr_frags; 518 + u64 *mapping = tx_req->mapping; 519 + 520 + if (skb_headlen(skb)) { 521 + priv->tx_sge[0].addr = mapping[0]; 522 + priv->tx_sge[0].length = skb_headlen(skb); 523 + off = 1; 524 + } else 525 + off = 0; 526 + 527 + for (i = 0; i < nr_frags; ++i) { 528 + priv->tx_sge[i + off].addr = mapping[i + off]; 529 + priv->tx_sge[i + off].length = skb_frag_size(&frags[i]); 530 + } 531 + priv->tx_wr.num_sge = nr_frags + off; 532 + } 506 533 507 534 #ifdef CONFIG_INFINIBAND_IPOIB_DEBUG 508 535 struct ipoib_mcast_iter *ipoib_mcast_iter_init(struct net_device *dev);
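The new `ipoib_build_sge()` inline above walks the skb's linear head (if any) plus its page frags into the gather list, so the connected-mode path can share it and post scatter/gather sends. A userspace sketch of the same walk (struct and parameter names here are invented; the kernel version reads them from the skb and `tx_req->mapping`):

```c
#include <assert.h>
#include <stdint.h>

struct sge { uint64_t addr; uint32_t length; };

/* build a gather list: optional linear head first, then each frag;
 * returns num_sge, mirroring what the inline stores in tx_wr.num_sge */
static int build_sge(struct sge *sge, const uint64_t *mapping,
		     uint32_t headlen, const uint32_t *frag_len, int nr_frags)
{
	int i, off;

	if (headlen) {
		sge[0].addr = mapping[0];
		sge[0].length = headlen;
		off = 1;
	} else {
		off = 0;
	}
	for (i = 0; i < nr_frags; ++i) {
		sge[i + off].addr = mapping[i + off];
		sge[i + off].length = frag_len[i];
	}
	return nr_frags + off;
}
```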
+14 -19
drivers/infiniband/ulp/ipoib/ipoib_cm.c
··· 694 694 static inline int post_send(struct ipoib_dev_priv *priv, 695 695 struct ipoib_cm_tx *tx, 696 696 unsigned int wr_id, 697 - u64 addr, int len) 697 + struct ipoib_tx_buf *tx_req) 698 698 { 699 699 struct ib_send_wr *bad_wr; 700 700 701 - priv->tx_sge[0].addr = addr; 702 - priv->tx_sge[0].length = len; 701 + ipoib_build_sge(priv, tx_req); 703 702 704 - priv->tx_wr.num_sge = 1; 705 703 priv->tx_wr.wr_id = wr_id | IPOIB_OP_CM; 706 704 707 705 return ib_post_send(tx->qp, &priv->tx_wr, &bad_wr); ··· 708 710 void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_tx *tx) 709 711 { 710 712 struct ipoib_dev_priv *priv = netdev_priv(dev); 711 - struct ipoib_cm_tx_buf *tx_req; 712 - u64 addr; 713 + struct ipoib_tx_buf *tx_req; 713 714 int rc; 714 715 715 716 if (unlikely(skb->len > tx->mtu)) { ··· 732 735 */ 733 736 tx_req = &tx->tx_ring[tx->tx_head & (ipoib_sendq_size - 1)]; 734 737 tx_req->skb = skb; 735 - addr = ib_dma_map_single(priv->ca, skb->data, skb->len, DMA_TO_DEVICE); 736 - if (unlikely(ib_dma_mapping_error(priv->ca, addr))) { 738 + 739 + if (unlikely(ipoib_dma_map_tx(priv->ca, tx_req))) { 737 740 ++dev->stats.tx_errors; 738 741 dev_kfree_skb_any(skb); 739 742 return; 740 743 } 741 744 742 - tx_req->mapping = addr; 743 - 744 745 skb_orphan(skb); 745 746 skb_dst_drop(skb); 746 747 747 - rc = post_send(priv, tx, tx->tx_head & (ipoib_sendq_size - 1), 748 - addr, skb->len); 748 + rc = post_send(priv, tx, tx->tx_head & (ipoib_sendq_size - 1), tx_req); 749 749 if (unlikely(rc)) { 750 750 ipoib_warn(priv, "post_send failed, error %d\n", rc); 751 751 ++dev->stats.tx_errors; 752 - ib_dma_unmap_single(priv->ca, addr, skb->len, DMA_TO_DEVICE); 752 + ipoib_dma_unmap_tx(priv, tx_req); 753 753 dev_kfree_skb_any(skb); 754 754 } else { 755 755 dev->trans_start = jiffies; ··· 771 777 struct ipoib_dev_priv *priv = netdev_priv(dev); 772 778 struct ipoib_cm_tx *tx = wc->qp->qp_context; 773 779 unsigned int wr_id = wc->wr_id & ~IPOIB_OP_CM; 774 - 
struct ipoib_cm_tx_buf *tx_req; 780 + struct ipoib_tx_buf *tx_req; 775 781 unsigned long flags; 776 782 777 783 ipoib_dbg_data(priv, "cm send completion: id %d, status: %d\n", ··· 785 791 786 792 tx_req = &tx->tx_ring[wr_id]; 787 793 788 - ib_dma_unmap_single(priv->ca, tx_req->mapping, tx_req->skb->len, DMA_TO_DEVICE); 794 + ipoib_dma_unmap_tx(priv, tx_req); 789 795 790 796 /* FIXME: is this right? Shouldn't we only increment on success? */ 791 797 ++dev->stats.tx_packets; ··· 1030 1036 1031 1037 struct ib_qp *tx_qp; 1032 1038 1039 + if (dev->features & NETIF_F_SG) 1040 + attr.cap.max_send_sge = MAX_SKB_FRAGS + 1; 1041 + 1033 1042 tx_qp = ib_create_qp(priv->pd, &attr); 1034 1043 if (PTR_ERR(tx_qp) == -EINVAL) { 1035 1044 ipoib_warn(priv, "can't use GFP_NOIO for QPs on device %s, using GFP_KERNEL\n", ··· 1167 1170 static void ipoib_cm_tx_destroy(struct ipoib_cm_tx *p) 1168 1171 { 1169 1172 struct ipoib_dev_priv *priv = netdev_priv(p->dev); 1170 - struct ipoib_cm_tx_buf *tx_req; 1173 + struct ipoib_tx_buf *tx_req; 1171 1174 unsigned long begin; 1172 1175 1173 1176 ipoib_dbg(priv, "Destroy active connection 0x%x head 0x%x tail 0x%x\n", ··· 1194 1197 1195 1198 while ((int) p->tx_tail - (int) p->tx_head < 0) { 1196 1199 tx_req = &p->tx_ring[p->tx_tail & (ipoib_sendq_size - 1)]; 1197 - ib_dma_unmap_single(priv->ca, tx_req->mapping, tx_req->skb->len, 1198 - DMA_TO_DEVICE); 1200 + ipoib_dma_unmap_tx(priv, tx_req); 1199 1201 dev_kfree_skb_any(tx_req->skb); 1200 1202 ++p->tx_tail; 1201 1203 netif_tx_lock_bh(p->dev); ··· 1450 1454 &priv->cm.stale_task, IPOIB_CM_RX_DELAY); 1451 1455 spin_unlock_irq(&priv->lock); 1452 1456 } 1453 - 1454 1457 1455 1458 static ssize_t show_mode(struct device *d, struct device_attribute *attr, 1456 1459 char *buf)
+18 -31
drivers/infiniband/ulp/ipoib/ipoib_ib.c
··· 263 263 "for buf %d\n", wr_id); 264 264 } 265 265 266 - static int ipoib_dma_map_tx(struct ib_device *ca, 267 - struct ipoib_tx_buf *tx_req) 266 + int ipoib_dma_map_tx(struct ib_device *ca, struct ipoib_tx_buf *tx_req) 268 267 { 269 268 struct sk_buff *skb = tx_req->skb; 270 269 u64 *mapping = tx_req->mapping; ··· 304 305 return -EIO; 305 306 } 306 307 307 - static void ipoib_dma_unmap_tx(struct ib_device *ca, 308 - struct ipoib_tx_buf *tx_req) 308 + void ipoib_dma_unmap_tx(struct ipoib_dev_priv *priv, 309 + struct ipoib_tx_buf *tx_req) 309 310 { 310 311 struct sk_buff *skb = tx_req->skb; 311 312 u64 *mapping = tx_req->mapping; ··· 313 314 int off; 314 315 315 316 if (skb_headlen(skb)) { 316 - ib_dma_unmap_single(ca, mapping[0], skb_headlen(skb), DMA_TO_DEVICE); 317 + ib_dma_unmap_single(priv->ca, mapping[0], skb_headlen(skb), 318 + DMA_TO_DEVICE); 317 319 off = 1; 318 320 } else 319 321 off = 0; ··· 322 322 for (i = 0; i < skb_shinfo(skb)->nr_frags; ++i) { 323 323 const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; 324 324 325 - ib_dma_unmap_page(ca, mapping[i + off], skb_frag_size(frag), 326 - DMA_TO_DEVICE); 325 + ib_dma_unmap_page(priv->ca, mapping[i + off], 326 + skb_frag_size(frag), DMA_TO_DEVICE); 327 327 } 328 328 } 329 329 ··· 389 389 390 390 tx_req = &priv->tx_ring[wr_id]; 391 391 392 - ipoib_dma_unmap_tx(priv->ca, tx_req); 392 + ipoib_dma_unmap_tx(priv, tx_req); 393 393 394 394 ++dev->stats.tx_packets; 395 395 dev->stats.tx_bytes += tx_req->skb->len; ··· 514 514 void *head, int hlen) 515 515 { 516 516 struct ib_send_wr *bad_wr; 517 - int i, off; 518 517 struct sk_buff *skb = tx_req->skb; 519 - skb_frag_t *frags = skb_shinfo(skb)->frags; 520 - int nr_frags = skb_shinfo(skb)->nr_frags; 521 - u64 *mapping = tx_req->mapping; 522 518 523 - if (skb_headlen(skb)) { 524 - priv->tx_sge[0].addr = mapping[0]; 525 - priv->tx_sge[0].length = skb_headlen(skb); 526 - off = 1; 527 - } else 528 - off = 0; 519 + ipoib_build_sge(priv, tx_req); 529 520 530 - for (i = 
0; i < nr_frags; ++i) { 531 - priv->tx_sge[i + off].addr = mapping[i + off]; 532 - priv->tx_sge[i + off].length = skb_frag_size(&frags[i]); 533 - } 534 - priv->tx_wr.num_sge = nr_frags + off; 535 521 priv->tx_wr.wr_id = wr_id; 536 522 priv->tx_wr.wr.ud.remote_qpn = qpn; 537 523 priv->tx_wr.wr.ud.ah = address; ··· 603 617 ipoib_warn(priv, "post_send failed, error %d\n", rc); 604 618 ++dev->stats.tx_errors; 605 619 --priv->tx_outstanding; 606 - ipoib_dma_unmap_tx(priv->ca, tx_req); 620 + ipoib_dma_unmap_tx(priv, tx_req); 607 621 dev_kfree_skb_any(skb); 608 622 if (netif_queue_stopped(dev)) 609 623 netif_wake_queue(dev); ··· 854 868 while ((int) priv->tx_tail - (int) priv->tx_head < 0) { 855 869 tx_req = &priv->tx_ring[priv->tx_tail & 856 870 (ipoib_sendq_size - 1)]; 857 - ipoib_dma_unmap_tx(priv->ca, tx_req); 871 + ipoib_dma_unmap_tx(priv, tx_req); 858 872 dev_kfree_skb_any(tx_req->skb); 859 873 ++priv->tx_tail; 860 874 --priv->tx_outstanding; ··· 971 985 } 972 986 973 987 static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv, 974 - enum ipoib_flush_level level) 988 + enum ipoib_flush_level level, 989 + int nesting) 975 990 { 976 991 struct ipoib_dev_priv *cpriv; 977 992 struct net_device *dev = priv->dev; 978 993 int result; 979 994 980 - down_read(&priv->vlan_rwsem); 995 + down_read_nested(&priv->vlan_rwsem, nesting); 981 996 982 997 /* 983 998 * Flush any child interfaces too -- they might be up even if 984 999 * the parent is down. 
985 1000 */ 986 1001 list_for_each_entry(cpriv, &priv->child_intfs, list) 987 - __ipoib_ib_dev_flush(cpriv, level); 1002 + __ipoib_ib_dev_flush(cpriv, level, nesting + 1); 988 1003 989 1004 up_read(&priv->vlan_rwsem); 990 1005 ··· 1063 1076 struct ipoib_dev_priv *priv = 1064 1077 container_of(work, struct ipoib_dev_priv, flush_light); 1065 1078 1066 - __ipoib_ib_dev_flush(priv, IPOIB_FLUSH_LIGHT); 1079 + __ipoib_ib_dev_flush(priv, IPOIB_FLUSH_LIGHT, 0); 1067 1080 } 1068 1081 1069 1082 void ipoib_ib_dev_flush_normal(struct work_struct *work) ··· 1071 1084 struct ipoib_dev_priv *priv = 1072 1085 container_of(work, struct ipoib_dev_priv, flush_normal); 1073 1086 1074 - __ipoib_ib_dev_flush(priv, IPOIB_FLUSH_NORMAL); 1087 + __ipoib_ib_dev_flush(priv, IPOIB_FLUSH_NORMAL, 0); 1075 1088 } 1076 1089 1077 1090 void ipoib_ib_dev_flush_heavy(struct work_struct *work) ··· 1079 1092 struct ipoib_dev_priv *priv = 1080 1093 container_of(work, struct ipoib_dev_priv, flush_heavy); 1081 1094 1082 - __ipoib_ib_dev_flush(priv, IPOIB_FLUSH_HEAVY); 1095 + __ipoib_ib_dev_flush(priv, IPOIB_FLUSH_HEAVY, 0); 1083 1096 } 1084 1097 1085 1098 void ipoib_ib_dev_cleanup(struct net_device *dev)
+8 -13
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 190 190 struct ipoib_dev_priv *priv = netdev_priv(dev); 191 191 192 192 if (test_bit(IPOIB_FLAG_ADMIN_CM, &priv->flags)) 193 - features &= ~(NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO); 193 + features &= ~(NETIF_F_IP_CSUM | NETIF_F_TSO); 194 194 195 195 return features; 196 196 } ··· 232 232 ipoib_warn(priv, "enabling connected mode " 233 233 "will cause multicast packet drops\n"); 234 234 netdev_update_features(dev); 235 + dev_set_mtu(dev, ipoib_cm_max_mtu(dev)); 235 236 rtnl_unlock(); 236 237 priv->tx_wr.send_flags &= ~IB_SEND_IP_CSUM; 237 238 ··· 1578 1577 SET_NETDEV_DEV(priv->dev, hca->dma_device); 1579 1578 priv->dev->dev_id = port - 1; 1580 1579 1581 - if (!ib_query_port(hca, port, &attr)) 1580 + result = ib_query_port(hca, port, &attr); 1581 + if (!result) 1582 1582 priv->max_ib_mtu = ib_mtu_enum_to_int(attr.max_mtu); 1583 1583 else { 1584 1584 printk(KERN_WARNING "%s: ib_query_port %d failed\n", ··· 1600 1598 goto device_init_failed; 1601 1599 } 1602 1600 1603 - if (ipoib_set_dev_features(priv, hca)) 1601 + result = ipoib_set_dev_features(priv, hca); 1602 + if (result) 1604 1603 goto device_init_failed; 1605 1604 1606 1605 /* ··· 1687 1684 struct list_head *dev_list; 1688 1685 struct net_device *dev; 1689 1686 struct ipoib_dev_priv *priv; 1690 - int s, e, p; 1687 + int p; 1691 1688 int count = 0; 1692 1689 1693 1690 dev_list = kmalloc(sizeof *dev_list, GFP_KERNEL); ··· 1696 1693 1697 1694 INIT_LIST_HEAD(dev_list); 1698 1695 1699 - if (device->node_type == RDMA_NODE_IB_SWITCH) { 1700 - s = 0; 1701 - e = 0; 1702 - } else { 1703 - s = 1; 1704 - e = device->phys_port_cnt; 1705 - } 1706 - 1707 - for (p = s; p <= e; ++p) { 1696 + for (p = rdma_start_port(device); p <= rdma_end_port(device); ++p) { 1708 1697 if (!rdma_protocol_ib(device, p)) 1709 1698 continue; 1710 1699 dev = ipoib_add_port("ib%d", device, p);
+6 -17
drivers/infiniband/ulp/srp/ib_srp.c
··· 161 161 { 162 162 int tmo, res; 163 163 164 - if (strncmp(val, "off", 3) != 0) { 165 - res = kstrtoint(val, 0, &tmo); 166 - if (res) 167 - goto out; 168 - } else { 169 - tmo = -1; 170 - } 164 + res = srp_parse_tmo(&tmo, val); 165 + if (res) 166 + goto out; 167 + 171 168 if (kp->arg == &srp_reconnect_delay) 172 169 res = srp_tmo_valid(tmo, srp_fast_io_fail_tmo, 173 170 srp_dev_loss_tmo); ··· 3376 3379 struct srp_device *srp_dev; 3377 3380 struct ib_device_attr *dev_attr; 3378 3381 struct srp_host *host; 3379 - int mr_page_shift, s, e, p; 3382 + int mr_page_shift, p; 3380 3383 u64 max_pages_per_mr; 3381 3384 3382 3385 dev_attr = kmalloc(sizeof *dev_attr, GFP_KERNEL); ··· 3440 3443 if (IS_ERR(srp_dev->mr)) 3441 3444 goto err_pd; 3442 3445 3443 - if (device->node_type == RDMA_NODE_IB_SWITCH) { 3444 - s = 0; 3445 - e = 0; 3446 - } else { 3447 - s = 1; 3448 - e = device->phys_port_cnt; 3449 - } 3450 - 3451 - for (p = s; p <= e; ++p) { 3446 + for (p = rdma_start_port(device); p <= rdma_end_port(device); ++p) { 3452 3447 host = srp_add_port(srp_dev, p); 3453 3448 if (host) 3454 3449 list_add_tail(&host->list, &srp_dev->dev_list);
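The ib_srp.c hunk folds the open-coded "off"-or-integer parsing into a shared `srp_parse_tmo()` helper. A plain-C sketch of that parse shape ("off" maps to a disabled timeout of -1; the error code here stands in for the kernel's -EINVAL):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* "off" disables the timeout (-1); anything else must parse as an int */
static int parse_tmo(int *tmo, const char *val)
{
	char *end;
	long v;

	if (strncmp(val, "off", 3) == 0) {
		*tmo = -1;
		return 0;
	}
	v = strtol(val, &end, 0);
	if (end == val || *end != '\0')
		return -1;	/* kernel returns -EINVAL here */
	*tmo = (int)v;
	return 0;
}
```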
+35 -36
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 302 302 int i; 303 303 304 304 ioui = (struct ib_dm_iou_info *)mad->data; 305 - ioui->change_id = __constant_cpu_to_be16(1); 305 + ioui->change_id = cpu_to_be16(1); 306 306 ioui->max_controllers = 16; 307 307 308 308 /* set present for slot 1 and empty for the rest */ ··· 330 330 331 331 if (!slot || slot > 16) { 332 332 mad->mad_hdr.status 333 - = __constant_cpu_to_be16(DM_MAD_STATUS_INVALID_FIELD); 333 + = cpu_to_be16(DM_MAD_STATUS_INVALID_FIELD); 334 334 return; 335 335 } 336 336 337 337 if (slot > 2) { 338 338 mad->mad_hdr.status 339 - = __constant_cpu_to_be16(DM_MAD_STATUS_NO_IOC); 339 + = cpu_to_be16(DM_MAD_STATUS_NO_IOC); 340 340 return; 341 341 } 342 342 ··· 348 348 iocp->device_version = cpu_to_be16(sdev->dev_attr.hw_ver); 349 349 iocp->subsys_vendor_id = cpu_to_be32(sdev->dev_attr.vendor_id); 350 350 iocp->subsys_device_id = 0x0; 351 - iocp->io_class = __constant_cpu_to_be16(SRP_REV16A_IB_IO_CLASS); 352 - iocp->io_subclass = __constant_cpu_to_be16(SRP_IO_SUBCLASS); 353 - iocp->protocol = __constant_cpu_to_be16(SRP_PROTOCOL); 354 - iocp->protocol_version = __constant_cpu_to_be16(SRP_PROTOCOL_VERSION); 351 + iocp->io_class = cpu_to_be16(SRP_REV16A_IB_IO_CLASS); 352 + iocp->io_subclass = cpu_to_be16(SRP_IO_SUBCLASS); 353 + iocp->protocol = cpu_to_be16(SRP_PROTOCOL); 354 + iocp->protocol_version = cpu_to_be16(SRP_PROTOCOL_VERSION); 355 355 iocp->send_queue_depth = cpu_to_be16(sdev->srq_size); 356 356 iocp->rdma_read_depth = 4; 357 357 iocp->send_size = cpu_to_be32(srp_max_req_size); ··· 379 379 380 380 if (!slot || slot > 16) { 381 381 mad->mad_hdr.status 382 - = __constant_cpu_to_be16(DM_MAD_STATUS_INVALID_FIELD); 382 + = cpu_to_be16(DM_MAD_STATUS_INVALID_FIELD); 383 383 return; 384 384 } 385 385 386 386 if (slot > 2 || lo > hi || hi > 1) { 387 387 mad->mad_hdr.status 388 - = __constant_cpu_to_be16(DM_MAD_STATUS_NO_IOC); 388 + = cpu_to_be16(DM_MAD_STATUS_NO_IOC); 389 389 return; 390 390 } 391 391 ··· 436 436 break; 437 437 default: 438 438 
rsp_mad->mad_hdr.status = 439 - __constant_cpu_to_be16(DM_MAD_STATUS_UNSUP_METHOD_ATTR); 439 + cpu_to_be16(DM_MAD_STATUS_UNSUP_METHOD_ATTR); 440 440 break; 441 441 } 442 442 } ··· 493 493 break; 494 494 case IB_MGMT_METHOD_SET: 495 495 dm_mad->mad_hdr.status = 496 - __constant_cpu_to_be16(DM_MAD_STATUS_UNSUP_METHOD_ATTR); 496 + cpu_to_be16(DM_MAD_STATUS_UNSUP_METHOD_ATTR); 497 497 break; 498 498 default: 499 499 dm_mad->mad_hdr.status = 500 - __constant_cpu_to_be16(DM_MAD_STATUS_UNSUP_METHOD); 500 + cpu_to_be16(DM_MAD_STATUS_UNSUP_METHOD); 501 501 break; 502 502 } 503 503 ··· 1535 1535 memset(srp_rsp, 0, sizeof *srp_rsp); 1536 1536 srp_rsp->opcode = SRP_RSP; 1537 1537 srp_rsp->req_lim_delta = 1538 - __constant_cpu_to_be32(1 + atomic_xchg(&ch->req_lim_delta, 0)); 1538 + cpu_to_be32(1 + atomic_xchg(&ch->req_lim_delta, 0)); 1539 1539 srp_rsp->tag = tag; 1540 1540 srp_rsp->status = status; 1541 1541 ··· 1585 1585 memset(srp_rsp, 0, sizeof *srp_rsp); 1586 1586 1587 1587 srp_rsp->opcode = SRP_RSP; 1588 - srp_rsp->req_lim_delta = __constant_cpu_to_be32(1 1589 - + atomic_xchg(&ch->req_lim_delta, 0)); 1588 + srp_rsp->req_lim_delta = 1589 + cpu_to_be32(1 + atomic_xchg(&ch->req_lim_delta, 0)); 1590 1590 srp_rsp->tag = tag; 1591 1591 1592 1592 srp_rsp->flags |= SRP_RSP_FLAG_RSPVALID; ··· 1630 1630 switch (len) { 1631 1631 case 8: 1632 1632 if ((*((__be64 *)lun) & 1633 - __constant_cpu_to_be64(0x0000FFFFFFFFFFFFLL)) != 0) 1633 + cpu_to_be64(0x0000FFFFFFFFFFFFLL)) != 0) 1634 1634 goto out_err; 1635 1635 break; 1636 1636 case 4: ··· 2449 2449 } 2450 2450 2451 2451 if (it_iu_len > srp_max_req_size || it_iu_len < 64) { 2452 - rej->reason = __constant_cpu_to_be32( 2453 - SRP_LOGIN_REJ_REQ_IT_IU_LENGTH_TOO_LARGE); 2452 + rej->reason = cpu_to_be32( 2453 + SRP_LOGIN_REJ_REQ_IT_IU_LENGTH_TOO_LARGE); 2454 2454 ret = -EINVAL; 2455 2455 pr_err("rejected SRP_LOGIN_REQ because its" 2456 2456 " length (%d bytes) is out of range (%d .. 
%d)\n", ··· 2459 2459 } 2460 2460 2461 2461 if (!sport->enabled) { 2462 - rej->reason = __constant_cpu_to_be32( 2463 - SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2462 + rej->reason = cpu_to_be32( 2463 + SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2464 2464 ret = -EINVAL; 2465 2465 pr_err("rejected SRP_LOGIN_REQ because the target port" 2466 2466 " has not yet been enabled\n"); ··· 2505 2505 if (*(__be64 *)req->target_port_id != cpu_to_be64(srpt_service_guid) 2506 2506 || *(__be64 *)(req->target_port_id + 8) != 2507 2507 cpu_to_be64(srpt_service_guid)) { 2508 - rej->reason = __constant_cpu_to_be32( 2509 - SRP_LOGIN_REJ_UNABLE_ASSOCIATE_CHANNEL); 2508 + rej->reason = cpu_to_be32( 2509 + SRP_LOGIN_REJ_UNABLE_ASSOCIATE_CHANNEL); 2510 2510 ret = -ENOMEM; 2511 2511 pr_err("rejected SRP_LOGIN_REQ because it" 2512 2512 " has an invalid target port identifier.\n"); ··· 2515 2515 2516 2516 ch = kzalloc(sizeof *ch, GFP_KERNEL); 2517 2517 if (!ch) { 2518 - rej->reason = __constant_cpu_to_be32( 2519 - SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2518 + rej->reason = cpu_to_be32( 2519 + SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2520 2520 pr_err("rejected SRP_LOGIN_REQ because no memory.\n"); 2521 2521 ret = -ENOMEM; 2522 2522 goto reject; ··· 2552 2552 2553 2553 ret = srpt_create_ch_ib(ch); 2554 2554 if (ret) { 2555 - rej->reason = __constant_cpu_to_be32( 2556 - SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2555 + rej->reason = cpu_to_be32( 2556 + SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2557 2557 pr_err("rejected SRP_LOGIN_REQ because creating" 2558 2558 " a new RDMA channel failed.\n"); 2559 2559 goto free_ring; ··· 2561 2561 2562 2562 ret = srpt_ch_qp_rtr(ch, ch->qp); 2563 2563 if (ret) { 2564 - rej->reason = __constant_cpu_to_be32( 2565 - SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2564 + rej->reason = cpu_to_be32(SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2566 2565 pr_err("rejected SRP_LOGIN_REQ because enabling" 2567 2566 " RTR failed (error code = %d)\n", ret); 2568 2567 goto destroy_ib; ··· 2579 2580 if 
(!nacl) { 2580 2581 pr_info("Rejected login because no ACL has been" 2581 2582 " configured yet for initiator %s.\n", ch->sess_name); 2582 - rej->reason = __constant_cpu_to_be32( 2583 - SRP_LOGIN_REJ_CHANNEL_LIMIT_REACHED); 2583 + rej->reason = cpu_to_be32( 2584 + SRP_LOGIN_REJ_CHANNEL_LIMIT_REACHED); 2584 2585 goto destroy_ib; 2585 2586 } 2586 2587 2587 2588 ch->sess = transport_init_session(TARGET_PROT_NORMAL); 2588 2589 if (IS_ERR(ch->sess)) { 2589 - rej->reason = __constant_cpu_to_be32( 2590 - SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2590 + rej->reason = cpu_to_be32( 2591 + SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES); 2591 2592 pr_debug("Failed to create session\n"); 2592 2593 goto deregister_session; 2593 2594 } ··· 2603 2604 rsp->max_it_iu_len = req->req_it_iu_len; 2604 2605 rsp->max_ti_iu_len = req->req_it_iu_len; 2605 2606 ch->max_ti_iu_len = it_iu_len; 2606 - rsp->buf_fmt = __constant_cpu_to_be16(SRP_BUF_FORMAT_DIRECT 2607 - | SRP_BUF_FORMAT_INDIRECT); 2607 + rsp->buf_fmt = cpu_to_be16(SRP_BUF_FORMAT_DIRECT 2608 + | SRP_BUF_FORMAT_INDIRECT); 2608 2609 rsp->req_lim_delta = cpu_to_be32(ch->rq_size); 2609 2610 atomic_set(&ch->req_lim, ch->rq_size); 2610 2611 atomic_set(&ch->req_lim_delta, 0); ··· 2654 2655 reject: 2655 2656 rej->opcode = SRP_LOGIN_REJ; 2656 2657 rej->tag = req->tag; 2657 - rej->buf_fmt = __constant_cpu_to_be16(SRP_BUF_FORMAT_DIRECT 2658 - | SRP_BUF_FORMAT_INDIRECT); 2658 + rej->buf_fmt = cpu_to_be16(SRP_BUF_FORMAT_DIRECT 2659 + | SRP_BUF_FORMAT_INDIRECT); 2659 2660 2660 2661 ib_send_cm_rej(cm_id, IB_CM_REJ_CONSUMER_DEFINED, NULL, 0, 2661 2662 (void *)rej, sizeof *rej);
+5 -7
drivers/input/mouse/elan_i2c_core.c
··· 771 771 */ 772 772 static void elan_report_contact(struct elan_tp_data *data, 773 773 int contact_num, bool contact_valid, 774 - bool hover_event, u8 *finger_data) 774 + u8 *finger_data) 775 775 { 776 776 struct input_dev *input = data->input; 777 777 unsigned int pos_x, pos_y; ··· 815 815 input_mt_report_slot_state(input, MT_TOOL_FINGER, true); 816 816 input_report_abs(input, ABS_MT_POSITION_X, pos_x); 817 817 input_report_abs(input, ABS_MT_POSITION_Y, data->max_y - pos_y); 818 - input_report_abs(input, ABS_MT_DISTANCE, hover_event); 819 - input_report_abs(input, ABS_MT_PRESSURE, 820 - hover_event ? 0 : scaled_pressure); 818 + input_report_abs(input, ABS_MT_PRESSURE, scaled_pressure); 821 819 input_report_abs(input, ABS_TOOL_WIDTH, mk_x); 822 820 input_report_abs(input, ABS_MT_TOUCH_MAJOR, major); 823 821 input_report_abs(input, ABS_MT_TOUCH_MINOR, minor); ··· 837 839 hover_event = hover_info & 0x40; 838 840 for (i = 0; i < ETP_MAX_FINGERS; i++) { 839 841 contact_valid = tp_info & (1U << (3 + i)); 840 - elan_report_contact(data, i, contact_valid, hover_event, 841 - finger_data); 842 + elan_report_contact(data, i, contact_valid, finger_data); 842 843 843 844 if (contact_valid) 844 845 finger_data += ETP_FINGER_DATA_LEN; 845 846 } 846 847 847 848 input_report_key(input, BTN_LEFT, tp_info & 0x01); 849 + input_report_abs(input, ABS_DISTANCE, hover_event != 0); 848 850 input_mt_report_pointer_emulation(input, true); 849 851 input_sync(input); 850 852 } ··· 920 922 input_abs_set_res(input, ABS_Y, data->y_res); 921 923 input_set_abs_params(input, ABS_PRESSURE, 0, ETP_MAX_PRESSURE, 0, 0); 922 924 input_set_abs_params(input, ABS_TOOL_WIDTH, 0, ETP_FINGER_WIDTH, 0, 0); 925 + input_set_abs_params(input, ABS_DISTANCE, 0, 1, 0, 0); 923 926 924 927 /* And MT parameters */ 925 928 input_set_abs_params(input, ABS_MT_POSITION_X, 0, data->max_x, 0, 0); ··· 933 934 ETP_FINGER_WIDTH * max_width, 0, 0); 934 935 input_set_abs_params(input, ABS_MT_TOUCH_MINOR, 0, 935 936 
ETP_FINGER_WIDTH * min_width, 0, 0); 936 - input_set_abs_params(input, ABS_MT_DISTANCE, 0, 1, 0, 0); 937 937 938 938 data->input = input; 939 939
+1 -1
drivers/input/mouse/synaptics.c
··· 1199 1199 ABS_MT_POSITION_Y); 1200 1200 /* Image sensors can report per-contact pressure */ 1201 1201 input_set_abs_params(dev, ABS_MT_PRESSURE, 0, 255, 0, 0); 1202 - input_mt_init_slots(dev, 3, INPUT_MT_POINTER | INPUT_MT_TRACK); 1202 + input_mt_init_slots(dev, 2, INPUT_MT_POINTER | INPUT_MT_TRACK); 1203 1203 1204 1204 /* Image sensors can signal 4 and 5 finger clicks */ 1205 1205 __set_bit(BTN_TOOL_QUADTAP, dev->keybit);
+75 -36
drivers/irqchip/irq-gic-v3-its.c
··· 75 75 76 76 #define ITS_ITT_ALIGN SZ_256 77 77 78 + struct event_lpi_map { 79 + unsigned long *lpi_map; 80 + u16 *col_map; 81 + irq_hw_number_t lpi_base; 82 + int nr_lpis; 83 + }; 84 + 78 85 /* 79 86 * The ITS view of a device - belongs to an ITS, a collection, owns an 80 87 * interrupt translation table, and a list of interrupts. ··· 89 82 struct its_device { 90 83 struct list_head entry; 91 84 struct its_node *its; 92 - struct its_collection *collection; 85 + struct event_lpi_map event_map; 93 86 void *itt; 94 - unsigned long *lpi_map; 95 - irq_hw_number_t lpi_base; 96 - int nr_lpis; 97 87 u32 nr_ites; 98 88 u32 device_id; 99 89 }; ··· 102 98 103 99 #define gic_data_rdist() (raw_cpu_ptr(gic_rdists->rdist)) 104 100 #define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base) 101 + 102 + static struct its_collection *dev_event_to_col(struct its_device *its_dev, 103 + u32 event) 104 + { 105 + struct its_node *its = its_dev->its; 106 + 107 + return its->collections + its_dev->event_map.col_map[event]; 108 + } 105 109 106 110 /* 107 111 * ITS command descriptors - parameters to be encoded in a command ··· 146 134 struct { 147 135 struct its_device *dev; 148 136 struct its_collection *col; 149 - u32 id; 137 + u32 event_id; 150 138 } its_movi_cmd; 151 139 152 140 struct { ··· 253 241 254 242 its_fixup_cmd(cmd); 255 243 256 - return desc->its_mapd_cmd.dev->collection; 244 + return NULL; 257 245 } 258 246 259 247 static struct its_collection *its_build_mapc_cmd(struct its_cmd_block *cmd, ··· 272 260 static struct its_collection *its_build_mapvi_cmd(struct its_cmd_block *cmd, 273 261 struct its_cmd_desc *desc) 274 262 { 263 + struct its_collection *col; 264 + 265 + col = dev_event_to_col(desc->its_mapvi_cmd.dev, 266 + desc->its_mapvi_cmd.event_id); 267 + 275 268 its_encode_cmd(cmd, GITS_CMD_MAPVI); 276 269 its_encode_devid(cmd, desc->its_mapvi_cmd.dev->device_id); 277 270 its_encode_event_id(cmd, desc->its_mapvi_cmd.event_id); 278 271 its_encode_phys_id(cmd, 
desc->its_mapvi_cmd.phys_id); 279 - its_encode_collection(cmd, desc->its_mapvi_cmd.dev->collection->col_id); 272 + its_encode_collection(cmd, col->col_id); 280 273 281 274 its_fixup_cmd(cmd); 282 275 283 - return desc->its_mapvi_cmd.dev->collection; 276 + return col; 284 277 } 285 278 286 279 static struct its_collection *its_build_movi_cmd(struct its_cmd_block *cmd, 287 280 struct its_cmd_desc *desc) 288 281 { 282 + struct its_collection *col; 283 + 284 + col = dev_event_to_col(desc->its_movi_cmd.dev, 285 + desc->its_movi_cmd.event_id); 286 + 289 287 its_encode_cmd(cmd, GITS_CMD_MOVI); 290 288 its_encode_devid(cmd, desc->its_movi_cmd.dev->device_id); 291 - its_encode_event_id(cmd, desc->its_movi_cmd.id); 289 + its_encode_event_id(cmd, desc->its_movi_cmd.event_id); 292 290 its_encode_collection(cmd, desc->its_movi_cmd.col->col_id); 293 291 294 292 its_fixup_cmd(cmd); 295 293 296 - return desc->its_movi_cmd.dev->collection; 294 + return col; 297 295 } 298 296 299 297 static struct its_collection *its_build_discard_cmd(struct its_cmd_block *cmd, 300 298 struct its_cmd_desc *desc) 301 299 { 300 + struct its_collection *col; 301 + 302 + col = dev_event_to_col(desc->its_discard_cmd.dev, 303 + desc->its_discard_cmd.event_id); 304 + 302 305 its_encode_cmd(cmd, GITS_CMD_DISCARD); 303 306 its_encode_devid(cmd, desc->its_discard_cmd.dev->device_id); 304 307 its_encode_event_id(cmd, desc->its_discard_cmd.event_id); 305 308 306 309 its_fixup_cmd(cmd); 307 310 308 - return desc->its_discard_cmd.dev->collection; 311 + return col; 309 312 } 310 313 311 314 static struct its_collection *its_build_inv_cmd(struct its_cmd_block *cmd, 312 315 struct its_cmd_desc *desc) 313 316 { 317 + struct its_collection *col; 318 + 319 + col = dev_event_to_col(desc->its_inv_cmd.dev, 320 + desc->its_inv_cmd.event_id); 321 + 314 322 its_encode_cmd(cmd, GITS_CMD_INV); 315 323 its_encode_devid(cmd, desc->its_inv_cmd.dev->device_id); 316 324 its_encode_event_id(cmd, desc->its_inv_cmd.event_id); 317 325 
318 326 its_fixup_cmd(cmd); 319 327 320 - return desc->its_inv_cmd.dev->collection; 328 + return col; 321 329 } 322 330 323 331 static struct its_collection *its_build_invall_cmd(struct its_cmd_block *cmd, ··· 529 497 530 498 desc.its_movi_cmd.dev = dev; 531 499 desc.its_movi_cmd.col = col; 532 - desc.its_movi_cmd.id = id; 500 + desc.its_movi_cmd.event_id = id; 533 501 534 502 its_send_single_command(dev->its, its_build_movi_cmd, &desc); 535 503 } ··· 560 528 static inline u32 its_get_event_id(struct irq_data *d) 561 529 { 562 530 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 563 - return d->hwirq - its_dev->lpi_base; 531 + return d->hwirq - its_dev->event_map.lpi_base; 564 532 } 565 533 566 534 static void lpi_set_config(struct irq_data *d, bool enable) ··· 615 583 616 584 target_col = &its_dev->its->collections[cpu]; 617 585 its_send_movi(its_dev, target_col, id); 618 - its_dev->collection = target_col; 586 + its_dev->event_map.col_map[id] = cpu; 619 587 620 588 return IRQ_SET_MASK_OK_DONE; 621 589 } ··· 745 713 return bitmap; 746 714 } 747 715 748 - static void its_lpi_free(unsigned long *bitmap, int base, int nr_ids) 716 + static void its_lpi_free(struct event_lpi_map *map) 749 717 { 718 + int base = map->lpi_base; 719 + int nr_ids = map->nr_lpis; 750 720 int lpi; 751 721 752 722 spin_lock(&lpi_lock); ··· 765 731 766 732 spin_unlock(&lpi_lock); 767 733 768 - kfree(bitmap); 734 + kfree(map->lpi_map); 735 + kfree(map->col_map); 769 736 } 770 737 771 738 /* ··· 1134 1099 struct its_device *dev; 1135 1100 unsigned long *lpi_map; 1136 1101 unsigned long flags; 1102 + u16 *col_map = NULL; 1137 1103 void *itt; 1138 1104 int lpi_base; 1139 1105 int nr_lpis; 1140 1106 int nr_ites; 1141 - int cpu; 1142 1107 int sz; 1143 1108 1144 1109 dev = kzalloc(sizeof(*dev), GFP_KERNEL); ··· 1152 1117 sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1; 1153 1118 itt = kzalloc(sz, GFP_KERNEL); 1154 1119 lpi_map = its_lpi_alloc_chunks(nvecs, &lpi_base, &nr_lpis); 1120 + if 
(lpi_map) 1121 + col_map = kzalloc(sizeof(*col_map) * nr_lpis, GFP_KERNEL); 1155 1122 1156 - if (!dev || !itt || !lpi_map) { 1123 + if (!dev || !itt || !lpi_map || !col_map) { 1157 1124 kfree(dev); 1158 1125 kfree(itt); 1159 1126 kfree(lpi_map); 1127 + kfree(col_map); 1160 1128 return NULL; 1161 1129 } 1162 1130 1163 1131 dev->its = its; 1164 1132 dev->itt = itt; 1165 1133 dev->nr_ites = nr_ites; 1166 - dev->lpi_map = lpi_map; 1167 - dev->lpi_base = lpi_base; 1168 - dev->nr_lpis = nr_lpis; 1134 + dev->event_map.lpi_map = lpi_map; 1135 + dev->event_map.col_map = col_map; 1136 + dev->event_map.lpi_base = lpi_base; 1137 + dev->event_map.nr_lpis = nr_lpis; 1169 1138 dev->device_id = dev_id; 1170 1139 INIT_LIST_HEAD(&dev->entry); 1171 1140 1172 1141 raw_spin_lock_irqsave(&its->lock, flags); 1173 1142 list_add(&dev->entry, &its->its_device_list); 1174 1143 raw_spin_unlock_irqrestore(&its->lock, flags); 1175 - 1176 - /* Bind the device to the first possible CPU */ 1177 - cpu = cpumask_first(cpu_online_mask); 1178 - dev->collection = &its->collections[cpu]; 1179 1144 1180 1145 /* Map device to its ITT */ 1181 1146 its_send_mapd(dev, 1); ··· 1198 1163 { 1199 1164 int idx; 1200 1165 1201 - idx = find_first_zero_bit(dev->lpi_map, dev->nr_lpis); 1202 - if (idx == dev->nr_lpis) 1166 + idx = find_first_zero_bit(dev->event_map.lpi_map, 1167 + dev->event_map.nr_lpis); 1168 + if (idx == dev->event_map.nr_lpis) 1203 1169 return -ENOSPC; 1204 1170 1205 - *hwirq = dev->lpi_base + idx; 1206 - set_bit(idx, dev->lpi_map); 1171 + *hwirq = dev->event_map.lpi_base + idx; 1172 + set_bit(idx, dev->event_map.lpi_map); 1207 1173 1208 1174 return 0; 1209 1175 } ··· 1324 1288 irq_domain_set_hwirq_and_chip(domain, virq + i, 1325 1289 hwirq, &its_irq_chip, its_dev); 1326 1290 dev_dbg(info->scratchpad[1].ptr, "ID:%d pID:%d vID:%d\n", 1327 - (int)(hwirq - its_dev->lpi_base), (int)hwirq, virq + i); 1291 + (int)(hwirq - its_dev->event_map.lpi_base), 1292 + (int)hwirq, virq + i); 1328 1293 } 1329 1294 
1330 1295 return 0; ··· 1336 1299 { 1337 1300 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 1338 1301 u32 event = its_get_event_id(d); 1302 + 1303 + /* Bind the LPI to the first possible CPU */ 1304 + its_dev->event_map.col_map[event] = cpumask_first(cpu_online_mask); 1339 1305 1340 1306 /* Map the GIC IRQ and event to the device */ 1341 1307 its_send_mapvi(its_dev, d->hwirq, event); ··· 1367 1327 u32 event = its_get_event_id(data); 1368 1328 1369 1329 /* Mark interrupt index as unused */ 1370 - clear_bit(event, its_dev->lpi_map); 1330 + clear_bit(event, its_dev->event_map.lpi_map); 1371 1331 1372 1332 /* Nuke the entry in the domain */ 1373 1333 irq_domain_reset_irq_data(data); 1374 1334 } 1375 1335 1376 1336 /* If all interrupts have been freed, start mopping the floor */ 1377 - if (bitmap_empty(its_dev->lpi_map, its_dev->nr_lpis)) { 1378 - its_lpi_free(its_dev->lpi_map, 1379 - its_dev->lpi_base, 1380 - its_dev->nr_lpis); 1337 + if (bitmap_empty(its_dev->event_map.lpi_map, 1338 + its_dev->event_map.nr_lpis)) { 1339 + its_lpi_free(&its_dev->event_map); 1381 1340 1382 1341 /* Unmap device/itt */ 1383 1342 its_send_mapd(its_dev, 0);
+1 -1
drivers/irqchip/irq-gic.c
··· 1055 1055 1056 1056 processor = (struct acpi_madt_generic_interrupt *)header; 1057 1057 1058 - if (BAD_MADT_ENTRY(processor, end)) 1058 + if (BAD_MADT_GICC_ENTRY(processor, end)) 1059 1059 return -EINVAL; 1060 1060 1061 1061 /*
-10
drivers/irqchip/irq-mips-gic.c
··· 257 257 return MIPS_CPU_IRQ_BASE + cp0_fdc_irq; 258 258 } 259 259 260 - /* 261 - * Some cores claim the FDC is routable but it doesn't actually seem to 262 - * be connected. 263 - */ 264 - switch (current_cpu_type()) { 265 - case CPU_INTERAPTIV: 266 - case CPU_PROAPTIV: 267 - return -1; 268 - } 269 - 270 260 return irq_create_mapping(gic_irq_domain, 271 261 GIC_LOCAL_TO_HWIRQ(GIC_LOCAL_INT_FDC)); 272 262 }
+1 -1
drivers/irqchip/spear-shirq.c
··· 2 2 * SPEAr platform shared irq layer source file 3 3 * 4 4 * Copyright (C) 2009-2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * Copyright (C) 2012 ST Microelectronics 8 8 * Shiraz Hashim <shiraz.linux.kernel@gmail.com>
-3
drivers/md/bcache/closure.h
··· 320 320 do { \ 321 321 set_closure_fn(_cl, _fn, _wq); \ 322 322 closure_sub(_cl, CLOSURE_RUNNING + 1); \ 323 - return; \ 324 323 } while (0) 325 324 326 325 /** ··· 348 349 do { \ 349 350 set_closure_fn(_cl, _fn, _wq); \ 350 351 closure_queue(_cl); \ 351 - return; \ 352 352 } while (0) 353 353 354 354 /** ··· 363 365 do { \ 364 366 set_closure_fn(_cl, _destructor, NULL); \ 365 367 closure_sub(_cl, CLOSURE_RUNNING - CLOSURE_DESTRUCTOR + 1); \ 366 - return; \ 367 368 } while (0) 368 369 369 370 /**
+1
drivers/md/bcache/io.c
··· 105 105 } while (n != bio); 106 106 107 107 continue_at(&s->cl, bch_bio_submit_split_done, NULL); 108 + return; 108 109 submit: 109 110 generic_make_request(bio); 110 111 }
+2
drivers/md/bcache/journal.c
··· 592 592 593 593 if (!w->need_write) { 594 594 closure_return_with_destructor(cl, journal_write_unlock); 595 + return; 595 596 } else if (journal_full(&c->journal)) { 596 597 journal_reclaim(c); 597 598 spin_unlock(&c->journal.lock); 598 599 599 600 btree_flush_write(c); 600 601 continue_at(cl, journal_write, system_wq); 602 + return; 601 603 } 602 604 603 605 c->journal.blocks_free -= set_blocks(w->data, block_bytes(c));
+11 -3
drivers/md/bcache/request.c
··· 88 88 if (journal_ref) 89 89 atomic_dec_bug(journal_ref); 90 90 91 - if (!op->insert_data_done) 91 + if (!op->insert_data_done) { 92 92 continue_at(cl, bch_data_insert_start, op->wq); 93 + return; 94 + } 93 95 94 96 bch_keylist_free(&op->insert_keys); 95 97 closure_return(cl); ··· 218 216 /* 1 for the device pointer and 1 for the chksum */ 219 217 if (bch_keylist_realloc(&op->insert_keys, 220 218 3 + (op->csum ? 1 : 0), 221 - op->c)) 219 + op->c)) { 222 220 continue_at(cl, bch_data_insert_keys, op->wq); 221 + return; 222 + } 223 223 224 224 k = op->insert_keys.top; 225 225 bkey_init(k); ··· 259 255 260 256 op->insert_data_done = true; 261 257 continue_at(cl, bch_data_insert_keys, op->wq); 258 + return; 262 259 err: 263 260 /* bch_alloc_sectors() blocks if s->writeback = true */ 264 261 BUG_ON(op->writeback); ··· 581 576 ret = bch_btree_map_keys(&s->op, s->iop.c, 582 577 &KEY(s->iop.inode, bio->bi_iter.bi_sector, 0), 583 578 cache_lookup_fn, MAP_END_KEY); 584 - if (ret == -EAGAIN) 579 + if (ret == -EAGAIN) { 585 580 continue_at(cl, cache_lookup, bcache_wq); 581 + return; 582 + } 586 583 587 584 closure_return(cl); 588 585 } ··· 1092 1085 continue_at_nobarrier(&s->cl, 1093 1086 flash_dev_nodata, 1094 1087 bcache_wq); 1088 + return; 1095 1089 } else if (rw) { 1096 1090 bch_keybuf_check_overlapping(&s->iop.c->moving_gc_keys, 1097 1091 &KEY(d->id, bio->bi_iter.bi_sector, 0),
+23 -15
drivers/md/dm-cache-target.c
··· 424 424 wake_up(&cache->migration_wait); 425 425 426 426 mempool_free(mg, cache->migration_pool); 427 - wake_worker(cache); 428 427 } 429 428 430 429 static int prealloc_data_structs(struct cache *cache, struct prealloc *p) ··· 1946 1947 1947 1948 static void process_deferred_bios(struct cache *cache) 1948 1949 { 1950 + bool prealloc_used = false; 1949 1951 unsigned long flags; 1950 1952 struct bio_list bios; 1951 1953 struct bio *bio; ··· 1981 1981 process_discard_bio(cache, &structs, bio); 1982 1982 else 1983 1983 process_bio(cache, &structs, bio); 1984 + prealloc_used = true; 1984 1985 } 1985 1986 1986 - prealloc_free_structs(cache, &structs); 1987 + if (prealloc_used) 1988 + prealloc_free_structs(cache, &structs); 1987 1989 } 1988 1990 1989 1991 static void process_deferred_cells(struct cache *cache) 1990 1992 { 1993 + bool prealloc_used = false; 1991 1994 unsigned long flags; 1992 1995 struct dm_bio_prison_cell *cell, *tmp; 1993 1996 struct list_head cells; ··· 2018 2015 } 2019 2016 2020 2017 process_cell(cache, &structs, cell); 2018 + prealloc_used = true; 2021 2019 } 2022 2020 2023 - prealloc_free_structs(cache, &structs); 2021 + if (prealloc_used) 2022 + prealloc_free_structs(cache, &structs); 2024 2023 } 2025 2024 2026 2025 static void process_deferred_flush_bios(struct cache *cache, bool submit_bios) ··· 2067 2062 2068 2063 static void writeback_some_dirty_blocks(struct cache *cache) 2069 2064 { 2070 - int r = 0; 2065 + bool prealloc_used = false; 2071 2066 dm_oblock_t oblock; 2072 2067 dm_cblock_t cblock; 2073 2068 struct prealloc structs; ··· 2077 2072 memset(&structs, 0, sizeof(structs)); 2078 2073 2079 2074 while (spare_migration_bandwidth(cache)) { 2080 - if (prealloc_data_structs(cache, &structs)) 2081 - break; 2075 + if (policy_writeback_work(cache->policy, &oblock, &cblock, busy)) 2076 + break; /* no work to do */ 2082 2077 2083 - r = policy_writeback_work(cache->policy, &oblock, &cblock, busy); 2084 - if (r) 2085 - break; 2086 - 2087 - r = 
get_cell(cache, oblock, &structs, &old_ocell); 2088 - if (r) { 2078 + if (prealloc_data_structs(cache, &structs) || 2079 + get_cell(cache, oblock, &structs, &old_ocell)) { 2089 2080 policy_set_dirty(cache->policy, oblock); 2090 2081 break; 2091 2082 } 2092 2083 2093 2084 writeback(cache, &structs, oblock, cblock, old_ocell); 2085 + prealloc_used = true; 2094 2086 } 2095 2087 2096 - prealloc_free_structs(cache, &structs); 2088 + if (prealloc_used) 2089 + prealloc_free_structs(cache, &structs); 2097 2090 } 2098 2091 2099 2092 /*---------------------------------------------------------------- ··· 3499 3496 * <#demotions> <#promotions> <#dirty> 3500 3497 * <#features> <features>* 3501 3498 * <#core args> <core args> 3502 - * <policy name> <#policy args> <policy args>* <cache metadata mode> 3499 + * <policy name> <#policy args> <policy args>* <cache metadata mode> <needs_check> 3503 3500 */ 3504 3501 static void cache_status(struct dm_target *ti, status_type_t type, 3505 3502 unsigned status_flags, char *result, unsigned maxlen) ··· 3584 3581 DMEMIT("ro "); 3585 3582 else 3586 3583 DMEMIT("rw "); 3584 + 3585 + if (dm_cache_metadata_needs_check(cache->cmd)) 3586 + DMEMIT("needs_check "); 3587 + else 3588 + DMEMIT("- "); 3587 3589 3588 3590 break; 3589 3591 ··· 3828 3820 3829 3821 static struct target_type cache_target = { 3830 3822 .name = "cache", 3831 - .version = {1, 7, 0}, 3823 + .version = {1, 8, 0}, 3832 3824 .module = THIS_MODULE, 3833 3825 .ctr = cache_ctr, 3834 3826 .dtr = cache_dtr,
+37 -7
drivers/md/dm-thin.c
··· 18 18 #include <linux/init.h> 19 19 #include <linux/module.h> 20 20 #include <linux/slab.h> 21 + #include <linux/vmalloc.h> 21 22 #include <linux/sort.h> 22 23 #include <linux/rbtree.h> 23 24 ··· 269 268 process_mapping_fn process_prepared_mapping; 270 269 process_mapping_fn process_prepared_discard; 271 270 272 - struct dm_bio_prison_cell *cell_sort_array[CELL_SORT_ARRAY_SIZE]; 271 + struct dm_bio_prison_cell **cell_sort_array; 273 272 }; 274 273 275 274 static enum pool_mode get_pool_mode(struct pool *pool); ··· 2282 2281 queue_delayed_work(pool->wq, &pool->waker, COMMIT_PERIOD); 2283 2282 } 2284 2283 2284 + static void notify_of_pool_mode_change_to_oods(struct pool *pool); 2285 + 2285 2286 /* 2286 2287 * We're holding onto IO to allow userland time to react. After the 2287 2288 * timeout either the pool will have been resized (and thus back in 2288 - * PM_WRITE mode), or we degrade to PM_READ_ONLY and start erroring IO. 2289 + * PM_WRITE mode), or we degrade to PM_OUT_OF_DATA_SPACE w/ error_if_no_space. 
2289 2290 */ 2290 2291 static void do_no_space_timeout(struct work_struct *ws) 2291 2292 { 2292 2293 struct pool *pool = container_of(to_delayed_work(ws), struct pool, 2293 2294 no_space_timeout); 2294 2295 2295 - if (get_pool_mode(pool) == PM_OUT_OF_DATA_SPACE && !pool->pf.error_if_no_space) 2296 - set_pool_mode(pool, PM_READ_ONLY); 2296 + if (get_pool_mode(pool) == PM_OUT_OF_DATA_SPACE && !pool->pf.error_if_no_space) { 2297 + pool->pf.error_if_no_space = true; 2298 + notify_of_pool_mode_change_to_oods(pool); 2299 + error_retry_list(pool); 2300 + } 2297 2301 } 2298 2302 2299 2303 /*----------------------------------------------------------------*/ ··· 2374 2368 dm_table_event(pool->ti->table); 2375 2369 DMINFO("%s: switching pool to %s mode", 2376 2370 dm_device_name(pool->pool_md), new_mode); 2371 + } 2372 + 2373 + static void notify_of_pool_mode_change_to_oods(struct pool *pool) 2374 + { 2375 + if (!pool->pf.error_if_no_space) 2376 + notify_of_pool_mode_change(pool, "out-of-data-space (queue IO)"); 2377 + else 2378 + notify_of_pool_mode_change(pool, "out-of-data-space (error IO)"); 2377 2379 } 2378 2380 2379 2381 static bool passdown_enabled(struct pool_c *pt) ··· 2468 2454 * frequently seeing this mode. 
2469 2455 */ 2470 2456 if (old_mode != new_mode) 2471 - notify_of_pool_mode_change(pool, "out-of-data-space"); 2457 + notify_of_pool_mode_change_to_oods(pool); 2472 2458 pool->process_bio = process_bio_read_only; 2473 2459 pool->process_discard = process_discard_bio; 2474 2460 pool->process_cell = process_cell_read_only; ··· 2791 2777 { 2792 2778 __pool_table_remove(pool); 2793 2779 2780 + vfree(pool->cell_sort_array); 2794 2781 if (dm_pool_metadata_close(pool->pmd) < 0) 2795 2782 DMWARN("%s: dm_pool_metadata_close() failed.", __func__); 2796 2783 ··· 2904 2889 goto bad_mapping_pool; 2905 2890 } 2906 2891 2892 + pool->cell_sort_array = vmalloc(sizeof(*pool->cell_sort_array) * CELL_SORT_ARRAY_SIZE); 2893 + if (!pool->cell_sort_array) { 2894 + *error = "Error allocating cell sort array"; 2895 + err_p = ERR_PTR(-ENOMEM); 2896 + goto bad_sort_array; 2897 + } 2898 + 2907 2899 pool->ref_count = 1; 2908 2900 pool->last_commit_jiffies = jiffies; 2909 2901 pool->pool_md = pool_md; ··· 2919 2897 2920 2898 return pool; 2921 2899 2900 + bad_sort_array: 2901 + mempool_destroy(pool->mapping_pool); 2922 2902 bad_mapping_pool: 2923 2903 dm_deferred_set_destroy(pool->all_io_ds); 2924 2904 bad_all_io_ds: ··· 3738 3714 * Status line is: 3739 3715 * <transaction id> <used metadata sectors>/<total metadata sectors> 3740 3716 * <used data sectors>/<total data sectors> <held metadata root> 3717 + * <pool mode> <discard config> <no space config> <needs_check> 3741 3718 */ 3742 3719 static void pool_status(struct dm_target *ti, status_type_t type, 3743 3720 unsigned status_flags, char *result, unsigned maxlen) ··· 3839 3814 DMEMIT("error_if_no_space "); 3840 3815 else 3841 3816 DMEMIT("queue_if_no_space "); 3817 + 3818 + if (dm_pool_metadata_needs_check(pool->pmd)) 3819 + DMEMIT("needs_check "); 3820 + else 3821 + DMEMIT("- "); 3842 3822 3843 3823 break; 3844 3824 ··· 3948 3918 .name = "thin-pool", 3949 3919 .features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE | 3950 3920 
DM_TARGET_IMMUTABLE, 3951 - .version = {1, 15, 0}, 3921 + .version = {1, 16, 0}, 3952 3922 .module = THIS_MODULE, 3953 3923 .ctr = pool_ctr, 3954 3924 .dtr = pool_dtr, ··· 4335 4305 4336 4306 static struct target_type thin_target = { 4337 4307 .name = "thin", 4338 - .version = {1, 15, 0}, 4308 + .version = {1, 16, 0}, 4339 4309 .module = THIS_MODULE, 4340 4310 .ctr = thin_ctr, 4341 4311 .dtr = thin_dtr,
+4 -8
drivers/md/dm.c
··· 1067 1067 */ 1068 1068 static void rq_completed(struct mapped_device *md, int rw, bool run_queue) 1069 1069 { 1070 - int nr_requests_pending; 1071 - 1072 1070 atomic_dec(&md->pending[rw]); 1073 1071 1074 1072 /* nudge anyone waiting on suspend queue */ 1075 - nr_requests_pending = md_in_flight(md); 1076 - if (!nr_requests_pending) 1073 + if (!md_in_flight(md)) 1077 1074 wake_up(&md->wait); 1078 1075 1079 1076 /* ··· 1082 1085 if (run_queue) { 1083 1086 if (md->queue->mq_ops) 1084 1087 blk_mq_run_hw_queues(md->queue, true); 1085 - else if (!nr_requests_pending || 1086 - (nr_requests_pending >= md->queue->nr_congestion_on)) 1088 + else 1087 1089 blk_run_queue_async(md->queue); 1088 1090 } 1089 1091 ··· 2277 2281 2278 2282 static void cleanup_mapped_device(struct mapped_device *md) 2279 2283 { 2280 - cleanup_srcu_struct(&md->io_barrier); 2281 - 2282 2284 if (md->wq) 2283 2285 destroy_workqueue(md->wq); 2284 2286 if (md->kworker_task) ··· 2287 2293 mempool_destroy(md->rq_pool); 2288 2294 if (md->bs) 2289 2295 bioset_free(md->bs); 2296 + 2297 + cleanup_srcu_struct(&md->io_barrier); 2290 2298 2291 2299 if (md->disk) { 2292 2300 spin_lock(&_minor_lock);
+3 -3
drivers/md/persistent-data/dm-btree-remove.c
··· 309 309 310 310 if (s < 0 && nr_center < -s) { 311 311 /* not enough in central node */ 312 - shift(left, center, nr_center); 313 - s = nr_center - target; 312 + shift(left, center, -nr_center); 313 + s += nr_center; 314 314 shift(left, right, s); 315 315 nr_right += s; 316 316 } else ··· 323 323 if (s > 0 && nr_center < s) { 324 324 /* not enough in central node */ 325 325 shift(center, right, nr_center); 326 - s = target - nr_center; 326 + s -= nr_center; 327 327 shift(left, right, s); 328 328 nr_left -= s; 329 329 } else
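The arithmetic change above is easier to see in a counting model. The sketch below mirrors the fixed `redistribute3()` flow using only entry counts; `shift()` here just moves counts between nodes, and the helper signatures are ours, not the kernel's:

```c
#include <assert.h>

/* count > 0 moves count entries left -> right; count < 0 moves them back */
static void shift(int *left, int *right, int count)
{
	*left -= count;
	*right += count;
}

/* Counting model of the fixed redistribute3() arithmetic */
static void redistribute3(int *nl, int *nc, int *nr)
{
	int total = *nl + *nc + *nr;
	int target = total / 3;
	int s;

	if (*nl < *nc) {
		s = *nl - target;
		if (s < 0 && *nc < -s) {
			/* not enough in central node: empty it into the left */
			int c = *nc;
			shift(nl, nc, -c);
			s += c;			/* was: s = nr_center - target */
			shift(nl, nr, s);	/* still short: pull from the right */
		} else {
			shift(nl, nc, s);
		}
		shift(nc, nr, target - *nr);
	} else {
		s = target - *nr;
		if (s > 0 && *nc < s) {
			/* not enough in central node: push it all right */
			int c = *nc;
			shift(nc, nr, c);
			s -= c;			/* was: s = target - nr_center */
			shift(nl, nr, s);	/* still short: push from the left */
		} else {
			shift(nc, nr, s);
		}
		shift(nl, nc, *nl - target);
	}
	assert(*nl + *nc + *nr == total);	/* no entries created or lost */
}
```

With the old `s = nr_center - target` form, the second shift moved the wrong number of entries whenever the center node could not cover the deficit by itself; the cumulative `s += nr_center` / `s -= nr_center` form tracks how much is still owed after the center has been drained.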
+1 -1
drivers/md/persistent-data/dm-btree.c
··· 255 255 int r; 256 256 struct del_stack *s; 257 257 258 - s = kmalloc(sizeof(*s), GFP_KERNEL); 258 + s = kmalloc(sizeof(*s), GFP_NOIO); 259 259 if (!s) 260 260 return -ENOMEM; 261 261 s->info = info;
+1 -7
drivers/memory/omap-gpmc.c
··· 2074 2074 ret = gpmc_probe_nand_child(pdev, child); 2075 2075 else if (of_node_cmp(child->name, "onenand") == 0) 2076 2076 ret = gpmc_probe_onenand_child(pdev, child); 2077 - else if (of_node_cmp(child->name, "ethernet") == 0 || 2078 - of_node_cmp(child->name, "nor") == 0 || 2079 - of_node_cmp(child->name, "uart") == 0) 2077 + else 2080 2078 ret = gpmc_probe_generic_child(pdev, child); 2081 - 2082 - if (WARN(ret < 0, "%s: probing gpmc child %s failed\n", 2083 - __func__, child->full_name)) 2084 - of_node_put(child); 2085 2079 } 2086 2080 2087 2081 return 0;
+1 -1
drivers/mfd/stmpe-i2c.c
··· 6 6 * 7 7 * License Terms: GNU General Public License, version 2 8 8 * Author: Rabin Vincent <rabin.vincent@stericsson.com> for ST-Ericsson 9 - * Author: Viresh Kumar <viresh.linux@gmail.com> for ST Microelectronics 9 + * Author: Viresh Kumar <vireshk@kernel.org> for ST Microelectronics 10 10 */ 11 11 12 12 #include <linux/i2c.h>
+2 -2
drivers/mfd/stmpe-spi.c
··· 4 4 * Copyright (C) ST Microelectronics SA 2011 5 5 * 6 6 * License Terms: GNU General Public License, version 2 7 - * Author: Viresh Kumar <viresh.linux@gmail.com> for ST Microelectronics 7 + * Author: Viresh Kumar <vireshk@kernel.org> for ST Microelectronics 8 8 */ 9 9 10 10 #include <linux/spi/spi.h> ··· 146 146 147 147 MODULE_LICENSE("GPL v2"); 148 148 MODULE_DESCRIPTION("STMPE MFD SPI Interface Driver"); 149 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 149 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
+5 -7
drivers/misc/cxl/api.c
··· 23 23 24 24 afu = cxl_pci_to_afu(dev); 25 25 26 + get_device(&afu->dev); 26 27 ctx = cxl_context_alloc(); 27 28 if (IS_ERR(ctx)) 28 29 return ctx; ··· 32 31 rc = cxl_context_init(ctx, afu, false, NULL); 33 32 if (rc) { 34 33 kfree(ctx); 34 + put_device(&afu->dev); 35 35 return ERR_PTR(-ENOMEM); 36 36 } 37 37 cxl_assign_psn_space(ctx); ··· 61 59 { 62 60 if (ctx->status != CLOSED) 63 61 return -EBUSY; 62 + 63 + put_device(&ctx->afu->dev); 64 64 65 65 cxl_context_free(ctx); 66 66 ··· 163 159 } 164 160 165 161 ctx->status = STARTED; 166 - get_device(&ctx->afu->dev); 167 162 out: 168 163 mutex_unlock(&ctx->status_mutex); 169 164 return rc; ··· 178 175 /* Stop a context. Returns 0 on success, otherwise -Errno */ 179 176 int cxl_stop_context(struct cxl_context *ctx) 180 177 { 181 - int rc; 182 - 183 - rc = __detach_context(ctx); 184 - if (!rc) 185 - put_device(&ctx->afu->dev); 186 - return rc; 178 + return __detach_context(ctx); 187 179 } 188 180 EXPORT_SYMBOL_GPL(cxl_stop_context); 189 181
+11 -3
drivers/misc/cxl/context.c
··· 113 113 114 114 if (ctx->afu->current_mode == CXL_MODE_DEDICATED) { 115 115 area = ctx->afu->psn_phys; 116 - if (offset > ctx->afu->adapter->ps_size) 116 + if (offset >= ctx->afu->adapter->ps_size) 117 117 return VM_FAULT_SIGBUS; 118 118 } else { 119 119 area = ctx->psn_phys; 120 - if (offset > ctx->psn_size) 120 + if (offset >= ctx->psn_size) 121 121 return VM_FAULT_SIGBUS; 122 122 } 123 123 ··· 145 145 */ 146 146 int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma) 147 147 { 148 + u64 start = vma->vm_pgoff << PAGE_SHIFT; 148 149 u64 len = vma->vm_end - vma->vm_start; 149 - len = min(len, ctx->psn_size); 150 + 151 + if (ctx->afu->current_mode == CXL_MODE_DEDICATED) { 152 + if (start + len > ctx->afu->adapter->ps_size) 153 + return -EINVAL; 154 + } else { 155 + if (start + len > ctx->psn_size) 156 + return -EINVAL; 157 + } 150 158 151 159 if (ctx->afu->current_mode != CXL_MODE_DEDICATED) { 152 160 /* make sure there is a valid per process space for this AFU */
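Both cxl hunks above tighten boundary conditions: a fault exactly one byte past the end of the problem-state area must now SIGBUS (`>=` instead of `>`), and an mmap request may no longer cover more than the area actually provides. A minimal model of the corrected checks, with function names that are ours rather than the driver's:

```c
#include <stdint.h>

/* A fault offset is valid only strictly inside the area; the old
 * "offset > size" check wrongly accepted offset == size. */
static int fault_in_bounds(uint64_t offset, uint64_t size)
{
	return offset < size;
}

/* An mmap request must fit entirely inside the area, taking the
 * requested file offset (vm_pgoff << PAGE_SHIFT) into account.
 * Written as a subtraction to avoid overflow in start + len. */
static int mmap_in_bounds(uint64_t start, uint64_t len, uint64_t size)
{
	return start <= size && len <= size - start;
}
```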
+1 -1
drivers/misc/cxl/main.c
··· 73 73 spin_lock(&adapter->afu_list_lock); 74 74 for (slice = 0; slice < adapter->slices; slice++) { 75 75 afu = adapter->afu[slice]; 76 - if (!afu->enabled) 76 + if (!afu || !afu->enabled) 77 77 continue; 78 78 rcu_read_lock(); 79 79 idr_for_each_entry(&afu->contexts_idr, ctx, id)
+1 -1
drivers/misc/cxl/pci.c
··· 539 539 540 540 static void cxl_unmap_slice_regs(struct cxl_afu *afu) 541 541 { 542 - if (afu->p1n_mmio) 542 + if (afu->p2n_mmio) 543 543 iounmap(afu->p2n_mmio); 544 544 if (afu->p1n_mmio) 545 545 iounmap(afu->p1n_mmio);
+2 -1
drivers/misc/cxl/vphb.c
··· 112 112 unsigned long addr; 113 113 114 114 phb = pci_bus_to_host(bus); 115 - afu = (struct cxl_afu *)phb->private_data; 116 115 if (phb == NULL) 117 116 return PCIBIOS_DEVICE_NOT_FOUND; 117 + afu = (struct cxl_afu *)phb->private_data; 118 + 118 119 if (cxl_pcie_cfg_record(bus->number, devfn) > afu->crs_num) 119 120 return PCIBIOS_DEVICE_NOT_FOUND; 120 121 if (offset >= (unsigned long)phb->cfg_data)
-16
drivers/misc/mei/bus.c
··· 552 552 schedule_work(&device->event_work); 553 553 } 554 554 555 - void mei_cl_bus_remove_devices(struct mei_device *dev) 556 - { 557 - struct mei_cl *cl, *next; 558 - 559 - mutex_lock(&dev->device_lock); 560 - list_for_each_entry_safe(cl, next, &dev->device_list, device_link) { 561 - if (cl->device) 562 - mei_cl_remove_device(cl->device); 563 - 564 - list_del(&cl->device_link); 565 - mei_cl_unlink(cl); 566 - kfree(cl); 567 - } 568 - mutex_unlock(&dev->device_lock); 569 - } 570 - 571 555 int __init mei_cl_bus_init(void) 572 556 { 573 557 return bus_register(&mei_cl_bus_type);
-2
drivers/misc/mei/init.c
··· 333 333 334 334 mei_nfc_host_exit(dev); 335 335 336 - mei_cl_bus_remove_devices(dev); 337 - 338 336 mutex_lock(&dev->device_lock); 339 337 340 338 mei_wd_stop(dev);
+2 -1
drivers/misc/mei/nfc.c
··· 402 402 403 403 cldev->priv_data = NULL; 404 404 405 - mutex_lock(&dev->device_lock); 406 405 /* Need to remove the device here 407 406 * since mei_nfc_free will unlink the clients 408 407 */ 409 408 mei_cl_remove_device(cldev); 409 + 410 + mutex_lock(&dev->device_lock); 410 411 mei_nfc_free(ndev); 411 412 mutex_unlock(&dev->device_lock); 412 413 }
+2 -2
drivers/mmc/host/sdhci-spear.c
··· 4 4 * Support of SDHCI platform devices for spear soc family 5 5 * 6 6 * Copyright (C) 2010 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * Inspired by sdhci-pltfm.c 10 10 * ··· 211 211 module_platform_driver(sdhci_driver); 212 212 213 213 MODULE_DESCRIPTION("SPEAr Secure Digital Host Controller Interface driver"); 214 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 214 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>"); 215 215 MODULE_LICENSE("GPL v2");
+34 -17
drivers/net/bonding/bond_main.c
··· 689 689 690 690 } 691 691 692 - static bool bond_should_change_active(struct bonding *bond) 692 + static struct slave *bond_choose_primary_or_current(struct bonding *bond) 693 693 { 694 694 struct slave *prim = rtnl_dereference(bond->primary_slave); 695 695 struct slave *curr = rtnl_dereference(bond->curr_active_slave); 696 696 697 - if (!prim || !curr || curr->link != BOND_LINK_UP) 698 - return true; 697 + if (!prim || prim->link != BOND_LINK_UP) { 698 + if (!curr || curr->link != BOND_LINK_UP) 699 + return NULL; 700 + return curr; 701 + } 702 + 699 703 if (bond->force_primary) { 700 704 bond->force_primary = false; 701 - return true; 705 + return prim; 702 706 } 703 - if (bond->params.primary_reselect == BOND_PRI_RESELECT_BETTER && 704 - (prim->speed < curr->speed || 705 - (prim->speed == curr->speed && prim->duplex <= curr->duplex))) 706 - return false; 707 - if (bond->params.primary_reselect == BOND_PRI_RESELECT_FAILURE) 708 - return false; 709 - return true; 707 + 708 + if (!curr || curr->link != BOND_LINK_UP) 709 + return prim; 710 + 711 + /* At this point, prim and curr are both up */ 712 + switch (bond->params.primary_reselect) { 713 + case BOND_PRI_RESELECT_ALWAYS: 714 + return prim; 715 + case BOND_PRI_RESELECT_BETTER: 716 + if (prim->speed < curr->speed) 717 + return curr; 718 + if (prim->speed == curr->speed && prim->duplex <= curr->duplex) 719 + return curr; 720 + return prim; 721 + case BOND_PRI_RESELECT_FAILURE: 722 + return curr; 723 + default: 724 + netdev_err(bond->dev, "impossible primary_reselect %d\n", 725 + bond->params.primary_reselect); 726 + return curr; 727 + } 710 728 } 711 729 712 730 /** 713 - * find_best_interface - select the best available slave to be the active one 731 + * bond_find_best_slave - select the best available slave to be the active one 714 732 * @bond: our bonding struct 715 733 */ 716 734 static struct slave *bond_find_best_slave(struct bonding *bond) 717 735 { 718 - struct slave *slave, *bestslave = NULL, *primary; 
736 + struct slave *slave, *bestslave = NULL; 719 737 struct list_head *iter; 720 738 int mintime = bond->params.updelay; 721 739 722 - primary = rtnl_dereference(bond->primary_slave); 723 - if (primary && primary->link == BOND_LINK_UP && 724 - bond_should_change_active(bond)) 725 - return primary; 740 + slave = bond_choose_primary_or_current(bond); 741 + if (slave) 742 + return slave; 726 743 727 744 bond_for_each_slave(bond, slave, iter) { 728 745 if (slave->link == BOND_LINK_UP)
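The bonding rewrite above turns a chain of early returns into an explicit decision table. The same logic can be sketched as a pure function over link state and the `primary_reselect` policy; the struct, enum, and return encoding here are ours, not bonding's:

```c
#include <stddef.h>

enum reselect { RESELECT_ALWAYS, RESELECT_BETTER, RESELECT_FAILURE };
struct link { int up; int speed; int duplex; };

/* Returns 1 to activate the primary, 2 to keep the current slave,
 * 0 if neither is usable -- mirroring the decision order in
 * bond_choose_primary_or_current(). */
static int choose(const struct link *prim, const struct link *curr,
		  enum reselect policy, int force_primary)
{
	if (!prim || !prim->up)
		return (curr && curr->up) ? 2 : 0;
	if (force_primary)
		return 1;
	if (!curr || !curr->up)
		return 1;
	/* both links are up: the policy decides */
	switch (policy) {
	case RESELECT_ALWAYS:
		return 1;
	case RESELECT_BETTER:
		if (prim->speed < curr->speed ||
		    (prim->speed == curr->speed && prim->duplex <= curr->duplex))
			return 2;
		return 1;
	default: /* RESELECT_FAILURE */
		return 2;
	}
}
```

The key behavioral fix the patch makes is visible in the first branch: when the primary is down, the current slave (if up) is kept, instead of falling through and possibly churning the active slave.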
+8 -2
drivers/net/can/c_can/c_can.c
··· 592 592 { 593 593 struct c_can_priv *priv = netdev_priv(dev); 594 594 int err; 595 + struct pinctrl *p; 595 596 596 597 /* basic c_can configuration */ 597 598 err = c_can_chip_config(dev); ··· 605 604 606 605 priv->can.state = CAN_STATE_ERROR_ACTIVE; 607 606 608 - /* activate pins */ 609 - pinctrl_pm_select_default_state(dev->dev.parent); 607 + /* Attempt to use "active" if available else use "default" */ 608 + p = pinctrl_get_select(priv->device, "active"); 609 + if (!IS_ERR(p)) 610 + pinctrl_put(p); 611 + else 612 + pinctrl_pm_select_default_state(priv->device); 613 + 610 614 return 0; 611 615 } 612 616
+2 -5
drivers/net/can/dev.c
··· 440 440 struct can_frame *cf = (struct can_frame *)skb->data; 441 441 u8 dlc = cf->can_dlc; 442 442 443 - if (!(skb->tstamp.tv64)) 444 - __net_timestamp(skb); 445 - 446 443 netif_rx(priv->echo_skb[idx]); 447 444 priv->echo_skb[idx] = NULL; 448 445 ··· 575 578 if (unlikely(!skb)) 576 579 return NULL; 577 580 578 - __net_timestamp(skb); 579 581 skb->protocol = htons(ETH_P_CAN); 580 582 skb->pkt_type = PACKET_BROADCAST; 581 583 skb->ip_summed = CHECKSUM_UNNECESSARY; ··· 585 589 586 590 can_skb_reserve(skb); 587 591 can_skb_prv(skb)->ifindex = dev->ifindex; 592 + can_skb_prv(skb)->skbcnt = 0; 588 593 589 594 *cf = (struct can_frame *)skb_put(skb, sizeof(struct can_frame)); 590 595 memset(*cf, 0, sizeof(struct can_frame)); ··· 604 607 if (unlikely(!skb)) 605 608 return NULL; 606 609 607 - __net_timestamp(skb); 608 610 skb->protocol = htons(ETH_P_CANFD); 609 611 skb->pkt_type = PACKET_BROADCAST; 610 612 skb->ip_summed = CHECKSUM_UNNECESSARY; ··· 614 618 615 619 can_skb_reserve(skb); 616 620 can_skb_prv(skb)->ifindex = dev->ifindex; 621 + can_skb_prv(skb)->skbcnt = 0; 617 622 618 623 *cfd = (struct canfd_frame *)skb_put(skb, sizeof(struct canfd_frame)); 619 624 memset(*cfd, 0, sizeof(struct canfd_frame));
+10 -6
drivers/net/can/rcar_can.c
··· 508 508 509 509 err = clk_prepare_enable(priv->clk); 510 510 if (err) { 511 - netdev_err(ndev, "failed to enable periperal clock, error %d\n", 511 + netdev_err(ndev, 512 + "failed to enable peripheral clock, error %d\n", 512 513 err); 513 514 goto out; 514 515 } ··· 527 526 napi_enable(&priv->napi); 528 527 err = request_irq(ndev->irq, rcar_can_interrupt, 0, ndev->name, ndev); 529 528 if (err) { 530 - netdev_err(ndev, "error requesting interrupt %x\n", ndev->irq); 529 + netdev_err(ndev, "request_irq(%d) failed, error %d\n", 530 + ndev->irq, err); 531 531 goto out_close; 532 532 } 533 533 can_led_event(ndev, CAN_LED_EVENT_OPEN); ··· 760 758 } 761 759 762 760 irq = platform_get_irq(pdev, 0); 763 - if (!irq) { 761 + if (irq < 0) { 764 762 dev_err(&pdev->dev, "No IRQ resource\n"); 763 + err = irq; 765 764 goto fail; 766 765 } 767 766 ··· 785 782 priv->clk = devm_clk_get(&pdev->dev, "clkp1"); 786 783 if (IS_ERR(priv->clk)) { 787 784 err = PTR_ERR(priv->clk); 788 - dev_err(&pdev->dev, "cannot get peripheral clock: %d\n", err); 785 + dev_err(&pdev->dev, "cannot get peripheral clock, error %d\n", 786 + err); 789 787 goto fail_clk; 790 788 } 791 789 ··· 798 794 priv->can_clk = devm_clk_get(&pdev->dev, clock_names[clock_select]); 799 795 if (IS_ERR(priv->can_clk)) { 800 796 err = PTR_ERR(priv->can_clk); 801 - dev_err(&pdev->dev, "cannot get CAN clock: %d\n", err); 797 + dev_err(&pdev->dev, "cannot get CAN clock, error %d\n", err); 802 798 goto fail_clk; 803 799 } 804 800 ··· 827 823 828 824 devm_can_led_init(ndev); 829 825 830 - dev_info(&pdev->dev, "device registered (reg_base=%p, irq=%u)\n", 826 + dev_info(&pdev->dev, "device registered (regs @ %p, IRQ%d)\n", 831 827 priv->regs, ndev->irq); 832 828 833 829 return 0;
+1 -1
drivers/net/can/slcan.c
··· 207 207 if (!skb) 208 208 return; 209 209 210 - __net_timestamp(skb); 211 210 skb->dev = sl->dev; 212 211 skb->protocol = htons(ETH_P_CAN); 213 212 skb->pkt_type = PACKET_BROADCAST; ··· 214 215 215 216 can_skb_reserve(skb); 216 217 can_skb_prv(skb)->ifindex = sl->dev->ifindex; 218 + can_skb_prv(skb)->skbcnt = 0; 217 219 218 220 memcpy(skb_put(skb, sizeof(struct can_frame)), 219 221 &cf, sizeof(struct can_frame));
-3
drivers/net/can/vcan.c
··· 78 78 skb->dev = dev; 79 79 skb->ip_summed = CHECKSUM_UNNECESSARY; 80 80 81 - if (!(skb->tstamp.tv64)) 82 - __net_timestamp(skb); 83 - 84 81 netif_rx_ni(skb); 85 82 } 86 83
+3 -1
drivers/net/ethernet/3com/3c59x.c
··· 2382 2382 void __iomem *ioaddr; 2383 2383 int status; 2384 2384 int work_done = max_interrupt_work; 2385 + int handled = 0; 2385 2386 2386 2387 ioaddr = vp->ioaddr; 2387 2388 ··· 2401 2400 2402 2401 if ((status & IntLatch) == 0) 2403 2402 goto handler_exit; /* No interrupt: shared IRQs can cause this */ 2403 + handled = 1; 2404 2404 2405 2405 if (status == 0xffff) { /* h/w no longer present (hotplug)? */ 2406 2406 if (vortex_debug > 1) ··· 2503 2501 handler_exit: 2504 2502 vp->handling_irq = 0; 2505 2503 spin_unlock(&vp->lock); 2506 - return IRQ_HANDLED; 2504 + return IRQ_RETVAL(handled); 2507 2505 } 2508 2506 2509 2507 static int vortex_rx(struct net_device *dev)
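The 3c59x change restores the usual shared-IRQ contract: return `IRQ_NONE` when the device did not actually raise the interrupt, so the core can detect spurious or screaming lines. A standalone model of the pattern; the enum and macro mimic the kernel's, but this is a sketch, not the driver's code:

```c
enum irqreturn { IRQ_NONE = 0, IRQ_HANDLED = 1 };
#define IRQ_RETVAL(handled) ((handled) ? IRQ_HANDLED : IRQ_NONE)

#define INT_LATCH 0x0001	/* stand-in for the 3c59x IntLatch status bit */

/* Shared-IRQ handler skeleton: claim the interrupt only after seeing
 * our device's latch bit set; otherwise report it as not ours. */
static enum irqreturn isr(unsigned int status)
{
	int handled = 0;

	if (status & INT_LATCH) {
		handled = 1;
		/* ... service the latched events ... */
	}
	return IRQ_RETVAL(handled);
}
```

Unconditionally returning `IRQ_HANDLED`, as the old code did, hides a misrouted or stuck interrupt from the kernel's spurious-IRQ detection on shared lines.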
+2 -1
drivers/net/ethernet/amd/xgbe/xgbe-desc.c
··· 303 303 get_page(pa->pages); 304 304 bd->pa = *pa; 305 305 306 - bd->dma = pa->pages_dma + pa->pages_offset; 306 + bd->dma_base = pa->pages_dma; 307 + bd->dma_off = pa->pages_offset; 307 308 bd->dma_len = len; 308 309 309 310 pa->pages_offset += len;
+7 -4
drivers/net/ethernet/amd/xgbe/xgbe-dev.c
··· 1110 1110 unsigned int rx_usecs = pdata->rx_usecs; 1111 1111 unsigned int rx_frames = pdata->rx_frames; 1112 1112 unsigned int inte; 1113 + dma_addr_t hdr_dma, buf_dma; 1113 1114 1114 1115 if (!rx_usecs && !rx_frames) { 1115 1116 /* No coalescing, interrupt for every descriptor */ ··· 1130 1129 * Set buffer 2 (hi) address to buffer dma address (hi) and 1131 1130 * set control bits OWN and INTE 1132 1131 */ 1133 - rdesc->desc0 = cpu_to_le32(lower_32_bits(rdata->rx.hdr.dma)); 1134 - rdesc->desc1 = cpu_to_le32(upper_32_bits(rdata->rx.hdr.dma)); 1135 - rdesc->desc2 = cpu_to_le32(lower_32_bits(rdata->rx.buf.dma)); 1136 - rdesc->desc3 = cpu_to_le32(upper_32_bits(rdata->rx.buf.dma)); 1132 + hdr_dma = rdata->rx.hdr.dma_base + rdata->rx.hdr.dma_off; 1133 + buf_dma = rdata->rx.buf.dma_base + rdata->rx.buf.dma_off; 1134 + rdesc->desc0 = cpu_to_le32(lower_32_bits(hdr_dma)); 1135 + rdesc->desc1 = cpu_to_le32(upper_32_bits(hdr_dma)); 1136 + rdesc->desc2 = cpu_to_le32(lower_32_bits(buf_dma)); 1137 + rdesc->desc3 = cpu_to_le32(upper_32_bits(buf_dma)); 1137 1138 1138 1139 XGMAC_SET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, INTE, inte); 1139 1140
+11 -6
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 1765 1765 /* Start with the header buffer which may contain just the header 1766 1766 * or the header plus data 1767 1767 */ 1768 - dma_sync_single_for_cpu(pdata->dev, rdata->rx.hdr.dma, 1769 - rdata->rx.hdr.dma_len, DMA_FROM_DEVICE); 1768 + dma_sync_single_range_for_cpu(pdata->dev, rdata->rx.hdr.dma_base, 1769 + rdata->rx.hdr.dma_off, 1770 + rdata->rx.hdr.dma_len, DMA_FROM_DEVICE); 1770 1771 1771 1772 packet = page_address(rdata->rx.hdr.pa.pages) + 1772 1773 rdata->rx.hdr.pa.pages_offset; ··· 1779 1778 len -= copy_len; 1780 1779 if (len) { 1781 1780 /* Add the remaining data as a frag */ 1782 - dma_sync_single_for_cpu(pdata->dev, rdata->rx.buf.dma, 1783 - rdata->rx.buf.dma_len, DMA_FROM_DEVICE); 1781 + dma_sync_single_range_for_cpu(pdata->dev, 1782 + rdata->rx.buf.dma_base, 1783 + rdata->rx.buf.dma_off, 1784 + rdata->rx.buf.dma_len, 1785 + DMA_FROM_DEVICE); 1784 1786 1785 1787 skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, 1786 1788 rdata->rx.buf.pa.pages, ··· 1949 1945 if (!skb) 1950 1946 error = 1; 1951 1947 } else if (rdesc_len) { 1952 - dma_sync_single_for_cpu(pdata->dev, 1953 - rdata->rx.buf.dma, 1948 + dma_sync_single_range_for_cpu(pdata->dev, 1949 + rdata->rx.buf.dma_base, 1950 + rdata->rx.buf.dma_off, 1954 1951 rdata->rx.buf.dma_len, 1955 1952 DMA_FROM_DEVICE); 1956 1953
+2 -1
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 337 337 struct xgbe_page_alloc pa; 338 338 struct xgbe_page_alloc pa_unmap; 339 339 340 - dma_addr_t dma; 340 + dma_addr_t dma_base; 341 + unsigned long dma_off; 341 342 unsigned int dma_len; 342 343 }; 343 344
+1 -1
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1793 1793 macaddr = of_get_mac_address(dn); 1794 1794 if (!macaddr || !is_valid_ether_addr(macaddr)) { 1795 1795 dev_warn(&pdev->dev, "using random Ethernet MAC\n"); 1796 - random_ether_addr(dev->dev_addr); 1796 + eth_hw_addr_random(dev); 1797 1797 } else { 1798 1798 ether_addr_copy(dev->dev_addr, macaddr); 1799 1799 }
-4
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1230 1230 new_skb = skb_realloc_headroom(skb, sizeof(*status)); 1231 1231 dev_kfree_skb(skb); 1232 1232 if (!new_skb) { 1233 - dev->stats.tx_errors++; 1234 1233 dev->stats.tx_dropped++; 1235 1234 return NULL; 1236 1235 } ··· 1464 1465 1465 1466 if (unlikely(!skb)) { 1466 1467 dev->stats.rx_dropped++; 1467 - dev->stats.rx_errors++; 1468 1468 goto next; 1469 1469 } 1470 1470 ··· 1491 1493 if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) { 1492 1494 netif_err(priv, rx_status, dev, 1493 1495 "dropping fragmented packet!\n"); 1494 - dev->stats.rx_dropped++; 1495 1496 dev->stats.rx_errors++; 1496 1497 dev_kfree_skb_any(skb); 1497 1498 goto next; ··· 1512 1515 dev->stats.rx_frame_errors++; 1513 1516 if (dma_flag & DMA_RX_LG) 1514 1517 dev->stats.rx_length_errors++; 1515 - dev->stats.rx_dropped++; 1516 1518 dev->stats.rx_errors++; 1517 1519 dev_kfree_skb_any(skb); 1518 1520 goto next;
-9
drivers/net/ethernet/broadcom/sb1250-mac.c
··· 1508 1508 __raw_writeq(reg, port); 1509 1509 port = s->sbm_base + R_MAC_ETHERNET_ADDR; 1510 1510 1511 - #ifdef CONFIG_SB1_PASS_1_WORKAROUNDS 1512 - /* 1513 - * Pass1 SOCs do not receive packets addressed to the 1514 - * destination address in the R_MAC_ETHERNET_ADDR register. 1515 - * Set the value to zero. 1516 - */ 1517 - __raw_writeq(0, port); 1518 - #else 1519 1511 __raw_writeq(reg, port); 1520 - #endif 1521 1512 1522 1513 /* 1523 1514 * Set the receive filter for no packets, and write values
+13 -12
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
··· 952 952 * eventually have to put a format interpreter in here ... 953 953 */ 954 954 seq_printf(seq, "%10d %15llu %8s %8s ", 955 - e->seqno, e->timestamp, 955 + be32_to_cpu(e->seqno), 956 + be64_to_cpu(e->timestamp), 956 957 (e->level < ARRAY_SIZE(devlog_level_strings) 957 958 ? devlog_level_strings[e->level] 958 959 : "UNKNOWN"), 959 960 (e->facility < ARRAY_SIZE(devlog_facility_strings) 960 961 ? devlog_facility_strings[e->facility] 961 962 : "UNKNOWN")); 962 - seq_printf(seq, e->fmt, e->params[0], e->params[1], 963 - e->params[2], e->params[3], e->params[4], 964 - e->params[5], e->params[6], e->params[7]); 963 + seq_printf(seq, e->fmt, 964 + be32_to_cpu(e->params[0]), 965 + be32_to_cpu(e->params[1]), 966 + be32_to_cpu(e->params[2]), 967 + be32_to_cpu(e->params[3]), 968 + be32_to_cpu(e->params[4]), 969 + be32_to_cpu(e->params[5]), 970 + be32_to_cpu(e->params[6]), 971 + be32_to_cpu(e->params[7])); 965 972 } 966 973 return 0; 967 974 } ··· 1050 1043 return ret; 1051 1044 } 1052 1045 1053 - /* Translate log multi-byte integral elements into host native format 1054 - * and determine where the first entry in the log is. 1046 + /* Find the earliest (lowest Sequence Number) log entry in the 1047 + * circular Device Log. 1055 1048 */ 1056 1049 for (fseqno = ~((u32)0), index = 0; index < dinfo->nentries; index++) { 1057 1050 struct fw_devlog_e *e = &dinfo->log[index]; 1058 - int i; 1059 1051 __u32 seqno; 1060 1052 1061 1053 if (e->timestamp == 0) 1062 1054 continue; 1063 1055 1064 - e->timestamp = (__force __be64)be64_to_cpu(e->timestamp); 1065 1056 seqno = be32_to_cpu(e->seqno); 1066 - for (i = 0; i < 8; i++) 1067 - e->params[i] = 1068 - (__force __be32)be32_to_cpu(e->params[i]); 1069 - 1070 1057 if (seqno < fseqno) { 1071 1058 fseqno = seqno; 1072 1059 dinfo->first = index;
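The cxgb4 debugfs fix stops converting the firmware log in place and instead byte-swaps each big-endian field at display time, so the buffer always stays in firmware (big-endian) layout and is never double-converted. A portable stand-in for the read-time `be32_to_cpu()` step; the helper name is ours:

```c
#include <stdint.h>

/* Assemble a host-order value from 4 big-endian bytes, regardless of
 * host endianness -- the read-time equivalent of be32_to_cpu(). */
static uint32_t be32_read(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```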
+2 -2
drivers/net/ethernet/cisco/enic/enic_main.c
··· 1170 1170 wq_work_done, 1171 1171 0 /* dont unmask intr */, 1172 1172 0 /* dont reset intr timer */); 1173 - return rq_work_done; 1173 + return budget; 1174 1174 } 1175 1175 1176 1176 if (budget > 0) ··· 1191 1191 0 /* don't reset intr timer */); 1192 1192 1193 1193 err = vnic_rq_fill(&enic->rq[0], enic_rq_alloc_buf); 1194 + enic_poll_unlock_napi(&enic->rq[cq_rq], napi); 1194 1195 1195 1196 /* Buffer allocation failed. Stay in polling 1196 1197 * mode so we can try to fill the ring again. ··· 1209 1208 napi_complete(napi); 1210 1209 vnic_intr_unmask(&enic->intr[intr]); 1211 1210 } 1212 - enic_poll_unlock_napi(&enic->rq[cq_rq], napi); 1213 1211 1214 1212 return rq_work_done; 1215 1213 }
+75 -13
drivers/net/ethernet/freescale/fec_main.c
··· 24 24 #include <linux/module.h> 25 25 #include <linux/kernel.h> 26 26 #include <linux/string.h> 27 + #include <linux/pm_runtime.h> 27 28 #include <linux/ptrace.h> 28 29 #include <linux/errno.h> 29 30 #include <linux/ioport.h> ··· 78 77 #define FEC_ENET_RAEM_V 0x8 79 78 #define FEC_ENET_RAFL_V 0x8 80 79 #define FEC_ENET_OPD_V 0xFFF0 80 + #define FEC_MDIO_PM_TIMEOUT 100 /* ms */ 81 81 82 82 static struct platform_device_id fec_devtype[] = { 83 83 { ··· 1769 1767 static int fec_enet_mdio_read(struct mii_bus *bus, int mii_id, int regnum) 1770 1768 { 1771 1769 struct fec_enet_private *fep = bus->priv; 1770 + struct device *dev = &fep->pdev->dev; 1772 1771 unsigned long time_left; 1772 + int ret = 0; 1773 + 1774 + ret = pm_runtime_get_sync(dev); 1775 + if (IS_ERR_VALUE(ret)) 1776 + return ret; 1773 1777 1774 1778 fep->mii_timeout = 0; 1775 1779 init_completion(&fep->mdio_done); ··· 1791 1783 if (time_left == 0) { 1792 1784 fep->mii_timeout = 1; 1793 1785 netdev_err(fep->netdev, "MDIO read timeout\n"); 1794 - return -ETIMEDOUT; 1786 + ret = -ETIMEDOUT; 1787 + goto out; 1795 1788 } 1796 1789 1797 - /* return value */ 1798 - return FEC_MMFR_DATA(readl(fep->hwp + FEC_MII_DATA)); 1790 + ret = FEC_MMFR_DATA(readl(fep->hwp + FEC_MII_DATA)); 1791 + 1792 + out: 1793 + pm_runtime_mark_last_busy(dev); 1794 + pm_runtime_put_autosuspend(dev); 1795 + 1796 + return ret; 1799 1797 } 1800 1798 1801 1799 static int fec_enet_mdio_write(struct mii_bus *bus, int mii_id, int regnum, 1802 1800 u16 value) 1803 1801 { 1804 1802 struct fec_enet_private *fep = bus->priv; 1803 + struct device *dev = &fep->pdev->dev; 1805 1804 unsigned long time_left; 1805 + int ret = 0; 1806 + 1807 + ret = pm_runtime_get_sync(dev); 1808 + if (IS_ERR_VALUE(ret)) 1809 + return ret; 1806 1810 1807 1811 fep->mii_timeout = 0; 1808 1812 init_completion(&fep->mdio_done); ··· 1831 1811 if (time_left == 0) { 1832 1812 fep->mii_timeout = 1; 1833 1813 netdev_err(fep->netdev, "MDIO write timeout\n"); 1834 - return 
-ETIMEDOUT; 1814 + ret = -ETIMEDOUT; 1835 1815 } 1836 1816 1837 - return 0; 1817 + pm_runtime_mark_last_busy(dev); 1818 + pm_runtime_put_autosuspend(dev); 1819 + 1820 + return ret; 1838 1821 } 1839 1822 1840 1823 static int fec_enet_clk_enable(struct net_device *ndev, bool enable) ··· 1849 1826 ret = clk_prepare_enable(fep->clk_ahb); 1850 1827 if (ret) 1851 1828 return ret; 1852 - ret = clk_prepare_enable(fep->clk_ipg); 1853 - if (ret) 1854 - goto failed_clk_ipg; 1855 1829 if (fep->clk_enet_out) { 1856 1830 ret = clk_prepare_enable(fep->clk_enet_out); 1857 1831 if (ret) ··· 1872 1852 } 1873 1853 } else { 1874 1854 clk_disable_unprepare(fep->clk_ahb); 1875 - clk_disable_unprepare(fep->clk_ipg); 1876 1855 if (fep->clk_enet_out) 1877 1856 clk_disable_unprepare(fep->clk_enet_out); 1878 1857 if (fep->clk_ptp) { ··· 1893 1874 if (fep->clk_enet_out) 1894 1875 clk_disable_unprepare(fep->clk_enet_out); 1895 1876 failed_clk_enet_out: 1896 - clk_disable_unprepare(fep->clk_ipg); 1897 - failed_clk_ipg: 1898 1877 clk_disable_unprepare(fep->clk_ahb); 1899 1878 1900 1879 return ret; ··· 2864 2847 struct fec_enet_private *fep = netdev_priv(ndev); 2865 2848 int ret; 2866 2849 2850 + ret = pm_runtime_get_sync(&fep->pdev->dev); 2851 + if (IS_ERR_VALUE(ret)) 2852 + return ret; 2853 + 2867 2854 pinctrl_pm_select_default_state(&fep->pdev->dev); 2868 2855 ret = fec_enet_clk_enable(ndev, true); 2869 2856 if (ret) 2870 - return ret; 2857 + goto clk_enable; 2871 2858 2872 2859 /* I should reset the ring buffers here, but I don't yet know 2873 2860 * a simple way to do that. 
··· 2902 2881 fec_enet_free_buffers(ndev); 2903 2882 err_enet_alloc: 2904 2883 fec_enet_clk_enable(ndev, false); 2884 + clk_enable: 2885 + pm_runtime_mark_last_busy(&fep->pdev->dev); 2886 + pm_runtime_put_autosuspend(&fep->pdev->dev); 2905 2887 pinctrl_pm_select_sleep_state(&fep->pdev->dev); 2906 2888 return ret; 2907 2889 } ··· 2927 2903 2928 2904 fec_enet_clk_enable(ndev, false); 2929 2905 pinctrl_pm_select_sleep_state(&fep->pdev->dev); 2906 + pm_runtime_mark_last_busy(&fep->pdev->dev); 2907 + pm_runtime_put_autosuspend(&fep->pdev->dev); 2908 + 2930 2909 fec_enet_free_buffers(ndev); 2931 2910 2932 2911 return 0; ··· 3415 3388 if (ret) 3416 3389 goto failed_clk; 3417 3390 3391 + ret = clk_prepare_enable(fep->clk_ipg); 3392 + if (ret) 3393 + goto failed_clk_ipg; 3394 + 3418 3395 fep->reg_phy = devm_regulator_get(&pdev->dev, "phy"); 3419 3396 if (!IS_ERR(fep->reg_phy)) { 3420 3397 ret = regulator_enable(fep->reg_phy); ··· 3465 3434 netif_carrier_off(ndev); 3466 3435 fec_enet_clk_enable(ndev, false); 3467 3436 pinctrl_pm_select_sleep_state(&pdev->dev); 3437 + pm_runtime_set_active(&pdev->dev); 3438 + pm_runtime_enable(&pdev->dev); 3468 3439 3469 3440 ret = register_netdev(ndev); 3470 3441 if (ret) ··· 3480 3447 3481 3448 fep->rx_copybreak = COPYBREAK_DEFAULT; 3482 3449 INIT_WORK(&fep->tx_timeout_work, fec_enet_timeout_work); 3450 + 3451 + pm_runtime_set_autosuspend_delay(&pdev->dev, FEC_MDIO_PM_TIMEOUT); 3452 + pm_runtime_use_autosuspend(&pdev->dev); 3453 + pm_runtime_mark_last_busy(&pdev->dev); 3454 + pm_runtime_put_autosuspend(&pdev->dev); 3455 + 3483 3456 return 0; 3484 3457 3485 3458 failed_register: ··· 3496 3457 if (fep->reg_phy) 3497 3458 regulator_disable(fep->reg_phy); 3498 3459 failed_regulator: 3460 + clk_disable_unprepare(fep->clk_ipg); 3461 + failed_clk_ipg: 3499 3462 fec_enet_clk_enable(ndev, false); 3500 3463 failed_clk: 3501 3464 failed_phy: ··· 3609 3568 return ret; 3610 3569 } 3611 3570 3612 - static SIMPLE_DEV_PM_OPS(fec_pm_ops, fec_suspend, 
fec_resume); 3571 + static int __maybe_unused fec_runtime_suspend(struct device *dev) 3572 + { 3573 + struct net_device *ndev = dev_get_drvdata(dev); 3574 + struct fec_enet_private *fep = netdev_priv(ndev); 3575 + 3576 + clk_disable_unprepare(fep->clk_ipg); 3577 + 3578 + return 0; 3579 + } 3580 + 3581 + static int __maybe_unused fec_runtime_resume(struct device *dev) 3582 + { 3583 + struct net_device *ndev = dev_get_drvdata(dev); 3584 + struct fec_enet_private *fep = netdev_priv(ndev); 3585 + 3586 + return clk_prepare_enable(fep->clk_ipg); 3587 + } 3588 + 3589 + static const struct dev_pm_ops fec_pm_ops = { 3590 + SET_SYSTEM_SLEEP_PM_OPS(fec_suspend, fec_resume) 3591 + SET_RUNTIME_PM_OPS(fec_runtime_suspend, fec_runtime_resume, NULL) 3592 + }; 3613 3593 3614 3594 static struct platform_driver fec_driver = { 3615 3595 .driver = {
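The fec patch moves the MDIO (ipg) clock under runtime PM: each bus accessor takes a usage reference with `pm_runtime_get_sync()` and drops it with `pm_runtime_put_autosuspend()`, so the clock is held only while the bus is in use. A toy usage-count model of that pattern; note the real PM core defers the suspend by the autosuspend delay, whereas this sketch suspends immediately, and all names here are ours:

```c
/* Toy runtime-PM usage counter: "powered" stands in for the ipg clock */
struct rpm_dev {
	int usage;
	int powered;
};

/* pm_runtime_get_sync() analogue: the first user powers the block on */
static int rpm_get_sync(struct rpm_dev *d)
{
	if (d->usage++ == 0)
		d->powered = 1;	/* runtime resume -> clk_prepare_enable() */
	return 0;
}

/* pm_runtime_put_autosuspend() analogue: the last user powers it off
 * (immediately here; the real core waits out the autosuspend delay) */
static void rpm_put_autosuspend(struct rpm_dev *d)
{
	if (--d->usage == 0)
		d->powered = 0;	/* runtime suspend -> clk_disable_unprepare() */
}
```

Balanced get/put pairs are the whole contract: as long as any accessor holds a reference the clock stays on, and only when the count reaches zero may the suspend callback gate it.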
+145 -27
drivers/net/ethernet/sfc/ef10.c
··· 101 101 return resource_size(&efx->pci_dev->resource[bar]); 102 102 } 103 103 104 + static bool efx_ef10_is_vf(struct efx_nic *efx) 105 + { 106 + return efx->type->is_vf; 107 + } 108 + 104 109 static int efx_ef10_get_pf_index(struct efx_nic *efx) 105 110 { 106 111 MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_FUNCTION_INFO_OUT_LEN); ··· 680 675 static int efx_ef10_probe_pf(struct efx_nic *efx) 681 676 { 682 677 return efx_ef10_probe(efx); 678 + } 679 + 680 + int efx_ef10_vadaptor_alloc(struct efx_nic *efx, unsigned int port_id) 681 + { 682 + MCDI_DECLARE_BUF(inbuf, MC_CMD_VADAPTOR_ALLOC_IN_LEN); 683 + 684 + MCDI_SET_DWORD(inbuf, VADAPTOR_ALLOC_IN_UPSTREAM_PORT_ID, port_id); 685 + return efx_mcdi_rpc(efx, MC_CMD_VADAPTOR_ALLOC, inbuf, sizeof(inbuf), 686 + NULL, 0, NULL); 687 + } 688 + 689 + int efx_ef10_vadaptor_free(struct efx_nic *efx, unsigned int port_id) 690 + { 691 + MCDI_DECLARE_BUF(inbuf, MC_CMD_VADAPTOR_FREE_IN_LEN); 692 + 693 + MCDI_SET_DWORD(inbuf, VADAPTOR_FREE_IN_UPSTREAM_PORT_ID, port_id); 694 + return efx_mcdi_rpc(efx, MC_CMD_VADAPTOR_FREE, inbuf, sizeof(inbuf), 695 + NULL, 0, NULL); 696 + } 697 + 698 + int efx_ef10_vport_add_mac(struct efx_nic *efx, 699 + unsigned int port_id, u8 *mac) 700 + { 701 + MCDI_DECLARE_BUF(inbuf, MC_CMD_VPORT_ADD_MAC_ADDRESS_IN_LEN); 702 + 703 + MCDI_SET_DWORD(inbuf, VPORT_ADD_MAC_ADDRESS_IN_VPORT_ID, port_id); 704 + ether_addr_copy(MCDI_PTR(inbuf, VPORT_ADD_MAC_ADDRESS_IN_MACADDR), mac); 705 + 706 + return efx_mcdi_rpc(efx, MC_CMD_VPORT_ADD_MAC_ADDRESS, inbuf, 707 + sizeof(inbuf), NULL, 0, NULL); 708 + } 709 + 710 + int efx_ef10_vport_del_mac(struct efx_nic *efx, 711 + unsigned int port_id, u8 *mac) 712 + { 713 + MCDI_DECLARE_BUF(inbuf, MC_CMD_VPORT_DEL_MAC_ADDRESS_IN_LEN); 714 + 715 + MCDI_SET_DWORD(inbuf, VPORT_DEL_MAC_ADDRESS_IN_VPORT_ID, port_id); 716 + ether_addr_copy(MCDI_PTR(inbuf, VPORT_DEL_MAC_ADDRESS_IN_MACADDR), mac); 717 + 718 + return efx_mcdi_rpc(efx, MC_CMD_VPORT_DEL_MAC_ADDRESS, inbuf, 719 + sizeof(inbuf), NULL, 
0, NULL); 683 720 } 684 721 685 722 #ifdef CONFIG_SFC_SRIOV ··· 3851 3804 WARN_ON(remove_failed); 3852 3805 } 3853 3806 3807 + static int efx_ef10_vport_set_mac_address(struct efx_nic *efx) 3808 + { 3809 + struct efx_ef10_nic_data *nic_data = efx->nic_data; 3810 + u8 mac_old[ETH_ALEN]; 3811 + int rc, rc2; 3812 + 3813 + /* Only reconfigure a PF-created vport */ 3814 + if (is_zero_ether_addr(nic_data->vport_mac)) 3815 + return 0; 3816 + 3817 + efx_device_detach_sync(efx); 3818 + efx_net_stop(efx->net_dev); 3819 + down_write(&efx->filter_sem); 3820 + efx_ef10_filter_table_remove(efx); 3821 + up_write(&efx->filter_sem); 3822 + 3823 + rc = efx_ef10_vadaptor_free(efx, nic_data->vport_id); 3824 + if (rc) 3825 + goto restore_filters; 3826 + 3827 + ether_addr_copy(mac_old, nic_data->vport_mac); 3828 + rc = efx_ef10_vport_del_mac(efx, nic_data->vport_id, 3829 + nic_data->vport_mac); 3830 + if (rc) 3831 + goto restore_vadaptor; 3832 + 3833 + rc = efx_ef10_vport_add_mac(efx, nic_data->vport_id, 3834 + efx->net_dev->dev_addr); 3835 + if (!rc) { 3836 + ether_addr_copy(nic_data->vport_mac, efx->net_dev->dev_addr); 3837 + } else { 3838 + rc2 = efx_ef10_vport_add_mac(efx, nic_data->vport_id, mac_old); 3839 + if (rc2) { 3840 + /* Failed to add original MAC, so clear vport_mac */ 3841 + eth_zero_addr(nic_data->vport_mac); 3842 + goto reset_nic; 3843 + } 3844 + } 3845 + 3846 + restore_vadaptor: 3847 + rc2 = efx_ef10_vadaptor_alloc(efx, nic_data->vport_id); 3848 + if (rc2) 3849 + goto reset_nic; 3850 + restore_filters: 3851 + down_write(&efx->filter_sem); 3852 + rc2 = efx_ef10_filter_table_probe(efx); 3853 + up_write(&efx->filter_sem); 3854 + if (rc2) 3855 + goto reset_nic; 3856 + 3857 + rc2 = efx_net_open(efx->net_dev); 3858 + if (rc2) 3859 + goto reset_nic; 3860 + 3861 + netif_device_attach(efx->net_dev); 3862 + 3863 + return rc; 3864 + 3865 + reset_nic: 3866 + netif_err(efx, drv, efx->net_dev, 3867 + "Failed to restore when changing MAC address - scheduling reset\n"); 3868 + 
efx_schedule_reset(efx, RESET_TYPE_DATAPATH); 3869 + 3870 + return rc ? rc : rc2; 3871 + } 3872 + 3854 3873 static int efx_ef10_set_mac_address(struct efx_nic *efx) 3855 3874 { 3856 3875 MCDI_DECLARE_BUF(inbuf, MC_CMD_VADAPTOR_SET_MAC_IN_LEN); ··· 3933 3820 efx->net_dev->dev_addr); 3934 3821 MCDI_SET_DWORD(inbuf, VADAPTOR_SET_MAC_IN_UPSTREAM_PORT_ID, 3935 3822 nic_data->vport_id); 3936 - rc = efx_mcdi_rpc(efx, MC_CMD_VADAPTOR_SET_MAC, inbuf, 3937 - sizeof(inbuf), NULL, 0, NULL); 3823 + rc = efx_mcdi_rpc_quiet(efx, MC_CMD_VADAPTOR_SET_MAC, inbuf, 3824 + sizeof(inbuf), NULL, 0, NULL); 3938 3825 3939 3826 efx_ef10_filter_table_probe(efx); 3940 3827 up_write(&efx->filter_sem); ··· 3942 3829 efx_net_open(efx->net_dev); 3943 3830 netif_device_attach(efx->net_dev); 3944 3831 3945 - #if !defined(CONFIG_SFC_SRIOV) 3946 - if (rc == -EPERM) 3947 - netif_err(efx, drv, efx->net_dev, 3948 - "Cannot change MAC address; use sfboot to enable mac-spoofing" 3949 - " on this interface\n"); 3950 - #else 3951 - if (rc == -EPERM) { 3832 + #ifdef CONFIG_SFC_SRIOV 3833 + if (efx->pci_dev->is_virtfn && efx->pci_dev->physfn) { 3952 3834 struct pci_dev *pci_dev_pf = efx->pci_dev->physfn; 3953 3835 3954 - /* Switch to PF and change MAC address on vport */ 3955 - if (efx->pci_dev->is_virtfn && pci_dev_pf) { 3956 - struct efx_nic *efx_pf = pci_get_drvdata(pci_dev_pf); 3836 + if (rc == -EPERM) { 3837 + struct efx_nic *efx_pf; 3957 3838 3958 - if (!efx_ef10_sriov_set_vf_mac(efx_pf, 3839 + /* Switch to PF and change MAC address on vport */ 3840 + efx_pf = pci_get_drvdata(pci_dev_pf); 3841 + 3842 + rc = efx_ef10_sriov_set_vf_mac(efx_pf, 3959 3843 nic_data->vf_index, 3960 - efx->net_dev->dev_addr)) 3961 - return 0; 3962 - } 3963 - netif_err(efx, drv, efx->net_dev, 3964 - "Cannot change MAC address; use sfboot to enable mac-spoofing" 3965 - " on this interface\n"); 3966 - } else if (efx->pci_dev->is_virtfn) { 3967 - /* Successfully changed by VF (with MAC spoofing), so update the 3968 - * parent PF if 
possible. 3969 - */ 3970 - struct pci_dev *pci_dev_pf = efx->pci_dev->physfn; 3971 - 3972 - if (pci_dev_pf) { 3844 + efx->net_dev->dev_addr); 3845 + } else if (!rc) { 3973 3846 struct efx_nic *efx_pf = pci_get_drvdata(pci_dev_pf); 3974 3847 struct efx_ef10_nic_data *nic_data = efx_pf->nic_data; 3975 3848 unsigned int i; 3976 3849 3850 + /* MAC address successfully changed by VF (with MAC 3851 + * spoofing) so update the parent PF if possible. 3852 + */ 3977 3853 for (i = 0; i < efx_pf->vf_count; ++i) { 3978 3854 struct ef10_vf *vf = nic_data->vf + i; 3979 3855 ··· 3973 3871 } 3974 3872 } 3975 3873 } 3976 - } 3874 + } else 3977 3875 #endif 3876 + if (rc == -EPERM) { 3877 + netif_err(efx, drv, efx->net_dev, 3878 + "Cannot change MAC address; use sfboot to enable" 3879 + " mac-spoofing on this interface\n"); 3880 + } else if (rc == -ENOSYS && !efx_ef10_is_vf(efx)) { 3881 + /* If the active MCFW does not support MC_CMD_VADAPTOR_SET_MAC 3882 + * fall-back to the method of changing the MAC address on the 3883 + * vport. This only applies to PFs because such versions of 3884 + * MCFW do not support VFs. 3885 + */ 3886 + rc = efx_ef10_vport_set_mac_address(efx); 3887 + } else { 3888 + efx_mcdi_display_error(efx, MC_CMD_VADAPTOR_SET_MAC, 3889 + sizeof(inbuf), NULL, 0, rc); 3890 + } 3891 + 3978 3892 return rc; 3979 3893 } 3980 3894
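The `efx_ef10_vport_set_mac_address()` fallback above follows a rollback pattern: remove the old MAC, try to add the new one, re-add the old one if that fails, and schedule a datapath reset only when the rollback itself fails. A minimal userspace sketch of that rc/rc2 escalation (all `toy_*` names are illustrative, not sfc or kernel API):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical return codes standing in for MCDI errors. */
#define OK     0
#define EFAIL -1

/* Toy "vport" with a single MAC slot; fail_add forces failures for testing. */
struct toy_vport {
    char mac[18];
    int  fail_add;     /* number of upcoming add calls that should fail */
    int  needs_reset;
};

static int toy_del_mac(struct toy_vport *v)
{
    v->mac[0] = '\0';
    return OK;
}

static int toy_add_mac(struct toy_vport *v, const char *mac)
{
    if (v->fail_add-- > 0)
        return EFAIL;
    strncpy(v->mac, mac, sizeof(v->mac) - 1);
    v->mac[sizeof(v->mac) - 1] = '\0';
    return OK;
}

/* Mirrors the rc/rc2 shape of efx_ef10_vport_set_mac_address(): keep the
 * first failure in rc, attempt rollback with rc2, and only escalate to a
 * full reset when the rollback also fails. */
static int toy_set_mac(struct toy_vport *v, const char *new_mac)
{
    char old[18];
    int rc, rc2;

    strcpy(old, v->mac);
    rc = toy_del_mac(v);
    if (rc)
        return rc;

    rc = toy_add_mac(v, new_mac);
    if (!rc)
        return OK;

    rc2 = toy_add_mac(v, old);  /* try to restore the original MAC */
    if (rc2)
        v->needs_reset = 1;     /* rollback failed: reset the datapath */

    return rc;                  /* report the original failure */
}
```

The driver returns the first error (`rc`) rather than the rollback error (`rc2`) so the caller learns why the requested change failed, not why recovery did.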
+11 -48
drivers/net/ethernet/sfc/ef10_sriov.c
··· 29 29 NULL, 0, NULL); 30 30 } 31 31 32 - static int efx_ef10_vport_add_mac(struct efx_nic *efx, 33 - unsigned int port_id, u8 *mac) 34 - { 35 - MCDI_DECLARE_BUF(inbuf, MC_CMD_VPORT_ADD_MAC_ADDRESS_IN_LEN); 36 - 37 - MCDI_SET_DWORD(inbuf, VPORT_ADD_MAC_ADDRESS_IN_VPORT_ID, port_id); 38 - ether_addr_copy(MCDI_PTR(inbuf, VPORT_ADD_MAC_ADDRESS_IN_MACADDR), mac); 39 - 40 - return efx_mcdi_rpc(efx, MC_CMD_VPORT_ADD_MAC_ADDRESS, inbuf, 41 - sizeof(inbuf), NULL, 0, NULL); 42 - } 43 - 44 - static int efx_ef10_vport_del_mac(struct efx_nic *efx, 45 - unsigned int port_id, u8 *mac) 46 - { 47 - MCDI_DECLARE_BUF(inbuf, MC_CMD_VPORT_DEL_MAC_ADDRESS_IN_LEN); 48 - 49 - MCDI_SET_DWORD(inbuf, VPORT_DEL_MAC_ADDRESS_IN_VPORT_ID, port_id); 50 - ether_addr_copy(MCDI_PTR(inbuf, VPORT_DEL_MAC_ADDRESS_IN_MACADDR), mac); 51 - 52 - return efx_mcdi_rpc(efx, MC_CMD_VPORT_DEL_MAC_ADDRESS, inbuf, 53 - sizeof(inbuf), NULL, 0, NULL); 54 - } 55 - 56 32 static int efx_ef10_vswitch_alloc(struct efx_nic *efx, unsigned int port_id, 57 33 unsigned int vswitch_type) 58 34 { ··· 109 133 MCDI_SET_DWORD(inbuf, VPORT_FREE_IN_VPORT_ID, port_id); 110 134 111 135 return efx_mcdi_rpc(efx, MC_CMD_VPORT_FREE, inbuf, sizeof(inbuf), 112 - NULL, 0, NULL); 113 - } 114 - 115 - static int efx_ef10_vadaptor_alloc(struct efx_nic *efx, unsigned int port_id) 116 - { 117 - MCDI_DECLARE_BUF(inbuf, MC_CMD_VADAPTOR_ALLOC_IN_LEN); 118 - 119 - MCDI_SET_DWORD(inbuf, VADAPTOR_ALLOC_IN_UPSTREAM_PORT_ID, port_id); 120 - return efx_mcdi_rpc(efx, MC_CMD_VADAPTOR_ALLOC, inbuf, sizeof(inbuf), 121 - NULL, 0, NULL); 122 - } 123 - 124 - static int efx_ef10_vadaptor_free(struct efx_nic *efx, unsigned int port_id) 125 - { 126 - MCDI_DECLARE_BUF(inbuf, MC_CMD_VADAPTOR_FREE_IN_LEN); 127 - 128 - MCDI_SET_DWORD(inbuf, VADAPTOR_FREE_IN_UPSTREAM_PORT_ID, port_id); 129 - return efx_mcdi_rpc(efx, MC_CMD_VADAPTOR_FREE, inbuf, sizeof(inbuf), 130 136 NULL, 0, NULL); 131 137 } 132 138 ··· 598 640 MC_CMD_VPORT_ALLOC_IN_VPORT_TYPE_NORMAL, 599 641 
vf->vlan, &vf->vport_id); 600 642 if (rc) 601 - goto reset_nic; 643 + goto reset_nic_up_write; 602 644 603 645 restore_mac: 604 646 if (!is_zero_ether_addr(vf->mac)) { 605 647 rc2 = efx_ef10_vport_add_mac(efx, vf->vport_id, vf->mac); 606 648 if (rc2) { 607 649 eth_zero_addr(vf->mac); 608 - goto reset_nic; 650 + goto reset_nic_up_write; 609 651 } 610 652 } 611 653 612 654 restore_evb_port: 613 655 rc2 = efx_ef10_evb_port_assign(efx, vf->vport_id, vf_i); 614 656 if (rc2) 615 - goto reset_nic; 657 + goto reset_nic_up_write; 616 658 else 617 659 vf->vport_assigned = 1; 618 660 ··· 620 662 if (vf->efx) { 621 663 rc2 = efx_ef10_vadaptor_alloc(vf->efx, EVB_PORT_ID_ASSIGNED); 622 664 if (rc2) 623 - goto reset_nic; 665 + goto reset_nic_up_write; 624 666 } 625 667 626 668 restore_filters: 627 669 if (vf->efx) { 628 670 rc2 = vf->efx->type->filter_table_probe(vf->efx); 629 671 if (rc2) 630 - goto reset_nic; 672 + goto reset_nic_up_write; 673 + 674 + up_write(&vf->efx->filter_sem); 631 675 632 676 up_write(&vf->efx->filter_sem); 633 677 ··· 641 681 } 642 682 return rc; 643 683 684 + reset_nic_up_write: 685 + if (vf->efx) 686 + up_write(&vf->efx->filter_sem); 687 + 644 688 reset_nic: 645 689 if (vf->efx) { 646 - up_write(&vf->efx->filter_sem); 647 690 netif_err(efx, drv, efx->net_dev, 648 691 "Failed to restore VF - scheduling reset.\n"); 649 692 efx_schedule_reset(vf->efx, RESET_TYPE_DATAPATH);
+6
drivers/net/ethernet/sfc/ef10_sriov.h
··· 65 65 int efx_ef10_vswitching_restore_vf(struct efx_nic *efx); 66 66 void efx_ef10_vswitching_remove_pf(struct efx_nic *efx); 67 67 void efx_ef10_vswitching_remove_vf(struct efx_nic *efx); 68 + int efx_ef10_vport_add_mac(struct efx_nic *efx, 69 + unsigned int port_id, u8 *mac); 70 + int efx_ef10_vport_del_mac(struct efx_nic *efx, 71 + unsigned int port_id, u8 *mac); 72 + int efx_ef10_vadaptor_alloc(struct efx_nic *efx, unsigned int port_id); 73 + int efx_ef10_vadaptor_free(struct efx_nic *efx, unsigned int port_id); 68 74 69 75 #endif /* EF10_SRIOV_H */
+14
drivers/net/ethernet/sfc/efx.c
··· 245 245 */ 246 246 static int efx_process_channel(struct efx_channel *channel, int budget) 247 247 { 248 + struct efx_tx_queue *tx_queue; 248 249 int spent; 249 250 250 251 if (unlikely(!channel->enabled)) 251 252 return 0; 253 + 254 + efx_for_each_channel_tx_queue(tx_queue, channel) { 255 + tx_queue->pkts_compl = 0; 256 + tx_queue->bytes_compl = 0; 257 + } 252 258 253 259 spent = efx_nic_process_eventq(channel, budget); 254 260 if (spent && efx_channel_has_rx_queue(channel)) { ··· 263 257 264 258 efx_rx_flush_packet(channel); 265 259 efx_fast_push_rx_descriptors(rx_queue, true); 260 + } 261 + 262 + /* Update BQL */ 263 + efx_for_each_channel_tx_queue(tx_queue, channel) { 264 + if (tx_queue->bytes_compl) { 265 + netdev_tx_completed_queue(tx_queue->core_txq, 266 + tx_queue->pkts_compl, tx_queue->bytes_compl); 267 + } 266 268 } 267 269 268 270 return spent;
+2
drivers/net/ethernet/sfc/net_driver.h
··· 241 241 unsigned int read_count ____cacheline_aligned_in_smp; 242 242 unsigned int old_write_count; 243 243 unsigned int merge_events; 244 + unsigned int bytes_compl; 245 + unsigned int pkts_compl; 244 246 245 247 /* Members used only on the xmit path */ 246 248 unsigned int insert_count ____cacheline_aligned_in_smp;
+2 -1
drivers/net/ethernet/sfc/tx.c
··· 617 617 EFX_BUG_ON_PARANOID(index > tx_queue->ptr_mask); 618 618 619 619 efx_dequeue_buffers(tx_queue, index, &pkts_compl, &bytes_compl); 620 - netdev_tx_completed_queue(tx_queue->core_txq, pkts_compl, bytes_compl); 620 + tx_queue->pkts_compl += pkts_compl; 621 + tx_queue->bytes_compl += bytes_compl; 621 622 622 623 if (pkts_compl > 1) 623 624 ++tx_queue->merge_events;
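The efx.c/net_driver.h/tx.c hunks together batch byte-queue-limit (BQL) updates: per-queue `pkts_compl`/`bytes_compl` accumulators are zeroed at the top of each NAPI poll, incremented as TX completions are dequeued, and flushed with a single `netdev_tx_completed_queue()` call at the end of the poll. A self-contained sketch of that accumulate-then-flush shape (toy types, not the sfc structures):

```c
#include <assert.h>

/* Toy TX queue with the per-poll accumulators the patch adds. */
struct toy_txq {
    unsigned int pkts_compl;
    unsigned int bytes_compl;
};

/* Counter playing the role of netdev_tx_completed_queue() / BQL. */
struct toy_bql {
    unsigned int calls;   /* how many times BQL was updated */
    unsigned int pkts;
    unsigned int bytes;
};

struct toy_event {
    unsigned int pkts, bytes;
};

/* Completion path: only accumulate (this is where the old code called
 * into BQL once per completion event). */
static void toy_complete(struct toy_txq *q, const struct toy_event *ev)
{
    q->pkts_compl  += ev->pkts;
    q->bytes_compl += ev->bytes;
}

/* Poll loop: reset, process events, then issue one batched BQL update. */
static void toy_poll(struct toy_txq *q, const struct toy_event *ev, int n,
                     struct toy_bql *bql)
{
    int i;

    q->pkts_compl = 0;
    q->bytes_compl = 0;

    for (i = 0; i < n; i++)
        toy_complete(q, &ev[i]);

    if (q->bytes_compl) {       /* single update per poll, as in the patch */
        bql->calls++;
        bql->pkts  += q->pkts_compl;
        bql->bytes += q->bytes_compl;
    }
}
```

Batching matters because BQL updates touch shared queue state; one call per poll instead of one per completion event cuts that overhead on busy queues.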
+7 -18
drivers/net/ethernet/ti/cpsw.c
··· 138 138 #define CPSW_CMINTMAX_INTVL (1000 / CPSW_CMINTMIN_CNT) 139 139 #define CPSW_CMINTMIN_INTVL ((1000 / CPSW_CMINTMAX_CNT) + 1) 140 140 141 - #define cpsw_enable_irq(priv) \ 142 - do { \ 143 - u32 i; \ 144 - for (i = 0; i < priv->num_irqs; i++) \ 145 - enable_irq(priv->irqs_table[i]); \ 146 - } while (0) 147 - #define cpsw_disable_irq(priv) \ 148 - do { \ 149 - u32 i; \ 150 - for (i = 0; i < priv->num_irqs; i++) \ 151 - disable_irq_nosync(priv->irqs_table[i]); \ 152 - } while (0) 153 - 154 141 #define cpsw_slave_index(priv) \ 155 142 ((priv->data.dual_emac) ? priv->emac_port : \ 156 143 priv->data.active_slave) ··· 496 509 (func)(slave++, ##arg); \ 497 510 } while (0) 498 511 #define cpsw_get_slave_ndev(priv, __slave_no__) \ 499 - (priv->slaves[__slave_no__].ndev) 512 + ((__slave_no__ < priv->data.slaves) ? \ 513 + priv->slaves[__slave_no__].ndev : NULL) 500 514 #define cpsw_get_slave_priv(priv, __slave_no__) \ 501 - ((priv->slaves[__slave_no__].ndev) ? \ 515 + (((__slave_no__ < priv->data.slaves) && \ 516 + (priv->slaves[__slave_no__].ndev)) ? \ 502 517 netdev_priv(priv->slaves[__slave_no__].ndev) : NULL) \ 503 518 504 519 #define cpsw_dual_emac_src_port_detect(status, priv, ndev, skb) \ ··· 770 781 771 782 cpsw_intr_disable(priv); 772 783 if (priv->irq_enabled == true) { 773 - cpsw_disable_irq(priv); 784 + disable_irq_nosync(priv->irqs_table[0]); 774 785 priv->irq_enabled = false; 775 786 } 776 787 ··· 806 817 prim_cpsw = cpsw_get_slave_priv(priv, 0); 807 818 if (prim_cpsw->irq_enabled == false) { 808 819 prim_cpsw->irq_enabled = true; 809 - cpsw_enable_irq(priv); 820 + enable_irq(priv->irqs_table[0]); 810 821 } 811 822 } 812 823 ··· 1322 1333 if (prim_cpsw->irq_enabled == false) { 1323 1334 if ((priv == prim_cpsw) || !netif_running(prim_cpsw->ndev)) { 1324 1335 prim_cpsw->irq_enabled = true; 1325 - cpsw_enable_irq(prim_cpsw); 1336 + enable_irq(prim_cpsw->irqs_table[0]); 1326 1337 } 1327 1338 } 1328 1339
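The cpsw change above adds a range check to the slave-lookup macros so an out-of-bounds slave number yields NULL instead of reading past `priv->slaves[]`. The same guard, sketched as a function over toy types (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct toy_slave {
    int id;
};

struct toy_priv {
    struct toy_slave slaves[2];
    unsigned int     nslaves;   /* plays the role of priv->data.slaves */
};

/* Bounds-checked lookup: out-of-range indices return NULL rather than a
 * pointer past the end of the array, matching the patched macros. */
static struct toy_slave *toy_get_slave(struct toy_priv *p, unsigned int i)
{
    return (i < p->nslaves) ? &p->slaves[i] : NULL;
}
```

Callers such as `cpsw_get_slave_priv(priv, 0)` already handled a NULL result, so pushing the bounds check into the lookup keeps the call sites unchanged.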
+4 -4
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 1530 1530 /* Map device registers */ 1531 1531 ethres = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1532 1532 lp->regs = devm_ioremap_resource(&pdev->dev, ethres); 1533 - if (!lp->regs) { 1533 + if (IS_ERR(lp->regs)) { 1534 1534 dev_err(&pdev->dev, "could not map Axi Ethernet regs.\n"); 1535 - ret = -ENOMEM; 1535 + ret = PTR_ERR(lp->regs); 1536 1536 goto free_netdev; 1537 1537 } 1538 1538 ··· 1599 1599 goto free_netdev; 1600 1600 } 1601 1601 lp->dma_regs = devm_ioremap_resource(&pdev->dev, &dmares); 1602 - if (!lp->dma_regs) { 1602 + if (IS_ERR(lp->dma_regs)) { 1603 1603 dev_err(&pdev->dev, "could not map DMA regs\n"); 1604 - ret = -ENOMEM; 1604 + ret = PTR_ERR(lp->dma_regs); 1605 1605 goto free_netdev; 1606 1606 } 1607 1607 lp->rx_irq = irq_of_parse_and_map(np, 1);
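The xilinx_axienet fix works because `devm_ioremap_resource()` never returns NULL on failure; it returns an ERR_PTR-encoded errno, so `if (!lp->regs)` misses every error. A minimal userspace re-creation of the kernel's `err.h` helpers showing why the NULL test is wrong (`toy_*` names and `ENOMEM_TOY` are stand-ins):

```c
#include <assert.h>

/* The kernel reserves the top 4095 addresses for encoded errnos. */
#define MAX_ERRNO  4095
#define ENOMEM_TOY 12

static void *toy_err_ptr(long err)        /* like ERR_PTR() */
{
    return (void *)err;
}

static long toy_ptr_err(const void *p)    /* like PTR_ERR() */
{
    return (long)p;
}

static int toy_is_err(const void *p)      /* like IS_ERR() */
{
    return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}
```

An ERR_PTR value is a valid-looking non-NULL pointer, so only `IS_ERR()` detects it; `PTR_ERR()` then recovers the errno to propagate, exactly as the patch does with `ret = PTR_ERR(lp->regs)`.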
+1
drivers/net/hamradio/bpqether.c
··· 482 482 memcpy(dev->dev_addr, &ax25_defaddr, AX25_ADDR_LEN); 483 483 484 484 dev->flags = 0; 485 + dev->features = NETIF_F_LLTX; /* Allow recursion */ 485 486 486 487 #if defined(CONFIG_AX25) || defined(CONFIG_AX25_MODULE) 487 488 dev->header_ops = &ax25_header_ops;
+1
drivers/net/macvtap.c
··· 1355 1355 class_unregister(macvtap_class); 1356 1356 cdev_del(&macvtap_cdev); 1357 1357 unregister_chrdev_region(macvtap_major, MACVTAP_NUM_DEVS); 1358 + idr_destroy(&minor_idr); 1358 1359 } 1359 1360 module_exit(macvtap_exit); 1360 1361
+1 -1
drivers/net/phy/Kconfig
··· 191 191 192 192 config MDIO_BUS_MUX_MMIOREG 193 193 tristate "Support for MMIO device-controlled MDIO bus multiplexers" 194 - depends on OF_MDIO 194 + depends on OF_MDIO && HAS_IOMEM 195 195 select MDIO_BUS_MUX 196 196 help 197 197 This module provides a driver for MDIO bus multiplexers that
+8
drivers/net/usb/cdc_ether.c
··· 523 523 #define REALTEK_VENDOR_ID 0x0bda 524 524 #define SAMSUNG_VENDOR_ID 0x04e8 525 525 #define LENOVO_VENDOR_ID 0x17ef 526 + #define NVIDIA_VENDOR_ID 0x0955 526 527 527 528 static const struct usb_device_id products[] = { 528 529 /* BLACKLIST !! ··· 707 706 /* Lenovo Thinkpad USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 708 707 { 709 708 USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x7205, USB_CLASS_COMM, 709 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 710 + .driver_info = 0, 711 + }, 712 + 713 + /* NVIDIA Tegra USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 714 + { 715 + USB_DEVICE_AND_INTERFACE_INFO(NVIDIA_VENDOR_ID, 0x09ff, USB_CLASS_COMM, 710 716 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 711 717 .driver_info = 0, 712 718 },
+1 -1
drivers/net/usb/cdc_mbim.c
··· 158 158 if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting)) 159 159 goto err; 160 160 161 - ret = cdc_ncm_bind_common(dev, intf, data_altsetting); 161 + ret = cdc_ncm_bind_common(dev, intf, data_altsetting, 0); 162 162 if (ret) 163 163 goto err; 164 164
+56 -7
drivers/net/usb/cdc_ncm.c
··· 6 6 * Original author: Hans Petter Selasky <hans.petter.selasky@stericsson.com> 7 7 * 8 8 * USB Host Driver for Network Control Model (NCM) 9 - * http://www.usb.org/developers/devclass_docs/NCM10.zip 9 + * http://www.usb.org/developers/docs/devclass_docs/NCM10_012011.zip 10 10 * 11 11 * The NCM encoding, decoding and initialization logic 12 12 * derives from FreeBSD 8.x. if_cdce.c and if_cdcereg.h ··· 684 684 ctx->tx_curr_skb = NULL; 685 685 } 686 686 687 + kfree(ctx->delayed_ndp16); 688 + 687 689 kfree(ctx); 688 690 } 689 691 690 - int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting) 692 + int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting, int drvflags) 691 693 { 692 694 const struct usb_cdc_union_desc *union_desc = NULL; 693 695 struct cdc_ncm_ctx *ctx; ··· 857 855 /* finish setting up the device specific data */ 858 856 cdc_ncm_setup(dev); 859 857 858 + /* Device-specific flags */ 859 + ctx->drvflags = drvflags; 860 + 861 + /* Allocate the delayed NDP if needed. */ 862 + if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) { 863 + ctx->delayed_ndp16 = kzalloc(ctx->max_ndp_size, GFP_KERNEL); 864 + if (!ctx->delayed_ndp16) 865 + goto error2; 866 + dev_info(&intf->dev, "NDP will be placed at end of frame for this device."); 867 + } 868 + 860 869 /* override ethtool_ops */ 861 870 dev->net->ethtool_ops = &cdc_ncm_ethtool_ops; 862 871 ··· 967 954 if (cdc_ncm_select_altsetting(intf) != CDC_NCM_COMM_ALTSETTING_NCM) 968 955 return -ENODEV; 969 956 970 - /* The NCM data altsetting is fixed */ 971 - ret = cdc_ncm_bind_common(dev, intf, CDC_NCM_DATA_ALTSETTING_NCM); 957 + /* The NCM data altsetting is fixed, so we hard-coded it. 958 + * Additionally, generic NCM devices are assumed to accept arbitrarily 959 + * placed NDP. 
960 + */ 961 + ret = cdc_ncm_bind_common(dev, intf, CDC_NCM_DATA_ALTSETTING_NCM, 0); 972 962 973 963 /* 974 964 * We should get an event when network connection is "connected" or ··· 1002 986 struct usb_cdc_ncm_nth16 *nth16 = (void *)skb->data; 1003 987 size_t ndpoffset = le16_to_cpu(nth16->wNdpIndex); 1004 988 989 + /* If NDP should be moved to the end of the NCM package, we can't follow the 990 + * NTH16 header as we would normally do. NDP isn't written to the SKB yet, and 991 + * the wNdpIndex field in the header is actually not consistent with reality. It will be later. 992 + */ 993 + if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) 994 + if (ctx->delayed_ndp16->dwSignature == sign) 995 + return ctx->delayed_ndp16; 996 + 1005 997 /* follow the chain of NDPs, looking for a match */ 1006 998 while (ndpoffset) { 1007 999 ndp16 = (struct usb_cdc_ncm_ndp16 *)(skb->data + ndpoffset); ··· 1019 995 } 1020 996 1021 997 /* align new NDP */ 1022 - cdc_ncm_align_tail(skb, ctx->tx_ndp_modulus, 0, ctx->tx_max); 998 + if (!(ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)) 999 + cdc_ncm_align_tail(skb, ctx->tx_ndp_modulus, 0, ctx->tx_max); 1023 1000 1024 1001 /* verify that there is room for the NDP and the datagram (reserve) */ 1025 1002 if ((ctx->tx_max - skb->len - reserve) < ctx->max_ndp_size) ··· 1033 1008 nth16->wNdpIndex = cpu_to_le16(skb->len); 1034 1009 1035 1010 /* push a new empty NDP */ 1036 - ndp16 = (struct usb_cdc_ncm_ndp16 *)memset(skb_put(skb, ctx->max_ndp_size), 0, ctx->max_ndp_size); 1011 + if (!(ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)) 1012 + ndp16 = (struct usb_cdc_ncm_ndp16 *)memset(skb_put(skb, ctx->max_ndp_size), 0, ctx->max_ndp_size); 1013 + else 1014 + ndp16 = ctx->delayed_ndp16; 1015 + 1037 1016 ndp16->dwSignature = sign; 1038 1017 ndp16->wLength = cpu_to_le16(sizeof(struct usb_cdc_ncm_ndp16) + sizeof(struct usb_cdc_ncm_dpe16)); 1039 1018 return ndp16; ··· 1052 1023 struct sk_buff *skb_out; 1053 1024 u16 n = 0, index, ndplen; 1054 1025 u8 ready2send = 0; 1026 
+ u32 delayed_ndp_size; 1027 + 1028 + /* When our NDP gets written in cdc_ncm_ndp(), then skb_out->len gets updated 1029 + * accordingly. Otherwise, we should check here. 1030 + */ 1031 + if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) 1032 + delayed_ndp_size = ctx->max_ndp_size; 1033 + else 1034 + delayed_ndp_size = 0; 1055 1035 1056 1036 /* if there is a remaining skb, it gets priority */ 1057 1037 if (skb != NULL) { ··· 1115 1077 cdc_ncm_align_tail(skb_out, ctx->tx_modulus, ctx->tx_remainder, ctx->tx_max); 1116 1078 1117 1079 /* check if we had enough room left for both NDP and frame */ 1118 - if (!ndp16 || skb_out->len + skb->len > ctx->tx_max) { 1080 + if (!ndp16 || skb_out->len + skb->len + delayed_ndp_size > ctx->tx_max) { 1119 1081 if (n == 0) { 1120 1082 /* won't fit, MTU problem? */ 1121 1083 dev_kfree_skb_any(skb); ··· 1186 1148 ctx->tx_reason_max_datagram++; /* count reason for transmitting */ 1187 1149 /* frame goes out */ 1188 1150 /* variables will be reset at next call */ 1151 + } 1152 + 1153 + /* If requested, put NDP at end of frame. */ 1154 + if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) { 1155 + nth16 = (struct usb_cdc_ncm_nth16 *)skb_out->data; 1156 + cdc_ncm_align_tail(skb_out, ctx->tx_ndp_modulus, 0, ctx->tx_max); 1157 + nth16->wNdpIndex = cpu_to_le16(skb_out->len); 1158 + memcpy(skb_put(skb_out, ctx->max_ndp_size), ctx->delayed_ndp16, ctx->max_ndp_size); 1159 + 1160 + /* Zero out delayed NDP - signature checking will naturally fail. */ 1161 + ndp16 = memset(ctx->delayed_ndp16, 0, ctx->max_ndp_size); 1189 1162 } 1190 1163 1191 1164 /* If collected data size is less or equal ctx->min_tx_pkt
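The `CDC_NCM_FLAG_NDP_TO_END` machinery above stages the NDP (the datagram index table) in a side buffer while datagrams are packed, then appends it at the end of the frame and patches the header's `wNdpIndex` field to point there. A much-simplified sketch of that delayed-trailer layout (toy frame format, not the real NTH16/NDP16 structures):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy frame: a byte buffer plus an offset field playing the role of
 * nth16->wNdpIndex, which must point at the index table. */
struct toy_frame {
    uint8_t  buf[256];
    uint16_t len;
    uint16_t ndp_index;
};

static void toy_put(struct toy_frame *f, const void *p, uint16_t n)
{
    memcpy(f->buf + f->len, p, n);
    f->len += n;
}

/* Append the staged "NDP" at the end and patch the header offset, as the
 * NDP_TO_END path does just before transmission. */
static void toy_finish(struct toy_frame *f, const uint8_t *ndp, uint16_t n)
{
    f->ndp_index = f->len;   /* table lands at the current end of frame */
    toy_put(f, ndp, n);
}
```

Until `toy_finish()` runs, the offset field is stale, which is why the patched `cdc_ncm_find_ndp16()` short-circuits to `ctx->delayed_ndp16` instead of following `wNdpIndex` for these devices.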
+5 -2
drivers/net/usb/huawei_cdc_ncm.c
··· 73 73 struct usb_driver *subdriver = ERR_PTR(-ENODEV); 74 74 int ret = -ENODEV; 75 75 struct huawei_cdc_ncm_state *drvstate = (void *)&usbnet_dev->data; 76 + int drvflags = 0; 76 77 77 78 /* altsetting should always be 1 for NCM devices - so we hard-coded 78 - * it here 79 + * it here. Some huawei devices will need the NDP part of the NCM package to 80 + * be at the end of the frame. 79 81 */ 80 - ret = cdc_ncm_bind_common(usbnet_dev, intf, 1); 82 + drvflags |= CDC_NCM_FLAG_NDP_TO_END; 83 + ret = cdc_ncm_bind_common(usbnet_dev, intf, 1, drvflags); 81 84 if (ret) 82 85 goto err; 83 86
+2
drivers/net/usb/r8152.c
··· 494 494 #define VENDOR_ID_REALTEK 0x0bda 495 495 #define VENDOR_ID_SAMSUNG 0x04e8 496 496 #define VENDOR_ID_LENOVO 0x17ef 497 + #define VENDOR_ID_NVIDIA 0x0955 497 498 498 499 #define MCU_TYPE_PLA 0x0100 499 500 #define MCU_TYPE_USB 0x0000 ··· 4118 4117 {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)}, 4119 4118 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)}, 4120 4119 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x304f)}, 4120 + {REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)}, 4121 4121 {} 4122 4122 }; 4123 4123
+4 -4
drivers/net/vmxnet3/vmxnet3_drv.c
··· 1216 1216 static const u32 rxprod_reg[2] = { 1217 1217 VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2 1218 1218 }; 1219 - u32 num_rxd = 0; 1219 + u32 num_pkts = 0; 1220 1220 bool skip_page_frags = false; 1221 1221 struct Vmxnet3_RxCompDesc *rcd; 1222 1222 struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx; ··· 1235 1235 struct Vmxnet3_RxDesc *rxd; 1236 1236 u32 idx, ring_idx; 1237 1237 struct vmxnet3_cmd_ring *ring = NULL; 1238 - if (num_rxd >= quota) { 1238 + if (num_pkts >= quota) { 1239 1239 /* we may stop even before we see the EOP desc of 1240 1240 * the current pkt 1241 1241 */ 1242 1242 break; 1243 1243 } 1244 - num_rxd++; 1245 1244 BUG_ON(rcd->rqID != rq->qid && rcd->rqID != rq->qid2); 1246 1245 idx = rcd->rxdIdx; 1247 1246 ring_idx = rcd->rqID < adapter->num_rx_queues ? 0 : 1; ··· 1412 1413 napi_gro_receive(&rq->napi, skb); 1413 1414 1414 1415 ctx->skb = NULL; 1416 + num_pkts++; 1415 1417 } 1416 1418 1417 1419 rcd_done: ··· 1443 1443 &rq->comp_ring.base[rq->comp_ring.next2proc].rcd, &rxComp); 1444 1444 } 1445 1445 1446 - return num_rxd; 1446 + return num_pkts; 1447 1447 } 1448 1448 1449 1449
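The vmxnet3 rename from `num_rxd` to `num_pkts` is a behavior fix: the counter now increments only when a complete packet has been handed to the stack, so the NAPI quota limits packets rather than raw descriptors (a packet can span several descriptors). Sketched over a toy descriptor ring (illustrative types only):

```c
#include <assert.h>

struct toy_desc {
    int eop;   /* end-of-packet marker on the last descriptor of a packet */
};

/* Count only at end-of-packet, checking the quota before consuming the
 * next descriptor, as the patched vmxnet3_rq_rx_complete() does. */
static int toy_rx_poll(const struct toy_desc *ring, int n, int quota)
{
    int i, num_pkts = 0;

    for (i = 0; i < n; i++) {
        if (num_pkts >= quota)
            break;
        if (ring[i].eop)       /* old code counted every descriptor here */
            num_pkts++;
    }
    return num_pkts;
}
```

Counting descriptors made the driver return early on multi-descriptor packets and under-report work to NAPI; counting delivered packets keeps the quota accounting honest.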
+1 -1
drivers/net/wan/z85230.c
··· 1044 1044 * @dev: The network device to attach 1045 1045 * @c: The Z8530 channel to configure in sync DMA mode. 1046 1046 * 1047 - * Set up a Z85x30 device for synchronous DMA tranmission. One 1047 + * Set up a Z85x30 device for synchronous DMA transmission. One 1048 1048 * ISA DMA channel must be available for this to work. The receive 1049 1049 * side is run in PIO mode, but then it has the bigger FIFO. 1050 1050 */
+3 -8
drivers/nvdimm/bus.c
··· 535 535 __func__, dimm_name, cmd_name, i); 536 536 return -ENXIO; 537 537 } 538 - if (!access_ok(VERIFY_READ, p + in_len, in_size)) 539 - return -EFAULT; 540 538 if (in_len < sizeof(in_env)) 541 539 copy = min_t(u32, sizeof(in_env) - in_len, in_size); 542 540 else ··· 555 557 __func__, dimm_name, cmd_name, i); 556 558 return -EFAULT; 557 559 } 558 - if (!access_ok(VERIFY_WRITE, p + in_len + out_len, out_size)) 559 - return -EFAULT; 560 560 if (out_len < sizeof(out_env)) 561 561 copy = min_t(u32, sizeof(out_env) - out_len, out_size); 562 562 else ··· 566 570 } 567 571 568 572 buf_len = out_len + in_len; 569 - if (!access_ok(VERIFY_WRITE, p, sizeof(buf_len))) 570 - return -EFAULT; 571 - 572 573 if (buf_len > ND_IOCTL_MAX_BUFLEN) { 573 574 dev_dbg(dev, "%s:%s cmd: %s buf_len: %zu > %d\n", __func__, 574 575 dimm_name, cmd_name, buf_len, ··· 699 706 nvdimm_major = rc; 700 707 701 708 nd_class = class_create(THIS_MODULE, "nd"); 702 - if (IS_ERR(nd_class)) 709 + if (IS_ERR(nd_class)) { 710 + rc = PTR_ERR(nd_class); 703 711 goto err_class; 712 + } 704 713 705 714 return 0; 706 715
+1 -1
drivers/pinctrl/spear/pinctrl-spear.c
··· 2 2 * Driver for the ST Microelectronics SPEAr pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * Inspired from: 8 8 * - U300 Pinctl drivers
+1 -1
drivers/pinctrl/spear/pinctrl-spear.h
··· 2 2 * Driver header file for the ST Microelectronics SPEAr pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any
+2 -2
drivers/pinctrl/spear/pinctrl-spear1310.c
··· 2 2 * Driver for the ST Microelectronics SPEAr1310 pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any ··· 2730 2730 } 2731 2731 module_exit(spear1310_pinctrl_exit); 2732 2732 2733 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 2733 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>"); 2734 2734 MODULE_DESCRIPTION("ST Microelectronics SPEAr1310 pinctrl driver"); 2735 2735 MODULE_LICENSE("GPL v2"); 2736 2736 MODULE_DEVICE_TABLE(of, spear1310_pinctrl_of_match);
+2 -2
drivers/pinctrl/spear/pinctrl-spear1340.c
··· 2 2 * Driver for the ST Microelectronics SPEAr1340 pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any ··· 2046 2046 } 2047 2047 module_exit(spear1340_pinctrl_exit); 2048 2048 2049 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 2049 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>"); 2050 2050 MODULE_DESCRIPTION("ST Microelectronics SPEAr1340 pinctrl driver"); 2051 2051 MODULE_LICENSE("GPL v2"); 2052 2052 MODULE_DEVICE_TABLE(of, spear1340_pinctrl_of_match);
+2 -2
drivers/pinctrl/spear/pinctrl-spear300.c
··· 2 2 * Driver for the ST Microelectronics SPEAr300 pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any ··· 703 703 } 704 704 module_exit(spear300_pinctrl_exit); 705 705 706 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 706 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>"); 707 707 MODULE_DESCRIPTION("ST Microelectronics SPEAr300 pinctrl driver"); 708 708 MODULE_LICENSE("GPL v2"); 709 709 MODULE_DEVICE_TABLE(of, spear300_pinctrl_of_match);
+2 -2
drivers/pinctrl/spear/pinctrl-spear310.c
··· 2 2 * Driver for the ST Microelectronics SPEAr310 pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any ··· 426 426 } 427 427 module_exit(spear310_pinctrl_exit); 428 428 429 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 429 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>"); 430 430 MODULE_DESCRIPTION("ST Microelectronics SPEAr310 pinctrl driver"); 431 431 MODULE_LICENSE("GPL v2"); 432 432 MODULE_DEVICE_TABLE(of, spear310_pinctrl_of_match);
+2 -2
drivers/pinctrl/spear/pinctrl-spear320.c
··· 2 2 * Driver for the ST Microelectronics SPEAr320 pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any ··· 3467 3467 } 3468 3468 module_exit(spear320_pinctrl_exit); 3469 3469 3470 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 3470 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>"); 3471 3471 MODULE_DESCRIPTION("ST Microelectronics SPEAr320 pinctrl driver"); 3472 3472 MODULE_LICENSE("GPL v2"); 3473 3473 MODULE_DEVICE_TABLE(of, spear320_pinctrl_of_match);
+1 -1
drivers/pinctrl/spear/pinctrl-spear3xx.c
··· 2 2 * Driver for the ST Microelectronics SPEAr3xx pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any
+1 -1
drivers/pinctrl/spear/pinctrl-spear3xx.h
··· 2 2 * Header file for the ST Microelectronics SPEAr3xx pinmux 3 3 * 4 4 * Copyright (C) 2012 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any
+125 -46
drivers/platform/x86/dell-laptop.c
··· 309 309 static struct calling_interface_buffer *buffer; 310 310 static DEFINE_MUTEX(buffer_mutex); 311 311 312 - static int hwswitch_state; 312 + static void clear_buffer(void) 313 + { 314 + memset(buffer, 0, sizeof(struct calling_interface_buffer)); 315 + } 313 316 314 317 static void get_buffer(void) 315 318 { 316 319 mutex_lock(&buffer_mutex); 317 - memset(buffer, 0, sizeof(struct calling_interface_buffer)); 320 + clear_buffer(); 318 321 } 319 322 320 323 static void release_buffer(void) ··· 551 548 int disable = blocked ? 1 : 0; 552 549 unsigned long radio = (unsigned long)data; 553 550 int hwswitch_bit = (unsigned long)data - 1; 551 + int hwswitch; 552 + int status; 553 + int ret; 554 554 555 555 get_buffer(); 556 + 556 557 dell_send_request(buffer, 17, 11); 558 + ret = buffer->output[0]; 559 + status = buffer->output[1]; 560 + 561 + if (ret != 0) 562 + goto out; 563 + 564 + clear_buffer(); 565 + 566 + buffer->input[0] = 0x2; 567 + dell_send_request(buffer, 17, 11); 568 + ret = buffer->output[0]; 569 + hwswitch = buffer->output[1]; 557 570 558 571 /* If the hardware switch controls this radio, and the hardware 559 572 switch is disabled, always disable the radio */ 560 - if ((hwswitch_state & BIT(hwswitch_bit)) && 561 - !(buffer->output[1] & BIT(16))) 573 + if (ret == 0 && (hwswitch & BIT(hwswitch_bit)) && 574 + (status & BIT(0)) && !(status & BIT(16))) 562 575 disable = 1; 576 + 577 + clear_buffer(); 563 578 564 579 buffer->input[0] = (1 | (radio<<8) | (disable << 16)); 565 580 dell_send_request(buffer, 17, 11); 581 + ret = buffer->output[0]; 566 582 583 + out: 567 584 release_buffer(); 568 - return 0; 585 + return dell_smi_error(ret); 569 586 } 570 587 571 588 /* Must be called with the buffer held */ ··· 595 572 if (status & BIT(0)) { 596 573 /* Has hw-switch, sync sw_state to BIOS */ 597 574 int block = rfkill_blocked(rfkill); 575 + clear_buffer(); 598 576 buffer->input[0] = (1 | (radio << 8) | (block << 16)); 599 577 dell_send_request(buffer, 17, 11); 
600 578 } else { ··· 605 581 } 606 582 607 583 static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio, 608 - int status) 584 + int status, int hwswitch) 609 585 { 610 - if (hwswitch_state & (BIT(radio - 1))) 586 + if (hwswitch & (BIT(radio - 1))) 611 587 rfkill_set_hw_state(rfkill, !(status & BIT(16))); 612 588 } 613 589 614 590 static void dell_rfkill_query(struct rfkill *rfkill, void *data) 615 591 { 592 + int radio = ((unsigned long)data & 0xF); 593 + int hwswitch; 616 594 int status; 595 + int ret; 617 596 618 597 get_buffer(); 598 + 619 599 dell_send_request(buffer, 17, 11); 600 + ret = buffer->output[0]; 620 601 status = buffer->output[1]; 621 602 622 - dell_rfkill_update_hw_state(rfkill, (unsigned long)data, status); 603 + if (ret != 0 || !(status & BIT(0))) { 604 + release_buffer(); 605 + return; 606 + } 607 + 608 + clear_buffer(); 609 + 610 + buffer->input[0] = 0x2; 611 + dell_send_request(buffer, 17, 11); 612 + ret = buffer->output[0]; 613 + hwswitch = buffer->output[1]; 623 614 624 615 release_buffer(); 616 + 617 + if (ret != 0) 618 + return; 619 + 620 + dell_rfkill_update_hw_state(rfkill, radio, status, hwswitch); 625 621 } 626 622 627 623 static const struct rfkill_ops dell_rfkill_ops = { ··· 653 609 654 610 static int dell_debugfs_show(struct seq_file *s, void *data) 655 611 { 612 + int hwswitch_state; 613 + int hwswitch_ret; 656 614 int status; 615 + int ret; 657 616 658 617 get_buffer(); 618 + 659 619 dell_send_request(buffer, 17, 11); 620 + ret = buffer->output[0]; 660 621 status = buffer->output[1]; 622 + 623 + clear_buffer(); 624 + 625 + buffer->input[0] = 0x2; 626 + dell_send_request(buffer, 17, 11); 627 + hwswitch_ret = buffer->output[0]; 628 + hwswitch_state = buffer->output[1]; 629 + 661 630 release_buffer(); 662 631 632 + seq_printf(s, "return:\t%d\n", ret); 663 633 seq_printf(s, "status:\t0x%X\n", status); 664 634 seq_printf(s, "Bit 0 : Hardware switch supported: %lu\n", 665 635 status & BIT(0)); ··· 715 657 seq_printf(s, 
"Bit 21: WiGig is blocked: %lu\n", 716 658 (status & BIT(21)) >> 21); 717 659 718 - seq_printf(s, "\nhwswitch_state:\t0x%X\n", hwswitch_state); 660 + seq_printf(s, "\nhwswitch_return:\t%d\n", hwswitch_ret); 661 + seq_printf(s, "hwswitch_state:\t0x%X\n", hwswitch_state); 719 662 seq_printf(s, "Bit 0 : Wifi controlled by switch: %lu\n", 720 663 hwswitch_state & BIT(0)); 721 664 seq_printf(s, "Bit 1 : Bluetooth controlled by switch: %lu\n", ··· 752 693 753 694 static void dell_update_rfkill(struct work_struct *ignored) 754 695 { 696 + int hwswitch = 0; 755 697 int status; 698 + int ret; 756 699 757 700 get_buffer(); 701 + 758 702 dell_send_request(buffer, 17, 11); 703 + ret = buffer->output[0]; 759 704 status = buffer->output[1]; 760 705 706 + if (ret != 0) 707 + goto out; 708 + 709 + clear_buffer(); 710 + 711 + buffer->input[0] = 0x2; 712 + dell_send_request(buffer, 17, 11); 713 + ret = buffer->output[0]; 714 + 715 + if (ret == 0 && (status & BIT(0))) 716 + hwswitch = buffer->output[1]; 717 + 761 718 if (wifi_rfkill) { 762 - dell_rfkill_update_hw_state(wifi_rfkill, 1, status); 719 + dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch); 763 720 dell_rfkill_update_sw_state(wifi_rfkill, 1, status); 764 721 } 765 722 if (bluetooth_rfkill) { 766 - dell_rfkill_update_hw_state(bluetooth_rfkill, 2, status); 723 + dell_rfkill_update_hw_state(bluetooth_rfkill, 2, status, 724 + hwswitch); 767 725 dell_rfkill_update_sw_state(bluetooth_rfkill, 2, status); 768 726 } 769 727 if (wwan_rfkill) { 770 - dell_rfkill_update_hw_state(wwan_rfkill, 3, status); 728 + dell_rfkill_update_hw_state(wwan_rfkill, 3, status, hwswitch); 771 729 dell_rfkill_update_sw_state(wwan_rfkill, 3, status); 772 730 } 773 731 732 + out: 774 733 release_buffer(); 775 734 } 776 735 static DECLARE_DELAYED_WORK(dell_rfkill_work, dell_update_rfkill); ··· 850 773 851 774 get_buffer(); 852 775 dell_send_request(buffer, 17, 11); 776 + ret = buffer->output[0]; 853 777 status = buffer->output[1]; 854 - 
buffer->input[0] = 0x2; 855 - dell_send_request(buffer, 17, 11); 856 - hwswitch_state = buffer->output[1]; 857 778 release_buffer(); 858 779 859 - if (!(status & BIT(0))) { 860 - if (force_rfkill) { 861 - /* No hwsitch, clear all hw-controlled bits */ 862 - hwswitch_state &= ~7; 863 - } else { 864 - /* rfkill is only tested on laptops with a hwswitch */ 865 - return 0; 866 - } 867 - } 780 + /* dell wireless info smbios call is not supported */ 781 + if (ret != 0) 782 + return 0; 783 + 784 + /* rfkill is only tested on laptops with a hwswitch */ 785 + if (!(status & BIT(0)) && !force_rfkill) 786 + return 0; 868 787 869 788 if ((status & (1<<2|1<<8)) == (1<<2|1<<8)) { 870 789 wifi_rfkill = rfkill_alloc("dell-wifi", &platform_device->dev, ··· 1005 932 1006 933 static int dell_send_intensity(struct backlight_device *bd) 1007 934 { 1008 - int ret = 0; 935 + int token; 936 + int ret; 937 + 938 + token = find_token_location(BRIGHTNESS_TOKEN); 939 + if (token == -1) 940 + return -ENODEV; 1009 941 1010 942 get_buffer(); 1011 - buffer->input[0] = find_token_location(BRIGHTNESS_TOKEN); 943 + buffer->input[0] = token; 1012 944 buffer->input[1] = bd->props.brightness; 1013 - 1014 - if (buffer->input[0] == -1) { 1015 - ret = -ENODEV; 1016 - goto out; 1017 - } 1018 945 1019 946 if (power_supply_is_system_supplied() > 0) 1020 947 dell_send_request(buffer, 1, 2); 1021 948 else 1022 949 dell_send_request(buffer, 1, 1); 1023 950 1024 - out: 951 + ret = dell_smi_error(buffer->output[0]); 952 + 1025 953 release_buffer(); 1026 954 return ret; 1027 955 } 1028 956 1029 957 static int dell_get_intensity(struct backlight_device *bd) 1030 958 { 1031 - int ret = 0; 959 + int token; 960 + int ret; 961 + 962 + token = find_token_location(BRIGHTNESS_TOKEN); 963 + if (token == -1) 964 + return -ENODEV; 1032 965 1033 966 get_buffer(); 1034 - buffer->input[0] = find_token_location(BRIGHTNESS_TOKEN); 1035 - 1036 - if (buffer->input[0] == -1) { 1037 - ret = -ENODEV; 1038 - goto out; 1039 - } 967 + 
buffer->input[0] = token; 1040 968 1041 969 if (power_supply_is_system_supplied() > 0) 1042 970 dell_send_request(buffer, 0, 2); 1043 971 else 1044 972 dell_send_request(buffer, 0, 1); 1045 973 1046 - ret = buffer->output[1]; 974 + if (buffer->output[0]) 975 + ret = dell_smi_error(buffer->output[0]); 976 + else 977 + ret = buffer->output[1]; 1047 978 1048 - out: 1049 979 release_buffer(); 1050 980 return ret; 1051 981 } ··· 2112 2036 static int __init dell_init(void) 2113 2037 { 2114 2038 int max_intensity = 0; 2039 + int token; 2115 2040 int ret; 2116 2041 2117 2042 if (!dmi_check_system(dell_device_table)) ··· 2171 2094 if (acpi_video_get_backlight_type() != acpi_backlight_vendor) 2172 2095 return 0; 2173 2096 2174 - get_buffer(); 2175 - buffer->input[0] = find_token_location(BRIGHTNESS_TOKEN); 2176 - if (buffer->input[0] != -1) { 2097 + token = find_token_location(BRIGHTNESS_TOKEN); 2098 + if (token != -1) { 2099 + get_buffer(); 2100 + buffer->input[0] = token; 2177 2101 dell_send_request(buffer, 0, 2); 2178 - max_intensity = buffer->output[3]; 2102 + if (buffer->output[0] == 0) 2103 + max_intensity = buffer->output[3]; 2104 + release_buffer(); 2179 2105 } 2180 - release_buffer(); 2181 2106 2182 2107 if (max_intensity) { 2183 2108 struct backlight_properties props;
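The dell-laptop hunks above thread a separately queried hwswitch word into dell_rfkill_update_hw_state, where bit (radio - 1) gates whether a radio honors the hardware switch and status bit 16 supplies the blocked state. A standalone sketch of just that bit logic (the radio numbering 1=wifi, 2=bluetooth, 3=wwan follows the dell_update_rfkill calls in the diff; this is an illustration, not driver code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Radio N is under hardware-switch control when bit N-1 of the
 * hwswitch word is set (1=wifi, 2=bluetooth, 3=wwan). */
static bool radio_on_hw_switch(uint32_t hwswitch, int radio)
{
	return hwswitch & BIT(radio - 1);
}

/* Mirrors dell_rfkill_update_hw_state: the hw-blocked state passed to
 * rfkill_set_hw_state() is !(status & BIT(16)), applied only to radios
 * the hardware switch controls. */
static bool radio_hw_blocked(uint32_t status, uint32_t hwswitch, int radio)
{
	return radio_on_hw_switch(hwswitch, radio) && !(status & BIT(16));
}
```

This matches the refactor's intent: the hwswitch query can fail independently of the status query, so a zero hwswitch word simply leaves every radio unaffected by the switch.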
+48 -35
drivers/platform/x86/intel_pmc_ipc.c
··· 96 96 struct completion cmd_complete; 97 97 98 98 /* The following PMC BARs share the same ACPI device with the IPC */ 99 - void *acpi_io_base; 99 + resource_size_t acpi_io_base; 100 100 int acpi_io_size; 101 101 struct platform_device *tco_dev; 102 102 103 103 /* gcr */ 104 - void *gcr_base; 104 + resource_size_t gcr_base; 105 105 int gcr_size; 106 106 107 107 /* punit */ 108 - void *punit_base; 108 + resource_size_t punit_base; 109 109 int punit_size; 110 - void *punit_base2; 110 + resource_size_t punit_base2; 111 111 int punit_size2; 112 112 struct platform_device *punit_dev; 113 113 } ipcdev; ··· 210 210 return ret; 211 211 } 212 212 213 - /* 214 - * intel_pmc_ipc_simple_command 215 - * @cmd: command 216 - * @sub: sub type 213 + /** 214 + * intel_pmc_ipc_simple_command() - Simple IPC command 215 + * @cmd: IPC command code. 216 + * @sub: IPC command sub type. 217 + * 218 + * Send a simple IPC command to PMC when there is no need to specify 219 + * input/output data and source/dest pointers. 220 + * 221 + * Return: an IPC error code or 0 on success. 217 222 */ 218 223 int intel_pmc_ipc_simple_command(int cmd, int sub) 219 224 { ··· 237 232 } 238 233 EXPORT_SYMBOL_GPL(intel_pmc_ipc_simple_command); 239 234 240 - /* 241 - * intel_pmc_ipc_raw_cmd 242 - * @cmd: command 243 - * @sub: sub type 244 - * @in: input data 245 - * @inlen: input length in bytes 246 - * @out: output data 247 - * @outlen: output length in dwords 248 - * @sptr: data writing to SPTR register 249 - * @dptr: data writing to DPTR register 235 + /** 236 + * intel_pmc_ipc_raw_cmd() - IPC command with data and pointers 237 + * @cmd: IPC command code. 238 + * @sub: IPC command sub type. 239 + * @in: input data of this IPC command. 240 + * @inlen: input data length in bytes. 241 + * @out: output data of this IPC command. 242 + * @outlen: output data length in dwords. 243 + * @sptr: data writing to SPTR register. 244 + * @dptr: data writing to DPTR register. 
245 + * 246 + * Send an IPC command to PMC with input/output data and source/dest pointers. 247 + * 248 + * Return: an IPC error code or 0 on success. 250 249 */ 251 250 int intel_pmc_ipc_raw_cmd(u32 cmd, u32 sub, u8 *in, u32 inlen, u32 *out, 252 251 u32 outlen, u32 dptr, u32 sptr) ··· 287 278 } 288 279 EXPORT_SYMBOL_GPL(intel_pmc_ipc_raw_cmd); 289 280 290 - /* 291 - * intel_pmc_ipc_command 292 - * @cmd: command 293 - * @sub: sub type 294 - * @in: input data 295 - * @inlen: input length in bytes 296 - * @out: output data 297 - * @outlen: output length in dwords 281 + /** 282 + * intel_pmc_ipc_command() - IPC command with input/output data 283 + * @cmd: IPC command code. 284 + * @sub: IPC command sub type. 285 + * @in: input data of this IPC command. 286 + * @inlen: input data length in bytes. 287 + * @out: output data of this IPC command. 288 + * @outlen: output data length in dwords. 289 + * 290 + * Send an IPC command to PMC with input/output data. 291 + * 292 + * Return: an IPC error code or 0 on success. 
298 293 */ 299 294 int intel_pmc_ipc_command(u32 cmd, u32 sub, u8 *in, u32 inlen, 300 295 u32 *out, u32 outlen) ··· 493 480 pdev->dev.parent = ipcdev.dev; 494 481 495 482 res = punit_res; 496 - res->start = (resource_size_t)ipcdev.punit_base; 483 + res->start = ipcdev.punit_base; 497 484 res->end = res->start + ipcdev.punit_size - 1; 498 485 499 486 res = punit_res + PUNIT_RESOURCE_INTER; 500 - res->start = (resource_size_t)ipcdev.punit_base2; 487 + res->start = ipcdev.punit_base2; 501 488 res->end = res->start + ipcdev.punit_size2 - 1; 502 489 503 490 ret = platform_device_add_resources(pdev, punit_res, ··· 535 522 pdev->dev.parent = ipcdev.dev; 536 523 537 524 res = tco_res + TCO_RESOURCE_ACPI_IO; 538 - res->start = (resource_size_t)ipcdev.acpi_io_base + TCO_BASE_OFFSET; 525 + res->start = ipcdev.acpi_io_base + TCO_BASE_OFFSET; 539 526 res->end = res->start + TCO_REGS_SIZE - 1; 540 527 541 528 res = tco_res + TCO_RESOURCE_SMI_EN_IO; 542 - res->start = (resource_size_t)ipcdev.acpi_io_base + SMI_EN_OFFSET; 529 + res->start = ipcdev.acpi_io_base + SMI_EN_OFFSET; 543 530 res->end = res->start + SMI_EN_SIZE - 1; 544 531 545 532 res = tco_res + TCO_RESOURCE_GCR_MEM; 546 - res->start = (resource_size_t)ipcdev.gcr_base; 533 + res->start = ipcdev.gcr_base; 547 534 res->end = res->start + ipcdev.gcr_size - 1; 548 535 549 536 ret = platform_device_add_resources(pdev, tco_res, ARRAY_SIZE(tco_res)); ··· 602 589 return -ENXIO; 603 590 } 604 591 size = resource_size(res); 605 - ipcdev.acpi_io_base = (void *)res->start; 592 + ipcdev.acpi_io_base = res->start; 606 593 ipcdev.acpi_io_size = size; 607 594 dev_info(&pdev->dev, "io res: %llx %x\n", 608 595 (long long)res->start, (int)resource_size(res)); ··· 614 601 return -ENXIO; 615 602 } 616 603 size = resource_size(res); 617 - ipcdev.punit_base = (void *)res->start; 604 + ipcdev.punit_base = res->start; 618 605 ipcdev.punit_size = size; 619 606 dev_info(&pdev->dev, "punit data res: %llx %x\n", 620 607 (long long)res->start, 
(int)resource_size(res)); ··· 626 613 return -ENXIO; 627 614 } 628 615 size = resource_size(res); 629 - ipcdev.punit_base2 = (void *)res->start; 616 + ipcdev.punit_base2 = res->start; 630 617 ipcdev.punit_size2 = size; 631 618 dev_info(&pdev->dev, "punit interface res: %llx %x\n", 632 619 (long long)res->start, (int)resource_size(res)); ··· 650 637 } 651 638 ipcdev.ipc_base = addr; 652 639 653 - ipcdev.gcr_base = (void *)(res->start + size); 640 + ipcdev.gcr_base = res->start + size; 654 641 ipcdev.gcr_size = PLAT_RESOURCE_GCR_SIZE; 655 642 dev_info(&pdev->dev, "ipc res: %llx %x\n", 656 643 (long long)res->start, (int)resource_size(res));
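The rewritten kernel-doc above points out an asymmetry worth remembering: @inlen is counted in bytes while @outlen is counted in dwords. Assuming the usual 32-bit IPC data registers, a byte-count payload therefore occupies ceil(inlen / 4) register writes; a tiny helper (hypothetical, not part of the driver) makes the rounding explicit:

```c
#include <assert.h>

/* Number of 32-bit IPC data-register writes needed for an input
 * payload of inlen bytes (round up to whole dwords). Illustrative
 * only; the driver performs the equivalent arithmetic inline. */
static unsigned int ipc_data_dwords(unsigned int inlen_bytes)
{
	return (inlen_bytes + 3) / 4;
}
```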
+3 -3
drivers/platform/x86/intel_scu_ipc.c
··· 216 216 int nc; 217 217 u32 offset = 0; 218 218 int err; 219 - u8 cbuf[IPC_WWBUF_SIZE] = { }; 219 + u8 cbuf[IPC_WWBUF_SIZE]; 220 220 u32 *wbuf = (u32 *)&cbuf; 221 221 222 - mutex_lock(&ipclock); 223 - 224 222 memset(cbuf, 0, sizeof(cbuf)); 223 + 224 + mutex_lock(&ipclock); 225 225 226 226 if (ipcdev.pdev == NULL) { 227 227 mutex_unlock(&ipclock);
+9 -26
drivers/pnp/system.c
··· 7 7 * Bjorn Helgaas <bjorn.helgaas@hp.com> 8 8 */ 9 9 10 - #include <linux/acpi.h> 11 10 #include <linux/pnp.h> 12 11 #include <linux/device.h> 13 12 #include <linux/init.h> ··· 22 23 {"", 0} 23 24 }; 24 25 25 - #ifdef CONFIG_ACPI 26 - static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc) 27 - { 28 - u8 space_id = io ? ACPI_ADR_SPACE_SYSTEM_IO : ACPI_ADR_SPACE_SYSTEM_MEMORY; 29 - return !acpi_reserve_region(start, length, space_id, IORESOURCE_BUSY, desc); 30 - } 31 - #else 32 - static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc) 33 - { 34 - struct resource *res; 35 - 36 - res = io ? request_region(start, length, desc) : 37 - request_mem_region(start, length, desc); 38 - if (res) { 39 - res->flags &= ~IORESOURCE_BUSY; 40 - return true; 41 - } 42 - return false; 43 - } 44 - #endif 45 - 46 26 static void reserve_range(struct pnp_dev *dev, struct resource *r, int port) 47 27 { 48 28 char *regionid; 49 29 const char *pnpid = dev_name(&dev->dev); 50 30 resource_size_t start = r->start, end = r->end; 51 - bool reserved; 31 + struct resource *res; 52 32 53 33 regionid = kmalloc(16, GFP_KERNEL); 54 34 if (!regionid) 55 35 return; 56 36 57 37 snprintf(regionid, 16, "pnp %s", pnpid); 58 - reserved = __reserve_range(start, end - start + 1, !!port, regionid); 59 - if (!reserved) 38 + if (port) 39 + res = request_region(start, end - start + 1, regionid); 40 + else 41 + res = request_mem_region(start, end - start + 1, regionid); 42 + if (res) 43 + res->flags &= ~IORESOURCE_BUSY; 44 + else 60 45 kfree(regionid); 61 46 62 47 /* ··· 49 66 * have double reservations. 50 67 */ 51 68 dev_info(&dev->dev, "%pR %s reserved\n", r, 52 - reserved ? "has been" : "could not be"); 69 + res ? "has been" : "could not be"); 53 70 } 54 71 55 72 static void reserve_resources_of_dev(struct pnp_dev *dev)
+1 -1
drivers/rtc/rtc-armada38x.c
··· 88 88 { 89 89 struct armada38x_rtc *rtc = dev_get_drvdata(dev); 90 90 int ret = 0; 91 - unsigned long time, flags; 91 + unsigned long time; 92 92 93 93 ret = rtc_tm_to_time(tm, &time); 94 94
+2 -2
drivers/rtc/rtc-mt6397.c
··· 343 343 goto out_dispose_irq; 344 344 } 345 345 346 + device_init_wakeup(&pdev->dev, 1); 347 + 346 348 rtc->rtc_dev = rtc_device_register("mt6397-rtc", &pdev->dev, 347 349 &mtk_rtc_ops, THIS_MODULE); 348 350 if (IS_ERR(rtc->rtc_dev)) { ··· 352 350 ret = PTR_ERR(rtc->rtc_dev); 353 351 goto out_free_irq; 354 352 } 355 - 356 - device_init_wakeup(&pdev->dev, 1); 357 353 358 354 return 0; 359 355
+29 -7
drivers/s390/block/dasd.c
··· 1863 1863 } 1864 1864 1865 1865 /* 1866 + * return 1 when device is not eligible for IO 1867 + */ 1868 + static int __dasd_device_is_unusable(struct dasd_device *device, 1869 + struct dasd_ccw_req *cqr) 1870 + { 1871 + int mask = ~(DASD_STOPPED_DC_WAIT | DASD_UNRESUMED_PM); 1872 + 1873 + if (test_bit(DASD_FLAG_OFFLINE, &device->flags)) { 1874 + /* dasd is being set offline. */ 1875 + return 1; 1876 + } 1877 + if (device->stopped) { 1878 + if (device->stopped & mask) { 1879 + /* stopped and CQR will not change that. */ 1880 + return 1; 1881 + } 1882 + if (!test_bit(DASD_CQR_VERIFY_PATH, &cqr->flags)) { 1883 + /* CQR is not able to change device to 1884 + * operational. */ 1885 + return 1; 1886 + } 1887 + /* CQR required to get device operational. */ 1888 + } 1889 + return 0; 1890 + } 1891 + 1892 + /* 1866 1893 * Take a look at the first request on the ccw queue and check 1867 1894 * if it needs to be started. 1868 1895 */ ··· 1903 1876 cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, devlist); 1904 1877 if (cqr->status != DASD_CQR_QUEUED) 1905 1878 return; 1906 - /* when device is stopped, return request to previous layer 1907 - * exception: only the disconnect or unresumed bits are set and the 1908 - * cqr is a path verification request 1909 - */ 1910 - if (device->stopped && 1911 - !(!(device->stopped & ~(DASD_STOPPED_DC_WAIT | DASD_UNRESUMED_PM)) 1912 - && test_bit(DASD_CQR_VERIFY_PATH, &cqr->flags))) { 1879 + /* if device is not usable return request to upper layer */ 1880 + if (__dasd_device_is_unusable(device, cqr)) { 1913 1881 cqr->intrc = -EAGAIN; 1914 1882 cqr->status = DASD_CQR_CLEARED; 1915 1883 dasd_schedule_device_bh(device);
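The dasd.c hunk above replaces a hard-to-read compound condition with the extracted predicate __dasd_device_is_unusable: a stopped device may still accept a request only when the sole stop reasons are disconnect/unresumed-PM *and* the request is a path-verification CQR. The decision table can be modeled in plain C (the flag values below are illustrative stand-ins, not the kernel's actual bit layout):

```c
#include <assert.h>
#include <stdbool.h>

#define DASD_STOPPED_DC_WAIT 0x04	/* illustrative values, not the */
#define DASD_UNRESUMED_PM    0x10	/* kernel's real bit assignments */

/* Model of the extracted predicate: offline devices are always
 * unusable; stopped devices are unusable unless the only stop bits
 * are DC_WAIT/UNRESUMED_PM and the CQR is a path-verification one. */
static bool device_is_unusable(bool offline, int stopped, bool verify_path)
{
	int mask = ~(DASD_STOPPED_DC_WAIT | DASD_UNRESUMED_PM);

	if (offline)
		return true;
	if (stopped) {
		if (stopped & mask)
			return true;
		if (!verify_path)
			return true;
	}
	return false;
}
```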
+2 -1
drivers/s390/block/dasd_alias.c
··· 699 699 struct dasd_device, alias_list); 700 700 spin_unlock_irqrestore(&lcu->lock, flags); 701 701 alias_priv = (struct dasd_eckd_private *) alias_device->private; 702 - if ((alias_priv->count < private->count) && !alias_device->stopped) 702 + if ((alias_priv->count < private->count) && !alias_device->stopped && 703 + !test_bit(DASD_FLAG_OFFLINE, &alias_device->flags)) 703 704 return alias_device; 704 705 else 705 706 return NULL;
+1
drivers/s390/char/sclp_early.c
··· 7 7 #define KMSG_COMPONENT "sclp_early" 8 8 #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt 9 9 10 + #include <linux/errno.h> 10 11 #include <asm/ctl_reg.h> 11 12 #include <asm/sclp.h> 12 13 #include <asm/ipl.h>
+7
drivers/s390/crypto/zcrypt_api.c
··· 54 54 "Copyright IBM Corp. 2001, 2012"); 55 55 MODULE_LICENSE("GPL"); 56 56 57 + static int zcrypt_hwrng_seed = 1; 58 + module_param_named(hwrng_seed, zcrypt_hwrng_seed, int, S_IRUSR|S_IRGRP); 59 + MODULE_PARM_DESC(hwrng_seed, "Turn on/off hwrng auto seed, default is 1 (on)."); 60 + 57 61 static DEFINE_SPINLOCK(zcrypt_device_lock); 58 62 static LIST_HEAD(zcrypt_device_list); 59 63 static int zcrypt_device_count = 0; ··· 1377 1373 static struct hwrng zcrypt_rng_dev = { 1378 1374 .name = "zcrypt", 1379 1375 .data_read = zcrypt_rng_data_read, 1376 + .quality = 990, 1380 1377 }; 1381 1378 1382 1379 static int zcrypt_rng_device_add(void) ··· 1392 1387 goto out; 1393 1388 } 1394 1389 zcrypt_rng_buffer_index = 0; 1390 + if (!zcrypt_hwrng_seed) 1391 + zcrypt_rng_dev.quality = 0; 1395 1392 rc = hwrng_register(&zcrypt_rng_dev); 1396 1393 if (rc) 1397 1394 goto out_free;
+1 -1
drivers/scsi/scsi_sysfs.c
··· 859 859 860 860 depth = simple_strtoul(buf, NULL, 0); 861 861 862 - if (depth < 1 || depth > sht->can_queue) 862 + if (depth < 1 || depth > sdev->host->can_queue) 863 863 return -EINVAL; 864 864 865 865 retval = sht->change_queue_depth(sdev, depth);
+2 -1
drivers/scsi/scsi_transport_srp.c
··· 203 203 return tmo >= 0 ? sprintf(buf, "%d\n", tmo) : sprintf(buf, "off\n"); 204 204 } 205 205 206 - static int srp_parse_tmo(int *tmo, const char *buf) 206 + int srp_parse_tmo(int *tmo, const char *buf) 207 207 { 208 208 int res = 0; 209 209 ··· 214 214 215 215 return res; 216 216 } 217 + EXPORT_SYMBOL(srp_parse_tmo); 217 218 218 219 static ssize_t show_reconnect_delay(struct device *dev, 219 220 struct device_attribute *attr, char *buf)
+1 -1
drivers/scsi/st.c
··· 1329 1329 spin_lock(&st_use_lock); 1330 1330 STp->in_use = 0; 1331 1331 spin_unlock(&st_use_lock); 1332 - scsi_tape_put(STp); 1333 1332 if (resumed) 1334 1333 scsi_autopm_put_device(STp->device); 1334 + scsi_tape_put(STp); 1335 1335 return retval; 1336 1336 1337 1337 }
+1 -1
drivers/staging/board/Kconfig
··· 1 1 config STAGING_BOARD 2 2 bool "Staging Board Support" 3 - depends on OF_ADDRESS 3 + depends on OF_ADDRESS && OF_IRQ && CLKDEV_LOOKUP 4 4 help 5 5 Select to enable per-board staging support code. 6 6
+1 -1
drivers/staging/vt6655/device_main.c
··· 1411 1411 1412 1412 priv->current_aid = conf->aid; 1413 1413 1414 - if (changed & BSS_CHANGED_BSSID) { 1414 + if (changed & BSS_CHANGED_BSSID && conf->bssid) { 1415 1415 unsigned long flags; 1416 1416 1417 1417 spin_lock_irqsave(&priv->lock, flags);
+1 -1
drivers/staging/vt6656/main_usb.c
··· 701 701 702 702 priv->current_aid = conf->aid; 703 703 704 - if (changed & BSS_CHANGED_BSSID) 704 + if (changed & BSS_CHANGED_BSSID && conf->bssid) 705 705 vnt_mac_set_bssid_addr(priv, (u8 *)conf->bssid); 706 706 707 707
+15 -40
drivers/usb/dwc2/core.c
··· 72 72 dev_dbg(hsotg->dev, "%s\n", __func__); 73 73 74 74 /* Backup Host regs */ 75 - hr = hsotg->hr_backup; 76 - if (!hr) { 77 - hr = devm_kzalloc(hsotg->dev, sizeof(*hr), GFP_KERNEL); 78 - if (!hr) { 79 - dev_err(hsotg->dev, "%s: can't allocate host regs\n", 80 - __func__); 81 - return -ENOMEM; 82 - } 83 - 84 - hsotg->hr_backup = hr; 85 - } 75 + hr = &hsotg->hr_backup; 86 76 hr->hcfg = readl(hsotg->regs + HCFG); 87 77 hr->haintmsk = readl(hsotg->regs + HAINTMSK); 88 78 for (i = 0; i < hsotg->core_params->host_channels; ++i) ··· 80 90 81 91 hr->hprt0 = readl(hsotg->regs + HPRT0); 82 92 hr->hfir = readl(hsotg->regs + HFIR); 93 + hr->valid = true; 83 94 84 95 return 0; 85 96 } ··· 100 109 dev_dbg(hsotg->dev, "%s\n", __func__); 101 110 102 111 /* Restore host regs */ 103 - hr = hsotg->hr_backup; 104 - if (!hr) { 112 + hr = &hsotg->hr_backup; 113 + if (!hr->valid) { 105 114 dev_err(hsotg->dev, "%s: no host registers to restore\n", 106 115 __func__); 107 116 return -EINVAL; 108 117 } 118 + hr->valid = false; 109 119 110 120 writel(hr->hcfg, hsotg->regs + HCFG); 111 121 writel(hr->haintmsk, hsotg->regs + HAINTMSK); ··· 144 152 dev_dbg(hsotg->dev, "%s\n", __func__); 145 153 146 154 /* Backup dev regs */ 147 - dr = hsotg->dr_backup; 148 - if (!dr) { 149 - dr = devm_kzalloc(hsotg->dev, sizeof(*dr), GFP_KERNEL); 150 - if (!dr) { 151 - dev_err(hsotg->dev, "%s: can't allocate device regs\n", 152 - __func__); 153 - return -ENOMEM; 154 - } 155 - 156 - hsotg->dr_backup = dr; 157 - } 155 + dr = &hsotg->dr_backup; 158 156 159 157 dr->dcfg = readl(hsotg->regs + DCFG); 160 158 dr->dctl = readl(hsotg->regs + DCTL); ··· 177 195 dr->doeptsiz[i] = readl(hsotg->regs + DOEPTSIZ(i)); 178 196 dr->doepdma[i] = readl(hsotg->regs + DOEPDMA(i)); 179 197 } 180 - 198 + dr->valid = true; 181 199 return 0; 182 200 } 183 201 ··· 197 215 dev_dbg(hsotg->dev, "%s\n", __func__); 198 216 199 217 /* Restore dev regs */ 200 - dr = hsotg->dr_backup; 201 - if (!dr) { 218 + dr = &hsotg->dr_backup; 219 + if 
(!dr->valid) { 202 220 dev_err(hsotg->dev, "%s: no device registers to restore\n", 203 221 __func__); 204 222 return -EINVAL; 205 223 } 224 + dr->valid = false; 206 225 207 226 writel(dr->dcfg, hsotg->regs + DCFG); 208 227 writel(dr->dctl, hsotg->regs + DCTL); ··· 251 268 int i; 252 269 253 270 /* Backup global regs */ 254 - gr = hsotg->gr_backup; 255 - if (!gr) { 256 - gr = devm_kzalloc(hsotg->dev, sizeof(*gr), GFP_KERNEL); 257 - if (!gr) { 258 - dev_err(hsotg->dev, "%s: can't allocate global regs\n", 259 - __func__); 260 - return -ENOMEM; 261 - } 262 - 263 - hsotg->gr_backup = gr; 264 - } 271 + gr = &hsotg->gr_backup; 265 272 266 273 gr->gotgctl = readl(hsotg->regs + GOTGCTL); 267 274 gr->gintmsk = readl(hsotg->regs + GINTMSK); ··· 264 291 for (i = 0; i < MAX_EPS_CHANNELS; i++) 265 292 gr->dtxfsiz[i] = readl(hsotg->regs + DPTXFSIZN(i)); 266 293 294 + gr->valid = true; 267 295 return 0; 268 296 } 269 297 ··· 283 309 dev_dbg(hsotg->dev, "%s\n", __func__); 284 310 285 311 /* Restore global regs */ 286 - gr = hsotg->gr_backup; 287 - if (!gr) { 312 + gr = &hsotg->gr_backup; 313 + if (!gr->valid) { 288 314 dev_err(hsotg->dev, "%s: no global registers to restore\n", 289 315 __func__); 290 316 return -EINVAL; 291 317 } 318 + gr->valid = false; 292 319 293 320 writel(0xffffffff, hsotg->regs + GINTSTS); 294 321 writel(gr->gotgctl, hsotg->regs + GOTGCTL);
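The dwc2/core.c diff above swaps lazily devm_kzalloc'd backup structs for ones embedded in the hsotg and gated by a `valid` flag: save can no longer fail on allocation, and restore both refuses never-saved state and consumes the snapshot so it cannot be replayed. A minimal sketch of that pattern with a single register (names and the -1 error code are illustrative; the kernel returns -EINVAL):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct regs_backup {
	uint32_t hcfg;
	bool valid;
};

static void backup_save(struct regs_backup *b, uint32_t hcfg)
{
	b->hcfg = hcfg;
	b->valid = true;		/* snapshot is now restorable */
}

static int backup_restore(struct regs_backup *b, uint32_t *hcfg)
{
	if (!b->valid)
		return -1;		/* nothing saved (kernel: -EINVAL) */
	b->valid = false;		/* consume, as the diff does */
	*hcfg = b->hcfg;
	return 0;
}

/* Exercise the lifecycle: restore-before-save fails, the first restore
 * after a save returns the snapshot, a second restore fails again. */
static int backup_roundtrip(uint32_t hcfg)
{
	struct regs_backup b = { .valid = false };
	uint32_t out = 0;

	if (backup_restore(&b, &out) == 0)
		return -1;
	backup_save(&b, hcfg);
	if (backup_restore(&b, &out) != 0 || out != hcfg)
		return -2;
	if (backup_restore(&b, &out) == 0)
		return -3;
	return 0;
}
```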
+6 -3
drivers/usb/dwc2/core.h
··· 492 492 u32 gdfifocfg; 493 493 u32 dtxfsiz[MAX_EPS_CHANNELS]; 494 494 u32 gpwrdn; 495 + bool valid; 495 496 }; 496 497 497 498 /** ··· 522 521 u32 doepctl[MAX_EPS_CHANNELS]; 523 522 u32 doeptsiz[MAX_EPS_CHANNELS]; 524 523 u32 doepdma[MAX_EPS_CHANNELS]; 524 + bool valid; 525 525 }; 526 526 527 527 /** ··· 540 538 u32 hcintmsk[MAX_EPS_CHANNELS]; 541 539 u32 hprt0; 542 540 u32 hfir; 541 + bool valid; 543 542 }; 544 543 545 544 /** ··· 708 705 struct work_struct wf_otg; 709 706 struct timer_list wkp_timer; 710 707 enum dwc2_lx_state lx_state; 711 - struct dwc2_gregs_backup *gr_backup; 712 - struct dwc2_dregs_backup *dr_backup; 713 - struct dwc2_hregs_backup *hr_backup; 708 + struct dwc2_gregs_backup gr_backup; 709 + struct dwc2_dregs_backup dr_backup; 710 + struct dwc2_hregs_backup hr_backup; 714 711 715 712 struct dentry *debug_root; 716 713 struct debugfs_regset32 *regset;
+43 -14
drivers/usb/dwc2/hcd.c
··· 359 359 360 360 /* Caller must hold driver lock */ 361 361 static int dwc2_hcd_urb_enqueue(struct dwc2_hsotg *hsotg, 362 - struct dwc2_hcd_urb *urb, void **ep_handle, 363 - gfp_t mem_flags) 362 + struct dwc2_hcd_urb *urb, struct dwc2_qh *qh, 363 + struct dwc2_qtd *qtd) 364 364 { 365 - struct dwc2_qtd *qtd; 366 365 u32 intr_mask; 367 366 int retval; 368 367 int dev_speed; ··· 385 386 return -ENODEV; 386 387 } 387 388 388 - qtd = kzalloc(sizeof(*qtd), mem_flags); 389 389 if (!qtd) 390 - return -ENOMEM; 390 + return -EINVAL; 391 391 392 392 dwc2_hcd_qtd_init(qtd, urb); 393 - retval = dwc2_hcd_qtd_add(hsotg, qtd, (struct dwc2_qh **)ep_handle, 394 - mem_flags); 393 + retval = dwc2_hcd_qtd_add(hsotg, qtd, qh); 395 394 if (retval) { 396 395 dev_err(hsotg->dev, 397 396 "DWC OTG HCD URB Enqueue failed adding QTD. Error status %d\n", 398 397 retval); 399 - kfree(qtd); 400 398 return retval; 401 399 } 402 400 ··· 2441 2445 u32 tflags = 0; 2442 2446 void *buf; 2443 2447 unsigned long flags; 2448 + struct dwc2_qh *qh; 2449 + bool qh_allocated = false; 2450 + struct dwc2_qtd *qtd; 2444 2451 2445 2452 if (dbg_urb(urb)) { 2446 2453 dev_vdbg(hsotg->dev, "DWC OTG HCD URB Enqueue\n"); ··· 2522 2523 urb->iso_frame_desc[i].length); 2523 2524 2524 2525 urb->hcpriv = dwc2_urb; 2526 + qh = (struct dwc2_qh *) ep->hcpriv; 2527 + /* Create QH for the endpoint if it doesn't exist */ 2528 + if (!qh) { 2529 + qh = dwc2_hcd_qh_create(hsotg, dwc2_urb, mem_flags); 2530 + if (!qh) { 2531 + retval = -ENOMEM; 2532 + goto fail0; 2533 + } 2534 + ep->hcpriv = qh; 2535 + qh_allocated = true; 2536 + } 2537 + 2538 + qtd = kzalloc(sizeof(*qtd), mem_flags); 2539 + if (!qtd) { 2540 + retval = -ENOMEM; 2541 + goto fail1; 2542 + } 2525 2543 2526 2544 spin_lock_irqsave(&hsotg->lock, flags); 2527 2545 retval = usb_hcd_link_urb_to_ep(hcd, urb); 2528 2546 if (retval) 2529 - goto fail1; 2530 - 2531 - retval = dwc2_hcd_urb_enqueue(hsotg, dwc2_urb, &ep->hcpriv, mem_flags); 2532 - if (retval) 2533 2547 goto fail2; 
2548 + 2549 + retval = dwc2_hcd_urb_enqueue(hsotg, dwc2_urb, qh, qtd); 2550 + if (retval) 2551 + goto fail3; 2534 2552 2535 2553 if (alloc_bandwidth) { 2536 2554 dwc2_allocate_bus_bandwidth(hcd, ··· 2559 2543 2560 2544 return 0; 2561 2545 2562 - fail2: 2546 + fail3: 2563 2547 dwc2_urb->priv = NULL; 2564 2548 usb_hcd_unlink_urb_from_ep(hcd, urb); 2565 - fail1: 2549 + fail2: 2566 2550 spin_unlock_irqrestore(&hsotg->lock, flags); 2567 2551 urb->hcpriv = NULL; 2552 + kfree(qtd); 2553 + fail1: 2554 + if (qh_allocated) { 2555 + struct dwc2_qtd *qtd2, *qtd2_tmp; 2556 + 2557 + ep->hcpriv = NULL; 2558 + dwc2_hcd_qh_unlink(hsotg, qh); 2559 + /* Free each QTD in the QH's QTD list */ 2560 + list_for_each_entry_safe(qtd2, qtd2_tmp, &qh->qtd_list, 2561 + qtd_list_entry) 2562 + dwc2_hcd_qtd_unlink_and_free(hsotg, qtd2, qh); 2563 + dwc2_hcd_qh_free(hsotg, qh); 2564 + } 2568 2565 fail0: 2569 2566 kfree(dwc2_urb); 2570 2567
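The reworked dwc2_hcd_urb_enqueue above grows a fail0..fail3 goto ladder so each failure point releases exactly what earlier steps acquired (qtd, URB link, QH if this call created it). The generic shape of that idiom, detached from the driver (names and the `fail_step` trigger are purely illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Acquire three resources in order; on failure at step N, fall through
 * the labels to release everything acquired before step N. free(NULL)
 * is a no-op, which keeps the ladder simple. */
static int acquire_three(int fail_step)
{
	void *a = NULL, *b = NULL, *c = NULL;

	a = malloc(16);
	if (!a || fail_step == 1)
		goto fail1;
	b = malloc(16);
	if (!b || fail_step == 2)
		goto fail2;
	c = malloc(16);
	if (!c || fail_step == 3)
		goto fail3;

	/* success: in a real driver the resources would now be owned by
	 * the caller; here we just release them and report success */
	free(c);
	free(b);
	free(a);
	return 0;

fail3:
	free(c);
fail2:
	free(b);
fail1:
	free(a);
	return -1;
}
```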
+4 -1
drivers/usb/dwc2/hcd.h
··· 463 463 /* Schedule Queue Functions */ 464 464 /* Implemented in hcd_queue.c */ 465 465 extern void dwc2_hcd_init_usecs(struct dwc2_hsotg *hsotg); 466 + extern struct dwc2_qh *dwc2_hcd_qh_create(struct dwc2_hsotg *hsotg, 467 + struct dwc2_hcd_urb *urb, 468 + gfp_t mem_flags); 466 469 extern void dwc2_hcd_qh_free(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh); 467 470 extern int dwc2_hcd_qh_add(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh); 468 471 extern void dwc2_hcd_qh_unlink(struct dwc2_hsotg *hsotg, struct dwc2_qh *qh); ··· 474 471 475 472 extern void dwc2_hcd_qtd_init(struct dwc2_qtd *qtd, struct dwc2_hcd_urb *urb); 476 473 extern int dwc2_hcd_qtd_add(struct dwc2_hsotg *hsotg, struct dwc2_qtd *qtd, 477 - struct dwc2_qh **qh, gfp_t mem_flags); 474 + struct dwc2_qh *qh); 478 475 479 476 /* Unlinks and frees a QTD */ 480 477 static inline void dwc2_hcd_qtd_unlink_and_free(struct dwc2_hsotg *hsotg,
+12 -37
drivers/usb/dwc2/hcd_queue.c
··· 191 191 * 192 192 * Return: Pointer to the newly allocated QH, or NULL on error 193 193 */ 194 - static struct dwc2_qh *dwc2_hcd_qh_create(struct dwc2_hsotg *hsotg, 194 + struct dwc2_qh *dwc2_hcd_qh_create(struct dwc2_hsotg *hsotg, 195 195 struct dwc2_hcd_urb *urb, 196 196 gfp_t mem_flags) 197 197 { ··· 767 767 * 768 768 * @hsotg: The DWC HCD structure 769 769 * @qtd: The QTD to add 770 - * @qh: Out parameter to return queue head 771 - * @atomic_alloc: Flag to do atomic alloc if needed 770 + * @qh: Queue head to add qtd to 772 771 * 773 772 * Return: 0 if successful, negative error code otherwise 774 773 * 775 - * Finds the correct QH to place the QTD into. If it does not find a QH, it 776 - * will create a new QH. If the QH to which the QTD is added is not currently 777 - * scheduled, it is placed into the proper schedule based on its EP type. 774 + * If the QH to which the QTD is added is not currently scheduled, it is placed 775 + * into the proper schedule based on its EP type. 778 776 */ 779 777 int dwc2_hcd_qtd_add(struct dwc2_hsotg *hsotg, struct dwc2_qtd *qtd, 780 - struct dwc2_qh **qh, gfp_t mem_flags) 778 + struct dwc2_qh *qh) 781 779 { 782 - struct dwc2_hcd_urb *urb = qtd->urb; 783 - int allocated = 0; 784 780 int retval; 785 781 786 - /* 787 - * Get the QH which holds the QTD-list to insert to. Create QH if it 788 - * doesn't exist. 
789 - */ 790 - if (*qh == NULL) { 791 - *qh = dwc2_hcd_qh_create(hsotg, urb, mem_flags); 792 - if (*qh == NULL) 793 - return -ENOMEM; 794 - allocated = 1; 782 + if (unlikely(!qh)) { 783 + dev_err(hsotg->dev, "%s: Invalid QH\n", __func__); 784 + retval = -EINVAL; 785 + goto fail; 795 786 } 796 787 797 - retval = dwc2_hcd_qh_add(hsotg, *qh); 788 + retval = dwc2_hcd_qh_add(hsotg, qh); 798 789 if (retval) 799 790 goto fail; 800 791 801 - qtd->qh = *qh; 802 - list_add_tail(&qtd->qtd_list_entry, &(*qh)->qtd_list); 792 + qtd->qh = qh; 793 + list_add_tail(&qtd->qtd_list_entry, &qh->qtd_list); 803 794 804 795 return 0; 805 - 806 796 fail: 807 - if (allocated) { 808 - struct dwc2_qtd *qtd2, *qtd2_tmp; 809 - struct dwc2_qh *qh_tmp = *qh; 810 - 811 - *qh = NULL; 812 - dwc2_hcd_qh_unlink(hsotg, qh_tmp); 813 - 814 - /* Free each QTD in the QH's QTD list */ 815 - list_for_each_entry_safe(qtd2, qtd2_tmp, &qh_tmp->qtd_list, 816 - qtd_list_entry) 817 - dwc2_hcd_qtd_unlink_and_free(hsotg, qtd2, qh_tmp); 818 - 819 - dwc2_hcd_qh_free(hsotg, qh_tmp); 820 - } 821 - 822 797 return retval; 823 798 }
+4 -2
drivers/usb/dwc3/core.c
··· 446 446 /* Select the HS PHY interface */ 447 447 switch (DWC3_GHWPARAMS3_HSPHY_IFC(dwc->hwparams.hwparams3)) { 448 448 case DWC3_GHWPARAMS3_HSPHY_IFC_UTMI_ULPI: 449 - if (!strncmp(dwc->hsphy_interface, "utmi", 4)) { 449 + if (dwc->hsphy_interface && 450 + !strncmp(dwc->hsphy_interface, "utmi", 4)) { 450 451 reg &= ~DWC3_GUSB2PHYCFG_ULPI_UTMI; 451 452 break; 452 - } else if (!strncmp(dwc->hsphy_interface, "ulpi", 4)) { 453 + } else if (dwc->hsphy_interface && 454 + !strncmp(dwc->hsphy_interface, "ulpi", 4)) { 453 455 reg |= DWC3_GUSB2PHYCFG_ULPI_UTMI; 454 456 dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg); 455 457 } else {
+7 -4
drivers/usb/gadget/composite.c
··· 1758 1758 * take such requests too, if that's ever needed: to work 1759 1759 * in config 0, etc. 1760 1760 */ 1761 - list_for_each_entry(f, &cdev->config->functions, list) 1762 - if (f->req_match && f->req_match(f, ctrl)) 1763 - goto try_fun_setup; 1764 - f = NULL; 1761 + if (cdev->config) { 1762 + list_for_each_entry(f, &cdev->config->functions, list) 1763 + if (f->req_match && f->req_match(f, ctrl)) 1764 + goto try_fun_setup; 1765 + f = NULL; 1766 + } 1767 + 1765 1768 switch (ctrl->bRequestType & USB_RECIP_MASK) { 1766 1769 case USB_RECIP_INTERFACE: 1767 1770 if (!cdev->config || intf >= MAX_CONFIG_INTERFACES)
+1 -1
drivers/usb/gadget/configfs.c
··· 571 571 if (IS_ERR(fi)) 572 572 return ERR_CAST(fi); 573 573 574 - ret = config_item_set_name(&fi->group.cg_item, name); 574 + ret = config_item_set_name(&fi->group.cg_item, "%s", name); 575 575 if (ret) { 576 576 usb_put_function_instance(fi); 577 577 return ERR_PTR(ret);
+4 -2
drivers/usb/gadget/function/f_fs.c
··· 924 924 925 925 kiocb->private = p; 926 926 927 - kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 927 + if (p->aio) 928 + kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 928 929 929 930 res = ffs_epfile_io(kiocb->ki_filp, p); 930 931 if (res == -EIOCBQUEUED) ··· 969 968 970 969 kiocb->private = p; 971 970 972 - kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 971 + if (p->aio) 972 + kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 973 973 974 974 res = ffs_epfile_io(kiocb->ki_filp, p); 975 975 if (res == -EIOCBQUEUED)
+13 -3
drivers/usb/gadget/function/f_mass_storage.c
··· 2786 2786 return -EINVAL; 2787 2787 } 2788 2788 2789 - curlun = kcalloc(nluns, sizeof(*curlun), GFP_KERNEL); 2789 + curlun = kcalloc(FSG_MAX_LUNS, sizeof(*curlun), GFP_KERNEL); 2790 2790 if (unlikely(!curlun)) 2791 2791 return -ENOMEM; 2792 2792 ··· 2795 2795 2796 2796 common->luns = curlun; 2797 2797 common->nluns = nluns; 2798 - 2799 - pr_info("Number of LUNs=%d\n", common->nluns); 2800 2798 2801 2799 return 0; 2802 2800 } ··· 3561 3563 struct fsg_opts *opts = fsg_opts_from_func_inst(fi); 3562 3564 struct fsg_common *common = opts->common; 3563 3565 struct fsg_dev *fsg; 3566 + unsigned nluns, i; 3564 3567 3565 3568 fsg = kzalloc(sizeof(*fsg), GFP_KERNEL); 3566 3569 if (unlikely(!fsg)) 3567 3570 return ERR_PTR(-ENOMEM); 3568 3571 3569 3572 mutex_lock(&opts->lock); 3573 + if (!opts->refcnt) { 3574 + for (nluns = i = 0; i < FSG_MAX_LUNS; ++i) 3575 + if (common->luns[i]) 3576 + nluns = i + 1; 3577 + if (!nluns) 3578 + pr_warn("No LUNS defined, continuing anyway\n"); 3579 + else 3580 + common->nluns = nluns; 3581 + pr_info("Number of LUNs=%u\n", common->nluns); 3582 + } 3570 3583 opts->refcnt++; 3571 3584 mutex_unlock(&opts->lock); 3585 + 3572 3586 fsg->function.name = FSG_DRIVER_DESC; 3573 3587 fsg->function.bind = fsg_bind; 3574 3588 fsg->function.unbind = fsg_unbind;
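The f_mass_storage change always allocates `FSG_MAX_LUNS` slots and, at first bind, recomputes `nluns` as the highest populated slot plus one, so LUNs created sparsely via configfs are not truncated. A minimal model of that scan, with a stand-in for `FSG_MAX_LUNS`:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_LUNS 8	/* stand-in for FSG_MAX_LUNS */

/* The effective LUN count is the highest occupied index plus one,
 * even when earlier or intermediate slots are empty. */
static unsigned int count_luns(void *luns[])
{
	unsigned int nluns = 0, i;

	for (i = 0; i < MAX_LUNS; ++i)
		if (luns[i])
			nluns = i + 1;
	return nluns;
}

static int lun_demo(void)
{
	int a, b;
	void *luns[MAX_LUNS] = { &a, NULL, NULL, &b };	/* slots 0 and 3 used */

	return count_luns(luns) == 4;	/* highest slot is 3, so nluns = 4 */
}
```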
+1 -3
drivers/usb/gadget/function/f_midi.c
··· 1145 1145 if (opts->id && !midi->id) { 1146 1146 status = -ENOMEM; 1147 1147 mutex_unlock(&opts->lock); 1148 - goto kstrdup_fail; 1148 + goto setup_fail; 1149 1149 } 1150 1150 midi->in_ports = opts->in_ports; 1151 1151 midi->out_ports = opts->out_ports; ··· 1164 1164 1165 1165 return &midi->func; 1166 1166 1167 - kstrdup_fail: 1168 - f_midi_unregister_card(midi); 1169 1167 setup_fail: 1170 1168 for (--i; i >= 0; i--) 1171 1169 kfree(midi->in_port[i]);
+1 -2
drivers/usb/gadget/udc/fotg210-udc.c
··· 1171 1171 udc_name, fotg210); 1172 1172 if (ret < 0) { 1173 1173 pr_err("request_irq error (%d)\n", ret); 1174 - goto err_irq; 1174 + goto err_req; 1175 1175 } 1176 1176 1177 1177 ret = usb_add_gadget_udc(&pdev->dev, &fotg210->gadget); ··· 1183 1183 return 0; 1184 1184 1185 1185 err_add_udc: 1186 - err_irq: 1187 1186 free_irq(ires->start, fotg210); 1188 1187 1189 1188 err_req:
+1 -3
drivers/usb/musb/musb_virthub.c
··· 275 275 #ifdef CONFIG_USB_MUSB_HOST 276 276 return 1; 277 277 #else 278 - if (musb->port_mode == MUSB_PORT_MODE_HOST) 279 - return 1; 280 - return musb->g.dev.driver != NULL; 278 + return musb->port_mode == MUSB_PORT_MODE_HOST; 281 279 #endif 282 280 } 283 281
+3
drivers/usb/phy/phy-mxs-usb.c
··· 217 217 { 218 218 unsigned int vbus_value; 219 219 220 + if (!mxs_phy->regmap_anatop) 221 + return false; 222 + 220 223 if (mxs_phy->port_id == 0) 221 224 regmap_read(mxs_phy->regmap_anatop, 222 225 ANADIG_USB1_VBUS_DET_STAT,
+1
drivers/usb/serial/cp210x.c
··· 187 187 { USB_DEVICE(0x1FB9, 0x0602) }, /* Lake Shore Model 648 Magnet Power Supply */ 188 188 { USB_DEVICE(0x1FB9, 0x0700) }, /* Lake Shore Model 737 VSM Controller */ 189 189 { USB_DEVICE(0x1FB9, 0x0701) }, /* Lake Shore Model 776 Hall Matrix */ 190 + { USB_DEVICE(0x2626, 0xEA60) }, /* Aruba Networks 7xxx USB Serial Console */ 190 191 { USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */ 191 192 { USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */ 192 193 { USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */
+138 -115
drivers/usb/serial/mos7720.c
··· 121 121 static const unsigned int dummy; /* for clarity in register access fns */ 122 122 123 123 enum mos_regs { 124 - THR, /* serial port regs */ 125 - RHR, 126 - IER, 127 - FCR, 128 - ISR, 129 - LCR, 130 - MCR, 131 - LSR, 132 - MSR, 133 - SPR, 134 - DLL, 135 - DLM, 136 - DPR, /* parallel port regs */ 137 - DSR, 138 - DCR, 139 - ECR, 140 - SP1_REG, /* device control regs */ 141 - SP2_REG, /* serial port 2 (7720 only) */ 142 - PP_REG, 143 - SP_CONTROL_REG, 124 + MOS7720_THR, /* serial port regs */ 125 + MOS7720_RHR, 126 + MOS7720_IER, 127 + MOS7720_FCR, 128 + MOS7720_ISR, 129 + MOS7720_LCR, 130 + MOS7720_MCR, 131 + MOS7720_LSR, 132 + MOS7720_MSR, 133 + MOS7720_SPR, 134 + MOS7720_DLL, 135 + MOS7720_DLM, 136 + MOS7720_DPR, /* parallel port regs */ 137 + MOS7720_DSR, 138 + MOS7720_DCR, 139 + MOS7720_ECR, 140 + MOS7720_SP1_REG, /* device control regs */ 141 + MOS7720_SP2_REG, /* serial port 2 (7720 only) */ 142 + MOS7720_PP_REG, 143 + MOS7720_SP_CONTROL_REG, 144 144 }; 145 145 146 146 /* ··· 150 150 static inline __u16 get_reg_index(enum mos_regs reg) 151 151 { 152 152 static const __u16 mos7715_index_lookup_table[] = { 153 - 0x00, /* THR */ 154 - 0x00, /* RHR */ 155 - 0x01, /* IER */ 156 - 0x02, /* FCR */ 157 - 0x02, /* ISR */ 158 - 0x03, /* LCR */ 159 - 0x04, /* MCR */ 160 - 0x05, /* LSR */ 161 - 0x06, /* MSR */ 162 - 0x07, /* SPR */ 163 - 0x00, /* DLL */ 164 - 0x01, /* DLM */ 165 - 0x00, /* DPR */ 166 - 0x01, /* DSR */ 167 - 0x02, /* DCR */ 168 - 0x0a, /* ECR */ 169 - 0x01, /* SP1_REG */ 170 - 0x02, /* SP2_REG (7720 only) */ 171 - 0x04, /* PP_REG (7715 only) */ 172 - 0x08, /* SP_CONTROL_REG */ 153 + 0x00, /* MOS7720_THR */ 154 + 0x00, /* MOS7720_RHR */ 155 + 0x01, /* MOS7720_IER */ 156 + 0x02, /* MOS7720_FCR */ 157 + 0x02, /* MOS7720_ISR */ 158 + 0x03, /* MOS7720_LCR */ 159 + 0x04, /* MOS7720_MCR */ 160 + 0x05, /* MOS7720_LSR */ 161 + 0x06, /* MOS7720_MSR */ 162 + 0x07, /* MOS7720_SPR */ 163 + 0x00, /* MOS7720_DLL */ 164 + 0x01, /* MOS7720_DLM */ 165 + 0x00, /* MOS7720_DPR */ 166 + 0x01, /* MOS7720_DSR */ 167 + 0x02, /* MOS7720_DCR */ 168 + 0x0a, /* MOS7720_ECR */ 169 + 0x01, /* MOS7720_SP1_REG */ 170 + 0x02, /* MOS7720_SP2_REG (7720 only) */ 171 + 0x04, /* MOS7720_PP_REG (7715 only) */ 172 + 0x08, /* MOS7720_SP_CONTROL_REG */ 173 173 }; 174 174 return mos7715_index_lookup_table[reg]; 175 175 } ··· 181 181 static inline __u16 get_reg_value(enum mos_regs reg, 182 182 unsigned int serial_portnum) 183 183 { 184 - if (reg >= SP1_REG) /* control reg */ 184 + if (reg >= MOS7720_SP1_REG) /* control reg */ 185 185 return 0x0000; 186 186 187 - else if (reg >= DPR) /* parallel port reg (7715 only) */ 187 + else if (reg >= MOS7720_DPR) /* parallel port reg (7715 only) */ 188 188 return 0x0100; 189 189 190 190 else /* serial port reg */ ··· 252 252 enum mos7715_pp_modes mode) 253 253 { 254 254 mos_parport->shadowECR = mode; 255 - write_mos_reg(mos_parport->serial, dummy, ECR, mos_parport->shadowECR); 255 + write_mos_reg(mos_parport->serial, dummy, MOS7720_ECR, 256 + mos_parport->shadowECR); 256 257 return 0; 257 258 } ··· 487 486 if (parport_prologue(pp) < 0) 488 487 return; 489 488 mos7715_change_mode(mos_parport, SPP); 490 - write_mos_reg(mos_parport->serial, dummy, DPR, (__u8)d); 489 + write_mos_reg(mos_parport->serial, dummy, MOS7720_DPR, (__u8)d); 491 490 parport_epilogue(pp); 492 491 } ··· 498 497 499 498 if (parport_prologue(pp) < 0) 500 499 return 0; 501 - read_mos_reg(mos_parport->serial, dummy, DPR, &d); 500 + read_mos_reg(mos_parport->serial, dummy, MOS7720_DPR, &d); 502 501 parport_epilogue(pp); 503 502 return d; 504 503 } ··· 511 510 if (parport_prologue(pp) < 0) 512 511 return; 513 512 data = ((__u8)d & 0x0f) | (mos_parport->shadowDCR & 0xf0); 514 - write_mos_reg(mos_parport->serial, dummy, DCR, data); 513 + write_mos_reg(mos_parport->serial, dummy, MOS7720_DCR, data); 515 514 mos_parport->shadowDCR = data; 516 515 parport_epilogue(pp); 517 516 } ··· 544 543 if (parport_prologue(pp) < 0) 545 544 return 0; 546 545 mos_parport->shadowDCR = (mos_parport->shadowDCR & (~mask)) ^ val; 547 - write_mos_reg(mos_parport->serial, dummy, DCR, mos_parport->shadowDCR); 546 + write_mos_reg(mos_parport->serial, dummy, MOS7720_DCR, 547 + mos_parport->shadowDCR); 548 548 dcr = mos_parport->shadowDCR & 0x0f; 549 549 parport_epilogue(pp); 550 550 return dcr; ··· 583 581 return; 584 582 mos7715_change_mode(mos_parport, PS2); 585 583 mos_parport->shadowDCR &= ~0x20; 586 - write_mos_reg(mos_parport->serial, dummy, DCR, mos_parport->shadowDCR); 584 + write_mos_reg(mos_parport->serial, dummy, MOS7720_DCR, 585 + mos_parport->shadowDCR); 587 586 parport_epilogue(pp); 588 587 } ··· 596 593 return; 597 594 mos7715_change_mode(mos_parport, PS2); 598 595 mos_parport->shadowDCR |= 0x20; 599 - write_mos_reg(mos_parport->serial, dummy, DCR, mos_parport->shadowDCR); 596 + write_mos_reg(mos_parport->serial, dummy, MOS7720_DCR, 597 + mos_parport->shadowDCR); 600 598 parport_epilogue(pp); 601 599 } ··· 637 633 spin_unlock(&release_lock); 638 634 return; 639 635 } 640 - write_parport_reg_nonblock(mos_parport, DCR, mos_parport->shadowDCR); 641 - write_parport_reg_nonblock(mos_parport, ECR, mos_parport->shadowECR); 636 + write_parport_reg_nonblock(mos_parport, MOS7720_DCR, 637 + mos_parport->shadowDCR); 638 + write_parport_reg_nonblock(mos_parport, MOS7720_ECR, 639 + mos_parport->shadowECR); 642 640 spin_unlock(&release_lock); 643 641 } ··· 720 714 init_completion(&mos_parport->syncmsg_compl); 721 715 722 716 /* cycle parallel port reset bit */ 723 - write_mos_reg(mos_parport->serial, dummy, PP_REG, (__u8)0x80); 724 - write_mos_reg(mos_parport->serial, dummy, PP_REG, (__u8)0x00); 717 + write_mos_reg(mos_parport->serial, dummy, MOS7720_PP_REG, (__u8)0x80); 718 + write_mos_reg(mos_parport->serial, dummy, MOS7720_PP_REG, (__u8)0x00); 725 719 726 720 /* initialize device registers */ 727 721 mos_parport->shadowDCR = DCR_INIT_VAL; 728 - write_mos_reg(mos_parport->serial, dummy, DCR, mos_parport->shadowDCR); 722 + write_mos_reg(mos_parport->serial, dummy, MOS7720_DCR, 723 + mos_parport->shadowDCR); 729 724 mos_parport->shadowECR = ECR_INIT_VAL; 730 - write_mos_reg(mos_parport->serial, dummy, ECR, mos_parport->shadowECR); 725 + write_mos_reg(mos_parport->serial, dummy, MOS7720_ECR, 726 + mos_parport->shadowECR); 731 727 732 728 /* register with parport core */ 733 729 mos_parport->pp = parport_register_port(0, PARPORT_IRQ_NONE, ··· 1041 1033 /* Initialize MCS7720 -- Write Init values to corresponding Registers 1042 1034 * 1043 1035 * Register Index 1044 - * 0 : THR/RHR 1045 - * 1 : IER 1046 - * 2 : FCR 1047 - * 3 : LCR 1048 - * 4 : MCR 1049 - * 5 : LSR 1050 - * 6 : MSR 1051 - * 7 : SPR 1036 + * 0 : MOS7720_THR/MOS7720_RHR 1037 + * 1 : MOS7720_IER 1038 + * 2 : MOS7720_FCR 1039 + * 3 : MOS7720_LCR 1040 + * 4 : MOS7720_MCR 1041 + * 5 : MOS7720_LSR 1042 + * 6 : MOS7720_MSR 1043 + * 7 : MOS7720_SPR 1052 1044 * 1053 1045 * 0x08 : SP1/2 Control Reg 1054 1046 */ 1055 1047 port_number = port->port_number; 1056 - read_mos_reg(serial, port_number, LSR, &data); 1048 + read_mos_reg(serial, port_number, MOS7720_LSR, &data); 1057 1049 1058 1050 dev_dbg(&port->dev, "SS::%p LSR:%x\n", mos7720_port, data); 1059 1051 1060 - write_mos_reg(serial, dummy, SP1_REG, 0x02); 1061 - write_mos_reg(serial, dummy, SP2_REG, 0x02); 1052 + write_mos_reg(serial, dummy, MOS7720_SP1_REG, 0x02); 1053 + write_mos_reg(serial, dummy, MOS7720_SP2_REG, 0x02); 1062 1054 1063 - write_mos_reg(serial, port_number, IER, 0x00); 1064 - write_mos_reg(serial, port_number, FCR, 0x00); 1055 + write_mos_reg(serial, port_number, MOS7720_IER, 0x00); 1056 + write_mos_reg(serial, port_number, MOS7720_FCR, 0x00); 1065 1057 1066 - write_mos_reg(serial, port_number, FCR, 0xcf); 1058 + write_mos_reg(serial, port_number, MOS7720_FCR, 0xcf); 1067 1059 mos7720_port->shadowLCR = 0x03; 1068 - write_mos_reg(serial, port_number, LCR, mos7720_port->shadowLCR); 1060 + write_mos_reg(serial, port_number, MOS7720_LCR, 1061 + mos7720_port->shadowLCR); 1069 1062 mos7720_port->shadowMCR = 0x0b; 1070 - write_mos_reg(serial, port_number, MCR, mos7720_port->shadowMCR); 1063 + write_mos_reg(serial, port_number, MOS7720_MCR, 1064 + mos7720_port->shadowMCR); 1071 1065 1072 - write_mos_reg(serial, port_number, SP_CONTROL_REG, 0x00); 1073 - read_mos_reg(serial, dummy, SP_CONTROL_REG, &data); 1066 + write_mos_reg(serial, port_number, MOS7720_SP_CONTROL_REG, 0x00); 1067 + read_mos_reg(serial, dummy, MOS7720_SP_CONTROL_REG, &data); 1074 1068 data = data | (port->port_number + 1); 1075 - write_mos_reg(serial, dummy, SP_CONTROL_REG, data); 1069 + write_mos_reg(serial, dummy, MOS7720_SP_CONTROL_REG, data); 1076 1070 mos7720_port->shadowLCR = 0x83; 1077 - write_mos_reg(serial, port_number, LCR, mos7720_port->shadowLCR); 1078 - write_mos_reg(serial, port_number, THR, 0x0c); 1079 - write_mos_reg(serial, port_number, IER, 0x00); 1071 + write_mos_reg(serial, port_number, MOS7720_LCR, 1072 + mos7720_port->shadowLCR); 1073 + write_mos_reg(serial, port_number, MOS7720_THR, 0x0c); 1074 + write_mos_reg(serial, port_number, MOS7720_IER, 0x00); 1080 1075 mos7720_port->shadowLCR = 0x03; 1081 - write_mos_reg(serial, port_number, LCR, mos7720_port->shadowLCR); 1082 - write_mos_reg(serial, port_number, IER, 0x0c); 1076 + write_mos_reg(serial, port_number, MOS7720_LCR, 1077 + mos7720_port->shadowLCR); 1078 + write_mos_reg(serial, port_number, MOS7720_IER, 0x0c); 1083 1079 1084 1080 response = usb_submit_urb(port->read_urb, GFP_KERNEL); 1085 1081 if (response) ··· 1156 1144 usb_kill_urb(port->write_urb); 1157 1145 usb_kill_urb(port->read_urb); 1158 1146 1159 - write_mos_reg(serial, port->port_number, MCR, 0x00); 1160 - write_mos_reg(serial, port->port_number, IER, 0x00); 1147 + write_mos_reg(serial, port->port_number, MOS7720_MCR, 0x00); 1148 + write_mos_reg(serial, port->port_number, MOS7720_IER, 0x00); 1161 1149 1162 1150 mos7720_port->open = 0; 1163 1151 } ··· 1181 1169 data = mos7720_port->shadowLCR & ~UART_LCR_SBC; 1182 1170 1183 1171 mos7720_port->shadowLCR = data; 1184 - write_mos_reg(serial, port->port_number, LCR, mos7720_port->shadowLCR); 1172 + write_mos_reg(serial, port->port_number, MOS7720_LCR, 1173 + mos7720_port->shadowLCR); 1185 1174 } 1186 1175 1187 1176 /* ··· 1310 1297 /* if we are implementing RTS/CTS, toggle that line */ 1311 1298 if (tty->termios.c_cflag & CRTSCTS) { 1312 1299 mos7720_port->shadowMCR &= ~UART_MCR_RTS; 1313 - write_mos_reg(port->serial, port->port_number, MCR, 1300 + write_mos_reg(port->serial, port->port_number, MOS7720_MCR, 1314 1301 mos7720_port->shadowMCR); 1315 1302 } 1316 1303 } ··· 1340 1327 /* if we are implementing RTS/CTS, toggle that line */ 1341 1328 if (tty->termios.c_cflag & CRTSCTS) { 1342 1329 mos7720_port->shadowMCR |= UART_MCR_RTS; 1343 - write_mos_reg(port->serial, port->port_number, MCR, 1330 + write_mos_reg(port->serial, port->port_number, MOS7720_MCR, 1344 1331 mos7720_port->shadowMCR); 1345 1332 } 1346 1333 } ··· 1365 1352 dev_dbg(&port->dev, "Sending Setting Commands ..........\n"); 1366 1353 port_number = port->port_number; 1367 1354 1368 - write_mos_reg(serial, port_number, IER, 0x00); 1369 - write_mos_reg(serial, port_number, FCR, 0x00); 1370 - write_mos_reg(serial, port_number, FCR, 0xcf); 1355 + write_mos_reg(serial, port_number, MOS7720_IER, 0x00); 1356 + write_mos_reg(serial, port_number, MOS7720_FCR, 0x00); 1357 + write_mos_reg(serial, port_number, MOS7720_FCR, 0xcf); 1371 1358 mos7720_port->shadowMCR = 0x0b; 1372 - write_mos_reg(serial, port_number, MCR, mos7720_port->shadowMCR); 1373 - write_mos_reg(serial, dummy, SP_CONTROL_REG, 0x00); 1359 + write_mos_reg(serial, port_number, MOS7720_MCR, 1360 + mos7720_port->shadowMCR); 1361 + write_mos_reg(serial, dummy, MOS7720_SP_CONTROL_REG, 0x00); 1374 1362 1375 1363 /*********************************************** 1376 1364 * Set for higher rates * 1377 1365 ***********************************************/ 1378 1366 /* writing baud rate verbatum into uart clock field clearly not right */ 1379 1367 if (port_number == 0) 1380 - sp_reg = SP1_REG; 1368 + sp_reg = MOS7720_SP1_REG; 1381 1369 else 1382 - sp_reg = SP2_REG; 1370 + sp_reg = MOS7720_SP2_REG; 1383 1371 write_mos_reg(serial, dummy, sp_reg, baud * 0x10); 1384 - write_mos_reg(serial, dummy, SP_CONTROL_REG, 0x03); 1372 + write_mos_reg(serial, dummy, MOS7720_SP_CONTROL_REG, 0x03); 1385 1373 mos7720_port->shadowMCR = 0x2b; 1386 - write_mos_reg(serial, port_number, MCR, mos7720_port->shadowMCR); 1374 + write_mos_reg(serial, port_number, MOS7720_MCR, 1375 + mos7720_port->shadowMCR); 1387 1376 1388 1377 /*********************************************** 1389 1378 * Set DLL/DLM 1390 1379 ***********************************************/ 1391 1380 mos7720_port->shadowLCR = mos7720_port->shadowLCR | UART_LCR_DLAB; 1392 - write_mos_reg(serial, port_number, LCR, mos7720_port->shadowLCR); 1393 - write_mos_reg(serial, port_number, DLL, 0x01); 1394 - write_mos_reg(serial, port_number, DLM, 0x00); 1381 + write_mos_reg(serial, port_number, MOS7720_LCR, 1382 + mos7720_port->shadowLCR); 1383 + write_mos_reg(serial, port_number, MOS7720_DLL, 0x01); 1384 + write_mos_reg(serial, port_number, MOS7720_DLM, 0x00); 1395 1385 mos7720_port->shadowLCR = mos7720_port->shadowLCR & ~UART_LCR_DLAB; 1396 - write_mos_reg(serial, port_number, LCR, mos7720_port->shadowLCR); 1386 + write_mos_reg(serial, port_number, MOS7720_LCR, 1387 + mos7720_port->shadowLCR); 1397 1388 1398 1389 return 0; 1399 1390 } ··· 1505 1488 1506 1489 /* Enable access to divisor latch */ 1507 1490 mos7720_port->shadowLCR = mos7720_port->shadowLCR | UART_LCR_DLAB; 1508 - write_mos_reg(serial, number, LCR, mos7720_port->shadowLCR); 1491 + write_mos_reg(serial, number, MOS7720_LCR, mos7720_port->shadowLCR); 1509 1492 1510 1493 /* Write the divisor */ 1511 - write_mos_reg(serial, number, DLL, (__u8)(divisor & 0xff)); 1512 - write_mos_reg(serial, number, DLM, (__u8)((divisor & 0xff00) >> 8)); 1494 + write_mos_reg(serial, number, MOS7720_DLL, (__u8)(divisor & 0xff)); 1495 + write_mos_reg(serial, number, MOS7720_DLM, 1496 + (__u8)((divisor & 0xff00) >> 8)); 1513 1497 1514 1498 /* Disable access to divisor latch */ 1515 1499 mos7720_port->shadowLCR = mos7720_port->shadowLCR & ~UART_LCR_DLAB; 1516 - write_mos_reg(serial, number, LCR, mos7720_port->shadowLCR); 1500 + write_mos_reg(serial, number, MOS7720_LCR, mos7720_port->shadowLCR); 1517 1501 1518 1502 return status; 1519 1503 } ··· 1618 1600 1619 1601 1620 1602 /* Disable Interrupts */ 1621 - write_mos_reg(serial, port_number, IER, 0x00); 1622 - write_mos_reg(serial, port_number, FCR, 0x00); 1623 - write_mos_reg(serial, port_number, FCR, 0xcf); 1603 + write_mos_reg(serial, port_number, MOS7720_IER, 0x00); 1604 + write_mos_reg(serial, port_number, MOS7720_FCR, 0x00); 1605 + write_mos_reg(serial, port_number, MOS7720_FCR, 0xcf); 1624 1606 1625 1607 /* Send the updated LCR value to the mos7720 */ 1626 - write_mos_reg(serial, port_number, LCR, mos7720_port->shadowLCR); 1608 + write_mos_reg(serial, port_number, MOS7720_LCR, 1609 + mos7720_port->shadowLCR); 1627 1610 mos7720_port->shadowMCR = 0x0b; 1628 - write_mos_reg(serial, port_number, MCR, mos7720_port->shadowMCR); 1611 + write_mos_reg(serial, port_number, MOS7720_MCR, 1612 + mos7720_port->shadowMCR); 1629 1613 1630 1614 /* set up the MCR register and send it to the mos7720 */ 1631 1615 mos7720_port->shadowMCR = UART_MCR_OUT2; ··· 1639 1619 /* To set hardware flow control to the specified * 1640 1620 * serial port, in SP1/2_CONTROL_REG */ 1641 1621 if (port_number) 1642 - write_mos_reg(serial, dummy, SP_CONTROL_REG, 0x01); 1622 + write_mos_reg(serial, dummy, MOS7720_SP_CONTROL_REG, 1623 + 0x01); 1643 1624 else 1644 - write_mos_reg(serial, dummy, SP_CONTROL_REG, 0x02); 1625 + write_mos_reg(serial, dummy, MOS7720_SP_CONTROL_REG, 1626 + 0x02); 1645 1627 1646 1628 } else 1647 1629 mos7720_port->shadowMCR &= ~(UART_MCR_XONANY); 1648 1630 1649 - write_mos_reg(serial, port_number, MCR, mos7720_port->shadowMCR); 1631 + write_mos_reg(serial, port_number, MOS7720_MCR, 1632 + mos7720_port->shadowMCR); 1650 1633 1651 1634 /* Determine divisor based on baud rate */ 1652 1635 baud = tty_get_baud_rate(tty); ··· 1662 1639 if (baud >= 230400) { 1663 1640 set_higher_rates(mos7720_port, baud); 1664 1641 /* Enable Interrupts */ 1665 - write_mos_reg(serial, port_number, IER, 0x0c); 1642 + write_mos_reg(serial, port_number, MOS7720_IER, 0x0c); 1666 1643 return; 1667 1644 } ··· 1673 1650 if (cflag & CBAUD) 1674 1651 tty_encode_baud_rate(tty, baud, baud); 1675 1652 /* Enable Interrupts */ 1676 - write_mos_reg(serial, port_number, IER, 0x0c); 1653 + write_mos_reg(serial, port_number, MOS7720_IER, 0x0c); 1677 1654 1678 1655 if (port->read_urb->status != -EINPROGRESS) { 1679 1656 status = usb_submit_urb(port->read_urb, GFP_KERNEL); ··· 1748 1725 1749 1726 count = mos7720_chars_in_buffer(tty); 1750 1727 if (count == 0) { 1751 - read_mos_reg(port->serial, port_number, LSR, &data); 1728 + read_mos_reg(port->serial, port_number, MOS7720_LSR, &data); 1752 1729 if ((data & (UART_LSR_TEMT | UART_LSR_THRE)) 1753 1730 == (UART_LSR_TEMT | UART_LSR_THRE)) { 1754 1731 dev_dbg(&port->dev, "%s -- Empty\n", __func__); ··· 1805 1782 mcr &= ~UART_MCR_LOOP; 1806 1783 1807 1784 mos7720_port->shadowMCR = mcr; 1808 - write_mos_reg(port->serial, port->port_number, MCR, 1785 + write_mos_reg(port->serial, port->port_number, MOS7720_MCR, 1809 1786 mos7720_port->shadowMCR); 1810 1787 1811 1788 return 0; ··· 1850 1827 } 1851 1828 1852 1829 mos7720_port->shadowMCR = mcr; 1853 - write_mos_reg(port->serial, port->port_number, MCR, 1830 + write_mos_reg(port->serial, port->port_number, MOS7720_MCR, 1854 1831 mos7720_port->shadowMCR); 1855 1832 1856 1833 return 0; ··· 1965 1942 } 1966 1943 #endif 1967 1944 /* LSR For Port 1 */ 1968 - read_mos_reg(serial, 0, LSR, &data); 1945 + read_mos_reg(serial, 0, MOS7720_LSR, &data); 1969 1946 dev_dbg(&dev->dev, "LSR:%x\n", data); 1970 1947 1971 1948 return 0;
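The bulk of the mos7720.c diff is a mechanical rename of bare register enumerators (THR, IER, LCR, ...) to `MOS7720_`-prefixed names. C enumerators live in the ordinary identifier namespace, so short generic names can collide with macros or enums pulled in from other headers; prefixing avoids the clash without changing any values. A compressed sketch of the renamed enum and its index lookup (values taken from the table above, the rest elided):

```c
#include <assert.h>

enum mos_regs {
	MOS7720_THR,		/* serial port regs */
	MOS7720_IER,
	MOS7720_LCR,
	MOS7720_DPR,		/* parallel port regs */
	MOS7720_SP1_REG,	/* device control regs */
};

/* The enumerator is only a spelling; the per-register index written to
 * the hardware stays exactly what it was before the rename. */
static unsigned short get_reg_index(enum mos_regs reg)
{
	static const unsigned short lookup[] = {
		0x00,	/* MOS7720_THR */
		0x01,	/* MOS7720_IER */
		0x03,	/* MOS7720_LCR */
		0x00,	/* MOS7720_DPR */
		0x01,	/* MOS7720_SP1_REG */
	};
	return lookup[reg];
}
```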
+1
drivers/usb/serial/option.c
··· 1765 1765 { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) }, 1766 1766 { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */ 1767 1767 { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */ 1768 + { USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) }, /* OLICARD300 - MT6225 */ 1768 1769 { USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) }, 1769 1770 { USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) }, 1770 1771 { } /* Terminating entry */
+1
drivers/usb/serial/usb-serial.c
··· 1306 1306 tty_unregister_driver(usb_serial_tty_driver); 1307 1307 put_tty_driver(usb_serial_tty_driver); 1308 1308 bus_unregister(&usb_serial_bus_type); 1309 + idr_destroy(&serial_minors); 1309 1310 } 1310 1311 1311 1312
+38 -2
drivers/video/fbdev/stifb.c
··· 121 121 #define REG_3 0x0004a0 122 122 #define REG_4 0x000600 123 123 #define REG_6 0x000800 124 + #define REG_7 0x000804 124 125 #define REG_8 0x000820 125 126 #define REG_9 0x000a04 126 127 #define REG_10 0x018000 ··· 136 135 #define REG_21 0x200218 137 136 #define REG_22 0x0005a0 138 137 #define REG_23 0x0005c0 138 + #define REG_24 0x000808 139 + #define REG_25 0x000b00 139 140 #define REG_26 0x200118 140 141 #define REG_27 0x200308 141 142 #define REG_32 0x21003c ··· 431 428 432 429 #define SET_LENXY_START_RECFILL(fb, lenxy) \ 433 430 WRITE_WORD(lenxy, fb, REG_9) 431 + 432 + #define SETUP_COPYAREA(fb) \ 433 + WRITE_BYTE(0, fb, REG_16b1) 434 434 435 435 static void 436 436 HYPER_ENABLE_DISABLE_DISPLAY(struct stifb_info *fb, int enable) ··· 1010 1004 return 0; 1011 1005 } 1012 1006 1007 + static void 1008 + stifb_copyarea(struct fb_info *info, const struct fb_copyarea *area) 1009 + { 1010 + struct stifb_info *fb = container_of(info, struct stifb_info, info); 1011 + 1012 + SETUP_COPYAREA(fb); 1013 + 1014 + SETUP_HW(fb); 1015 + if (fb->info.var.bits_per_pixel == 32) { 1016 + WRITE_WORD(0xBBA0A000, fb, REG_10); 1017 + 1018 + NGLE_REALLY_SET_IMAGE_PLANEMASK(fb, 0xffffffff); 1019 + } else { 1020 + WRITE_WORD(fb->id == S9000_ID_HCRX ? 0x13a02000 : 0x13a01000, fb, REG_10); 1021 + 1022 + NGLE_REALLY_SET_IMAGE_PLANEMASK(fb, 0xff); 1023 + } 1024 + 1025 + NGLE_QUICK_SET_IMAGE_BITMAP_OP(fb, 1026 + IBOvals(RopSrc, MaskAddrOffset(0), 1027 + BitmapExtent08, StaticReg(1), 1028 + DataDynamic, MaskOtc, BGx(0), FGx(0))); 1029 + 1030 + WRITE_WORD(((area->sx << 16) | area->sy), fb, REG_24); 1031 + WRITE_WORD(((area->width << 16) | area->height), fb, REG_7); 1032 + WRITE_WORD(((area->dx << 16) | area->dy), fb, REG_25); 1033 + 1034 + SETUP_FB(fb); 1035 + } 1036 + 1013 1037 static void __init 1014 1038 stifb_init_display(struct stifb_info *fb) 1015 1039 { ··· 1105 1069 .fb_setcolreg = stifb_setcolreg, 1106 1070 .fb_blank = stifb_blank, 1107 1071 .fb_fillrect = cfb_fillrect, 1108 - .fb_copyarea = cfb_copyarea, 1072 + .fb_copyarea = stifb_copyarea, 1109 1073 .fb_imageblit = cfb_imageblit, 1110 1074 }; ··· 1294 1258 info->fbops = &stifb_ops; 1295 1259 info->screen_base = ioremap_nocache(REGION_BASE(fb,1), fix->smem_len); 1296 1260 info->screen_size = fix->smem_len; 1297 - info->flags = FBINFO_DEFAULT; 1261 + info->flags = FBINFO_DEFAULT | FBINFO_HWACCEL_COPYAREA; 1298 1262 info->pseudo_palette = &fb->pseudo_palette; 1299 1263 1300 1264 /* This has to be done !!! */
+2 -2
drivers/watchdog/sp805_wdt.c
··· 4 4 * Watchdog driver for ARM SP805 watchdog module 5 5 * 6 6 * Copyright (C) 2010 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2 or later. This program is licensed "as is" without any ··· 303 303 304 304 module_amba_driver(sp805_wdt_driver); 305 305 306 - MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>"); 306 + MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>"); 307 307 MODULE_DESCRIPTION("ARM SP805 Watchdog Driver"); 308 308 MODULE_LICENSE("GPL");
+1 -2
fs/9p/vfs_inode.c
··· 540 540 unlock_new_inode(inode); 541 541 return inode; 542 542 error: 543 - unlock_new_inode(inode); 544 - iput(inode); 543 + iget_failed(inode); 545 544 return ERR_PTR(retval); 546 545 547 546 }
+1 -2
fs/9p/vfs_inode_dotl.c
··· 149 149 unlock_new_inode(inode); 150 150 return inode; 151 151 error: 152 - unlock_new_inode(inode); 153 - iput(inode); 152 + iget_failed(inode); 154 153 return ERR_PTR(retval); 155 154 156 155 }
+2
fs/btrfs/btrfs_inode.h
··· 44 44 #define BTRFS_INODE_IN_DELALLOC_LIST 9 45 45 #define BTRFS_INODE_READDIO_NEED_LOCK 10 46 46 #define BTRFS_INODE_HAS_PROPS 11 47 + /* DIO is ready to submit */ 48 + #define BTRFS_INODE_DIO_READY 12 47 49 /* 48 50 * The following 3 bits are meant only for the btree inode. 49 51 * When any of them is set, it means an error happened while writing an
+1
fs/btrfs/ctree.h
··· 1778 1778 spinlock_t unused_bgs_lock; 1779 1779 struct list_head unused_bgs; 1780 1780 struct mutex unused_bg_unpin_mutex; 1781 + struct mutex delete_unused_bgs_mutex; 1781 1782 1782 1783 /* For btrfs to record security options */ 1783 1784 struct security_mnt_opts security_opts;
+40 -1
fs/btrfs/disk-io.c
··· 1751 1751 { 1752 1752 struct btrfs_root *root = arg; 1753 1753 int again; 1754 + struct btrfs_trans_handle *trans; 1754 1755 1755 1756 do { 1756 1757 again = 0; ··· 1773 1772 } 1774 1773 1775 1774 btrfs_run_delayed_iputs(root); 1776 - btrfs_delete_unused_bgs(root->fs_info); 1777 1775 again = btrfs_clean_one_deleted_snapshot(root); 1778 1776 mutex_unlock(&root->fs_info->cleaner_mutex); ··· 1781 1781 * needn't do anything special here. 1782 1782 */ 1783 1783 btrfs_run_defrag_inodes(root->fs_info); 1784 + 1785 + /* 1786 + * Acquires fs_info->delete_unused_bgs_mutex to avoid racing 1787 + * with relocation (btrfs_relocate_chunk) and relocation 1788 + * acquires fs_info->cleaner_mutex (btrfs_relocate_block_group) 1789 + * after acquiring fs_info->delete_unused_bgs_mutex. So we 1790 + * can't hold, nor need to, fs_info->cleaner_mutex when deleting 1791 + * unused block groups. 1792 + */ 1793 + btrfs_delete_unused_bgs(root->fs_info); 1784 1794 sleep: 1785 1795 if (!try_to_freeze() && !again) { 1786 1796 set_current_state(TASK_INTERRUPTIBLE); ··· 1799 1789 __set_current_state(TASK_RUNNING); 1800 1790 } 1801 1791 } while (!kthread_should_stop()); 1792 + 1793 + /* 1794 + * Transaction kthread is stopped before us and wakes us up. 1795 + * However we might have started a new transaction and COWed some 1796 + * tree blocks when deleting unused block groups for example. So 1797 + * make sure we commit the transaction we started to have a clean 1798 + * shutdown when evicting the btree inode - if it has dirty pages 1799 + * when we do the final iput() on it, eviction will trigger a 1800 + * writeback for it which will fail with null pointer dereferences 1801 + * since work queues and other resources were already released and 1802 + * destroyed by the time the iput/eviction/writeback is made. 1803 + */ 1804 + trans = btrfs_attach_transaction(root); 1805 + if (IS_ERR(trans)) { 1806 + if (PTR_ERR(trans) != -ENOENT) 1807 + btrfs_err(root->fs_info, 1808 + "cleaner transaction attach returned %ld", 1809 + PTR_ERR(trans)); 1810 + } else { 1811 + int ret; 1812 + 1813 + ret = btrfs_commit_transaction(trans, root); 1814 + if (ret) 1815 + btrfs_err(root->fs_info, 1816 + "cleaner open transaction commit returned %d", 1817 + ret); 1818 + } 1819 + 1802 1820 return 0; 1803 1821 } ··· 2530 2492 spin_lock_init(&fs_info->unused_bgs_lock); 2531 2493 rwlock_init(&fs_info->tree_mod_log_lock); 2532 2494 mutex_init(&fs_info->unused_bg_unpin_mutex); 2495 + mutex_init(&fs_info->delete_unused_bgs_mutex); 2533 2496 mutex_init(&fs_info->reloc_mutex); 2534 2497 mutex_init(&fs_info->delalloc_root_mutex); 2535 2498 seqlock_init(&fs_info->profiles_lock);
+16
fs/btrfs/extent-tree.c
··· 2296 2296 static inline struct btrfs_delayed_ref_node * 2297 2297 select_delayed_ref(struct btrfs_delayed_ref_head *head) 2298 2298 { 2299 + struct btrfs_delayed_ref_node *ref; 2300 + 2299 2301 if (list_empty(&head->ref_list)) 2300 2302 return NULL; 2303 + 2304 + /* 2305 + * Select a delayed ref of type BTRFS_ADD_DELAYED_REF first. 2306 + * This is to prevent a ref count from going down to zero, which deletes 2307 + * the extent item from the extent tree, when there still are references 2308 + * to add, which would fail because they would not find the extent item. 2309 + */ 2310 + list_for_each_entry(ref, &head->ref_list, list) { 2311 + if (ref->action == BTRFS_ADD_DELAYED_REF) 2312 + return ref; 2313 + } 2301 2314 2302 2315 return list_entry(head->ref_list.next, struct btrfs_delayed_ref_node, 2303 2316 list); ··· 9902 9889 } 9903 9890 spin_unlock(&fs_info->unused_bgs_lock); 9904 9891 9892 + mutex_lock(&root->fs_info->delete_unused_bgs_mutex); 9893 + 9905 9894 /* Don't want to race with allocators so take the groups_sem */ 9906 9895 down_write(&space_info->groups_sem); 9907 9896 spin_lock(&block_group->lock); ··· 9998 9983 end_trans: 9999 9984 btrfs_end_transaction(trans, root); 10000 9985 next: 9986 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 10001 9987 btrfs_put_block_group(block_group); 10002 9988 spin_lock(&fs_info->unused_bgs_lock); 10003 9989 }
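The extent-tree.c change makes `select_delayed_ref()` scan the queued delayed refs and prefer `BTRFS_ADD_DELAYED_REF` entries, so the extent's reference count cannot reach zero (deleting the extent item) while additions are still pending. A hypothetical model of that selection policy over a plain singly linked list:

```c
#include <assert.h>
#include <stddef.h>

enum ref_action { REF_ADD, REF_DROP };

struct ref {
	enum ref_action action;
	struct ref *next;
};

/* Hand back an ADD first if one is queued; only fall back to the
 * first entry (possibly a DROP) when no additions remain. */
static struct ref *select_ref(struct ref *head)
{
	struct ref *r;

	for (r = head; r; r = r->next)
		if (r->action == REF_ADD)
			return r;
	return head;
}

static int ref_demo(void)
{
	struct ref add = { REF_ADD, NULL };
	struct ref drop = { REF_DROP, &add };	/* list order: drop -> add */

	/* The ADD is chosen even though the DROP is first in the list. */
	return select_ref(&drop) == &add && select_ref(NULL) == NULL;
}
```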
+12 -5
fs/btrfs/inode-map.c
··· 246 246 { 247 247 struct btrfs_free_space_ctl *ctl = root->free_ino_ctl; 248 248 struct rb_root *rbroot = &root->free_ino_pinned->free_space_offset; 249 + spinlock_t *rbroot_lock = &root->free_ino_pinned->tree_lock; 249 250 struct btrfs_free_space *info; 250 251 struct rb_node *n; 251 252 u64 count; ··· 255 254 return; 256 255 257 256 while (1) { 257 + bool add_to_ctl = true; 258 + 259 + spin_lock(rbroot_lock); 258 260 n = rb_first(rbroot); 259 - if (!n) 261 + if (!n) { 262 + spin_unlock(rbroot_lock); 260 263 break; 264 + } 261 265 262 266 info = rb_entry(n, struct btrfs_free_space, offset_index); 263 267 BUG_ON(info->bitmap); /* Logic error */ 264 268 265 269 if (info->offset > root->ino_cache_progress) 266 - goto free; 270 + add_to_ctl = false; 267 271 else if (info->offset + info->bytes > root->ino_cache_progress) 268 272 count = root->ino_cache_progress - info->offset + 1; 269 273 else 270 274 count = info->bytes; 271 275 272 - __btrfs_add_free_space(ctl, info->offset, count); 273 - free: 274 276 rb_erase(&info->offset_index, rbroot); 275 - kfree(info); 277 + spin_unlock(rbroot_lock); 278 + if (add_to_ctl) 279 + __btrfs_add_free_space(ctl, info->offset, count); 280 + kmem_cache_free(btrfs_free_space_cachep, info); 276 281 } 277 282 } 278 283
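The inode-map.c fix takes `free_ino_pinned->tree_lock` around each rbtree access, detaching one entry under the lock and only then doing the follow-up work (adding to the other ctl, freeing) unlocked. A loose sketch of that "detach under the lock, consume outside it" shape, using a pthread mutex and a linked list instead of the kernel's spinlock and rbtree:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct node {
	int value;
	struct node *next;
};

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hold the lock only for the structural change; the caller processes
 * the detached node after the lock is dropped, so concurrent walkers
 * never see an entry that is being consumed. */
static struct node *pop_entry(struct node **head)
{
	struct node *n;

	pthread_mutex_lock(&tree_lock);
	n = *head;
	if (n)
		*head = n->next;
	pthread_mutex_unlock(&tree_lock);
	return n;
}

static int pop_demo(void)
{
	struct node b = { 2, NULL };
	struct node a = { 1, &b };
	struct node *head = &a;

	return pop_entry(&head)->value == 1 && head == &b;
}
```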
+65 -29
fs/btrfs/inode.c
··· 4209 4209 u64 extent_num_bytes = 0; 4210 4210 u64 extent_offset = 0; 4211 4211 u64 item_end = 0; 4212 - u64 last_size = (u64)-1; 4212 + u64 last_size = new_size; 4213 4213 u32 found_type = (u8)-1; 4214 4214 int found_extent; 4215 4215 int del_item; ··· 4493 4493 btrfs_abort_transaction(trans, root, ret); 4494 4494 } 4495 4495 error: 4496 - if (last_size != (u64)-1 && 4497 - root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) 4496 + if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) 4498 4497 btrfs_ordered_update_i_size(inode, last_size, NULL); 4499 4498 4500 4499 btrfs_free_path(path); ··· 4988 4989 /* 4989 4990 * Keep looping until we have no more ranges in the io tree. 4990 4991 * We can have ongoing bios started by readpages (called from readahead) 4991 - * that didn't get their end io callbacks called yet or they are still 4992 - * in progress ((extent_io.c:end_bio_extent_readpage()). This means some 4992 + * that have their endio callback (extent_io.c:end_bio_extent_readpage) 4993 + * still in progress (unlocked the pages in the bio but did not yet 4994 + * unlock the ranges in the io tree). 
Therefore this means some 4993 4995 * ranges can still be locked and eviction started because before 4994 4996 * submitting those bios, which are executed by a separate task (work 4995 4997 * queue kthread), inode references (inode->i_count) were not taken ··· 7546 7546 7547 7547 current->journal_info = outstanding_extents; 7548 7548 btrfs_free_reserved_data_space(inode, len); 7549 + set_bit(BTRFS_INODE_DIO_READY, &BTRFS_I(inode)->runtime_flags); 7549 7550 } 7550 7551 7551 7552 /* ··· 7872 7871 struct bio *dio_bio; 7873 7872 int ret; 7874 7873 7875 - if (err) 7876 - goto out_done; 7877 7874 again: 7878 7875 ret = btrfs_dec_test_first_ordered_pending(inode, &ordered, 7879 7876 &ordered_offset, ··· 7894 7895 ordered = NULL; 7895 7896 goto again; 7896 7897 } 7897 - out_done: 7898 7898 dio_bio = dip->dio_bio; 7899 7899 7900 7900 kfree(dip); ··· 8161 8163 static void btrfs_submit_direct(int rw, struct bio *dio_bio, 8162 8164 struct inode *inode, loff_t file_offset) 8163 8165 { 8164 - struct btrfs_root *root = BTRFS_I(inode)->root; 8165 - struct btrfs_dio_private *dip; 8166 - struct bio *io_bio; 8166 + struct btrfs_dio_private *dip = NULL; 8167 + struct bio *io_bio = NULL; 8167 8168 struct btrfs_io_bio *btrfs_bio; 8168 8169 int skip_sum; 8169 8170 int write = rw & REQ_WRITE; ··· 8179 8182 dip = kzalloc(sizeof(*dip), GFP_NOFS); 8180 8183 if (!dip) { 8181 8184 ret = -ENOMEM; 8182 - goto free_io_bio; 8185 + goto free_ordered; 8183 8186 } 8184 8187 8185 8188 dip->private = dio_bio->bi_private; ··· 8207 8210 8208 8211 if (btrfs_bio->end_io) 8209 8212 btrfs_bio->end_io(btrfs_bio, ret); 8210 - free_io_bio: 8211 - bio_put(io_bio); 8212 8213 8213 8214 free_ordered: 8214 8215 /* 8215 - * If this is a write, we need to clean up the reserved space and kill 8216 - * the ordered extent. 8216 + * If we arrived here it means either we failed to submit the dip 8217 + * or we either failed to clone the dio_bio or failed to allocate the 8218 + * dip. 
If we cloned the dio_bio and allocated the dip, we can just 8219 + * call bio_endio against our io_bio so that we get proper resource 8220 + * cleanup if we fail to submit the dip, otherwise, we must do the 8221 + * same as btrfs_endio_direct_[write|read] because we can't call these 8222 + * callbacks - they require an allocated dip and a clone of dio_bio. 8217 8223 */ 8218 - if (write) { 8219 - struct btrfs_ordered_extent *ordered; 8220 - ordered = btrfs_lookup_ordered_extent(inode, file_offset); 8221 - if (!test_bit(BTRFS_ORDERED_PREALLOC, &ordered->flags) && 8222 - !test_bit(BTRFS_ORDERED_NOCOW, &ordered->flags)) 8223 - btrfs_free_reserved_extent(root, ordered->start, 8224 - ordered->disk_len, 1); 8225 - btrfs_put_ordered_extent(ordered); 8226 - btrfs_put_ordered_extent(ordered); 8224 + if (io_bio && dip) { 8225 + bio_endio(io_bio, ret); 8226 + /* 8227 + * The end io callbacks free our dip, do the final put on io_bio 8228 + * and all the cleanup and final put for dio_bio (through 8229 + * dio_end_io()). 8230 + */ 8231 + dip = NULL; 8232 + io_bio = NULL; 8233 + } else { 8234 + if (write) { 8235 + struct btrfs_ordered_extent *ordered; 8236 + 8237 + ordered = btrfs_lookup_ordered_extent(inode, 8238 + file_offset); 8239 + set_bit(BTRFS_ORDERED_IOERR, &ordered->flags); 8240 + /* 8241 + * Decrements our ref on the ordered extent and removes 8242 + * the ordered extent from the inode's ordered tree, 8243 + * doing all the proper resource cleanup such as for the 8244 + * reserved space and waking up any waiters for this 8245 + * ordered extent (through btrfs_remove_ordered_extent). 8246 + */ 8247 + btrfs_finish_ordered_io(ordered); 8248 + } else { 8249 + unlock_extent(&BTRFS_I(inode)->io_tree, file_offset, 8250 + file_offset + dio_bio->bi_iter.bi_size - 1); 8251 + } 8252 + clear_bit(BIO_UPTODATE, &dio_bio->bi_flags); 8253 + /* 8254 + * Releases and cleans up our dio_bio, no need to bio_put() 8255 + * nor bio_endio()/bio_io_error() against dio_bio. 
8256 + */ 8257 + dio_end_io(dio_bio, ret); 8227 8258 } 8228 8259 if (io_bio) 8260 + bio_put(io_bio); 8261 + kfree(dip); 8229 8262 } 8230 8263 static ssize_t check_direct_IO(struct btrfs_root *root, struct kiocb *iocb, ··· 8357 8330 btrfs_submit_direct, flags); 8358 8331 if (iov_iter_rw(iter) == WRITE) { 8359 8332 current->journal_info = NULL; 8360 - if (ret < 0 && ret != -EIOCBQUEUED) 8361 - btrfs_delalloc_release_space(inode, count); 8362 - else if (ret >= 0 && (size_t)ret < count) 8333 + if (ret < 0 && ret != -EIOCBQUEUED) { 8334 + /* 8335 + * If the error comes from the submit stage, 8336 + * btrfs_get_blocks_direct() has freed the data space, 8337 + * and the metadata space will be handled by 8338 + * finish_ordered_fn; don't do that again to make 8339 + * sure bytes_may_use is correct. 8340 + */ 8341 + if (!test_and_clear_bit(BTRFS_INODE_DIO_READY, 8342 + &BTRFS_I(inode)->runtime_flags)) 8343 + btrfs_delalloc_release_space(inode, count); 8344 + } else if (ret >= 0 && (size_t)ret < count) 8363 8345 btrfs_delalloc_release_space(inode, 8364 8346 count - (size_t)ret); 8365 8347 }
+206 -55
fs/btrfs/ioctl.c
··· 87 87 88 88 89 89 static int btrfs_clone(struct inode *src, struct inode *inode, 90 - u64 off, u64 olen, u64 olen_aligned, u64 destoff); 90 + u64 off, u64 olen, u64 olen_aligned, u64 destoff, 91 + int no_time_update); 91 92 92 93 /* Mask out flags that are inappropriate for the given type of inode. */ 93 94 static inline __u32 btrfs_mask_flags(umode_t mode, __u32 flags) ··· 2766 2765 return ret; 2767 2766 } 2768 2767 2769 - static struct page *extent_same_get_page(struct inode *inode, u64 off) 2768 + static struct page *extent_same_get_page(struct inode *inode, pgoff_t index) 2770 2769 { 2771 2770 struct page *page; 2772 - pgoff_t index; 2773 2771 struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree; 2774 - 2775 - index = off >> PAGE_CACHE_SHIFT; 2776 2772 2777 2773 page = grab_cache_page(inode->i_mapping, index); 2778 2774 if (!page) ··· 2789 2791 unlock_page(page); 2790 2792 2791 2793 return page; 2794 + } 2795 + 2796 + static int gather_extent_pages(struct inode *inode, struct page **pages, 2797 + int num_pages, u64 off) 2798 + { 2799 + int i; 2800 + pgoff_t index = off >> PAGE_CACHE_SHIFT; 2801 + 2802 + for (i = 0; i < num_pages; i++) { 2803 + pages[i] = extent_same_get_page(inode, index + i); 2804 + if (!pages[i]) 2805 + return -ENOMEM; 2806 + } 2807 + return 0; 2792 2808 } 2793 2809 2794 2810 static inline void lock_extent_range(struct inode *inode, u64 off, u64 len) ··· 2830 2818 } 2831 2819 } 2832 2820 2833 - static void btrfs_double_unlock(struct inode *inode1, u64 loff1, 2834 - struct inode *inode2, u64 loff2, u64 len) 2821 + static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2) 2835 2822 { 2836 - unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1); 2837 - unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1); 2838 - 2839 2823 mutex_unlock(&inode1->i_mutex); 2840 2824 mutex_unlock(&inode2->i_mutex); 2841 2825 } 2842 2826 2843 - static void btrfs_double_lock(struct inode *inode1, u64 loff1, 2844 - 
struct inode *inode2, u64 loff2, u64 len) 2827 + static void btrfs_double_inode_lock(struct inode *inode1, struct inode *inode2) 2828 + { 2829 + if (inode1 < inode2) 2830 + swap(inode1, inode2); 2831 + 2832 + mutex_lock_nested(&inode1->i_mutex, I_MUTEX_PARENT); 2833 + if (inode1 != inode2) 2834 + mutex_lock_nested(&inode2->i_mutex, I_MUTEX_CHILD); 2835 + } 2836 + 2837 + static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1, 2838 + struct inode *inode2, u64 loff2, u64 len) 2839 + { 2840 + unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1); 2841 + unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1); 2842 + } 2843 + 2844 + static void btrfs_double_extent_lock(struct inode *inode1, u64 loff1, 2845 + struct inode *inode2, u64 loff2, u64 len) 2845 2846 { 2846 2847 if (inode1 < inode2) { 2847 2848 swap(inode1, inode2); 2848 2849 swap(loff1, loff2); 2849 2850 } 2850 - 2851 - mutex_lock_nested(&inode1->i_mutex, I_MUTEX_PARENT); 2852 2851 lock_extent_range(inode1, loff1, len); 2853 - if (inode1 != inode2) { 2854 - mutex_lock_nested(&inode2->i_mutex, I_MUTEX_CHILD); 2852 + if (inode1 != inode2) 2855 2853 lock_extent_range(inode2, loff2, len); 2854 + } 2855 + 2856 + struct cmp_pages { 2857 + int num_pages; 2858 + struct page **src_pages; 2859 + struct page **dst_pages; 2860 + }; 2861 + 2862 + static void btrfs_cmp_data_free(struct cmp_pages *cmp) 2863 + { 2864 + int i; 2865 + struct page *pg; 2866 + 2867 + for (i = 0; i < cmp->num_pages; i++) { 2868 + pg = cmp->src_pages[i]; 2869 + if (pg) 2870 + page_cache_release(pg); 2871 + pg = cmp->dst_pages[i]; 2872 + if (pg) 2873 + page_cache_release(pg); 2856 2874 } 2875 + kfree(cmp->src_pages); 2876 + kfree(cmp->dst_pages); 2877 + } 2878 + 2879 + static int btrfs_cmp_data_prepare(struct inode *src, u64 loff, 2880 + struct inode *dst, u64 dst_loff, 2881 + u64 len, struct cmp_pages *cmp) 2882 + { 2883 + int ret; 2884 + int num_pages = PAGE_CACHE_ALIGN(len) >> PAGE_CACHE_SHIFT; 2885 + struct 
page **src_pgarr, **dst_pgarr; 2886 + 2887 + /* 2888 + * We must gather up all the pages before we initiate our 2889 + * extent locking. We use an array for the page pointers. Size 2890 + * of the array is bounded by len, which is in turn bounded by 2891 + * BTRFS_MAX_DEDUPE_LEN. 2892 + */ 2893 + src_pgarr = kzalloc(num_pages * sizeof(struct page *), GFP_NOFS); 2894 + dst_pgarr = kzalloc(num_pages * sizeof(struct page *), GFP_NOFS); 2895 + if (!src_pgarr || !dst_pgarr) { 2896 + kfree(src_pgarr); 2897 + kfree(dst_pgarr); 2898 + return -ENOMEM; 2899 + } 2900 + cmp->num_pages = num_pages; 2901 + cmp->src_pages = src_pgarr; 2902 + cmp->dst_pages = dst_pgarr; 2903 + 2904 + ret = gather_extent_pages(src, cmp->src_pages, cmp->num_pages, loff); 2905 + if (ret) 2906 + goto out; 2907 + 2908 + ret = gather_extent_pages(dst, cmp->dst_pages, cmp->num_pages, dst_loff); 2909 + 2910 + out: 2911 + if (ret) 2912 + btrfs_cmp_data_free(cmp); 2913 + return 0; 2857 2914 } 2858 2915 2859 2916 static int btrfs_cmp_data(struct inode *src, u64 loff, struct inode *dst, 2860 - u64 dst_loff, u64 len) 2917 + u64 dst_loff, u64 len, struct cmp_pages *cmp) 2861 2918 { 2862 2919 int ret = 0; 2920 + int i; 2863 2921 struct page *src_page, *dst_page; 2864 2922 unsigned int cmp_len = PAGE_CACHE_SIZE; 2865 2923 void *addr, *dst_addr; 2866 2924 2925 + i = 0; 2867 2926 while (len) { 2868 2927 if (len < PAGE_CACHE_SIZE) 2869 2928 cmp_len = len; 2870 2929 2871 - src_page = extent_same_get_page(src, loff); 2872 - if (!src_page) 2873 - return -EINVAL; 2874 - dst_page = extent_same_get_page(dst, dst_loff); 2875 - if (!dst_page) { 2876 - page_cache_release(src_page); 2877 - return -EINVAL; 2878 - } 2930 + BUG_ON(i >= cmp->num_pages); 2931 + 2932 + src_page = cmp->src_pages[i]; 2933 + dst_page = cmp->dst_pages[i]; 2934 + 2879 2935 addr = kmap_atomic(src_page); 2880 2936 dst_addr = kmap_atomic(dst_page); 2881 2937 ··· 2955 2875 2956 2876 kunmap_atomic(addr); 2957 2877 kunmap_atomic(dst_addr); 2958 - 
page_cache_release(src_page); 2959 - page_cache_release(dst_page); 2960 2878 2961 2879 if (ret) 2962 2880 break; 2963 2881 2964 - loff += cmp_len; 2965 - dst_loff += cmp_len; 2966 2882 len -= cmp_len; 2883 + i++; 2967 2884 } 2968 2885 2969 2886 return ret; ··· 2991 2914 { 2992 2915 int ret; 2993 2916 u64 len = olen; 2917 + struct cmp_pages cmp; 2918 + int same_inode = 0; 2919 + u64 same_lock_start = 0; 2920 + u64 same_lock_len = 0; 2994 2921 2995 - /* 2996 - * btrfs_clone() can't handle extents in the same file 2997 - * yet. Once that works, we can drop this check and replace it 2998 - * with a check for the same inode, but overlapping extents. 2999 - */ 3000 2922 if (src == dst) 3001 - return -EINVAL; 2923 + same_inode = 1; 3002 2924 3003 2925 if (len == 0) 3004 2926 return 0; 3005 2927 3006 - btrfs_double_lock(src, loff, dst, dst_loff, len); 2928 + if (same_inode) { 2929 + mutex_lock(&src->i_mutex); 3007 2930 3008 - ret = extent_same_check_offsets(src, loff, &len, olen); 3009 - if (ret) 3010 - goto out_unlock; 2931 + ret = extent_same_check_offsets(src, loff, &len, olen); 2932 + if (ret) 2933 + goto out_unlock; 3011 2934 3012 - ret = extent_same_check_offsets(dst, dst_loff, &len, olen); 3013 - if (ret) 3014 - goto out_unlock; 2935 + /* 2936 + * Single inode case wants the same checks, except we 2937 + * don't want our length pushed out past i_size as 2938 + * comparing that data range makes no sense. 2939 + * 2940 + * extent_same_check_offsets() will do this for an 2941 + * unaligned length at i_size, so catch it here and 2942 + * reject the request. 2943 + * 2944 + * This effectively means we require aligned extents 2945 + * for the single-inode case, whereas the other cases 2946 + * allow an unaligned length so long as it ends at 2947 + * i_size. 
2948 + */ 2949 + if (len != olen) { 2950 + ret = -EINVAL; 2951 + goto out_unlock; 2952 + } 2953 + 2954 + /* Check for overlapping ranges */ 2955 + if (dst_loff + len > loff && dst_loff < loff + len) { 2956 + ret = -EINVAL; 2957 + goto out_unlock; 2958 + } 2959 + 2960 + same_lock_start = min_t(u64, loff, dst_loff); 2961 + same_lock_len = max_t(u64, loff, dst_loff) + len - same_lock_start; 2962 + } else { 2963 + btrfs_double_inode_lock(src, dst); 2964 + 2965 + ret = extent_same_check_offsets(src, loff, &len, olen); 2966 + if (ret) 2967 + goto out_unlock; 2968 + 2969 + ret = extent_same_check_offsets(dst, dst_loff, &len, olen); 2970 + if (ret) 2971 + goto out_unlock; 2972 + } 3015 2973 3016 2974 /* don't make the dst file partly checksummed */ 3017 2975 if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) != ··· 3055 2943 goto out_unlock; 3056 2944 } 3057 2945 3058 - ret = btrfs_cmp_data(src, loff, dst, dst_loff, len); 3059 - if (ret == 0) 3060 - ret = btrfs_clone(src, dst, loff, olen, len, dst_loff); 2946 + ret = btrfs_cmp_data_prepare(src, loff, dst, dst_loff, olen, &cmp); 2947 + if (ret) 2948 + goto out_unlock; 3061 2949 2950 + if (same_inode) 2951 + lock_extent_range(src, same_lock_start, same_lock_len); 2952 + else 2953 + btrfs_double_extent_lock(src, loff, dst, dst_loff, len); 2954 + 2955 + /* pass original length for comparison so we stay within i_size */ 2956 + ret = btrfs_cmp_data(src, loff, dst, dst_loff, olen, &cmp); 2957 + if (ret == 0) 2958 + ret = btrfs_clone(src, dst, loff, olen, len, dst_loff, 1); 2959 + 2960 + if (same_inode) 2961 + unlock_extent(&BTRFS_I(src)->io_tree, same_lock_start, 2962 + same_lock_start + same_lock_len - 1); 2963 + else 2964 + btrfs_double_extent_unlock(src, loff, dst, dst_loff, len); 2965 + 2966 + btrfs_cmp_data_free(&cmp); 3062 2967 out_unlock: 3063 - btrfs_double_unlock(src, loff, dst, dst_loff, len); 2968 + if (same_inode) 2969 + mutex_unlock(&src->i_mutex); 2970 + else 2971 + btrfs_double_inode_unlock(src, dst); 3064 2972 3065 
2973 return ret; 3066 2974 } ··· 3090 2958 static long btrfs_ioctl_file_extent_same(struct file *file, 3091 2959 struct btrfs_ioctl_same_args __user *argp) 3092 2960 { 3093 - struct btrfs_ioctl_same_args *same; 2961 + struct btrfs_ioctl_same_args *same = NULL; 3094 2962 struct btrfs_ioctl_same_extent_info *info; 3095 2963 struct inode *src = file_inode(file); 3096 2964 u64 off; ··· 3120 2988 3121 2989 if (IS_ERR(same)) { 3122 2990 ret = PTR_ERR(same); 2991 + same = NULL; 3123 2992 goto out; 3124 2993 } 3125 2994 ··· 3191 3058 3192 3059 out: 3193 3060 mnt_drop_write_file(file); 3061 + kfree(same); 3194 3062 return ret; 3195 3063 } 3196 3064 ··· 3234 3100 struct inode *inode, 3235 3101 u64 endoff, 3236 3102 const u64 destoff, 3237 - const u64 olen) 3103 + const u64 olen, 3104 + int no_time_update) 3238 3105 { 3239 3106 struct btrfs_root *root = BTRFS_I(inode)->root; 3240 3107 int ret; 3241 3108 3242 3109 inode_inc_iversion(inode); 3243 - inode->i_mtime = inode->i_ctime = CURRENT_TIME; 3110 + if (!no_time_update) 3111 + inode->i_mtime = inode->i_ctime = CURRENT_TIME; 3244 3112 /* 3245 3113 * We round up to the block size at eof when determining which 3246 3114 * extents to clone above, but shouldn't round up the file size. 
··· 3327 3191 * @inode: Inode to clone to 3328 3192 * @off: Offset within source to start clone from 3329 3193 * @olen: Original length, passed by user, of range to clone 3330 - * @olen_aligned: Block-aligned value of olen, extent_same uses 3331 - * identical values here 3194 + * @olen_aligned: Block-aligned value of olen 3332 3195 * @destoff: Offset within @inode to start clone 3196 + * @no_time_update: Whether to update mtime/ctime on the target inode 3333 3197 */ 3334 3198 static int btrfs_clone(struct inode *src, struct inode *inode, 3335 3199 const u64 off, const u64 olen, const u64 olen_aligned, 3336 - const u64 destoff) 3200 + const u64 destoff, int no_time_update) 3337 3201 { 3338 3202 struct btrfs_root *root = BTRFS_I(inode)->root; 3339 3203 struct btrfs_path *path = NULL; ··· 3588 3452 u64 trim = 0; 3589 3453 u64 aligned_end = 0; 3590 3454 3455 + /* 3456 + * Don't copy an inline extent into an offset 3457 + * greater than zero. Having an inline extent 3458 + * at such an offset results in chaos as btrfs 3459 + * isn't prepared for such cases. Just skip 3460 + * this case for the same reasons as commented 3461 + * at btrfs_ioctl_clone(). 
3462 + */ 3463 + if (last_dest_end > 0) { 3464 + ret = -EOPNOTSUPP; 3465 + btrfs_end_transaction(trans, root); 3466 + goto out; 3467 + } 3468 + 3591 3469 if (off > key.offset) { 3592 3470 skip = off - key.offset; 3593 3471 new_key.offset += skip; ··· 3671 3521 root->sectorsize); 3672 3522 ret = clone_finish_inode_update(trans, inode, 3673 3523 last_dest_end, 3674 - destoff, olen); 3524 + destoff, olen, 3525 + no_time_update); 3675 3526 if (ret) 3676 3527 goto out; 3677 3528 if (new_key.offset + datal >= destoff + len) ··· 3710 3559 clone_update_extent_map(inode, trans, NULL, last_dest_end, 3711 3560 destoff + len - last_dest_end); 3712 3561 ret = clone_finish_inode_update(trans, inode, destoff + len, 3713 - destoff, olen); 3562 + destoff, olen, no_time_update); 3714 3563 } 3715 3564 3716 3565 out: ··· 3847 3696 lock_extent_range(inode, destoff, len); 3848 3697 } 3849 3698 3850 - ret = btrfs_clone(src, inode, off, olen, len, destoff); 3699 + ret = btrfs_clone(src, inode, off, olen, len, destoff, 0); 3851 3700 3852 3701 if (same_inode) { 3853 3702 u64 lock_start = min_t(u64, off, destoff);
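btrfs_double_inode_lock() in the ioctl.c hunks above avoids ABBA deadlocks by always locking the lower-addressed inode first, and takes only one lock in the same-inode case. A sketch of that rule, with a hypothetical fake_mutex standing in for i_mutex so it can be checked without threads:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal mutex stand-in; `held` guards against self-deadlock. */
struct fake_mutex { int held; };

static void fm_lock(struct fake_mutex *m)   { assert(!m->held); m->held = 1; }
static void fm_unlock(struct fake_mutex *m) { assert(m->held);  m->held = 0; }

/* When two locks must be held together, take the lower-addressed one
 * first; take only one lock when both arguments name the same object. */
static void double_lock(struct fake_mutex *a, struct fake_mutex *b)
{
    if (a > b) {
        struct fake_mutex *t = a; a = b; b = t;   /* order by address */
    }
    fm_lock(a);
    if (a != b)
        fm_lock(b);
}

static void double_unlock(struct fake_mutex *a, struct fake_mutex *b)
{
    fm_unlock(a);
    if (a != b)
        fm_unlock(b);
}
```

Because both callers sort by address before locking, two tasks locking the same pair in opposite argument orders still acquire the locks in the same global order.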
+5
fs/btrfs/ordered-data.c
··· 552 552 trace_btrfs_ordered_extent_put(entry->inode, entry); 553 553 554 554 if (atomic_dec_and_test(&entry->refs)) { 555 + ASSERT(list_empty(&entry->log_list)); 556 + ASSERT(list_empty(&entry->trans_list)); 557 + ASSERT(list_empty(&entry->root_extent_list)); 558 + ASSERT(RB_EMPTY_NODE(&entry->rb_node)); 555 559 if (entry->inode) 556 560 btrfs_add_delayed_iput(entry->inode); 557 561 while (!list_empty(&entry->list)) { ··· 583 579 spin_lock_irq(&tree->lock); 584 580 node = &entry->rb_node; 585 581 rb_erase(node, &tree->tree); 582 + RB_CLEAR_NODE(node); 586 583 if (tree->last == node) 587 584 tree->last = NULL; 588 585 set_bit(BTRFS_ORDERED_COMPLETE, &entry->flags);
+41 -8
fs/btrfs/qgroup.c
··· 1349 1349 struct btrfs_root *quota_root; 1350 1350 struct btrfs_qgroup *qgroup; 1351 1351 int ret = 0; 1352 + /* Sometimes we would want to clear the limit on this qgroup. 1353 + * To meet this requirement, we treat -1 as a special value 1354 + * which tells the kernel to clear the limit on this qgroup. 1355 + */ 1356 + const u64 CLEAR_VALUE = -1; 1352 1357 1353 1358 mutex_lock(&fs_info->qgroup_ioctl_lock); 1354 1359 quota_root = fs_info->quota_root; ··· 1369 1364 } 1370 1365 1371 1366 spin_lock(&fs_info->qgroup_lock); 1372 - if (limit->flags & BTRFS_QGROUP_LIMIT_MAX_RFER) 1373 - qgroup->max_rfer = limit->max_rfer; 1374 - if (limit->flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) 1375 - qgroup->max_excl = limit->max_excl; 1376 - if (limit->flags & BTRFS_QGROUP_LIMIT_RSV_RFER) 1377 - qgroup->rsv_rfer = limit->rsv_rfer; 1378 - if (limit->flags & BTRFS_QGROUP_LIMIT_RSV_EXCL) 1379 - qgroup->rsv_excl = limit->rsv_excl; 1367 + if (limit->flags & BTRFS_QGROUP_LIMIT_MAX_RFER) { 1368 + if (limit->max_rfer == CLEAR_VALUE) { 1369 + qgroup->lim_flags &= ~BTRFS_QGROUP_LIMIT_MAX_RFER; 1370 + limit->flags &= ~BTRFS_QGROUP_LIMIT_MAX_RFER; 1371 + qgroup->max_rfer = 0; 1372 + } else { 1373 + qgroup->max_rfer = limit->max_rfer; 1374 + } 1375 + } 1376 + if (limit->flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) { 1377 + if (limit->max_excl == CLEAR_VALUE) { 1378 + qgroup->lim_flags &= ~BTRFS_QGROUP_LIMIT_MAX_EXCL; 1379 + limit->flags &= ~BTRFS_QGROUP_LIMIT_MAX_EXCL; 1380 + qgroup->max_excl = 0; 1381 + } else { 1382 + qgroup->max_excl = limit->max_excl; 1383 + } 1384 + } 1385 + if (limit->flags & BTRFS_QGROUP_LIMIT_RSV_RFER) { 1386 + if (limit->rsv_rfer == CLEAR_VALUE) { 1387 + qgroup->lim_flags &= ~BTRFS_QGROUP_LIMIT_RSV_RFER; 1388 + limit->flags &= ~BTRFS_QGROUP_LIMIT_RSV_RFER; 1389 + qgroup->rsv_rfer = 0; 1390 + } else { 1391 + qgroup->rsv_rfer = limit->rsv_rfer; 1392 + } 1393 + } 1394 + if (limit->flags & BTRFS_QGROUP_LIMIT_RSV_EXCL) { 1395 + if (limit->rsv_excl == CLEAR_VALUE) { 1396 + 
qgroup->lim_flags &= ~BTRFS_QGROUP_LIMIT_RSV_EXCL; 1397 + limit->flags &= ~BTRFS_QGROUP_LIMIT_RSV_EXCL; 1398 + qgroup->rsv_excl = 0; 1399 + } else { 1400 + qgroup->rsv_excl = limit->rsv_excl; 1401 + } 1402 + } 1380 1403 qgroup->lim_flags |= limit->flags; 1381 1404 1382 1405 spin_unlock(&fs_info->qgroup_lock);
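The qgroup.c change above treats a requested limit of -1 as "clear this limit" rather than "set it to the maximum". A minimal sketch of that sentinel handling for one limit field (the struct and flag names here are illustrative, not the btrfs ABI):

```c
#include <assert.h>
#include <stdint.h>

#define LIMIT_MAX_RFER (1u << 0)   /* illustrative flag bit */

struct qgroup_limits {
    uint32_t flags;
    uint64_t max_rfer;
};

/* A requested value of (u64)-1 clears the limit and its flag;
 * anything else sets the limit as usual. */
static void apply_max_rfer(struct qgroup_limits *q, uint64_t requested)
{
    const uint64_t CLEAR_VALUE = (uint64_t)-1;

    if (requested == CLEAR_VALUE) {
        q->flags &= ~LIMIT_MAX_RFER;
        q->max_rfer = 0;
    } else {
        q->flags |= LIMIT_MAX_RFER;
        q->max_rfer = requested;
    }
}
```

The kernel hunk applies the same pattern to all four limit fields, clearing both the in-memory flag and the value so the on-disk item records the limit as unset.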
+1 -1
fs/btrfs/relocation.c
··· 4049 4049 if (trans && progress && err == -ENOSPC) { 4050 4050 ret = btrfs_force_chunk_alloc(trans, rc->extent_root, 4051 4051 rc->block_group->flags); 4052 - if (ret == 0) { 4052 + if (ret == 1) { 4053 4053 err = 0; 4054 4054 progress = 0; 4055 4055 goto restart;
+20 -19
fs/btrfs/scrub.c
··· 3571 3571 static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info, 3572 3572 int is_dev_replace) 3573 3573 { 3574 - int ret = 0; 3575 3574 unsigned int flags = WQ_FREEZABLE | WQ_UNBOUND; 3576 3575 int max_active = fs_info->thread_pool_size; 3577 3576 ··· 3583 3584 fs_info->scrub_workers = 3584 3585 btrfs_alloc_workqueue("btrfs-scrub", flags, 3585 3586 max_active, 4); 3586 - if (!fs_info->scrub_workers) { 3587 - ret = -ENOMEM; 3588 - goto out; 3589 - } 3587 + if (!fs_info->scrub_workers) 3588 + goto fail_scrub_workers; 3589 + 3590 3590 fs_info->scrub_wr_completion_workers = 3591 3591 btrfs_alloc_workqueue("btrfs-scrubwrc", flags, 3592 3592 max_active, 2); 3593 - if (!fs_info->scrub_wr_completion_workers) { 3594 - ret = -ENOMEM; 3595 - goto out; 3596 - } 3593 + if (!fs_info->scrub_wr_completion_workers) 3594 + goto fail_scrub_wr_completion_workers; 3595 + 3597 3596 fs_info->scrub_nocow_workers = 3598 3597 btrfs_alloc_workqueue("btrfs-scrubnc", flags, 1, 0); 3599 - if (!fs_info->scrub_nocow_workers) { 3600 - ret = -ENOMEM; 3601 - goto out; 3602 - } 3598 + if (!fs_info->scrub_nocow_workers) 3599 + goto fail_scrub_nocow_workers; 3603 3600 fs_info->scrub_parity_workers = 3604 3601 btrfs_alloc_workqueue("btrfs-scrubparity", flags, 3605 3602 max_active, 2); 3606 - if (!fs_info->scrub_parity_workers) { 3607 - ret = -ENOMEM; 3608 - goto out; 3609 - } 3603 + if (!fs_info->scrub_parity_workers) 3604 + goto fail_scrub_parity_workers; 3610 3605 } 3611 3606 ++fs_info->scrub_workers_refcnt; 3612 - out: 3613 - return ret; 3607 + return 0; 3608 + 3609 + fail_scrub_parity_workers: 3610 + btrfs_destroy_workqueue(fs_info->scrub_nocow_workers); 3611 + fail_scrub_nocow_workers: 3612 + btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers); 3613 + fail_scrub_wr_completion_workers: 3614 + btrfs_destroy_workqueue(fs_info->scrub_workers); 3615 + fail_scrub_workers: 3616 + return -ENOMEM; 3614 3617 } 3615 3618 3616 3619 static noinline_for_stack void 
scrub_workers_put(struct btrfs_fs_info *fs_info)
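scrub_workers_get() now unwinds partially allocated workqueues through a ladder of goto labels, the kernel's usual centralized-exit idiom: each label frees exactly the resources allocated before the failing step. The same shape, sketched with plain malloc() standing in for btrfs_alloc_workqueue():

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative resource bundle; the kernel allocates four workqueues. */
struct workers {
    void *scrub, *wr_completion, *nocow, *parity;
};

/* On the first allocation failure, jump to the label that frees only
 * what was already allocated, then fall through the earlier labels. */
static int workers_get(struct workers *w, size_t sz)
{
    w->scrub = malloc(sz);
    if (!w->scrub)
        goto fail_scrub;
    w->wr_completion = malloc(sz);
    if (!w->wr_completion)
        goto fail_wr_completion;
    w->nocow = malloc(sz);
    if (!w->nocow)
        goto fail_nocow;
    w->parity = malloc(sz);
    if (!w->parity)
        goto fail_parity;
    return 0;

fail_parity:
    free(w->nocow);
fail_nocow:
    free(w->wr_completion);
fail_wr_completion:
    free(w->scrub);
fail_scrub:
    return -1;   /* -ENOMEM in the kernel */
}
```

Compared with the old single `out:` label, the ladder fixes the leak where an early failure left previously created workqueues alive.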
+2 -2
fs/btrfs/transaction.c
··· 761 761 762 762 if (!list_empty(&trans->ordered)) { 763 763 spin_lock(&info->trans_lock); 764 - list_splice(&trans->ordered, &cur_trans->pending_ordered); 764 + list_splice_init(&trans->ordered, &cur_trans->pending_ordered); 765 765 spin_unlock(&info->trans_lock); 766 766 } 767 767 ··· 1866 1866 } 1867 1867 1868 1868 spin_lock(&root->fs_info->trans_lock); 1869 - list_splice(&trans->ordered, &cur_trans->pending_ordered); 1869 + list_splice_init(&trans->ordered, &cur_trans->pending_ordered); 1870 1870 if (cur_trans->state >= TRANS_STATE_COMMIT_START) { 1871 1871 spin_unlock(&root->fs_info->trans_lock); 1872 1872 atomic_inc(&cur_trans->use_count);
+221 -5
fs/btrfs/tree-log.c
··· 4117 4117 return 0; 4118 4118 } 4119 4119 4120 + /* 4121 + * At the moment we always log all xattrs. This is to figure out at log replay 4122 + * time which xattrs must have their deletion replayed. If a xattr is missing 4123 + * in the log tree and exists in the fs/subvol tree, we delete it. This is 4124 + * because if a xattr is deleted, the inode is fsynced and a power failure 4125 + * happens, causing the log to be replayed the next time the fs is mounted, 4126 + * we want the xattr to not exist anymore (same behaviour as other filesystems 4127 + * with a journal, ext3/4, xfs, f2fs, etc). 4128 + */ 4129 + static int btrfs_log_all_xattrs(struct btrfs_trans_handle *trans, 4130 + struct btrfs_root *root, 4131 + struct inode *inode, 4132 + struct btrfs_path *path, 4133 + struct btrfs_path *dst_path) 4134 + { 4135 + int ret; 4136 + struct btrfs_key key; 4137 + const u64 ino = btrfs_ino(inode); 4138 + int ins_nr = 0; 4139 + int start_slot = 0; 4140 + 4141 + key.objectid = ino; 4142 + key.type = BTRFS_XATTR_ITEM_KEY; 4143 + key.offset = 0; 4144 + 4145 + ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 4146 + if (ret < 0) 4147 + return ret; 4148 + 4149 + while (true) { 4150 + int slot = path->slots[0]; 4151 + struct extent_buffer *leaf = path->nodes[0]; 4152 + int nritems = btrfs_header_nritems(leaf); 4153 + 4154 + if (slot >= nritems) { 4155 + if (ins_nr > 0) { 4156 + u64 last_extent = 0; 4157 + 4158 + ret = copy_items(trans, inode, dst_path, path, 4159 + &last_extent, start_slot, 4160 + ins_nr, 1, 0); 4161 + /* can't be 1, extent items aren't processed */ 4162 + ASSERT(ret <= 0); 4163 + if (ret < 0) 4164 + return ret; 4165 + ins_nr = 0; 4166 + } 4167 + ret = btrfs_next_leaf(root, path); 4168 + if (ret < 0) 4169 + return ret; 4170 + else if (ret > 0) 4171 + break; 4172 + continue; 4173 + } 4174 + 4175 + btrfs_item_key_to_cpu(leaf, &key, slot); 4176 + if (key.objectid != ino || key.type != BTRFS_XATTR_ITEM_KEY) 4177 + break; 4178 + 4179 + if (ins_nr == 0) 
4180 + start_slot = slot; 4181 + ins_nr++; 4182 + path->slots[0]++; 4183 + cond_resched(); 4184 + } 4185 + if (ins_nr > 0) { 4186 + u64 last_extent = 0; 4187 + 4188 + ret = copy_items(trans, inode, dst_path, path, 4189 + &last_extent, start_slot, 4190 + ins_nr, 1, 0); 4191 + /* can't be 1, extent items aren't processed */ 4192 + ASSERT(ret <= 0); 4193 + if (ret < 0) 4194 + return ret; 4195 + } 4196 + 4197 + return 0; 4198 + } 4199 + 4200 + /* 4201 + * If the no holes feature is enabled we need to make sure any hole between the 4202 + * last extent and the i_size of our inode is explicitly marked in the log. This 4203 + * is to make sure that doing something like: 4204 + * 4205 + * 1) create file with 128Kb of data 4206 + * 2) truncate file to 64Kb 4207 + * 3) truncate file to 256Kb 4208 + * 4) fsync file 4209 + * 5) <crash/power failure> 4210 + * 6) mount fs and trigger log replay 4211 + * 4212 + * Will give us a file with a size of 256Kb, the first 64Kb of data match what 4213 + * the file had in its first 64Kb of data at step 1 and the last 192Kb of the 4214 + * file correspond to a hole. The presence of explicit holes in a log tree is 4215 + * what guarantees that log replay will remove/adjust file extent items in the 4216 + * fs/subvol tree. 4217 + * 4218 + * Here we do not need to care about holes between extents, that is already done 4219 + * by copy_items(). We also only need to do this in the full sync path, where we 4220 + * lookup for extents from the fs/subvol tree only. In the fast path case, we 4221 + * lookup the list of modified extent maps and if any represents a hole, we 4222 + * insert a corresponding extent representing a hole in the log tree. 
4223 + */ 4224 + static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans, 4225 + struct btrfs_root *root, 4226 + struct inode *inode, 4227 + struct btrfs_path *path) 4228 + { 4229 + int ret; 4230 + struct btrfs_key key; 4231 + u64 hole_start; 4232 + u64 hole_size; 4233 + struct extent_buffer *leaf; 4234 + struct btrfs_root *log = root->log_root; 4235 + const u64 ino = btrfs_ino(inode); 4236 + const u64 i_size = i_size_read(inode); 4237 + 4238 + if (!btrfs_fs_incompat(root->fs_info, NO_HOLES)) 4239 + return 0; 4240 + 4241 + key.objectid = ino; 4242 + key.type = BTRFS_EXTENT_DATA_KEY; 4243 + key.offset = (u64)-1; 4244 + 4245 + ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 4246 + ASSERT(ret != 0); 4247 + if (ret < 0) 4248 + return ret; 4249 + 4250 + ASSERT(path->slots[0] > 0); 4251 + path->slots[0]--; 4252 + leaf = path->nodes[0]; 4253 + btrfs_item_key_to_cpu(leaf, &key, path->slots[0]); 4254 + 4255 + if (key.objectid != ino || key.type != BTRFS_EXTENT_DATA_KEY) { 4256 + /* inode does not have any extents */ 4257 + hole_start = 0; 4258 + hole_size = i_size; 4259 + } else { 4260 + struct btrfs_file_extent_item *extent; 4261 + u64 len; 4262 + 4263 + /* 4264 + * If there's an extent beyond i_size, an explicit hole was 4265 + * already inserted by copy_items(). 4266 + */ 4267 + if (key.offset >= i_size) 4268 + return 0; 4269 + 4270 + extent = btrfs_item_ptr(leaf, path->slots[0], 4271 + struct btrfs_file_extent_item); 4272 + 4273 + if (btrfs_file_extent_type(leaf, extent) == 4274 + BTRFS_FILE_EXTENT_INLINE) { 4275 + len = btrfs_file_extent_inline_len(leaf, 4276 + path->slots[0], 4277 + extent); 4278 + ASSERT(len == i_size); 4279 + return 0; 4280 + } 4281 + 4282 + len = btrfs_file_extent_num_bytes(leaf, extent); 4283 + /* Last extent goes beyond i_size, no need to log a hole. 
*/ 4284 + if (key.offset + len > i_size) 4285 + return 0; 4286 + hole_start = key.offset + len; 4287 + hole_size = i_size - hole_start; 4288 + } 4289 + btrfs_release_path(path); 4290 + 4291 + /* Last extent ends at i_size. */ 4292 + if (hole_size == 0) 4293 + return 0; 4294 + 4295 + hole_size = ALIGN(hole_size, root->sectorsize); 4296 + ret = btrfs_insert_file_extent(trans, log, ino, hole_start, 0, 0, 4297 + hole_size, 0, hole_size, 0, 0, 0); 4298 + return ret; 4299 + } 4300 + 4120 4301 /* log a single inode in the tree log. 4121 4302 * At least one parent directory for this inode must exist in the tree 4122 4303 * or be logged already. ··· 4336 4155 u64 ino = btrfs_ino(inode); 4337 4156 struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree; 4338 4157 u64 logged_isize = 0; 4158 + bool need_log_inode_item = true; 4339 4159 4340 4160 path = btrfs_alloc_path(); 4341 4161 if (!path) ··· 4445 4263 } else { 4446 4264 if (inode_only == LOG_INODE_ALL) 4447 4265 fast_search = true; 4448 - ret = log_inode_item(trans, log, dst_path, inode); 4449 - if (ret) { 4450 - err = ret; 4451 - goto out_unlock; 4452 - } 4453 4266 goto log_extents; 4454 4267 } 4455 4268 ··· 4466 4289 break; 4467 4290 if (min_key.type > max_key.type) 4468 4291 break; 4292 + 4293 + if (min_key.type == BTRFS_INODE_ITEM_KEY) 4294 + need_log_inode_item = false; 4295 + 4296 + /* Skip xattrs, we log them later with btrfs_log_all_xattrs() */ 4297 + if (min_key.type == BTRFS_XATTR_ITEM_KEY) { 4298 + if (ins_nr == 0) 4299 + goto next_slot; 4300 + ret = copy_items(trans, inode, dst_path, path, 4301 + &last_extent, ins_start_slot, 4302 + ins_nr, inode_only, logged_isize); 4303 + if (ret < 0) { 4304 + err = ret; 4305 + goto out_unlock; 4306 + } 4307 + ins_nr = 0; 4308 + if (ret) { 4309 + btrfs_release_path(path); 4310 + continue; 4311 + } 4312 + goto next_slot; 4313 + } 4469 4314 4470 4315 src = path->nodes[0]; 4471 4316 if (ins_nr && ins_start_slot + ins_nr == path->slots[0]) { ··· 4556 4357 ins_nr = 0; 4557 
4358 } 4558 4359 4360 + btrfs_release_path(path); 4361 + btrfs_release_path(dst_path); 4362 + err = btrfs_log_all_xattrs(trans, root, inode, path, dst_path); 4363 + if (err) 4364 + goto out_unlock; 4365 + if (max_key.type >= BTRFS_EXTENT_DATA_KEY && !fast_search) { 4366 + btrfs_release_path(path); 4367 + btrfs_release_path(dst_path); 4368 + err = btrfs_log_trailing_hole(trans, root, inode, path); 4369 + if (err) 4370 + goto out_unlock; 4371 + } 4559 4372 log_extents: 4560 4373 btrfs_release_path(path); 4561 4374 btrfs_release_path(dst_path); 4375 + if (need_log_inode_item) { 4376 + err = log_inode_item(trans, log, dst_path, inode); 4377 + if (err) 4378 + goto out_unlock; 4379 + } 4562 4380 if (fast_search) { 4563 4381 /* 4564 4382 * Some ordered extents started by fsync might have completed
+44 -6
fs/btrfs/volumes.c
··· 2766 2766 root = root->fs_info->chunk_root; 2767 2767 extent_root = root->fs_info->extent_root; 2768 2768 2769 + /* 2770 + * Prevent races with automatic removal of unused block groups. 2771 + * After we relocate and before we remove the chunk with offset 2772 + * chunk_offset, automatic removal of the block group can kick in, 2773 + * resulting in a failure when calling btrfs_remove_chunk() below. 2774 + * 2775 + * Make sure to acquire this mutex before doing a tree search (dev 2776 + * or chunk trees) to find chunks. Otherwise the cleaner kthread might 2777 + * call btrfs_remove_chunk() (through btrfs_delete_unused_bgs()) after 2778 + * we release the path used to search the chunk/dev tree and before 2779 + * the current task acquires this mutex and calls us. 2780 + */ 2781 + ASSERT(mutex_is_locked(&root->fs_info->delete_unused_bgs_mutex)); 2782 + 2769 2783 ret = btrfs_can_relocate(extent_root, chunk_offset); 2770 2784 if (ret) 2771 2785 return -ENOSPC; ··· 2828 2814 key.type = BTRFS_CHUNK_ITEM_KEY; 2829 2815 2830 2816 while (1) { 2817 + mutex_lock(&root->fs_info->delete_unused_bgs_mutex); 2831 2818 ret = btrfs_search_slot(NULL, chunk_root, &key, path, 0, 0); 2832 - if (ret < 0) 2819 + if (ret < 0) { 2820 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 2833 2821 goto error; 2822 + } 2834 2823 BUG_ON(ret == 0); /* Corruption */ 2835 2824 2836 2825 ret = btrfs_previous_item(chunk_root, path, key.objectid, 2837 2826 key.type); 2827 + if (ret) 2828 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 2838 2829 if (ret < 0) 2839 2830 goto error; 2840 2831 if (ret > 0) ··· 2862 2843 else 2863 2844 BUG_ON(ret); 2864 2845 } 2846 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 2865 2847 2866 2848 if (found_key.offset == 0) 2867 2849 break; ··· 3319 3299 goto error; 3320 3300 } 3321 3301 3302 + mutex_lock(&fs_info->delete_unused_bgs_mutex); 3322 3303 ret = btrfs_search_slot(NULL, chunk_root, &key, path, 0, 0); 3323 - if (ret < 0) 3304 + if (ret < 
0) { 3305 + mutex_unlock(&fs_info->delete_unused_bgs_mutex); 3324 3306 goto error; 3307 + } 3325 3308 3326 3309 /* 3327 3310 * this shouldn't happen, it means the last relocate ··· 3336 3313 ret = btrfs_previous_item(chunk_root, path, 0, 3337 3314 BTRFS_CHUNK_ITEM_KEY); 3338 3315 if (ret) { 3316 + mutex_unlock(&fs_info->delete_unused_bgs_mutex); 3339 3317 ret = 0; 3340 3318 break; 3341 3319 } ··· 3345 3321 slot = path->slots[0]; 3346 3322 btrfs_item_key_to_cpu(leaf, &found_key, slot); 3347 3323 3348 - if (found_key.objectid != key.objectid) 3324 + if (found_key.objectid != key.objectid) { 3325 + mutex_unlock(&fs_info->delete_unused_bgs_mutex); 3349 3326 break; 3327 + } 3350 3328 3351 3329 chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk); 3352 3330 ··· 3361 3335 ret = should_balance_chunk(chunk_root, leaf, chunk, 3362 3336 found_key.offset); 3363 3337 btrfs_release_path(path); 3364 - if (!ret) 3338 + if (!ret) { 3339 + mutex_unlock(&fs_info->delete_unused_bgs_mutex); 3365 3340 goto loop; 3341 + } 3366 3342 3367 3343 if (counting) { 3344 + mutex_unlock(&fs_info->delete_unused_bgs_mutex); 3368 3345 spin_lock(&fs_info->balance_lock); 3369 3346 bctl->stat.expected++; 3370 3347 spin_unlock(&fs_info->balance_lock); ··· 3377 3348 ret = btrfs_relocate_chunk(chunk_root, 3378 3349 found_key.objectid, 3379 3350 found_key.offset); 3351 + mutex_unlock(&fs_info->delete_unused_bgs_mutex); 3380 3352 if (ret && ret != -ENOSPC) 3381 3353 goto error; 3382 3354 if (ret == -ENOSPC) { ··· 4117 4087 key.type = BTRFS_DEV_EXTENT_KEY; 4118 4088 4119 4089 do { 4090 + mutex_lock(&root->fs_info->delete_unused_bgs_mutex); 4120 4091 ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 4121 - if (ret < 0) 4092 + if (ret < 0) { 4093 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 4122 4094 goto done; 4095 + } 4123 4096 4124 4097 ret = btrfs_previous_item(root, path, 0, key.type); 4098 + if (ret) 4099 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 4125 4100 if (ret < 0) 
4126 4101 goto done; 4127 4102 if (ret) { ··· 4140 4105 btrfs_item_key_to_cpu(l, &key, path->slots[0]); 4141 4106 4142 4107 if (key.objectid != device->devid) { 4108 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 4143 4109 btrfs_release_path(path); 4144 4110 break; 4145 4111 } ··· 4149 4113 length = btrfs_dev_extent_length(l, dev_extent); 4150 4114 4151 4115 if (key.offset + length <= new_size) { 4116 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 4152 4117 btrfs_release_path(path); 4153 4118 break; 4154 4119 } ··· 4159 4122 btrfs_release_path(path); 4160 4123 4161 4124 ret = btrfs_relocate_chunk(root, chunk_objectid, chunk_offset); 4125 + mutex_unlock(&root->fs_info->delete_unused_bgs_mutex); 4162 4126 if (ret && ret != -ENOSPC) 4163 4127 goto done; 4164 4128 if (ret == -ENOSPC) ··· 5753 5715 static void btrfs_end_bio(struct bio *bio, int err) 5754 5716 { 5755 5717 struct btrfs_bio *bbio = bio->bi_private; 5756 - struct btrfs_device *dev = bbio->stripes[0].dev; 5757 5718 int is_orig_bio = 0; 5758 5719 5759 5720 if (err) { ··· 5760 5723 if (err == -EIO || err == -EREMOTEIO) { 5761 5724 unsigned int stripe_index = 5762 5725 btrfs_io_bio(bio)->stripe_index; 5726 + struct btrfs_device *dev; 5763 5727 5764 5728 BUG_ON(stripe_index >= bbio->num_stripes); 5765 5729 dev = bbio->stripes[stripe_index].dev;
+1
fs/compat_ioctl.c
··· 896 896 /* 'X' - originally XFS but some now in the VFS */ 897 897 COMPATIBLE_IOCTL(FIFREEZE) 898 898 COMPATIBLE_IOCTL(FITHAW) 899 + COMPATIBLE_IOCTL(FITRIM) 899 900 COMPATIBLE_IOCTL(KDGETKEYCODE) 900 901 COMPATIBLE_IOCTL(KDSETKEYCODE) 901 902 COMPATIBLE_IOCTL(KDGKBTYPE)
+2 -2
fs/configfs/item.c
··· 115 115 const char *name, 116 116 struct config_item_type *type) 117 117 { 118 - config_item_set_name(item, name); 118 + config_item_set_name(item, "%s", name); 119 119 item->ci_type = type; 120 120 config_item_init(item); 121 121 } ··· 124 124 void config_group_init_type_name(struct config_group *group, const char *name, 125 125 struct config_item_type *type) 126 126 { 127 - config_item_set_name(&group->cg_item, name); 127 + config_item_set_name(&group->cg_item, "%s", name); 128 128 group->cg_item.ci_type = type; 129 129 config_group_init(group); 130 130 }
+5 -2
fs/dcache.c
··· 642 642 643 643 /* 644 644 * If we have a d_op->d_delete() operation, we sould not 645 - * let the dentry count go to zero, so use "put__or_lock". 645 + * let the dentry count go to zero, so use "put_or_lock". 646 646 */ 647 647 if (unlikely(dentry->d_flags & DCACHE_OP_DELETE)) 648 648 return lockref_put_or_lock(&dentry->d_lockref); ··· 697 697 */ 698 698 smp_rmb(); 699 699 d_flags = ACCESS_ONCE(dentry->d_flags); 700 - d_flags &= DCACHE_REFERENCED | DCACHE_LRU_LIST; 700 + d_flags &= DCACHE_REFERENCED | DCACHE_LRU_LIST | DCACHE_DISCONNECTED; 701 701 702 702 /* Nothing to do? Dropping the reference was all we needed? */ 703 703 if (d_flags == (DCACHE_REFERENCED | DCACHE_LRU_LIST) && !d_unhashed(dentry)) ··· 774 774 775 775 /* Unreachable? Get rid of it */ 776 776 if (unlikely(d_unhashed(dentry))) 777 + goto kill_it; 778 + 779 + if (unlikely(dentry->d_flags & DCACHE_DISCONNECTED)) 777 780 goto kill_it; 778 781 779 782 if (unlikely(dentry->d_flags & DCACHE_OP_DELETE)) {
-1
fs/ecryptfs/file.c
··· 325 325 return rc; 326 326 327 327 switch (cmd) { 328 - case FITRIM: 329 328 case FS_IOC32_GETFLAGS: 330 329 case FS_IOC32_SETFLAGS: 331 330 case FS_IOC32_GETVERSION:
+3 -3
fs/ext4/extents.c
··· 504 504 struct buffer_head *bh; 505 505 int err; 506 506 507 - bh = sb_getblk(inode->i_sb, pblk); 507 + bh = sb_getblk_gfp(inode->i_sb, pblk, __GFP_MOVABLE | GFP_NOFS); 508 508 if (unlikely(!bh)) 509 509 return ERR_PTR(-ENOMEM); 510 510 ··· 1089 1089 err = -EIO; 1090 1090 goto cleanup; 1091 1091 } 1092 - bh = sb_getblk(inode->i_sb, newblock); 1092 + bh = sb_getblk_gfp(inode->i_sb, newblock, __GFP_MOVABLE | GFP_NOFS); 1093 1093 if (unlikely(!bh)) { 1094 1094 err = -ENOMEM; 1095 1095 goto cleanup; ··· 1283 1283 if (newblock == 0) 1284 1284 return err; 1285 1285 1286 - bh = sb_getblk(inode->i_sb, newblock); 1286 + bh = sb_getblk_gfp(inode->i_sb, newblock, __GFP_MOVABLE | GFP_NOFS); 1287 1287 if (unlikely(!bh)) 1288 1288 return -ENOMEM; 1289 1289 lock_buffer(bh);
+18 -4
fs/ext4/inode.c
··· 1323 1323 unsigned int offset, 1324 1324 unsigned int length) 1325 1325 { 1326 - int to_release = 0; 1326 + int to_release = 0, contiguous_blks = 0; 1327 1327 struct buffer_head *head, *bh; 1328 1328 unsigned int curr_off = 0; 1329 1329 struct inode *inode = page->mapping->host; ··· 1344 1344 1345 1345 if ((offset <= curr_off) && (buffer_delay(bh))) { 1346 1346 to_release++; 1347 + contiguous_blks++; 1347 1348 clear_buffer_delay(bh); 1349 + } else if (contiguous_blks) { 1350 + lblk = page->index << 1351 + (PAGE_CACHE_SHIFT - inode->i_blkbits); 1352 + lblk += (curr_off >> inode->i_blkbits) - 1353 + contiguous_blks; 1354 + ext4_es_remove_extent(inode, lblk, contiguous_blks); 1355 + contiguous_blks = 0; 1348 1356 } 1349 1357 curr_off = next_off; 1350 1358 } while ((bh = bh->b_this_page) != head); 1351 1359 1352 - if (to_release) { 1360 + if (contiguous_blks) { 1353 1361 lblk = page->index << (PAGE_CACHE_SHIFT - inode->i_blkbits); 1354 - ext4_es_remove_extent(inode, lblk, to_release); 1362 + lblk += (curr_off >> inode->i_blkbits) - contiguous_blks; 1363 + ext4_es_remove_extent(inode, lblk, contiguous_blks); 1355 1364 } 1356 1365 1357 1366 /* If we have released all the blocks belonging to a cluster, then we ··· 4353 4344 int inode_size = EXT4_INODE_SIZE(sb); 4354 4345 4355 4346 oi.orig_ino = orig_ino; 4356 - ino = (orig_ino & ~(inodes_per_block - 1)) + 1; 4347 + /* 4348 + * Calculate the first inode in the inode table block. Inode 4349 + * numbers are one-based. That is, the first inode in a block 4350 + * (assuming 4k blocks and 256 byte inodes) is (n*16 + 1). 4351 + */ 4352 + ino = ((orig_ino - 1) & ~(inodes_per_block - 1)) + 1; 4357 4353 for (i = 0; i < inodes_per_block; i++, ino++, buf += inode_size) { 4358 4354 if (ino == orig_ino) 4359 4355 continue;
-1
fs/ext4/ioctl.c
··· 755 755 return err; 756 756 } 757 757 case EXT4_IOC_MOVE_EXT: 758 - case FITRIM: 759 758 case EXT4_IOC_RESIZE_FS: 760 759 case EXT4_IOC_PRECACHE_EXTENTS: 761 760 case EXT4_IOC_SET_ENCRYPTION_POLICY:
+5 -11
fs/ext4/mballoc.c
··· 4816 4816 /* 4817 4817 * blocks being freed are metadata. these blocks shouldn't 4818 4818 * be used until this transaction is committed 4819 + * 4820 + * We use __GFP_NOFAIL because ext4_free_blocks() is not allowed 4821 + * to fail. 4819 4822 */ 4820 - retry: 4821 - new_entry = kmem_cache_alloc(ext4_free_data_cachep, GFP_NOFS); 4822 - if (!new_entry) { 4823 - /* 4824 - * We use a retry loop because 4825 - * ext4_free_blocks() is not allowed to fail. 4826 - */ 4827 - cond_resched(); 4828 - congestion_wait(BLK_RW_ASYNC, HZ/50); 4829 - goto retry; 4830 - } 4823 + new_entry = kmem_cache_alloc(ext4_free_data_cachep, 4824 + GFP_NOFS|__GFP_NOFAIL); 4831 4825 new_entry->efd_start_cluster = bit; 4832 4826 new_entry->efd_group = block_group; 4833 4827 new_entry->efd_count = count_clusters;
+14 -3
fs/ext4/migrate.c
··· 620 620 struct ext4_inode_info *ei = EXT4_I(inode); 621 621 struct ext4_extent *ex; 622 622 unsigned int i, len; 623 + ext4_lblk_t start, end; 623 624 ext4_fsblk_t blk; 624 625 handle_t *handle; 625 626 int ret; ··· 633 632 if (EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, 634 633 EXT4_FEATURE_RO_COMPAT_BIGALLOC)) 635 634 return -EOPNOTSUPP; 635 + 636 + /* 637 + * In order to get correct extent info, force all delayed allocation 638 + * blocks to be allocated, otherwise delayed allocation blocks may not 639 + * be reflected and bypass the checks on extent header. 640 + */ 641 + if (test_opt(inode->i_sb, DELALLOC)) 642 + ext4_alloc_da_blocks(inode); 636 643 637 644 handle = ext4_journal_start(inode, EXT4_HT_MIGRATE, 1); 638 645 if (IS_ERR(handle)) ··· 659 650 goto errout; 660 651 } 661 652 if (eh->eh_entries == 0) 662 - blk = len = 0; 653 + blk = len = start = end = 0; 663 654 else { 664 655 len = le16_to_cpu(ex->ee_len); 665 656 blk = ext4_ext_pblock(ex); 666 - if (len > EXT4_NDIR_BLOCKS) { 657 + start = le32_to_cpu(ex->ee_block); 658 + end = start + len - 1; 659 + if (end >= EXT4_NDIR_BLOCKS) { 667 660 ret = -EOPNOTSUPP; 668 661 goto errout; 669 662 } ··· 673 662 674 663 ext4_clear_inode_flag(inode, EXT4_INODE_EXTENTS); 675 664 memset(ei->i_data, 0, sizeof(ei->i_data)); 676 - for (i=0; i < len; i++) 665 + for (i = start; i <= end; i++) 677 666 ei->i_data[i] = cpu_to_le32(blk++); 678 667 ext4_mark_inode_dirty(handle, inode); 679 668 errout:
+95
fs/hpfs/alloc.c
··· 484 484 a->btree.first_free = cpu_to_le16(8); 485 485 return a; 486 486 } 487 + 488 + static unsigned find_run(__le32 *bmp, unsigned *idx) 489 + { 490 + unsigned len; 491 + while (tstbits(bmp, *idx, 1)) { 492 + (*idx)++; 493 + if (unlikely(*idx >= 0x4000)) 494 + return 0; 495 + } 496 + len = 1; 497 + while (!tstbits(bmp, *idx + len, 1)) 498 + len++; 499 + return len; 500 + } 501 + 502 + static int do_trim(struct super_block *s, secno start, unsigned len, secno limit_start, secno limit_end, unsigned minlen, unsigned *result) 503 + { 504 + int err; 505 + secno end; 506 + if (fatal_signal_pending(current)) 507 + return -EINTR; 508 + end = start + len; 509 + if (start < limit_start) 510 + start = limit_start; 511 + if (end > limit_end) 512 + end = limit_end; 513 + if (start >= end) 514 + return 0; 515 + if (end - start < minlen) 516 + return 0; 517 + err = sb_issue_discard(s, start, end - start, GFP_NOFS, 0); 518 + if (err) 519 + return err; 520 + *result += end - start; 521 + return 0; 522 + } 523 + 524 + int hpfs_trim_fs(struct super_block *s, u64 start, u64 end, u64 minlen, unsigned *result) 525 + { 526 + int err = 0; 527 + struct hpfs_sb_info *sbi = hpfs_sb(s); 528 + unsigned idx, len, start_bmp, end_bmp; 529 + __le32 *bmp; 530 + struct quad_buffer_head qbh; 531 + 532 + *result = 0; 533 + if (!end || end > sbi->sb_fs_size) 534 + end = sbi->sb_fs_size; 535 + if (start >= sbi->sb_fs_size) 536 + return 0; 537 + if (minlen > 0x4000) 538 + return 0; 539 + if (start < sbi->sb_dirband_start + sbi->sb_dirband_size && end > sbi->sb_dirband_start) { 540 + hpfs_lock(s); 541 + if (s->s_flags & MS_RDONLY) { 542 + err = -EROFS; 543 + goto unlock_1; 544 + } 545 + if (!(bmp = hpfs_map_dnode_bitmap(s, &qbh))) { 546 + err = -EIO; 547 + goto unlock_1; 548 + } 549 + idx = 0; 550 + while ((len = find_run(bmp, &idx)) && !err) { 551 + err = do_trim(s, sbi->sb_dirband_start + idx * 4, len * 4, start, end, minlen, result); 552 + idx += len; 553 + } 554 + hpfs_brelse4(&qbh); 555 + 
unlock_1: 556 + hpfs_unlock(s); 557 + } 558 + start_bmp = start >> 14; 559 + end_bmp = (end + 0x3fff) >> 14; 560 + while (start_bmp < end_bmp && !err) { 561 + hpfs_lock(s); 562 + if (s->s_flags & MS_RDONLY) { 563 + err = -EROFS; 564 + goto unlock_2; 565 + } 566 + if (!(bmp = hpfs_map_bitmap(s, start_bmp, &qbh, "trim"))) { 567 + err = -EIO; 568 + goto unlock_2; 569 + } 570 + idx = 0; 571 + while ((len = find_run(bmp, &idx)) && !err) { 572 + err = do_trim(s, (start_bmp << 14) + idx, len, start, end, minlen, result); 573 + idx += len; 574 + } 575 + hpfs_brelse4(&qbh); 576 + unlock_2: 577 + hpfs_unlock(s); 578 + start_bmp++; 579 + } 580 + return err; 581 + }
+1
fs/hpfs/dir.c
··· 327 327 .iterate = hpfs_readdir, 328 328 .release = hpfs_dir_release, 329 329 .fsync = hpfs_file_fsync, 330 + .unlocked_ioctl = hpfs_ioctl, 330 331 };
+1
fs/hpfs/file.c
··· 203 203 .release = hpfs_file_release, 204 204 .fsync = hpfs_file_fsync, 205 205 .splice_read = generic_file_splice_read, 206 + .unlocked_ioctl = hpfs_ioctl, 206 207 }; 207 208 208 209 const struct inode_operations hpfs_file_iops =
+4
fs/hpfs/hpfs_fn.h
··· 18 18 #include <linux/pagemap.h> 19 19 #include <linux/buffer_head.h> 20 20 #include <linux/slab.h> 21 + #include <linux/sched.h> 22 + #include <linux/blkdev.h> 21 23 #include <asm/unaligned.h> 22 24 23 25 #include "hpfs.h" ··· 202 200 struct dnode *hpfs_alloc_dnode(struct super_block *, secno, dnode_secno *, struct quad_buffer_head *); 203 201 struct fnode *hpfs_alloc_fnode(struct super_block *, secno, fnode_secno *, struct buffer_head **); 204 202 struct anode *hpfs_alloc_anode(struct super_block *, secno, anode_secno *, struct buffer_head **); 203 + int hpfs_trim_fs(struct super_block *, u64, u64, u64, unsigned *); 205 204 206 205 /* anode.c */ 207 206 ··· 321 318 void hpfs_error(struct super_block *, const char *, ...); 322 319 int hpfs_stop_cycles(struct super_block *, int, int *, int *, char *); 323 320 unsigned hpfs_get_free_dnodes(struct super_block *); 321 + long hpfs_ioctl(struct file *file, unsigned cmd, unsigned long arg); 324 322 325 323 /* 326 324 * local time (HPFS) to GMT (Unix)
+40 -7
fs/hpfs/super.c
··· 52 52 } 53 53 54 54 /* Filesystem error... */ 55 - static char err_buf[1024]; 56 - 57 55 void hpfs_error(struct super_block *s, const char *fmt, ...) 58 56 { 57 + struct va_format vaf; 59 58 va_list args; 60 59 61 60 va_start(args, fmt); 62 - vsnprintf(err_buf, sizeof(err_buf), fmt, args); 61 + 62 + vaf.fmt = fmt; 63 + vaf.va = &args; 64 + 65 + pr_err("filesystem error: %pV", &vaf); 66 + 63 67 va_end(args); 64 68 65 - pr_err("filesystem error: %s", err_buf); 66 69 if (!hpfs_sb(s)->sb_was_error) { 67 70 if (hpfs_sb(s)->sb_err == 2) { 68 71 pr_cont("; crashing the system because you wanted it\n"); ··· 199 196 return 0; 200 197 } 201 198 199 + 200 + long hpfs_ioctl(struct file *file, unsigned cmd, unsigned long arg) 201 + { 202 + switch (cmd) { 203 + case FITRIM: { 204 + struct fstrim_range range; 205 + secno n_trimmed; 206 + int r; 207 + if (!capable(CAP_SYS_ADMIN)) 208 + return -EPERM; 209 + if (copy_from_user(&range, (struct fstrim_range __user *)arg, sizeof(range))) 210 + return -EFAULT; 211 + r = hpfs_trim_fs(file_inode(file)->i_sb, range.start >> 9, (range.start + range.len) >> 9, (range.minlen + 511) >> 9, &n_trimmed); 212 + if (r) 213 + return r; 214 + range.len = (u64)n_trimmed << 9; 215 + if (copy_to_user((struct fstrim_range __user *)arg, &range, sizeof(range))) 216 + return -EFAULT; 217 + return 0; 218 + } 219 + default: { 220 + return -ENOIOCTLCMD; 221 + } 222 + } 223 + } 224 + 225 + 202 226 static struct kmem_cache * hpfs_inode_cachep; 203 227 204 228 static struct inode *hpfs_alloc_inode(struct super_block *sb) 205 229 { 206 230 struct hpfs_inode_info *ei; 207 - ei = (struct hpfs_inode_info *)kmem_cache_alloc(hpfs_inode_cachep, GFP_NOFS); 231 + ei = kmem_cache_alloc(hpfs_inode_cachep, GFP_NOFS); 208 232 if (!ei) 209 233 return NULL; 210 234 ei->vfs_inode.i_version = 1; ··· 454 424 int o; 455 425 struct hpfs_sb_info *sbi = hpfs_sb(s); 456 426 char *new_opts = kstrdup(data, GFP_KERNEL); 457 - 427 + 428 + if (!new_opts) 429 + return -ENOMEM; 430 + 458 
431 sync_filesystem(s); 459 432 460 433 *flags |= MS_NOATIME; 461 - 434 + 462 435 hpfs_lock(s); 463 436 uid = sbi->sb_uid; gid = sbi->sb_gid; 464 437 umask = 0777 & ~sbi->sb_mode;
+1 -1
fs/jfs/file.c
··· 76 76 if (ji->active_ag == -1) { 77 77 struct jfs_sb_info *jfs_sb = JFS_SBI(inode->i_sb); 78 78 ji->active_ag = BLKTOAG(addressPXD(&ji->ixpxd), jfs_sb); 79 - atomic_inc( &jfs_sb->bmap->db_active[ji->active_ag]); 79 + atomic_inc(&jfs_sb->bmap->db_active[ji->active_ag]); 80 80 } 81 81 spin_unlock_irq(&ji->ag_lock); 82 82 }
+2 -2
fs/jfs/inode.c
··· 134 134 * It has been committed since the last change, but was still 135 135 * on the dirty inode list. 136 136 */ 137 - if (!test_cflag(COMMIT_Dirty, inode)) { 137 + if (!test_cflag(COMMIT_Dirty, inode)) { 138 138 /* Make sure committed changes hit the disk */ 139 139 jfs_flush_journal(JFS_SBI(inode->i_sb)->log, wait); 140 140 return 0; 141 - } 141 + } 142 142 143 143 if (jfs_commit_inode(inode, wait)) { 144 144 jfs_err("jfs_write_inode: jfs_commit_inode failed!");
-3
fs/jfs/ioctl.c
··· 180 180 case JFS_IOC_SETFLAGS32: 181 181 cmd = JFS_IOC_SETFLAGS; 182 182 break; 183 - case FITRIM: 184 - cmd = FITRIM; 185 - break; 186 183 } 187 184 return jfs_ioctl(filp, cmd, arg); 188 185 }
+13 -14
fs/jfs/namei.c
··· 1160 1160 rc = dtModify(tid, new_dir, &new_dname, &ino, 1161 1161 old_ip->i_ino, JFS_RENAME); 1162 1162 if (rc) 1163 - goto out4; 1163 + goto out_tx; 1164 1164 drop_nlink(new_ip); 1165 1165 if (S_ISDIR(new_ip->i_mode)) { 1166 1166 drop_nlink(new_ip); ··· 1185 1185 if ((new_size = commitZeroLink(tid, new_ip)) < 0) { 1186 1186 txAbort(tid, 1); /* Marks FS Dirty */ 1187 1187 rc = new_size; 1188 - goto out4; 1188 + goto out_tx; 1189 1189 } 1190 1190 tblk = tid_to_tblock(tid); 1191 1191 tblk->xflag |= COMMIT_DELETE; ··· 1203 1203 if (rc) { 1204 1204 jfs_err("jfs_rename didn't expect dtSearch to fail " 1205 1205 "w/rc = %d", rc); 1206 - goto out4; 1206 + goto out_tx; 1207 1207 } 1208 1208 1209 1209 ino = old_ip->i_ino; ··· 1211 1211 if (rc) { 1212 1212 if (rc == -EIO) 1213 1213 jfs_err("jfs_rename: dtInsert returned -EIO"); 1214 - goto out4; 1214 + goto out_tx; 1215 1215 } 1216 1216 if (S_ISDIR(old_ip->i_mode)) 1217 1217 inc_nlink(new_dir); ··· 1226 1226 jfs_err("jfs_rename did not expect dtDelete to return rc = %d", 1227 1227 rc); 1228 1228 txAbort(tid, 1); /* Marks Filesystem dirty */ 1229 - goto out4; 1229 + goto out_tx; 1230 1230 } 1231 1231 if (S_ISDIR(old_ip->i_mode)) { 1232 1232 drop_nlink(old_dir); ··· 1285 1285 1286 1286 rc = txCommit(tid, ipcount, iplist, commit_flag); 1287 1287 1288 - out4: 1288 + out_tx: 1289 1289 txEnd(tid); 1290 1290 if (new_ip) 1291 1291 mutex_unlock(&JFS_IP(new_ip)->commit_mutex); ··· 1308 1308 } 1309 1309 if (new_ip && (new_ip->i_nlink == 0)) 1310 1310 set_cflag(COMMIT_Nolink, new_ip); 1311 - out3: 1312 - free_UCSname(&new_dname); 1313 - out2: 1314 - free_UCSname(&old_dname); 1315 - out1: 1316 - if (new_ip && !S_ISDIR(new_ip->i_mode)) 1317 - IWRITE_UNLOCK(new_ip); 1318 1311 /* 1319 1312 * Truncating the directory index table is not guaranteed. 
It 1320 1313 * may need to be done iteratively ··· 1318 1325 1319 1326 clear_cflag(COMMIT_Stale, old_dir); 1320 1327 } 1321 - 1328 + if (new_ip && !S_ISDIR(new_ip->i_mode)) 1329 + IWRITE_UNLOCK(new_ip); 1330 + out3: 1331 + free_UCSname(&new_dname); 1332 + out2: 1333 + free_UCSname(&old_dname); 1334 + out1: 1322 1335 jfs_info("jfs_rename: returning %d", rc); 1323 1336 return rc; 1324 1337 }
+18 -20
fs/locks.c
··· 862 862 * whether or not a lock was successfully freed by testing the return 863 863 * value for -ENOENT. 864 864 */ 865 - static int flock_lock_file(struct file *filp, struct file_lock *request) 865 + static int flock_lock_inode(struct inode *inode, struct file_lock *request) 866 866 { 867 867 struct file_lock *new_fl = NULL; 868 868 struct file_lock *fl; 869 869 struct file_lock_context *ctx; 870 - struct inode *inode = file_inode(filp); 871 870 int error = 0; 872 871 bool found = false; 873 872 LIST_HEAD(dispose); ··· 889 890 goto find_conflict; 890 891 891 892 list_for_each_entry(fl, &ctx->flc_flock, fl_list) { 892 - if (filp != fl->fl_file) 893 + if (request->fl_file != fl->fl_file) 893 894 continue; 894 895 if (request->fl_type == fl->fl_type) 895 896 goto out; ··· 1163 1164 EXPORT_SYMBOL(posix_lock_file); 1164 1165 1165 1166 /** 1166 - * posix_lock_file_wait - Apply a POSIX-style lock to a file 1167 - * @filp: The file to apply the lock to 1167 + * posix_lock_inode_wait - Apply a POSIX-style lock to a file 1168 + * @inode: inode of file to which lock request should be applied 1168 1169 * @fl: The lock to be applied 1169 1170 * 1170 - * Add a POSIX style lock to a file. 1171 - * We merge adjacent & overlapping locks whenever possible. 1172 - * POSIX locks are sorted by owner task, then by starting address 1171 + * Variant of posix_lock_file_wait that does not take a filp, and so can be 1172 + * used after the filp has already been torn down. 
1173 1173 */ 1174 - int posix_lock_file_wait(struct file *filp, struct file_lock *fl) 1174 + int posix_lock_inode_wait(struct inode *inode, struct file_lock *fl) 1175 1175 { 1176 1176 int error; 1177 1177 might_sleep (); 1178 1178 for (;;) { 1179 - error = posix_lock_file(filp, fl, NULL); 1179 + error = __posix_lock_file(inode, fl, NULL); 1180 1180 if (error != FILE_LOCK_DEFERRED) 1181 1181 break; 1182 1182 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next); ··· 1187 1189 } 1188 1190 return error; 1189 1191 } 1190 - EXPORT_SYMBOL(posix_lock_file_wait); 1192 + EXPORT_SYMBOL(posix_lock_inode_wait); 1191 1193 1192 1194 /** 1193 1195 * locks_mandatory_locked - Check for an active lock ··· 1849 1851 } 1850 1852 1851 1853 /** 1852 - * flock_lock_file_wait - Apply a FLOCK-style lock to a file 1853 - * @filp: The file to apply the lock to 1854 + * flock_lock_inode_wait - Apply a FLOCK-style lock to a file 1855 + * @inode: inode of the file to apply to 1854 1856 * @fl: The lock to be applied 1855 1857 * 1856 - * Add a FLOCK style lock to a file. 1858 + * Apply a FLOCK style lock request to an inode. 1857 1859 */ 1858 - int flock_lock_file_wait(struct file *filp, struct file_lock *fl) 1860 + int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl) 1859 1861 { 1860 1862 int error; 1861 1863 might_sleep(); 1862 1864 for (;;) { 1863 - error = flock_lock_file(filp, fl); 1865 + error = flock_lock_inode(inode, fl); 1864 1866 if (error != FILE_LOCK_DEFERRED) 1865 1867 break; 1866 1868 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next); ··· 1872 1874 } 1873 1875 return error; 1874 1876 } 1875 - 1876 - EXPORT_SYMBOL(flock_lock_file_wait); 1877 + EXPORT_SYMBOL(flock_lock_inode_wait); 1877 1878 1878 1879 /** 1879 1880 * sys_flock: - flock() system call. 
··· 2398 2401 .fl_type = F_UNLCK, 2399 2402 .fl_end = OFFSET_MAX, 2400 2403 }; 2401 - struct file_lock_context *flctx = file_inode(filp)->i_flctx; 2404 + struct inode *inode = file_inode(filp); 2405 + struct file_lock_context *flctx = inode->i_flctx; 2402 2406 2403 2407 if (list_empty(&flctx->flc_flock)) 2404 2408 return; ··· 2407 2409 if (filp->f_op->flock) 2408 2410 filp->f_op->flock(filp, F_SETLKW, &fl); 2409 2411 else 2410 - flock_lock_file(filp, &fl); 2412 + flock_lock_inode(inode, &fl); 2411 2413 2412 2414 if (fl.fl_ops && fl.fl_ops->fl_release_private) 2413 2415 fl.fl_ops->fl_release_private(&fl);
+8 -10
fs/nfs/nfs4proc.c
··· 5439 5439 return err; 5440 5440 } 5441 5441 5442 - static int do_vfs_lock(struct file *file, struct file_lock *fl) 5442 + static int do_vfs_lock(struct inode *inode, struct file_lock *fl) 5443 5443 { 5444 5444 int res = 0; 5445 5445 switch (fl->fl_flags & (FL_POSIX|FL_FLOCK)) { 5446 5446 case FL_POSIX: 5447 - res = posix_lock_file_wait(file, fl); 5447 + res = posix_lock_inode_wait(inode, fl); 5448 5448 break; 5449 5449 case FL_FLOCK: 5450 - res = flock_lock_file_wait(file, fl); 5450 + res = flock_lock_inode_wait(inode, fl); 5451 5451 break; 5452 5452 default: 5453 5453 BUG(); ··· 5484 5484 atomic_inc(&lsp->ls_count); 5485 5485 /* Ensure we don't close file until we're done freeing locks! */ 5486 5486 p->ctx = get_nfs_open_context(ctx); 5487 - get_file(fl->fl_file); 5488 5487 memcpy(&p->fl, fl, sizeof(p->fl)); 5489 5488 p->server = NFS_SERVER(inode); 5490 5489 return p; ··· 5495 5496 nfs_free_seqid(calldata->arg.seqid); 5496 5497 nfs4_put_lock_state(calldata->lsp); 5497 5498 put_nfs_open_context(calldata->ctx); 5498 - fput(calldata->fl.fl_file); 5499 5499 kfree(calldata); 5500 5500 } 5501 5501 ··· 5507 5509 switch (task->tk_status) { 5508 5510 case 0: 5509 5511 renew_lease(calldata->server, calldata->timestamp); 5510 - do_vfs_lock(calldata->fl.fl_file, &calldata->fl); 5512 + do_vfs_lock(calldata->lsp->ls_state->inode, &calldata->fl); 5511 5513 if (nfs4_update_lock_stateid(calldata->lsp, 5512 5514 &calldata->res.stateid)) 5513 5515 break; ··· 5615 5617 mutex_lock(&sp->so_delegreturn_mutex); 5616 5618 /* Exclude nfs4_reclaim_open_stateid() - note nesting! 
*/ 5617 5619 down_read(&nfsi->rwsem); 5618 - if (do_vfs_lock(request->fl_file, request) == -ENOENT) { 5620 + if (do_vfs_lock(inode, request) == -ENOENT) { 5619 5621 up_read(&nfsi->rwsem); 5620 5622 mutex_unlock(&sp->so_delegreturn_mutex); 5621 5623 goto out; ··· 5756 5758 data->timestamp); 5757 5759 if (data->arg.new_lock) { 5758 5760 data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS); 5759 - if (do_vfs_lock(data->fl.fl_file, &data->fl) < 0) { 5761 + if (do_vfs_lock(lsp->ls_state->inode, &data->fl) < 0) { 5760 5762 rpc_restart_call_prepare(task); 5761 5763 break; 5762 5764 } ··· 5998 6000 if (status != 0) 5999 6001 goto out; 6000 6002 request->fl_flags |= FL_ACCESS; 6001 - status = do_vfs_lock(request->fl_file, request); 6003 + status = do_vfs_lock(state->inode, request); 6002 6004 if (status < 0) 6003 6005 goto out; 6004 6006 down_read(&nfsi->rwsem); ··· 6006 6008 /* Yes: cache locks! */ 6007 6009 /* ...but avoid races with delegation recall... */ 6008 6010 request->fl_flags = fl_flags & ~FL_SLEEP; 6009 - status = do_vfs_lock(request->fl_file, request); 6011 + status = do_vfs_lock(state->inode, request); 6010 6012 up_read(&nfsi->rwsem); 6011 6013 goto out; 6012 6014 }
-1
fs/nilfs2/ioctl.c
··· 1369 1369 case NILFS_IOCTL_SYNC: 1370 1370 case NILFS_IOCTL_RESIZE: 1371 1371 case NILFS_IOCTL_SET_ALLOC_RANGE: 1372 - case FITRIM: 1373 1372 break; 1374 1373 default: 1375 1374 return -ENOIOCTLCMD;
+14 -20
fs/notify/mark.c
··· 152 152 BUG(); 153 153 154 154 list_del_init(&mark->g_list); 155 - 156 155 spin_unlock(&mark->lock); 157 156 158 157 if (inode && (mark->flags & FSNOTIFY_MARK_FLAG_OBJECT_PINNED)) 159 158 iput(inode); 160 - /* release lock temporarily */ 161 - mutex_unlock(&group->mark_mutex); 162 159 163 160 spin_lock(&destroy_lock); 164 161 list_add(&mark->g_list, &destroy_list); 165 162 spin_unlock(&destroy_lock); 166 163 wake_up(&destroy_waitq); 167 - /* 168 - * We don't necessarily have a ref on mark from caller so the above destroy 169 - * may have actually freed it, unless this group provides a 'freeing_mark' 170 - * function which must be holding a reference. 171 - */ 172 - 173 - /* 174 - * Some groups like to know that marks are being freed. This is a 175 - * callback to the group function to let it know that this mark 176 - * is being freed. 177 - */ 178 - if (group->ops->freeing_mark) 179 - group->ops->freeing_mark(mark, group); 180 164 181 165 /* 182 166 * __fsnotify_update_child_dentry_flags(inode); ··· 175 191 */ 176 192 177 193 atomic_dec(&group->num_marks); 178 - 179 - mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING); 180 194 } 181 195 182 196 void fsnotify_destroy_mark(struct fsnotify_mark *mark, ··· 187 205 188 206 /* 189 207 * Destroy all marks in the given list. The marks must be already detached from 190 - * the original inode / vfsmount. 208 + * the original inode / vfsmount. Note that we can race with 209 + * fsnotify_clear_marks_by_group_flags(). However we hold a reference to each 210 + * mark so they won't get freed from under us and nobody else touches our 211 + * free_list list_head. 191 212 */ 192 213 void fsnotify_destroy_marks(struct list_head *to_free) 193 214 { ··· 391 406 } 392 407 393 408 /* 394 - * clear any marks in a group in which mark->flags & flags is true 409 + * Clear any marks in a group in which mark->flags & flags is true. 
395 410 */ 396 411 void fsnotify_clear_marks_by_group_flags(struct fsnotify_group *group, 397 412 unsigned int flags) ··· 445 460 { 446 461 struct fsnotify_mark *mark, *next; 447 462 struct list_head private_destroy_list; 463 + struct fsnotify_group *group; 448 464 449 465 for (;;) { 450 466 spin_lock(&destroy_lock); ··· 457 471 458 472 list_for_each_entry_safe(mark, next, &private_destroy_list, g_list) { 459 473 list_del_init(&mark->g_list); 474 + group = mark->group; 475 + /* 476 + * Some groups like to know that marks are being freed. 477 + * This is a callback to the group function to let it 478 + * know that this mark is being freed. 479 + */ 480 + if (group && group->ops->freeing_mark) 481 + group->ops->freeing_mark(mark, group); 460 482 fsnotify_put_mark(mark); 461 483 } 462 484
-1
fs/ocfs2/ioctl.c
··· 980 980 case OCFS2_IOC_GROUP_EXTEND: 981 981 case OCFS2_IOC_GROUP_ADD: 982 982 case OCFS2_IOC_GROUP_ADD64: 983 - case FITRIM: 984 983 break; 985 984 case OCFS2_IOC_REFLINK: 986 985 if (copy_from_user(&args, argp, sizeof(args)))
+3
fs/overlayfs/inode.c
··· 343 343 struct path realpath; 344 344 enum ovl_path_type type; 345 345 346 + if (d_is_dir(dentry)) 347 + return d_backing_inode(dentry); 348 + 346 349 type = ovl_path_real(dentry, &realpath); 347 350 if (ovl_open_need_copy_up(file_flags, type, realpath.dentry)) { 348 351 err = ovl_want_write(dentry);
+6
fs/proc/Kconfig
··· 75 75 config PROC_CHILDREN 76 76 bool "Include /proc/<pid>/task/<tid>/children file" 77 77 default n 78 + help 79 + Provides a fast way to retrieve first level children pids of a task. See 80 + <file:Documentation/filesystems/proc.txt> for more information. 81 + 82 + Say Y if you are running any user-space software which takes benefit from 83 + this interface. For example, rkt is such a piece of software.
+5
fs/proc/base.c
··· 243 243 len1 = arg_end - arg_start; 244 244 len2 = env_end - env_start; 245 245 246 + /* Empty ARGV. */ 247 + if (len1 == 0) { 248 + rv = 0; 249 + goto out_free_page; 250 + } 246 251 /* 247 252 * Inherently racy -- command line shares address space 248 253 * with code and data.
+2 -2
fs/proc/kcore.c
··· 92 92 roundup(sizeof(CORE_STR), 4)) + 93 93 roundup(sizeof(struct elf_prstatus), 4) + 94 94 roundup(sizeof(struct elf_prpsinfo), 4) + 95 - roundup(sizeof(struct task_struct), 4); 95 + roundup(arch_task_struct_size, 4); 96 96 *elf_buflen = PAGE_ALIGN(*elf_buflen); 97 97 return size + *elf_buflen; 98 98 } ··· 415 415 /* set up the task structure */ 416 416 notes[2].name = CORE_STR; 417 417 notes[2].type = NT_TASKSTRUCT; 418 - notes[2].datasz = sizeof(struct task_struct); 418 + notes[2].datasz = arch_task_struct_size; 419 419 notes[2].data = current; 420 420 421 421 nhdr->p_filesz += notesize(&notes[2]);
+16
include/asm-generic/mm-arch-hooks.h
··· 1 + /* 2 + * Architecture specific mm hooks 3 + */ 4 + 5 + #ifndef _ASM_GENERIC_MM_ARCH_HOOKS_H 6 + #define _ASM_GENERIC_MM_ARCH_HOOKS_H 7 + 8 + /* 9 + * This file should be included through arch/../include/asm/Kbuild for 10 + * the architecture which doesn't need specific mm hooks. 11 + * 12 + * In that case, the generic hooks defined in include/linux/mm-arch-hooks.h 13 + * are used. 14 + */ 15 + 16 + #endif /* _ASM_GENERIC_MM_ARCH_HOOKS_H */
+14 -10
include/linux/acpi.h
··· 58 58 acpi_fwnode_handle(adev) : NULL) 59 59 #define ACPI_HANDLE(dev) acpi_device_handle(ACPI_COMPANION(dev)) 60 60 61 + /** 62 + * ACPI_DEVICE_CLASS - macro used to describe an ACPI device with 63 + * the PCI-defined class-code information 64 + * 65 + * @_cls : the class, subclass, prog-if triple for this device 66 + * @_msk : the class mask for this device 67 + * 68 + * This macro is used to create a struct acpi_device_id that matches a 69 + * specific PCI class. The .id and .driver_data fields will be left 70 + * initialized with the default value. 71 + */ 72 + #define ACPI_DEVICE_CLASS(_cls, _msk) .cls = (_cls), .cls_msk = (_msk), 73 + 61 74 static inline bool has_acpi_companion(struct device *dev) 62 75 { 63 76 return is_acpi_node(dev->fwnode); ··· 322 309 323 310 int acpi_resources_are_enforced(void); 324 311 325 - int acpi_reserve_region(u64 start, unsigned int length, u8 space_id, 326 - unsigned long flags, char *desc); 327 - 328 312 #ifdef CONFIG_HIBERNATION 329 313 void __init acpi_no_s4_hw_signature(void); 330 314 #endif ··· 456 446 #define ACPI_COMPANION(dev) (NULL) 457 447 #define ACPI_COMPANION_SET(dev, adev) do { } while (0) 458 448 #define ACPI_HANDLE(dev) (NULL) 449 + #define ACPI_DEVICE_CLASS(_cls, _msk) .cls = (0), .cls_msk = (0), 459 450 460 451 struct fwnode_handle; 461 452 ··· 516 505 const char *name) 517 506 { 518 507 return 0; 519 - } 520 - 521 - static inline int acpi_reserve_region(u64 start, unsigned int length, 522 - u8 space_id, unsigned long flags, 523 - char *desc) 524 - { 525 - return -ENXIO; 526 508 } 527 509 528 510 struct acpi_table_header;
+1 -1
include/linux/amba/sp810.h
··· 2 2 * ARM PrimeXsys System Controller SP810 header file 3 3 * 4 4 * Copyright (C) 2009 ST Microelectronics 5 - * Viresh Kumar <viresh.linux@gmail.com> 5 + * Viresh Kumar <vireshk@kernel.org> 6 6 * 7 7 * This file is licensed under the terms of the GNU General Public 8 8 * License version 2. This program is licensed "as is" without any
+3 -8
include/linux/blk-cgroup.h
··· 47 47 48 48 struct blkcg_policy_data *pd[BLKCG_MAX_POLS]; 49 49 50 + struct list_head all_blkcgs_node; 50 51 #ifdef CONFIG_CGROUP_WRITEBACK 51 52 struct list_head cgwb_list; 52 53 #endif ··· 89 88 * Policies that need to keep per-blkcg data which is independent 90 89 * from any request_queue associated to it must specify its size 91 90 * with the cpd_size field of the blkcg_policy structure and 92 - * embed a blkcg_policy_data in it. blkcg core allocates 93 - * policy-specific per-blkcg structures lazily the first time 94 - * they are actually needed, so it handles them together with 95 - * blkgs. cpd_init() is invoked to let each policy handle 96 - * per-blkcg data. 91 + * embed a blkcg_policy_data in it. cpd_init() is invoked to let 92 + * each policy handle per-blkcg data. 97 93 */ 98 94 struct blkcg_policy_data { 99 95 /* the policy id this per-policy data belongs to */ 100 96 int plid; 101 - 102 - /* used during policy activation */ 103 - struct list_head alloc_node; 104 97 }; 105 98 106 99 /* association between a blk cgroup and a request queue */
+7
include/linux/buffer_head.h
··· 317 317 return __getblk_gfp(sb->s_bdev, block, sb->s_blocksize, __GFP_MOVABLE); 318 318 } 319 319 320 + 321 + static inline struct buffer_head * 322 + sb_getblk_gfp(struct super_block *sb, sector_t block, gfp_t gfp) 323 + { 324 + return __getblk_gfp(sb->s_bdev, block, sb->s_blocksize, gfp); 325 + } 326 + 320 327 static inline struct buffer_head * 321 328 sb_find_get_block(struct super_block *sb, sector_t block) 322 329 {
+2
include/linux/can/skb.h
··· 27 27 /** 28 28 * struct can_skb_priv - private additional data inside CAN sk_buffs 29 29 * @ifindex: ifindex of the first interface the CAN frame appeared on 30 + * @skbcnt: atomic counter to have an unique id together with skb pointer 30 31 * @cf: align to the following CAN frame at skb->data 31 32 */ 32 33 struct can_skb_priv { 33 34 int ifindex; 35 + int skbcnt; 34 36 struct can_frame cf[0]; 35 37 }; 36 38
+3
include/linux/ceph/messenger.h
··· 8 8 #include <linux/radix-tree.h> 9 9 #include <linux/uio.h> 10 10 #include <linux/workqueue.h> 11 + #include <net/net_namespace.h> 11 12 12 13 #include <linux/ceph/types.h> 13 14 #include <linux/ceph/buffer.h> ··· 57 56 struct ceph_entity_addr my_enc_addr; 58 57 59 58 atomic_t stopping; 59 + possible_net_t net; 60 60 bool nocrc; 61 61 bool tcp_nodelay; 62 62 ··· 269 267 u64 required_features, 270 268 bool nocrc, 271 269 bool tcp_nodelay); 270 + extern void ceph_messenger_fini(struct ceph_messenger *msgr); 272 271 273 272 extern void ceph_con_init(struct ceph_connection *con, void *private, 274 273 const struct ceph_connection_operations *ops,
+4 -3
include/linux/clkdev.h
··· 33 33 } 34 34 35 35 struct clk_lookup *clkdev_alloc(struct clk *clk, const char *con_id, 36 - const char *dev_fmt, ...); 36 + const char *dev_fmt, ...) __printf(3, 4); 37 37 38 38 void clkdev_add(struct clk_lookup *cl); 39 39 void clkdev_drop(struct clk_lookup *cl); 40 40 41 41 struct clk_lookup *clkdev_create(struct clk *clk, const char *con_id, 42 - const char *dev_fmt, ...); 42 + const char *dev_fmt, ...) __printf(3, 4); 43 43 44 44 void clkdev_add_table(struct clk_lookup *, size_t); 45 45 int clk_add_alias(const char *, const char *, const char *, struct device *); 46 46 47 - int clk_register_clkdev(struct clk *, const char *, const char *, ...); 47 + int clk_register_clkdev(struct clk *, const char *, const char *, ...) 48 + __printf(3, 4); 48 49 int clk_register_clkdevs(struct clk *, struct clk_lookup *, size_t); 49 50 50 51 #ifdef CONFIG_COMMON_CLK
+1 -1
include/linux/compat.h
··· 424 424 425 425 asmlinkage long compat_sys_adjtimex(struct compat_timex __user *utp); 426 426 427 - extern int compat_printk(const char *fmt, ...); 427 + extern __printf(1, 2) int compat_printk(const char *fmt, ...); 428 428 extern void sigset_from_compat(sigset_t *set, const compat_sigset_t *compat); 429 429 extern void sigset_to_compat(compat_sigset_t *compat, const sigset_t *set); 430 430
+1 -1
include/linux/compiler.h
··· 17 17 # define __release(x) __context__(x,-1) 18 18 # define __cond_lock(x,c) ((c) ? ({ __acquire(x); 1; }) : 0) 19 19 # define __percpu __attribute__((noderef, address_space(3))) 20 + # define __pmem __attribute__((noderef, address_space(5))) 20 21 #ifdef CONFIG_SPARSE_RCU_POINTER 21 22 # define __rcu __attribute__((noderef, address_space(4))) 22 23 #else 23 24 # define __rcu 24 - # define __pmem __attribute__((noderef, address_space(5))) 25 25 #endif 26 26 extern void __chk_user_ptr(const volatile void __user *); 27 27 extern void __chk_io_ptr(const volatile void __iomem *);
+2 -1
include/linux/configfs.h
··· 64 64 struct dentry *ci_dentry; 65 65 }; 66 66 67 - extern int config_item_set_name(struct config_item *, const char *, ...); 67 + extern __printf(2, 3) 68 + int config_item_set_name(struct config_item *, const char *, ...); 68 69 69 70 static inline char *config_item_name(struct config_item * item) 70 71 {
+4 -3
include/linux/cpu.h
··· 40 40 extern int cpu_add_dev_attr_group(struct attribute_group *attrs); 41 41 extern void cpu_remove_dev_attr_group(struct attribute_group *attrs); 42 42 43 - extern struct device *cpu_device_create(struct device *parent, void *drvdata, 44 - const struct attribute_group **groups, 45 - const char *fmt, ...); 43 + extern __printf(4, 5) 44 + struct device *cpu_device_create(struct device *parent, void *drvdata, 45 + const struct attribute_group **groups, 46 + const char *fmt, ...); 46 47 #ifdef CONFIG_HOTPLUG_CPU 47 48 extern void unregister_cpu(struct cpu *cpu); 48 49 extern ssize_t arch_cpu_probe(const char *, size_t);
+2 -1
include/linux/dcache.h
··· 327 327 /* 328 328 * helper function for dentry_operations.d_dname() members 329 329 */ 330 - extern char *dynamic_dname(struct dentry *, char *, int, const char *, ...); 330 + extern __printf(4, 5) 331 + char *dynamic_dname(struct dentry *, char *, int, const char *, ...); 331 332 extern char *simple_dname(struct dentry *, char *, int); 332 333 333 334 extern char *__d_path(const struct path *, const struct path *, char *, int);
+7 -8
include/linux/device.h
··· 637 637 638 638 /* managed devm_k.alloc/kfree for device drivers */ 639 639 extern void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp); 640 - extern char *devm_kvasprintf(struct device *dev, gfp_t gfp, const char *fmt, 641 - va_list ap); 640 + extern __printf(3, 0) 641 + char *devm_kvasprintf(struct device *dev, gfp_t gfp, const char *fmt, 642 + va_list ap); 642 643 extern __printf(3, 4) 643 644 char *devm_kasprintf(struct device *dev, gfp_t gfp, const char *fmt, ...); 644 645 static inline void *devm_kzalloc(struct device *dev, size_t size, gfp_t gfp) ··· 1012 1011 /* 1013 1012 * Easy functions for dynamically creating devices on the fly 1014 1013 */ 1015 - extern struct device *device_create_vargs(struct class *cls, 1016 - struct device *parent, 1017 - dev_t devt, 1018 - void *drvdata, 1019 - const char *fmt, 1020 - va_list vargs); 1014 + extern __printf(5, 0) 1015 + struct device *device_create_vargs(struct class *cls, struct device *parent, 1016 + dev_t devt, void *drvdata, 1017 + const char *fmt, va_list vargs); 1021 1018 extern __printf(5, 6) 1022 1019 struct device *device_create(struct class *cls, struct device *parent, 1023 1020 dev_t devt, void *drvdata,
+20 -10
include/linux/fs.h
··· 1046 1046 extern void locks_release_private(struct file_lock *); 1047 1047 extern void posix_test_lock(struct file *, struct file_lock *); 1048 1048 extern int posix_lock_file(struct file *, struct file_lock *, struct file_lock *); 1049 - extern int posix_lock_file_wait(struct file *, struct file_lock *); 1049 + extern int posix_lock_inode_wait(struct inode *, struct file_lock *); 1050 1050 extern int posix_unblock_lock(struct file_lock *); 1051 1051 extern int vfs_test_lock(struct file *, struct file_lock *); 1052 1052 extern int vfs_lock_file(struct file *, unsigned int, struct file_lock *, struct file_lock *); 1053 1053 extern int vfs_cancel_lock(struct file *filp, struct file_lock *fl); 1054 - extern int flock_lock_file_wait(struct file *filp, struct file_lock *fl); 1054 + extern int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl); 1055 1055 extern int __break_lease(struct inode *inode, unsigned int flags, unsigned int type); 1056 1056 extern void lease_get_mtime(struct inode *, struct timespec *time); 1057 1057 extern int generic_setlease(struct file *, long, struct file_lock **, void **priv); ··· 1137 1137 return -ENOLCK; 1138 1138 } 1139 1139 1140 - static inline int posix_lock_file_wait(struct file *filp, struct file_lock *fl) 1140 + static inline int posix_lock_inode_wait(struct inode *inode, 1141 + struct file_lock *fl) 1141 1142 { 1142 1143 return -ENOLCK; 1143 1144 } ··· 1164 1163 return 0; 1165 1164 } 1166 1165 1167 - static inline int flock_lock_file_wait(struct file *filp, 1168 - struct file_lock *request) 1166 + static inline int flock_lock_inode_wait(struct inode *inode, 1167 + struct file_lock *request) 1169 1168 { 1170 1169 return -ENOLCK; 1171 1170 } ··· 1203 1202 struct file *filp, struct files_struct *files) {} 1204 1203 #endif /* !CONFIG_FILE_LOCKING */ 1205 1204 1205 + static inline struct inode *file_inode(const struct file *f) 1206 + { 1207 + return f->f_inode; 1208 + } 1209 + 1210 + static inline int 
posix_lock_file_wait(struct file *filp, struct file_lock *fl) 1211 + { 1212 + return posix_lock_inode_wait(file_inode(filp), fl); 1213 + } 1214 + 1215 + static inline int flock_lock_file_wait(struct file *filp, struct file_lock *fl) 1216 + { 1217 + return flock_lock_inode_wait(file_inode(filp), fl); 1218 + } 1206 1219 1207 1220 struct fasync_struct { 1208 1221 spinlock_t fa_lock; ··· 2025 2010 extern void ihold(struct inode * inode); 2026 2011 extern void iput(struct inode *); 2027 2012 extern int generic_update_time(struct inode *, struct timespec *, int); 2028 - 2029 - static inline struct inode *file_inode(const struct file *f) 2030 - { 2031 - return f->f_inode; 2032 - } 2033 2013 2034 2014 /* /sys/fs */ 2035 2015 extern struct kobject *fs_kobj;
+1 -1
include/linux/gpio/driver.h
··· 45 45 * @base: identifies the first GPIO number handled by this chip; 46 46 * or, if negative during registration, requests dynamic ID allocation. 47 47 * DEPRECATION: providing anything non-negative and nailing the base 48 - * base offset of GPIO chips is deprecated. Please pass -1 as base to 48 + * offset of GPIO chips is deprecated. Please pass -1 as base to 49 49 * let gpiolib select the chip base in all possible cases. We want to 50 50 * get rid of the static GPIO number space in the long run. 51 51 * @ngpio: the number of GPIOs handled by this controller; the last GPIO
+1
include/linux/hid-sensor-hub.h
··· 230 230 struct platform_device *pdev; 231 231 unsigned usage_id; 232 232 atomic_t data_ready; 233 + atomic_t user_requested_state; 233 234 struct iio_trigger *trigger; 234 235 struct hid_sensor_hub_attribute_info poll; 235 236 struct hid_sensor_hub_attribute_info report_state;
+8 -9
include/linux/hugetlb.h
··· 460 460 return &mm->page_table_lock; 461 461 } 462 462 463 - static inline bool hugepages_supported(void) 464 - { 465 - /* 466 - * Some platform decide whether they support huge pages at boot 467 - * time. On these, such as powerpc, HPAGE_SHIFT is set to 0 when 468 - * there is no such support 469 - */ 470 - return HPAGE_SHIFT != 0; 471 - } 463 + #ifndef hugepages_supported 464 + /* 465 + * Some platform decide whether they support huge pages at boot 466 + * time. Some of them, such as powerpc, set HPAGE_SHIFT to 0 467 + * when there is no such support 468 + */ 469 + #define hugepages_supported() (HPAGE_SHIFT != 0) 470 + #endif 472 471 473 472 #else /* CONFIG_HUGETLB_PAGE */ 474 473 struct hstate {};
-78
include/linux/init.h
··· 282 282 void __init parse_early_options(char *cmdline); 283 283 #endif /* __ASSEMBLY__ */ 284 284 285 - /** 286 - * module_init() - driver initialization entry point 287 - * @x: function to be run at kernel boot time or module insertion 288 - * 289 - * module_init() will either be called during do_initcalls() (if 290 - * builtin) or at module insertion time (if a module). There can only 291 - * be one per module. 292 - */ 293 - #define module_init(x) __initcall(x); 294 - 295 - /** 296 - * module_exit() - driver exit entry point 297 - * @x: function to be run when driver is removed 298 - * 299 - * module_exit() will wrap the driver clean-up code 300 - * with cleanup_module() when used with rmmod when 301 - * the driver is a module. If the driver is statically 302 - * compiled into the kernel, module_exit() has no effect. 303 - * There can only be one per module. 304 - */ 305 - #define module_exit(x) __exitcall(x); 306 - 307 285 #else /* MODULE */ 308 - 309 - /* 310 - * In most cases loadable modules do not need custom 311 - * initcall levels. There are still some valid cases where 312 - * a driver may be needed early if built in, and does not 313 - * matter when built as a loadable module. Like bus 314 - * snooping debug drivers. 
315 - */ 316 - #define early_initcall(fn) module_init(fn) 317 - #define core_initcall(fn) module_init(fn) 318 - #define core_initcall_sync(fn) module_init(fn) 319 - #define postcore_initcall(fn) module_init(fn) 320 - #define postcore_initcall_sync(fn) module_init(fn) 321 - #define arch_initcall(fn) module_init(fn) 322 - #define subsys_initcall(fn) module_init(fn) 323 - #define subsys_initcall_sync(fn) module_init(fn) 324 - #define fs_initcall(fn) module_init(fn) 325 - #define fs_initcall_sync(fn) module_init(fn) 326 - #define rootfs_initcall(fn) module_init(fn) 327 - #define device_initcall(fn) module_init(fn) 328 - #define device_initcall_sync(fn) module_init(fn) 329 - #define late_initcall(fn) module_init(fn) 330 - #define late_initcall_sync(fn) module_init(fn) 331 - 332 - #define console_initcall(fn) module_init(fn) 333 - #define security_initcall(fn) module_init(fn) 334 - 335 - /* Each module must use one module_init(). */ 336 - #define module_init(initfn) \ 337 - static inline initcall_t __inittest(void) \ 338 - { return initfn; } \ 339 - int init_module(void) __attribute__((alias(#initfn))); 340 - 341 - /* This is only required if you want to be unloadable. */ 342 - #define module_exit(exitfn) \ 343 - static inline exitcall_t __exittest(void) \ 344 - { return exitfn; } \ 345 - void cleanup_module(void) __attribute__((alias(#exitfn))); 346 286 347 287 #define __setup_param(str, unique_id, fn) /* nothing */ 348 288 #define __setup(str, func) /* nothing */ ··· 290 350 291 351 /* Data marked not to be saved by software suspend */ 292 352 #define __nosavedata __section(.data..nosave) 293 - 294 - /* This means "can be init if no module support, otherwise module load 295 - may call it." 
*/ 296 - #ifdef CONFIG_MODULES 297 - #define __init_or_module 298 - #define __initdata_or_module 299 - #define __initconst_or_module 300 - #define __INIT_OR_MODULE .text 301 - #define __INITDATA_OR_MODULE .data 302 - #define __INITRODATA_OR_MODULE .section ".rodata","a",%progbits 303 - #else 304 - #define __init_or_module __init 305 - #define __initdata_or_module __initdata 306 - #define __initconst_or_module __initconst 307 - #define __INIT_OR_MODULE __INIT 308 - #define __INITDATA_OR_MODULE __INITDATA 309 - #define __INITRODATA_OR_MODULE __INITRODATA 310 - #endif /*CONFIG_MODULES*/ 311 353 312 354 #ifdef MODULE 313 355 #define __exit_p(x) x
+1 -1
include/linux/iommu.h
··· 258 258 void *data); 259 259 struct device *iommu_device_create(struct device *parent, void *drvdata, 260 260 const struct attribute_group **groups, 261 - const char *fmt, ...); 261 + const char *fmt, ...) __printf(4, 5); 262 262 void iommu_device_destroy(struct device *dev); 263 263 int iommu_device_link(struct device *dev, struct device *link); 264 264 void iommu_device_unlink(struct device *dev, struct device *link);
+6 -1
include/linux/irqdesc.h
··· 87 87 const char *name; 88 88 } ____cacheline_internodealigned_in_smp; 89 89 90 - #ifndef CONFIG_SPARSE_IRQ 90 + #ifdef CONFIG_SPARSE_IRQ 91 + extern void irq_lock_sparse(void); 92 + extern void irq_unlock_sparse(void); 93 + #else 94 + static inline void irq_lock_sparse(void) { } 95 + static inline void irq_unlock_sparse(void) { } 91 96 extern struct irq_desc irq_desc[NR_IRQS]; 92 97 #endif 93 98
+5 -4
include/linux/kernel.h
··· 411 411 int vscnprintf(char *buf, size_t size, const char *fmt, va_list args); 412 412 extern __printf(2, 3) 413 413 char *kasprintf(gfp_t gfp, const char *fmt, ...); 414 - extern char *kvasprintf(gfp_t gfp, const char *fmt, va_list args); 414 + extern __printf(2, 0) 415 + char *kvasprintf(gfp_t gfp, const char *fmt, va_list args); 415 416 416 417 extern __scanf(2, 3) 417 418 int sscanf(const char *, const char *, ...); ··· 680 679 __ftrace_vprintk(_THIS_IP_, fmt, vargs); \ 681 680 } while (0) 682 681 683 - extern int 682 + extern __printf(2, 0) int 684 683 __ftrace_vbprintk(unsigned long ip, const char *fmt, va_list ap); 685 684 686 - extern int 685 + extern __printf(2, 0) int 687 686 __ftrace_vprintk(unsigned long ip, const char *fmt, va_list ap); 688 687 689 688 extern void ftrace_dump(enum ftrace_dump_mode oops_dump_mode); ··· 703 702 { 704 703 return 0; 705 704 } 706 - static inline int 705 + static __printf(1, 0) inline int 707 706 ftrace_vprintk(const char *fmt, va_list ap) 708 707 { 709 708 return 0;
+3 -2
include/linux/kobject.h
··· 80 80 81 81 extern __printf(2, 3) 82 82 int kobject_set_name(struct kobject *kobj, const char *name, ...); 83 - extern int kobject_set_name_vargs(struct kobject *kobj, const char *fmt, 84 - va_list vargs); 83 + extern __printf(2, 0) 84 + int kobject_set_name_vargs(struct kobject *kobj, const char *fmt, 85 + va_list vargs); 85 86 86 87 static inline const char *kobject_name(const struct kobject *kobj) 87 88 {
+18
include/linux/kvm_host.h
··· 734 734 return false; 735 735 } 736 736 #endif 737 + #ifdef __KVM_HAVE_ARCH_ASSIGNED_DEVICE 738 + void kvm_arch_start_assignment(struct kvm *kvm); 739 + void kvm_arch_end_assignment(struct kvm *kvm); 740 + bool kvm_arch_has_assigned_device(struct kvm *kvm); 741 + #else 742 + static inline void kvm_arch_start_assignment(struct kvm *kvm) 743 + { 744 + } 745 + 746 + static inline void kvm_arch_end_assignment(struct kvm *kvm) 747 + { 748 + } 749 + 750 + static inline bool kvm_arch_has_assigned_device(struct kvm *kvm) 751 + { 752 + return false; 753 + } 754 + #endif 737 755 738 756 static inline wait_queue_head_t *kvm_arch_vcpu_wq(struct kvm_vcpu *vcpu) 739 757 {
+1 -1
include/linux/mmiotrace.h
··· 106 106 extern void disable_mmiotrace(void); 107 107 extern void mmio_trace_rw(struct mmiotrace_rw *rw); 108 108 extern void mmio_trace_mapping(struct mmiotrace_map *map); 109 - extern int mmio_trace_printk(const char *fmt, va_list args); 109 + extern __printf(1, 0) int mmio_trace_printk(const char *fmt, va_list args); 110 110 111 111 #endif /* _LINUX_MMIOTRACE_H */
+2
include/linux/mod_devicetable.h
··· 189 189 struct acpi_device_id { 190 190 __u8 id[ACPI_ID_LEN]; 191 191 kernel_ulong_t driver_data; 192 + __u32 cls; 193 + __u32 cls_msk; 192 194 }; 193 195 194 196 #define PNP_ID_LEN 8
+84
include/linux/module.h
··· 11 11 #include <linux/compiler.h> 12 12 #include <linux/cache.h> 13 13 #include <linux/kmod.h> 14 + #include <linux/init.h> 14 15 #include <linux/elf.h> 15 16 #include <linux/stringify.h> 16 17 #include <linux/kobject.h> ··· 71 70 /* These are either module local, or the kernel's dummy ones. */ 72 71 extern int init_module(void); 73 72 extern void cleanup_module(void); 73 + 74 + #ifndef MODULE 75 + /** 76 + * module_init() - driver initialization entry point 77 + * @x: function to be run at kernel boot time or module insertion 78 + * 79 + * module_init() will either be called during do_initcalls() (if 80 + * builtin) or at module insertion time (if a module). There can only 81 + * be one per module. 82 + */ 83 + #define module_init(x) __initcall(x); 84 + 85 + /** 86 + * module_exit() - driver exit entry point 87 + * @x: function to be run when driver is removed 88 + * 89 + * module_exit() will wrap the driver clean-up code 90 + * with cleanup_module() when used with rmmod when 91 + * the driver is a module. If the driver is statically 92 + * compiled into the kernel, module_exit() has no effect. 93 + * There can only be one per module. 94 + */ 95 + #define module_exit(x) __exitcall(x); 96 + 97 + #else /* MODULE */ 98 + 99 + /* 100 + * In most cases loadable modules do not need custom 101 + * initcall levels. There are still some valid cases where 102 + * a driver may be needed early if built in, and does not 103 + * matter when built as a loadable module. Like bus 104 + * snooping debug drivers. 
105 + */ 106 + #define early_initcall(fn) module_init(fn) 107 + #define core_initcall(fn) module_init(fn) 108 + #define core_initcall_sync(fn) module_init(fn) 109 + #define postcore_initcall(fn) module_init(fn) 110 + #define postcore_initcall_sync(fn) module_init(fn) 111 + #define arch_initcall(fn) module_init(fn) 112 + #define subsys_initcall(fn) module_init(fn) 113 + #define subsys_initcall_sync(fn) module_init(fn) 114 + #define fs_initcall(fn) module_init(fn) 115 + #define fs_initcall_sync(fn) module_init(fn) 116 + #define rootfs_initcall(fn) module_init(fn) 117 + #define device_initcall(fn) module_init(fn) 118 + #define device_initcall_sync(fn) module_init(fn) 119 + #define late_initcall(fn) module_init(fn) 120 + #define late_initcall_sync(fn) module_init(fn) 121 + 122 + #define console_initcall(fn) module_init(fn) 123 + #define security_initcall(fn) module_init(fn) 124 + 125 + /* Each module must use one module_init(). */ 126 + #define module_init(initfn) \ 127 + static inline initcall_t __inittest(void) \ 128 + { return initfn; } \ 129 + int init_module(void) __attribute__((alias(#initfn))); 130 + 131 + /* This is only required if you want to be unloadable. */ 132 + #define module_exit(exitfn) \ 133 + static inline exitcall_t __exittest(void) \ 134 + { return exitfn; } \ 135 + void cleanup_module(void) __attribute__((alias(#exitfn))); 136 + 137 + #endif 138 + 139 + /* This means "can be init if no module support, otherwise module load 140 + may call it." */ 141 + #ifdef CONFIG_MODULES 142 + #define __init_or_module 143 + #define __initdata_or_module 144 + #define __initconst_or_module 145 + #define __INIT_OR_MODULE .text 146 + #define __INITDATA_OR_MODULE .data 147 + #define __INITRODATA_OR_MODULE .section ".rodata","a",%progbits 148 + #else 149 + #define __init_or_module __init 150 + #define __initdata_or_module __initdata 151 + #define __initconst_or_module __initconst 152 + #define __INIT_OR_MODULE __INIT 153 + #define __INITDATA_OR_MODULE __INITDATA 154 + #define __INITRODATA_OR_MODULE __INITRODATA 155 + #endif /*CONFIG_MODULES*/ 74 156 75 157 /* Archs provide a method of finding the correct exception table. */ 76 158 struct exception_table_entry;
+13
include/linux/page_owner.h
··· 8 8 extern void __reset_page_owner(struct page *page, unsigned int order); 9 9 extern void __set_page_owner(struct page *page, 10 10 unsigned int order, gfp_t gfp_mask); 11 + extern gfp_t __get_page_owner_gfp(struct page *page); 11 12 12 13 static inline void reset_page_owner(struct page *page, unsigned int order) 13 14 { ··· 26 25 27 26 __set_page_owner(page, order, gfp_mask); 28 27 } 28 + 29 + static inline gfp_t get_page_owner_gfp(struct page *page) 30 + { 31 + if (likely(!page_owner_inited)) 32 + return 0; 33 + 34 + return __get_page_owner_gfp(page); 35 + } 29 36 #else 30 37 static inline void reset_page_owner(struct page *page, unsigned int order) 31 38 { ··· 41 32 static inline void set_page_owner(struct page *page, 42 33 unsigned int order, gfp_t gfp_mask) 43 34 { 35 + } 36 + static inline gfp_t get_page_owner_gfp(struct page *page) 37 + { 38 + return 0; 44 39 } 45 40 46 41 #endif /* CONFIG_PAGE_OWNER */
+1 -1
include/linux/pata_arasan_cf_data.h
··· 4 4 * Arasan Compact Flash host controller platform data header file 5 5 * 6 6 * Copyright (C) 2011 ST Microelectronics 7 - * Viresh Kumar <viresh.linux@gmail.com> 7 + * Viresh Kumar <vireshk@kernel.org> 8 8 * 9 9 * This file is licensed under the terms of the GNU General Public 10 10 * License version 2. This program is licensed "as is" without any
+3 -3
include/linux/printk.h
··· 122 122 void early_printk(const char *s, ...) { } 123 123 #endif 124 124 125 - typedef int(*printk_func_t)(const char *fmt, va_list args); 125 + typedef __printf(1, 0) int (*printk_func_t)(const char *fmt, va_list args); 126 126 127 127 #ifdef CONFIG_PRINTK 128 128 asmlinkage __printf(5, 0) ··· 166 166 u32 log_buf_len_get(void); 167 167 void log_buf_kexec_setup(void); 168 168 void __init setup_log_buf(int early); 169 - void dump_stack_set_arch_desc(const char *fmt, ...); 169 + __printf(1, 2) void dump_stack_set_arch_desc(const char *fmt, ...); 170 170 void dump_stack_print_info(const char *log_lvl); 171 171 void show_regs_print_info(const char *log_lvl); 172 172 #else ··· 217 217 { 218 218 } 219 219 220 - static inline void dump_stack_set_arch_desc(const char *fmt, ...) 220 + static inline __printf(1, 2) void dump_stack_set_arch_desc(const char *fmt, ...) 221 221 { 222 222 } 223 223
+4
include/linux/rtc/sirfsoc_rtciobrg.h
··· 9 9 #ifndef _SIRFSOC_RTC_IOBRG_H_ 10 10 #define _SIRFSOC_RTC_IOBRG_H_ 11 11 12 + struct regmap_config; 13 + 12 14 extern void sirfsoc_rtc_iobrg_besyncing(void); 13 15 14 16 extern u32 sirfsoc_rtc_iobrg_readl(u32 addr); 15 17 16 18 extern void sirfsoc_rtc_iobrg_writel(u32 val, u32 addr); 19 + struct regmap *devm_regmap_init_iobg(struct device *dev, 20 + const struct regmap_config *config); 17 21 18 22 #endif
+14 -2
include/linux/sched.h
··· 1522 1522 /* hung task detection */ 1523 1523 unsigned long last_switch_count; 1524 1524 #endif 1525 - /* CPU-specific state of this task */ 1526 - struct thread_struct thread; 1527 1525 /* filesystem information */ 1528 1526 struct fs_struct *fs; 1529 1527 /* open file information */ ··· 1776 1778 unsigned long task_state_change; 1777 1779 #endif 1778 1780 int pagefault_disabled; 1781 + /* CPU-specific state of this task */ 1782 + struct thread_struct thread; 1783 + /* 1784 + * WARNING: on x86, 'thread_struct' contains a variable-sized 1785 + * structure. It *MUST* be at the end of 'task_struct'. 1786 + * 1787 + * Do not put anything below here! 1788 + */ 1779 1789 }; 1790 + 1791 + #ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT 1792 + extern int arch_task_struct_size __read_mostly; 1793 + #else 1794 + # define arch_task_struct_size (sizeof(struct task_struct)) 1795 + #endif 1780 1796 1781 1797 /* Future-safe accessor for struct task_struct's cpus_allowed. */ 1782 1798 #define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed)
+5 -2
include/linux/tick.h
··· 67 67 static inline void tick_broadcast_control(enum tick_broadcast_mode mode) { } 68 68 #endif /* BROADCAST */ 69 69 70 - #if defined(CONFIG_GENERIC_CLOCKEVENTS_BROADCAST) && defined(CONFIG_TICK_ONESHOT) 70 + #ifdef CONFIG_GENERIC_CLOCKEVENTS 71 71 extern int tick_broadcast_oneshot_control(enum tick_broadcast_state state); 72 72 #else 73 - static inline int tick_broadcast_oneshot_control(enum tick_broadcast_state state) { return 0; } 73 + static inline int tick_broadcast_oneshot_control(enum tick_broadcast_state state) 74 + { 75 + return 0; 76 + } 74 77 #endif 75 78 76 79 static inline void tick_broadcast_enable(void)
-1
include/linux/timekeeping.h
··· 145 145 } 146 146 #endif 147 147 148 - #define do_posix_clock_monotonic_gettime(ts) ktime_get_ts(ts) 149 148 #define ktime_get_real_ts64(ts) getnstimeofday64(ts) 150 149 151 150 /*
+6 -1
include/linux/usb/cdc_ncm.h
··· 80 80 #define CDC_NCM_TIMER_INTERVAL_MIN 5UL 81 81 #define CDC_NCM_TIMER_INTERVAL_MAX (U32_MAX / NSEC_PER_USEC) 82 82 83 + /* Driver flags */ 84 + #define CDC_NCM_FLAG_NDP_TO_END 0x02 /* NDP is placed at end of frame */ 85 + 83 86 #define cdc_ncm_comm_intf_is_mbim(x) ((x)->desc.bInterfaceSubClass == USB_CDC_SUBCLASS_MBIM && \ 84 87 (x)->desc.bInterfaceProtocol == USB_CDC_PROTO_NONE) 85 88 #define cdc_ncm_data_intf_is_mbim(x) ((x)->desc.bInterfaceProtocol == USB_CDC_MBIM_PROTO_NTB) ··· 106 103 107 104 spinlock_t mtx; 108 105 atomic_t stop; 106 + int drvflags; 109 107 110 108 u32 timer_interval; 111 109 u32 max_ndp_size; 110 + struct usb_cdc_ncm_ndp16 *delayed_ndp16; 112 111 113 112 u32 tx_timer_pending; 114 113 u32 tx_curr_frame_num; ··· 138 133 }; 139 134 140 135 u8 cdc_ncm_select_altsetting(struct usb_interface *intf); 141 - int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting); 136 + int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting, int drvflags); 142 137 void cdc_ncm_unbind(struct usbnet *dev, struct usb_interface *intf); 143 138 struct sk_buff *cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign); 144 139 int cdc_ncm_rx_verify_nth16(struct cdc_ncm_ctx *ctx, struct sk_buff *skb_in);
+17 -3
include/rdma/ib_verbs.h
··· 1745 1745 char node_desc[64]; 1746 1746 __be64 node_guid; 1747 1747 u32 local_dma_lkey; 1748 + u16 is_switch:1; 1748 1749 u8 node_type; 1749 1750 u8 phys_port_cnt; 1750 1751 ··· 1825 1824 u8 port_num); 1826 1825 1827 1826 /** 1827 + * rdma_cap_ib_switch - Check if the device is IB switch 1828 + * @device: Device to check 1829 + * 1830 + * Device driver is responsible for setting is_switch bit on 1831 + * in ib_device structure at init time. 1832 + * 1833 + * Return: true if the device is IB switch. 1834 + */ 1835 + static inline bool rdma_cap_ib_switch(const struct ib_device *device) 1836 + { 1837 + return device->is_switch; 1838 + } 1839 + 1840 + /** 1828 1841 * rdma_start_port - Return the first valid port number for the device 1829 1842 * specified 1830 1843 * ··· 1848 1833 */ 1849 1834 static inline u8 rdma_start_port(const struct ib_device *device) 1850 1835 { 1851 - return (device->node_type == RDMA_NODE_IB_SWITCH) ? 0 : 1; 1836 + return rdma_cap_ib_switch(device) ? 0 : 1; 1852 1837 } 1853 1838 1854 1839 /** ··· 1861 1846 */ 1862 1847 static inline u8 rdma_end_port(const struct ib_device *device) 1863 1848 { 1864 - return (device->node_type == RDMA_NODE_IB_SWITCH) ? 1865 - 0 : device->phys_port_cnt; 1849 + return rdma_cap_ib_switch(device) ? 0 : device->phys_port_cnt; 1866 1850 } 1867 1851 1868 1852 static inline bool rdma_protocol_ib(const struct ib_device *device, u8 port_num)
+1
include/scsi/scsi_transport_srp.h
··· 119 119 extern void srp_rport_del(struct srp_rport *); 120 120 extern int srp_tmo_valid(int reconnect_delay, int fast_io_fail_tmo, 121 121 int dev_loss_tmo); 122 + int srp_parse_tmo(int *tmo, const char *buf); 122 123 extern int srp_reconnect_rport(struct srp_rport *rport); 123 124 extern void srp_start_tl_fail_timers(struct srp_rport *rport); 124 125 extern void srp_remove_host(struct Scsi_Host *);
+1
include/uapi/linux/netconf.h
··· 15 15 NETCONFA_RP_FILTER, 16 16 NETCONFA_MC_FORWARDING, 17 17 NETCONFA_PROXY_NEIGH, 18 + NETCONFA_IGNORE_ROUTES_WITH_LINKDOWN, 18 19 __NETCONFA_MAX 19 20 }; 20 21 #define NETCONFA_MAX (__NETCONFA_MAX - 1)
+1 -2
kernel/auditsc.c
··· 1021 1021 * for strings that are too long, we should not have created 1022 1022 * any. 1023 1023 */ 1024 - if (unlikely((len == 0) || len > MAX_ARG_STRLEN - 1)) { 1025 - WARN_ON(1); 1024 + if (WARN_ON_ONCE(len < 0 || len > MAX_ARG_STRLEN - 1)) { 1026 1025 send_sig(SIGKILL, current, 0); 1027 1026 return -1; 1028 1027 }
+12 -1
kernel/cpu.c
··· 21 21 #include <linux/suspend.h> 22 22 #include <linux/lockdep.h> 23 23 #include <linux/tick.h> 24 + #include <linux/irq.h> 24 25 #include <trace/events/power.h> 25 26 26 27 #include "smpboot.h" ··· 393 392 smpboot_park_threads(cpu); 394 393 395 394 /* 395 + * Prevent irq alloc/free while the dying cpu reorganizes the 396 + * interrupt affinities. 397 + */ 398 + irq_lock_sparse(); 399 + 400 + /* 396 401 * So now all preempt/rcu users must observe !cpu_active(). 397 402 */ 398 - 399 403 err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu)); 400 404 if (err) { 401 405 /* CPU didn't die: tell everyone. Can't complain. */ 402 406 cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu); 407 + irq_unlock_sparse(); 403 408 goto out_release; 404 409 } 405 410 BUG_ON(cpu_online(cpu)); ··· 421 414 cpu_relax(); 422 415 smp_mb(); /* Read from cpu_dead_idle before __cpu_die(). */ 423 416 per_cpu(cpu_dead_idle, cpu) = false; 417 + 418 + /* Interrupts are moved away from the dying cpu, reenable alloc/free */ 419 + irq_unlock_sparse(); 424 420 425 421 hotplug_cpu__broadcast_tick_pull(cpu); 426 422 /* This actually kills the CPU. */ ··· 529 519 530 520 /* Arch-specific enabling code. */ 531 521 ret = __cpu_up(cpu, idle); 522 + 532 523 if (ret != 0) 533 524 goto out_notify; 534 525 BUG_ON(!cpu_online(cpu));
-8
kernel/events/core.c
··· 4358 4358 rcu_read_unlock(); 4359 4359 } 4360 4360 4361 - static void rb_free_rcu(struct rcu_head *rcu_head) 4362 - { 4363 - struct ring_buffer *rb; 4364 - 4365 - rb = container_of(rcu_head, struct ring_buffer, rcu_head); 4366 - rb_free(rb); 4367 - } 4368 - 4369 4361 struct ring_buffer *ring_buffer_get(struct perf_event *event) 4370 4362 { 4371 4363 struct ring_buffer *rb;
+10
kernel/events/internal.h
··· 11 11 struct ring_buffer { 12 12 atomic_t refcount; 13 13 struct rcu_head rcu_head; 14 + struct irq_work irq_work; 14 15 #ifdef CONFIG_PERF_USE_VMALLOC 15 16 struct work_struct work; 16 17 int page_order; /* allocation order */ ··· 56 55 }; 57 56 58 57 extern void rb_free(struct ring_buffer *rb); 58 + 59 + static inline void rb_free_rcu(struct rcu_head *rcu_head) 60 + { 61 + struct ring_buffer *rb; 62 + 63 + rb = container_of(rcu_head, struct ring_buffer, rcu_head); 64 + rb_free(rb); 65 + } 66 + 59 67 extern struct ring_buffer * 60 68 rb_alloc(int nr_pages, long watermark, int cpu, int flags); 61 69 extern void perf_event_wakeup(struct perf_event *event);
+25 -2
kernel/events/ring_buffer.c
··· 221 221 rcu_read_unlock(); 222 222 } 223 223 224 + static void rb_irq_work(struct irq_work *work); 225 + 224 226 static void 225 227 ring_buffer_init(struct ring_buffer *rb, long watermark, int flags) 226 228 { ··· 243 241 244 242 INIT_LIST_HEAD(&rb->event_list); 245 243 spin_lock_init(&rb->event_lock); 244 + init_irq_work(&rb->irq_work, rb_irq_work); 245 + } 246 + 247 + static void ring_buffer_put_async(struct ring_buffer *rb) 248 + { 249 + if (!atomic_dec_and_test(&rb->refcount)) 250 + return; 251 + 252 + rb->rcu_head.next = (void *)rb; 253 + irq_work_queue(&rb->irq_work); 246 254 } 247 255 248 256 /* ··· 331 319 rb_free_aux(rb); 332 320 333 321 err: 334 - ring_buffer_put(rb); 322 + ring_buffer_put_async(rb); 335 323 handle->event = NULL; 336 324 337 325 return NULL; ··· 382 370 383 371 local_set(&rb->aux_nest, 0); 384 372 rb_free_aux(rb); 385 - ring_buffer_put(rb); 373 + ring_buffer_put_async(rb); 386 374 } 387 375 388 376 /* ··· 569 557 void rb_free_aux(struct ring_buffer *rb) 570 558 { 571 559 if (atomic_dec_and_test(&rb->aux_refcount)) 560 + irq_work_queue(&rb->irq_work); 561 + } 562 + 563 + static void rb_irq_work(struct irq_work *work) 564 + { 565 + struct ring_buffer *rb = container_of(work, struct ring_buffer, irq_work); 566 + 567 + if (!atomic_read(&rb->aux_refcount)) 572 568 __rb_free_aux(rb); 569 + 570 + if (rb->rcu_head.next == (void *)rb) 571 + call_rcu(&rb->rcu_head, rb_free_rcu); 573 572 } 574 573 575 574 #ifndef CONFIG_PERF_USE_VMALLOC
+6 -1
kernel/fork.c
··· 287 287 max_threads = clamp_t(u64, threads, MIN_THREADS, MAX_THREADS); 288 288 } 289 289 290 + #ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT 291 + /* Initialized by the architecture: */ 292 + int arch_task_struct_size __read_mostly; 293 + #endif 294 + 290 295 void __init fork_init(void) 291 296 { 292 297 #ifndef CONFIG_ARCH_TASK_STRUCT_ALLOCATOR ··· 300 295 #endif 301 296 /* create a slab on which task_structs can be allocated */ 302 297 task_struct_cachep = 303 - kmem_cache_create("task_struct", sizeof(struct task_struct), 298 + kmem_cache_create("task_struct", arch_task_struct_size, 304 299 ARCH_MIN_TASKALIGN, SLAB_PANIC | SLAB_NOTRACK, NULL); 305 300 #endif 306 301
-4
kernel/irq/internals.h
··· 76 76 77 77 #ifdef CONFIG_SPARSE_IRQ 78 78 static inline void irq_mark_irq(unsigned int irq) { } 79 - extern void irq_lock_sparse(void); 80 - extern void irq_unlock_sparse(void); 81 79 #else 82 80 extern void irq_mark_irq(unsigned int irq); 83 - static inline void irq_lock_sparse(void) { } 84 - static inline void irq_unlock_sparse(void) { } 85 81 #endif 86 82 87 83 extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
+13 -5
kernel/irq/resend.c
··· 75 75 !desc->irq_data.chip->irq_retrigger(&desc->irq_data)) { 76 76 #ifdef CONFIG_HARDIRQS_SW_RESEND 77 77 /* 78 - * If the interrupt has a parent irq and runs 79 - * in the thread context of the parent irq, 80 - * retrigger the parent. 78 + * If the interrupt is running in the thread 79 + * context of the parent irq we need to be 80 + * careful, because we cannot trigger it 81 + * directly. 81 82 */ 82 - if (desc->parent_irq && 83 - irq_settings_is_nested_thread(desc)) 83 + if (irq_settings_is_nested_thread(desc)) { 84 + /* 85 + * If the parent_irq is valid, we 86 + * retrigger the parent, otherwise we 87 + * do nothing. 88 + */ 89 + if (!desc->parent_irq) 90 + return; 84 91 irq = desc->parent_irq; 92 + } 85 93 /* Set it pending and activate the softirq: */ 86 94 set_bit(irq, irqs_resend); 87 95 tasklet_schedule(&resend_tasklet);
+1
kernel/module.c
··· 3557 3557 mutex_lock(&module_mutex); 3558 3558 /* Unlink carefully: kallsyms could be walking list. */ 3559 3559 list_del_rcu(&mod->list); 3560 + mod_tree_remove(mod); 3560 3561 wake_up_all(&module_wq); 3561 3562 /* Wait for RCU-sched synchronizing before releasing mod->list. */ 3562 3563 synchronize_sched();
+1 -1
kernel/sched/fair.c
··· 3683 3683 cfs_rq->throttled = 1; 3684 3684 cfs_rq->throttled_clock = rq_clock(rq); 3685 3685 raw_spin_lock(&cfs_b->lock); 3686 - empty = list_empty(&cfs_rq->throttled_list); 3686 + empty = list_empty(&cfs_b->throttled_cfs_rq); 3687 3687 3688 3688 /* 3689 3689 * Add to the _head_ of the list, so that an already-started
+9 -15
kernel/time/clockevents.c
··· 120 120 /* The clockevent device is getting replaced. Shut it down. */ 121 121 122 122 case CLOCK_EVT_STATE_SHUTDOWN: 123 - return dev->set_state_shutdown(dev); 123 + if (dev->set_state_shutdown) 124 + return dev->set_state_shutdown(dev); 125 + return 0; 124 126 125 127 case CLOCK_EVT_STATE_PERIODIC: 126 128 /* Core internal bug */ 127 129 if (!(dev->features & CLOCK_EVT_FEAT_PERIODIC)) 128 130 return -ENOSYS; 129 - return dev->set_state_periodic(dev); 131 + if (dev->set_state_periodic) 132 + return dev->set_state_periodic(dev); 133 + return 0; 130 134 131 135 case CLOCK_EVT_STATE_ONESHOT: 132 136 /* Core internal bug */ 133 137 if (!(dev->features & CLOCK_EVT_FEAT_ONESHOT)) 134 138 return -ENOSYS; 135 - return dev->set_state_oneshot(dev); 139 + if (dev->set_state_oneshot) 140 + return dev->set_state_oneshot(dev); 141 + return 0; 136 142 137 143 case CLOCK_EVT_STATE_ONESHOT_STOPPED: 138 144 /* Core internal bug */ ··· 476 470 477 471 if (dev->features & CLOCK_EVT_FEAT_DUMMY) 478 472 return 0; 479 - 480 - /* New state-specific callbacks */ 481 - if (!dev->set_state_shutdown) 482 - return -EINVAL; 483 - 484 - if ((dev->features & CLOCK_EVT_FEAT_PERIODIC) && 485 - !dev->set_state_periodic) 486 - return -EINVAL; 487 - 488 - if ((dev->features & CLOCK_EVT_FEAT_ONESHOT) && 489 - !dev->set_state_oneshot) 490 - return -EINVAL; 491 473 492 474 return 0; 493 475 }
+108 -56
kernel/time/tick-broadcast.c
··· 159 159 { 160 160 struct clock_event_device *bc = tick_broadcast_device.evtdev; 161 161 unsigned long flags; 162 - int ret; 162 + int ret = 0; 163 163 164 164 raw_spin_lock_irqsave(&tick_broadcast_lock, flags); 165 165 ··· 221 221 * If we kept the cpu in the broadcast mask, 222 222 * tell the caller to leave the per cpu device 223 223 * in shutdown state. The periodic interrupt 224 - * is delivered by the broadcast device. 224 + * is delivered by the broadcast device, if 225 + * the broadcast device exists and is not 226 + * hrtimer based. 225 227 */ 226 - ret = cpumask_test_cpu(cpu, tick_broadcast_mask); 228 + if (bc && !(bc->features & CLOCK_EVT_FEAT_HRTIMER)) 229 + ret = cpumask_test_cpu(cpu, tick_broadcast_mask); 227 230 break; 228 231 default: 229 - /* Nothing to do */ 230 - ret = 0; 231 232 break; 232 233 } 233 234 } ··· 266 265 * Check, if the current cpu is in the mask 267 266 */ 268 267 if (cpumask_test_cpu(cpu, mask)) { 268 + struct clock_event_device *bc = tick_broadcast_device.evtdev; 269 + 269 270 cpumask_clear_cpu(cpu, mask); 270 - local = true; 271 + /* 272 + * We only run the local handler, if the broadcast 273 + * device is not hrtimer based. Otherwise we run into 274 + * a hrtimer recursion.
275 + * 276 + * local timer_interrupt() 277 + * local_handler() 278 + * expire_hrtimers() 279 + * bc_handler() 280 + * local_handler() 281 + * expire_hrtimers() 282 + */ 283 + local = !(bc->features & CLOCK_EVT_FEAT_HRTIMER); 271 284 } 272 285 273 286 if (!cpumask_empty(mask)) { ··· 316 301 bool bc_local; 317 302 318 303 raw_spin_lock(&tick_broadcast_lock); 304 + 305 + /* Handle spurious interrupts gracefully */ 306 + if (clockevent_state_shutdown(tick_broadcast_device.evtdev)) { 307 + raw_spin_unlock(&tick_broadcast_lock); 308 + return; 309 + } 310 + 319 311 bc_local = tick_do_periodic_broadcast(); 320 312 321 313 if (clockevent_state_oneshot(dev)) { ··· 381 359 case TICK_BROADCAST_ON: 382 360 cpumask_set_cpu(cpu, tick_broadcast_on); 383 361 if (!cpumask_test_and_set_cpu(cpu, tick_broadcast_mask)) { 384 - if (tick_broadcast_device.mode == 385 - TICKDEV_MODE_PERIODIC) 362 + /* 363 + * Only shutdown the cpu local device, if: 364 + * 365 + * - the broadcast device exists 366 + * - the broadcast device is not a hrtimer based one 367 + * - the broadcast device is in periodic mode to 368 + * avoid a hickup during switch to oneshot mode 369 + */ 370 + if (bc && !(bc->features & CLOCK_EVT_FEAT_HRTIMER) && 371 + tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC) 386 372 clockevents_shutdown(dev); 387 373 } 388 374 break; ··· 409 379 break; 410 380 } 411 381 412 - if (cpumask_empty(tick_broadcast_mask)) { 413 - if (!bc_stopped) 414 - clockevents_shutdown(bc); 415 - } else if (bc_stopped) { 416 - if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC) 417 - tick_broadcast_start_periodic(bc); 418 - else 419 - tick_broadcast_setup_oneshot(bc); 382 + if (bc) { 383 + if (cpumask_empty(tick_broadcast_mask)) { 384 + if (!bc_stopped) 385 + clockevents_shutdown(bc); 386 + } else if (bc_stopped) { 387 + if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC) 388 + tick_broadcast_start_periodic(bc); 389 + else 390 + tick_broadcast_setup_oneshot(bc); 391 + } 420 392 } 421 393
raw_spin_unlock(&tick_broadcast_lock); 422 394 } ··· 694 662 clockevents_switch_state(dev, CLOCK_EVT_STATE_SHUTDOWN); 695 663 } 696 664 697 - /** 698 - * tick_broadcast_oneshot_control - Enter/exit broadcast oneshot mode 699 - * @state: The target state (enter/exit) 700 - * 701 - * The system enters/leaves a state, where affected devices might stop 702 - * Returns 0 on success, -EBUSY if the cpu is used to broadcast wakeups. 703 - * 704 - * Called with interrupts disabled, so clockevents_lock is not 705 - * required here because the local clock event device cannot go away 706 - * under us. 707 - */ 708 - int tick_broadcast_oneshot_control(enum tick_broadcast_state state) 665 + int __tick_broadcast_oneshot_control(enum tick_broadcast_state state) 709 666 { 710 667 struct clock_event_device *bc, *dev; 711 - struct tick_device *td; 712 668 int cpu, ret = 0; 713 669 ktime_t now; 714 670 715 671 /* 716 - * Periodic mode does not care about the enter/exit of power 717 - * states 672 + * If there is no broadcast device, tell the caller not to go 673 + * into deep idle. 718 674 */ 719 - if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC) 720 - return 0; 675 + if (!tick_broadcast_device.evtdev) 676 + return -EBUSY; 721 677 722 - /* 723 - * We are called with preemtion disabled from the depth of the 724 - * idle code, so we can't be moved away. 725 - */ 726 - td = this_cpu_ptr(&tick_cpu_device); 727 - dev = td->evtdev; 728 - 729 - if (!(dev->features & CLOCK_EVT_FEAT_C3STOP)) 730 - return 0; 678 + dev = this_cpu_ptr(&tick_cpu_device)->evtdev; 731 679 732 680 raw_spin_lock(&tick_broadcast_lock); 733 681 bc = tick_broadcast_device.evtdev; 734 682 cpu = smp_processor_id(); 735 683 736 684 if (state == TICK_BROADCAST_ENTER) { 685 + /* 686 + * If the current CPU owns the hrtimer broadcast 687 + * mechanism, it cannot go deep idle and we do not add 688 + * the CPU to the broadcast mask.
We don't have to go 689 + * through the EXIT path as the local timer is not 690 + * shutdown. 691 + */ 692 + ret = broadcast_needs_cpu(bc, cpu); 693 + if (ret) 694 + goto out; 695 + 696 + /* 697 + * If the broadcast device is in periodic mode, we 698 + * return. 699 + */ 700 + if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC) { 701 + /* If it is a hrtimer based broadcast, return busy */ 702 + if (bc->features & CLOCK_EVT_FEAT_HRTIMER) 703 + ret = -EBUSY; 704 + goto out; 705 + } 706 + 737 707 if (!cpumask_test_and_set_cpu(cpu, tick_broadcast_oneshot_mask)) { 738 708 WARN_ON_ONCE(cpumask_test_cpu(cpu, tick_broadcast_pending_mask)); 709 + 710 + /* Conditionally shut down the local timer. */ 739 711 broadcast_shutdown_local(bc, dev); 712 + 740 713 /* 741 714 * We only reprogram the broadcast timer if we 742 715 * did not mark ourself in the force mask and 743 716 * if the cpu local event is earlier than the 744 717 * broadcast event. If the current CPU is in 745 718 * the force mask, then we are going to be 746 - * woken by the IPI right away. 719 + * woken by the IPI right away; we return 720 + * busy, so the CPU does not try to go deep 721 + * idle. 747 722 */ 748 - if (!cpumask_test_cpu(cpu, tick_broadcast_force_mask) && 749 - dev->next_event.tv64 < bc->next_event.tv64) 723 + if (cpumask_test_cpu(cpu, tick_broadcast_force_mask)) { 724 + ret = -EBUSY; 725 + } else if (dev->next_event.tv64 < bc->next_event.tv64) { 750 726 tick_broadcast_set_event(bc, cpu, dev->next_event); 727 + /* 728 + * In case of hrtimer broadcasts the 729 + * programming might have moved the 730 + * timer to this cpu. If yes, remove 731 + * us from the broadcast mask and 732 + * return busy.
733 + */ 734 + ret = broadcast_needs_cpu(bc, cpu); 735 + if (ret) { 736 + cpumask_clear_cpu(cpu, 737 + tick_broadcast_oneshot_mask); 738 + } 739 + } 751 740 } 752 - /* 753 - * If the current CPU owns the hrtimer broadcast 754 - * mechanism, it cannot go deep idle and we remove the 755 - * CPU from the broadcast mask. We don't have to go 756 - * through the EXIT path as the local timer is not 757 - * shutdown. 758 - */ 759 - ret = broadcast_needs_cpu(bc, cpu); 760 - if (ret) 761 - cpumask_clear_cpu(cpu, tick_broadcast_oneshot_mask); 762 741 } else { 763 742 if (cpumask_test_and_clear_cpu(cpu, tick_broadcast_oneshot_mask)) { 764 743 clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT); ··· 839 796 raw_spin_unlock(&tick_broadcast_lock); 840 797 return ret; 841 798 } 842 - EXPORT_SYMBOL_GPL(tick_broadcast_oneshot_control); 843 799 844 800 /* 845 801 * Reset the one shot broadcast for a cpu ··· 980 938 return bc ? bc->features & CLOCK_EVT_FEAT_ONESHOT : false; 981 939 } 982 940 941 + #else 942 + int __tick_broadcast_oneshot_control(enum tick_broadcast_state state) 943 + { 944 + struct clock_event_device *bc = tick_broadcast_device.evtdev; 945 + 946 + if (!bc || (bc->features & CLOCK_EVT_FEAT_HRTIMER)) 947 + return -EBUSY; 948 + 949 + return 0; 950 + } 983 951 #endif 984 952 985 953 void __init tick_broadcast_init(void)
+22
kernel/time/tick-common.c
··· 343 343 tick_install_broadcast_device(newdev); 344 344 } 345 345 346 + /** 347 + * tick_broadcast_oneshot_control - Enter/exit broadcast oneshot mode 348 + * @state: The target state (enter/exit) 349 + * 350 + * The system enters/leaves a state, where affected devices might stop 351 + * Returns 0 on success, -EBUSY if the cpu is used to broadcast wakeups. 352 + * 353 + * Called with interrupts disabled, so clockevents_lock is not 354 + * required here because the local clock event device cannot go away 355 + * under us. 356 + */ 357 + int tick_broadcast_oneshot_control(enum tick_broadcast_state state) 358 + { 359 + struct tick_device *td = this_cpu_ptr(&tick_cpu_device); 360 + 361 + if (!(td->evtdev->features & CLOCK_EVT_FEAT_C3STOP)) 362 + return 0; 363 + 364 + return __tick_broadcast_oneshot_control(state); 365 + } 366 + EXPORT_SYMBOL_GPL(tick_broadcast_oneshot_control); 367 + 346 368 #ifdef CONFIG_HOTPLUG_CPU 347 369 /* 348 370 * Transfer the do_timer job away from a dying cpu.
+10
kernel/time/tick-sched.h
··· 71 71 static inline void tick_cancel_sched_timer(int cpu) { } 72 72 #endif 73 73 74 + #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST 75 + extern int __tick_broadcast_oneshot_control(enum tick_broadcast_state state); 76 + #else 77 + static inline int 78 + __tick_broadcast_oneshot_control(enum tick_broadcast_state state) 79 + { 80 + return -EBUSY; 81 + } 82 + #endif 83 + 74 84 #endif
+1
kernel/trace/trace.h
··· 444 444 445 445 TRACE_CONTROL_BIT, 446 446 447 + TRACE_BRANCH_BIT, 447 448 /* 448 449 * Abuse of the trace_recursion. 449 450 * As we need a way to maintain state if we are tracing the function
+10 -7
kernel/trace/trace_branch.c
··· 36 36 struct trace_branch *entry; 37 37 struct ring_buffer *buffer; 38 38 unsigned long flags; 39 - int cpu, pc; 39 + int pc; 40 40 const char *p; 41 + 42 + if (current->trace_recursion & TRACE_BRANCH_BIT) 43 + return; 41 44 42 45 /* 43 46 * I would love to save just the ftrace_likely_data pointer, but ··· 52 49 if (unlikely(!tr)) 53 50 return; 54 51 55 - local_irq_save(flags); 56 - cpu = raw_smp_processor_id(); 57 - data = per_cpu_ptr(tr->trace_buffer.data, cpu); 58 - if (atomic_inc_return(&data->disabled) != 1) 52 + raw_local_irq_save(flags); 53 + current->trace_recursion |= TRACE_BRANCH_BIT; 54 + data = this_cpu_ptr(tr->trace_buffer.data); 55 + if (atomic_read(&data->disabled)) 59 56 goto out; 60 57 61 58 pc = preempt_count(); ··· 84 81 __buffer_unlock_commit(buffer, event); 85 82 86 83 out: 87 - atomic_dec(&data->disabled); 88 - local_irq_restore(flags); 84 + current->trace_recursion &= ~TRACE_BRANCH_BIT; 85 + raw_local_irq_restore(flags); 89 86 } 90 87 91 88 static inline
-4
lib/Kconfig.kasan
··· 18 18 For better error detection enable CONFIG_STACKTRACE, 19 19 and add slub_debug=U to boot cmdline. 20 20 21 - config KASAN_SHADOW_OFFSET 22 - hex 23 - default 0xdffffc0000000000 if X86_64 24 - 25 21 choice 26 22 prompt "Instrumentation type" 27 23 depends on KASAN
+4 -1
lib/decompress.c
··· 59 59 { 60 60 const struct compress_format *cf; 61 61 62 - if (len < 2) 62 + if (len < 2) { 63 + if (name) 64 + *name = NULL; 63 65 return NULL; /* Need at least this much... */ 66 + } 64 67 65 68 pr_debug("Compressed data magic: %#.2x %#.2x\n", inbuf[0], inbuf[1]); 66 69
+3
lib/dma-debug.c
··· 574 574 unsigned long flags; 575 575 phys_addr_t cln; 576 576 577 + if (dma_debug_disabled()) 578 + return; 579 + 577 580 if (!page) 578 581 return; 579 582
+4 -3
lib/hexdump.c
··· 11 11 #include <linux/ctype.h> 12 12 #include <linux/kernel.h> 13 13 #include <linux/export.h> 14 + #include <asm/unaligned.h> 14 15 15 16 const char hex_asc[] = "0123456789abcdef"; 16 17 EXPORT_SYMBOL(hex_asc); ··· 140 139 for (j = 0; j < ngroups; j++) { 141 140 ret = snprintf(linebuf + lx, linebuflen - lx, 142 141 "%s%16.16llx", j ? " " : "", 143 - (unsigned long long)*(ptr8 + j)); 142 + get_unaligned(ptr8 + j)); 144 143 if (ret >= linebuflen - lx) 145 144 goto overflow1; 146 145 lx += ret; ··· 151 150 for (j = 0; j < ngroups; j++) { 152 151 ret = snprintf(linebuf + lx, linebuflen - lx, 153 152 "%s%8.8x", j ? " " : "", 154 - *(ptr4 + j)); 153 + get_unaligned(ptr4 + j)); 155 154 if (ret >= linebuflen - lx) 156 155 goto overflow1; 157 156 lx += ret; ··· 162 161 for (j = 0; j < ngroups; j++) { 163 162 ret = snprintf(linebuf + lx, linebuflen - lx, 164 163 "%s%4.4x", j ? " " : "", 165 - *(ptr2 + j)); 164 + get_unaligned(ptr2 + j)); 166 165 if (ret >= linebuflen - lx) 167 166 goto overflow1; 168 167 lx += ret;
+3 -2
lib/kobject.c
··· 337 337 } 338 338 EXPORT_SYMBOL(kobject_init); 339 339 340 - static int kobject_add_varg(struct kobject *kobj, struct kobject *parent, 341 - const char *fmt, va_list vargs) 340 + static __printf(3, 0) int kobject_add_varg(struct kobject *kobj, 341 + struct kobject *parent, 342 + const char *fmt, va_list vargs) 342 343 { 343 344 int retval; 344 345
+2 -2
lib/rhashtable.c
··· 610 610 iter->skip = 0; 611 611 } 612 612 613 + iter->p = NULL; 614 + 613 615 /* Ensure we see any new tables. */ 614 616 smp_rmb(); 615 617 ··· 621 619 iter->skip = 0; 622 620 return ERR_PTR(-EAGAIN); 623 621 } 624 - 625 - iter->p = NULL; 626 622 627 623 return NULL; 628 624 }
+6 -5
mm/cma_debug.c
··· 39 39 40 40 mutex_lock(&cma->lock); 41 41 /* pages counter is smaller than sizeof(int) */ 42 - used = bitmap_weight(cma->bitmap, (int)cma->count); 42 + used = bitmap_weight(cma->bitmap, (int)cma_bitmap_maxno(cma)); 43 43 mutex_unlock(&cma->lock); 44 44 *val = (u64)used << cma->order_per_bit; 45 45 ··· 52 52 struct cma *cma = data; 53 53 unsigned long maxchunk = 0; 54 54 unsigned long start, end = 0; 55 + unsigned long bitmap_maxno = cma_bitmap_maxno(cma); 55 56 56 57 mutex_lock(&cma->lock); 57 58 for (;;) { 58 - start = find_next_zero_bit(cma->bitmap, cma->count, end); 59 + start = find_next_zero_bit(cma->bitmap, bitmap_maxno, end); 59 60 if (start >= cma->count) 60 61 break; 61 - end = find_next_bit(cma->bitmap, cma->count, start); 62 + end = find_next_bit(cma->bitmap, bitmap_maxno, start); 62 63 maxchunk = max(end - start, maxchunk); 63 64 } 64 65 mutex_unlock(&cma->lock); ··· 171 170 172 171 tmp = debugfs_create_dir(name, cma_debugfs_root); 173 172 174 - debugfs_create_file("alloc", S_IWUSR, cma_debugfs_root, cma, 173 + debugfs_create_file("alloc", S_IWUSR, tmp, cma, 175 174 &cma_alloc_fops); 176 175 177 - debugfs_create_file("free", S_IWUSR, cma_debugfs_root, cma, 176 + debugfs_create_file("free", S_IWUSR, tmp, cma, 178 177 &cma_free_fops); 179 178 180 179 debugfs_create_file("base_pfn", S_IRUGO, tmp,
+13 -7
mm/memory.c
··· 2670 2670 2671 2671 pte_unmap(page_table); 2672 2672 2673 + /* File mapping without ->vm_ops ? */ 2674 + if (vma->vm_flags & VM_SHARED) 2675 + return VM_FAULT_SIGBUS; 2676 + 2673 2677 /* Check if we need to add a guard page to the stack */ 2674 2678 if (check_stack_guard_page(vma, address) < 0) 2675 2679 return VM_FAULT_SIGSEGV; ··· 3103 3099 - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; 3104 3100 3105 3101 pte_unmap(page_table); 3102 + /* The VMA was not fully populated on mmap() or missing VM_DONTEXPAND */ 3103 + if (!vma->vm_ops->fault) 3104 + return VM_FAULT_SIGBUS; 3106 3105 if (!(flags & FAULT_FLAG_WRITE)) 3107 3106 return do_read_fault(mm, vma, address, pmd, pgoff, flags, 3108 3107 orig_pte); ··· 3251 3244 barrier(); 3252 3245 if (!pte_present(entry)) { 3253 3246 if (pte_none(entry)) { 3254 - if (vma->vm_ops) { 3255 - if (likely(vma->vm_ops->fault)) 3256 - return do_fault(mm, vma, address, pte, 3257 - pmd, flags, entry); 3258 - } 3259 - return do_anonymous_page(mm, vma, address, 3260 - pte, pmd, flags); 3247 + if (vma->vm_ops) 3248 + return do_fault(mm, vma, address, pte, pmd, 3249 + flags, entry); 3250 + 3251 + return do_anonymous_page(mm, vma, address, pte, pmd, 3252 + flags); 3261 3253 } 3262 3254 return do_swap_page(mm, vma, address, 3263 3255 pte, pmd, flags, entry);
+8 -6
mm/page_alloc.c
··· 246 246 /* Returns true if the struct page for the pfn is uninitialised */ 247 247 static inline bool __meminit early_page_uninitialised(unsigned long pfn) 248 248 { 249 - int nid = early_pfn_to_nid(pfn); 250 - 251 - if (pfn >= NODE_DATA(nid)->first_deferred_pfn) 249 + if (pfn >= NODE_DATA(early_pfn_to_nid(pfn))->first_deferred_pfn) 252 250 return true; 253 251 254 252 return false; ··· 1948 1950 void split_page(struct page *page, unsigned int order) 1949 1951 { 1950 1952 int i; 1953 + gfp_t gfp_mask; 1951 1954 1952 1955 VM_BUG_ON_PAGE(PageCompound(page), page); 1953 1956 VM_BUG_ON_PAGE(!page_count(page), page); ··· 1962 1963 split_page(virt_to_page(page[0].shadow), order); 1963 1964 #endif 1964 1965 1965 - set_page_owner(page, 0, 0); 1966 + gfp_mask = get_page_owner_gfp(page); 1967 + set_page_owner(page, 0, gfp_mask); 1966 1968 for (i = 1; i < (1 << order); i++) { 1967 1969 set_page_refcounted(page + i); 1968 - set_page_owner(page + i, 0, 0); 1970 + set_page_owner(page + i, 0, gfp_mask); 1969 1971 } 1970 1972 } 1971 1973 EXPORT_SYMBOL_GPL(split_page); ··· 1996 1996 zone->free_area[order].nr_free--; 1997 1997 rmv_page_order(page); 1998 1998 1999 + set_page_owner(page, order, __GFP_MOVABLE); 2000 + 1999 2001 /* Set the pageblock if the isolated page is at least a pageblock */ 2000 2002 if (order >= pageblock_order - 1) { 2001 2003 struct page *endpage = page + (1 << order) - 1; ··· 2009 2007 } 2010 2008 } 2011 2009 2012 - set_page_owner(page, order, 0); 2010 + 2013 2011 return 1UL << order; 2014 2012 } 2015 2013
+7
mm/page_owner.c
··· 76 76 __set_bit(PAGE_EXT_OWNER, &page_ext->flags); 77 77 } 78 78 79 + gfp_t __get_page_owner_gfp(struct page *page) 80 + { 81 + struct page_ext *page_ext = lookup_page_ext(page); 82 + 83 + return page_ext->gfp_mask; 84 + } 85 + 79 86 static ssize_t 80 87 print_page_owner(char __user *buf, size_t count, unsigned long pfn, 81 88 struct page *page, struct page_ext *page_ext)
+1
net/bridge/br_forward.c
··· 42 42 } else { 43 43 skb_push(skb, ETH_HLEN); 44 44 br_drop_fake_rtable(skb); 45 + skb_sender_cpu_clear(skb); 45 46 dev_queue_xmit(skb); 46 47 } 47 48
+7 -9
net/bridge/br_mdb.c
··· 323 323 struct net_bridge_port_group *p; 324 324 struct net_bridge_port_group __rcu **pp; 325 325 struct net_bridge_mdb_htable *mdb; 326 + unsigned long now = jiffies; 326 327 int err; 327 328 328 329 mdb = mlock_dereference(br->mdb, br); ··· 348 347 if (unlikely(!p)) 349 348 return -ENOMEM; 350 349 rcu_assign_pointer(*pp, p); 350 + if (state == MDB_TEMPORARY) 351 + mod_timer(&p->timer, now + br->multicast_membership_interval); 351 352 352 353 br_mdb_notify(br->dev, port, group, RTM_NEWMDB); 353 354 return 0; ··· 374 371 if (!p || p->br != br || p->state == BR_STATE_DISABLED) 375 372 return -EINVAL; 376 373 374 + memset(&ip, 0, sizeof(ip)); 377 375 ip.proto = entry->addr.proto; 378 376 if (ip.proto == htons(ETH_P_IP)) 379 377 ip.u.ip4 = entry->addr.u.ip4; ··· 421 417 if (!netif_running(br->dev) || br->multicast_disabled) 422 418 return -EINVAL; 423 419 420 + memset(&ip, 0, sizeof(ip)); 424 421 ip.proto = entry->addr.proto; 425 - if (ip.proto == htons(ETH_P_IP)) { 426 - if (timer_pending(&br->ip4_other_query.timer)) 427 - return -EBUSY; 428 - 422 + if (ip.proto == htons(ETH_P_IP)) 429 423 ip.u.ip4 = entry->addr.u.ip4; 430 424 #if IS_ENABLED(CONFIG_IPV6) 431 - } else { 432 - if (timer_pending(&br->ip6_other_query.timer)) 433 - return -EBUSY; 434 - 425 + else 435 426 ip.u.ip6 = entry->addr.u.ip6; 436 427 #endif 437 - } 438 428 439 429 spin_lock_bh(&br->multicast_lock); 440 430 mdb = mlock_dereference(br->mdb, br);
+11 -5
net/bridge/br_netfilter_hooks.c
··· 111 111 /* largest possible L2 header, see br_nf_dev_queue_xmit() */ 112 112 #define NF_BRIDGE_MAX_MAC_HEADER_LENGTH (PPPOE_SES_HLEN + ETH_HLEN) 113 113 114 - #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4) 114 + #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4) || IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) 115 115 struct brnf_frag_data { 116 116 char mac[NF_BRIDGE_MAX_MAC_HEADER_LENGTH]; 117 117 u8 encap_size; ··· 694 694 } 695 695 #endif 696 696 697 + #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4) 697 698 static int br_nf_ip_fragment(struct sock *sk, struct sk_buff *skb, 698 699 int (*output)(struct sock *, struct sk_buff *)) 699 700 { ··· 713 712 714 713 return ip_do_fragment(sk, skb, output); 715 714 } 715 + #endif 716 716 717 717 static unsigned int nf_bridge_mtu_reduction(const struct sk_buff *skb) 718 718 { ··· 744 742 struct brnf_frag_data *data; 745 743 746 744 if (br_validate_ipv4(skb)) 747 - return NF_DROP; 745 + goto drop; 748 746 749 747 IPCB(skb)->frag_max_size = nf_bridge->frag_max_size; 750 748 ··· 769 767 struct brnf_frag_data *data; 770 768 771 769 if (br_validate_ipv6(skb)) 772 - return NF_DROP; 770 + goto drop; 773 771 774 772 IP6CB(skb)->frag_max_size = nf_bridge->frag_max_size; 775 773 ··· 784 782 785 783 if (v6ops) 786 784 return v6ops->fragment(sk, skb, br_nf_push_frag_xmit); 787 - else 788 - return -EMSGSIZE; 785 + 786 + kfree_skb(skb); 787 + return -EMSGSIZE; 789 788 } 790 789 #endif 791 790 nf_bridge_info_free(skb); 792 791 return br_dev_queue_push_xmit(sk, skb); 792 + drop: 793 + kfree_skb(skb); 794 + return 0; 793 795 } 794 796 795 797 /* PF_BRIDGE/POST_ROUTING ********************************************/
+1 -1
net/bridge/br_netfilter_ipv6.c
··· 104 104 { 105 105 const struct ipv6hdr *hdr; 106 106 struct net_device *dev = skb->dev; 107 - struct inet6_dev *idev = in6_dev_get(skb->dev); 107 + struct inet6_dev *idev = __in6_dev_get(skb->dev); 108 108 u32 pkt_len; 109 109 u8 ip6h_len = sizeof(struct ipv6hdr); 110 110
+2
net/bridge/br_netlink.c
··· 457 457 if (nla_len(attr) != sizeof(struct bridge_vlan_info)) 458 458 return -EINVAL; 459 459 vinfo = nla_data(attr); 460 + if (!vinfo->vid || vinfo->vid >= VLAN_VID_MASK) 461 + return -EINVAL; 460 462 if (vinfo->flags & BRIDGE_VLAN_INFO_RANGE_BEGIN) { 461 463 if (vinfo_start) 462 464 return -EINVAL;
+7 -5
net/can/af_can.c
··· 89 89 struct s_stats can_stats; /* packet statistics */ 90 90 struct s_pstats can_pstats; /* receive list statistics */ 91 91 92 + static atomic_t skbcounter = ATOMIC_INIT(0); 93 + 92 94 /* 93 95 * af_can socket functions 94 96 */ ··· 312 310 return err; 313 311 } 314 312 315 - if (newskb) { 316 - if (!(newskb->tstamp.tv64)) 317 - __net_timestamp(newskb); 318 - 313 + if (newskb) 319 314 netif_rx_ni(newskb); 320 - } 321 315 322 316 /* update statistics */ 323 317 can_stats.tx_frames++; ··· 680 682 /* update statistics */ 681 683 can_stats.rx_frames++; 682 684 can_stats.rx_frames_delta++; 685 + 686 + /* create non-zero unique skb identifier together with *skb */ 687 + while (!(can_skb_prv(skb)->skbcnt)) 688 + can_skb_prv(skb)->skbcnt = atomic_inc_return(&skbcounter); 683 689 684 690 rcu_read_lock(); 685 691
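The hunk above reserves skbcnt == 0 as "not yet assigned" and retries `atomic_inc_return()` until the counter yields a non-zero value, so every frame keeps a unique non-zero identifier even across wrap-around. A stand-alone user-space sketch of that idiom with C11 atomics (`next_skb_id` is an illustrative name, not a kernel function):

```c
#include <assert.h>
#include <limits.h>
#include <stdatomic.h>

/* Shared counter; 0 is reserved to mean "no identifier assigned yet". */
static atomic_uint skbcounter;

/* Hand out the next identifier, skipping 0 -- the same retry idea as
 * `while (!(can_skb_prv(skb)->skbcnt)) ... atomic_inc_return()`. */
unsigned int next_skb_id(void)
{
	unsigned int id = 0;

	/* At the 2^32 wrap point the increment yields 0; loop once more
	 * so callers never observe the reserved value. */
	while (id == 0)
		id = atomic_fetch_add(&skbcounter, 1) + 1;
	return id;
}
```

The retry loop runs at most twice, and only at the wrap point, so the common path stays a single atomic increment.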
+2
net/can/bcm.c
··· 261 261 262 262 can_skb_reserve(skb); 263 263 can_skb_prv(skb)->ifindex = dev->ifindex; 264 + can_skb_prv(skb)->skbcnt = 0; 264 265 265 266 memcpy(skb_put(skb, CFSIZ), cf, CFSIZ); 266 267 ··· 1218 1217 } 1219 1218 1220 1219 can_skb_prv(skb)->ifindex = dev->ifindex; 1220 + can_skb_prv(skb)->skbcnt = 0; 1221 1221 skb->dev = dev; 1222 1222 can_skb_set_owner(skb, sk); 1223 1223 err = can_send(skb, 1); /* send with loopback */
+4 -3
net/can/raw.c
··· 75 75 */ 76 76 77 77 struct uniqframe { 78 - ktime_t tstamp; 78 + int skbcnt; 79 79 const struct sk_buff *skb; 80 80 unsigned int join_rx_count; 81 81 }; ··· 133 133 134 134 /* eliminate multiple filter matches for the same skb */ 135 135 if (this_cpu_ptr(ro->uniq)->skb == oskb && 136 - ktime_equal(this_cpu_ptr(ro->uniq)->tstamp, oskb->tstamp)) { 136 + this_cpu_ptr(ro->uniq)->skbcnt == can_skb_prv(oskb)->skbcnt) { 137 137 if (ro->join_filters) { 138 138 this_cpu_inc(ro->uniq->join_rx_count); 139 139 /* drop frame until all enabled filters matched */ ··· 144 144 } 145 145 } else { 146 146 this_cpu_ptr(ro->uniq)->skb = oskb; 147 - this_cpu_ptr(ro->uniq)->tstamp = oskb->tstamp; 147 + this_cpu_ptr(ro->uniq)->skbcnt = can_skb_prv(oskb)->skbcnt; 148 148 this_cpu_ptr(ro->uniq)->join_rx_count = 1; 149 149 /* drop first frame to check all enabled filters? */ 150 150 if (ro->join_filters && ro->count > 1) ··· 749 749 750 750 can_skb_reserve(skb); 751 751 can_skb_prv(skb)->ifindex = dev->ifindex; 752 + can_skb_prv(skb)->skbcnt = 0; 752 753 753 754 err = memcpy_from_msg(skb_put(skb, size), msg, size); 754 755 if (err < 0)
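With the timestamp replaced by skbcnt, a duplicate filter match is now detected by comparing both the buffer pointer and its counter; the pointer alone is not enough because a freed buffer can be reallocated at the same address. A minimal sketch of that per-receiver check (struct and function names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Per-receiver memory of the last frame seen (sketch of struct uniqframe). */
struct uniq {
	const void *skb; /* buffer address; may be recycled */
	int skbcnt;      /* unique non-zero id; not recycled in practice */
};

/* Return 1 if this (skb, skbcnt) pair was already delivered here,
 * otherwise remember it and return 0. */
int seen_before(struct uniq *u, const void *skb, int skbcnt)
{
	if (u->skb == skb && u->skbcnt == skbcnt)
		return 1;
	u->skb = skb;
	u->skbcnt = skbcnt;
	return 0;
}
```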
+10 -6
net/ceph/ceph_common.c
··· 9 9 #include <keys/ceph-type.h> 10 10 #include <linux/module.h> 11 11 #include <linux/mount.h> 12 + #include <linux/nsproxy.h> 12 13 #include <linux/parser.h> 13 14 #include <linux/sched.h> 14 15 #include <linux/seq_file.h> ··· 17 16 #include <linux/statfs.h> 18 17 #include <linux/string.h> 19 18 #include <linux/vmalloc.h> 20 - #include <linux/nsproxy.h> 21 - #include <net/net_namespace.h> 22 19 23 20 24 21 #include <linux/ceph/ceph_features.h> ··· 129 130 int ofs = offsetof(struct ceph_options, mon_addr); 130 131 int i; 131 132 int ret; 133 + 134 + /* 135 + * Don't bother comparing options if network namespaces don't 136 + * match. 137 + */ 138 + if (!net_eq(current->nsproxy->net_ns, read_pnet(&client->msgr.net))) 139 + return -1; 132 140 133 141 ret = memcmp(opt1, opt2, ofs); 134 142 if (ret) ··· 340 334 const char *c; 341 335 int err = -ENOMEM; 342 336 substring_t argstr[MAX_OPT_ARGS]; 343 - 344 - if (current->nsproxy->net_ns != &init_net) 345 - return ERR_PTR(-EINVAL); 346 337 347 338 opt = kzalloc(sizeof(*opt), GFP_KERNEL); 348 339 if (!opt) ··· 611 608 fail_monc: 612 609 ceph_monc_stop(&client->monc); 613 610 fail: 611 + ceph_messenger_fini(&client->msgr); 614 612 kfree(client); 615 613 return ERR_PTR(err); 616 614 } ··· 625 621 626 622 /* unmount */ 627 623 ceph_osdc_stop(&client->osdc); 628 - 629 624 ceph_monc_stop(&client->monc); 625 + ceph_messenger_fini(&client->msgr); 630 626 631 627 ceph_debugfs_client_cleanup(client); 632 628
+16 -8
net/ceph/messenger.c
··· 6 6 #include <linux/inet.h> 7 7 #include <linux/kthread.h> 8 8 #include <linux/net.h> 9 + #include <linux/nsproxy.h> 9 10 #include <linux/slab.h> 10 11 #include <linux/socket.h> 11 12 #include <linux/string.h> ··· 480 479 int ret; 481 480 482 481 BUG_ON(con->sock); 483 - ret = sock_create_kern(&init_net, con->peer_addr.in_addr.ss_family, 482 + ret = sock_create_kern(read_pnet(&con->msgr->net), paddr->ss_family, 484 483 SOCK_STREAM, IPPROTO_TCP, &sock); 485 484 if (ret) 486 485 return ret; ··· 1732 1731 1733 1732 static bool addr_is_blank(struct sockaddr_storage *ss) 1734 1733 { 1734 + struct in_addr *addr = &((struct sockaddr_in *)ss)->sin_addr; 1735 + struct in6_addr *addr6 = &((struct sockaddr_in6 *)ss)->sin6_addr; 1736 + 1735 1737 switch (ss->ss_family) { 1736 1738 case AF_INET: 1737 - return ((struct sockaddr_in *)ss)->sin_addr.s_addr == 0; 1739 + return addr->s_addr == htonl(INADDR_ANY); 1738 1740 case AF_INET6: 1739 - return 1740 - ((struct sockaddr_in6 *)ss)->sin6_addr.s6_addr32[0] == 0 && 1741 - ((struct sockaddr_in6 *)ss)->sin6_addr.s6_addr32[1] == 0 && 1742 - ((struct sockaddr_in6 *)ss)->sin6_addr.s6_addr32[2] == 0 && 1743 - ((struct sockaddr_in6 *)ss)->sin6_addr.s6_addr32[3] == 0; 1741 + return ipv6_addr_any(addr6); 1742 + default: 1743 + return true; 1744 1744 } 1745 - return false; 1746 1745 } 1747 1746 1748 1747 static int addr_port(struct sockaddr_storage *ss) ··· 2945 2944 msgr->tcp_nodelay = tcp_nodelay; 2946 2945 2947 2946 atomic_set(&msgr->stopping, 0); 2947 + write_pnet(&msgr->net, get_net(current->nsproxy->net_ns)); 2948 2948 2949 2949 dout("%s %p\n", __func__, msgr); 2950 2950 } 2951 2951 EXPORT_SYMBOL(ceph_messenger_init); 2952 + 2953 + void ceph_messenger_fini(struct ceph_messenger *msgr) 2954 + { 2955 + put_net(read_pnet(&msgr->net)); 2956 + } 2957 + EXPORT_SYMBOL(ceph_messenger_fini); 2952 2958 2953 2959 static void clear_standby(struct ceph_connection *con) 2954 2960 {
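The rewritten `addr_is_blank()` replaces the open-coded word comparisons with `ipv6_addr_any()` and an explicit `htonl(INADDR_ANY)`, and now treats unknown address families as blank. A user-space equivalent built on the standard socket API (kernel helpers swapped for their POSIX counterparts):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Is this a wildcard ("blank") address? Unknown families count as
 * blank, mirroring the new `default: return true;` arm. */
bool addr_is_blank(const struct sockaddr_storage *ss)
{
	switch (ss->ss_family) {
	case AF_INET: {
		const struct sockaddr_in *in = (const struct sockaddr_in *)ss;

		return in->sin_addr.s_addr == htonl(INADDR_ANY);
	}
	case AF_INET6: {
		const struct sockaddr_in6 *in6 = (const struct sockaddr_in6 *)ss;

		return IN6_IS_ADDR_UNSPECIFIED(&in6->sin6_addr);
	}
	default:
		return true;
	}
}
```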
+22 -23
net/core/dev.c
··· 677 677 if (dev->netdev_ops && dev->netdev_ops->ndo_get_iflink) 678 678 return dev->netdev_ops->ndo_get_iflink(dev); 679 679 680 - /* If dev->rtnl_link_ops is set, it's a virtual interface. */ 681 - if (dev->rtnl_link_ops) 682 - return 0; 683 - 684 680 return dev->ifindex; 685 681 } 686 682 EXPORT_SYMBOL(dev_get_iflink); ··· 3448 3452 local_irq_save(flags); 3449 3453 3450 3454 rps_lock(sd); 3455 + if (!netif_running(skb->dev)) 3456 + goto drop; 3451 3457 qlen = skb_queue_len(&sd->input_pkt_queue); 3452 3458 if (qlen <= netdev_max_backlog && !skb_flow_limit(skb, qlen)) { 3453 3459 if (qlen) { ··· 3471 3473 goto enqueue; 3472 3474 } 3473 3475 3476 + drop: 3474 3477 sd->dropped++; 3475 3478 rps_unlock(sd); 3476 3479 ··· 3774 3775 3775 3776 pt_prev = NULL; 3776 3777 3777 - rcu_read_lock(); 3778 - 3779 3778 another_round: 3780 3779 skb->skb_iif = skb->dev->ifindex; 3781 3780 ··· 3783 3786 skb->protocol == cpu_to_be16(ETH_P_8021AD)) { 3784 3787 skb = skb_vlan_untag(skb); 3785 3788 if (unlikely(!skb)) 3786 - goto unlock; 3789 + goto out; 3787 3790 } 3788 3791 3789 3792 #ifdef CONFIG_NET_CLS_ACT ··· 3813 3816 if (static_key_false(&ingress_needed)) { 3814 3817 skb = handle_ing(skb, &pt_prev, &ret, orig_dev); 3815 3818 if (!skb) 3816 - goto unlock; 3819 + goto out; 3817 3820 3818 3821 if (nf_ingress(skb, &pt_prev, &ret, orig_dev) < 0) 3819 - goto unlock; 3822 + goto out; 3820 3823 } 3821 3824 #endif 3822 3825 #ifdef CONFIG_NET_CLS_ACT ··· 3834 3837 if (vlan_do_receive(&skb)) 3835 3838 goto another_round; 3836 3839 else if (unlikely(!skb)) 3837 - goto unlock; 3840 + goto out; 3838 3841 } 3839 3842 3840 3843 rx_handler = rcu_dereference(skb->dev->rx_handler); ··· 3846 3849 switch (rx_handler(&skb)) { 3847 3850 case RX_HANDLER_CONSUMED: 3848 3851 ret = NET_RX_SUCCESS; 3849 - goto unlock; 3852 + goto out; 3850 3853 case RX_HANDLER_ANOTHER: 3851 3854 goto another_round; 3852 3855 case RX_HANDLER_EXACT: ··· 3900 3903 ret = NET_RX_DROP; 3901 3904 } 3902 3905 3903 - unlock: 3904 - rcu_read_unlock(); 3906 + out: 3905 3907 return ret; 3906 3908 } 3907 3909 ··· 3931 3935 3932 3936 static int netif_receive_skb_internal(struct sk_buff *skb) 3933 3937 { 3938 + int ret; 3939 + 3934 3940 net_timestamp_check(netdev_tstamp_prequeue, skb); 3935 3941 3936 3942 if (skb_defer_rx_timestamp(skb)) 3937 3943 return NET_RX_SUCCESS; 3938 3944 3945 + rcu_read_lock(); 3946 + 3939 3947 #ifdef CONFIG_RPS 3940 3948 if (static_key_false(&rps_needed)) { 3941 3949 struct rps_dev_flow voidflow, *rflow = &voidflow; 3942 - int cpu, ret; 3943 - 3944 - rcu_read_lock(); 3945 - 3946 - cpu = get_rps_cpu(skb->dev, skb, &rflow); 3950 + int cpu = get_rps_cpu(skb->dev, skb, &rflow); 3947 3951 3948 3952 if (cpu >= 0) { 3949 3953 ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail); 3950 3954 rcu_read_unlock(); 3951 3955 return ret; 3952 3956 } 3953 - rcu_read_unlock(); 3954 3957 } 3955 3958 #endif 3956 - return __netif_receive_skb(skb); 3959 + ret = __netif_receive_skb(skb); 3960 + rcu_read_unlock(); 3961 + return ret; 3957 3962 } 3958 3963 ··· 4499 4502 struct sk_buff *skb; 4500 4503 4501 4504 while ((skb = __skb_dequeue(&sd->process_queue))) { 4505 + rcu_read_lock(); 4502 4506 local_irq_enable(); 4503 4507 __netif_receive_skb(skb); 4508 + rcu_read_unlock(); 4504 4509 local_irq_disable(); 4505 4510 input_queue_head_incr(sd); 4506 4511 if (++work >= quota) { ··· 6138 6139 unlist_netdevice(dev); 6139 6140 6140 6141 dev->reg_state = NETREG_UNREGISTERING; 6142 + on_each_cpu(flush_backlog, dev, 1); 6141 6143 } 6142 6144 6143 6145 synchronize_net(); ··· 6409 6409 struct netdev_queue *tx; 6410 6410 size_t sz = count * sizeof(*tx); 6411 6411 6412 - BUG_ON(count < 1 || count > 0xffff); 6412 + if (count < 1 || count > 0xffff) 6413 + return -EINVAL; 6413 6414 6414 6415 tx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT); 6415 6416 if (!tx) { ··· 6773 6772 } 6774 6773 6775 6774 dev->reg_state = NETREG_UNREGISTERED; 6776 - 6777 - on_each_cpu(flush_backlog, dev, 1); 6778 6775 6779 6776 netdev_wait_allrefs(dev); 6780 6777
+7 -6
net/core/gen_estimator.c
··· 66 66 67 67 NOTES. 68 68 69 - * avbps is scaled by 2^5, avpps is scaled by 2^10. 69 + * avbps and avpps are scaled by 2^5. 70 70 * both values are reported as 32 bit unsigned values. bps can 71 71 overflow for fast links : max speed being 34360Mbit/sec 72 72 * Minimal interval is HZ/4=250msec (it is the greatest common divisor ··· 85 85 struct gnet_stats_rate_est64 *rate_est; 86 86 spinlock_t *stats_lock; 87 87 int ewma_log; 88 + u32 last_packets; 89 + unsigned long avpps; 88 90 u64 last_bytes; 89 91 u64 avbps; 90 - u32 last_packets; 91 - u32 avpps; 92 92 struct rcu_head e_rcu; 93 93 struct rb_node node; 94 94 struct gnet_stats_basic_cpu __percpu *cpu_bstats; ··· 118 118 rcu_read_lock(); 119 119 list_for_each_entry_rcu(e, &elist[idx].list, list) { 120 120 struct gnet_stats_basic_packed b = {0}; 121 + unsigned long rate; 121 122 u64 brate; 122 - u32 rate; 123 123 124 124 spin_lock(e->stats_lock); 125 125 read_lock(&est_lock); ··· 133 133 e->avbps += (brate >> e->ewma_log) - (e->avbps >> e->ewma_log); 134 134 e->rate_est->bps = (e->avbps+0xF)>>5; 135 135 136 - rate = (b.packets - e->last_packets)<<(12 - idx); 136 + rate = b.packets - e->last_packets; 137 + rate <<= (7 - idx); 137 138 e->last_packets = b.packets; 138 139 e->avpps += (rate >> e->ewma_log) - (e->avpps >> e->ewma_log); 139 - e->rate_est->pps = (e->avpps+0x1FF)>>10; 140 + e->rate_est->pps = (e->avpps + 0xF) >> 5; 140 141 skip: 141 142 read_unlock(&est_lock); 142 143 spin_unlock(e->stats_lock);
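The estimator keeps its running averages in fixed point, now both scaled by 2^5, and updates them with a power-of-two EWMA: `avg += (sample >> ewma_log) - (avg >> ewma_log)`, then `(avg + 0xF) >> 5` recovers the reported rate with rounding. A plain C sketch of that arithmetic (function names are illustrative, values below arbitrary):

```c
#include <assert.h>

/* One EWMA step in scaled units: avg moves toward the sample by a
 * factor of 1/2^ewma_log, using only shifts and adds. */
unsigned long ewma_step(unsigned long avg, unsigned long scaled_sample,
			int ewma_log)
{
	avg += (scaled_sample >> ewma_log) - (avg >> ewma_log);
	return avg;
}

/* Undo the 2^5 scaling with rounding, as e->rate_est->pps does. */
unsigned long reported_rate(unsigned long avg)
{
	return (avg + 0xF) >> 5;
}
```

Keeping avpps in an `unsigned long` at 2^5 scaling (instead of a `u32` at 2^10) is what lets high packet rates fit without overflow.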
+2 -7
net/core/pktgen.c
··· 3571 3571 pr_debug("%s removing thread\n", t->tsk->comm); 3572 3572 pktgen_rem_thread(t); 3573 3573 3574 - /* Wait for kthread_stop */ 3575 - while (!kthread_should_stop()) { 3576 - set_current_state(TASK_INTERRUPTIBLE); 3577 - schedule(); 3578 - } 3579 - __set_current_state(TASK_RUNNING); 3580 - 3581 3574 return 0; 3582 3575 } 3583 3576 ··· 3762 3769 } 3763 3770 3764 3771 t->net = pn; 3772 + get_task_struct(p); 3765 3773 wake_up_process(p); 3766 3774 wait_for_completion(&t->start_done); 3767 3775 ··· 3885 3891 t = list_entry(q, struct pktgen_thread, th_list); 3886 3892 list_del(&t->th_list); 3887 3893 kthread_stop(t->tsk); 3894 + put_task_struct(t->tsk); 3888 3895 kfree(t); 3889 3896 } 3890 3897
+96 -91
net/core/rtnetlink.c
··· 1328 1328 [IFLA_INFO_SLAVE_DATA] = { .type = NLA_NESTED }, 1329 1329 }; 1330 1330 1331 - static const struct nla_policy ifla_vfinfo_policy[IFLA_VF_INFO_MAX+1] = { 1332 - [IFLA_VF_INFO] = { .type = NLA_NESTED }, 1333 - }; 1334 - 1335 1331 static const struct nla_policy ifla_vf_policy[IFLA_VF_MAX+1] = { 1336 1332 [IFLA_VF_MAC] = { .len = sizeof(struct ifla_vf_mac) }, 1337 1333 [IFLA_VF_VLAN] = { .len = sizeof(struct ifla_vf_vlan) }, ··· 1484 1488 return 0; 1485 1489 } 1486 1490 1487 - static int do_setvfinfo(struct net_device *dev, struct nlattr *attr) 1491 + static int do_setvfinfo(struct net_device *dev, struct nlattr **tb) 1488 1492 { 1489 - int rem, err = -EINVAL; 1490 - struct nlattr *vf; 1491 1493 const struct net_device_ops *ops = dev->netdev_ops; 1494 + int err = -EINVAL; 1492 1495 1493 - nla_for_each_nested(vf, attr, rem) { 1494 - switch (nla_type(vf)) { 1495 - case IFLA_VF_MAC: { 1496 - struct ifla_vf_mac *ivm; 1497 - ivm = nla_data(vf); 1498 - err = -EOPNOTSUPP; 1499 - if (ops->ndo_set_vf_mac) 1500 - err = ops->ndo_set_vf_mac(dev, ivm->vf, 1501 - ivm->mac); 1502 - break; 1503 - } 1504 - case IFLA_VF_VLAN: { 1505 - struct ifla_vf_vlan *ivv; 1506 - ivv = nla_data(vf); 1507 - err = -EOPNOTSUPP; 1508 - if (ops->ndo_set_vf_vlan) 1509 - err = ops->ndo_set_vf_vlan(dev, ivv->vf, 1510 - ivv->vlan, 1511 - ivv->qos); 1512 - break; 1513 - } 1514 - case IFLA_VF_TX_RATE: { 1515 - struct ifla_vf_tx_rate *ivt; 1516 - struct ifla_vf_info ivf; 1517 - ivt = nla_data(vf); 1518 - err = -EOPNOTSUPP; 1519 - if (ops->ndo_get_vf_config) 1520 - err = ops->ndo_get_vf_config(dev, ivt->vf, 1521 - &ivf); 1522 - if (err) 1523 - break; 1524 - err = -EOPNOTSUPP; 1525 - if (ops->ndo_set_vf_rate) 1526 - err = ops->ndo_set_vf_rate(dev, ivt->vf, 1527 - ivf.min_tx_rate, 1528 - ivt->rate); 1529 - break; 1530 - } 1531 - case IFLA_VF_RATE: { 1532 - struct ifla_vf_rate *ivt; 1533 - ivt = nla_data(vf); 1534 - err = -EOPNOTSUPP; 1535 - if (ops->ndo_set_vf_rate) 1536 - err = ops->ndo_set_vf_rate(dev, ivt->vf, 1537 - ivt->min_tx_rate, 1538 - ivt->max_tx_rate); 1539 - break; 1540 - } 1541 - case IFLA_VF_SPOOFCHK: { 1542 - struct ifla_vf_spoofchk *ivs; 1543 - ivs = nla_data(vf); 1544 - err = -EOPNOTSUPP; 1545 - if (ops->ndo_set_vf_spoofchk) 1546 - err = ops->ndo_set_vf_spoofchk(dev, ivs->vf, 1547 - ivs->setting); 1548 - break; 1549 - } 1550 - case IFLA_VF_LINK_STATE: { 1551 - struct ifla_vf_link_state *ivl; 1552 - ivl = nla_data(vf); 1553 - err = -EOPNOTSUPP; 1554 - if (ops->ndo_set_vf_link_state) 1555 - err = ops->ndo_set_vf_link_state(dev, ivl->vf, 1556 - ivl->link_state); 1557 - break; 1558 - } 1559 - case IFLA_VF_RSS_QUERY_EN: { 1560 - struct ifla_vf_rss_query_en *ivrssq_en; 1496 + if (tb[IFLA_VF_MAC]) { 1497 + struct ifla_vf_mac *ivm = nla_data(tb[IFLA_VF_MAC]); 1561 1498 1562 - ivrssq_en = nla_data(vf); 1563 - err = -EOPNOTSUPP; 1564 - if (ops->ndo_set_vf_rss_query_en) 1565 - err = ops->ndo_set_vf_rss_query_en(dev, 1566 - ivrssq_en->vf, 1567 - ivrssq_en->setting); 1568 - break; 1569 - } 1570 - default: 1571 - err = -EINVAL; 1572 - break; 1573 - } 1574 - if (err) 1575 - break; 1499 + err = -EOPNOTSUPP; 1500 + if (ops->ndo_set_vf_mac) 1501 + err = ops->ndo_set_vf_mac(dev, ivm->vf, 1502 + ivm->mac); 1503 + if (err < 0) 1504 + return err; 1576 1505 } 1506 + 1507 + if (tb[IFLA_VF_VLAN]) { 1508 + struct ifla_vf_vlan *ivv = nla_data(tb[IFLA_VF_VLAN]); 1509 + 1510 + err = -EOPNOTSUPP; 1511 + if (ops->ndo_set_vf_vlan) 1512 + err = ops->ndo_set_vf_vlan(dev, ivv->vf, ivv->vlan, 1513 + ivv->qos); 1514 + if (err < 0) 1515 + return err; 1516 + } 1517 + 1518 + if (tb[IFLA_VF_TX_RATE]) { 1519 + struct ifla_vf_tx_rate *ivt = nla_data(tb[IFLA_VF_TX_RATE]); 1520 + struct ifla_vf_info ivf; 1521 + 1522 + err = -EOPNOTSUPP; 1523 + if (ops->ndo_get_vf_config) 1524 + err = ops->ndo_get_vf_config(dev, ivt->vf, &ivf); 1525 + if (err < 0) 1526 + return err; 1527 + 1528 + err = -EOPNOTSUPP; 1529 + if (ops->ndo_set_vf_rate) 1530 + err = ops->ndo_set_vf_rate(dev, ivt->vf, 1531 + ivf.min_tx_rate, 1532 + ivt->rate); 1533 + if (err < 0) 1534 + return err; 1535 + } 1536 + 1537 + if (tb[IFLA_VF_RATE]) { 1538 + struct ifla_vf_rate *ivt = nla_data(tb[IFLA_VF_RATE]); 1539 + 1540 + err = -EOPNOTSUPP; 1541 + if (ops->ndo_set_vf_rate) 1542 + err = ops->ndo_set_vf_rate(dev, ivt->vf, 1543 + ivt->min_tx_rate, 1544 + ivt->max_tx_rate); 1545 + if (err < 0) 1546 + return err; 1547 + } 1548 + 1549 + if (tb[IFLA_VF_SPOOFCHK]) { 1550 + struct ifla_vf_spoofchk *ivs = nla_data(tb[IFLA_VF_SPOOFCHK]); 1551 + 1552 + err = -EOPNOTSUPP; 1553 + if (ops->ndo_set_vf_spoofchk) 1554 + err = ops->ndo_set_vf_spoofchk(dev, ivs->vf, 1555 + ivs->setting); 1556 + if (err < 0) 1557 + return err; 1558 + } 1559 + 1560 + if (tb[IFLA_VF_LINK_STATE]) { 1561 + struct ifla_vf_link_state *ivl = nla_data(tb[IFLA_VF_LINK_STATE]); 1562 + 1563 + err = -EOPNOTSUPP; 1564 + if (ops->ndo_set_vf_link_state) 1565 + err = ops->ndo_set_vf_link_state(dev, ivl->vf, 1566 + ivl->link_state); 1567 + if (err < 0) 1568 + return err; 1569 + } 1570 + 1571 + if (tb[IFLA_VF_RSS_QUERY_EN]) { 1572 + struct ifla_vf_rss_query_en *ivrssq_en; 1573 + 1574 + err = -EOPNOTSUPP; 1575 + ivrssq_en = nla_data(tb[IFLA_VF_RSS_QUERY_EN]); 1576 + if (ops->ndo_set_vf_rss_query_en) 1577 + err = ops->ndo_set_vf_rss_query_en(dev, ivrssq_en->vf, 1578 + ivrssq_en->setting); 1579 + if (err < 0) 1580 + return err; 1581 + } 1582 + 1577 1583 return err; 1578 1584 } 1579 1585 ··· 1771 1773 } 1772 1774 1773 1775 if (tb[IFLA_VFINFO_LIST]) { 1776 + struct nlattr *vfinfo[IFLA_VF_MAX + 1]; 1774 1777 struct nlattr *attr; 1775 1778 int rem; 1779 + 1776 1780 nla_for_each_nested(attr, tb[IFLA_VFINFO_LIST], rem) { 1777 - if (nla_type(attr) != IFLA_VF_INFO) { 1781 + if (nla_type(attr) != IFLA_VF_INFO || 1782 + nla_len(attr) < NLA_HDRLEN) { 1778 1783 err = -EINVAL; 1779 1784 goto errout; 1780 1785 } 1781 - err = do_setvfinfo(dev, attr); 1786 + err = nla_parse_nested(vfinfo, IFLA_VF_MAX, attr, 1787 + ifla_vf_policy); 1788 + if (err < 0) 1789 + goto errout; 1790 + err = do_setvfinfo(dev, vfinfo); 1782 1791 if (err < 0) 1783 1792 goto errout; 1784 1793 status |= DO_SETLINK_NOTIFY;
+3 -3
net/dsa/dsa.c
··· 630 630 continue; 631 631 632 632 cd->sw_addr = be32_to_cpup(sw_addr); 633 - if (cd->sw_addr > PHY_MAX_ADDR) 633 + if (cd->sw_addr >= PHY_MAX_ADDR) 634 634 continue; 635 635 636 636 if (!of_property_read_u32(child, "eeprom-length", &eeprom_len)) ··· 642 642 continue; 643 643 644 644 port_index = be32_to_cpup(port_reg); 645 + if (port_index >= DSA_MAX_PORTS) 646 + break; 645 647 646 648 port_name = of_get_property(port, "label", NULL); 647 649 if (!port_name) ··· 668 666 goto out_free_chip; 669 667 } 670 668 671 - if (port_index == DSA_MAX_PORTS) 672 - break; 673 669 } 674 670 } 675 671
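Both fixes in this blob are the same off-by-one: a valid index into an array of size N runs 0..N-1, so the guard must reject N itself with `>=` (and do so before the index is used, rather than in a post-hoc `== N` check). A trivial illustration (PHY_MAX_ADDR is 32 in the kernel):

```c
#include <assert.h>
#include <stdbool.h>

#define PHY_MAX_ADDR 32 /* arrays are sized [PHY_MAX_ADDR] */

/* PHY_MAX_ADDR itself is already out of range -- hence `>=`,
 * the comparison the hunk corrects from `>`. */
bool phy_addr_valid(unsigned int addr)
{
	return addr < PHY_MAX_ADDR;
}
```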
+13
net/ipv4/devinet.c
··· 1740 1740 size += nla_total_size(4); 1741 1741 if (type == -1 || type == NETCONFA_PROXY_NEIGH) 1742 1742 size += nla_total_size(4); 1743 + if (type == -1 || type == NETCONFA_IGNORE_ROUTES_WITH_LINKDOWN) 1744 + size += nla_total_size(4); 1743 1745 1744 1746 return size; 1745 1747 } ··· 1782 1780 nla_put_s32(skb, NETCONFA_PROXY_NEIGH, 1783 1781 IPV4_DEVCONF(*devconf, PROXY_ARP)) < 0) 1784 1782 goto nla_put_failure; 1783 + if ((type == -1 || type == NETCONFA_IGNORE_ROUTES_WITH_LINKDOWN) && 1784 + nla_put_s32(skb, NETCONFA_IGNORE_ROUTES_WITH_LINKDOWN, 1785 + IPV4_DEVCONF(*devconf, IGNORE_ROUTES_WITH_LINKDOWN)) < 0) 1786 + goto nla_put_failure; 1785 1787 1786 1788 nlmsg_end(skb, nlh); 1787 1789 return 0; ··· 1825 1819 [NETCONFA_FORWARDING] = { .len = sizeof(int) }, 1826 1820 [NETCONFA_RP_FILTER] = { .len = sizeof(int) }, 1827 1821 [NETCONFA_PROXY_NEIGH] = { .len = sizeof(int) }, 1822 + [NETCONFA_IGNORE_ROUTES_WITH_LINKDOWN] = { .len = sizeof(int) }, 1828 1823 }; 1829 1824 1830 1825 static int inet_netconf_get_devconf(struct sk_buff *in_skb, ··· 2053 2046 new_value != old_value) { 2054 2047 ifindex = devinet_conf_ifindex(net, cnf); 2055 2048 inet_netconf_notify_devconf(net, NETCONFA_PROXY_NEIGH, 2049 + ifindex, cnf); 2050 + } 2051 + if (i == IPV4_DEVCONF_IGNORE_ROUTES_WITH_LINKDOWN - 1 && 2052 + new_value != old_value) { 2053 + ifindex = devinet_conf_ifindex(net, cnf); 2054 + inet_netconf_notify_devconf(net, NETCONFA_IGNORE_ROUTES_WITH_LINKDOWN, 2056 2055 ifindex, cnf); 2057 2056 } 2058 2057 }
+2 -2
net/ipv4/inet_diag.c
··· 152 152 inet6_sk(sk)->tclass) < 0) 153 153 goto errout; 154 154 155 - if (ipv6_only_sock(sk) && 156 - nla_put_u8(skb, INET_DIAG_SKV6ONLY, 1)) 155 + if (((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) && 156 + nla_put_u8(skb, INET_DIAG_SKV6ONLY, ipv6_only_sock(sk))) 157 157 goto errout; 158 158 } 159 159 #endif
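The fix gates the attribute on socket state with the usual shift-into-a-bitmask trick: `(1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)` turns a multi-way state comparison into a single AND. A self-contained sketch (the enum values below match the kernel's TCP state numbering; `state_in` is an illustrative name):

```c
#include <assert.h>

/* Socket-state constants, values as in the kernel's enum. */
enum { TCP_ESTABLISHED = 1, TCP_CLOSE = 7, TCP_LISTEN = 10 };
#define TCPF_CLOSE  (1 << TCP_CLOSE)
#define TCPF_LISTEN (1 << TCP_LISTEN)

/* True when `state` is one of the states encoded in `mask`. */
int state_in(int state, int mask)
{
	return ((1 << state) & mask) != 0;
}
```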
+5 -3
net/ipv4/ip_tunnel.c
··· 586 586 EXPORT_SYMBOL(ip_tunnel_encap); 587 587 588 588 static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb, 589 - struct rtable *rt, __be16 df) 589 + struct rtable *rt, __be16 df, 590 + const struct iphdr *inner_iph) 590 591 { 591 592 struct ip_tunnel *tunnel = netdev_priv(dev); 592 593 int pkt_size = skb->len - tunnel->hlen - dev->hard_header_len; ··· 604 603 605 604 if (skb->protocol == htons(ETH_P_IP)) { 606 605 if (!skb_is_gso(skb) && 607 - (df & htons(IP_DF)) && mtu < pkt_size) { 606 + (inner_iph->frag_off & htons(IP_DF)) && 607 + mtu < pkt_size) { 608 608 memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 609 609 icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu)); 610 610 return -E2BIG; ··· 739 737 goto tx_error; 740 738 } 741 739 742 - if (tnl_update_pmtu(dev, skb, rt, tnl_params->frag_off)) { 740 + if (tnl_update_pmtu(dev, skb, rt, tnl_params->frag_off, inner_iph)) { 743 741 ip_rt_put(rt); 744 742 goto tx_error; 745 743 }
+16 -9
net/ipv4/netfilter/arp_tables.c
··· 254 254 static const char nulldevname[IFNAMSIZ] __attribute__((aligned(sizeof(long)))); 255 255 unsigned int verdict = NF_DROP; 256 256 const struct arphdr *arp; 257 - struct arpt_entry *e, *back; 257 + struct arpt_entry *e, **jumpstack; 258 258 const char *indev, *outdev; 259 259 const void *table_base; 260 + unsigned int cpu, stackidx = 0; 260 261 const struct xt_table_info *private; 261 262 struct xt_action_param acpar; 262 263 unsigned int addend; ··· 271 270 local_bh_disable(); 272 271 addend = xt_write_recseq_begin(); 273 272 private = table->private; 273 + cpu = smp_processor_id(); 274 274 /* 275 275 * Ensure we load private-> members after we've fetched the base 276 276 * pointer. 277 277 */ 278 278 smp_read_barrier_depends(); 279 279 table_base = private->entries; 280 + jumpstack = (struct arpt_entry **)private->jumpstack[cpu]; 280 281 281 282 e = get_entry(table_base, private->hook_entry[hook]); 282 - back = get_entry(table_base, private->underflow[hook]); 283 283 284 284 acpar.in = state->in; 285 285 acpar.out = state->out; ··· 314 312 verdict = (unsigned int)(-v) - 1; 315 313 break; 316 314 } 317 - e = back; 318 - back = get_entry(table_base, back->comefrom); 315 + if (stackidx == 0) { 316 + e = get_entry(table_base, 317 + private->underflow[hook]); 318 + } else { 319 + e = jumpstack[--stackidx]; 320 + e = arpt_next_entry(e); 321 + } 319 322 continue; 320 323 } 321 324 if (table_base + v 322 325 != arpt_next_entry(e)) { 323 - /* Save old back ptr in next entry */ 324 - struct arpt_entry *next = arpt_next_entry(e); 325 - next->comefrom = (void *)back - table_base; 326 326 327 - /* set back pointer to next entry */ 328 - back = next; 327 + if (stackidx >= private->stacksize) { 328 + verdict = NF_DROP; 329 + break; 330 + } 331 + jumpstack[stackidx++] = e; 329 332 } 330 333 331 334 e = get_entry(table_base, v);
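The traversal above drops the in-place back-pointer chain in favor of an explicit jump stack: entering a user chain pushes the current rule, RETURN pops it (falling back to the hook's underflow entry when the stack is empty), and stack overflow becomes an NF_DROP verdict. A minimal sketch of that control structure (fixed-size stack, illustrative types and names):

```c
#include <assert.h>

#define STACK_SIZE 4

struct jumpstack {
	int slots[STACK_SIZE];
	unsigned int top;
};

/* Push the return point before jumping into a chain; 0 on success,
 * -1 on overflow (the real code turns overflow into NF_DROP). */
int js_push(struct jumpstack *s, int entry)
{
	if (s->top >= STACK_SIZE)
		return -1;
	s->slots[s->top++] = entry;
	return 0;
}

/* Pop the saved return point on RETURN; `underflow` plays the role
 * of the per-hook underflow entry used when the stack is empty. */
int js_pop(struct jumpstack *s, int underflow)
{
	return s->top ? s->slots[--s->top] : underflow;
}
```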
+3 -3
net/ipv6/ip6_input.c
··· 331 331 if (offset < 0) 332 332 goto out; 333 333 334 - if (!ipv6_is_mld(skb, nexthdr, offset)) 335 - goto out; 334 + if (ipv6_is_mld(skb, nexthdr, offset)) 335 + deliver = true; 336 336 337 - deliver = true; 337 + goto out; 338 338 } 339 339 /* unknown RA - process it normally */ 340 340 }
+1 -4
net/ipv6/route.c
··· 369 369 struct inet6_dev *idev; 370 370 371 371 dst_destroy_metrics_generic(dst); 372 - 373 - if (rt->rt6i_pcpu) 374 - free_percpu(rt->rt6i_pcpu); 375 - 372 + free_percpu(rt->rt6i_pcpu); 376 373 rt6_uncached_list_del(rt); 377 374 378 375 idev = rt->rt6i_idev;
+1 -1
net/netfilter/nf_queue.c
··· 213 213 214 214 if (verdict == NF_ACCEPT) { 215 215 next_hook: 216 - verdict = nf_iterate(&nf_hooks[entry->state.pf][entry->state.hook], 216 + verdict = nf_iterate(entry->state.hook_list, 217 217 skb, &entry->state, &elem); 218 218 } 219 219
+26 -14
net/netfilter/nfnetlink.c
··· 269 269 } 270 270 } 271 271 272 + enum { 273 + NFNL_BATCH_FAILURE = (1 << 0), 274 + NFNL_BATCH_DONE = (1 << 1), 275 + NFNL_BATCH_REPLAY = (1 << 2), 276 + }; 277 + 272 278 static void nfnetlink_rcv_batch(struct sk_buff *skb, struct nlmsghdr *nlh, 273 279 u_int16_t subsys_id) 274 280 { ··· 282 276 struct net *net = sock_net(skb->sk); 283 277 const struct nfnetlink_subsystem *ss; 284 278 const struct nfnl_callback *nc; 285 - bool success = true, done = false; 286 279 static LIST_HEAD(err_list); 280 + u32 status; 287 281 int err; 288 282 289 283 if (subsys_id >= NFNL_SUBSYS_COUNT) 290 284 return netlink_ack(skb, nlh, -EINVAL); 291 285 replay: 286 + status = 0; 287 + 292 288 skb = netlink_skb_clone(oskb, GFP_KERNEL); 293 289 if (!skb) 294 290 return netlink_ack(oskb, nlh, -ENOMEM); ··· 344 336 if (type == NFNL_MSG_BATCH_BEGIN) { 345 337 /* Malformed: Batch begin twice */ 346 338 nfnl_err_reset(&err_list); 347 - success = false; 339 + status |= NFNL_BATCH_FAILURE; 348 340 goto done; 349 341 } else if (type == NFNL_MSG_BATCH_END) { 350 - done = true; 342 + status |= NFNL_BATCH_DONE; 351 343 goto done; 352 344 } else if (type < NLMSG_MIN_TYPE) { 353 345 err = -EINVAL; ··· 390 382 * original skb. 391 383 */ 392 384 if (err == -EAGAIN) { 393 - nfnl_err_reset(&err_list); 394 - ss->abort(oskb); 395 - nfnl_unlock(subsys_id); 396 - kfree_skb(skb); 397 - goto replay; 385 + status |= NFNL_BATCH_REPLAY; 386 + goto next; 398 387 } 399 388 } 400 389 ack: ··· 407 402 */ 408 403 nfnl_err_reset(&err_list); 409 404 netlink_ack(skb, nlmsg_hdr(oskb), -ENOMEM); 410 - success = false; 405 + status |= NFNL_BATCH_FAILURE; 411 406 goto done; 412 407 } 413 408 /* We don't stop processing the batch on errors, thus, ··· 415 410 * triggers. 416 411 */ 417 412 if (err) 418 - success = false; 413 + status |= NFNL_BATCH_FAILURE; 419 414 } 420 - 415 + next: 421 416 msglen = NLMSG_ALIGN(nlh->nlmsg_len); 422 417 if (msglen > skb->len) 423 418 msglen = skb->len; 424 419 skb_pull(skb, msglen); 425 420 } 426 421 done: 427 - if (success && done) 428 - ss->commit(oskb); 429 - else 422 + if (status & NFNL_BATCH_REPLAY) { 430 423 ss->abort(oskb); 424 + nfnl_err_reset(&err_list); 425 + nfnl_unlock(subsys_id); 426 + kfree_skb(skb); 427 + goto replay; 428 + } else if (status == NFNL_BATCH_DONE) { 429 + ss->commit(oskb); 430 + } else { 431 + ss->abort(oskb); 432 + } 431 433 432 434 nfnl_err_deliver(&err_list, oskb); 433 435 nfnl_unlock(subsys_id);
+1 -1
net/netlink/af_netlink.c
··· 158 158 out: 159 159 spin_unlock(&netlink_tap_lock); 160 160 161 - if (found && nt->module) 161 + if (found) 162 162 module_put(nt->module); 163 163 164 164 return found ? 0 : -ENODEV;
+3 -1
net/rds/ib_rdma.c
··· 759 759 } 760 760 761 761 ibmr = rds_ib_alloc_fmr(rds_ibdev); 762 - if (IS_ERR(ibmr)) 762 + if (IS_ERR(ibmr)) { 763 + rds_ib_dev_put(rds_ibdev); 763 764 return ibmr; 765 + } 764 766 765 767 ret = rds_ib_map_fmr(rds_ibdev, ibmr, sg, nents); 766 768 if (ret == 0)
+1 -1
net/rds/transport.c
··· 73 73 74 74 void rds_trans_put(struct rds_transport *trans) 75 75 { 76 - if (trans && trans->t_owner) 76 + if (trans) 77 77 module_put(trans->t_owner); 78 78 } 79 79
+8 -4
net/switchdev/switchdev.c
··· 171 171 * released. 172 172 */ 173 173 174 - attr->trans = SWITCHDEV_TRANS_ABORT; 175 - __switchdev_port_attr_set(dev, attr); 174 + if (err != -EOPNOTSUPP) { 175 + attr->trans = SWITCHDEV_TRANS_ABORT; 176 + __switchdev_port_attr_set(dev, attr); 177 + } 176 178 177 179 return err; 178 180 } ··· 251 249 * released. 252 250 */ 253 251 254 - obj->trans = SWITCHDEV_TRANS_ABORT; 255 - __switchdev_port_obj_add(dev, obj); 252 + if (err != -EOPNOTSUPP) { 253 + obj->trans = SWITCHDEV_TRANS_ABORT; 254 + __switchdev_port_obj_add(dev, obj); 255 + } 256 256 257 257 return err; 258 258 }
+1
net/tipc/socket.c
··· 2007 2007 res = tipc_sk_create(sock_net(sock->sk), new_sock, 0, 1); 2008 2008 if (res) 2009 2009 goto exit; 2010 + security_sk_clone(sock->sk, new_sock->sk); 2010 2011 2011 2012 new_sk = new_sock->sk; 2012 2013 new_tsock = tipc_sk(new_sk);
+1 -1
scripts/checkpatch.pl
··· 2599 2599 # if LONG_LINE is ignored, the other 2 types are also ignored 2600 2600 # 2601 2601 2602 - if ($length > $max_line_length) { 2602 + if ($line =~ /^\+/ && $length > $max_line_length) { 2603 2603 my $msg_type = "LONG_LINE"; 2604 2604 2605 2605 # Check the allowed long line types first
+2
scripts/mod/devicetable-offsets.c
··· 63 63 64 64 DEVID(acpi_device_id); 65 65 DEVID_FIELD(acpi_device_id, id); 66 + DEVID_FIELD(acpi_device_id, cls); 67 + DEVID_FIELD(acpi_device_id, cls_msk); 66 68 67 69 DEVID(pnp_device_id); 68 70 DEVID_FIELD(pnp_device_id, id);
+30 -2
scripts/mod/file2alias.c
··· 523 523 } 524 524 ADD_TO_DEVTABLE("serio", serio_device_id, do_serio_entry); 525 525 526 - /* looks like: "acpi:ACPI0003 or acpi:PNP0C0B" or "acpi:LNXVIDEO" */ 526 + /* looks like: "acpi:ACPI0003" or "acpi:PNP0C0B" or "acpi:LNXVIDEO" or 527 + * "acpi:bbsspp" (bb=base-class, ss=sub-class, pp=prog-if) 528 + * 529 + * NOTE: Each driver should use one of the following : _HID, _CIDs 530 + * or _CLS. Also, bb, ss, and pp can be substituted with ?? 531 + * as don't care byte. 532 + */ 527 533 static int do_acpi_entry(const char *filename, 528 534 void *symval, char *alias) 529 535 { 530 536 DEF_FIELD_ADDR(symval, acpi_device_id, id); 531 - sprintf(alias, "acpi*:%s:*", *id); 537 + DEF_FIELD_ADDR(symval, acpi_device_id, cls); 538 + DEF_FIELD_ADDR(symval, acpi_device_id, cls_msk); 539 + 540 + if (id && strlen((const char *)*id)) 541 + sprintf(alias, "acpi*:%s:*", *id); 542 + else if (cls) { 543 + int i, byte_shift, cnt = 0; 544 + unsigned int msk; 545 + 546 + sprintf(&alias[cnt], "acpi*:"); 547 + cnt = 6; 548 + for (i = 1; i <= 3; i++) { 549 + byte_shift = 8 * (3-i); 550 + msk = (*cls_msk >> byte_shift) & 0xFF; 551 + if (msk) 552 + sprintf(&alias[cnt], "%02x", 553 + (*cls >> byte_shift) & 0xFF); 554 + else 555 + sprintf(&alias[cnt], "??"); 556 + cnt += 2; 557 + } 558 + sprintf(&alias[cnt], ":*"); 559 + } 532 560 return 1; 533 561 } 534 562 ADD_TO_DEVTABLE("acpi", acpi_device_id, do_acpi_entry);
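For _CLS matching, the alias encodes base-class/sub-class/prog-if as three hex byte pairs, emitting `??` for any byte the mask leaves uncovered. A stand-alone version of the formatting loop from `do_acpi_entry()` above (class path only, without the `_HID` branch; `acpi_cls_alias` is an illustrative wrapper name):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build "acpi*:bbsspp:*" from a 24-bit class value and mask, writing
 * "??" for any masked-out byte -- the same loop as in the hunk. */
void acpi_cls_alias(char *alias, unsigned int cls, unsigned int cls_msk)
{
	int i, byte_shift, cnt;

	sprintf(alias, "acpi*:");
	cnt = 6;
	for (i = 1; i <= 3; i++) {
		byte_shift = 8 * (3 - i);
		if ((cls_msk >> byte_shift) & 0xFF)
			sprintf(&alias[cnt], "%02x",
				(cls >> byte_shift) & 0xFF);
		else
			sprintf(&alias[cnt], "??");
		cnt += 2;
	}
	sprintf(&alias[cnt], ":*");
}
```

So a SATA AHCI controller class (0x010601) masked with 0xFFFF00 matches any prog-if: the alias comes out as `acpi*:0106??:*`.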
+2 -1
scripts/mod/modpost.c
··· 886 886 #define TEXT_SECTIONS ".text", ".text.unlikely", ".sched.text", \ 887 887 ".kprobes.text" 888 888 #define OTHER_TEXT_SECTIONS ".ref.text", ".head.text", ".spinlock.text", \ 889 - ".fixup", ".entry.text", ".exception.text", ".text.*" 889 + ".fixup", ".entry.text", ".exception.text", ".text.*", \ 890 + ".coldtext" 890 891 891 892 #define INIT_SECTIONS ".init.*" 892 893 #define MEM_INIT_SECTIONS ".meminit.*"
+2 -1
security/selinux/hooks.c
··· 3283 3283 int rc = 0; 3284 3284 3285 3285 if (default_noexec && 3286 - (prot & PROT_EXEC) && (!file || (!shared && (prot & PROT_WRITE)))) { 3286 + (prot & PROT_EXEC) && (!file || IS_PRIVATE(file_inode(file)) || 3287 + (!shared && (prot & PROT_WRITE)))) { 3287 3288 /* 3288 3289 * We are making executable an anonymous mapping or a 3289 3290 * private file mapping that will also be writable.
+6
security/selinux/ss/ebitmap.c
··· 153 153 if (offset == (u32)-1)
154 154 return 0;
155 155
156 + /* don't waste ebitmap space if the netlabel bitmap is empty */
157 + if (bitmap == 0) {
158 + offset += EBITMAP_UNIT_SIZE;
159 + continue;
160 + }
161 +
156 162 if (e_iter == NULL ||
157 163 offset >= e_iter->startbit + EBITMAP_SIZE) {
158 164 e_prev = e_iter;
+1 -1
sound/pci/hda/hda_generic.c
··· 5175 5175 int err = 0;
5176 5176
5177 5177 mutex_lock(&spec->pcm_mutex);
5178 - if (!spec->indep_hp_enabled)
5178 + if (spec->indep_hp && !spec->indep_hp_enabled)
5179 5179 err = -EBUSY;
5180 5180 else
5181 5181 spec->active_streams |= 1 << STREAM_INDEP_HP;
+2
sound/pci/hda/patch_hdmi.c
··· 3527 3527 { .id = 0x80862807, .name = "Haswell HDMI", .patch = patch_generic_hdmi },
3528 3528 { .id = 0x80862808, .name = "Broadwell HDMI", .patch = patch_generic_hdmi },
3529 3529 { .id = 0x80862809, .name = "Skylake HDMI", .patch = patch_generic_hdmi },
3530 + { .id = 0x8086280a, .name = "Broxton HDMI", .patch = patch_generic_hdmi },
3530 3531 { .id = 0x80862880, .name = "CedarTrail HDMI", .patch = patch_generic_hdmi },
3531 3532 { .id = 0x80862882, .name = "Valleyview2 HDMI", .patch = patch_generic_hdmi },
3532 3533 { .id = 0x80862883, .name = "Braswell HDMI", .patch = patch_generic_hdmi },
··· 3592 3591 MODULE_ALIAS("snd-hda-codec-id:80862807");
3593 3592 MODULE_ALIAS("snd-hda-codec-id:80862808");
3594 3593 MODULE_ALIAS("snd-hda-codec-id:80862809");
3594 + MODULE_ALIAS("snd-hda-codec-id:8086280a");
3595 3595 MODULE_ALIAS("snd-hda-codec-id:80862880");
3596 3596 MODULE_ALIAS("snd-hda-codec-id:80862882");
3597 3597 MODULE_ALIAS("snd-hda-codec-id:80862883");
+55
sound/pci/hda/patch_realtek.c
··· 4441 4441 } 4442 4442 } 4443 4443 4444 + /* Hook to update amp GPIO4 for automute */ 4445 + static void alc280_hp_gpio4_automute_hook(struct hda_codec *codec, 4446 + struct hda_jack_callback *jack) 4447 + { 4448 + struct alc_spec *spec = codec->spec; 4449 + 4450 + snd_hda_gen_hp_automute(codec, jack); 4451 + /* mute_led_polarity is set to 0, so we pass inverted value here */ 4452 + alc_update_gpio_led(codec, 0x10, !spec->gen.hp_jack_present); 4453 + } 4454 + 4455 + /* Manage GPIOs for HP EliteBook Folio 9480m. 4456 + * 4457 + * GPIO4 is the headphone amplifier power control 4458 + * GPIO3 is the audio output mute indicator LED 4459 + */ 4460 + 4461 + static void alc280_fixup_hp_9480m(struct hda_codec *codec, 4462 + const struct hda_fixup *fix, 4463 + int action) 4464 + { 4465 + struct alc_spec *spec = codec->spec; 4466 + static const struct hda_verb gpio_init[] = { 4467 + { 0x01, AC_VERB_SET_GPIO_MASK, 0x18 }, 4468 + { 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x18 }, 4469 + {} 4470 + }; 4471 + 4472 + if (action == HDA_FIXUP_ACT_PRE_PROBE) { 4473 + /* Set the hooks to turn the headphone amp on/off 4474 + * as needed 4475 + */ 4476 + spec->gen.vmaster_mute.hook = alc_fixup_gpio_mute_hook; 4477 + spec->gen.hp_automute_hook = alc280_hp_gpio4_automute_hook; 4478 + 4479 + /* The GPIOs are currently off */ 4480 + spec->gpio_led = 0; 4481 + 4482 + /* GPIO3 is connected to the output mute LED, 4483 + * high is on, low is off 4484 + */ 4485 + spec->mute_led_polarity = 0; 4486 + spec->gpio_mute_led_mask = 0x08; 4487 + 4488 + /* Initialize GPIO configuration */ 4489 + snd_hda_add_verbs(codec, gpio_init); 4490 + } 4491 + } 4492 + 4444 4493 /* for hda_fixup_thinkpad_acpi() */ 4445 4494 #include "thinkpad_helper.c" 4446 4495 ··· 4570 4521 ALC286_FIXUP_HP_GPIO_LED, 4571 4522 ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY, 4572 4523 ALC280_FIXUP_HP_DOCK_PINS, 4524 + ALC280_FIXUP_HP_9480M, 4573 4525 ALC288_FIXUP_DELL_HEADSET_MODE, 4574 4526 ALC288_FIXUP_DELL1_MIC_NO_PRESENCE, 4575 4527 
ALC288_FIXUP_DELL_XPS_13_GPIO6, ··· 5093 5043 .chained = true, 5094 5044 .chain_id = ALC280_FIXUP_HP_GPIO4 5095 5045 }, 5046 + [ALC280_FIXUP_HP_9480M] = { 5047 + .type = HDA_FIXUP_FUNC, 5048 + .v.func = alc280_fixup_hp_9480m, 5049 + }, 5096 5050 [ALC288_FIXUP_DELL_HEADSET_MODE] = { 5097 5051 .type = HDA_FIXUP_FUNC, 5098 5052 .v.func = alc_fixup_headset_mode_dell_alc288, ··· 5215 5161 SND_PCI_QUIRK(0x103c, 0x22b7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 5216 5162 SND_PCI_QUIRK(0x103c, 0x22bf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 5217 5163 SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 5164 + SND_PCI_QUIRK(0x103c, 0x22db, "HP", ALC280_FIXUP_HP_9480M), 5218 5165 SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 5219 5166 SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 5220 5167 /* ALC290 */
+2 -7
sound/usb/line6/pcm.c
··· 186 186 int ret = 0;
187 187
188 188 spin_lock_irqsave(&pstr->lock, flags);
189 - if (!test_and_set_bit(type, &pstr->running)) {
190 - if (pstr->active_urbs || pstr->unlink_urbs) {
191 - ret = -EBUSY;
192 - goto error;
193 - }
194 -
189 + if (!test_and_set_bit(type, &pstr->running) &&
190 + !(pstr->active_urbs || pstr->unlink_urbs)) {
195 191 pstr->count = 0;
196 192 /* Submit all currently available URBs */
197 193 if (direction == SNDRV_PCM_STREAM_PLAYBACK)
··· 195 199 else
196 200 ret = line6_submit_audio_in_all_urbs(line6pcm);
197 201 }
198 - error:
199 202 if (ret < 0)
200 203 clear_bit(type, &pstr->running);
201 204 spin_unlock_irqrestore(&pstr->lock, flags);
+68
sound/usb/quirks-table.h
··· 2512 2512 } 2513 2513 }, 2514 2514 2515 + /* Steinberg devices */ 2516 + { 2517 + /* Steinberg MI2 */ 2518 + USB_DEVICE_VENDOR_SPEC(0x0a4e, 0x2040), 2519 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 2520 + .ifnum = QUIRK_ANY_INTERFACE, 2521 + .type = QUIRK_COMPOSITE, 2522 + .data = & (const struct snd_usb_audio_quirk[]) { 2523 + { 2524 + .ifnum = 0, 2525 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 2526 + }, 2527 + { 2528 + .ifnum = 1, 2529 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 2530 + }, 2531 + { 2532 + .ifnum = 2, 2533 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 2534 + }, 2535 + { 2536 + .ifnum = 3, 2537 + .type = QUIRK_MIDI_FIXED_ENDPOINT, 2538 + .data = &(const struct snd_usb_midi_endpoint_info) { 2539 + .out_cables = 0x0001, 2540 + .in_cables = 0x0001 2541 + } 2542 + }, 2543 + { 2544 + .ifnum = -1 2545 + } 2546 + } 2547 + } 2548 + }, 2549 + { 2550 + /* Steinberg MI4 */ 2551 + USB_DEVICE_VENDOR_SPEC(0x0a4e, 0x4040), 2552 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 2553 + .ifnum = QUIRK_ANY_INTERFACE, 2554 + .type = QUIRK_COMPOSITE, 2555 + .data = & (const struct snd_usb_audio_quirk[]) { 2556 + { 2557 + .ifnum = 0, 2558 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 2559 + }, 2560 + { 2561 + .ifnum = 1, 2562 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 2563 + }, 2564 + { 2565 + .ifnum = 2, 2566 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 2567 + }, 2568 + { 2569 + .ifnum = 3, 2570 + .type = QUIRK_MIDI_FIXED_ENDPOINT, 2571 + .data = &(const struct snd_usb_midi_endpoint_info) { 2572 + .out_cables = 0x0001, 2573 + .in_cables = 0x0001 2574 + } 2575 + }, 2576 + { 2577 + .ifnum = -1 2578 + } 2579 + } 2580 + } 2581 + }, 2582 + 2515 2583 /* TerraTec devices */ 2516 2584 { 2517 2585 USB_DEVICE_VENDOR_SPEC(0x0ccd, 0x0012),
+58
tools/include/linux/compiler.h
··· 41 41 42 42 #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x)) 43 43 44 + #include <linux/types.h> 45 + 46 + static __always_inline void __read_once_size(const volatile void *p, void *res, int size) 47 + { 48 + switch (size) { 49 + case 1: *(__u8 *)res = *(volatile __u8 *)p; break; 50 + case 2: *(__u16 *)res = *(volatile __u16 *)p; break; 51 + case 4: *(__u32 *)res = *(volatile __u32 *)p; break; 52 + case 8: *(__u64 *)res = *(volatile __u64 *)p; break; 53 + default: 54 + barrier(); 55 + __builtin_memcpy((void *)res, (const void *)p, size); 56 + barrier(); 57 + } 58 + } 59 + 60 + static __always_inline void __write_once_size(volatile void *p, void *res, int size) 61 + { 62 + switch (size) { 63 + case 1: *(volatile __u8 *)p = *(__u8 *)res; break; 64 + case 2: *(volatile __u16 *)p = *(__u16 *)res; break; 65 + case 4: *(volatile __u32 *)p = *(__u32 *)res; break; 66 + case 8: *(volatile __u64 *)p = *(__u64 *)res; break; 67 + default: 68 + barrier(); 69 + __builtin_memcpy((void *)p, (const void *)res, size); 70 + barrier(); 71 + } 72 + } 73 + 74 + /* 75 + * Prevent the compiler from merging or refetching reads or writes. The 76 + * compiler is also forbidden from reordering successive instances of 77 + * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the 78 + * compiler is aware of some particular ordering. One way to make the 79 + * compiler aware of ordering is to put the two invocations of READ_ONCE, 80 + * WRITE_ONCE or ACCESS_ONCE() in different C statements. 81 + * 82 + * In contrast to ACCESS_ONCE these two macros will also work on aggregate 83 + * data types like structs or unions. If the size of the accessed data 84 + * type exceeds the word size of the machine (e.g., 32 bits or 64 bits) 85 + * READ_ONCE() and WRITE_ONCE() will fall back to memcpy and print a 86 + * compile-time warning. 
87 + * 88 + * Their two major use cases are: (1) Mediating communication between 89 + * process-level code and irq/NMI handlers, all running on the same CPU, 90 + * and (2) Ensuring that the compiler does not fold, spindle, or otherwise 91 + * mutilate accesses that either do not require ordering or that interact 92 + * with an explicit memory barrier or atomic instruction that provides the 93 + * required ordering. 94 + */ 95 + 96 + #define READ_ONCE(x) \ 97 + ({ union { typeof(x) __val; char __c[1]; } __u; __read_once_size(&(x), __u.__c, sizeof(x)); __u.__val; }) 98 + 99 + #define WRITE_ONCE(x, val) \ 100 + ({ union { typeof(x) __val; char __c[1]; } __u = { .__val = (val) }; __write_once_size(&(x), __u.__c, sizeof(x)); __u.__val; }) 101 + 44 102 #endif /* _TOOLS_LINUX_COMPILER_H */
-10
tools/include/linux/export.h
··· 1 - #ifndef _TOOLS_LINUX_EXPORT_H_
2 - #define _TOOLS_LINUX_EXPORT_H_
3 -
4 - #define EXPORT_SYMBOL(sym)
5 - #define EXPORT_SYMBOL_GPL(sym)
6 - #define EXPORT_SYMBOL_GPL_FUTURE(sym)
7 - #define EXPORT_UNUSED_SYMBOL(sym)
8 - #define EXPORT_UNUSED_SYMBOL_GPL(sym)
9 -
10 - #endif
+104
tools/include/linux/rbtree.h
··· 1 + /* 2 + Red Black Trees 3 + (C) 1999 Andrea Arcangeli <andrea@suse.de> 4 + 5 + This program is free software; you can redistribute it and/or modify 6 + it under the terms of the GNU General Public License as published by 7 + the Free Software Foundation; either version 2 of the License, or 8 + (at your option) any later version. 9 + 10 + This program is distributed in the hope that it will be useful, 11 + but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + GNU General Public License for more details. 14 + 15 + You should have received a copy of the GNU General Public License 16 + along with this program; if not, write to the Free Software 17 + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 + 19 + linux/include/linux/rbtree.h 20 + 21 + To use rbtrees you'll have to implement your own insert and search cores. 22 + This will avoid us to use callbacks and to drop drammatically performances. 23 + I know it's not the cleaner way, but in C (not in C++) to get 24 + performances and genericity... 25 + 26 + See Documentation/rbtree.txt for documentation and samples. 
27 + */ 28 + 29 + #ifndef __TOOLS_LINUX_PERF_RBTREE_H 30 + #define __TOOLS_LINUX_PERF_RBTREE_H 31 + 32 + #include <linux/kernel.h> 33 + #include <linux/stddef.h> 34 + 35 + struct rb_node { 36 + unsigned long __rb_parent_color; 37 + struct rb_node *rb_right; 38 + struct rb_node *rb_left; 39 + } __attribute__((aligned(sizeof(long)))); 40 + /* The alignment might seem pointless, but allegedly CRIS needs it */ 41 + 42 + struct rb_root { 43 + struct rb_node *rb_node; 44 + }; 45 + 46 + 47 + #define rb_parent(r) ((struct rb_node *)((r)->__rb_parent_color & ~3)) 48 + 49 + #define RB_ROOT (struct rb_root) { NULL, } 50 + #define rb_entry(ptr, type, member) container_of(ptr, type, member) 51 + 52 + #define RB_EMPTY_ROOT(root) ((root)->rb_node == NULL) 53 + 54 + /* 'empty' nodes are nodes that are known not to be inserted in an rbtree */ 55 + #define RB_EMPTY_NODE(node) \ 56 + ((node)->__rb_parent_color == (unsigned long)(node)) 57 + #define RB_CLEAR_NODE(node) \ 58 + ((node)->__rb_parent_color = (unsigned long)(node)) 59 + 60 + 61 + extern void rb_insert_color(struct rb_node *, struct rb_root *); 62 + extern void rb_erase(struct rb_node *, struct rb_root *); 63 + 64 + 65 + /* Find logical next and previous nodes in a tree */ 66 + extern struct rb_node *rb_next(const struct rb_node *); 67 + extern struct rb_node *rb_prev(const struct rb_node *); 68 + extern struct rb_node *rb_first(const struct rb_root *); 69 + extern struct rb_node *rb_last(const struct rb_root *); 70 + 71 + /* Postorder iteration - always visit the parent after its children */ 72 + extern struct rb_node *rb_first_postorder(const struct rb_root *); 73 + extern struct rb_node *rb_next_postorder(const struct rb_node *); 74 + 75 + /* Fast replacement of a single node without remove/rebalance/add/rebalance */ 76 + extern void rb_replace_node(struct rb_node *victim, struct rb_node *new, 77 + struct rb_root *root); 78 + 79 + static inline void rb_link_node(struct rb_node *node, struct rb_node *parent, 80 + struct 
rb_node **rb_link) 81 + { 82 + node->__rb_parent_color = (unsigned long)parent; 83 + node->rb_left = node->rb_right = NULL; 84 + 85 + *rb_link = node; 86 + } 87 + 88 + #define rb_entry_safe(ptr, type, member) \ 89 + ({ typeof(ptr) ____ptr = (ptr); \ 90 + ____ptr ? rb_entry(____ptr, type, member) : NULL; \ 91 + }) 92 + 93 + 94 + /* 95 + * Handy for checking that we are not deleting an entry that is 96 + * already in a list, found in block/{blk-throttle,cfq-iosched}.c, 97 + * probably should be moved to lib/rbtree.c... 98 + */ 99 + static inline void rb_erase_init(struct rb_node *n, struct rb_root *root) 100 + { 101 + rb_erase(n, root); 102 + RB_CLEAR_NODE(n); 103 + } 104 + #endif /* __TOOLS_LINUX_PERF_RBTREE_H */
+245
tools/include/linux/rbtree_augmented.h
··· 1 + /* 2 + Red Black Trees 3 + (C) 1999 Andrea Arcangeli <andrea@suse.de> 4 + (C) 2002 David Woodhouse <dwmw2@infradead.org> 5 + (C) 2012 Michel Lespinasse <walken@google.com> 6 + 7 + This program is free software; you can redistribute it and/or modify 8 + it under the terms of the GNU General Public License as published by 9 + the Free Software Foundation; either version 2 of the License, or 10 + (at your option) any later version. 11 + 12 + This program is distributed in the hope that it will be useful, 13 + but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + GNU General Public License for more details. 16 + 17 + You should have received a copy of the GNU General Public License 18 + along with this program; if not, write to the Free Software 19 + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + 21 + tools/linux/include/linux/rbtree_augmented.h 22 + 23 + Copied from: 24 + linux/include/linux/rbtree_augmented.h 25 + */ 26 + 27 + #ifndef _TOOLS_LINUX_RBTREE_AUGMENTED_H 28 + #define _TOOLS_LINUX_RBTREE_AUGMENTED_H 29 + 30 + #include <linux/compiler.h> 31 + #include <linux/rbtree.h> 32 + 33 + /* 34 + * Please note - only struct rb_augment_callbacks and the prototypes for 35 + * rb_insert_augmented() and rb_erase_augmented() are intended to be public. 36 + * The rest are implementation details you are not expected to depend on. 37 + * 38 + * See Documentation/rbtree.txt for documentation and samples. 
39 + */ 40 + 41 + struct rb_augment_callbacks { 42 + void (*propagate)(struct rb_node *node, struct rb_node *stop); 43 + void (*copy)(struct rb_node *old, struct rb_node *new); 44 + void (*rotate)(struct rb_node *old, struct rb_node *new); 45 + }; 46 + 47 + extern void __rb_insert_augmented(struct rb_node *node, struct rb_root *root, 48 + void (*augment_rotate)(struct rb_node *old, struct rb_node *new)); 49 + /* 50 + * Fixup the rbtree and update the augmented information when rebalancing. 51 + * 52 + * On insertion, the user must update the augmented information on the path 53 + * leading to the inserted node, then call rb_link_node() as usual and 54 + * rb_augment_inserted() instead of the usual rb_insert_color() call. 55 + * If rb_augment_inserted() rebalances the rbtree, it will callback into 56 + * a user provided function to update the augmented information on the 57 + * affected subtrees. 58 + */ 59 + static inline void 60 + rb_insert_augmented(struct rb_node *node, struct rb_root *root, 61 + const struct rb_augment_callbacks *augment) 62 + { 63 + __rb_insert_augmented(node, root, augment->rotate); 64 + } 65 + 66 + #define RB_DECLARE_CALLBACKS(rbstatic, rbname, rbstruct, rbfield, \ 67 + rbtype, rbaugmented, rbcompute) \ 68 + static inline void \ 69 + rbname ## _propagate(struct rb_node *rb, struct rb_node *stop) \ 70 + { \ 71 + while (rb != stop) { \ 72 + rbstruct *node = rb_entry(rb, rbstruct, rbfield); \ 73 + rbtype augmented = rbcompute(node); \ 74 + if (node->rbaugmented == augmented) \ 75 + break; \ 76 + node->rbaugmented = augmented; \ 77 + rb = rb_parent(&node->rbfield); \ 78 + } \ 79 + } \ 80 + static inline void \ 81 + rbname ## _copy(struct rb_node *rb_old, struct rb_node *rb_new) \ 82 + { \ 83 + rbstruct *old = rb_entry(rb_old, rbstruct, rbfield); \ 84 + rbstruct *new = rb_entry(rb_new, rbstruct, rbfield); \ 85 + new->rbaugmented = old->rbaugmented; \ 86 + } \ 87 + static void \ 88 + rbname ## _rotate(struct rb_node *rb_old, struct rb_node 
*rb_new) \ 89 + { \ 90 + rbstruct *old = rb_entry(rb_old, rbstruct, rbfield); \ 91 + rbstruct *new = rb_entry(rb_new, rbstruct, rbfield); \ 92 + new->rbaugmented = old->rbaugmented; \ 93 + old->rbaugmented = rbcompute(old); \ 94 + } \ 95 + rbstatic const struct rb_augment_callbacks rbname = { \ 96 + rbname ## _propagate, rbname ## _copy, rbname ## _rotate \ 97 + }; 98 + 99 + 100 + #define RB_RED 0 101 + #define RB_BLACK 1 102 + 103 + #define __rb_parent(pc) ((struct rb_node *)(pc & ~3)) 104 + 105 + #define __rb_color(pc) ((pc) & 1) 106 + #define __rb_is_black(pc) __rb_color(pc) 107 + #define __rb_is_red(pc) (!__rb_color(pc)) 108 + #define rb_color(rb) __rb_color((rb)->__rb_parent_color) 109 + #define rb_is_red(rb) __rb_is_red((rb)->__rb_parent_color) 110 + #define rb_is_black(rb) __rb_is_black((rb)->__rb_parent_color) 111 + 112 + static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p) 113 + { 114 + rb->__rb_parent_color = rb_color(rb) | (unsigned long)p; 115 + } 116 + 117 + static inline void rb_set_parent_color(struct rb_node *rb, 118 + struct rb_node *p, int color) 119 + { 120 + rb->__rb_parent_color = (unsigned long)p | color; 121 + } 122 + 123 + static inline void 124 + __rb_change_child(struct rb_node *old, struct rb_node *new, 125 + struct rb_node *parent, struct rb_root *root) 126 + { 127 + if (parent) { 128 + if (parent->rb_left == old) 129 + parent->rb_left = new; 130 + else 131 + parent->rb_right = new; 132 + } else 133 + root->rb_node = new; 134 + } 135 + 136 + extern void __rb_erase_color(struct rb_node *parent, struct rb_root *root, 137 + void (*augment_rotate)(struct rb_node *old, struct rb_node *new)); 138 + 139 + static __always_inline struct rb_node * 140 + __rb_erase_augmented(struct rb_node *node, struct rb_root *root, 141 + const struct rb_augment_callbacks *augment) 142 + { 143 + struct rb_node *child = node->rb_right, *tmp = node->rb_left; 144 + struct rb_node *parent, *rebalance; 145 + unsigned long pc; 146 + 147 + if (!tmp) { 
148 + /* 149 + * Case 1: node to erase has no more than 1 child (easy!) 150 + * 151 + * Note that if there is one child it must be red due to 5) 152 + * and node must be black due to 4). We adjust colors locally 153 + * so as to bypass __rb_erase_color() later on. 154 + */ 155 + pc = node->__rb_parent_color; 156 + parent = __rb_parent(pc); 157 + __rb_change_child(node, child, parent, root); 158 + if (child) { 159 + child->__rb_parent_color = pc; 160 + rebalance = NULL; 161 + } else 162 + rebalance = __rb_is_black(pc) ? parent : NULL; 163 + tmp = parent; 164 + } else if (!child) { 165 + /* Still case 1, but this time the child is node->rb_left */ 166 + tmp->__rb_parent_color = pc = node->__rb_parent_color; 167 + parent = __rb_parent(pc); 168 + __rb_change_child(node, tmp, parent, root); 169 + rebalance = NULL; 170 + tmp = parent; 171 + } else { 172 + struct rb_node *successor = child, *child2; 173 + tmp = child->rb_left; 174 + if (!tmp) { 175 + /* 176 + * Case 2: node's successor is its right child 177 + * 178 + * (n) (s) 179 + * / \ / \ 180 + * (x) (s) -> (x) (c) 181 + * \ 182 + * (c) 183 + */ 184 + parent = successor; 185 + child2 = successor->rb_right; 186 + augment->copy(node, successor); 187 + } else { 188 + /* 189 + * Case 3: node's successor is leftmost under 190 + * node's right child subtree 191 + * 192 + * (n) (s) 193 + * / \ / \ 194 + * (x) (y) -> (x) (y) 195 + * / / 196 + * (p) (p) 197 + * / / 198 + * (s) (c) 199 + * \ 200 + * (c) 201 + */ 202 + do { 203 + parent = successor; 204 + successor = tmp; 205 + tmp = tmp->rb_left; 206 + } while (tmp); 207 + parent->rb_left = child2 = successor->rb_right; 208 + successor->rb_right = child; 209 + rb_set_parent(child, successor); 210 + augment->copy(node, successor); 211 + augment->propagate(parent, successor); 212 + } 213 + 214 + successor->rb_left = tmp = node->rb_left; 215 + rb_set_parent(tmp, successor); 216 + 217 + pc = node->__rb_parent_color; 218 + tmp = __rb_parent(pc); 219 + __rb_change_child(node, 
successor, tmp, root); 220 + if (child2) { 221 + successor->__rb_parent_color = pc; 222 + rb_set_parent_color(child2, parent, RB_BLACK); 223 + rebalance = NULL; 224 + } else { 225 + unsigned long pc2 = successor->__rb_parent_color; 226 + successor->__rb_parent_color = pc; 227 + rebalance = __rb_is_black(pc2) ? parent : NULL; 228 + } 229 + tmp = successor; 230 + } 231 + 232 + augment->propagate(tmp, NULL); 233 + return rebalance; 234 + } 235 + 236 + static __always_inline void 237 + rb_erase_augmented(struct rb_node *node, struct rb_root *root, 238 + const struct rb_augment_callbacks *augment) 239 + { 240 + struct rb_node *rebalance = __rb_erase_augmented(node, root, augment); 241 + if (rebalance) 242 + __rb_erase_color(rebalance, root, augment->rotate); 243 + } 244 + 245 + #endif /* _TOOLS_LINUX_RBTREE_AUGMENTED_H */
+1 -1
tools/lib/api/Makefile
··· 36 36
37 37 clean:
38 38 $(call QUIET_CLEAN, libapi) $(RM) $(LIBFILE); \
39 - find $(if $(OUTPUT),$(OUTPUT),.) -name \*.o | xargs $(RM)
39 + find $(if $(OUTPUT),$(OUTPUT),.) -name \*.o -or -name \*.o.cmd -or -name \*.o.d | xargs $(RM)
40 40
41 41 FORCE:
42 42
+62
tools/lib/hweight.c
··· 1 + #include <linux/bitops.h> 2 + #include <asm/types.h> 3 + 4 + /** 5 + * hweightN - returns the hamming weight of a N-bit word 6 + * @x: the word to weigh 7 + * 8 + * The Hamming Weight of a number is the total number of bits set in it. 9 + */ 10 + 11 + unsigned int __sw_hweight32(unsigned int w) 12 + { 13 + #ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER 14 + w -= (w >> 1) & 0x55555555; 15 + w = (w & 0x33333333) + ((w >> 2) & 0x33333333); 16 + w = (w + (w >> 4)) & 0x0f0f0f0f; 17 + return (w * 0x01010101) >> 24; 18 + #else 19 + unsigned int res = w - ((w >> 1) & 0x55555555); 20 + res = (res & 0x33333333) + ((res >> 2) & 0x33333333); 21 + res = (res + (res >> 4)) & 0x0F0F0F0F; 22 + res = res + (res >> 8); 23 + return (res + (res >> 16)) & 0x000000FF; 24 + #endif 25 + } 26 + 27 + unsigned int __sw_hweight16(unsigned int w) 28 + { 29 + unsigned int res = w - ((w >> 1) & 0x5555); 30 + res = (res & 0x3333) + ((res >> 2) & 0x3333); 31 + res = (res + (res >> 4)) & 0x0F0F; 32 + return (res + (res >> 8)) & 0x00FF; 33 + } 34 + 35 + unsigned int __sw_hweight8(unsigned int w) 36 + { 37 + unsigned int res = w - ((w >> 1) & 0x55); 38 + res = (res & 0x33) + ((res >> 2) & 0x33); 39 + return (res + (res >> 4)) & 0x0F; 40 + } 41 + 42 + unsigned long __sw_hweight64(__u64 w) 43 + { 44 + #if BITS_PER_LONG == 32 45 + return __sw_hweight32((unsigned int)(w >> 32)) + 46 + __sw_hweight32((unsigned int)w); 47 + #elif BITS_PER_LONG == 64 48 + #ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER 49 + w -= (w >> 1) & 0x5555555555555555ul; 50 + w = (w & 0x3333333333333333ul) + ((w >> 2) & 0x3333333333333333ul); 51 + w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0ful; 52 + return (w * 0x0101010101010101ul) >> 56; 53 + #else 54 + __u64 res = w - ((w >> 1) & 0x5555555555555555ul); 55 + res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul); 56 + res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful; 57 + res = res + (res >> 8); 58 + res = res + (res >> 16); 59 + return (res + (res >> 32)) & 
0x00000000000000FFul; 60 + #endif 61 + #endif 62 + }
+548
tools/lib/rbtree.c
··· 1 + /* 2 + Red Black Trees 3 + (C) 1999 Andrea Arcangeli <andrea@suse.de> 4 + (C) 2002 David Woodhouse <dwmw2@infradead.org> 5 + (C) 2012 Michel Lespinasse <walken@google.com> 6 + 7 + This program is free software; you can redistribute it and/or modify 8 + it under the terms of the GNU General Public License as published by 9 + the Free Software Foundation; either version 2 of the License, or 10 + (at your option) any later version. 11 + 12 + This program is distributed in the hope that it will be useful, 13 + but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + GNU General Public License for more details. 16 + 17 + You should have received a copy of the GNU General Public License 18 + along with this program; if not, write to the Free Software 19 + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + 21 + linux/lib/rbtree.c 22 + */ 23 + 24 + #include <linux/rbtree_augmented.h> 25 + 26 + /* 27 + * red-black trees properties: http://en.wikipedia.org/wiki/Rbtree 28 + * 29 + * 1) A node is either red or black 30 + * 2) The root is black 31 + * 3) All leaves (NULL) are black 32 + * 4) Both children of every red node are black 33 + * 5) Every simple path from root to leaves contains the same number 34 + * of black nodes. 35 + * 36 + * 4 and 5 give the O(log n) guarantee, since 4 implies you cannot have two 37 + * consecutive red nodes in a path and every red node is therefore followed by 38 + * a black. So if B is the number of black nodes on every simple path (as per 39 + * 5), then the longest possible path due to 4 is 2B. 40 + * 41 + * We shall indicate color with case, where black nodes are uppercase and red 42 + * nodes will be lowercase. Unknown color nodes shall be drawn as red within 43 + * parentheses and have some accompanying text comment. 
44 + */ 45 + 46 + static inline void rb_set_black(struct rb_node *rb) 47 + { 48 + rb->__rb_parent_color |= RB_BLACK; 49 + } 50 + 51 + static inline struct rb_node *rb_red_parent(struct rb_node *red) 52 + { 53 + return (struct rb_node *)red->__rb_parent_color; 54 + } 55 + 56 + /* 57 + * Helper function for rotations: 58 + * - old's parent and color get assigned to new 59 + * - old gets assigned new as a parent and 'color' as a color. 60 + */ 61 + static inline void 62 + __rb_rotate_set_parents(struct rb_node *old, struct rb_node *new, 63 + struct rb_root *root, int color) 64 + { 65 + struct rb_node *parent = rb_parent(old); 66 + new->__rb_parent_color = old->__rb_parent_color; 67 + rb_set_parent_color(old, new, color); 68 + __rb_change_child(old, new, parent, root); 69 + } 70 + 71 + static __always_inline void 72 + __rb_insert(struct rb_node *node, struct rb_root *root, 73 + void (*augment_rotate)(struct rb_node *old, struct rb_node *new)) 74 + { 75 + struct rb_node *parent = rb_red_parent(node), *gparent, *tmp; 76 + 77 + while (true) { 78 + /* 79 + * Loop invariant: node is red 80 + * 81 + * If there is a black parent, we are done. 82 + * Otherwise, take some corrective action as we don't 83 + * want a red root or two consecutive red nodes. 84 + */ 85 + if (!parent) { 86 + rb_set_parent_color(node, NULL, RB_BLACK); 87 + break; 88 + } else if (rb_is_black(parent)) 89 + break; 90 + 91 + gparent = rb_red_parent(parent); 92 + 93 + tmp = gparent->rb_right; 94 + if (parent != tmp) { /* parent == gparent->rb_left */ 95 + if (tmp && rb_is_red(tmp)) { 96 + /* 97 + * Case 1 - color flips 98 + * 99 + * G g 100 + * / \ / \ 101 + * p u --> P U 102 + * / / 103 + * n n 104 + * 105 + * However, since g's parent might be red, and 106 + * 4) does not allow this, we need to recurse 107 + * at g. 
108 + */ 109 + rb_set_parent_color(tmp, gparent, RB_BLACK); 110 + rb_set_parent_color(parent, gparent, RB_BLACK); 111 + node = gparent; 112 + parent = rb_parent(node); 113 + rb_set_parent_color(node, parent, RB_RED); 114 + continue; 115 + } 116 + 117 + tmp = parent->rb_right; 118 + if (node == tmp) { 119 + /* 120 + * Case 2 - left rotate at parent 121 + * 122 + * G G 123 + * / \ / \ 124 + * p U --> n U 125 + * \ / 126 + * n p 127 + * 128 + * This still leaves us in violation of 4), the 129 + * continuation into Case 3 will fix that. 130 + */ 131 + parent->rb_right = tmp = node->rb_left; 132 + node->rb_left = parent; 133 + if (tmp) 134 + rb_set_parent_color(tmp, parent, 135 + RB_BLACK); 136 + rb_set_parent_color(parent, node, RB_RED); 137 + augment_rotate(parent, node); 138 + parent = node; 139 + tmp = node->rb_right; 140 + } 141 + 142 + /* 143 + * Case 3 - right rotate at gparent 144 + * 145 + * G P 146 + * / \ / \ 147 + * p U --> n g 148 + * / \ 149 + * n U 150 + */ 151 + gparent->rb_left = tmp; /* == parent->rb_right */ 152 + parent->rb_right = gparent; 153 + if (tmp) 154 + rb_set_parent_color(tmp, gparent, RB_BLACK); 155 + __rb_rotate_set_parents(gparent, parent, root, RB_RED); 156 + augment_rotate(gparent, parent); 157 + break; 158 + } else { 159 + tmp = gparent->rb_left; 160 + if (tmp && rb_is_red(tmp)) { 161 + /* Case 1 - color flips */ 162 + rb_set_parent_color(tmp, gparent, RB_BLACK); 163 + rb_set_parent_color(parent, gparent, RB_BLACK); 164 + node = gparent; 165 + parent = rb_parent(node); 166 + rb_set_parent_color(node, parent, RB_RED); 167 + continue; 168 + } 169 + 170 + tmp = parent->rb_left; 171 + if (node == tmp) { 172 + /* Case 2 - right rotate at parent */ 173 + parent->rb_left = tmp = node->rb_right; 174 + node->rb_right = parent; 175 + if (tmp) 176 + rb_set_parent_color(tmp, parent, 177 + RB_BLACK); 178 + rb_set_parent_color(parent, node, RB_RED); 179 + augment_rotate(parent, node); 180 + parent = node; 181 + tmp = node->rb_left; 182 + } 183 + 184 
+ /* Case 3 - left rotate at gparent */
185 + gparent->rb_right = tmp; /* == parent->rb_left */
186 + parent->rb_left = gparent;
187 + if (tmp)
188 + rb_set_parent_color(tmp, gparent, RB_BLACK);
189 + __rb_rotate_set_parents(gparent, parent, root, RB_RED);
190 + augment_rotate(gparent, parent);
191 + break;
192 + }
193 + }
194 + }
195 +
196 + /*
197 + * Inline version for rb_erase() use - we want to be able to inline
198 + * and eliminate the dummy_rotate callback there
199 + */
200 + static __always_inline void
201 + ____rb_erase_color(struct rb_node *parent, struct rb_root *root,
202 + void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
203 + {
204 + struct rb_node *node = NULL, *sibling, *tmp1, *tmp2;
205 +
206 + while (true) {
207 + /*
208 + * Loop invariants:
209 + * - node is black (or NULL on first iteration)
210 + * - node is not the root (parent is not NULL)
211 + * - All leaf paths going through parent and node have a
212 + * black node count that is 1 lower than other leaf paths.
213 + */
214 + sibling = parent->rb_right;
215 + if (node != sibling) { /* node == parent->rb_left */
216 + if (rb_is_red(sibling)) {
217 + /*
218 + * Case 1 - left rotate at parent
219 + *
220 + * P S
221 + * / \ / \
222 + * N s --> p Sr
223 + * / \ / \
224 + * Sl Sr N Sl
225 + */
226 + parent->rb_right = tmp1 = sibling->rb_left;
227 + sibling->rb_left = parent;
228 + rb_set_parent_color(tmp1, parent, RB_BLACK);
229 + __rb_rotate_set_parents(parent, sibling, root,
230 + RB_RED);
231 + augment_rotate(parent, sibling);
232 + sibling = tmp1;
233 + }
234 + tmp1 = sibling->rb_right;
235 + if (!tmp1 || rb_is_black(tmp1)) {
236 + tmp2 = sibling->rb_left;
237 + if (!tmp2 || rb_is_black(tmp2)) {
238 + /*
239 + * Case 2 - sibling color flip
240 + * (p could be either color here)
241 + *
242 + * (p) (p)
243 + * / \ / \
244 + * N S --> N s
245 + * / \ / \
246 + * Sl Sr Sl Sr
247 + *
248 + * This leaves us violating 5) which
249 + * can be fixed by flipping p to black
250 + * if it was red, or by recursing at p.
251 + * p is red when coming from Case 1.
252 + */
253 + rb_set_parent_color(sibling, parent,
254 + RB_RED);
255 + if (rb_is_red(parent))
256 + rb_set_black(parent);
257 + else {
258 + node = parent;
259 + parent = rb_parent(node);
260 + if (parent)
261 + continue;
262 + }
263 + break;
264 + }
265 + /*
266 + * Case 3 - right rotate at sibling
267 + * (p could be either color here)
268 + *
269 + * (p) (p)
270 + * / \ / \
271 + * N S --> N Sl
272 + * / \ \
273 + * sl Sr s
274 + * \
275 + * Sr
276 + */
277 + sibling->rb_left = tmp1 = tmp2->rb_right;
278 + tmp2->rb_right = sibling;
279 + parent->rb_right = tmp2;
280 + if (tmp1)
281 + rb_set_parent_color(tmp1, sibling,
282 + RB_BLACK);
283 + augment_rotate(sibling, tmp2);
284 + tmp1 = sibling;
285 + sibling = tmp2;
286 + }
287 + /*
288 + * Case 4 - left rotate at parent + color flips
289 + * (p and sl could be either color here.
290 + * After rotation, p becomes black, s acquires
291 + * p's color, and sl keeps its color)
292 + *
293 + * (p) (s)
294 + * / \ / \
295 + * N S --> P Sr
296 + * / \ / \
297 + * (sl) sr N (sl)
298 + */
299 + parent->rb_right = tmp2 = sibling->rb_left;
300 + sibling->rb_left = parent;
301 + rb_set_parent_color(tmp1, sibling, RB_BLACK);
302 + if (tmp2)
303 + rb_set_parent(tmp2, parent);
304 + __rb_rotate_set_parents(parent, sibling, root,
305 + RB_BLACK);
306 + augment_rotate(parent, sibling);
307 + break;
308 + } else {
309 + sibling = parent->rb_left;
310 + if (rb_is_red(sibling)) {
311 + /* Case 1 - right rotate at parent */
312 + parent->rb_left = tmp1 = sibling->rb_right;
313 + sibling->rb_right = parent;
314 + rb_set_parent_color(tmp1, parent, RB_BLACK);
315 + __rb_rotate_set_parents(parent, sibling, root,
316 + RB_RED);
317 + augment_rotate(parent, sibling);
318 + sibling = tmp1;
319 + }
320 + tmp1 = sibling->rb_left;
321 + if (!tmp1 || rb_is_black(tmp1)) {
322 + tmp2 = sibling->rb_right;
323 + if (!tmp2 || rb_is_black(tmp2)) {
324 + /* Case 2 - sibling color flip */
325 + rb_set_parent_color(sibling, parent,
326 + RB_RED);
327 + if (rb_is_red(parent))
328 + rb_set_black(parent);
329 + else {
330 + node = parent;
331 + parent = rb_parent(node);
332 + if (parent)
333 + continue;
334 + }
335 + break;
336 + }
337 + /* Case 3 - right rotate at sibling */
338 + sibling->rb_right = tmp1 = tmp2->rb_left;
339 + tmp2->rb_left = sibling;
340 + parent->rb_left = tmp2;
341 + if (tmp1)
342 + rb_set_parent_color(tmp1, sibling,
343 + RB_BLACK);
344 + augment_rotate(sibling, tmp2);
345 + tmp1 = sibling;
346 + sibling = tmp2;
347 + }
348 + /* Case 4 - left rotate at parent + color flips */
349 + parent->rb_left = tmp2 = sibling->rb_right;
350 + sibling->rb_right = parent;
351 + rb_set_parent_color(tmp1, sibling, RB_BLACK);
352 + if (tmp2)
353 + rb_set_parent(tmp2, parent);
354 + __rb_rotate_set_parents(parent, sibling, root,
355 + RB_BLACK);
356 + augment_rotate(parent, sibling);
357 + break;
358 + }
359 + }
360 + }
361 +
362 + /* Non-inline version for rb_erase_augmented() use */
363 + void __rb_erase_color(struct rb_node *parent, struct rb_root *root,
364 + void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
365 + {
366 + ____rb_erase_color(parent, root, augment_rotate);
367 + }
368 +
369 + /*
370 + * Non-augmented rbtree manipulation functions.
371 + *
372 + * We use dummy augmented callbacks here, and have the compiler optimize them
373 + * out of the rb_insert_color() and rb_erase() function definitions.
374 + */
375 +
376 + static inline void dummy_propagate(struct rb_node *node, struct rb_node *stop) {}
377 + static inline void dummy_copy(struct rb_node *old, struct rb_node *new) {}
378 + static inline void dummy_rotate(struct rb_node *old, struct rb_node *new) {}
379 +
380 + static const struct rb_augment_callbacks dummy_callbacks = {
381 + dummy_propagate, dummy_copy, dummy_rotate
382 + };
383 +
384 + void rb_insert_color(struct rb_node *node, struct rb_root *root)
385 + {
386 + __rb_insert(node, root, dummy_rotate);
387 + }
388 +
389 + void rb_erase(struct rb_node *node, struct rb_root *root)
390 + {
391 + struct rb_node *rebalance;
392 + rebalance = __rb_erase_augmented(node, root, &dummy_callbacks);
393 + if (rebalance)
394 + ____rb_erase_color(rebalance, root, dummy_rotate);
395 + }
396 +
397 + /*
398 + * Augmented rbtree manipulation functions.
399 + *
400 + * This instantiates the same __always_inline functions as in the non-augmented
401 + * case, but this time with user-defined callbacks.
402 + */
403 +
404 + void __rb_insert_augmented(struct rb_node *node, struct rb_root *root,
405 + void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
406 + {
407 + __rb_insert(node, root, augment_rotate);
408 + }
409 +
410 + /*
411 + * This function returns the first node (in sort order) of the tree.
412 + */
413 + struct rb_node *rb_first(const struct rb_root *root)
414 + {
415 + struct rb_node *n;
416 +
417 + n = root->rb_node;
418 + if (!n)
419 + return NULL;
420 + while (n->rb_left)
421 + n = n->rb_left;
422 + return n;
423 + }
424 +
425 + struct rb_node *rb_last(const struct rb_root *root)
426 + {
427 + struct rb_node *n;
428 +
429 + n = root->rb_node;
430 + if (!n)
431 + return NULL;
432 + while (n->rb_right)
433 + n = n->rb_right;
434 + return n;
435 + }
436 +
437 + struct rb_node *rb_next(const struct rb_node *node)
438 + {
439 + struct rb_node *parent;
440 +
441 + if (RB_EMPTY_NODE(node))
442 + return NULL;
443 +
444 + /*
445 + * If we have a right-hand child, go down and then left as far
446 + * as we can.
447 + */
448 + if (node->rb_right) {
449 + node = node->rb_right;
450 + while (node->rb_left)
451 + node=node->rb_left;
452 + return (struct rb_node *)node;
453 + }
454 +
455 + /*
456 + * No right-hand children. Everything down and left is smaller than us,
457 + * so any 'next' node must be in the general direction of our parent.
458 + * Go up the tree; any time the ancestor is a right-hand child of its
459 + * parent, keep going up. First time it's a left-hand child of its
460 + * parent, said parent is our 'next' node.
461 + */
462 + while ((parent = rb_parent(node)) && node == parent->rb_right)
463 + node = parent;
464 +
465 + return parent;
466 + }
467 +
468 + struct rb_node *rb_prev(const struct rb_node *node)
469 + {
470 + struct rb_node *parent;
471 +
472 + if (RB_EMPTY_NODE(node))
473 + return NULL;
474 +
475 + /*
476 + * If we have a left-hand child, go down and then right as far
477 + * as we can.
478 + */
479 + if (node->rb_left) {
480 + node = node->rb_left;
481 + while (node->rb_right)
482 + node=node->rb_right;
483 + return (struct rb_node *)node;
484 + }
485 +
486 + /*
487 + * No left-hand children. Go up till we find an ancestor which
488 + * is a right-hand child of its parent.
489 + */
490 + while ((parent = rb_parent(node)) && node == parent->rb_left)
491 + node = parent;
492 +
493 + return parent;
494 + }
495 +
496 + void rb_replace_node(struct rb_node *victim, struct rb_node *new,
497 + struct rb_root *root)
498 + {
499 + struct rb_node *parent = rb_parent(victim);
500 +
501 + /* Set the surrounding nodes to point to the replacement */
502 + __rb_change_child(victim, new, parent, root);
503 + if (victim->rb_left)
504 + rb_set_parent(victim->rb_left, new);
505 + if (victim->rb_right)
506 + rb_set_parent(victim->rb_right, new);
507 +
508 + /* Copy the pointers/colour from the victim to the replacement */
509 + *new = *victim;
510 + }
511 +
512 + static struct rb_node *rb_left_deepest_node(const struct rb_node *node)
513 + {
514 + for (;;) {
515 + if (node->rb_left)
516 + node = node->rb_left;
517 + else if (node->rb_right)
518 + node = node->rb_right;
519 + else
520 + return (struct rb_node *)node;
521 + }
522 + }
523 +
524 + struct rb_node *rb_next_postorder(const struct rb_node *node)
525 + {
526 + const struct rb_node *parent;
527 + if (!node)
528 + return NULL;
529 + parent = rb_parent(node);
530 +
531 + /* If we're sitting on node, we've already seen our children */
532 + if (parent && node == parent->rb_left && parent->rb_right) {
533 + /* If we are the parent's left node, go to the parent's right
534 + * node then all the way down to the left */
535 + return rb_left_deepest_node(parent->rb_right);
536 + } else
537 + /* Otherwise we are the parent's right node, and the parent
538 + * should be next */
539 + return (struct rb_node *)parent;
540 + }
541 +
542 + struct rb_node *rb_first_postorder(const struct rb_root *root)
543 + {
544 + if (!root->rb_node)
545 + return NULL;
546 +
547 + return rb_left_deepest_node(root->rb_node);
548 + }
+1 -1
tools/lib/traceevent/Makefile
···
268 268
269 269 clean:
270 270 $(call QUIET_CLEAN, libtraceevent) \
271 - $(RM) *.o *~ $(TARGETS) *.a *.so $(VERSION_FILES) .*.d \
271 + $(RM) *.o *~ $(TARGETS) *.a *.so $(VERSION_FILES) .*.d .*.cmd \
272 272 $(RM) TRACEEVENT-CFLAGS tags TAGS
273 273
274 274 PHONY += force plugins
+4 -4
tools/perf/MANIFEST
···
18 18 tools/arch/x86/include/asm/rmwcc.h
19 19 tools/lib/traceevent
20 20 tools/lib/api
21 + tools/lib/hweight.c
22 + tools/lib/rbtree.c
21 23 tools/lib/symbol/kallsyms.c
22 24 tools/lib/symbol/kallsyms.h
23 25 tools/lib/util/find_next_bit.c
···
46 44 tools/include/linux/list.h
47 45 tools/include/linux/log2.h
48 46 tools/include/linux/poison.h
47 + tools/include/linux/rbtree.h
48 + tools/include/linux/rbtree_augmented.h
49 49 tools/include/linux/types.h
50 50 include/asm-generic/bitops/arch_hweight.h
51 51 include/asm-generic/bitops/const_hweight.h
···
55 51 include/asm-generic/bitops/__fls.h
56 52 include/asm-generic/bitops/fls.h
57 53 include/linux/perf_event.h
58 - include/linux/rbtree.h
59 54 include/linux/list.h
60 55 include/linux/hash.h
61 56 include/linux/stringify.h
62 - lib/hweight.c
63 - lib/rbtree.c
64 57 include/linux/swab.h
65 58 arch/*/include/asm/unistd*.h
66 59 arch/*/include/uapi/asm/unistd*.h
···
66 65 arch/*/lib/memset*.S
67 66 include/linux/poison.h
68 67 include/linux/hw_breakpoint.h
69 - include/linux/rbtree_augmented.h
70 68 include/uapi/linux/perf_event.h
71 69 include/uapi/linux/const.h
72 70 include/uapi/linux/swab.h
+16 -3
tools/perf/Makefile.perf
···
109 109 $(Q)$(SHELL_PATH) util/PERF-VERSION-GEN $(OUTPUT)
110 110 $(Q)touch $(OUTPUT)PERF-VERSION-FILE
111 111
112 - CC = $(CROSS_COMPILE)gcc
113 - LD ?= $(CROSS_COMPILE)ld
114 - AR = $(CROSS_COMPILE)ar
112 + # Makefiles suck: This macro sets a default value of $(2) for the
113 + # variable named by $(1), unless the variable has been set by
114 + # environment or command line. This is necessary for CC and AR
115 + # because make sets default values, so the simpler ?= approach
116 + # won't work as expected.
117 + define allow-override
118 + $(if $(or $(findstring environment,$(origin $(1))),\
119 + $(findstring command line,$(origin $(1)))),,\
120 + $(eval $(1) = $(2)))
121 + endef
122 +
123 + # Allow setting CC and AR and LD, or setting CROSS_COMPILE as a prefix.
124 + $(call allow-override,CC,$(CROSS_COMPILE)gcc)
125 + $(call allow-override,AR,$(CROSS_COMPILE)ar)
126 + $(call allow-override,LD,$(CROSS_COMPILE)ld)
127 +
115 128 PKG_CONFIG = $(CROSS_COMPILE)pkg-config
116 129
117 130 RM = rm -f
+2 -2
tools/perf/builtin-stat.c
···
343 343 return 0;
344 344 }
345 345
346 - static void read_counters(bool close)
346 + static void read_counters(bool close_counters)
347 347 {
348 348 struct perf_evsel *counter;
349 349
···
354 354 if (process_counter(counter))
355 355 pr_warning("failed to process counter %s\n", counter->name);
356 356
357 - if (close) {
357 + if (close_counters) {
358 358 perf_evsel__close_fd(counter, perf_evsel__nr_cpus(counter),
359 359 thread_map__nr(evsel_list->threads));
360 360 }
+1 -1
tools/perf/ui/browsers/hists.c
···
48 48
49 49 static bool hist_browser__has_filter(struct hist_browser *hb)
50 50 {
51 - return hists__has_filter(hb->hists) || hb->min_pcnt;
51 + return hists__has_filter(hb->hists) || hb->min_pcnt || symbol_conf.has_filter;
52 52 }
53 53
54 54 static int hist_browser__get_folding(struct hist_browser *browser)
+2 -2
tools/perf/util/Build
···
139 139 $(call rule_mkdir)
140 140 $(call if_changed_dep,cc_o_c)
141 141
142 - $(OUTPUT)util/rbtree.o: ../../lib/rbtree.c FORCE
142 + $(OUTPUT)util/rbtree.o: ../lib/rbtree.c FORCE
143 143 $(call rule_mkdir)
144 144 $(call if_changed_dep,cc_o_c)
145 145
146 - $(OUTPUT)util/hweight.o: ../../lib/hweight.c FORCE
146 + $(OUTPUT)util/hweight.o: ../lib/hweight.c FORCE
147 147 $(call rule_mkdir)
148 148 $(call if_changed_dep,cc_o_c)
+5 -5
tools/perf/util/auxtrace.c
···
53 53 {
54 54 struct perf_event_mmap_page *pc = userpg;
55 55
56 - #if BITS_PER_LONG != 64 && !defined(HAVE_SYNC_COMPARE_AND_SWAP_SUPPORT)
57 - pr_err("Cannot use AUX area tracing mmaps\n");
58 - return -1;
59 - #endif
60 -
61 56 WARN_ONCE(mm->base, "Uninitialized auxtrace_mmap\n");
62 57
63 58 mm->userpg = userpg;
···
67 72 mm->base = NULL;
68 73 return 0;
69 74 }
75 +
76 + #if BITS_PER_LONG != 64 && !defined(HAVE_SYNC_COMPARE_AND_SWAP_SUPPORT)
77 + pr_err("Cannot use AUX area tracing mmaps\n");
78 + return -1;
79 + #endif
70 80
71 81 pc->aux_offset = mp->offset;
72 82 pc->aux_size = mp->len;
-16
tools/perf/util/include/linux/rbtree.h
···
1 - #ifndef __TOOLS_LINUX_PERF_RBTREE_H
2 - #define __TOOLS_LINUX_PERF_RBTREE_H
3 - #include <stdbool.h>
4 - #include "../../../../include/linux/rbtree.h"
5 -
6 - /*
7 - * Handy for checking that we are not deleting an entry that is
8 - * already in a list, found in block/{blk-throttle,cfq-iosched}.c,
9 - * probably should be moved to lib/rbtree.c...
10 - */
11 - static inline void rb_erase_init(struct rb_node *n, struct rb_root *root)
12 - {
13 - rb_erase(n, root);
14 - RB_CLEAR_NODE(n);
15 - }
16 - #endif /* __TOOLS_LINUX_PERF_RBTREE_H */
-2
tools/perf/util/include/linux/rbtree_augmented.h
···
1 - #include <stdbool.h>
2 - #include "../../../../include/linux/rbtree_augmented.h"
+2 -2
tools/perf/util/python-ext-sources
···
10 10 util/evlist.c
11 11 util/evsel.c
12 12 util/cpumap.c
13 - ../../lib/hweight.c
13 + ../lib/hweight.c
14 14 util/thread_map.c
15 15 util/util.c
16 16 util/xyarray.c
···
19 19 util/stat.c
20 20 util/strlist.c
21 21 util/trace-event.c
22 - ../../lib/rbtree.c
22 + ../lib/rbtree.c
23 23 util/string.c
+2
tools/perf/util/symbol.c
···
1911 1911 pr_err("problems parsing %s list\n", list_name);
1912 1912 return -1;
1913 1913 }
1914 +
1915 + symbol_conf.has_filter = true;
1914 1916 return 0;
1915 1917 }
1916 1918
+2 -1
tools/perf/util/symbol.h
···
105 105 demangle_kernel,
106 106 filter_relative,
107 107 show_hist_headers,
108 - branch_callstack;
108 + branch_callstack,
109 + has_filter;
109 110 const char *vmlinux_name,
110 111 *kallsyms_name,
111 112 *source_prefix,
+1 -2
tools/perf/util/thread_map.c
···
136 136 if (grow) {
137 137 struct thread_map *tmp;
138 138
139 - tmp = realloc(threads, (sizeof(*threads) +
140 - max_threads * sizeof(pid_t)));
139 + tmp = thread_map__realloc(threads, max_threads);
141 140 if (tmp == NULL)
142 141 goto out_free_namelist;
143 142
+3 -5
tools/perf/util/vdso.c
···
236 236 const char *file_name;
237 237 struct dso *dso;
238 238
239 - pthread_rwlock_wrlock(&machine->dsos.lock);
240 239 dso = __dsos__find(&machine->dsos, vdso_file->dso_name, true);
241 240 if (dso)
242 - goto out_unlock;
241 + goto out;
243 242
244 243 file_name = vdso__get_compat_file(vdso_file);
245 244 if (!file_name)
246 - goto out_unlock;
245 + goto out;
247 246
248 247 dso = __machine__addnew_vdso(machine, vdso_file->dso_name, file_name);
249 - out_unlock:
250 - pthread_rwlock_unlock(&machine->dsos.lock);
248 + out:
251 249 return dso;
252 250 }
253 251
+3
tools/testing/nvdimm/Kbuild
···
1 + ldflags-y += --wrap=ioremap_wt
2 + ldflags-y += --wrap=ioremap_wc
3 + ldflags-y += --wrap=devm_ioremap_nocache
1 4 ldflags-y += --wrap=ioremap_cache
2 5 ldflags-y += --wrap=ioremap_nocache
3 6 ldflags-y += --wrap=iounmap
+27
tools/testing/nvdimm/test/iomap.c
···
65 65 return fallback_fn(offset, size);
66 66 }
67 67
68 + void __iomem *__wrap_devm_ioremap_nocache(struct device *dev,
69 + resource_size_t offset, unsigned long size)
70 + {
71 + struct nfit_test_resource *nfit_res;
72 +
73 + rcu_read_lock();
74 + nfit_res = get_nfit_res(offset);
75 + rcu_read_unlock();
76 + if (nfit_res)
77 + return (void __iomem *) nfit_res->buf + offset
78 + - nfit_res->res->start;
79 + return devm_ioremap_nocache(dev, offset, size);
80 + }
81 + EXPORT_SYMBOL(__wrap_devm_ioremap_nocache);
82 +
68 83 void __iomem *__wrap_ioremap_cache(resource_size_t offset, unsigned long size)
69 84 {
70 85 return __nfit_test_ioremap(offset, size, ioremap_cache);
···
91 76 return __nfit_test_ioremap(offset, size, ioremap_nocache);
92 77 }
93 78 EXPORT_SYMBOL(__wrap_ioremap_nocache);
79 +
80 + void __iomem *__wrap_ioremap_wt(resource_size_t offset, unsigned long size)
81 + {
82 + return __nfit_test_ioremap(offset, size, ioremap_wt);
83 + }
84 + EXPORT_SYMBOL(__wrap_ioremap_wt);
85 +
86 + void __iomem *__wrap_ioremap_wc(resource_size_t offset, unsigned long size)
87 + {
88 + return __nfit_test_ioremap(offset, size, ioremap_wc);
89 + }
90 + EXPORT_SYMBOL(__wrap_ioremap_wc);
94 91
95 92 void __wrap_iounmap(volatile void __iomem *addr)
96 93 {
+49 -3
tools/testing/nvdimm/test/nfit.c
···
128 128 int num_pm;
129 129 void **dimm;
130 130 dma_addr_t *dimm_dma;
131 + void **flush;
132 + dma_addr_t *flush_dma;
131 133 void **label;
132 134 dma_addr_t *label_dma;
133 135 void **spa_set;
···
157 155 int i, rc;
158 156
159 157 if (!nfit_mem || !test_bit(cmd, &nfit_mem->dsm_mask))
160 - return -ENXIO;
158 + return -ENOTTY;
161 159
162 160 /* lookup label space for the given dimm */
163 161 for (i = 0; i < ARRAY_SIZE(handle); i++)
···
333 331 + sizeof(struct acpi_nfit_system_address) * NUM_SPA
334 332 + sizeof(struct acpi_nfit_memory_map) * NUM_MEM
335 333 + sizeof(struct acpi_nfit_control_region) * NUM_DCR
336 - + sizeof(struct acpi_nfit_data_region) * NUM_BDW;
334 + + sizeof(struct acpi_nfit_data_region) * NUM_BDW
335 + + sizeof(struct acpi_nfit_flush_address) * NUM_DCR;
337 336 int i;
338 337
339 338 t->nfit_buf = test_alloc(t, nfit_size, &t->nfit_dma);
···
359 356 if (!t->label[i])
360 357 return -ENOMEM;
361 358 sprintf(t->label[i], "label%d", i);
359 +
360 + t->flush[i] = test_alloc(t, 8, &t->flush_dma[i]);
361 + if (!t->flush[i])
362 + return -ENOMEM;
362 363 }
363 364
364 365 for (i = 0; i < NUM_DCR; i++) {
···
415 408 struct acpi_nfit_system_address *spa;
416 409 struct acpi_nfit_control_region *dcr;
417 410 struct acpi_nfit_data_region *bdw;
411 + struct acpi_nfit_flush_address *flush;
418 412 unsigned int offset;
419 413
420 414 nfit_test_init_header(nfit_buf, size);
···
839 831 bdw->capacity = DIMM_SIZE;
840 832 bdw->start_address = 0;
841 833
834 + offset = offset + sizeof(struct acpi_nfit_data_region) * 4;
835 + /* flush0 (dimm0) */
836 + flush = nfit_buf + offset;
837 + flush->header.type = ACPI_NFIT_TYPE_FLUSH_ADDRESS;
838 + flush->header.length = sizeof(struct acpi_nfit_flush_address);
839 + flush->device_handle = handle[0];
840 + flush->hint_count = 1;
841 + flush->hint_address[0] = t->flush_dma[0];
842 +
843 + /* flush1 (dimm1) */
844 + flush = nfit_buf + offset + sizeof(struct acpi_nfit_flush_address) * 1;
845 + flush->header.type = ACPI_NFIT_TYPE_FLUSH_ADDRESS;
846 + flush->header.length = sizeof(struct acpi_nfit_flush_address);
847 + flush->device_handle = handle[1];
848 + flush->hint_count = 1;
849 + flush->hint_address[0] = t->flush_dma[1];
850 +
851 + /* flush2 (dimm2) */
852 + flush = nfit_buf + offset + sizeof(struct acpi_nfit_flush_address) * 2;
853 + flush->header.type = ACPI_NFIT_TYPE_FLUSH_ADDRESS;
854 + flush->header.length = sizeof(struct acpi_nfit_flush_address);
855 + flush->device_handle = handle[2];
856 + flush->hint_count = 1;
857 + flush->hint_address[0] = t->flush_dma[2];
858 +
859 + /* flush3 (dimm3) */
860 + flush = nfit_buf + offset + sizeof(struct acpi_nfit_flush_address) * 3;
861 + flush->header.type = ACPI_NFIT_TYPE_FLUSH_ADDRESS;
862 + flush->header.length = sizeof(struct acpi_nfit_flush_address);
863 + flush->device_handle = handle[3];
864 + flush->hint_count = 1;
865 + flush->hint_address[0] = t->flush_dma[3];
866 +
842 867 acpi_desc = &t->acpi_desc;
843 868 set_bit(ND_CMD_GET_CONFIG_SIZE, &acpi_desc->dimm_dsm_force_en);
844 869 set_bit(ND_CMD_GET_CONFIG_DATA, &acpi_desc->dimm_dsm_force_en);
···
974 933 GFP_KERNEL);
975 934 nfit_test->dimm_dma = devm_kcalloc(dev, num, sizeof(dma_addr_t),
976 935 GFP_KERNEL);
936 + nfit_test->flush = devm_kcalloc(dev, num, sizeof(void *),
937 + GFP_KERNEL);
938 + nfit_test->flush_dma = devm_kcalloc(dev, num, sizeof(dma_addr_t),
939 + GFP_KERNEL);
977 940 nfit_test->label = devm_kcalloc(dev, num, sizeof(void *),
978 941 GFP_KERNEL);
979 942 nfit_test->label_dma = devm_kcalloc(dev, num,
···
988 943 sizeof(dma_addr_t), GFP_KERNEL);
989 944 if (nfit_test->dimm && nfit_test->dimm_dma && nfit_test->label
990 945 && nfit_test->label_dma && nfit_test->dcr
991 - && nfit_test->dcr_dma)
946 + && nfit_test->dcr_dma && nfit_test->flush
947 + && nfit_test->flush_dma)
992 948 /* pass */;
993 949 else
994 950 return -ENOMEM;
+5
virt/kvm/vfio.c
···
155 155 list_add_tail(&kvg->node, &kv->group_list);
156 156 kvg->vfio_group = vfio_group;
157 157
158 + kvm_arch_start_assignment(dev->kvm);
159 +
158 160 mutex_unlock(&kv->lock);
159 161
160 162 kvm_vfio_update_coherency(dev);
···
191 189 ret = 0;
192 190 break;
193 191 }
192 +
193 + kvm_arch_end_assignment(dev->kvm);
194 194
195 195 mutex_unlock(&kv->lock);
196 196
···
243 239 kvm_vfio_group_put_external_user(kvg->vfio_group);
244 240 list_del(&kvg->node);
245 241 kfree(kvg);
242 + kvm_arch_end_assignment(dev->kvm);
246 243 }
247 244
248 245 kvm_vfio_update_coherency(dev);