···
 		object is near the sensor, usually be observing
 		reflectivity of infrared or ultrasound emitted.
 		Often these sensors are unit less and as such conversion
-		to SI units is not possible. Where it is, the units should
-		be meters. If such a conversion is not possible, the reported
-		values should behave in the same way as a distance, i.e. lower
-		values indicate something is closer to the sensor.
+		to SI units is not possible. Higher proximity measurements
+		indicate closer objects, and vice versa.

 What:		/sys/.../iio:deviceX/in_illuminance_input
 What:		/sys/.../iio:deviceX/in_illuminance_raw
···
 Document Author
 ---------------

-Viresh Kumar <viresh.linux@gmail.com>, (c) 2010-2012 ST Microelectronics
+Viresh Kumar <vireshk@kernel.org>, (c) 2010-2012 ST Microelectronics
+17 -1
Documentation/arm/sunxi/README
···
   + User Manual
     http://dl.linux-sunxi.org/A20/A20%20User%20Manual%202013-03-22.pdf

-  - Allwinner A23
+  - Allwinner A23 (sun8i)
     + Datasheet
       http://dl.linux-sunxi.org/A23/A23%20Datasheet%20V1.0%2020130830.pdf
     + User Manual
···
     + User Manual
       http://dl.linux-sunxi.org/A31/A3x_release_document/A31s/IC/A31s%20User%20Manual%20%20V1.0%2020130322.pdf

+  - Allwinner A33 (sun8i)
+    + Datasheet
+      http://dl.linux-sunxi.org/A33/A33%20Datasheet%20release%201.1.pdf
+    + User Manual
+      http://dl.linux-sunxi.org/A33/A33%20user%20manual%20release%201.1.pdf
+
+  - Allwinner H3 (sun8i)
+    + Datasheet
+      http://dl.linux-sunxi.org/H3/Allwinner_H3_Datasheet_V1.0.pdf
+
 * Quad ARM Cortex-A15, Quad ARM Cortex-A7 based SoCs
   - Allwinner A80
     + Datasheet
       http://dl.linux-sunxi.org/A80/A80_Datasheet_Revision_1.0_0404.pdf
+
+* Octa ARM Cortex-A7 based SoCs
+  - Allwinner A83T
+    + Not Supported
+    + Datasheet
+      http://dl.linux-sunxi.org/A83T/A83T_datasheet_Revision_1.1.pdf
+6
Documentation/device-mapper/cache.txt
···
 no further I/O will be permitted and the status will just
 contain the string 'Fail'. The userspace recovery tools
 should then be used.
+needs_check : 'needs_check' if set, '-' if not set
+	A metadata operation has failed, resulting in the needs_check
+	flag being set in the metadata's superblock. The metadata
+	device must be deactivated and checked/repaired before the
+	cache can be made fully operational again. '-' indicates
+	needs_check is not set.

 Messages
 --------
+8 -1
Documentation/device-mapper/thin-provisioning.txt
···
 	underlying device. When this is enabled when loading the table,
 	it can get disabled if the underlying device doesn't support it.

-    ro|rw
+    ro|rw|out_of_data_space
 	If the pool encounters certain types of device failures it will
 	drop into a read-only metadata mode in which no changes to
 	the pool metadata (like allocating new blocks) are permitted.
···
 	'no_space_timeout' expires. The 'no_space_timeout' dm-thin-pool
 	module parameter can be used to change this timeout -- it
 	defaults to 60 seconds but may be disabled using a value of 0.
+
+    needs_check
+	A metadata operation has failed, resulting in the needs_check
+	flag being set in the metadata's superblock. The metadata
+	device must be deactivated and checked/repaired before the
+	thin-pool can be made fully operational again. '-' indicates
+	needs_check is not set.

 iii) Messages
···
 - edid: verbatim EDID data block describing attached display.
 - ddc: phandle describing the i2c bus handling the display data
   channel
-- port: A port node with endpoint definitions as defined in
+- port@[0-1]: Port nodes with endpoint definitions as defined in
   Documentation/devicetree/bindings/media/video-interfaces.txt.
+  Port 0 is the input port connected to the IPU display interface,
+  port 1 is the output port connected to a panel.

 example:
···
 	edid = [edid-data];
 	interface-pix-fmt = "rgb24";

-	port {
+	port@0 {
+		reg = <0>;
+
 		display_in: endpoint {
 			remote-endpoint = <&ipu_di0_disp0>;
+		};
+	};
+
+	port@1 {
+		reg = <1>;
+
+		display_out: endpoint {
+			remote-endpoint = <&panel_in>;
+		};
+	};
+};
+
+panel {
+	...
+
+	port {
+		panel_in: endpoint {
+			remote-endpoint = <&display_out>;
 		};
 	};
 };
···
 Required properties:
 - compatible	: Should be of the form "ti,emif-<ip-rev>" where <ip-rev>
 		  is the IP revision of the specific EMIF instance.
+		  For am437x should be ti,emif-am4372.

 - phy-type	: <u32> indicating the DDR phy type. Following are the
 		  allowed values
+8
Documentation/kbuild/makefiles.txt
···
 	    $(KBUILD_ARFLAGS) set by the top level Makefile to "D" (deterministic
 	    mode) if this option is supported by $(AR).

+    ARCH_CPPFLAGS, ARCH_AFLAGS, ARCH_CFLAGS	Overrides the kbuild defaults
+
+	These variables are appended to the KBUILD_CPPFLAGS,
+	KBUILD_AFLAGS, and KBUILD_CFLAGS, respectively, after the
+	top-level Makefile has set any other flags. This provides a
+	means for an architecture to override the defaults.
+
+
 --- 6.2 Add prerequisites to archheaders:

 	The archheaders: rule is used to generate header files that
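The ARCH_*FLAGS mechanism documented in the hunk above has two halves; a condensed sketch of how they fit together (the `-O3` value mirrors the ARC change elsewhere in this series, the `<arch>` path is illustrative):

```make
# arch/<arch>/Makefile (arch side): append an override to the new variable
ARCH_CFLAGS += -O3

# top-level Makefile (kbuild side): folded in after all other flags,
# just before the user-supplied KCFLAGS
KBUILD_CFLAGS += $(ARCH_CFLAGS) $(KCFLAGS)
```

Because the append happens last, the arch override wins over the generic defaults without the arch Makefile having to touch KBUILD_CFLAGS directly.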
+11 -2
Documentation/power/swsusp.txt
···

 Q: Can I suspend-to-disk using a swap partition under LVM?

-A: No. You can suspend successfully, but you'll not be able to
-resume. uswsusp should be able to work with LVM. See suspend.sf.net.
+A: Yes and No. You can suspend successfully, but the kernel will not be able
+to resume on its own. You need an initramfs that can recognize the resume
+situation, activate the logical volume containing the swap volume (but not
+touch any filesystems!), and eventually call
+
+echo -n "$major:$minor" > /sys/power/resume
+
+where $major and $minor are the respective major and minor device numbers of
+the swap volume.
+
+uswsusp works with LVM, too. See http://suspend.sourceforge.net/

 Q: I upgraded the kernel from 2.6.15 to 2.6.16. Both kernels were
 compiled with the similar configuration files. Anyway I found that
+77 -59
MAINTAINERS
···
 F:	drivers/input/touchscreen/ad7879.c

 ADDRESS SPACE LAYOUT RANDOMIZATION (ASLR)
-M:	Jiri Kosina <jkosina@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
 S:	Maintained

 ADM1025 HARDWARE MONITOR DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/adm1025
···
 F:	drivers/macintosh/therm_adt746x.c

 ADT7475 HARDWARE MONITOR DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/adt7475
···
 ADVANSYS SCSI DRIVER
 M:	Matthew Wilcox <matthew@wil.cx>
-M:	Hannes Reinecke <hare@suse.de>
+M:	Hannes Reinecke <hare@suse.com>
 L:	linux-scsi@vger.kernel.org
 S:	Maintained
 F:	Documentation/scsi/advansys.txt
···
 F:	drivers/scsi/pcmcia/aha152x*

 AIC7XXX / AIC79XX SCSI DRIVER
-M:	Hannes Reinecke <hare@suse.de>
+M:	Hannes Reinecke <hare@suse.com>
 L:	linux-scsi@vger.kernel.org
 S:	Maintained
 F:	drivers/scsi/aic7xxx/
···
 F:	sound/aoa/

 APM DRIVER
-M:	Jiri Kosina <jkosina@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
 S:	Odd fixes
 F:	arch/x86/kernel/apm_32.c
 F:	include/linux/apm_bios.h
···
 M:	Baruch Siach <baruch@tkos.co.il>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
+F:	arch/arm/boot/dts/cx92755*
 N:	digicolor

 ARM/EBSA110 MACHINE SUPPORT
···
 F:	arch/arm/mach-pxa/palmtc.c

 ARM/PALM TREO SUPPORT
-M:	Tomas Cech <sleep_walker@suse.cz>
+M:	Tomas Cech <sleep_walker@suse.com>
 L:	linux-arm-kernel@lists.infradead.org
 W:	http://hackndev.com
 S:	Maintained
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/boot/dts/vexpress*
+F:	arch/arm64/boot/dts/arm/vexpress*
 F:	arch/arm/mach-vexpress/
 F:	*/*/vexpress*
 F:	*/*/*/vexpress*
···
 BTRFS FILE SYSTEM
 M:	Chris Mason <clm@fb.com>
 M:	Josef Bacik <jbacik@fb.com>
-M:	David Sterba <dsterba@suse.cz>
+M:	David Sterba <dsterba@suse.com>
 L:	linux-btrfs@vger.kernel.org
 W:	http://btrfs.wiki.kernel.org/
 Q:	http://patchwork.kernel.org/project/linux-btrfs/list/
···
 F:	arch/powerpc/oprofile/*cell*
 F:	arch/powerpc/platforms/cell/

-CEPH DISTRIBUTED FILE SYSTEM CLIENT
+CEPH COMMON CODE (LIBCEPH)
+M:	Ilya Dryomov <idryomov@gmail.com>
 M:	"Yan, Zheng" <zyan@redhat.com>
 M:	Sage Weil <sage@redhat.com>
 L:	ceph-devel@vger.kernel.org
 W:	http://ceph.com/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
+T:	git git://github.com/ceph/ceph-client.git
 S:	Supported
-F:	Documentation/filesystems/ceph.txt
-F:	fs/ceph/
 F:	net/ceph/
 F:	include/linux/ceph/
 F:	include/linux/crush/
+
+CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH)
+M:	"Yan, Zheng" <zyan@redhat.com>
+M:	Sage Weil <sage@redhat.com>
+M:	Ilya Dryomov <idryomov@gmail.com>
+L:	ceph-devel@vger.kernel.org
+W:	http://ceph.com/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
+T:	git git://github.com/ceph/ceph-client.git
+S:	Supported
+F:	Documentation/filesystems/ceph.txt
+F:	fs/ceph/

 CERTIFIED WIRELESS USB (WUSB) SUBSYSTEM:
 L:	linux-usb@vger.kernel.org
···
 M:	Julia Lawall <Julia.Lawall@lip6.fr>
 M:	Gilles Muller <Gilles.Muller@lip6.fr>
 M:	Nicolas Palix <nicolas.palix@imag.fr>
-M:	Michal Marek <mmarek@suse.cz>
+M:	Michal Marek <mmarek@suse.com>
 L:	cocci@systeme.lip6.fr (moderated for non-subscribers)
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git misc
 W:	http://coccinelle.lip6.fr/
···

 CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
 M:	Johannes Weiner <hannes@cmpxchg.org>
-M:	Michal Hocko <mhocko@suse.cz>
+M:	Michal Hocko <mhocko@kernel.org>
 L:	cgroups@vger.kernel.org
 L:	linux-mm@kvack.org
 S:	Maintained
···
 F:	arch/x86/kernel/msr.c

 CPU POWER MONITORING SUBSYSTEM
-M:	Thomas Renninger <trenn@suse.de>
+M:	Thomas Renninger <trenn@suse.com>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
 F:	tools/power/cpupower/
···
 F:	drivers/net/ethernet/dec/tulip/dmfe.c

 DC390/AM53C974 SCSI driver
-M:	Hannes Reinecke <hare@suse.de>
+M:	Hannes Reinecke <hare@suse.com>
 L:	linux-scsi@vger.kernel.org
 S:	Maintained
 F:	drivers/scsi/am53c974.c
···
 S:	Maintained

 DISKQUOTA
-M:	Jan Kara <jack@suse.cz>
+M:	Jan Kara <jack@suse.com>
 S:	Maintained
 F:	Documentation/filesystems/quota.txt
 F:	fs/quota/
···
 F:	drivers/hwmon/dme1737.c

 DMI/SMBIOS SUPPORT
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 S:	Maintained
 T:	quilt http://jdelvare.nerim.net/devel/linux/jdelvare-dmi/
 F:	Documentation/ABI/testing/sysfs-firmware-dmi-tables
···
 F:	drivers/of/of_net.c

 EXT2 FILE SYSTEM
-M:	Jan Kara <jack@suse.cz>
+M:	Jan Kara <jack@suse.com>
 L:	linux-ext4@vger.kernel.org
 S:	Maintained
 F:	Documentation/filesystems/ext2.txt
···
 F:	include/linux/ext2*

 EXT3 FILE SYSTEM
-M:	Jan Kara <jack@suse.cz>
+M:	Jan Kara <jack@suse.com>
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Andreas Dilger <adilger.kernel@dilger.ca>
 L:	linux-ext4@vger.kernel.org
···
 F:	include/video/exynos_mipi*

 F71805F HARDWARE MONITORING DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/f71805f
···
 F:	drivers/block/rsxx/

 FLOPPY DRIVER
-M:	Jiri Kosina <jkosina@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/floppy.git
 S:	Odd fixes
 F:	drivers/block/floppy.c
···
 H8/300 ARCHITECTURE
 M:	Yoshinori Sato <ysato@users.sourceforge.jp>
-L:	uclinux-h8-devel@lists.sourceforge.jp
+L:	uclinux-h8-devel@lists.sourceforge.jp (moderated for non-subscribers)
 W:	http://uclinux-h8.sourceforge.jp
 T:	git git://git.sourceforge.jp/gitroot/uclinux-h8/linux.git
 S:	Maintained
···
 F:	drivers/media/usb/hackrf/

 HARDWARE MONITORING
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 M:	Guenter Roeck <linux@roeck-us.net>
 L:	lm-sensors@lm-sensors.org
 W:	http://www.lm-sensors.org/
···
 F:	arch/*/include/asm/suspend*.h

 HID CORE LAYER
-M:	Jiri Kosina <jkosina@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
 L:	linux-input@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git
 S:	Maintained
···
 F:	include/uapi/linux/hid*

 HID SENSOR HUB DRIVERS
-M:	Jiri Kosina <jkosina@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
 M:	Jonathan Cameron <jic23@kernel.org>
 M:	Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
 L:	linux-input@vger.kernel.org
···
 F:	tools/hv/

 I2C OVER PARALLEL PORT
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	Documentation/i2c/busses/i2c-parport
···
 F:	drivers/i2c/busses/i2c-parport-light.c

 I2C/SMBUS CONTROLLER DRIVERS FOR PC
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	Documentation/i2c/busses/i2c-ali1535
···
 F:	Documentation/i2c/busses/i2c-ismt

 I2C/SMBUS STUB DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	drivers/i2c/i2c-stub.c
···
 S:	Maintained

 I2C-TAOS-EVM DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	Documentation/i2c/busses/i2c-taos-evm
···
 F:	net/netfilter/ipvs/

 IPWIRELESS DRIVER
-M:	Jiri Kosina <jkosina@suse.cz>
-M:	David Sterba <dsterba@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
+M:	David Sterba <dsterba@suse.com>
 S:	Odd Fixes
 F:	drivers/tty/ipwireless/

···
 F:	drivers/isdn/hardware/eicon/

 IT87 HARDWARE MONITORING DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/it87
···

 JOURNALLING LAYER FOR BLOCK DEVICES (JBD)
 M:	Andrew Morton <akpm@linux-foundation.org>
-M:	Jan Kara <jack@suse.cz>
+M:	Jan Kara <jack@suse.com>
 L:	linux-ext4@vger.kernel.org
 S:	Maintained
 F:	fs/jbd/
···
 F:	fs/autofs4/

 KERNEL BUILD + files below scripts/ (unless maintained elsewhere)
-M:	Michal Marek <mmarek@suse.cz>
+M:	Michal Marek <mmarek@suse.com>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git for-next
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git rc-fixes
 L:	linux-kbuild@vger.kernel.org
···
 F:	arch/x86/kvm/svm.c

 KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC
-M:	Alexander Graf <agraf@suse.de>
+M:	Alexander Graf <agraf@suse.com>
 L:	kvm-ppc@vger.kernel.org
 W:	http://kvm.qumranet.com
 T:	git git://github.com/agraf/linux-2.6.git
···
 F:	include/linux/leds.h

 LEGACY EEPROM DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 S:	Maintained
 F:	Documentation/misc-devices/eeprom
 F:	drivers/misc/eeprom/eeprom.c
···
 F:	include/linux/libata.h

 LIBATA PATA ARASAN COMPACT FLASH CONTROLLER
-M:	Viresh Kumar <viresh.linux@gmail.com>
+M:	Viresh Kumar <vireshk@kernel.org>
 L:	linux-ide@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tj/libata.git
 S:	Maintained
···
 Q:	https://patchwork.kernel.org/project/linux-nvdimm/list/
 S:	Supported
 F:	drivers/nvdimm/pmem.c
+F:	include/linux/pmem.h

 LINUX FOR IBM pSERIES (RS/6000)
 M:	Paul Mackerras <paulus@au.ibm.com>
···
 W:	http://www.penguinppc.org/
 L:	linuxppc-dev@lists.ozlabs.org
 Q:	http://patchwork.ozlabs.org/project/linuxppc-dev/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
 S:	Supported
 F:	Documentation/powerpc/
 F:	arch/powerpc/
···
 LIVE PATCHING
 M:	Josh Poimboeuf <jpoimboe@redhat.com>
 M:	Seth Jennings <sjenning@redhat.com>
-M:	Jiri Kosina <jkosina@suse.cz>
-M:	Vojtech Pavlik <vojtech@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
+M:	Vojtech Pavlik <vojtech@suse.com>
 S:	Maintained
 F:	kernel/livepatch/
 F:	include/linux/livepatch.h
···
 F:	drivers/hwmon/lm73.c

 LM78 HARDWARE MONITOR DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/lm78
 F:	drivers/hwmon/lm78.c

 LM83 HARDWARE MONITOR DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/lm83
 F:	drivers/hwmon/lm83.c

 LM90 HARDWARE MONITOR DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/lm90
···
 F:	net/*/netfilter.c
 F:	net/*/netfilter/
 F:	net/netfilter/
+F:	net/bridge/br_netfilter*.c

 NETLABEL
 M:	Paul Moore <paul@paul-moore.com>
···
 F:	drivers/char/pc8736x_gpio.c

 PC87427 HARDWARE MONITORING DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/pc87427
···
 F:	drivers/pinctrl/samsung/

 PIN CONTROLLER - ST SPEAR
-M:	Viresh Kumar <viresh.linux@gmail.com>
+M:	Viresh Kumar <vireshk@kernel.org>
 L:	spear-devel@list.st.com
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 W:	http://www.st.com/spear
···
 F:	drivers/pinctrl/spear/

 PKTCDVD DRIVER
-M:	Jiri Kosina <jkosina@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
 S:	Maintained
 F:	drivers/block/pktcdvd.c
 F:	include/linux/pktcdvd.h
···
 M:	Ilya Dryomov <idryomov@gmail.com>
 M:	Sage Weil <sage@redhat.com>
 M:	Alex Elder <elder@kernel.org>
-M:	ceph-devel@vger.kernel.org
+L:	ceph-devel@vger.kernel.org
 W:	http://ceph.com/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
+T:	git git://github.com/ceph/ceph-client.git
 S:	Supported
+F:	Documentation/ABI/testing/sysfs-bus-rbd
 F:	drivers/block/rbd.c
 F:	drivers/block/rbd_types.h

···
 F:	drivers/tty/serial/

 SYNOPSYS DESIGNWARE DMAC DRIVER
-M:	Viresh Kumar <viresh.linux@gmail.com>
+M:	Viresh Kumar <vireshk@kernel.org>
 M:	Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 S:	Maintained
 F:	include/linux/dma/dw.h
···
 F:	drivers/mmc/host/sdhci-s3c*

 SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) ST SPEAR DRIVER
-M:	Viresh Kumar <viresh.linux@gmail.com>
+M:	Viresh Kumar <vireshk@kernel.org>
 L:	spear-devel@list.st.com
 L:	linux-mmc@vger.kernel.org
 S:	Maintained
···
 F:	drivers/hwmon/sch5627.c

 SMSC47B397 HARDWARE MONITOR DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	Documentation/hwmon/smsc47b397
···
 F:	drivers/media/pci/solo6x10/

 SOFTWARE RAID (Multiple Disks) SUPPORT
-M:	Neil Brown <neilb@suse.de>
+M:	Neil Brown <neilb@suse.com>
 L:	linux-raid@vger.kernel.org
 S:	Supported
 F:	drivers/md/
···

 SOUND
 M:	Jaroslav Kysela <perex@perex.cz>
-M:	Takashi Iwai <tiwai@suse.de>
+M:	Takashi Iwai <tiwai@suse.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 W:	http://www.alsa-project.org/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git
···
 F:	include/linux/compiler.h

 SPEAR PLATFORM SUPPORT
-M:	Viresh Kumar <viresh.linux@gmail.com>
+M:	Viresh Kumar <vireshk@kernel.org>
 M:	Shiraz Hashim <shiraz.linux.kernel@gmail.com>
 L:	spear-devel@list.st.com
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 F:	arch/arm/mach-spear/

 SPEAR CLOCK FRAMEWORK SUPPORT
-M:	Viresh Kumar <viresh.linux@gmail.com>
+M:	Viresh Kumar <vireshk@kernel.org>
 L:	spear-devel@list.st.com
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 W:	http://www.st.com/spear
···
 TTY LAYER
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-M:	Jiri Slaby <jslaby@suse.cz>
+M:	Jiri Slaby <jslaby@suse.com>
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty.git
 F:	Documentation/serial/
···
 F:	arch/m68k/include/asm/*_no.*

 UDF FILESYSTEM
-M:	Jan Kara <jack@suse.cz>
+M:	Jan Kara <jack@suse.com>
 S:	Maintained
 F:	Documentation/filesystems/udf.txt
 F:	fs/udf/
···
 F:	include/linux/usb/gadget*

 USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, ...)
-M:	Jiri Kosina <jkosina@suse.cz>
+M:	Jiri Kosina <jkosina@suse.com>
 L:	linux-usb@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git
 S:	Maintained
···
 F:	drivers/usb/host/uhci*

 USB "USBNET" DRIVER FRAMEWORK
-M:	Oliver Neukum <oneukum@suse.de>
+M:	Oliver Neukum <oneukum@suse.com>
 L:	netdev@vger.kernel.org
 W:	http://www.linux-usb.org/usbnet
 S:	Maintained
···
 F:	drivers/hwmon/w83793.c

 W83795 HARDWARE MONITORING DRIVER
-M:	Jean Delvare <jdelvare@suse.de>
+M:	Jean Delvare <jdelvare@suse.com>
 L:	lm-sensors@lm-sensors.org
 S:	Maintained
 F:	drivers/hwmon/w83795.c
+6 -5
Makefile
···
 VERSION = 4
 PATCHLEVEL = 2
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc3
 NAME = Hurr durr I'ma sheep

 # *DOCUMENTATION*
···
 include scripts/Makefile.kasan
 include scripts/Makefile.extrawarn

-# Add user supplied CPPFLAGS, AFLAGS and CFLAGS as the last assignments
-KBUILD_CPPFLAGS += $(KCPPFLAGS)
-KBUILD_AFLAGS += $(KAFLAGS)
-KBUILD_CFLAGS += $(KCFLAGS)
+# Add any arch overrides and user supplied CPPFLAGS, AFLAGS and CFLAGS as the
+# last assignments
+KBUILD_CPPFLAGS += $(ARCH_CPPFLAGS) $(KCPPFLAGS)
+KBUILD_AFLAGS += $(ARCH_AFLAGS) $(KAFLAGS)
+KBUILD_CFLAGS += $(ARCH_CFLAGS) $(KCFLAGS)

 # Use --build-id when available.
 LDFLAGS_BUILD_ID = $(patsubst -Wl$(comma)%,%,\
+4
arch/Kconfig
···
 config ARCH_THREAD_INFO_ALLOCATOR
 	bool

+# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
+config ARCH_WANTS_DYNAMIC_TASK_STRUCT
+	bool
+
 config HAVE_REGS_AND_STACK_ACCESS_API
 	bool
 	help
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_ALPHA_MM_ARCH_HOOKS_H
-#define _ASM_ALPHA_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_ALPHA_MM_ARCH_HOOKS_H */
+2 -1
arch/arc/Kconfig
···

 config ARC_CPU_750D
 	bool "ARC750D"
+	select ARC_CANT_LLSC
 	help
 	  Support for ARC750 core
···
 config ARC_HAS_LLSC
 	bool "Insn: LLOCK/SCOND (efficient atomic ops)"
 	default y
-	depends on !ARC_CPU_750D && !ARC_CANT_LLSC
+	depends on !ARC_CANT_LLSC

 config ARC_HAS_SWAPE
 	bool "Insn: SWAPE (endian-swap)"
+2 -1
arch/arc/Makefile
···

 ifndef CONFIG_CC_OPTIMIZE_FOR_SIZE
 # Generic build system uses -O2, we want -O3
-cflags-y += -O3
+# Note: No need to add to cflags-y as that happens anyways
+ARCH_CFLAGS += -O3
 endif

 # small data is default for elf32 tool-chain. If not usable, disable it
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_ARC_MM_ARCH_HOOKS_H
-#define _ASM_ARC_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_ARC_MM_ARCH_HOOKS_H */
+1 -1
arch/arc/include/asm/ptrace.h
···
 	long r25, r24, r23, r22, r21, r20, r19, r18, r17, r16, r15, r14, r13;
 };

-#define instruction_pointer(regs)	((regs)->ret)
+#define instruction_pointer(regs)	(unsigned long)((regs)->ret)
 #define profile_pc(regs)		instruction_pointer(regs)

 /* return 1 if user mode or 0 if kernel mode */
···
 noinline void slc_op(unsigned long paddr, unsigned long sz, const int op)
 {
 #ifdef CONFIG_ISA_ARCV2
+	/*
+	 * SLC is shared between all cores and concurrent aux operations from
+	 * multiple cores need to be serialized using a spinlock
+	 * A concurrent operation can be silently ignored and/or the old/new
+	 * operation can remain incomplete forever (lockup in SLC_CTRL_BUSY loop
+	 * below)
+	 */
+	static DEFINE_SPINLOCK(lock);
 	unsigned long flags;
 	unsigned int ctrl;

-	local_irq_save(flags);
+	spin_lock_irqsave(&lock, flags);

 	/*
 	 * The Region Flush operation is specified by CTRL.RGN_OP[11..9]
···

 	while (read_aux_reg(ARC_REG_SLC_CTRL) & SLC_CTRL_BUSY);

-	local_irq_restore(flags);
+	spin_unlock_irqrestore(&lock, flags);
 #endif
 }
+2 -2
arch/arc/mm/dma.c
···

 	/* This is kernel Virtual address (0x7000_0000 based) */
 	kvaddr = ioremap_nocache((unsigned long)paddr, size);
-	if (kvaddr != NULL)
-		memset(kvaddr, 0, size);
+	if (kvaddr == NULL)
+		return NULL;

 	/* This is bus address, platform dependent */
 	*dma_handle = (dma_addr_t)paddr;
+6
arch/arm/Kconfig
···
 config HIGHPTE
 	bool "Allocate 2nd-level pagetables from highmem"
 	depends on HIGHMEM
+	help
+	  The VM uses one page of physical memory for each page table.
+	  For systems with a lot of processes, this can use a lot of
+	  precious low memory, eventually leading to low memory being
+	  consumed by page tables. Setting this option will allow
+	  user-space 2nd level page tables to reside in high memory.

 config HW_PERF_EVENTS
 	bool "Enable hardware performance counter support for perf events"
+1 -1
arch/arm/Kconfig.debug
···

 config DEBUG_SET_MODULE_RONX
 	bool "Set loadable kernel module data as NX and text as RO"
-	depends on MODULES
+	depends on MODULES && MMU
 	---help---
 	  This option helps catch unintended modifications to loadable
 	  kernel module's text and read-only data. It also prevents execution
+4
arch/arm/boot/dts/am335x-boneblack.dts
···
 		status = "okay";
 	};
 };
+
+&rtc {
+	system-power-controller;
+};
···
 /*
  * DTS file for SPEAr1310 Evaluation Baord
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear1310.dtsi
···
 /*
  * DTS file for all SPEAr1310 SoCs
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear1340-evb.dts
···
 /*
  * DTS file for SPEAr1340 Evaluation Baord
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear1340.dtsi
···
 /*
  * DTS file for all SPEAr1340 SoCs
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear13xx.dtsi
···
 /*
  * DTS file for all SPEAr13xx SoCs
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear300-evb.dts
···
 /*
  * DTS file for SPEAr300 Evaluation Baord
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1 -1
arch/arm/boot/dts/spear300.dtsi
···
 /*
  * DTS file for SPEAr300 SoC
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1-1
arch/arm/boot/dts/spear310-evb.dts
···
 /*
  * DTS file for SPEAr310 Evaluation Baord
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1-1
arch/arm/boot/dts/spear310.dtsi
···
 /*
  * DTS file for SPEAr310 SoC
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1-1
arch/arm/boot/dts/spear320-evb.dts
···
 /*
  * DTS file for SPEAr320 Evaluation Baord
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1-1
arch/arm/boot/dts/spear320.dtsi
···
 /*
  * DTS file for SPEAr320 SoC
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+1-1
arch/arm/boot/dts/spear3xx.dtsi
···
 /*
  * DTS file for all SPEAr3xx SoCs
  *
- * Copyright 2012 Viresh Kumar <viresh.linux@gmail.com>
+ * Copyright 2012 Viresh Kumar <vireshk@kernel.org>
  *
  * The code contained herein is licensed under the GNU General Public
  * License. You may obtain a copy of the GNU General Public License
+7
arch/arm/boot/dts/ste-ccu8540.dts
···
 	model = "ST-Ericsson U8540 platform with Device Tree";
 	compatible = "st-ericsson,ccu8540", "st-ericsson,u8540";
 
+	/* This stablilizes the serial port enumeration */
+	aliases {
+		serial0 = &ux500_serial0;
+		serial1 = &ux500_serial1;
+		serial2 = &ux500_serial2;
+	};
+
 	memory@0 {
 		device_type = "memory";
 		reg = <0x20000000 0x1f000000>, <0xc0000000 0x3f000000>;
+7
arch/arm/boot/dts/ste-ccu9540.dts
···
 	model = "ST-Ericsson CCU9540 platform with Device Tree";
 	compatible = "st-ericsson,ccu9540", "st-ericsson,u9540";
 
+	/* This stablilizes the serial port enumeration */
+	aliases {
+		serial0 = &ux500_serial0;
+		serial1 = &ux500_serial1;
+		serial2 = &ux500_serial2;
+	};
+
 	memory {
 		reg = <0x00000000 0x20000000>;
 	};
···
 		status = "okay";
 	};
 
+	/* This UART is unused and thus left disabled */
 	uart@80121000 {
 		pinctrl-names = "default", "sleep";
 		pinctrl-0 = <&uart1_default_mode>;
 		pinctrl-1 = <&uart1_sleep_mode>;
-		status = "okay";
 	};
 
 	uart@80007000 {
+7
arch/arm/boot/dts/ste-hrefprev60-stuib.dts
···
 	model = "ST-Ericsson HREF (pre-v60) and ST UIB";
 	compatible = "st-ericsson,mop500", "st-ericsson,u8500";
 
+	/* This stablilizes the serial port enumeration */
+	aliases {
+		serial0 = &ux500_serial0;
+		serial1 = &ux500_serial1;
+		serial2 = &ux500_serial2;
+	};
+
 	soc {
 		/* Reset line for the BU21013 touchscreen */
 		i2c@80110000 {
+7
arch/arm/boot/dts/ste-hrefprev60-tvk.dts
···
 / {
 	model = "ST-Ericsson HREF (pre-v60) and TVK1281618 UIB";
 	compatible = "st-ericsson,mop500", "st-ericsson,u8500";
+
+	/* This stablilizes the serial port enumeration */
+	aliases {
+		serial0 = &ux500_serial0;
+		serial1 = &ux500_serial1;
+		serial2 = &ux500_serial2;
+	};
 };
+5
arch/arm/boot/dts/ste-hrefprev60.dtsi
···
 	};
 
 	soc {
+		/* Enable UART1 on this board */
+		uart@80121000 {
+			status = "okay";
+		};
+
 		i2c@80004000 {
 			tps61052@33 {
 				compatible = "tps61052";
+7
arch/arm/boot/dts/ste-hrefv60plus-stuib.dts
···
 	model = "ST-Ericsson HREF (v60+) and ST UIB";
 	compatible = "st-ericsson,hrefv60+", "st-ericsson,u8500";
 
+	/* This stablilizes the serial port enumeration */
+	aliases {
+		serial0 = &ux500_serial0;
+		serial1 = &ux500_serial1;
+		serial2 = &ux500_serial2;
+	};
+
 	soc {
 		/* Reset line for the BU21013 touchscreen */
 		i2c@80110000 {
+7
arch/arm/boot/dts/ste-hrefv60plus-tvk.dts
···
 / {
 	model = "ST-Ericsson HREF (v60+) and TVK1281618 UIB";
 	compatible = "st-ericsson,hrefv60+", "st-ericsson,u8500";
+
+	/* This stablilizes the serial port enumeration */
+	aliases {
+		serial0 = &ux500_serial0;
+		serial1 = &ux500_serial1;
+		serial2 = &ux500_serial2;
+	};
 };
···
  * The _caller variety takes a __builtin_return_address(0) value for
  * /proc/vmalloc to use - and should only be used in non-inline functions.
  */
-extern void __iomem *__arm_ioremap_pfn_caller(unsigned long, unsigned long,
-	size_t, unsigned int, void *);
 extern void __iomem *__arm_ioremap_caller(phys_addr_t, size_t, unsigned int,
 	void *);
-
 extern void __iomem *__arm_ioremap_pfn(unsigned long, unsigned long, size_t, unsigned int);
-extern void __iomem *__arm_ioremap(phys_addr_t, size_t, unsigned int);
 extern void __iomem *__arm_ioremap_exec(phys_addr_t, size_t, bool cached);
 extern void __iounmap(volatile void __iomem *addr);
-extern void __arm_iounmap(volatile void __iomem *addr);
 
 extern void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t,
 	unsigned int, void *);
···
 static inline void memset_io(volatile void __iomem *dst, unsigned c,
 	size_t count)
 {
-	memset((void __force *)dst, c, count);
+	extern void mmioset(void *, unsigned int, size_t);
+	mmioset((void __force *)dst, c, count);
 }
 #define memset_io(dst,c,count) memset_io(dst,c,count)
 
 static inline void memcpy_fromio(void *to, const volatile void __iomem *from,
 	size_t count)
 {
-	memcpy(to, (const void __force *)from, count);
+	extern void mmiocpy(void *, const void *, size_t);
+	mmiocpy(to, (const void __force *)from, count);
 }
 #define memcpy_fromio(to,from,count) memcpy_fromio(to,from,count)
 
 static inline void memcpy_toio(volatile void __iomem *to, const void *from,
 	size_t count)
 {
-	memcpy((void __force *)to, from, count);
+	extern void mmiocpy(void *, const void *, size_t);
+	mmiocpy((void __force *)to, from, count);
 }
 #define memcpy_toio(to,from,count) memcpy_toio(to,from,count)
···
 #endif	/* readl */
 
 /*
- * ioremap and friends.
+ * ioremap() and friends.
  *
- * ioremap takes a PCI memory address, as specified in
- * Documentation/io-mapping.txt.
+ * ioremap() takes a resource address, and size.  Due to the ARM memory
+ * types, it is important to use the correct ioremap() function as each
+ * mapping has specific properties.
  *
+ * Function		Memory type	Cacheability	Cache hint
+ * ioremap()		Device		n/a		n/a
+ * ioremap_nocache()	Device		n/a		n/a
+ * ioremap_cache()	Normal		Writeback	Read allocate
+ * ioremap_wc()		Normal		Non-cacheable	n/a
+ * ioremap_wt()		Normal		Non-cacheable	n/a
+ *
+ * All device mappings have the following properties:
+ * - no access speculation
+ * - no repetition (eg, on return from an exception)
+ * - number, order and size of accesses are maintained
+ * - unaligned accesses are "unpredictable"
+ * - writes may be delayed before they hit the endpoint device
+ *
+ * ioremap_nocache() is the same as ioremap() as there are too many device
+ * drivers using this for device registers, and documentation which tells
+ * people to use it for such for this to be any different.  This is not a
+ * safe fallback for memory-like mappings, or memory regions where the
+ * compiler may generate unaligned accesses - eg, via inlining its own
+ * memcpy.
+ *
+ * All normal memory mappings have the following properties:
+ * - reads can be repeated with no side effects
+ * - repeated reads return the last value written
+ * - reads can fetch additional locations without side effects
+ * - writes can be repeated (in certain cases) with no side effects
+ * - writes can be merged before accessing the target
+ * - unaligned accesses can be supported
+ * - ordering is not guaranteed without explicit dependencies or barrier
+ *   instructions
+ * - writes may be delayed before they hit the endpoint memory
+ *
+ * The cache hint is only a performance hint: CPUs may alias these hints.
+ * Eg, a CPU not implementing read allocate but implementing write allocate
+ * will provide a write allocate mapping instead.
  */
-#define ioremap(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE)
-#define ioremap_nocache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE)
-#define ioremap_cache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
-#define ioremap_wc(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE_WC)
-#define ioremap_wt(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE)
-#define iounmap				__arm_iounmap
+void __iomem *ioremap(resource_size_t res_cookie, size_t size);
+#define ioremap ioremap
+#define ioremap_nocache ioremap
+
+void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size);
+#define ioremap_cache ioremap_cache
+
+void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size);
+#define ioremap_wc ioremap_wc
+#define ioremap_wt ioremap_wc
+
+void iounmap(volatile void __iomem *iomem_cookie);
+#define iounmap iounmap
 
 /*
  * io{read,write}{16,32}be() macros
+2-2
arch/arm/include/asm/memory.h
···
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
-#define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
+#define pfn_to_kaddr(pfn)	__va((phys_addr_t)(pfn) << PAGE_SHIFT)
 
 extern phys_addr_t (*arch_virt_to_idmap)(unsigned long x);
 
···
  */
 static inline phys_addr_t __virt_to_idmap(unsigned long x)
 {
-	if (arch_virt_to_idmap)
+	if (IS_ENABLED(CONFIG_MMU) && arch_virt_to_idmap)
 		return arch_virt_to_idmap(x);
 	else
 		return __virt_to_phys(x);
-15
arch/arm/include/asm/mm-arch-hooks.h
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_ARM_MM_ARCH_HOOKS_H
-#define _ASM_ARM_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_ARM_MM_ARCH_HOOKS_H */
+30-1
arch/arm/include/asm/pgtable-2level.h
···
 
 /*
  * These are the memory types, defined to be compatible with
- * pre-ARMv6 CPUs cacheable and bufferable bits:   XXCB
+ * pre-ARMv6 CPUs cacheable and bufferable bits: n/a,n/a,C,B
+ * ARMv6+ without TEX remapping, they are a table index.
+ * ARMv6+ with TEX remapping, they correspond to n/a,TEX(0),C,B
+ *
+ * MT type		Pre-ARMv6	ARMv6+ type / cacheable status
+ * UNCACHED		Uncached	Strongly ordered
+ * BUFFERABLE		Bufferable	Normal memory / non-cacheable
+ * WRITETHROUGH		Writethrough	Normal memory / write through
+ * WRITEBACK		Writeback	Normal memory / write back, read alloc
+ * MINICACHE		Minicache	N/A
+ * WRITEALLOC		Writeback	Normal memory / write back, write alloc
+ * DEV_SHARED		Uncached	Device memory (shared)
+ * DEV_NONSHARED	Uncached	Device memory (non-shared)
+ * DEV_WC		Bufferable	Normal memory / non-cacheable
+ * DEV_CACHED		Writeback	Normal memory / write back, read alloc
+ * VECTORS		Variable	Normal memory / variable
+ *
+ * All normal memory mappings have the following properties:
+ * - reads can be repeated with no side effects
+ * - repeated reads return the last value written
+ * - reads can fetch additional locations without side effects
+ * - writes can be repeated (in certain cases) with no side effects
+ * - writes can be merged before accessing the target
+ * - unaligned accesses can be supported
+ *
+ * All device mappings have the following properties:
+ * - no access speculation
+ * - no repetition (eg, on return from an exception)
+ * - number, order and size of accesses are maintained
+ * - unaligned accesses are "unpredictable"
  */
 #define L_PTE_MT_UNCACHED	(_AT(pteval_t, 0x00) << 2)	/* 0000 */
 #define L_PTE_MT_BUFFERABLE	(_AT(pteval_t, 0x01) << 2)	/* 0001 */
···
 		if (arch_find_n_match_cpu_physical_id(dn, cpu, NULL))
 			break;
 
-	of_node_put(dn);
 	if (cpu >= nr_cpu_ids) {
 		pr_warn("Failed to find logical CPU for %s\n",
 			dn->name);
+		of_node_put(dn);
 		break;
 	}
+	of_node_put(dn);
 
 	irqs[i] = cpu;
 	cpumask_set_cpu(cpu, &pmu->supported_cpus);
+1-1
arch/arm/kernel/reboot.c
···
 	flush_cache_all();
 
 	/* Switch to the identity mapping. */
-	phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
+	phys_reset = (phys_reset_t)(unsigned long)virt_to_idmap(cpu_reset);
 	phys_reset((unsigned long)addr);
 
 	/* Should never get here. */
···
 /*
- * RTC I/O Bridge interfaces for CSR SiRFprimaII
+ * RTC I/O Bridge interfaces for CSR SiRFprimaII/atlas7
  * ARM access the registers of SYSRTC, GPSRTC and PWRC through this module
  *
 * Copyright (c) 2011 Cambridge Silicon Radio Limited, a CSR plc group company.
···
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/io.h>
+#include <linux/regmap.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_device.h>
···
 {
 	unsigned long flags, val;
 
+	/* TODO: add hwspinlock to sync with M3 */
 	spin_lock_irqsave(&rtciobrg_lock, flags);
 
 	val = __sirfsoc_rtc_iobrg_readl(addr);
···
 {
 	unsigned long flags;
 
+	/* TODO: add hwspinlock to sync with M3 */
 	spin_lock_irqsave(&rtciobrg_lock, flags);
 
 	sirfsoc_rtc_iobrg_pre_writel(val, addr);
···
 	spin_unlock_irqrestore(&rtciobrg_lock, flags);
 }
 EXPORT_SYMBOL_GPL(sirfsoc_rtc_iobrg_writel);
+
+
+static int regmap_iobg_regwrite(void *context, unsigned int reg,
+				unsigned int val)
+{
+	sirfsoc_rtc_iobrg_writel(val, reg);
+	return 0;
+}
+
+static int regmap_iobg_regread(void *context, unsigned int reg,
+			       unsigned int *val)
+{
+	*val = (u32)sirfsoc_rtc_iobrg_readl(reg);
+	return 0;
+}
+
+static struct regmap_bus regmap_iobg = {
+	.reg_write = regmap_iobg_regwrite,
+	.reg_read = regmap_iobg_regread,
+};
+
+/**
+ * devm_regmap_init_iobg(): Initialise managed register map
+ *
+ * @iobg: Device that will be interacted with
+ * @config: Configuration for register map
+ *
+ * The return value will be an ERR_PTR() on error or a valid pointer
+ * to a struct regmap.  The regmap will be automatically freed by the
+ * device management code.
+ */
+struct regmap *devm_regmap_init_iobg(struct device *dev,
+				     const struct regmap_config *config)
+{
+	const struct regmap_bus *bus = &regmap_iobg;
+
+	return devm_regmap_init(dev, bus, dev, config);
+}
+EXPORT_SYMBOL_GPL(devm_regmap_init_iobg);
 
 static const struct of_device_id rtciobrg_ids[] = {
 	{ .compatible = "sirf,prima2-rtciobg" },
···
 }
 postcore_initcall(sirfsoc_rtciobrg_init);
 
-MODULE_AUTHOR("Zhiwu Song <zhiwu.song@csr.com>, "
-	"Barry Song <baohua.song@csr.com>");
+MODULE_AUTHOR("Zhiwu Song <zhiwu.song@csr.com>");
+MODULE_AUTHOR("Barry Song <baohua.song@csr.com>");
 MODULE_DESCRIPTION("CSR SiRFprimaII rtc io bridge");
 MODULE_LICENSE("GPL v2");
···
  *
  * Copyright (C) 2009-2012 ST Microelectronics
  * Rajeev Kumar <rajeev-dlh.kumar@st.com>
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/include/mach/irqs.h
···
  *
  * Copyright (C) 2009-2012 ST Microelectronics
  * Rajeev Kumar <rajeev-dlh.kumar@st.com>
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/include/mach/misc_regs.h
···
  * Miscellaneous registers definitions for SPEAr3xx machine family
  *
  * Copyright (C) 2009 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/include/mach/spear.h
···
  *
  * Copyright (C) 2009,2012 ST Microelectronics
  * Rajeev Kumar<rajeev-dlh.kumar@st.com>
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/include/mach/uncompress.h
···
  * Serial port stubs for kernel decompress status messages
  *
  * Copyright (C) 2009 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/pl080.c
···
  * DMAC pl080 definitions for SPEAr platform
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/pl080.h
···
  * DMAC pl080 definitions for SPEAr platform
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/restart.c
···
  * SPEAr platform specific restart functions
  *
  * Copyright (C) 2009 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/spear1310.c
···
  * SPEAr1310 machine source file
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/spear1340.c
···
  * SPEAr1340 machine source file
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/spear13xx.c
···
  * SPEAr13XX machines common source file
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/spear300.c
···
  * SPEAr300 machine source file
  *
  * Copyright (C) 2009-2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/spear310.c
···
  * SPEAr310 machine source file
  *
  * Copyright (C) 2009-2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/spear320.c
···
  * SPEAr320 machine source file
  *
  * Copyright (C) 2009-2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
arch/arm/mach-spear/spear3xx.c
···
  * SPEAr3XX machines common source file
  *
  * Copyright (C) 2009-2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
···
 #include <asm/psci.h>
 #include <asm/smp_plat.h>
 
+/* Macros for consistency checks of the GICC subtable of MADT */
+#define ACPI_MADT_GICC_LENGTH	\
+	(acpi_gbl_FADT.header.revision < 6 ? 76 : 80)
+
+#define BAD_MADT_GICC_ENTRY(entry, end)					\
+	(!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) || \
+	 (entry)->header.length != ACPI_MADT_GICC_LENGTH)
+
 /* Basic configuration for ACPI */
 #ifdef CONFIG_ACPI
 /* ACPI table mapping after acpi_gbl_permanent_mmap is set */
-15
arch/arm64/include/asm/mm-arch-hooks.h
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_ARM64_MM_ARCH_HOOKS_H
-#define _ASM_ARM64_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_ARM64_MM_ARCH_HOOKS_H */
+2-2
arch/arm64/kernel/entry.S
···
 	// TODO: add support for undefined instructions in kernel mode
 	enable_dbg
 	mov	x0, sp
+	mov	x2, x1
 	mov	x1, #BAD_SYNC
-	mrs	x2, esr_el1
 	b	bad_mode
 ENDPROC(el1_sync)
···
 	ct_user_exit
 	mov	x0, sp
 	mov	x1, #BAD_SYNC
-	mrs	x2, esr_el1
+	mov	x2, x25
 	bl	bad_mode
 	b	ret_to_user
 ENDPROC(el0_sync)
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_AVR32_MM_ARCH_HOOKS_H
-#define _ASM_AVR32_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_AVR32_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_BLACKFIN_MM_ARCH_HOOKS_H
-#define _ASM_BLACKFIN_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_BLACKFIN_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_C6X_MM_ARCH_HOOKS_H
-#define _ASM_C6X_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_C6X_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_CRIS_MM_ARCH_HOOKS_H
-#define _ASM_CRIS_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_CRIS_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_FRV_MM_ARCH_HOOKS_H
-#define _ASM_FRV_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_FRV_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_HEXAGON_MM_ARCH_HOOKS_H
-#define _ASM_HEXAGON_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_HEXAGON_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_IA64_MM_ARCH_HOOKS_H
-#define _ASM_IA64_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_IA64_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_M32R_MM_ARCH_HOOKS_H
-#define _ASM_M32R_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_M32R_MM_ARCH_HOOKS_H */
+27-22
arch/m68k/Kconfig.cpu
···
 
 if COLDFIRE
 
+choice
+	prompt "ColdFire SoC type"
+	default M520x
+	help
+	  Select the type of ColdFire System-on-Chip (SoC) that you want
+	  to build for.
+
 config M5206
 	bool "MCF5206"
 	depends on !MMU
···
 	help
 	  Freescale (Motorola) Coldfire 5251/5253 processor support.
 
-config M527x
-	bool
-
 config M5271
 	bool "MCF5271"
 	depends on !MMU
···
 	help
 	  Motorola ColdFire 5307 processor support.
 
-config M53xx
-	bool
-
 config M532x
 	bool "MCF532x"
 	depends on !MMU
···
 	select HAVE_MBAR
 	help
 	  Motorola ColdFire 5407 processor support.
-
-config M54xx
-	bool
 
 config M547x
 	bool "MCF547x"
···
 	select HAVE_CACHE_CB
 	help
 	  Freescale Coldfire 54410/54415/54416/54417/54418 processor support.
+
+endchoice
+
+config M527x
+	bool
+
+config M53xx
+	bool
+
+config M54xx
+	bool
 
 endif # COLDFIRE
···
 config HAVE_IPSBAR
 	bool
 
-config CLOCK_SET
-	bool "Enable setting the CPU clock frequency"
-	depends on COLDFIRE
-	default n
-	help
-	  On some CPU's you do not need to know what the core CPU clock
-	  frequency is. On these you can disable clock setting. On some
-	  traditional 68K parts, and on all ColdFire parts you need to set
-	  the appropriate CPU clock frequency. On these devices many of the
-	  onboard peripherals derive their timing from the master CPU clock
-	  frequency.
-
 config CLOCK_FREQ
 	int "Set the core clock frequency"
+	default "25000000" if M5206
+	default "54000000" if M5206e
+	default "166666666" if M520x
+	default "140000000" if M5249
+	default "150000000" if M527x || M523x
+	default "90000000" if M5307
+	default "50000000" if M5407
+	default "266000000" if M54xx
 	default "66666666"
-	depends on CLOCK_SET
+	depends on COLDFIRE
 	help
 	  Define the CPU clock frequency in use. This is the core clock
 	  frequency, it may or may not be the same as the external clock
+3-19
arch/m68k/configs/m5208evb_defconfig
···
-# CONFIG_MMU is not set
-CONFIG_EXPERIMENTAL=y
 CONFIG_LOG_BUF_SHIFT=14
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_EXPERT=y
 # CONFIG_KALLSYMS is not set
-# CONFIG_HOTPLUG is not set
 # CONFIG_FUTEX is not set
 # CONFIG_EPOLL is not set
 # CONFIG_SIGNALFD is not set
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_M520x=y
-CONFIG_CLOCK_SET=y
-CONFIG_CLOCK_FREQ=166666666
-CONFIG_CLOCK_DIV=2
-CONFIG_M5208EVB=y
+# CONFIG_MMU is not set
 # CONFIG_4KSTACKS is not set
 CONFIG_RAMBASE=0x40000000
 CONFIG_RAMSIZE=0x2000000
 CONFIG_VECTORBASE=0x40000000
 CONFIG_KERNELBASE=0x40020000
-CONFIG_RAM16BIT=y
 CONFIG_BINFMT_FLAT=y
 CONFIG_NET=y
 CONFIG_PACKET=y
···
 # CONFIG_IPV6 is not set
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_RAM=y
 CONFIG_MTD_UCLINUX=y
 CONFIG_BLK_DEV_RAM=y
-# CONFIG_MISC_DEVICES is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_FEC=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 # CONFIG_INPUT is not set
 # CONFIG_SERIO is not set
 # CONFIG_VT is not set
+# CONFIG_UNIX98_PTYS is not set
 CONFIG_SERIAL_MCF=y
 CONFIG_SERIAL_MCF_BAUDRATE=115200
 CONFIG_SERIAL_MCF_CONSOLE=y
-# CONFIG_UNIX98_PTYS is not set
 # CONFIG_HW_RANDOM is not set
 # CONFIG_HWMON is not set
 # CONFIG_USB_SUPPORT is not set
···
 CONFIG_ROMFS_FS=y
 CONFIG_ROMFS_BACKED_BY_MTD=y
 # CONFIG_NETWORK_FILESYSTEMS is not set
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
-CONFIG_SYSCTL_SYSCALL_CHECK=y
-CONFIG_FULLDEBUG=y
 CONFIG_BOOTPARAM=y
 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0"
+CONFIG_FULLDEBUG=y
+2-15
arch/m68k/configs/m5249evb_defconfig
···
-# CONFIG_MMU is not set
-CONFIG_EXPERIMENTAL=y
 CONFIG_LOG_BUF_SHIFT=14
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_EXPERT=y
 # CONFIG_KALLSYMS is not set
-# CONFIG_HOTPLUG is not set
 # CONFIG_FUTEX is not set
 # CONFIG_EPOLL is not set
 # CONFIG_SIGNALFD is not set
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
+# CONFIG_MMU is not set
 CONFIG_M5249=y
-CONFIG_CLOCK_SET=y
-CONFIG_CLOCK_FREQ=140000000
-CONFIG_CLOCK_DIV=2
 CONFIG_M5249C3=y
 CONFIG_RAMBASE=0x00000000
 CONFIG_RAMSIZE=0x00800000
···
 # CONFIG_IPV6 is not set
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_RAM=y
 CONFIG_MTD_UCLINUX=y
 CONFIG_BLK_DEV_RAM=y
-# CONFIG_MISC_DEVICES is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 CONFIG_PPP=y
 # CONFIG_INPUT is not set
 # CONFIG_SERIO is not set
 # CONFIG_VT is not set
+# CONFIG_UNIX98_PTYS is not set
 CONFIG_SERIAL_MCF=y
 CONFIG_SERIAL_MCF_CONSOLE=y
-# CONFIG_UNIX98_PTYS is not set
 # CONFIG_HWMON is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_EXT2_FS=y
···
 CONFIG_ROMFS_FS=y
 CONFIG_ROMFS_BACKED_BY_MTD=y
 # CONFIG_NETWORK_FILESYSTEMS is not set
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
 CONFIG_BOOTPARAM=y
 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0"
-# CONFIG_CRC32 is not set
+2-12
arch/m68k/configs/m5272c3_defconfig

···
-# CONFIG_MMU is not set
-CONFIG_EXPERIMENTAL=y
 CONFIG_LOG_BUF_SHIFT=14
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_EXPERT=y
 # CONFIG_KALLSYMS is not set
-# CONFIG_HOTPLUG is not set
 # CONFIG_FUTEX is not set
 # CONFIG_EPOLL is not set
 # CONFIG_SIGNALFD is not set
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
+# CONFIG_MMU is not set
 CONFIG_M5272=y
-CONFIG_CLOCK_SET=y
 CONFIG_M5272C3=y
 CONFIG_RAMBASE=0x00000000
 CONFIG_RAMSIZE=0x00800000
···
 # CONFIG_IPV6 is not set
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_RAM=y
 CONFIG_MTD_UCLINUX=y
 CONFIG_BLK_DEV_RAM=y
-# CONFIG_MISC_DEVICES is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_FEC=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 # CONFIG_INPUT is not set
 # CONFIG_SERIO is not set
 # CONFIG_VT is not set
+# CONFIG_UNIX98_PTYS is not set
 CONFIG_SERIAL_MCF=y
 CONFIG_SERIAL_MCF_CONSOLE=y
-# CONFIG_UNIX98_PTYS is not set
 # CONFIG_HWMON is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_EXT2_FS=y
···
 CONFIG_ROMFS_FS=y
 CONFIG_ROMFS_BACKED_BY_MTD=y
 # CONFIG_NETWORK_FILESYSTEMS is not set
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
 CONFIG_BOOTPARAM=y
 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0"
+2-17
arch/m68k/configs/m5275evb_defconfig

···
-# CONFIG_MMU is not set
-CONFIG_EXPERIMENTAL=y
 CONFIG_LOG_BUF_SHIFT=14
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_EXPERT=y
 # CONFIG_KALLSYMS is not set
-# CONFIG_HOTPLUG is not set
 # CONFIG_FUTEX is not set
 # CONFIG_EPOLL is not set
 # CONFIG_SIGNALFD is not set
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
+# CONFIG_MMU is not set
 CONFIG_M5275=y
-CONFIG_CLOCK_SET=y
-CONFIG_CLOCK_FREQ=150000000
-CONFIG_CLOCK_DIV=2
-CONFIG_M5275EVB=y
 # CONFIG_4KSTACKS is not set
 CONFIG_RAMBASE=0x00000000
 CONFIG_RAMSIZE=0x00000000
···
 # CONFIG_IPV6 is not set
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_RAM=y
 CONFIG_MTD_UCLINUX=y
 CONFIG_BLK_DEV_RAM=y
-# CONFIG_MISC_DEVICES is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
 CONFIG_FEC=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 CONFIG_PPP=y
 # CONFIG_INPUT is not set
 # CONFIG_SERIO is not set
 # CONFIG_VT is not set
+# CONFIG_UNIX98_PTYS is not set
 CONFIG_SERIAL_MCF=y
 CONFIG_SERIAL_MCF_CONSOLE=y
-# CONFIG_UNIX98_PTYS is not set
 # CONFIG_HWMON is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_EXT2_FS=y
···
 CONFIG_ROMFS_FS=y
 CONFIG_ROMFS_BACKED_BY_MTD=y
 # CONFIG_NETWORK_FILESYSTEMS is not set
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
-CONFIG_SYSCTL_SYSCALL_CHECK=y
 CONFIG_BOOTPARAM=y
 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0"
-# CONFIG_CRC32 is not set
+3-18
arch/m68k/configs/m5307c3_defconfig

···
-# CONFIG_MMU is not set
-CONFIG_EXPERIMENTAL=y
 CONFIG_LOG_BUF_SHIFT=14
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_EXPERT=y
 # CONFIG_KALLSYMS is not set
-# CONFIG_HOTPLUG is not set
 # CONFIG_FUTEX is not set
 # CONFIG_EPOLL is not set
 # CONFIG_SIGNALFD is not set
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
+# CONFIG_MMU is not set
 CONFIG_M5307=y
-CONFIG_CLOCK_SET=y
-CONFIG_CLOCK_FREQ=90000000
-CONFIG_CLOCK_DIV=2
 CONFIG_M5307C3=y
 CONFIG_RAMBASE=0x00000000
 CONFIG_RAMSIZE=0x00800000
···
 # CONFIG_IPV6 is not set
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_RAM=y
 CONFIG_MTD_UCLINUX=y
 CONFIG_BLK_DEV_RAM=y
-# CONFIG_MISC_DEVICES is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 CONFIG_PPP=y
 CONFIG_SLIP=y
 CONFIG_SLIP_COMPRESSED=y
···
 # CONFIG_INPUT_MOUSE is not set
 # CONFIG_SERIO is not set
 # CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
 CONFIG_SERIAL_MCF=y
 CONFIG_SERIAL_MCF_CONSOLE=y
-# CONFIG_LEGACY_PTYS is not set
 # CONFIG_HW_RANDOM is not set
 # CONFIG_HWMON is not set
-# CONFIG_HID_SUPPORT is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_EXT2_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_ROMFS_FS=y
 CONFIG_ROMFS_BACKED_BY_MTD=y
 # CONFIG_NETWORK_FILESYSTEMS is not set
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
-CONFIG_SYSCTL_SYSCALL_CHECK=y
-CONFIG_FULLDEBUG=y
 CONFIG_BOOTPARAM=y
 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0"
-# CONFIG_CRC32 is not set
+CONFIG_FULLDEBUG=y
+2-15
arch/m68k/configs/m5407c3_defconfig

···
-# CONFIG_MMU is not set
-CONFIG_EXPERIMENTAL=y
 CONFIG_LOG_BUF_SHIFT=14
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_EXPERT=y
 # CONFIG_KALLSYMS is not set
-# CONFIG_HOTPLUG is not set
 # CONFIG_FUTEX is not set
 # CONFIG_EPOLL is not set
 # CONFIG_SIGNALFD is not set
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
+# CONFIG_MMU is not set
 CONFIG_M5407=y
-CONFIG_CLOCK_SET=y
-CONFIG_CLOCK_FREQ=50000000
 CONFIG_M5407C3=y
 CONFIG_RAMBASE=0x00000000
 CONFIG_RAMSIZE=0x00000000
···
 # CONFIG_IPV6 is not set
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_RAM=y
 CONFIG_MTD_UCLINUX=y
 CONFIG_BLK_DEV_RAM=y
-# CONFIG_MISC_DEVICES is not set
 CONFIG_NETDEVICES=y
-CONFIG_NET_ETHERNET=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
 CONFIG_PPP=y
 # CONFIG_INPUT is not set
 # CONFIG_VT is not set
+# CONFIG_UNIX98_PTYS is not set
 CONFIG_SERIAL_MCF=y
 CONFIG_SERIAL_MCF_CONSOLE=y
-# CONFIG_UNIX98_PTYS is not set
 # CONFIG_HW_RANDOM is not set
 # CONFIG_HWMON is not set
 # CONFIG_USB_SUPPORT is not set
···
 CONFIG_ROMFS_FS=y
 CONFIG_ROMFS_BACKED_BY_MTD=y
 # CONFIG_NETWORK_FILESYSTEMS is not set
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
-CONFIG_SYSCTL_SYSCALL_CHECK=y
 CONFIG_BOOTPARAM=y
 CONFIG_BOOTPARAM_STRING="root=/dev/mtdblock0"
-# CONFIG_CRC32 is not set
+1-8
arch/m68k/configs/m5475evb_defconfig

···
-CONFIG_EXPERIMENTAL=y
 # CONFIG_SWAP is not set
 CONFIG_LOG_BUF_SHIFT=14
-CONFIG_SYSFS_DEPRECATED=y
-CONFIG_SYSFS_DEPRECATED_V2=y
 CONFIG_SYSCTL_SYSCALL=y
 # CONFIG_KALLSYMS is not set
-# CONFIG_HOTPLUG is not set
 # CONFIG_FUTEX is not set
 # CONFIG_EPOLL is not set
 # CONFIG_SIGNALFD is not set
···
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_COLDFIRE=y
-CONFIG_M547x=y
-CONFIG_CLOCK_SET=y
-CONFIG_CLOCK_FREQ=266000000
 # CONFIG_4KSTACKS is not set
 CONFIG_RAMBASE=0x0
 CONFIG_RAMSIZE=0x2000000
 CONFIG_VECTORBASE=0x0
 CONFIG_MBAR=0xff000000
 CONFIG_KERNELBASE=0x20000
+CONFIG_PCI=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
-CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_CFI=y
 CONFIG_MTD_JEDECPROBE=y
+1-1
arch/m68k/include/asm/coldfire.h

···
  * in any case new boards come along from time to time that have yet
  * another different clocking frequency.
  */
-#ifdef CONFIG_CLOCK_SET
+#ifdef CONFIG_CLOCK_FREQ
 #define	MCF_CLK		CONFIG_CLOCK_FREQ
 #else
 #error "Don't know what your ColdFire CPU clock frequency is??"
+2-1
arch/m68k/include/asm/io_mm.h

···
 #define writew(val, addr)	out_le16((addr), (val))
 #endif /* CONFIG_ATARI_ROM_ISA */

-#if !defined(CONFIG_ISA) && !defined(CONFIG_ATARI_ROM_ISA)
+#if !defined(CONFIG_ISA) && !defined(CONFIG_ATARI_ROM_ISA) && \
+    !(defined(CONFIG_PCI) && defined(CONFIG_COLDFIRE))
 /*
  * We need to define dummy functions for GENERIC_IOMAP support.
  */
-15
arch/m68k/include/asm/mm-arch-hooks.h

-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_M68K_MM_ARCH_HOOKS_H
-#define _ASM_M68K_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_M68K_MM_ARCH_HOOKS_H */
-15
arch/metag/include/asm/mm-arch-hooks.h

-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_METAG_MM_ARCH_HOOKS_H
-#define _ASM_METAG_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_METAG_MM_ARCH_HOOKS_H */
-15
arch/microblaze/include/asm/mm-arch-hooks.h

-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_MICROBLAZE_MM_ARCH_HOOKS_H
-#define _ASM_MICROBLAZE_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_MICROBLAZE_MM_ARCH_HOOKS_H */
+2-6
arch/mips/Kconfig

···
	select CPU_SUPPORTS_HIGHMEM
	select CPU_SUPPORTS_MSA
	select GENERIC_CSUM
+	select MIPS_O32_FP64_SUPPORT if MIPS32_O32
	help
	  Choose this option to build a kernel for release 6 or later of the
	  MIPS64 architecture. New MIPS processors, starting with the Warrior
···
 config MIPS_CPS
	bool "MIPS Coherent Processing System support"
-	depends on SYS_SUPPORTS_MIPS_CPS && !64BIT
+	depends on SYS_SUPPORTS_MIPS_CPS
	select MIPS_CM
	select MIPS_CPC
	select MIPS_CPS_PM if HOTPLUG_CPU
···
 config MIPS_CPC
	bool
-
-config SB1_PASS_1_WORKAROUNDS
-	bool
-	depends on CPU_SB1_PASS_1
-	default y

 config SB1_PASS_2_WORKAROUNDS
	bool
-7
arch/mips/Makefile

···
 cflags-$(CONFIG_CPU_R4400_WORKAROUNDS)	+= $(call cc-option,-mfix-r4400,)
 cflags-$(CONFIG_CPU_DADDI_WORKAROUNDS)	+= $(call cc-option,-mno-daddi,)

-ifdef CONFIG_CPU_SB1
-ifdef CONFIG_SB1_PASS_1_WORKAROUNDS
-KBUILD_AFLAGS_MODULE += -msb1-pass1-workarounds
-KBUILD_CFLAGS_MODULE += -msb1-pass1-workarounds
-endif
-endif
-
 # For smartmips configurations, there are hundreds of warnings due to ISA overrides
 # in assembly and header files. smartmips is only supported for MIPS32r1 onwards
 # and there is no support for 64-bit. Various '.set mips2' or '.set mips3' or
+1-1
arch/mips/include/asm/fpu.h

···
		goto fr_common;

	case FPU_64BIT:
-#if !(defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_CPU_MIPS32_R6) \
+#if !(defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6) \
      || defined(CONFIG_64BIT))
		/* we only have a 32-bit FPU */
		return SIGFPE;
+1-1
arch/mips/include/asm/mach-loongson64/mmzone.h

···
 /*
  * Copyright (C) 2010 Loongson Inc. & Lemote Inc. &
- *                    Insititute of Computing Technology
+ *                    Institute of Computing Technology
  * Author:  Xiang Gao, gaoxiang@ict.ac.cn
  *          Huacai Chen, chenhc@lemote.com
  *          Xiaofu Meng, Shuangshuang Zhang
-15
arch/mips/include/asm/mm-arch-hooks.h

-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_MIPS_MM_ARCH_HOOKS_H
-#define _ASM_MIPS_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_MIPS_MM_ARCH_HOOKS_H */
+2-2
arch/mips/include/uapi/asm/sigcontext.h

···
 /*
  * Keep this struct definition in sync with the sigcontext fragment
- * in arch/mips/tools/offset.c
+ * in arch/mips/kernel/asm-offsets.c
  */
 struct sigcontext {
	unsigned int		sc_regmask;	/* Unused */
···
 #include <linux/posix_types.h>
 /*
  * Keep this struct definition in sync with the sigcontext fragment
- * in arch/mips/tools/offset.c
+ * in arch/mips/kernel/asm-offsets.c
  *
  * Warning: this structure illdefined with sc_badvaddr being just an unsigned
  * int so it was changed to unsigned long in 2.6.0-test1. This may break
+1-1
arch/mips/kernel/asm-offsets.c

···
 /*
- * offset.c: Calculate pt_regs and task_struct offsets.
+ * asm-offsets.c: Calculate pt_regs and task_struct offsets.
  *
  * Copyright (C) 1996 David S. Miller
  * Copyright (C) 1997, 1998, 1999, 2000, 2001, 2002, 2003 Ralf Baechle
+2-2
arch/mips/kernel/branch.c

···
		break;

	case blezl_op: /* not really i_format */
-		if (NO_R6EMU)
+		if (!insn.i_format.rt && NO_R6EMU)
			goto sigill_r6;
	case blez_op:
		/*
···
		break;

	case bgtzl_op:
-		if (NO_R6EMU)
+		if (!insn.i_format.rt && NO_R6EMU)
			goto sigill_r6;
	case bgtz_op:
		/*
+48-48
arch/mips/kernel/cps-vec.S

···
	 nop

	/* This is an NMI */
-	la	k0, nmi_handler
+	PTR_LA	k0, nmi_handler
	jr	k0
	 nop
···
	mul	t1, t1, t0
	mul	t1, t1, t2

-	li	a0, KSEG0
-	add	a1, a0, t1
+	li	a0, CKSEG0
+	PTR_ADD	a1, a0, t1
 1:	cache	Index_Store_Tag_I, 0(a0)
-	add	a0, a0, t0
+	PTR_ADD	a0, a0, t0
	bne	a0, a1, 1b
	 nop
 icache_done:
···
	mul	t1, t1, t0
	mul	t1, t1, t2

-	li	a0, KSEG0
-	addu	a1, a0, t1
-	subu	a1, a1, t0
+	li	a0, CKSEG0
+	PTR_ADDU a1, a0, t1
+	PTR_SUBU a1, a1, t0
 1:	cache	Index_Store_Tag_D, 0(a0)
	bne	a0, a1, 1b
-	 add	a0, a0, t0
+	 PTR_ADD a0, a0, t0
 dcache_done:

	/* Set Kseg0 CCA to that in s0 */
···
	/* Enter the coherent domain */
	li	t0, 0xff
-	sw	t0, GCR_CL_COHERENCE_OFS(v1)
+	PTR_S	t0, GCR_CL_COHERENCE_OFS(v1)
	ehb

	/* Jump to kseg0 */
-	la	t0, 1f
+	PTR_LA	t0, 1f
	jr	t0
	 nop
···
	 nop

	/* Off we go! */
-	lw	t1, VPEBOOTCFG_PC(v0)
-	lw	gp, VPEBOOTCFG_GP(v0)
-	lw	sp, VPEBOOTCFG_SP(v0)
+	PTR_L	t1, VPEBOOTCFG_PC(v0)
+	PTR_L	gp, VPEBOOTCFG_GP(v0)
+	PTR_L	sp, VPEBOOTCFG_SP(v0)
	jr	t1
	 nop
	END(mips_cps_core_entry)
···
 .org 0x480
 LEAF(excep_ejtag)
-	la	k0, ejtag_debug_handler
+	PTR_LA	k0, ejtag_debug_handler
	jr	k0
	 nop
	END(excep_ejtag)
···
	 nop

	.set	push
-	.set	mips32r2
+	.set	mips64r2
	.set	mt

	/* Only allow 1 TC per VPE to execute... */
···
	/* ...and for the moment only 1 VPE */
	dvpe
-	la	t1, 1f
+	PTR_LA	t1, 1f
	jr.hb	t1
	 nop
···
	mfc0	t0, CP0_MVPCONF0
	srl	t0, t0, MVPCONF0_PVPE_SHIFT
	andi	t0, t0, (MVPCONF0_PVPE >> MVPCONF0_PVPE_SHIFT)
-	addiu	t7, t0, 1
+	addiu	ta3, t0, 1

	/* If there's only 1, we're done */
	beqz	t0, 2f
	 nop

	/* Loop through each VPE within this core */
-	li	t5, 1
+	li	ta1, 1

 1:	/* Operate on the appropriate TC */
-	mtc0	t5, CP0_VPECONTROL
+	mtc0	ta1, CP0_VPECONTROL
	ehb

	/* Bind TC to VPE (1:1 TC:VPE mapping) */
-	mttc0	t5, CP0_TCBIND
+	mttc0	ta1, CP0_TCBIND

	/* Set exclusive TC, non-active, master */
	li	t0, VPECONF0_MVP
-	sll	t1, t5, VPECONF0_XTC_SHIFT
+	sll	t1, ta1, VPECONF0_XTC_SHIFT
	or	t0, t0, t1
	mttc0	t0, CP0_VPECONF0
···
	mttc0	t0, CP0_TCHALT

	/* Next VPE */
-	addiu	t5, t5, 1
-	slt	t0, t5, t7
+	addiu	ta1, ta1, 1
+	slt	t0, ta1, ta3
	bnez	t0, 1b
	 nop
···
 LEAF(mips_cps_boot_vpes)
	/* Retrieve CM base address */
-	la	t0, mips_cm_base
-	lw	t0, 0(t0)
+	PTR_LA	t0, mips_cm_base
+	PTR_L	t0, 0(t0)

	/* Calculate a pointer to this cores struct core_boot_config */
-	lw	t0, GCR_CL_ID_OFS(t0)
+	PTR_L	t0, GCR_CL_ID_OFS(t0)
	li	t1, COREBOOTCFG_SIZE
	mul	t0, t0, t1
-	la	t1, mips_cps_core_bootcfg
-	lw	t1, 0(t1)
-	addu	t0, t0, t1
+	PTR_LA	t1, mips_cps_core_bootcfg
+	PTR_L	t1, 0(t1)
+	PTR_ADDU t0, t0, t1

	/* Calculate this VPEs ID. If the core doesn't support MT use 0 */
-	has_mt	t6, 1f
+	has_mt	ta2, 1f
	 li	t9, 0

	/* Find the number of VPEs present in the core */
···
 1:	/* Calculate a pointer to this VPEs struct vpe_boot_config */
	li	t1, VPEBOOTCFG_SIZE
	mul	v0, t9, t1
-	lw	t7, COREBOOTCFG_VPECONFIG(t0)
-	addu	v0, v0, t7
+	PTR_L	ta3, COREBOOTCFG_VPECONFIG(t0)
+	PTR_ADDU v0, v0, ta3

 #ifdef CONFIG_MIPS_MT

	/* If the core doesn't support MT then return */
-	bnez	t6, 1f
+	bnez	ta2, 1f
	 nop
	jr	ra
	 nop

	.set	push
-	.set	mips32r2
+	.set	mips64r2
	.set	mt

 1:	/* Enter VPE configuration state */
	dvpe
-	la	t1, 1f
+	PTR_LA	t1, 1f
	jr.hb	t1
	 nop
 1:	mfc0	t1, CP0_MVPCONTROL
···
	ehb

	/* Loop through each VPE */
-	lw	t6, COREBOOTCFG_VPEMASK(t0)
-	move	t8, t6
-	li	t5, 0
+	PTR_L	ta2, COREBOOTCFG_VPEMASK(t0)
+	move	t8, ta2
+	li	ta1, 0

	/* Check whether the VPE should be running. If not, skip it */
-1:	andi	t0, t6, 1
+1:	andi	t0, ta2, 1
	beqz	t0, 2f
	 nop
···
	mfc0	t0, CP0_VPECONTROL
	ori	t0, t0, VPECONTROL_TARGTC
	xori	t0, t0, VPECONTROL_TARGTC
-	or	t0, t0, t5
+	or	t0, t0, ta1
	mtc0	t0, CP0_VPECONTROL
	ehb
···
	/* Calculate a pointer to the VPEs struct vpe_boot_config */
	li	t0, VPEBOOTCFG_SIZE
-	mul	t0, t0, t5
-	addu	t0, t0, t7
+	mul	t0, t0, ta1
+	addu	t0, t0, ta3

	/* Set the TC restart PC */
	lw	t1, VPEBOOTCFG_PC(t0)
···
	mttc0	t0, CP0_VPECONF0

	/* Next VPE */
-2:	srl	t6, t6, 1
-	addiu	t5, t5, 1
-	bnez	t6, 1b
+2:	srl	ta2, ta2, 1
+	addiu	ta1, ta1, 1
+	bnez	ta2, 1b
	 nop

	/* Leave VPE configuration state */
···
	/* This VPE should be offline, halt the TC */
	li	t0, TCHALT_H
	mtc0	t0, CP0_TCHALT
-	la	t0, 1f
+	PTR_LA	t0, 1f
 1:	jr.hb	t0
	 nop
···
	.set	noat
	lw	$1, TI_CPU(gp)
	sll	$1, $1, LONGLOG
-	la	\dest, __per_cpu_offset
+	PTR_LA	\dest, __per_cpu_offset
	addu	$1, $1, \dest
	lw	$1, 0($1)
-	la	\dest, cps_cpu_state
+	PTR_LA	\dest, cps_cpu_state
	addu	\dest, \dest, $1
	.set	pop
	.endm
+27-10
arch/mips/kernel/scall32-o32.S

···
	.set	noreorder
	.set	nomacro

-1:	user_lw(t5, 16(t0))		# argument #5 from usp
-4:	user_lw(t6, 20(t0))		# argument #6 from usp
-3:	user_lw(t7, 24(t0))		# argument #7 from usp
-2:	user_lw(t8, 28(t0))		# argument #8 from usp
+load_a4: user_lw(t5, 16(t0))		# argument #5 from usp
+load_a5: user_lw(t6, 20(t0))		# argument #6 from usp
+load_a6: user_lw(t7, 24(t0))		# argument #7 from usp
+load_a7: user_lw(t8, 28(t0))		# argument #8 from usp
+loads_done:

	sw	t5, 16(sp)		# argument #5 to ksp
	sw	t6, 20(sp)		# argument #6 to ksp
···
	.set	pop

	.section __ex_table,"a"
-	PTR	1b,bad_stack
-	PTR	2b,bad_stack
-	PTR	3b,bad_stack
-	PTR	4b,bad_stack
+	PTR	load_a4, bad_stack_a4
+	PTR	load_a5, bad_stack_a5
+	PTR	load_a6, bad_stack_a6
+	PTR	load_a7, bad_stack_a7
	.previous

	lw	t0, TI_FLAGS($28)	# syscall tracing enabled?
···
 /* ------------------------------------------------------------------------ */

	/*
-	 * The stackpointer for a call with more than 4 arguments is bad.
-	 * We probably should handle this case a bit more drastic.
+	 * Our open-coded access area sanity test for the stack pointer
+	 * failed. We probably should handle this case a bit more drastic.
	 */
 bad_stack:
	li	v0, EFAULT
···
	li	t0, 1				# set error flag
	sw	t0, PT_R7(sp)
	j	o32_syscall_exit
+
+bad_stack_a4:
+	li	t5, 0
+	b	load_a5
+
+bad_stack_a5:
+	li	t6, 0
+	b	load_a6
+
+bad_stack_a6:
+	li	t7, 0
+	b	load_a7
+
+bad_stack_a7:
+	li	t8, 0
+	b	loads_done

	/*
	 * The system call does not exist in this kernel
+26-9
arch/mips/kernel/scall64-o32.S

···
	daddu	t1, t0, 32
	bltz	t1, bad_stack

-1:	lw	a4, 16(t0)		# argument #5 from usp
-2:	lw	a5, 20(t0)		# argument #6 from usp
-3:	lw	a6, 24(t0)		# argument #7 from usp
-4:	lw	a7, 28(t0)		# argument #8 from usp (for indirect syscalls)
+load_a4: lw	a4, 16(t0)		# argument #5 from usp
+load_a5: lw	a5, 20(t0)		# argument #6 from usp
+load_a6: lw	a6, 24(t0)		# argument #7 from usp
+load_a7: lw	a7, 28(t0)		# argument #8 from usp
+loads_done:

	.section __ex_table,"a"
-	PTR	1b, bad_stack
-	PTR	2b, bad_stack
-	PTR	3b, bad_stack
-	PTR	4b, bad_stack
+	PTR	load_a4, bad_stack_a4
+	PTR	load_a5, bad_stack_a5
+	PTR	load_a6, bad_stack_a6
+	PTR	load_a7, bad_stack_a7
	.previous

	li	t1, _TIF_WORK_SYSCALL_ENTRY
···
	li	t0, 1			# set error flag
	sd	t0, PT_R7(sp)
	j	o32_syscall_exit
+
+bad_stack_a4:
+	li	a4, 0
+	b	load_a5
+
+bad_stack_a5:
+	li	a5, 0
+	b	load_a6
+
+bad_stack_a6:
+	li	a6, 0
+	b	load_a7
+
+bad_stack_a7:
+	li	a7, 0
+	b	loads_done

 not_o32_scall:
	/*
···
	PTR	sys_connect			/* 4170 */
	PTR	sys_getpeername
	PTR	sys_getsockname
-	PTR	sys_getsockopt
+	PTR	compat_sys_getsockopt
	PTR	sys_listen
	PTR	compat_sys_recv			/* 4175 */
	PTR	compat_sys_recvfrom
+5-8
arch/mips/kernel/setup.c

···
			min_low_pfn = start;
		if (end <= reserved_end)
			continue;
+#ifdef CONFIG_BLK_DEV_INITRD
+		/* mapstart should be after initrd_end */
+		if (initrd_end && end <= (unsigned long)PFN_UP(__pa(initrd_end)))
+			continue;
+#endif
		if (start >= mapstart)
			continue;
		mapstart = max(reserved_end, start);
···
 #endif
		max_low_pfn = PFN_DOWN(HIGHMEM_START);
	}
-
-#ifdef CONFIG_BLK_DEV_INITRD
-	/*
-	 * mapstart should be after initrd_end
-	 */
-	if (initrd_end)
-		mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end)));
-#endif

	/*
	 * Initialize the boot-time allocator with low memory only.
+3-3
arch/mips/kernel/smp-cps.c

···
	/*
	 * Patch the start of mips_cps_core_entry to provide:
	 *
-	 * v0 = CM base address
+	 * v1 = CM base address
	 * s0 = kseg0 CCA
	 */
	entry_code = (u32 *)&mips_cps_core_entry;
···
 static void wait_for_sibling_halt(void *ptr_cpu)
 {
-	unsigned cpu = (unsigned)ptr_cpu;
+	unsigned cpu = (unsigned long)ptr_cpu;
	unsigned vpe_id = cpu_vpe_id(&cpu_data[cpu]);
	unsigned halted;
	unsigned long flags;
···
		 */
		err = smp_call_function_single(cpu_death_sibling,
					       wait_for_sibling_halt,
-					       (void *)cpu, 1);
+					       (void *)(unsigned long)cpu, 1);
		if (err)
			panic("Failed to call remote sibling CPU\n");
	}
+43-1
arch/mips/kernel/smp.c

···
 cpumask_t cpu_core_map[NR_CPUS] __read_mostly;
 EXPORT_SYMBOL(cpu_core_map);

+/*
+ * A logcal cpu mask containing only one VPE per core to
+ * reduce the number of IPIs on large MT systems.
+ */
+cpumask_t cpu_foreign_map __read_mostly;
+EXPORT_SYMBOL(cpu_foreign_map);
+
 /* representing cpus for which sibling maps can be computed */
 static cpumask_t cpu_sibling_setup_map;
···
			cpumask_set_cpu(cpu, &cpu_core_map[i]);
		}
	}
+}
+
+/*
+ * Calculate a new cpu_foreign_map mask whenever a
+ * new cpu appears or disappears.
+ */
+static inline void calculate_cpu_foreign_map(void)
+{
+	int i, k, core_present;
+	cpumask_t temp_foreign_map;
+
+	/* Re-calculate the mask */
+	for_each_online_cpu(i) {
+		core_present = 0;
+		for_each_cpu(k, &temp_foreign_map)
+			if (cpu_data[i].package == cpu_data[k].package &&
+			    cpu_data[i].core == cpu_data[k].core)
+				core_present = 1;
+		if (!core_present)
+			cpumask_set_cpu(i, &temp_foreign_map);
+	}
+
+	cpumask_copy(&cpu_foreign_map, &temp_foreign_map);
 }

 struct plat_smp_ops *mp_ops;
···
	set_cpu_sibling_map(cpu);
	set_cpu_core_map(cpu);

+	calculate_cpu_foreign_map();
+
	cpumask_set_cpu(cpu, &cpu_callin_map);

	synchronise_count_slave(cpu);
···
 static void stop_this_cpu(void *dummy)
 {
	/*
-	 * Remove this CPU:
+	 * Remove this CPU. Be a bit slow here and
+	 * set the bits for every online CPU so we don't miss
+	 * any IPI whilst taking this VPE down.
	 */
+
+	cpumask_copy(&cpu_foreign_map, cpu_online_mask);
+
+	/* Make it visible to every other CPU */
+	smp_mb();
+
	set_cpu_online(smp_processor_id(), false);
+	calculate_cpu_foreign_map();
	local_irq_disable();
	while (1);
 }
···
	mp_ops->prepare_cpus(max_cpus);
	set_cpu_sibling_map(0);
	set_cpu_core_map(0);
+	calculate_cpu_foreign_map();
 #ifndef CONFIG_HOTPLUG_CPU
	init_cpu_present(cpu_possible_mask);
 #endif
+4-4
arch/mips/kernel/traps.c

···
	BUG_ON(current->mm);
	enter_lazy_tlb(&init_mm, current);

-	 /* Boot CPU's cache setup in setup_arch().  */
-	 if (!is_boot_cpu)
-		 cpu_cache_init();
-	 tlb_init();
+	/* Boot CPU's cache setup in setup_arch().  */
+	if (!is_boot_cpu)
+		cpu_cache_init();
+	tlb_init();
	TLBMISS_HANDLER_SETUP();
 }
+1-1
arch/mips/loongson64/common/bonito-irq.c

···
  * Author: Jun Sun, jsun@mvista.com or jsun@junsun.net
  * Copyright (C) 2000, 2001 Ralf Baechle (ralf@gnu.org)
  *
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
  * Author: Fuxin Zhang, zhangfx@lemote.com
  *
  * This program is free software; you can redistribute it and/or modify it
+1-1
arch/mips/loongson64/common/cmdline.c

···
  * Copyright 2003 ICT CAS
  * Author: Michael Guo <guoyi@ict.ac.cn>
  *
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
  * Author: Fuxin Zhang, zhangfx@lemote.com
  *
  * Copyright (C) 2009 Lemote Inc.
+1-1
arch/mips/loongson64/common/cs5536/cs5536_mfgpt.c

···
 /*
  * CS5536 General timer functions
  *
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
  * Author: Yanhua, yanh@lemote.com
  *
  * Copyright (C) 2009 Lemote Inc.
+1-1
arch/mips/loongson64/common/env.c

···
  * Copyright 2003 ICT CAS
  * Author: Michael Guo <guoyi@ict.ac.cn>
  *
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
  * Author: Fuxin Zhang, zhangfx@lemote.com
  *
  * Copyright (C) 2009 Lemote Inc.
+1-1
arch/mips/loongson64/common/irq.c

···
 /*
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
  * Author: Fuxin Zhang, zhangfx@lemote.com
  *
  * This program is free software; you can redistribute it and/or modify it
+1-1
arch/mips/loongson64/common/setup.c

···
 /*
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
  * Author: Fuxin Zhang, zhangfx@lemote.com
  *
  * This program is free software; you can redistribute it and/or modify it
+1-1
arch/mips/loongson64/fuloong-2e/irq.c

···
 /*
- * Copyright (C) 2007 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2007 Lemote Inc. & Institute of Computing Technology
  * Author: Fuxin Zhang, zhangfx@lemote.com
  *
  * This program is free software; you can redistribute it and/or modify it
+2-2
arch/mips/loongson64/lemote-2f/clock.c

···
 /*
- * Copyright (C) 2006 - 2008 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2006 - 2008 Lemote Inc. & Institute of Computing Technology
  * Author: Yanhua, yanh@lemote.com
  *
  * This file is subject to the terms and conditions of the GNU General Public
···
 #include <linux/spinlock.h>

 #include <asm/clock.h>
-#include <asm/mach-loongson/loongson.h>
+#include <asm/mach-loongson64/loongson.h>

 static LIST_HEAD(clock_list);
 static DEFINE_SPINLOCK(clock_lock);
+1-1
arch/mips/loongson64/loongson-3/numa.c

···
 /*
  * Copyright (C) 2010 Loongson Inc. & Lemote Inc. &
- *                    Insititute of Computing Technology
+ *                    Institute of Computing Technology
  * Author:  Xiang Gao, gaoxiang@ict.ac.cn
  *          Huacai Chen, chenhc@lemote.com
  *          Xiaofu Meng, Shuangshuang Zhang
+3-3
arch/mips/math-emu/cp1emu.c

···
		/* Fall through */
	case jr_op:
		/* For R6, JR already emulated in jalr_op */
-		if (NO_R6EMU && insn.r_format.opcode == jr_op)
+		if (NO_R6EMU && insn.r_format.func == jr_op)
			break;
		*contpc = regs->regs[insn.r_format.rs];
		return 1;
···
			dec_insn.next_pc_inc;
		return 1;
	case blezl_op:
-		if (NO_R6EMU)
+		if (!insn.i_format.rt && NO_R6EMU)
			break;
	case blez_op:
···
			dec_insn.next_pc_inc;
		return 1;
	case bgtzl_op:
-		if (NO_R6EMU)
+		if (!insn.i_format.rt && NO_R6EMU)
			break;
	case bgtz_op:
		/*
+14-4
arch/mips/mm/c-r4k.c

···
 #include <asm/cacheflush.h> /* for run_uncached() */
 #include <asm/traps.h>
 #include <asm/dma-coherence.h>
+#include <asm/mips-cm.h>

 /*
  * Special Variant of smp_call_function for use by cache functions:
···
 {
	preempt_disable();

-#ifndef CONFIG_MIPS_MT_SMP
-	smp_call_function(func, info, 1);
-#endif
+	/*
+	 * The Coherent Manager propagates address-based cache ops to other
+	 * cores but not index-based ops. However, r4k_on_each_cpu is used
+	 * in both cases so there is no easy way to tell what kind of op is
+	 * executed to the other cores. The best we can probably do is
+	 * to restrict that call when a CM is not present because both
+	 * CM-based SMP protocols (CMP & CPS) restrict index-based cache ops.
+	 */
+	if (!mips_cm_present())
+		smp_call_function_many(&cpu_foreign_map, func, info, 1);
	func(info);
	preempt_enable();
 }
···
 }

 static char *way_string[] = { NULL, "direct mapped", "2-way",
-	"3-way", "4-way", "5-way", "6-way", "7-way", "8-way"
+	"3-way", "4-way", "5-way", "6-way", "7-way", "8-way",
+	"9-way", "10-way", "11-way", "12-way",
+	"13-way", "14-way", "15-way", "16-way",
 };

 static void probe_pcache(void)
+13-7
arch/mips/mti-malta/malta-time.c

···
 int get_c0_fdc_int(void)
 {
-	int mips_cpu_fdc_irq;
+	/*
+	 * Some cores claim the FDC is routable through the GIC, but it doesn't
+	 * actually seem to be connected for those Malta bitstreams.
+	 */
+	switch (current_cpu_type()) {
+	case CPU_INTERAPTIV:
+	case CPU_PROAPTIV:
+		return -1;
+	};

	if (cpu_has_veic)
-		mips_cpu_fdc_irq = -1;
+		return -1;
	else if (gic_present)
-		mips_cpu_fdc_irq = gic_get_c0_fdc_int();
+		return gic_get_c0_fdc_int();
	else if (cp0_fdc_irq >= 0)
-		mips_cpu_fdc_irq = MIPS_CPU_IRQ_BASE + cp0_fdc_irq;
+		return MIPS_CPU_IRQ_BASE + cp0_fdc_irq;
	else
-		mips_cpu_fdc_irq = -1;
-
-	return mips_cpu_fdc_irq;
+		return -1;
 }

 int get_c0_perfcount_int(void)
···
 	prompt "SiByte SOC Stepping"
 	depends on SIBYTE_SB1xxx_SOC

-config CPU_SB1_PASS_1
-	bool "1250 Pass1"
-	depends on SIBYTE_SB1250
-	select CPU_HAS_PREFETCH
-
 config CPU_SB1_PASS_2_1250
 	bool "1250 An"
 	depends on SIBYTE_SB1250
+1-4
arch/mips/sibyte/common/bus_watcher.c
···
 {
 	u32 status, l2_err, memio_err;

-#ifdef CONFIG_SB1_PASS_1_WORKAROUNDS
-	/* Destructive read, clears register and interrupt */
-	status = csr_in32(IOADDR(A_SCD_BUS_ERR_STATUS));
-#elif defined(CONFIG_SIBYTE_BCM112X) || defined(CONFIG_SIBYTE_SB1250)
+#if defined(CONFIG_SIBYTE_BCM112X) || defined(CONFIG_SIBYTE_SB1250)
 	/* Use non-destructive register */
 	status = csr_in32(IOADDR(A_SCD_BUS_ERR_STATUS_DEBUG));
 #elif defined(CONFIG_SIBYTE_BCM1x55) || defined(CONFIG_SIBYTE_BCM1x80)
-2
arch/mips/sibyte/sb1250/setup.c
···

 	switch (war_pass) {
 	case K_SYS_REVISION_BCM1250_PASS1:
-#ifndef CONFIG_SB1_PASS_1_WORKAROUNDS
 		printk("@@@@ This is a BCM1250 A0-A2 (Pass 1) board, "
 		       "and the kernel doesn't have the proper "
 		       "workarounds compiled in. @@@@\n");
 		bad_config = 1;
-#endif
 		break;
 	case K_SYS_REVISION_BCM1250_PASS2:
 		/* Pass 2 - easiest as default for now - so many numbers */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_MN10300_MM_ARCH_HOOKS_H
-#define _ASM_MN10300_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_MN10300_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_NIOS2_MM_ARCH_HOOKS_H
-#define _ASM_NIOS2_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_NIOS2_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_OPENRISC_MM_ARCH_HOOKS_H
-#define _ASM_OPENRISC_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_OPENRISC_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_PARISC_MM_ARCH_HOOKS_H
-#define _ASM_PARISC_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_PARISC_MM_ARCH_HOOKS_H */
+2-1
arch/parisc/include/asm/pgalloc.h
···

 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
-	if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
+	if (pmd_flag(*pmd) & PxD_FLAG_ATTACHED) {
 		/*
 		 * This is the permanent pmd attached to the pgd;
 		 * cannot free it.
···
 		 */
 		mm_inc_nr_pmds(mm);
 		return;
+	}
 	free_pages((unsigned long)pmd, PMD_ORDER);
 }

+37-18
arch/parisc/include/asm/pgtable.h
···
 #include <asm/processor.h>
 #include <asm/cache.h>

-extern spinlock_t pa_dbit_lock;
+extern spinlock_t pa_tlb_lock;

 /*
  * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
···
  */
 #define kern_addr_valid(addr)	(1)

+/* Purge data and instruction TLB entries.  Must be called holding
+ * the pa_tlb_lock.  The TLB purge instructions are slow on SMP
+ * machines since the purge must be broadcast to all CPUs.
+ */
+
+static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+{
+	mtsp(mm->context, 1);
+	pdtlb(addr);
+	if (unlikely(split_tlb))
+		pitlb(addr);
+}
+
 /* Certain architectures need to do special things when PTEs
  * within a page table are directly modified.  Thus, the following
  * hook is made available.
···
 	*(pteptr) = (pteval);				\
 } while(0)

-extern void purge_tlb_entries(struct mm_struct *, unsigned long);
+#define pte_inserted(x)						\
+	((pte_val(x) & (_PAGE_PRESENT|_PAGE_ACCESSED))		\
+	 == (_PAGE_PRESENT|_PAGE_ACCESSED))

-#define set_pte_at(mm, addr, ptep, pteval)			\
-	do {							\
+#define set_pte_at(mm, addr, ptep, pteval)			\
+	do {							\
+		pte_t old_pte;					\
 		unsigned long flags;				\
-		spin_lock_irqsave(&pa_dbit_lock, flags);	\
-		set_pte(ptep, pteval);				\
-		purge_tlb_entries(mm, addr);			\
-		spin_unlock_irqrestore(&pa_dbit_lock, flags);	\
+		spin_lock_irqsave(&pa_tlb_lock, flags);		\
+		old_pte = *ptep;				\
+		set_pte(ptep, pteval);				\
+		if (pte_inserted(old_pte))			\
+			purge_tlb_entries(mm, addr);		\
+		spin_unlock_irqrestore(&pa_tlb_lock, flags);	\
 	} while (0)

 #endif /* !__ASSEMBLY__ */
···

 #define pte_none(x)	(pte_val(x) == 0)
 #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
-#define pte_clear(mm,addr,xp)	do { pte_val(*(xp)) = 0; } while (0)
+#define pte_clear(mm, addr, xp)  set_pte_at(mm, addr, xp, __pte(0))

 #define pmd_flag(x)	(pmd_val(x) & PxD_FLAG_MASK)
 #define pmd_address(x)	((unsigned long)(pmd_val(x) &~ PxD_FLAG_MASK) << PxD_VALUE_SHIFT)
···
 	if (!pte_young(*ptep))
 		return 0;

-	spin_lock_irqsave(&pa_dbit_lock, flags);
+	spin_lock_irqsave(&pa_tlb_lock, flags);
 	pte = *ptep;
 	if (!pte_young(pte)) {
-		spin_unlock_irqrestore(&pa_dbit_lock, flags);
+		spin_unlock_irqrestore(&pa_tlb_lock, flags);
 		return 0;
 	}
 	set_pte(ptep, pte_mkold(pte));
 	purge_tlb_entries(vma->vm_mm, addr);
-	spin_unlock_irqrestore(&pa_dbit_lock, flags);
+	spin_unlock_irqrestore(&pa_tlb_lock, flags);
 	return 1;
 }
···
 	pte_t old_pte;
 	unsigned long flags;

-	spin_lock_irqsave(&pa_dbit_lock, flags);
+	spin_lock_irqsave(&pa_tlb_lock, flags);
 	old_pte = *ptep;
-	pte_clear(mm,addr,ptep);
-	purge_tlb_entries(mm, addr);
-	spin_unlock_irqrestore(&pa_dbit_lock, flags);
+	set_pte(ptep, __pte(0));
+	if (pte_inserted(old_pte))
+		purge_tlb_entries(mm, addr);
+	spin_unlock_irqrestore(&pa_tlb_lock, flags);

 	return old_pte;
 }
···
 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 	unsigned long flags;
-	spin_lock_irqsave(&pa_dbit_lock, flags);
+	spin_lock_irqsave(&pa_tlb_lock, flags);
 	set_pte(ptep, pte_wrprotect(*ptep));
 	purge_tlb_entries(mm, addr);
-	spin_unlock_irqrestore(&pa_dbit_lock, flags);
+	spin_unlock_irqrestore(&pa_tlb_lock, flags);
 }

 #define pte_same(A,B)	(pte_val(A) == pte_val(B))
+29-24
arch/parisc/include/asm/tlbflush.h
···
  * active at any one time on the Merced bus.  This tlb purge
  * synchronisation is fairly lightweight and harmless so we activate
  * it on all systems not just the N class.
+
+ * It is also used to ensure PTE updates are atomic and consistent
+ * with the TLB.
  */
 extern spinlock_t pa_tlb_lock;

···

 #define smp_flush_tlb_all()	flush_tlb_all()

+int __flush_tlb_range(unsigned long sid,
+	unsigned long start, unsigned long end);
+
+#define flush_tlb_range(vma, start, end) \
+	__flush_tlb_range((vma)->vm_mm->context, start, end)
+
+#define flush_tlb_kernel_range(start, end) \
+	__flush_tlb_range(0, start, end)
+
 /*
  * flush_tlb_mm()
  *
- * XXX This code is NOT valid for HP-UX compatibility processes,
- * (although it will probably work 99% of the time). HP-UX
- * processes are free to play with the space id's and save them
- * over long periods of time, etc. so we have to preserve the
- * space and just flush the entire tlb. We need to check the
- * personality in order to do that, but the personality is not
- * currently being set correctly.
- *
- * Of course, Linux processes could do the same thing, but
- * we don't support that (and the compilers, dynamic linker,
- * etc. do not do that).
+ * The code to switch to a new context is NOT valid for processes
+ * which play with the space id's.  Thus, we have to preserve the
+ * space and just flush the entire tlb.  However, the compilers,
+ * dynamic linker, etc, do not manipulate space id's, so there
+ * could be a significant performance benefit in switching contexts
+ * and not flushing the whole tlb.
  */

 static inline void flush_tlb_mm(struct mm_struct *mm)
···
 	BUG_ON(mm == &init_mm); /* Should never happen */

 #if 1 || defined(CONFIG_SMP)
+	/* Except for very small threads, flushing the whole TLB is
+	 * faster than using __flush_tlb_range.  The pdtlb and pitlb
+	 * instructions are very slow because of the TLB broadcast.
+	 * It might be faster to do local range flushes on all CPUs
+	 * on PA 2.0 systems.
+	 */
 	flush_tlb_all();
 #else
 	/* FIXME: currently broken, causing space id and protection ids
-	 * to go out of sync, resulting in faults on userspace accesses.
+	 * to go out of sync, resulting in faults on userspace accesses.
+	 * This approach needs further investigation since running many
+	 * small applications (e.g., GCC testsuite) is faster on HP-UX.
 	 */
 	if (mm) {
 		if (mm->context != 0)
···
 {
 	unsigned long flags, sid;

-	/* For one page, it's not worth testing the split_tlb variable */
-
-	mb();
 	sid = vma->vm_mm->context;
 	purge_tlb_start(flags);
 	mtsp(sid, 1);
 	pdtlb(addr);
-	pitlb(addr);
+	if (unlikely(split_tlb))
+		pitlb(addr);
 	purge_tlb_end(flags);
 }
-
-void __flush_tlb_range(unsigned long sid,
-	unsigned long start, unsigned long end);
-
-#define flush_tlb_range(vma,start,end) __flush_tlb_range((vma)->vm_mm->context,start,end)
-
-#define flush_tlb_kernel_range(start, end) __flush_tlb_range(0,start,end)
-
 #endif
+67-38
arch/parisc/kernel/cache.c
···
 EXPORT_SYMBOL(flush_kernel_icache_range_asm);

 #define FLUSH_THRESHOLD 0x80000 /* 0.5MB */
-int parisc_cache_flush_threshold __read_mostly = FLUSH_THRESHOLD;
+static unsigned long parisc_cache_flush_threshold __read_mostly = FLUSH_THRESHOLD;
+
+#define FLUSH_TLB_THRESHOLD (2*1024*1024) /* 2MB initial TLB threshold */
+static unsigned long parisc_tlb_flush_threshold __read_mostly = FLUSH_TLB_THRESHOLD;

 void __init parisc_setup_cache_timing(void)
 {
 	unsigned long rangetime, alltime;
-	unsigned long size;
+	unsigned long size, start;

 	alltime = mfctl(16);
 	flush_data_cache();
···
 	/* Racy, but if we see an intermediate value, it's ok too... */
 	parisc_cache_flush_threshold = size * alltime / rangetime;

-	parisc_cache_flush_threshold = (parisc_cache_flush_threshold + L1_CACHE_BYTES - 1) &~ (L1_CACHE_BYTES - 1);
+	parisc_cache_flush_threshold = L1_CACHE_ALIGN(parisc_cache_flush_threshold);
 	if (!parisc_cache_flush_threshold)
 		parisc_cache_flush_threshold = FLUSH_THRESHOLD;

 	if (parisc_cache_flush_threshold > cache_info.dc_size)
 		parisc_cache_flush_threshold = cache_info.dc_size;

-	printk(KERN_INFO "Setting cache flush threshold to %x (%d CPUs online)\n", parisc_cache_flush_threshold, num_online_cpus());
+	printk(KERN_INFO "Setting cache flush threshold to %lu kB\n",
+		parisc_cache_flush_threshold/1024);
+
+	/* calculate TLB flush threshold */
+
+	alltime = mfctl(16);
+	flush_tlb_all();
+	alltime = mfctl(16) - alltime;
+
+	size = PAGE_SIZE;
+	start = (unsigned long) _text;
+	rangetime = mfctl(16);
+	while (start < (unsigned long) _end) {
+		flush_tlb_kernel_range(start, start + PAGE_SIZE);
+		start += PAGE_SIZE;
+		size += PAGE_SIZE;
+	}
+	rangetime = mfctl(16) - rangetime;
+
+	printk(KERN_DEBUG "Whole TLB flush %lu cycles, flushing %lu bytes %lu cycles\n",
+		alltime, size, rangetime);
+
+	parisc_tlb_flush_threshold = size * alltime / rangetime;
+	parisc_tlb_flush_threshold *= num_online_cpus();
+	parisc_tlb_flush_threshold = PAGE_ALIGN(parisc_tlb_flush_threshold);
+	if (!parisc_tlb_flush_threshold)
+		parisc_tlb_flush_threshold = FLUSH_TLB_THRESHOLD;
+
+	printk(KERN_INFO "Setting TLB flush threshold to %lu kB\n",
+		parisc_tlb_flush_threshold/1024);
 }

 extern void purge_kernel_dcache_page_asm(unsigned long);
···
 }
 EXPORT_SYMBOL(copy_user_page);

-void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+/* __flush_tlb_range()
+ *
+ * returns 1 if all TLBs were flushed.
+ */
+int __flush_tlb_range(unsigned long sid, unsigned long start,
+		      unsigned long end)
 {
-	unsigned long flags;
+	unsigned long flags, size;

-	/* Note: purge_tlb_entries can be called at startup with
-	   no context. */
-
-	purge_tlb_start(flags);
-	mtsp(mm->context, 1);
-	pdtlb(addr);
-	pitlb(addr);
-	purge_tlb_end(flags);
-}
-EXPORT_SYMBOL(purge_tlb_entries);
-
-void __flush_tlb_range(unsigned long sid, unsigned long start,
-		       unsigned long end)
-{
-	unsigned long npages;
-
-	npages = ((end - (start & PAGE_MASK)) + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
-	if (npages >= 512)  /* 2MB of space: arbitrary, should be tuned */
+	size = (end - start);
+	if (size >= parisc_tlb_flush_threshold) {
 		flush_tlb_all();
-	else {
-		unsigned long flags;
+		return 1;
+	}

+	/* Purge TLB entries for small ranges using the pdtlb and
+	   pitlb instructions.  These instructions execute locally
+	   but cause a purge request to be broadcast to other TLBs.  */
+	if (likely(!split_tlb)) {
+		while (start < end) {
+			purge_tlb_start(flags);
+			mtsp(sid, 1);
+			pdtlb(start);
+			purge_tlb_end(flags);
+			start += PAGE_SIZE;
+		}
+		return 0;
+	}
+
+	/* split TLB case */
+	while (start < end) {
 		purge_tlb_start(flags);
 		mtsp(sid, 1);
-		if (split_tlb) {
-			while (npages--) {
-				pdtlb(start);
-				pitlb(start);
-				start += PAGE_SIZE;
-			}
-		} else {
-			while (npages--) {
-				pdtlb(start);
-				start += PAGE_SIZE;
-			}
-		}
+		pdtlb(start);
+		pitlb(start);
 		purge_tlb_end(flags);
+		start += PAGE_SIZE;
 	}
+	return 0;
 }

 static void cacheflush_h_tmp_function(void *dummy)
+79-84
arch/parisc/kernel/entry.S
···
 	.level 2.0
 #endif

-	.import		pa_dbit_lock,data
+	.import		pa_tlb_lock,data

 	/* space_to_prot macro creates a prot id from a space id */

···
 	SHLREG		%r9,PxD_VALUE_SHIFT,\pmd
 	extru		\va,31-PAGE_SHIFT,ASM_BITS_PER_PTE,\index
 	dep		%r0,31,PAGE_SHIFT,\pmd	/* clear offset */
-	shladd		\index,BITS_PER_PTE_ENTRY,\pmd,\pmd
-	LDREG		%r0(\pmd),\pte		/* pmd is now pte */
+	shladd		\index,BITS_PER_PTE_ENTRY,\pmd,\pmd /* pmd is now pte */
+	LDREG		%r0(\pmd),\pte
 	bb,>=,n		\pte,_PAGE_PRESENT_BIT,\fault
 	.endm
···
 	L2_ptep		\pgd,\pte,\index,\va,\fault
 	.endm

-	/* Acquire pa_dbit_lock lock. */
-	.macro		dbit_lock	spc,tmp,tmp1
+	/* Acquire pa_tlb_lock lock and recheck page is still present. */
+	.macro		tlb_lock	spc,ptp,pte,tmp,tmp1,fault
 #ifdef CONFIG_SMP
 	cmpib,COND(=),n	0,\spc,2f
-	load32		PA(pa_dbit_lock),\tmp
+	load32		PA(pa_tlb_lock),\tmp
 1:	LDCW		0(\tmp),\tmp1
 	cmpib,COND(=)	0,\tmp1,1b
 	nop
+	LDREG		0(\ptp),\pte
+	bb,<,n		\pte,_PAGE_PRESENT_BIT,2f
+	b		\fault
+	stw		\spc,0(\tmp)
 2:
 #endif
 	.endm

-	/* Release pa_dbit_lock lock without reloading lock address. */
-	.macro		dbit_unlock0	spc,tmp
+	/* Release pa_tlb_lock lock without reloading lock address. */
+	.macro		tlb_unlock0	spc,tmp
 #ifdef CONFIG_SMP
 	or,COND(=)	%r0,\spc,%r0
 	stw		\spc,0(\tmp)
 #endif
 	.endm

-	/* Release pa_dbit_lock lock. */
-	.macro		dbit_unlock1	spc,tmp
+	/* Release pa_tlb_lock lock. */
+	.macro		tlb_unlock1	spc,tmp
 #ifdef CONFIG_SMP
-	load32		PA(pa_dbit_lock),\tmp
-	dbit_unlock0	\spc,\tmp
+	load32		PA(pa_tlb_lock),\tmp
+	tlb_unlock0	\spc,\tmp
 #endif
 	.endm

 	/* Set the _PAGE_ACCESSED bit of the PTE.  Be clever and
 	 * don't needlessly dirty the cache line if it was already set */
-	.macro		update_ptep	spc,ptep,pte,tmp,tmp1
-#ifdef CONFIG_SMP
-	or,COND(=)	%r0,\spc,%r0
-	LDREG		0(\ptep),\pte
-#endif
+	.macro		update_accessed	ptp,pte,tmp,tmp1
 	ldi		_PAGE_ACCESSED,\tmp1
 	or		\tmp1,\pte,\tmp
 	and,COND(<>)	\tmp1,\pte,%r0
-	STREG		\tmp,0(\ptep)
+	STREG		\tmp,0(\ptp)
 	.endm

 	/* Set the dirty bit (and accessed bit).  No need to be
 	 * clever, this is only used from the dirty fault */
-	.macro		update_dirty	spc,ptep,pte,tmp
-#ifdef CONFIG_SMP
-	or,COND(=)	%r0,\spc,%r0
-	LDREG		0(\ptep),\pte
-#endif
+	.macro		update_dirty	ptp,pte,tmp
 	ldi		_PAGE_ACCESSED|_PAGE_DIRTY,\tmp
 	or		\tmp,\pte,\pte
-	STREG		\pte,0(\ptep)
+	STREG		\pte,0(\ptp)
 	.endm

 	/* bitshift difference between a PFN (based on kernel's PAGE_SIZE)
···

 	L3_ptep		ptp,pte,t0,va,dtlb_check_alias_20w

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_20w
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb	spc,pte,prot

 	idtlbt		pte,prot
-	dbit_unlock1	spc,t0

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L3_ptep		ptp,pte,t0,va,nadtlb_check_alias_20w

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_20w
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb	spc,pte,prot

 	idtlbt		pte,prot
-	dbit_unlock1	spc,t0

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,dtlb_check_alias_11

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_11
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb_11	spc,pte,prot

-	mfsp		%sr1,t0  /* Save sr1 so we can use it in tlb inserts */
+	mfsp		%sr1,t1  /* Save sr1 so we can use it in tlb inserts */
 	mtsp		spc,%sr1

 	idtlba		pte,(%sr1,va)
 	idtlbp		prot,(%sr1,va)

-	mtsp		t0, %sr1	/* Restore sr1 */
-	dbit_unlock1	spc,t0
+	mtsp		t1, %sr1	/* Restore sr1 */

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,nadtlb_check_alias_11

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_11
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb_11	spc,pte,prot

-
-	mfsp		%sr1,t0  /* Save sr1 so we can use it in tlb inserts */
+	mfsp		%sr1,t1  /* Save sr1 so we can use it in tlb inserts */
 	mtsp		spc,%sr1

 	idtlba		pte,(%sr1,va)
 	idtlbp		prot,(%sr1,va)

-	mtsp		t0, %sr1	/* Restore sr1 */
-	dbit_unlock1	spc,t0
+	mtsp		t1, %sr1	/* Restore sr1 */

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,dtlb_check_alias_20

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,dtlb_check_alias_20
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb	spc,pte,prot

-	f_extend	pte,t0
+	f_extend	pte,t1

 	idtlbt		pte,prot
-	dbit_unlock1	spc,t0

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,nadtlb_check_alias_20

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,nadtlb_check_alias_20
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb	spc,pte,prot

-	f_extend	pte,t0
+	f_extend	pte,t1

-	idtlbt		pte,prot
-	dbit_unlock1	spc,t0
+	idtlbt		pte,prot

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L3_ptep		ptp,pte,t0,va,itlb_fault

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,itlb_fault
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb	spc,pte,prot

 	iitlbt		pte,prot
-	dbit_unlock1	spc,t0

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L3_ptep		ptp,pte,t0,va,naitlb_check_alias_20w

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_20w
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb	spc,pte,prot

 	iitlbt		pte,prot
-	dbit_unlock1	spc,t0

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,itlb_fault

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,itlb_fault
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb_11	spc,pte,prot

-	mfsp		%sr1,t0  /* Save sr1 so we can use it in tlb inserts */
+	mfsp		%sr1,t1  /* Save sr1 so we can use it in tlb inserts */
 	mtsp		spc,%sr1

 	iitlba		pte,(%sr1,va)
 	iitlbp		prot,(%sr1,va)

-	mtsp		t0, %sr1	/* Restore sr1 */
-	dbit_unlock1	spc,t0
+	mtsp		t1, %sr1	/* Restore sr1 */

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,naitlb_check_alias_11

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_11
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb_11	spc,pte,prot

-	mfsp		%sr1,t0  /* Save sr1 so we can use it in tlb inserts */
+	mfsp		%sr1,t1  /* Save sr1 so we can use it in tlb inserts */
 	mtsp		spc,%sr1

 	iitlba		pte,(%sr1,va)
 	iitlbp		prot,(%sr1,va)

-	mtsp		t0, %sr1	/* Restore sr1 */
-	dbit_unlock1	spc,t0
+	mtsp		t1, %sr1	/* Restore sr1 */

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,itlb_fault

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,itlb_fault
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb	spc,pte,prot

-	f_extend	pte,t0
+	f_extend	pte,t1

 	iitlbt		pte,prot
-	dbit_unlock1	spc,t0

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,naitlb_check_alias_20

-	dbit_lock	spc,t0,t1
-	update_ptep	spc,ptp,pte,t0,t1
+	tlb_lock	spc,ptp,pte,t0,t1,naitlb_check_alias_20
+	update_accessed	ptp,pte,t0,t1

 	make_insert_tlb	spc,pte,prot

-	f_extend	pte,t0
+	f_extend	pte,t1

 	iitlbt		pte,prot
-	dbit_unlock1	spc,t0

+	tlb_unlock1	spc,t0
 	rfir
 	nop
···

 	L3_ptep		ptp,pte,t0,va,dbit_fault

-	dbit_lock	spc,t0,t1
-	update_dirty	spc,ptp,pte,t1
+	tlb_lock	spc,ptp,pte,t0,t1,dbit_fault
+	update_dirty	ptp,pte,t1

 	make_insert_tlb	spc,pte,prot

 	idtlbt		pte,prot
-	dbit_unlock0	spc,t0

+	tlb_unlock0	spc,t0
 	rfir
 	nop
 #else
···

 	L2_ptep		ptp,pte,t0,va,dbit_fault

-	dbit_lock	spc,t0,t1
-	update_dirty	spc,ptp,pte,t1
+	tlb_lock	spc,ptp,pte,t0,t1,dbit_fault
+	update_dirty	ptp,pte,t1

 	make_insert_tlb_11	spc,pte,prot

···
 	idtlbp		prot,(%sr1,va)

 	mtsp		t1, %sr1	/* Restore sr1 */
-	dbit_unlock0	spc,t0

+	tlb_unlock0	spc,t0
 	rfir
 	nop
···

 	L2_ptep		ptp,pte,t0,va,dbit_fault

-	dbit_lock	spc,t0,t1
-	update_dirty	spc,ptp,pte,t1
+	tlb_lock	spc,ptp,pte,t0,t1,dbit_fault
+	update_dirty	ptp,pte,t1

 	make_insert_tlb	spc,pte,prot

 	f_extend	pte,t1

-	idtlbt		pte,prot
-	dbit_unlock0	spc,t0
+	idtlbt		pte,prot

+	tlb_unlock0	spc,t0
 	rfir
 	nop
 #endif
-4
arch/parisc/kernel/traps.c
···

 #include "../math-emu/math-emu.h"	/* for handle_fpe() */

-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-DEFINE_SPINLOCK(pa_dbit_lock);
-#endif
-
 static void parisc_show_stack(struct task_struct *task, unsigned long *sp,
 	struct pt_regs *regs);

+21-10
arch/powerpc/kernel/idle_power7.S
···
 	.text

 /*
+ * Used by threads when the lock bit of core_idle_state is set.
+ * Threads will spin in HMT_LOW until the lock bit is cleared.
+ * r14 - pointer to core_idle_state
+ * r15 - used to load contents of core_idle_state
+ */
+
+core_idle_lock_held:
+	HMT_LOW
+3:	lwz	r15,0(r14)
+	andi.	r15,r15,PNV_CORE_IDLE_LOCK_BIT
+	bne	3b
+	HMT_MEDIUM
+	lwarx	r15,0,r14
+	blr
+
+/*
  * Pass requested state in r3:
  *	r3 - PNV_THREAD_NAP/SLEEP/WINKLE
  *
···
 	ld	r14,PACA_CORE_IDLE_STATE_PTR(r13)
 lwarx_loop1:
 	lwarx	r15,0,r14
+
+	andi.	r9,r15,PNV_CORE_IDLE_LOCK_BIT
+	bnel	core_idle_lock_held
+
 	andc	r15,r15,r7			/* Clear thread bit */

 	andi.	r15,r15,PNV_CORE_IDLE_THREAD_BITS
···
 	 * workaround undo code or resyncing timebase or restoring context
 	 * In either case loop until the lock bit is cleared.
 	 */
-	bne	core_idle_lock_held
+	bnel	core_idle_lock_held

 	cmpwi	cr2,r15,0
 	lbz	r4,PACA_SUBCORE_SIBLING_MASK(r13)
···
 	bne-	lwarx_loop2
 	isync
 	b	common_exit
-
-core_idle_lock_held:
-	HMT_LOW
-core_idle_lock_loop:
-	lwz	r15,0(14)
-	andi.	r9,r15,PNV_CORE_IDLE_LOCK_BIT
-	bne	core_idle_lock_loop
-	HMT_MEDIUM
-	b	lwarx_loop2

 first_thread_in_subcore:
 	/* First thread in subcore to wakeup */
···
 		unsigned long lap  : 1; /* Low-address-protection control */
 		unsigned long	   : 4;
 		unsigned long edat : 1; /* Enhanced-DAT-enablement control */
-		unsigned long	   : 23;
+		unsigned long	   : 4;
+		unsigned long afp  : 1; /* AFP-register control */
+		unsigned long vx   : 1; /* Vector enablement control */
+		unsigned long	   : 17;
 	};
 };

···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_S390_MM_ARCH_HOOKS_H
-#define _ASM_S390_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_S390_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_SCORE_MM_ARCH_HOOKS_H
-#define _ASM_SCORE_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_SCORE_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_SH_MM_ARCH_HOOKS_H
-#define _ASM_SH_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_SH_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_SPARC_MM_ARCH_HOOKS_H
-#define _ASM_SPARC_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_SPARC_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_TILE_MM_ARCH_HOOKS_H
-#define _ASM_TILE_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_TILE_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_UM_MM_ARCH_HOOKS_H
-#define _ASM_UM_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_UM_MM_ARCH_HOOKS_H */
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_UNICORE32_MM_ARCH_HOOKS_H
-#define _ASM_UNICORE32_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_UNICORE32_MM_ARCH_HOOKS_H */
+7-1
arch/x86/Kconfig
···
 	select ARCH_USE_CMPXCHG_LOCKREF		if X86_64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
+	select ARCH_WANTS_DYNAMIC_TASK_STRUCT
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_IPC_PARSE_VERSION	if X86_32
 	select ARCH_WANT_OPTIONAL_GPIOLIB
···

 config ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	def_bool y
+
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdffffc0000000000

 config HAVE_INTEL_TXT
 	def_bool y
···
 	  To compile command line arguments into the kernel,
 	  set this option to 'Y', then fill in the
-	  the boot arguments in CONFIG_CMDLINE.
+	  boot arguments in CONFIG_CMDLINE.

 	  Systems with fully functional boot loaders (i.e. non-embedded)
 	  should leave this option set to 'N'.
+12
arch/x86/Kconfig.debug
···297297298298 If unsure, say N.299299300300+config DEBUG_ENTRY301301+ bool "Debug low-level entry code"302302+ depends on DEBUG_KERNEL303303+ ---help---304304+ This option enables sanity checks in x86's low-level entry code.305305+ Some of these sanity checks may slow down kernel entries and306306+ exits or otherwise impact performance.307307+308308+ This is currently used to help test NMI code.309309+310310+ If unsure, say N.311311+300312config DEBUG_NMI_SELFTEST301313 bool "NMI Selftest"302314 depends on DEBUG_KERNEL && X86_LOCAL_APIC
+202-103
arch/x86/entry/entry_64.S
···12371237 * If the variable is not set and the stack is not the NMI12381238 * stack then:12391239 * o Set the special variable on the stack12401240- * o Copy the interrupt frame into a "saved" location on the stack12411241- * o Copy the interrupt frame into a "copy" location on the stack12401240+ * o Copy the interrupt frame into an "outermost" location on the12411241+ * stack12421242+ * o Copy the interrupt frame into an "iret" location on the stack12421243 * o Continue processing the NMI12431244 * If the variable is set or the previous stack is the NMI stack:12441244- * o Modify the "copy" location to jump to the repeate_nmi12451245+ * o Modify the "iret" location to jump to the repeat_nmi12451246 * o return back to the first NMI12461247 *12471248 * Now on exit of the first NMI, we first clear the stack variable···12511250 * a nested NMI that updated the copy interrupt stack frame, a12521251 * jump will be made to the repeat_nmi code that will handle the second12531252 * NMI.12531253+ *12541254+ * However, espfix prevents us from directly returning to userspace12551255+ * with a single IRET instruction. Similarly, IRET to user mode12561256+ * can fault. We therefore handle NMIs from user space like12571257+ * other IST entries.12541258 */1255125912561260 /* Use %rdx as our temp variable throughout */12571261 pushq %rdx1258126212591259- /*12601260- * If %cs was not the kernel segment, then the NMI triggered in user12611261- * space, which means it is definitely not nested.12621262- */12631263- cmpl $__KERNEL_CS, 16(%rsp)12641264- jne first_nmi12631263+ testb $3, CS-RIP+8(%rsp)12641264+ jz .Lnmi_from_kernel1265126512661266 /*12671267- * Check the special variable on the stack to see if NMIs are12681268- * executing.12671267+ * NMI from user mode. 
We need to run on the thread stack, but we12681268+ * can't go through the normal entry paths: NMIs are masked, and12691269+ * we don't want to enable interrupts, because then we'll end12701270+ * up in an awkward situation in which IRQs are on but NMIs12711271+ * are off.12721272+ */12731273+12741274+ SWAPGS12751275+ cld12761276+ movq %rsp, %rdx12771277+ movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp12781278+ pushq 5*8(%rdx) /* pt_regs->ss */12791279+ pushq 4*8(%rdx) /* pt_regs->rsp */12801280+ pushq 3*8(%rdx) /* pt_regs->flags */12811281+ pushq 2*8(%rdx) /* pt_regs->cs */12821282+ pushq 1*8(%rdx) /* pt_regs->rip */12831283+ pushq $-1 /* pt_regs->orig_ax */12841284+ pushq %rdi /* pt_regs->di */12851285+ pushq %rsi /* pt_regs->si */12861286+ pushq (%rdx) /* pt_regs->dx */12871287+ pushq %rcx /* pt_regs->cx */12881288+ pushq %rax /* pt_regs->ax */12891289+ pushq %r8 /* pt_regs->r8 */12901290+ pushq %r9 /* pt_regs->r9 */12911291+ pushq %r10 /* pt_regs->r10 */12921292+ pushq %r11 /* pt_regs->r11 */12931293+ pushq %rbx /* pt_regs->rbx */12941294+ pushq %rbp /* pt_regs->rbp */12951295+ pushq %r12 /* pt_regs->r12 */12961296+ pushq %r13 /* pt_regs->r13 */12971297+ pushq %r14 /* pt_regs->r14 */12981298+ pushq %r15 /* pt_regs->r15 */12991299+13001300+ /*13011301+ * At this point we no longer need to worry about stack damage13021302+ * due to nesting -- we're on the normal thread stack and we're13031303+ * done with the NMI stack.13041304+ */13051305+13061306+ movq %rsp, %rdi13071307+ movq $-1, %rsi13081308+ call do_nmi13091309+13101310+ /*13111311+ * Return back to user mode. We must *not* do the normal exit13121312+ * work, because we don't want to enable interrupts. 
Fortunately,13131313+ * do_nmi doesn't modify pt_regs.13141314+ */13151315+ SWAPGS13161316+ jmp restore_c_regs_and_iret13171317+13181318+.Lnmi_from_kernel:13191319+ /*13201320+ * Here's what our stack frame will look like:13211321+ * +---------------------------------------------------------+13221322+ * | original SS |13231323+ * | original Return RSP |13241324+ * | original RFLAGS |13251325+ * | original CS |13261326+ * | original RIP |13271327+ * +---------------------------------------------------------+13281328+ * | temp storage for rdx |13291329+ * +---------------------------------------------------------+13301330+ * | "NMI executing" variable |13311331+ * +---------------------------------------------------------+13321332+ * | iret SS } Copied from "outermost" frame |13331333+ * | iret Return RSP } on each loop iteration; overwritten |13341334+ * | iret RFLAGS } by a nested NMI to force another |13351335+ * | iret CS } iteration if needed. |13361336+ * | iret RIP } |13371337+ * +---------------------------------------------------------+13381338+ * | outermost SS } initialized in first_nmi; |13391339+ * | outermost Return RSP } will not be changed before |13401340+ * | outermost RFLAGS } NMI processing is done. |13411341+ * | outermost CS } Copied to "iret" frame on each |13421342+ * | outermost RIP } iteration. |13431343+ * +---------------------------------------------------------+13441344+ * | pt_regs |13451345+ * +---------------------------------------------------------+13461346+ *13471347+ * The "original" frame is used by hardware. 
Before re-enabling13481348+ * NMIs, we need to be done with it, and we need to leave enough13491349+ * space for the asm code here.13501350+ *13511351+ * We return by executing IRET while RSP points to the "iret" frame.13521352+ * That will either return for real or it will loop back into NMI13531353+ * processing.13541354+ *13551355+ * The "outermost" frame is copied to the "iret" frame on each13561356+ * iteration of the loop, so each iteration starts with the "iret"13571357+ * frame pointing to the final return target.13581358+ */13591359+13601360+ /*13611361+ * Determine whether we're a nested NMI.13621362+ *13631363+ * If we interrupted kernel code between repeat_nmi and13641364+ * end_repeat_nmi, then we are a nested NMI. We must not13651365+ * modify the "iret" frame because it's being written by13661366+ * the outer NMI. That's okay; the outer NMI handler is13671367+ * about to call do_nmi anyway, so we can just13681368+ * resume the outer NMI.13691369+ */13701370+13711371+ movq $repeat_nmi, %rdx13721372+ cmpq 8(%rsp), %rdx13731373+ ja 1f13741374+ movq $end_repeat_nmi, %rdx13751375+ cmpq 8(%rsp), %rdx13761376+ ja nested_nmi_out13771377+1:13781378+13791379+ /*13801380+ * Now check "NMI executing". If it's set, then we're nested.13811381+ * This will not detect if we interrupted an outer NMI just13821382+ * before IRET.12691383 */12701384 cmpl $1, -8(%rsp)12711385 je nested_nmi1272138612731387 /*12741274- * Now test if the previous stack was an NMI stack.12751275- * We need the double check. We check the NMI stack to satisfy the12761276- * race when the first NMI clears the variable before returning.12771277- * We check the variable because the first NMI could be in a12781278- * breakpoint routine using a breakpoint stack.13881388+ * Now test if the previous stack was an NMI stack. This covers13891389+ * the case where we interrupt an outer NMI after it clears13901390+ * "NMI executing" but before IRET. 
We need to be careful, though:13911391+ * there is one case in which RSP could point to the NMI stack13921392+ * despite there being no NMI active: naughty userspace controls13931393+ * RSP at the very beginning of the SYSCALL targets. We can13941394+ * pull a fast one on naughty userspace, though: we program13951395+ * SYSCALL to mask DF, so userspace cannot cause DF to be set13961396+ * if it controls the kernel's RSP. We set DF before we clear13971397+ * "NMI executing".12791398 */12801399 lea 6*8(%rsp), %rdx12811400 /* Compare the NMI stack (rdx) with the stack we came from (4*8(%rsp)) */···14071286 cmpq %rdx, 4*8(%rsp)14081287 /* If it is below the NMI stack, it is a normal NMI */14091288 jb first_nmi14101410- /* Ah, it is within the NMI stack, treat it as nested */12891289+12901290+ /* Ah, it is within the NMI stack. */12911291+12921292+ testb $(X86_EFLAGS_DF >> 8), (3*8 + 1)(%rsp)12931293+ jz first_nmi /* RSP was user controlled. */12941294+12951295+ /* This is a nested NMI. */1411129614121297nested_nmi:14131298 /*14141414- * Do nothing if we interrupted the fixup in repeat_nmi.14151415- * It's about to repeat the NMI handler, so we are fine14161416- * with ignoring this one.12991299+ * Modify the "iret" frame to point to repeat_nmi, forcing another13001300+ * iteration of NMI handling.14171301 */14181418- movq $repeat_nmi, %rdx14191419- cmpq 8(%rsp), %rdx14201420- ja 1f14211421- movq $end_repeat_nmi, %rdx14221422- cmpq 8(%rsp), %rdx14231423- ja nested_nmi_out14241424-14251425-1:14261426- /* Set up the interrupted NMIs stack to jump to repeat_nmi */14271427- leaq -1*8(%rsp), %rdx14281428- movq %rdx, %rsp13021302+ subq $8, %rsp14291303 leaq -10*8(%rsp), %rdx14301304 pushq $__KERNEL_DS14311305 pushq %rdx···14341318nested_nmi_out:14351319 popq %rdx1436132014371437- /* No need to check faults here */13211321+ /* We are returning to kernel mode, so this cannot result in a fault. 
*/14381322 INTERRUPT_RETURN1439132314401324first_nmi:14411441- /*14421442- * Because nested NMIs will use the pushed location that we14431443- * stored in rdx, we must keep that space available.14441444- * Here's what our stack frame will look like:14451445- * +-------------------------+14461446- * | original SS |14471447- * | original Return RSP |14481448- * | original RFLAGS |14491449- * | original CS |14501450- * | original RIP |14511451- * +-------------------------+14521452- * | temp storage for rdx |14531453- * +-------------------------+14541454- * | NMI executing variable |14551455- * +-------------------------+14561456- * | copied SS |14571457- * | copied Return RSP |14581458- * | copied RFLAGS |14591459- * | copied CS |14601460- * | copied RIP |14611461- * +-------------------------+14621462- * | Saved SS |14631463- * | Saved Return RSP |14641464- * | Saved RFLAGS |14651465- * | Saved CS |14661466- * | Saved RIP |14671467- * +-------------------------+14681468- * | pt_regs |14691469- * +-------------------------+14701470- *14711471- * The saved stack frame is used to fix up the copied stack frame14721472- * that a nested NMI may change to make the interrupted NMI iret jump14731473- * to the repeat_nmi. The original stack frame and the temp storage14741474- * is also used by nested NMIs and can not be trusted on exit.14751475- */14761476- /* Do not pop rdx, nested NMIs will corrupt that part of the stack */13251325+ /* Restore rdx. */14771326 movq (%rsp), %rdx1478132714791479- /* Set the NMI executing variable on the stack. */14801480- pushq $113281328+ /* Make room for "NMI executing". 
*/13291329+ pushq $01481133014821482- /* Leave room for the "copied" frame */13311331+ /* Leave room for the "iret" frame */14831332 subq $(5*8), %rsp1484133314851485- /* Copy the stack frame to the Saved frame */13341334+ /* Copy the "original" frame to the "outermost" frame */14861335 .rept 514871336 pushq 11*8(%rsp)14881337 .endr1489133814901339 /* Everything up to here is safe from nested NMIs */1491134013411341+#ifdef CONFIG_DEBUG_ENTRY13421342+ /*13431343+ * For ease of testing, unmask NMIs right away. Disabled by13441344+ * default because IRET is very expensive.13451345+ */13461346+ pushq $0 /* SS */13471347+ pushq %rsp /* RSP (minus 8 because of the previous push) */13481348+ addq $8, (%rsp) /* Fix up RSP */13491349+ pushfq /* RFLAGS */13501350+ pushq $__KERNEL_CS /* CS */13511351+ pushq $1f /* RIP */13521352+ INTERRUPT_RETURN /* continues at repeat_nmi below */13531353+1:13541354+#endif13551355+13561356+repeat_nmi:14921357 /*14931358 * If there was a nested NMI, the first NMI's iret will return14941359 * here. But NMIs are still enabled and we can take another···14781381 * it will just return, as we are about to repeat an NMI anyway.14791382 * This makes it safe to copy to the stack frame that a nested14801383 * NMI will update.13841384+ *13851385+ * RSP is pointing to "outermost RIP". gsbase is unknown, but, if13861386+ * we're repeating an NMI, gsbase has the same value that it had on13871387+ * the first iteration. paranoid_entry will load the kernel13881388+ * gsbase if needed before we call do_nmi. "NMI executing"13891389+ * is zero.14811390 */14821482-repeat_nmi:14831483- /*14841484- * Update the stack variable to say we are still in NMI (the update14851485- * is benign for the non-repeat case, where 1 was pushed just above14861486- * to this very stack slot).14871487- */14881488- movq $1, 10*8(%rsp)13911391+ movq $1, 10*8(%rsp) /* Set "NMI executing". 
*/1489139214901490- /* Make another copy, this one may be modified by nested NMIs */13931393+ /*13941394+ * Copy the "outermost" frame to the "iret" frame. NMIs that nest13951395+ * here must not modify the "iret" frame while we're writing to13961396+ * it or it will end up containing garbage.13971397+ */14911398 addq $(10*8), %rsp14921399 .rept 514931400 pushq -6*8(%rsp)···15001399end_repeat_nmi:1501140015021401 /*15031503- * Everything below this point can be preempted by a nested15041504- * NMI if the first NMI took an exception and reset our iret stack15051505- * so that we repeat another NMI.14021402+ * Everything below this point can be preempted by a nested NMI.14031403+ * If this happens, then the inner NMI will change the "iret"14041404+ * frame to point back to repeat_nmi.15061405 */15071406 pushq $-1 /* ORIG_RAX: no syscall to restart */15081407 ALLOC_PT_GPREGS_ON_STACK···15161415 */15171416 call paranoid_entry1518141715191519- /*15201520- * Save off the CR2 register. If we take a page fault in the NMI then15211521- * it could corrupt the CR2 value. If the NMI preempts a page fault15221522- * handler before it was able to read the CR2 register, and then the15231523- * NMI itself takes a page fault, the page fault that was preempted15241524- * will read the information from the NMI page fault and not the15251525- * origin fault. Save it off and restore it if it changes.15261526- * Use the r12 callee-saved register.15271527- */15281528- movq %cr2, %r1215291529-15301418 /* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */15311419 movq %rsp, %rdi15321420 movq $-1, %rsi15331421 call do_nmi1534142215351535- /* Did the NMI take a page fault? Restore cr2 if it did */15361536- movq %cr2, %rcx15371537- cmpq %rcx, %r1215381538- je 1f15391539- movq %r12, %cr215401540-1:15411423 testl %ebx, %ebx /* swapgs needed? 
*/15421424 jnz nmi_restore15431425nmi_swapgs:···15281444nmi_restore:15291445 RESTORE_EXTRA_REGS15301446 RESTORE_C_REGS15311531- /* Pop the extra iret frame at once */14471447+14481448+ /* Point RSP at the "iret" frame. */15321449 REMOVE_PT_GPREGS_FROM_STACK 6*81533145015341534- /* Clear the NMI executing stack variable */15351535- movq $0, 5*8(%rsp)14511451+ /*14521452+ * Clear "NMI executing". Set DF first so that we can easily14531453+ * distinguish the remaining code between here and IRET from14541454+ * the SYSCALL entry and exit paths. On a native kernel, we14551455+ * could just inspect RIP, but, on paravirt kernels,14561456+ * INTERRUPT_RETURN can translate into a jump into a14571457+ * hypercall page.14581458+ */14591459+ std14601460+ movq $0, 5*8(%rsp) /* clear "NMI executing" */14611461+14621462+ /*14631463+ * INTERRUPT_RETURN reads the "iret" frame and exits the NMI14641464+ * stack in a single instruction. We are returning to kernel14651465+ * mode, so this cannot result in a fault.14661466+ */15361467 INTERRUPT_RETURN15371468END(nmi)15381469
···189189 struct fxregs_state fxsave;190190 struct swregs_state soft;191191 struct xregs_state xsave;192192+ u8 __padding[PAGE_SIZE];192193};193194194195/*···198197 * state fields:199198 */200199struct fpu {201201- /*202202- * @state:203203- *204204- * In-memory copy of all FPU registers that we save/restore205205- * over context switches. If the task is using the FPU then206206- * the registers in the FPU are more recent than this state207207- * copy. If the task context-switches away then they get208208- * saved here and represent the FPU state.209209- *210210- * After context switches there may be a (short) time period211211- * during which the in-FPU hardware registers are unchanged212212- * and still perfectly match this state, if the tasks213213- * scheduled afterwards are not using the FPU.214214- *215215- * This is the 'lazy restore' window of optimization, which216216- * we track though 'fpu_fpregs_owner_ctx' and 'fpu->last_cpu'.217217- *218218- * We detect whether a subsequent task uses the FPU via setting219219- * CR0::TS to 1, which causes any FPU use to raise a #NM fault.220220- *221221- * During this window, if the task gets scheduled again, we222222- * might be able to skip having to do a restore from this223223- * memory buffer to the hardware registers - at the cost of224224- * incurring the overhead of #NM fault traps.225225- *226226- * Note that on modern CPUs that support the XSAVEOPT (or other227227- * optimized XSAVE instructions), we don't use #NM traps anymore,228228- * as the hardware can track whether FPU registers need saving229229- * or not. On such CPUs we activate the non-lazy ('eagerfpu')230230- * logic, which unconditionally saves/restores all FPU state231231- * across context switches. 
(if FPU state exists.)232232- */233233- union fpregs_state state;234234-235200 /*236201 * @last_cpu:237202 *···255288 * deal with bursty apps that only use the FPU for a short time:256289 */257290 unsigned char counter;291291+ /*292292+ * @state:293293+ *294294+ * In-memory copy of all FPU registers that we save/restore295295+ * over context switches. If the task is using the FPU then296296+ * the registers in the FPU are more recent than this state297297+ * copy. If the task context-switches away then they get298298+ * saved here and represent the FPU state.299299+ *300300+ * After context switches there may be a (short) time period301301+ * during which the in-FPU hardware registers are unchanged302302+ * and still perfectly match this state, if the tasks303303+ * scheduled afterwards are not using the FPU.304304+ *305305+ * This is the 'lazy restore' window of optimization, which306306+ * we track though 'fpu_fpregs_owner_ctx' and 'fpu->last_cpu'.307307+ *308308+ * We detect whether a subsequent task uses the FPU via setting309309+ * CR0::TS to 1, which causes any FPU use to raise a #NM fault.310310+ *311311+ * During this window, if the task gets scheduled again, we312312+ * might be able to skip having to do a restore from this313313+ * memory buffer to the hardware registers - at the cost of314314+ * incurring the overhead of #NM fault traps.315315+ *316316+ * Note that on modern CPUs that support the XSAVEOPT (or other317317+ * optimized XSAVE instructions), we don't use #NM traps anymore,318318+ * as the hardware can track whether FPU registers need saving319319+ * or not. On such CPUs we activate the non-lazy ('eagerfpu')320320+ * logic, which unconditionally saves/restores all FPU state321321+ * across context switches. (if FPU state exists.)322322+ */323323+ union fpregs_state state;324324+ /*325325+ * WARNING: 'state' is dynamically-sized. Do not put326326+ * anything after it here.327327+ */258328};259329260330#endif /* _ASM_X86_FPU_H */
-27
arch/x86/include/asm/intel_pmc_ipc.h
···25252626#if IS_ENABLED(CONFIG_INTEL_PMC_IPC)27272828-/*2929- * intel_pmc_ipc_simple_command3030- * @cmd: command3131- * @sub: sub type3232- */3328int intel_pmc_ipc_simple_command(int cmd, int sub);3434-3535-/*3636- * intel_pmc_ipc_raw_cmd3737- * @cmd: command3838- * @sub: sub type3939- * @in: input data4040- * @inlen: input length in bytes4141- * @out: output data4242- * @outlen: output length in dwords4343- * @sptr: data writing to SPTR register4444- * @dptr: data writing to DPTR register4545- */4629int intel_pmc_ipc_raw_cmd(u32 cmd, u32 sub, u8 *in, u32 inlen,4730 u32 *out, u32 outlen, u32 dptr, u32 sptr);4848-4949-/*5050- * intel_pmc_ipc_command5151- * @cmd: command5252- * @sub: sub type5353- * @in: input data5454- * @inlen: input length in bytes5555- * @out: output data5656- * @outlen: output length in dwords5757- */5831int intel_pmc_ipc_command(u32 cmd, u32 sub, u8 *in, u32 inlen,5932 u32 *out, u32 outlen);6033
···11-/*22- * Architecture specific mm hooks33- *44- * Copyright (C) 2015, IBM Corporation55- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>66- *77- * This program is free software; you can redistribute it and/or modify88- * it under the terms of the GNU General Public License version 2 as99- * published by the Free Software Foundation.1010- */1111-1212-#ifndef _ASM_X86_MM_ARCH_HOOKS_H1313-#define _ASM_X86_MM_ARCH_HOOKS_H1414-1515-#endif /* _ASM_X86_MM_ARCH_HOOKS_H */
+1-1
arch/x86/include/asm/mmu_context.h
···23232424static inline void load_mm_cr4(struct mm_struct *mm)2525{2626- if (static_key_true(&rdpmc_always_available) ||2626+ if (static_key_false(&rdpmc_always_available) ||2727 atomic_read(&mm->context.perf_rdpmc_allowed))2828 cr4_set_bits(X86_CR4_PCE);2929 else
+7-3
arch/x86/include/asm/processor.h
···390390#endif391391 unsigned long gs;392392393393- /* Floating point and extended processor state */394394- struct fpu fpu;395395-396393 /* Save middle states of ptrace breakpoints */397394 struct perf_event *ptrace_bps[HBP_NUM];398395 /* Debug status used for traps, single steps, etc... */···415418 unsigned long iopl;416419 /* Max allowed port in the bitmap, in bytes: */417420 unsigned io_bitmap_max;421421+422422+ /* Floating point and extended processor state */423423+ struct fpu fpu;424424+ /*425425+ * WARNING: 'fpu' is dynamically-sized. It *MUST* be at426426+ * the end.427427+ */418428};419429420430/*
+2
arch/x86/include/uapi/asm/hyperv.h
···108108#define HV_X64_HYPERCALL_PARAMS_XMM_AVAILABLE (1 << 4)109109/* Support for a virtual guest idle state is available */110110#define HV_X64_GUEST_IDLE_STATE_AVAILABLE (1 << 5)111111+/* Guest crash data handler available */112112+#define HV_X64_GUEST_CRASH_MSR_AVAILABLE (1 << 10)111113112114/*113115 * Implementation recommendations. Indicates which behaviors the hypervisor
+2-8
arch/x86/kernel/apic/vector.c
···409409 int irq, vector;410410 struct apic_chip_data *data;411411412412- /*413413- * vector_lock will make sure that we don't run into irq vector414414- * assignments that might be happening on another cpu in parallel,415415- * while we setup our initial vector to irq mappings.416416- */417417- raw_spin_lock(&vector_lock);418412 /* Mark the inuse vectors */419413 for_each_active_irq(irq) {420414 data = apic_chip_data(irq_get_irq_data(irq));···430436 if (!cpumask_test_cpu(cpu, data->domain))431437 per_cpu(vector_irq, cpu)[vector] = VECTOR_UNDEFINED;432438 }433433- raw_spin_unlock(&vector_lock);434439}435440436441/*437437- * Setup the vector to irq mappings.442442+ * Setup the vector to irq mappings. Must be called with vector_lock held.438443 */439444void setup_vector_irq(int cpu)440445{441446 int irq;442447448448+ lockdep_assert_held(&vector_lock);443449 /*444450 * On most of the platforms, legacy PIC delivers the interrupts on the445451 * boot cpu. But there are certain platforms where PIC interrupts are
+3-1
arch/x86/kernel/early_printk.c
···175175 }176176177177 if (*s) {178178- if (kstrtoul(s, 0, &baud) < 0 || baud == 0)178178+ baud = simple_strtoull(s, &e, 0);179179+180180+ if (baud == 0 || s == e)179181 baud = DEFAULT_BAUD;180182 }181183
+16-12
arch/x86/kernel/espfix_64.c
···131131 init_espfix_random();132132133133 /* The rest is the same as for any other processor */134134- init_espfix_ap();134134+ init_espfix_ap(0);135135}136136137137-void init_espfix_ap(void)137137+void init_espfix_ap(int cpu)138138{139139- unsigned int cpu, page;139139+ unsigned int page;140140 unsigned long addr;141141 pud_t pud, *pud_p;142142 pmd_t pmd, *pmd_p;143143 pte_t pte, *pte_p;144144- int n;144144+ int n, node;145145 void *stack_page;146146 pteval_t ptemask;147147148148 /* We only have to do this once... */149149- if (likely(this_cpu_read(espfix_stack)))149149+ if (likely(per_cpu(espfix_stack, cpu)))150150 return; /* Already initialized */151151152152- cpu = smp_processor_id();153152 addr = espfix_base_addr(cpu);154153 page = cpu/ESPFIX_STACKS_PER_PAGE;155154···164165 if (stack_page)165166 goto unlock_done;166167168168+ node = cpu_to_node(cpu);167169 ptemask = __supported_pte_mask;168170169171 pud_p = &espfix_pud_page[pud_index(addr)];170172 pud = *pud_p;171173 if (!pud_present(pud)) {172172- pmd_p = (pmd_t *)__get_free_page(PGALLOC_GFP);174174+ struct page *page = alloc_pages_node(node, PGALLOC_GFP, 0);175175+176176+ pmd_p = (pmd_t *)page_address(page);173177 pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask));174178 paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT);175179 for (n = 0; n < ESPFIX_PUD_CLONES; n++)···182180 pmd_p = pmd_offset(&pud, addr);183181 pmd = *pmd_p;184182 if (!pmd_present(pmd)) {185185- pte_p = (pte_t *)__get_free_page(PGALLOC_GFP);183183+ struct page *page = alloc_pages_node(node, PGALLOC_GFP, 0);184184+185185+ pte_p = (pte_t *)page_address(page);186186 pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask));187187 paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT);188188 for (n = 0; n < ESPFIX_PMD_CLONES; n++)···192188 }193189194190 pte_p = pte_offset_kernel(&pmd, addr);195195- stack_page = (void *)__get_free_page(GFP_KERNEL);191191+ stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0));196192 pte = 
__pte(__pa(stack_page) | (__PAGE_KERNEL_RO & ptemask));197193 for (n = 0; n < ESPFIX_PTE_CLONES; n++)198194 set_pte(&pte_p[n*PTE_STRIDE], pte);···203199unlock_done:204200 mutex_unlock(&espfix_init_mutex);205201done:206206- this_cpu_write(espfix_stack, addr);207207- this_cpu_write(espfix_waddr, (unsigned long)stack_page208208- + (addr & ~PAGE_MASK));202202+ per_cpu(espfix_stack, cpu) = addr;203203+ per_cpu(espfix_waddr, cpu) = (unsigned long)stack_page204204+ + (addr & ~PAGE_MASK);209205}
+40
arch/x86/kernel/fpu/init.c
···44#include <asm/fpu/internal.h>55#include <asm/tlbflush.h>6677+#include <linux/sched.h>88+79/*810 * Initialize the TS bit in CR0 according to the style of context-switches911 * we are using:···137135 */138136unsigned int xstate_size;139137EXPORT_SYMBOL_GPL(xstate_size);138138+139139+/* Enforce that 'MEMBER' is the last field of 'TYPE': */140140+#define CHECK_MEMBER_AT_END_OF(TYPE, MEMBER) \141141+ BUILD_BUG_ON(sizeof(TYPE) != offsetofend(TYPE, MEMBER))142142+143143+/*144144+ * We append the 'struct fpu' to the task_struct:145145+ */146146+static void __init fpu__init_task_struct_size(void)147147+{148148+ int task_size = sizeof(struct task_struct);149149+150150+ /*151151+ * Subtract off the static size of the register state.152152+ * It potentially has a bunch of padding.153153+ */154154+ task_size -= sizeof(((struct task_struct *)0)->thread.fpu.state);155155+156156+ /*157157+ * Add back the dynamically-calculated register state158158+ * size.159159+ */160160+ task_size += xstate_size;161161+162162+ /*163163+ * We dynamically size 'struct fpu', so we require that164164+ * it be at the end of 'thread_struct' and that165165+ * 'thread_struct' be at the end of 'task_struct'. If166166+ * you hit a compile error here, check the structure to167167+ * see if something got added to the end.168168+ */169169+ CHECK_MEMBER_AT_END_OF(struct fpu, state);170170+ CHECK_MEMBER_AT_END_OF(struct thread_struct, fpu);171171+ CHECK_MEMBER_AT_END_OF(struct task_struct, thread);172172+173173+ arch_task_struct_size = task_size;174174+}140175141176/*142177 * Set up the xstate_size based on the legacy FPU context size.···326287 fpu__init_system_generic();327288 fpu__init_system_xstate_size_legacy();328289 fpu__init_system_xstate();290290+ fpu__init_task_struct_size();329291330292 fpu__init_system_ctx_switch();331293}
+4-6
arch/x86/kernel/head64.c
···161161 /* Kill off the identity-map trampoline */162162 reset_early_page_tables();163163164164- kasan_map_early_shadow(early_level4_pgt);165165-166166- /* clear bss before set_intr_gate with early_idt_handler */167164 clear_bss();165165+166166+ clear_page(init_level4_pgt);167167+168168+ kasan_early_init();168169169170 for (i = 0; i < NUM_EXCEPTION_VECTORS; i++)170171 set_intr_gate(i, early_idt_handler_array[i]);···178177 */179178 load_ucode_bsp();180179181181- clear_page(init_level4_pgt);182180 /* set init_level4_pgt kernel high mapping*/183181 init_level4_pgt[511] = early_level4_pgt[511];184184-185185- kasan_map_early_shadow(init_level4_pgt);186182187183 x86_64_start_reservations(real_mode_data);188184}
-29
arch/x86/kernel/head_64.S
···516516 /* This must match the first entry in level2_kernel_pgt */517517 .quad 0x0000000000000000518518519519-#ifdef CONFIG_KASAN520520-#define FILL(VAL, COUNT) \521521- .rept (COUNT) ; \522522- .quad (VAL) ; \523523- .endr524524-525525-NEXT_PAGE(kasan_zero_pte)526526- FILL(kasan_zero_page - __START_KERNEL_map + _KERNPG_TABLE, 512)527527-NEXT_PAGE(kasan_zero_pmd)528528- FILL(kasan_zero_pte - __START_KERNEL_map + _KERNPG_TABLE, 512)529529-NEXT_PAGE(kasan_zero_pud)530530- FILL(kasan_zero_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512)531531-532532-#undef FILL533533-#endif534534-535535-536519#include "../../x86/xen/xen-head.S"537520538521 __PAGE_ALIGNED_BSS539522NEXT_PAGE(empty_zero_page)540523 .skip PAGE_SIZE541524542542-#ifdef CONFIG_KASAN543543-/*544544- * This page used as early shadow. We don't use empty_zero_page545545- * at early stages, stack instrumentation could write some garbage546546- * to this page.547547- * Latter we reuse it as zero shadow for large ranges of memory548548- * that allowed to access, but not instrumented by kasan549549- * (vmalloc/vmemmap ...).550550- */551551-NEXT_PAGE(kasan_zero_page)552552- .skip PAGE_SIZE553553-#endif
+18-2
arch/x86/kernel/irq.c
···347347 if (!desc)348348 continue;349349350350+ /*351351+ * Protect against concurrent action removal,352352+ * affinity changes etc.353353+ */354354+ raw_spin_lock(&desc->lock);350355 data = irq_desc_get_irq_data(desc);351356 cpumask_copy(&affinity_new, data->affinity);352357 cpumask_clear_cpu(this_cpu, &affinity_new);353358354359 /* Do not count inactive or per-cpu irqs. */355355- if (!irq_has_action(irq) || irqd_is_per_cpu(data))360360+ if (!irq_has_action(irq) || irqd_is_per_cpu(data)) {361361+ raw_spin_unlock(&desc->lock);356362 continue;363363+ }357364365365+ raw_spin_unlock(&desc->lock);358366 /*359367 * A single irq may be mapped to multiple360368 * cpu's vector_irq[] (for example IOAPIC cluster···393385 * vector. If the vector is marked in the used vectors394386 * bitmap or an irq is assigned to it, we don't count395387 * it as available.388388+ *389389+ * As this is an inaccurate snapshot anyway, we can do390390+ * this w/o holding vector_lock.396391 */397392 for (vector = FIRST_EXTERNAL_VECTOR;398393 vector < first_system_vector; vector++) {···497486 */498487 mdelay(1);499488489489+ /*490490+ * We can walk the vector array of this cpu without holding491491+ * vector_lock because the cpu is already marked !online, so492492+ * nothing else will touch it.493493+ */500494 for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {501495 unsigned int irr;502496···513497 irq = __this_cpu_read(vector_irq[vector]);514498515499 desc = irq_to_desc(irq);500500+ raw_spin_lock(&desc->lock);516501 data = irq_desc_get_irq_data(desc);517502 chip = irq_data_get_irq_chip(data);518518- raw_spin_lock(&desc->lock);519503 if (chip->irq_retrigger) {520504 chip->irq_retrigger(data);521505 __this_cpu_write(vector_irq[vector], VECTOR_RETRIGGERED);
+52-71
arch/x86/kernel/nmi.c
···
 NOKPROBE_SYMBOL(default_do_nmi);
 
 /*
- * NMIs can hit breakpoints which will cause it to lose its
- * NMI context with the CPU when the breakpoint does an iret.
- */
-#ifdef CONFIG_X86_32
-/*
- * For i386, NMIs use the same stack as the kernel, and we can
- * add a workaround to the iret problem in C (preventing nested
- * NMIs if an NMI takes a trap). Simply have 3 states the NMI
- * can be in:
+ * NMIs can page fault or hit breakpoints which will cause it to lose
+ * its NMI context with the CPU when the breakpoint or page fault does an IRET.
+ *
+ * As a result, NMIs can nest if NMIs get unmasked due an IRET during
+ * NMI processing.  On x86_64, the asm glue protects us from nested NMIs
+ * if the outer NMI came from kernel mode, but we can still nest if the
+ * outer NMI came from user mode.
+ *
+ * To handle these nested NMIs, we have three states:
 *
 *  1) not running
 *  2) executing
···
 * (Note, the latch is binary, thus multiple NMIs triggering,
 *  when one is running, are ignored. Only one NMI is restarted.)
 *
- * If an NMI hits a breakpoint that executes an iret, another
- * NMI can preempt it. We do not want to allow this new NMI
- * to run, but we want to execute it when the first one finishes.
- * We set the state to "latched", and the exit of the first NMI will
- * perform a dec_return, if the result is zero (NOT_RUNNING), then
- * it will simply exit the NMI handler. If not, the dec_return
- * would have set the state to NMI_EXECUTING (what we want it to
- * be when we are running). In this case, we simply jump back
- * to rerun the NMI handler again, and restart the 'latched' NMI.
+ * If an NMI executes an iret, another NMI can preempt it. We do not
+ * want to allow this new NMI to run, but we want to execute it when the
+ * first one finishes.  We set the state to "latched", and the exit of
+ * the first NMI will perform a dec_return, if the result is zero
+ * (NOT_RUNNING), then it will simply exit the NMI handler. If not, the
+ * dec_return would have set the state to NMI_EXECUTING (what we want it
+ * to be when we are running). In this case, we simply jump back to
+ * rerun the NMI handler again, and restart the 'latched' NMI.
 *
 * No trap (breakpoint or page fault) should be hit before nmi_restart,
 * thus there is no race between the first check of state for NOT_RUNNING
···
 static DEFINE_PER_CPU(enum nmi_states, nmi_state);
 static DEFINE_PER_CPU(unsigned long, nmi_cr2);
 
-#define nmi_nesting_preprocess(regs)					\
-	do {								\
-		if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {	\
-			this_cpu_write(nmi_state, NMI_LATCHED);		\
-			return;						\
-		}							\
-		this_cpu_write(nmi_state, NMI_EXECUTING);		\
-		this_cpu_write(nmi_cr2, read_cr2());			\
-	} while (0);							\
-	nmi_restart:
-
-#define nmi_nesting_postprocess()					\
-	do {								\
-		if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))	\
-			write_cr2(this_cpu_read(nmi_cr2));		\
-		if (this_cpu_dec_return(nmi_state))			\
-			goto nmi_restart;				\
-	} while (0)
-#else /* x86_64 */
+#ifdef CONFIG_X86_64
 /*
- * In x86_64 things are a bit more difficult. This has the same problem
- * where an NMI hitting a breakpoint that calls iret will remove the
- * NMI context, allowing a nested NMI to enter. What makes this more
- * difficult is that both NMIs and breakpoints have their own stack.
- * When a new NMI or breakpoint is executed, the stack is set to a fixed
- * point. If an NMI is nested, it will have its stack set at that same
- * fixed address that the first NMI had, and will start corrupting the
- * stack. This is handled in entry_64.S, but the same problem exists with
- * the breakpoint stack.
+ * In x86_64, we need to handle breakpoint -> NMI -> breakpoint.  Without
+ * some care, the inner breakpoint will clobber the outer breakpoint's
+ * stack.
 *
- * If a breakpoint is being processed, and the debug stack is being used,
- * if an NMI comes in and also hits a breakpoint, the stack pointer
- * will be set to the same fixed address as the breakpoint that was
- * interrupted, causing that stack to be corrupted. To handle this case,
- * check if the stack that was interrupted is the debug stack, and if
- * so, change the IDT so that new breakpoints will use the current stack
- * and not switch to the fixed address. On return of the NMI, switch back
- * to the original IDT.
+ * If a breakpoint is being processed, and the debug stack is being
+ * used, if an NMI comes in and also hits a breakpoint, the stack
+ * pointer will be set to the same fixed address as the breakpoint that
+ * was interrupted, causing that stack to be corrupted. To handle this
+ * case, check if the stack that was interrupted is the debug stack, and
+ * if so, change the IDT so that new breakpoints will use the current
+ * stack and not switch to the fixed address.  On return of the NMI,
+ * switch back to the original IDT.
 */
 static DEFINE_PER_CPU(int, update_debug_stack);
+#endif
 
-static inline void nmi_nesting_preprocess(struct pt_regs *regs)
+dotraplinkage notrace void
+do_nmi(struct pt_regs *regs, long error_code)
 {
+	if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
+		this_cpu_write(nmi_state, NMI_LATCHED);
+		return;
+	}
+	this_cpu_write(nmi_state, NMI_EXECUTING);
+	this_cpu_write(nmi_cr2, read_cr2());
+nmi_restart:
+
+#ifdef CONFIG_X86_64
 	/*
 	 * If we interrupted a breakpoint, it is possible that
 	 * the nmi handler will have breakpoints too. We need to
···
 		debug_stack_set_zero();
 		this_cpu_write(update_debug_stack, 1);
 	}
-}
-
-static inline void nmi_nesting_postprocess(void)
-{
-	if (unlikely(this_cpu_read(update_debug_stack))) {
-		debug_stack_reset();
-		this_cpu_write(update_debug_stack, 0);
-	}
-}
 #endif
-
-dotraplinkage notrace void
-do_nmi(struct pt_regs *regs, long error_code)
-{
-	nmi_nesting_preprocess(regs);
 
 	nmi_enter();
···
 
 	nmi_exit();
 
-	/* On i386, may loop back to preprocess */
-	nmi_nesting_postprocess();
+#ifdef CONFIG_X86_64
+	if (unlikely(this_cpu_read(update_debug_stack))) {
+		debug_stack_reset();
+		this_cpu_write(update_debug_stack, 0);
+	}
+#endif
+
+	if (unlikely(this_cpu_read(nmi_cr2) != read_cr2()))
+		write_cr2(this_cpu_read(nmi_cr2));
+	if (this_cpu_dec_return(nmi_state))
+		goto nmi_restart;
}
NOKPROBE_SYMBOL(do_nmi);
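The three-state latch described in the nmi.c comment can be exercised in isolation. The sketch below is a user-space model, not the kernel code: `do_nmi_sim()` and its `nest` parameter are invented stand-ins (the real nesting happens when hardware delivers a second NMI after an IRET reopens the window), and plain assignments stand in for `this_cpu_read`/`this_cpu_write`/`this_cpu_dec_return`.

```c
/* Mirror of the kernel's three NMI states (names match nmi.c). */
enum nmi_states { NMI_NOT_RUNNING = 0, NMI_EXECUTING, NMI_LATCHED };

static enum nmi_states nmi_state;

/* A second NMI that preempts the first does not run; it only latches. */
static void nested_nmi(void)
{
	if (nmi_state != NMI_NOT_RUNNING) {
		nmi_state = NMI_LATCHED;
		return;
	}
	nmi_state = NMI_EXECUTING;
}

/*
 * First NMI: run the handler, then do the dec_return dance on exit.
 * 'nest' injects one nested NMI during the first pass, modeling an
 * IRET window opened by a breakpoint or page fault.  Returns how many
 * times the handler body ran.
 */
static int do_nmi_sim(int nest)
{
	int handler_runs = 0;

	if (nmi_state != NMI_NOT_RUNNING) {
		nmi_state = NMI_LATCHED;
		return handler_runs;
	}
	nmi_state = NMI_EXECUTING;
nmi_restart:
	handler_runs++;		/* the actual NMI work */
	if (nest) {
		nest = 0;
		nested_nmi();	/* EXECUTING -> LATCHED */
	}
	/* exit path: decrement; nonzero means a latched NMI snuck in */
	nmi_state = (enum nmi_states)(nmi_state - 1);
	if (nmi_state != NMI_NOT_RUNNING)
		goto nmi_restart;
	return handler_runs;
}
```

With no nesting the handler runs once; with one latched NMI the exit decrement lands on `NMI_EXECUTING` and the handler reruns exactly once more, which is the "only one NMI is restarted" property the comment calls out.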
arch/x86/kernel/smpboot.c
···
 	apic_ap_setup();
 
 	/*
-	 * Need to setup vector mappings before we enable interrupts.
-	 */
-	setup_vector_irq(smp_processor_id());
-
-	/*
 	 * Save our processor parameters. Note: this information
 	 * is needed for clock calibration.
 	 */
···
 	check_tsc_sync_target();
 
 	/*
-	 * Enable the espfix hack for this CPU
-	 */
-#ifdef CONFIG_X86_ESPFIX64
-	init_espfix_ap();
-#endif
-
-	/*
-	 * We need to hold vector_lock so there the set of online cpus
-	 * does not change while we are assigning vectors to cpus.  Holding
-	 * this lock ensures we don't half assign or remove an irq from a cpu.
+	 * Lock vector_lock and initialize the vectors on this cpu
+	 * before setting the cpu online. We must set it online with
+	 * vector_lock held to prevent a concurrent setup/teardown
+	 * from seeing a half valid vector space.
 	 */
 	lock_vector_lock();
+	setup_vector_irq(smp_processor_id());
 	set_cpu_online(smp_processor_id(), true);
 	unlock_vector_lock();
 	cpu_set_state_online(smp_processor_id());
···
 	initial_code = (unsigned long)start_secondary;
 	stack_start  = idle->thread.sp;
 
+	/*
+	 * Enable the espfix hack for this CPU
+	 */
+#ifdef CONFIG_X86_ESPFIX64
+	init_espfix_ap(cpu);
+#endif
+
 	/* So we see what's up */
 	announce_cpu(cpu, apicid);
···
 
 	common_cpu_up(cpu, tidle);
 
+	/*
+	 * We have to walk the irq descriptors to setup the vector
+	 * space for the cpu which comes online.  Prevent irq
+	 * alloc/free across the bringup.
+	 */
+	irq_lock_sparse();
+
 	err = do_boot_cpu(apicid, cpu, tidle);
+
 	if (err) {
+		irq_unlock_sparse();
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
 		return -EIO;
 	}
···
 		cpu_relax();
 		touch_nmi_watchdog();
 	}
+
+	irq_unlock_sparse();
 
 	return 0;
}
+10-1
arch/x86/kernel/tsc.c
···
 		if (!pit_expect_msb(0xff-i, &delta, &d2))
 			break;
 
+		delta -= tsc;
+
+		/*
+		 * Extrapolate the error and fail fast if the error will
+		 * never be below 500 ppm.
+		 */
+		if (i == 1 &&
+		    d1 + d2 >= (delta * MAX_QUICK_PIT_ITERATIONS) >> 11)
+			return 0;
+
 		/*
 		 * Iterate until the error is less than 500 ppm
 		 */
-		delta -= tsc;
 		if (d1+d2 >= delta >> 11)
 			continue;
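The new early-abort check extrapolates: after the first iteration, `delta` can only grow roughly linearly with further iterations while `d1 + d2` stays put, so if the error already exceeds the budget at the extrapolated final `delta` (about 500 ppm, i.e. `delta >> 11`), calibration can never converge. A standalone sketch of just that arithmetic, with an assumed iteration budget (the kernel derives `MAX_QUICK_PIT_ITERATIONS` from `MAX_QUICK_PIT_MS` and the PIT tick rate):

```c
#include <stdint.h>

/* Assumed budget for illustration; the kernel computes its own value. */
#define MAX_QUICK_PIT_ITERATIONS 233

/*
 * True if, even at the extrapolated final delta, the measurement error
 * d1 + d2 cannot drop below ~500 ppm (delta >> 11 is 1/2048 of delta).
 */
static int hopeless_after_first_iteration(uint64_t d1, uint64_t d2,
					  uint64_t delta)
{
	return d1 + d2 >= (delta * MAX_QUICK_PIT_ITERATIONS) >> 11;
}
```

For example, with `delta = 2048` the final error budget is exactly `MAX_QUICK_PIT_ITERATIONS`, so an error of 300 ticks bails out immediately instead of wasting the full 50 ms calibration window.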
+2
arch/x86/kvm/cpuid.c
···
 		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
 
 	vcpu->arch.eager_fpu = use_eager_fpu() || guest_cpuid_has_mpx(vcpu);
+	if (vcpu->arch.eager_fpu)
+		kvm_x86_ops->fpu_activate(vcpu);
 
 	/*
 	 * The existing code assumes virtual address is 48-bit in the canonical
arch/x86/kvm/mmu.c
···
 	return 0;
}
 
+static bool kvm_is_mmio_pfn(pfn_t pfn)
+{
+	if (pfn_valid(pfn))
+		return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));
+
+	return true;
+}
+
 static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		    unsigned pte_access, int level,
 		    gfn_t gfn, pfn_t pfn, bool speculative,
···
 		spte |= PT_PAGE_SIZE_MASK;
 	if (tdp_enabled)
 		spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
-			kvm_is_reserved_pfn(pfn));
+			kvm_is_mmio_pfn(pfn));
 
 	if (host_writable)
 		spte |= SPTE_HOST_WRITEABLE;
+103-5
arch/x86/kvm/svm.c
···
 	set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
}
 
+#define MTRR_TYPE_UC_MINUS	7
+#define MTRR2PROTVAL_INVALID 0xff
+
+static u8 mtrr2protval[8];
+
+static u8 fallback_mtrr_type(int mtrr)
+{
+	/*
+	 * WT and WP aren't always available in the host PAT.  Treat
+	 * them as UC and UC- respectively.  Everything else should be
+	 * there.
+	 */
+	switch (mtrr)
+	{
+	case MTRR_TYPE_WRTHROUGH:
+		return MTRR_TYPE_UNCACHABLE;
+	case MTRR_TYPE_WRPROT:
+		return MTRR_TYPE_UC_MINUS;
+	default:
+		BUG();
+	}
+}
+
+static void build_mtrr2protval(void)
+{
+	int i;
+	u64 pat;
+
+	for (i = 0; i < 8; i++)
+		mtrr2protval[i] = MTRR2PROTVAL_INVALID;
+
+	/* Ignore the invalid MTRR types.  */
+	mtrr2protval[2] = 0;
+	mtrr2protval[3] = 0;
+
+	/*
+	 * Use host PAT value to figure out the mapping from guest MTRR
+	 * values to nested page table PAT/PCD/PWT values.  We do not
+	 * want to change the host PAT value every time we enter the
+	 * guest.
+	 */
+	rdmsrl(MSR_IA32_CR_PAT, pat);
+	for (i = 0; i < 8; i++) {
+		u8 mtrr = pat >> (8 * i);
+
+		if (mtrr2protval[mtrr] == MTRR2PROTVAL_INVALID)
+			mtrr2protval[mtrr] = __cm_idx2pte(i);
+	}
+
+	for (i = 0; i < 8; i++) {
+		if (mtrr2protval[i] == MTRR2PROTVAL_INVALID) {
+			u8 fallback = fallback_mtrr_type(i);
+			mtrr2protval[i] = mtrr2protval[fallback];
+			BUG_ON(mtrr2protval[i] == MTRR2PROTVAL_INVALID);
+		}
+	}
+}
+
 static __init int svm_hardware_setup(void)
{
 	int cpu;
···
 	} else
 		kvm_disable_tdp();
 
+	build_mtrr2protval();
 	return 0;
 
 err:
···
 	return target_tsc - tsc;
}
 
+static void svm_set_guest_pat(struct vcpu_svm *svm, u64 *g_pat)
+{
+	struct kvm_vcpu *vcpu = &svm->vcpu;
+
+	/* Unlike Intel, AMD takes the guest's CR0.CD into account.
+	 *
+	 * AMD does not have IPAT.  To emulate it for the case of guests
+	 * with no assigned devices, just set everything to WB.  If guests
+	 * have assigned devices, however, we cannot force WB for RAM
+	 * pages only, so use the guest PAT directly.
+	 */
+	if (!kvm_arch_has_assigned_device(vcpu->kvm))
+		*g_pat = 0x0606060606060606;
+	else
+		*g_pat = vcpu->arch.pat;
+}
+
+static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+{
+	u8 mtrr;
+
+	/*
+	 * 1. MMIO: trust guest MTRR, so same as item 3.
+	 * 2. No passthrough: always map as WB, and force guest PAT to WB as well
+	 * 3. Passthrough: can't guarantee the result, try to trust guest.
+	 */
+	if (!is_mmio && !kvm_arch_has_assigned_device(vcpu->kvm))
+		return 0;
+
+	mtrr = kvm_mtrr_get_guest_memory_type(vcpu, gfn);
+	return mtrr2protval[mtrr];
+}
+
 static void init_vmcb(struct vcpu_svm *svm, bool init_event)
{
 	struct vmcb_control_area *control = &svm->vmcb->control;
···
 		clr_cr_intercept(svm, INTERCEPT_CR3_READ);
 		clr_cr_intercept(svm, INTERCEPT_CR3_WRITE);
 		save->g_pat = svm->vcpu.arch.pat;
+		svm_set_guest_pat(svm, &save->g_pat);
 		save->cr3 = 0;
 		save->cr4 = 0;
 	}
···
 	case MSR_VM_IGNNE:
 		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
 		break;
+	case MSR_IA32_CR_PAT:
+		if (npt_enabled) {
+			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+				return 1;
+			vcpu->arch.pat = data;
+			svm_set_guest_pat(svm, &svm->vmcb->save.g_pat);
+			mark_dirty(svm->vmcb, VMCB_NPT);
+			break;
+		}
+		/* fall through */
 	default:
 		return kvm_set_msr_common(vcpu, msr);
 	}
···
 static bool svm_has_high_real_mode_segbase(void)
{
 	return true;
-}
-
-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
-{
-	return 0;
}
 
 static void svm_cpuid_update(struct kvm_vcpu *vcpu)
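The core of `build_mtrr2protval()` is a scan of the host PAT MSR: for each memory type, remember the first PAT slot that holds it. A toy user-space version of that scan (the real code stores PTE encodings via `__cm_idx2pte()`; here we just record the slot index, and `build_table()` is an invented name):

```c
#include <stdint.h>

#define MTRR2PROTVAL_INVALID 0xff

/*
 * For each memory type 0..7, record the index of the first 8-bit PAT
 * entry that holds that type; 0xff marks types absent from the PAT.
 */
static void build_table(uint64_t pat, uint8_t mtrr2protval[8])
{
	int i;

	for (i = 0; i < 8; i++)
		mtrr2protval[i] = MTRR2PROTVAL_INVALID;

	for (i = 0; i < 8; i++) {
		uint8_t type = (pat >> (8 * i)) & 0xff;

		if (type < 8 && mtrr2protval[type] == MTRR2PROTVAL_INVALID)
			mtrr2protval[type] = (uint8_t)i; /* slot index, not PTE bits */
	}
}
```

Running it against Linux's default PAT layout (`0x0007040600070406`: WB, WT, UC-, UC repeated) maps WB (6) to slot 0, WT (4) to slot 1, UC- (7) to slot 2 and UC (0) to slot 3, while WC (1) stays invalid until the fallback pass fills it, mirroring why the kernel needs `fallback_mtrr_type()` at all.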
+3-8
arch/x86/kvm/vmx.c
···
 	u64 ipat = 0;
 
 	/* For VT-d and EPT combination
-	 * 1. MMIO: always map as UC
+	 * 1. MMIO: guest may want to apply WC, trust it.
 	 * 2. EPT with VT-d:
 	 *   a. VT-d without snooping control feature: can't guarantee the
-	 *	result, try to trust guest.
+	 *	result, try to trust guest.  So the same as item 1.
 	 *   b. VT-d with snooping control feature: snooping control feature of
 	 *	VT-d engine can guarantee the cache correctness. Just set it
 	 *	to WB to keep consistent with host. So the same as item 3.
 	 * 3. EPT without VT-d: always map as WB and set IPAT=1 to keep
 	 *    consistent with host MTRR
 	 */
-	if (is_mmio) {
-		cache = MTRR_TYPE_UNCACHABLE;
-		goto exit;
-	}
-
-	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
+	if (!is_mmio && !kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
 		ipat = VMX_EPT_IPAT_BIT;
 		cache = MTRR_TYPE_WRBACK;
 		goto exit;
+19-7
arch/x86/kvm/x86.c
···
 			cpuid_count(XSTATE_CPUID, index,
 				    &size, &offset, &ecx, &edx);
 			memcpy(dest, src + offset, size);
-		} else
-			WARN_ON_ONCE(1);
+		}
 
 		valid -= feature;
 	}
···
 
 	vcpu = kvm_x86_ops->vcpu_create(kvm, id);
 
-	/*
-	 * Activate fpu unconditionally in case the guest needs eager FPU. It will be
-	 * deactivated soon if it doesn't.
-	 */
-	kvm_x86_ops->fpu_activate(vcpu);
 	return vcpu;
}
···
 	return !kvm_event_needs_reinjection(vcpu) &&
 		kvm_x86_ops->interrupt_allowed(vcpu);
}
+
+void kvm_arch_start_assignment(struct kvm *kvm)
+{
+	atomic_inc(&kvm->arch.assigned_device_count);
+}
+EXPORT_SYMBOL_GPL(kvm_arch_start_assignment);
+
+void kvm_arch_end_assignment(struct kvm *kvm)
+{
+	atomic_dec(&kvm->arch.assigned_device_count);
+}
+EXPORT_SYMBOL_GPL(kvm_arch_end_assignment);
+
+bool kvm_arch_has_assigned_device(struct kvm *kvm)
+{
+	return atomic_read(&kvm->arch.assigned_device_count);
+}
+EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
 
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm)
{
+1-1
arch/x86/lib/usercopy.c
···
 	unsigned long ret;
 
 	if (__range_not_ok(from, n, TASK_SIZE))
-		return 0;
+		return n;
 
 	/*
 	 * Even though this function is typically called from NMI/IRQ context
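The one-character usercopy fix matters because `copy_from_user_nmi()` follows the `copy_from_user()` convention of returning the number of bytes *not* copied. Returning 0 on a failed range check told callers every byte had arrived when in fact nothing had. A minimal sketch of the convention, with an invented `copy_checked()` helper and a boolean standing in for `__range_not_ok()`:

```c
#include <stddef.h>
#include <string.h>

/*
 * Like copy_from_user(): return the number of bytes NOT copied.
 * A failed range check copies nothing, so it must report n, not 0
 * (0 means "everything copied").
 */
static size_t copy_checked(void *dst, const void *src, size_t n,
			   int range_ok)
{
	if (!range_ok)
		return n;	/* nothing copied */
	memcpy(dst, src, n);
	return 0;		/* everything copied */
}
```

A caller doing `if (copy_checked(...) == 0) use(dst);` would have consumed uninitialized memory under the old `return 0` behavior.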
+42-5
arch/x86/mm/kasan_init_64.c
···
+#define pr_fmt(fmt) "kasan: " fmt
 #include <linux/bootmem.h>
 #include <linux/kasan.h>
 #include <linux/kdebug.h>
···
 extern pgd_t early_level4_pgt[PTRS_PER_PGD];
 extern struct range pfn_mapped[E820_X_MAX];
 
-extern unsigned char kasan_zero_page[PAGE_SIZE];
+static pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
+static pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
+static pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;
+
+/*
+ * This page used as early shadow. We don't use empty_zero_page
+ * at early stages, stack instrumentation could write some garbage
+ * to this page.
+ * Latter we reuse it as zero shadow for large ranges of memory
+ * that allowed to access, but not instrumented by kasan
+ * (vmalloc/vmemmap ...).
+ */
+static unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
 
 static int __init map_range(struct range *range)
{
···
 	pgd_clear(pgd_offset_k(start));
}
 
-void __init kasan_map_early_shadow(pgd_t *pgd)
+static void __init kasan_map_early_shadow(pgd_t *pgd)
{
 	int i;
 	unsigned long start = KASAN_SHADOW_START;
···
 	while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) {
 		WARN_ON(!pmd_none(*pmd));
 		set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte)
-					| __PAGE_KERNEL_RO));
+					| _KERNPG_TABLE));
 		addr += PMD_SIZE;
 		pmd = pmd_offset(pud, addr);
 	}
···
 	while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) {
 		WARN_ON(!pud_none(*pud));
 		set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd)
-					| __PAGE_KERNEL_RO));
+					| _KERNPG_TABLE));
 		addr += PUD_SIZE;
 		pud = pud_offset(pgd, addr);
 	}
···
 	while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) {
 		WARN_ON(!pgd_none(*pgd));
 		set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud)
-					| __PAGE_KERNEL_RO));
+					| _KERNPG_TABLE));
 		addr += PGDIR_SIZE;
 		pgd = pgd_offset_k(addr);
 	}
···
};
#endif
 
+void __init kasan_early_init(void)
+{
+	int i;
+	pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL;
+	pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE;
+	pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE;
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		kasan_zero_pte[i] = __pte(pte_val);
+
+	for (i = 0; i < PTRS_PER_PMD; i++)
+		kasan_zero_pmd[i] = __pmd(pmd_val);
+
+	for (i = 0; i < PTRS_PER_PUD; i++)
+		kasan_zero_pud[i] = __pud(pud_val);
+
+	kasan_map_early_shadow(early_level4_pgt);
+	kasan_map_early_shadow(init_level4_pgt);
+}
+
void __init kasan_init(void)
{
 	int i;
···
 
 	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
 	load_cr3(early_level4_pgt);
+	__flush_tlb_all();
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
···
 	memset(kasan_zero_page, 0, PAGE_SIZE);
 
 	load_cr3(init_level4_pgt);
+	__flush_tlb_all();
 	init_task.kasan_depth = 0;
+
+	pr_info("Kernel address sanitizer initialized\n");
}
arch/xtensa/include/asm/mm-arch-hooks.h
···
-/*
- * Architecture specific mm hooks
- *
- * Copyright (C) 2015, IBM Corporation
- * Author: Laurent Dufour <ldufour@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_XTENSA_MM_ARCH_HOOKS_H
-#define _ASM_XTENSA_MM_ARCH_HOOKS_H
-
-#endif /* _ASM_XTENSA_MM_ARCH_HOOKS_H */
+2-2
block/bio-integrity.c
···
 	unsigned long idx = BIO_POOL_NONE;
 	unsigned inline_vecs;
 
-	if (!bs) {
+	if (!bs || !bs->bio_integrity_pool) {
 		bip = kmalloc(sizeof(struct bio_integrity_payload) +
 			      sizeof(struct bio_vec) * nr_vecs, gfp_mask);
 		inline_vecs = nr_vecs;
···
 		kfree(page_address(bip->bip_vec->bv_page) +
 		      bip->bip_vec->bv_offset);
 
-	if (bs) {
+	if (bs && bs->bio_integrity_pool) {
 		if (bip->bip_slab != BIO_POOL_NONE)
 			bvec_free(bs->bvec_integrity_pool, bip->bip_vec,
 				  bip->bip_slab);
+81-59
block/blk-cgroup.c
···
 
 #define MAX_KEY_LEN 100
 
+/*
+ * blkcg_pol_mutex protects blkcg_policy[] and policy [de]activation.
+ * blkcg_pol_register_mutex nests outside of it and synchronizes entire
+ * policy [un]register operations including cgroup file additions /
+ * removals.  Putting cgroup file registration outside blkcg_pol_mutex
+ * allows grabbing it from cgroup callbacks.
+ */
+static DEFINE_MUTEX(blkcg_pol_register_mutex);
 static DEFINE_MUTEX(blkcg_pol_mutex);
 
 struct blkcg blkcg_root;
···
 struct cgroup_subsys_state * const blkcg_root_css = &blkcg_root.css;
 
 static struct blkcg_policy *blkcg_policy[BLKCG_MAX_POLS];
+
+static LIST_HEAD(all_blkcgs);		/* protected by blkcg_pol_mutex */
 
 static bool blkcg_policy_enabled(struct request_queue *q,
 				 const struct blkcg_policy *pol)
···
 	struct blkcg_gq *blkg;
 	int i;
 
-	/*
-	 * XXX: We invoke cgroup_add/rm_cftypes() under blkcg_pol_mutex
-	 * which ends up putting cgroup's internal cgroup_tree_mutex under
-	 * it; however, cgroup_tree_mutex is nested above cgroup file
-	 * active protection and grabbing blkcg_pol_mutex from a cgroup
-	 * file operation creates a possible circular dependency.  cgroup
-	 * internal locking is planned to go through further simplification
-	 * and this issue should go away soon.  For now, let's trylock
-	 * blkcg_pol_mutex and restart the write on failure.
-	 *
-	 * http://lkml.kernel.org/g/5363C04B.4010400@oracle.com
-	 */
-	if (!mutex_trylock(&blkcg_pol_mutex))
-		return restart_syscall();
+	mutex_lock(&blkcg_pol_mutex);
 	spin_lock_irq(&blkcg->lock);
 
 	/*
···
{
 	struct blkcg *blkcg = css_to_blkcg(css);
 
-	if (blkcg != &blkcg_root)
+	mutex_lock(&blkcg_pol_mutex);
+	list_del(&blkcg->all_blkcgs_node);
+	mutex_unlock(&blkcg_pol_mutex);
+
+	if (blkcg != &blkcg_root) {
+		int i;
+
+		for (i = 0; i < BLKCG_MAX_POLS; i++)
+			kfree(blkcg->pd[i]);
 		kfree(blkcg);
+	}
}
 
 static struct cgroup_subsys_state *
···
 	struct blkcg *blkcg;
 	struct cgroup_subsys_state *ret;
 	int i;
+
+	mutex_lock(&blkcg_pol_mutex);
 
 	if (!parent_css) {
 		blkcg = &blkcg_root;
···
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&blkcg->cgwb_list);
 #endif
+	list_add_tail(&blkcg->all_blkcgs_node, &all_blkcgs);
+
+	mutex_unlock(&blkcg_pol_mutex);
 	return &blkcg->css;
 
 free_pd_blkcg:
 	for (i--; i >= 0; i--)
 		kfree(blkcg->pd[i]);
-
 free_blkcg:
 	kfree(blkcg);
+	mutex_unlock(&blkcg_pol_mutex);
 	return ret;
}
 
···
 			  const struct blkcg_policy *pol)
{
 	LIST_HEAD(pds);
-	LIST_HEAD(cpds);
 	struct blkcg_gq *blkg;
 	struct blkg_policy_data *pd, *nd;
-	struct blkcg_policy_data *cpd, *cnd;
 	int cnt = 0, ret;
 
 	if (blkcg_policy_enabled(q, pol))
···
 		cnt++;
 	spin_unlock_irq(q->queue_lock);
 
-	/*
-	 * Allocate per-blkg and per-blkcg policy data
-	 * for all existing blkgs.
-	 */
+	/* allocate per-blkg policy data for all existing blkgs */
 	while (cnt--) {
 		pd = kzalloc_node(pol->pd_size, GFP_KERNEL, q->node);
 		if (!pd) {
···
 			goto out_free;
 		}
 		list_add_tail(&pd->alloc_node, &pds);
-
-		if (!pol->cpd_size)
-			continue;
-		cpd = kzalloc_node(pol->cpd_size, GFP_KERNEL, q->node);
-		if (!cpd) {
-			ret = -ENOMEM;
-			goto out_free;
-		}
-		list_add_tail(&cpd->alloc_node, &cpds);
 	}
 
 	/*
···
 	spin_lock_irq(q->queue_lock);
 
 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
-		if (WARN_ON(list_empty(&pds)) ||
-		    WARN_ON(pol->cpd_size && list_empty(&cpds))) {
+		if (WARN_ON(list_empty(&pds))) {
 			/* umm... this shouldn't happen, just abort */
 			ret = -ENOMEM;
 			goto out_unlock;
 		}
-		cpd = list_first_entry(&cpds, struct blkcg_policy_data,
-				       alloc_node);
-		list_del_init(&cpd->alloc_node);
 		pd = list_first_entry(&pds, struct blkg_policy_data, alloc_node);
 		list_del_init(&pd->alloc_node);
 
 		/* grab blkcg lock too while installing @pd on @blkg */
 		spin_lock(&blkg->blkcg->lock);
 
-		if (!pol->cpd_size)
-			goto no_cpd;
-		if (!blkg->blkcg->pd[pol->plid]) {
-			/* Per-policy per-blkcg data */
-			blkg->blkcg->pd[pol->plid] = cpd;
-			cpd->plid = pol->plid;
-			pol->cpd_init_fn(blkg->blkcg);
-		} else { /* must free it as it has already been extracted */
-			kfree(cpd);
-		}
-no_cpd:
 		blkg->pd[pol->plid] = pd;
 		pd->blkg = blkg;
 		pd->plid = pol->plid;
···
 	blk_queue_bypass_end(q);
 	list_for_each_entry_safe(pd, nd, &pds, alloc_node)
 		kfree(pd);
-	list_for_each_entry_safe(cpd, cnd, &cpds, alloc_node)
-		kfree(cpd);
 	return ret;
}
EXPORT_SYMBOL_GPL(blkcg_activate_policy);
···
 
 		kfree(blkg->pd[pol->plid]);
 		blkg->pd[pol->plid] = NULL;
-		kfree(blkg->blkcg->pd[pol->plid]);
-		blkg->blkcg->pd[pol->plid] = NULL;
 
 		spin_unlock(&blkg->blkcg->lock);
 	}
···
 */
int blkcg_policy_register(struct blkcg_policy *pol)
{
+	struct blkcg *blkcg;
 	int i, ret;
 
 	if (WARN_ON(pol->pd_size < sizeof(struct blkg_policy_data)))
 		return -EINVAL;
 
+	mutex_lock(&blkcg_pol_register_mutex);
 	mutex_lock(&blkcg_pol_mutex);
 
 	/* find an empty slot */
···
 		if (!blkcg_policy[i])
 			break;
 	if (i >= BLKCG_MAX_POLS)
-		goto out_unlock;
+		goto err_unlock;
 
-	/* register and update blkgs */
+	/* register @pol */
 	pol->plid = i;
-	blkcg_policy[i] = pol;
+	blkcg_policy[pol->plid] = pol;
+
+	/* allocate and install cpd's */
+	if (pol->cpd_size) {
+		list_for_each_entry(blkcg, &all_blkcgs, all_blkcgs_node) {
+			struct blkcg_policy_data *cpd;
+
+			cpd = kzalloc(pol->cpd_size, GFP_KERNEL);
+			if (!cpd) {
+				mutex_unlock(&blkcg_pol_mutex);
+				goto err_free_cpds;
+			}
+
+			blkcg->pd[pol->plid] = cpd;
+			cpd->plid = pol->plid;
+			pol->cpd_init_fn(blkcg);
+		}
+	}
+
+	mutex_unlock(&blkcg_pol_mutex);
 
 	/* everything is in place, add intf files for the new policy */
 	if (pol->cftypes)
 		WARN_ON(cgroup_add_legacy_cftypes(&blkio_cgrp_subsys,
 						  pol->cftypes));
-	ret = 0;
-out_unlock:
+	mutex_unlock(&blkcg_pol_register_mutex);
+	return 0;
+
+err_free_cpds:
+	if (pol->cpd_size) {
+		list_for_each_entry(blkcg, &all_blkcgs, all_blkcgs_node) {
+			kfree(blkcg->pd[pol->plid]);
+			blkcg->pd[pol->plid] = NULL;
+		}
+	}
+	blkcg_policy[pol->plid] = NULL;
+err_unlock:
 	mutex_unlock(&blkcg_pol_mutex);
+	mutex_unlock(&blkcg_pol_register_mutex);
 	return ret;
}
EXPORT_SYMBOL_GPL(blkcg_policy_register);
···
 */
void blkcg_policy_unregister(struct blkcg_policy *pol)
{
-	mutex_lock(&blkcg_pol_mutex);
+	struct blkcg *blkcg;
+
+	mutex_lock(&blkcg_pol_register_mutex);
 
 	if (WARN_ON(blkcg_policy[pol->plid] != pol))
 		goto out_unlock;
···
 	if (pol->cftypes)
 		cgroup_rm_cftypes(pol->cftypes);
 
-	/* unregister and update blkgs */
+	/* remove cpds and unregister */
+	mutex_lock(&blkcg_pol_mutex);
+
+	if (pol->cpd_size) {
+		list_for_each_entry(blkcg, &all_blkcgs, all_blkcgs_node) {
+			kfree(blkcg->pd[pol->plid]);
+			blkcg->pd[pol->plid] = NULL;
+		}
+	}
 	blkcg_policy[pol->plid] = NULL;
-out_unlock:
+
 	mutex_unlock(&blkcg_pol_mutex);
+out_unlock:
+	mutex_unlock(&blkcg_pol_register_mutex);
}
EXPORT_SYMBOL_GPL(blkcg_policy_unregister);
+1-1
block/blk-core.c
···
int __init blk_dev_init(void)
{
 	BUILD_BUG_ON(__REQ_NR_BITS > 8 *
-			sizeof(((struct request *)0)->cmd_flags));
+			FIELD_SIZEOF(struct request, cmd_flags));
 
 	/* used for unplugging and affects IO latency/throughput - HIGHPRI */
 	kblockd_workqueue = alloc_workqueue("kblockd",
drivers/acpi/resource.c
···
 #include <linux/device.h>
 #include <linux/export.h>
 #include <linux/ioport.h>
-#include <linux/list.h>
 #include <linux/slab.h>
 
 #ifdef CONFIG_X86
···
 	u8 iodec = attr->granularity == 0xfff ? ACPI_DECODE_10 : ACPI_DECODE_16;
 	bool wp = addr->info.mem.write_protect;
 	u64 len = attr->address_length;
+	u64 start, end, offset = 0;
 	struct resource *res = &win->res;
 
 	/*
···
 		pr_debug("ACPI: Invalid address space min_addr_fix %d, max_addr_fix %d, len %llx\n",
 			 addr->min_address_fixed, addr->max_address_fixed, len);
 
-	res->start = attr->minimum;
-	res->end = attr->maximum;
-
 	/*
 	 * For bridges that translate addresses across the bridge,
 	 * translation_offset is the offset that must be added to the
···
 	 * primary side. Non-bridge devices must list 0 for all Address
 	 * Translation offset bits.
 	 */
-	if (addr->producer_consumer == ACPI_PRODUCER) {
-		res->start += attr->translation_offset;
-		res->end += attr->translation_offset;
-	} else if (attr->translation_offset) {
+	if (addr->producer_consumer == ACPI_PRODUCER)
+		offset = attr->translation_offset;
+	else if (attr->translation_offset)
 		pr_debug("ACPI: translation_offset(%lld) is invalid for non-bridge device.\n",
 			 attr->translation_offset);
+	start = attr->minimum + offset;
+	end = attr->maximum + offset;
+
+	win->offset = offset;
+	res->start = start;
+	res->end = end;
+	if (sizeof(resource_size_t) < sizeof(u64) &&
+	    (offset != win->offset || start != res->start || end != res->end)) {
+		pr_warn("acpi resource window ([%#llx-%#llx] ignored, not CPU addressable)\n",
+			attr->minimum, attr->maximum);
+		return false;
 	}
 
 	switch (addr->resource_type) {
···
 	default:
 		return false;
 	}
-
-	win->offset = attr->translation_offset;
 
 	if (addr->producer_consumer == ACPI_PRODUCER)
 		res->flags |= IORESOURCE_WINDOW;
···
 	return (type & types) ? 0 : 1;
}
EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
-
-struct reserved_region {
-	struct list_head node;
-	u64 start;
-	u64 end;
-};
-
-static LIST_HEAD(reserved_io_regions);
-static LIST_HEAD(reserved_mem_regions);
-
-static int request_range(u64 start, u64 end, u8 space_id, unsigned long flags,
-			 char *desc)
-{
-	unsigned int length = end - start + 1;
-	struct resource *res;
-
-	res = space_id == ACPI_ADR_SPACE_SYSTEM_IO ?
-		request_region(start, length, desc) :
-		request_mem_region(start, length, desc);
-	if (!res)
-		return -EIO;
-
-	res->flags &= ~flags;
-	return 0;
-}
-
-static int add_region_before(u64 start, u64 end, u8 space_id,
-			     unsigned long flags, char *desc,
-			     struct list_head *head)
-{
-	struct reserved_region *reg;
-	int error;
-
-	reg = kmalloc(sizeof(*reg), GFP_KERNEL);
-	if (!reg)
-		return -ENOMEM;
-
-	error = request_range(start, end, space_id, flags, desc);
-	if (error) {
-		kfree(reg);
-		return error;
-	}
-
-	reg->start = start;
-	reg->end = end;
-	list_add_tail(&reg->node, head);
-	return 0;
-}
-
-/**
- * acpi_reserve_region - Reserve an I/O or memory region as a system resource.
- * @start: Starting address of the region.
- * @length: Length of the region.
- * @space_id: Identifier of address space to reserve the region from.
- * @flags: Resource flags to clear for the region after requesting it.
- * @desc: Region description (for messages).
- *
- * Reserve an I/O or memory region as a system resource to prevent others from
- * using it.  If the new region overlaps with one of the regions (in the given
- * address space) already reserved by this routine, only the non-overlapping
- * parts of it will be reserved.
- *
- * Returned is either 0 (success) or a negative error code indicating a resource
- * reservation problem.  It is the code of the first encountered error, but the
- * routine doesn't abort until it has attempted to request all of the parts of
- * the new region that don't overlap with other regions reserved previously.
- *
- * The resources requested by this routine are never released.
- */
-int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
-			unsigned long flags, char *desc)
-{
-	struct list_head *regions;
-	struct reserved_region *reg;
-	u64 end = start + length - 1;
-	int ret = 0, error = 0;
-
-	if (space_id == ACPI_ADR_SPACE_SYSTEM_IO)
-		regions = &reserved_io_regions;
-	else if (space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
-		regions = &reserved_mem_regions;
-	else
-		return -EINVAL;
-
-	if (list_empty(regions))
-		return add_region_before(start, end, space_id, flags, desc, regions);
-
-	list_for_each_entry(reg, regions, node)
-		if (reg->start == end + 1) {
-			/* The new region can be prepended to this one. */
-			ret = request_range(start, end, space_id, flags, desc);
-			if (!ret)
-				reg->start = start;
-
-			return ret;
-		} else if (reg->start > end) {
-			/* No overlap.  Add the new region here and get out. */
-			return add_region_before(start, end, space_id, flags,
-						 desc, &reg->node);
-		} else if (reg->end == start - 1) {
-			goto combine;
-		} else if (reg->end >= start) {
-			goto overlap;
-		}
-
-	/* The new region goes after the last existing one. */
-	return add_region_before(start, end, space_id, flags, desc, regions);
-
- overlap:
-	/*
-	 * The new region overlaps an existing one.
-	 *
-	 * The head part of the new region immediately preceding the existing
-	 * overlapping one can be combined with it right away.
-	 */
-	if (reg->start > start) {
-		error = request_range(start, reg->start - 1, space_id, flags, desc);
-		if (error)
-			ret = error;
-		else
-			reg->start = start;
-	}
-
- combine:
-	/*
-	 * The new region is adjacent to an existing one.  If it extends beyond
-	 * that region all the way to the next one, it is possible to combine
-	 * all three of them.
-	 */
-	while (reg->end < end) {
-		struct reserved_region *next = NULL;
-		u64 a = reg->end + 1, b = end;
-
-		if (!list_is_last(&reg->node, regions)) {
-			next = list_next_entry(reg, node);
-			if (next->start <= end)
-				b = next->start - 1;
-		}
-		error = request_range(a, b, space_id, flags, desc);
-		if (!error) {
-			if (next && next->start == b + 1) {
-				reg->end = next->end;
-				list_del(&next->node);
-				kfree(next);
-			} else {
-				reg->end = end;
-				break;
-			}
-		} else if (next) {
-			if (!ret)
-				ret = error;
-
-			reg = next;
-		} else {
-			break;
-		}
-	}
-
-	return ret ? ret : error;
-}
-EXPORT_SYMBOL_GPL(acpi_reserve_region);
+30-2
drivers/acpi/scan.c
···
    return false;
 }

+static bool __acpi_match_device_cls(const struct acpi_device_id *id,
+                   struct acpi_hardware_id *hwid)
+{
+   int i, msk, byte_shift;
+   char buf[3];
+
+   if (!id->cls)
+       return false;
+
+   /* Apply class-code bitmask, before checking each class-code byte */
+   for (i = 1; i <= 3; i++) {
+       byte_shift = 8 * (3 - i);
+       msk = (id->cls_msk >> byte_shift) & 0xFF;
+       if (!msk)
+           continue;
+
+       sprintf(buf, "%02x", (id->cls >> byte_shift) & msk);
+       if (strncmp(buf, &hwid->id[(i - 1) * 2], 2))
+           return false;
+   }
+   return true;
+}
+
 static const struct acpi_device_id *__acpi_match_device(
    struct acpi_device *device,
    const struct acpi_device_id *ids,
···
    list_for_each_entry(hwid, &device->pnp.ids, list) {
        /* First, check the ACPI/PNP IDs provided by the caller. */
-       for (id = ids; id->id[0]; id++)
-           if (!strcmp((char *) id->id, hwid->id))
+       for (id = ids; id->id[0] || id->cls; id++) {
+           if (id->id[0] && !strcmp((char *) id->id, hwid->id))
                return id;
+           else if (id->cls && __acpi_match_device_cls(id, hwid))
+               return id;
+       }

    /*
     * Next, check ACPI_DT_NAMESPACE_HID and try to match the
···
    if (info->valid & ACPI_VALID_UID)
        pnp->unique_id = kstrdup(info->unique_id.string,
                    GFP_KERNEL);
+   if (info->valid & ACPI_VALID_CLS)
+       acpi_add_id(pnp, info->class_code.string);

    kfree(info);
+1-1
drivers/ata/Kconfig
···

 config ATA_ACPI
    bool "ATA ACPI Support"
-   depends on ACPI && PCI
+   depends on ACPI
    default y
    help
      This option adds support for ATA-related ACPI objects.
···
  * Arasan Compact Flash host controller source file
  *
  * Copyright (C) 2011 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
···

 module_platform_driver(arasan_cf_driver);

-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
 MODULE_DESCRIPTION("Arasan ATA Compact Flash driver");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS("platform:" DRIVER_NAME);
+13-3
drivers/base/firmware_class.c
···
    kfree(fw_priv);
 }

-static int firmware_uevent(struct device *dev, struct kobj_uevent_env *env)
+static int do_firmware_uevent(struct firmware_priv *fw_priv, struct kobj_uevent_env *env)
 {
-   struct firmware_priv *fw_priv = to_firmware_priv(dev);
-
    if (add_uevent_var(env, "FIRMWARE=%s", fw_priv->buf->fw_id))
        return -ENOMEM;
    if (add_uevent_var(env, "TIMEOUT=%i", loading_timeout))
···
        return -ENOMEM;

    return 0;
+}
+
+static int firmware_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+   struct firmware_priv *fw_priv = to_firmware_priv(dev);
+   int err = 0;
+
+   mutex_lock(&fw_lock);
+   if (fw_priv->buf)
+       err = do_firmware_uevent(fw_priv, env);
+   mutex_unlock(&fw_lock);
+   return err;
 }

 static struct class firmware_class = {
+11-2
drivers/base/power/domain.c
···
  * This file is released under the GPLv2.
  */

+#include <linux/delay.h>
 #include <linux/kernel.h>
 #include <linux/io.h>
 #include <linux/platform_device.h>
···
 #include <linux/sched.h>
 #include <linux/suspend.h>
 #include <linux/export.h>
+
+#define GENPD_RETRY_MAX_MS 250 /* Approximate */

 #define GENPD_DEV_CALLBACK(genpd, type, callback, dev)  \
 ({                                                      \
···
 static void genpd_dev_pm_detach(struct device *dev, bool power_off)
 {
    struct generic_pm_domain *pd;
+   unsigned int i;
    int ret = 0;

    pd = pm_genpd_lookup_dev(dev);
···

    dev_dbg(dev, "removing from PM domain %s\n", pd->name);

-   while (1) {
+   for (i = 1; i < GENPD_RETRY_MAX_MS; i <<= 1) {
        ret = pm_genpd_remove_device(pd, dev);
        if (ret != -EAGAIN)
            break;
+
+       mdelay(i);
        cond_resched();
    }

···
 {
    struct of_phandle_args pd_args;
    struct generic_pm_domain *pd;
+   unsigned int i;
    int ret;

    if (!dev->of_node)
···

    dev_dbg(dev, "adding to PM domain %s\n", pd->name);

-   while (1) {
+   for (i = 1; i < GENPD_RETRY_MAX_MS; i <<= 1) {
        ret = pm_genpd_add_device(pd, dev);
        if (ret != -EAGAIN)
            break;
+
+       mdelay(i);
        cond_resched();
    }

···
        return -ENODEV;
    }

+   /* At least some versions of AMI BIOS have a bug that TPM2 table has
+    * zero address for the control area and therefore we must fail.
+    */
+   if (!buf->control_area_pa) {
+       dev_err(dev, "TPM2 ACPI table has a zero address for the control area\n");
+       return -EINVAL;
+   }
+
    if (buf->hdr.length < sizeof(struct acpi_tpm2)) {
        dev_err(dev, "TPM2 ACPI table has wrong size");
        return -EINVAL;
+3-1
drivers/clk/at91/clk-h32mx.c
···
    h32mxclk->pmc = pmc;

    clk = clk_register(NULL, &h32mxclk->hw);
-   if (!clk)
+   if (!clk) {
+       kfree(h32mxclk);
        return;
+   }

    of_clk_add_provider(np, of_clk_src_simple_get, clk);
 }
+3-1
drivers/clk/at91/clk-main.c
···
    irq_set_status_flags(osc->irq, IRQ_NOAUTOEN);
    ret = request_irq(osc->irq, clk_main_osc_irq_handler,
              IRQF_TRIGGER_HIGH, name, osc);
-   if (ret)
+   if (ret) {
+       kfree(osc);
        return ERR_PTR(ret);
+   }

    if (bypass)
        pmc_write(pmc, AT91_CKGR_MOR,
+6-2
drivers/clk/at91/clk-master.c
···
    irq_set_status_flags(master->irq, IRQ_NOAUTOEN);
    ret = request_irq(master->irq, clk_master_irq_handler,
              IRQF_TRIGGER_HIGH, "clk-master", master);
-   if (ret)
+   if (ret) {
+       kfree(master);
        return ERR_PTR(ret);
+   }

    clk = clk_register(NULL, &master->hw);
-   if (IS_ERR(clk))
+   if (IS_ERR(clk)) {
+       free_irq(master->irq, master);
        kfree(master);
+   }

    return clk;
 }
+6-2
drivers/clk/at91/clk-pll.c
···
    irq_set_status_flags(pll->irq, IRQ_NOAUTOEN);
    ret = request_irq(pll->irq, clk_pll_irq_handler, IRQF_TRIGGER_HIGH,
              id ? "clk-pllb" : "clk-plla", pll);
-   if (ret)
+   if (ret) {
+       kfree(pll);
        return ERR_PTR(ret);
+   }

    clk = clk_register(NULL, &pll->hw);
-   if (IS_ERR(clk))
+   if (IS_ERR(clk)) {
+       free_irq(pll->irq, pll);
        kfree(pll);
+   }

    return clk;
 }
+6-2
drivers/clk/at91/clk-system.c
···
        irq_set_status_flags(sys->irq, IRQ_NOAUTOEN);
        ret = request_irq(sys->irq, clk_system_irq_handler,
                  IRQF_TRIGGER_HIGH, name, sys);
-       if (ret)
+       if (ret) {
+           kfree(sys);
            return ERR_PTR(ret);
+       }
    }

    clk = clk_register(NULL, &sys->hw);
-   if (IS_ERR(clk))
+   if (IS_ERR(clk)) {
+       free_irq(sys->irq, sys);
        kfree(sys);
+   }

    return clk;
 }
+6-2
drivers/clk/at91/clk-utmi.c
···
    irq_set_status_flags(utmi->irq, IRQ_NOAUTOEN);
    ret = request_irq(utmi->irq, clk_utmi_irq_handler,
              IRQF_TRIGGER_HIGH, "clk-utmi", utmi);
-   if (ret)
+   if (ret) {
+       kfree(utmi);
        return ERR_PTR(ret);
+   }

    clk = clk_register(NULL, &utmi->hw);
-   if (IS_ERR(clk))
+   if (IS_ERR(clk)) {
+       free_irq(utmi->irq, utmi);
        kfree(utmi);
+   }

    return clk;
 }
+1-5
drivers/clk/bcm/clk-iproc-asiu.c
···
    struct iproc_asiu_clk *asiu_clk;
    const char *clk_name;

-   clk_name = kzalloc(IPROC_CLK_NAME_LEN, GFP_KERNEL);
-   if (WARN_ON(!clk_name))
-       goto err_clk_register;
-
    ret = of_property_read_string_index(node, "clock-output-names",
                        i, &clk_name);
    if (WARN_ON(ret))
···

 err_clk_register:
    for (i = 0; i < num_clks; i++)
-       kfree(asiu->clks[i].name);
+       clk_unregister(asiu->clk_data.clks[i]);
    iounmap(asiu->gate_base);

 err_iomap_gate:
+4-9
drivers/clk/bcm/clk-iproc-pll.c
···
    val = readl(pll->pll_base + ctrl->ndiv_int.offset);
    ndiv_int = (val >> ctrl->ndiv_int.shift) &
        bit_mask(ctrl->ndiv_int.width);
-   ndiv = ndiv_int << ctrl->ndiv_int.shift;
+   ndiv = (u64)ndiv_int << ctrl->ndiv_int.shift;

    if (ctrl->flags & IPROC_CLK_PLL_HAS_NDIV_FRAC) {
        val = readl(pll->pll_base + ctrl->ndiv_frac.offset);
···
            bit_mask(ctrl->ndiv_frac.width);

        if (ndiv_frac != 0)
-           ndiv = (ndiv_int << ctrl->ndiv_int.shift) | ndiv_frac;
+           ndiv = ((u64)ndiv_int << ctrl->ndiv_int.shift) |
+               ndiv_frac;
    }

    val = readl(pll->pll_base + ctrl->pdiv.offset);
···
    memset(&init, 0, sizeof(init));
    parent_name = node->name;

-   clk_name = kzalloc(IPROC_CLK_NAME_LEN, GFP_KERNEL);
-   if (WARN_ON(!clk_name))
-       goto err_clk_register;
-
    ret = of_property_read_string_index(node, "clock-output-names",
                        i, &clk_name);
    if (WARN_ON(ret))
···
    return;

 err_clk_register:
-   for (i = 0; i < num_clks; i++) {
-       kfree(pll->clks[i].name);
+   for (i = 0; i < num_clks; i++)
        clk_unregister(pll->clk_data.clks[i]);
-   }

 err_pll_register:
    if (pll->asiu_base)
+1-1
drivers/clk/clk-stm32f4.c
···
    memcpy(table, stm32f42xx_gate_map, sizeof(table));

    /* only bits set in table can be used as indices */
-   if (WARN_ON(secondary > 8 * sizeof(table) ||
+   if (WARN_ON(secondary >= BITS_PER_BYTE * sizeof(table) ||
            0 == (table[BIT_ULL_WORD(secondary)] &
              BIT_ULL_MASK(secondary))))
        return -EINVAL;
···
 /*
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/clk-frac-synth.c
···
 /*
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/clk-gpt-synth.c
···
 /*
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/clk-vco-pll.c
···
 /*
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/clk.c
···
 /*
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/clk.h
···
  * Clock framework definitions for SPEAr platform
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/spear1310_clock.c
···
  * SPEAr1310 machine clock framework source file
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/spear1340_clock.c
···
  * SPEAr1340 machine clock framework source file
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/spear3xx_clock.c
···
  * SPEAr3xx machines clock framework source file
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1-1
drivers/clk/spear/spear6xx_clock.c
···
  * SPEAr6xx machines clock framework source file
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
···
 }
 EXPORT_SYMBOL_GPL(cpufreq_table_validate_and_show);

-struct cpufreq_policy *cpufreq_cpu_get_raw(unsigned int cpu);
-
-struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu)
-{
-   struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);
-   return policy ? policy->freq_table : NULL;
-}
-EXPORT_SYMBOL_GPL(cpufreq_frequency_get_table);
-
 MODULE_AUTHOR("Dominik Brodowski <linux@brodo.de>");
 MODULE_DESCRIPTION("CPUfreq frequency table helpers");
 MODULE_LICENSE("GPL");
+1-1
drivers/cpufreq/loongson2_cpufreq.c
···
  *
  * The 2E revision of loongson processor not support this feature.
  *
- * Copyright (C) 2006 - 2008 Lemote Inc. & Insititute of Computing Technology
+ * Copyright (C) 2006 - 2008 Lemote Inc. & Institute of Computing Technology
  * Author: Yanhua, yanh@lemote.com
  *
  * This file is subject to the terms and conditions of the GNU General Public
+7-2
drivers/cpuidle/cpuidle.c
···
 static void enter_freeze_proper(struct cpuidle_driver *drv,
                struct cpuidle_device *dev, int index)
 {
-   tick_freeze();
+   /*
+    * trace_suspend_resume() called by tick_freeze() for the last CPU
+    * executing it contains RCU usage regarded as invalid in the idle
+    * context, so tell RCU about that.
+    */
+   RCU_NONIDLE(tick_freeze());
    /*
     * The state used here cannot be a "coupled" one, because the "coupled"
     * cpuidle mechanism enables interrupts and doing that with timekeeping
···
    WARN_ON(!irqs_disabled());
    /*
     * timekeeping_resume() that will be called by tick_unfreeze() for the
-    * last CPU executing it calls functions containing RCU read-side
+    * first CPU executing it calls functions containing RCU read-side
     * critical sections, so tell RCU about that.
     */
    RCU_NONIDLE(tick_unfreeze());
···
    struct brcmstb_gpio_bank *bank;
    int ret = 0;

+   if (!priv) {
+       dev_err(&pdev->dev, "called %s without drvdata!\n", __func__);
+       return -EFAULT;
+   }
+
+   /*
+    * You can lose return values below, but we report all errors, and it's
+    * more important to actually perform all of the steps.
+    */
    list_for_each(pos, &priv->bank_list) {
        bank = list_entry(pos, struct brcmstb_gpio_bank, node);
        ret = bgpio_remove(&bank->bgc);
···
    priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
    if (!priv)
        return -ENOMEM;
+   platform_set_drvdata(pdev, priv);
+   INIT_LIST_HEAD(&priv->bank_list);

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    reg_base = devm_ioremap_resource(dev, res);
···
    priv->reg_base = reg_base;
    priv->pdev = pdev;

-   INIT_LIST_HEAD(&priv->bank_list);
    if (brcmstb_gpio_sanity_check_banks(dev, np, res))
        return -EINVAL;

···

    dev_info(dev, "Registered %d banks (GPIO(s): %d-%d)\n",
            priv->num_banks, priv->gpio_base, gpio_base - 1);
-
-   platform_set_drvdata(pdev, priv);

    return 0;

+2-4
drivers/gpio/gpio-davinci.c
···
        writel_relaxed(~0, &g->clr_falling);
        writel_relaxed(~0, &g->clr_rising);

-       /* set up all irqs in this bank */
-       irq_set_chained_handler(bank_irq, gpio_irq_handler);
-
        /*
         * Each chip handles 32 gpios, and each irq bank consists of 16
         * gpio irqs. Pass the irq bank's corresponding controller to
         * the chained irq handler.
         */
-       irq_set_handler_data(bank_irq, &chips[gpio / 32]);
+       irq_set_chained_handler_and_data(bank_irq, gpio_irq_handler,
+                        &chips[gpio / 32]);

        binten |= BIT(bank);
    }
···
    if (ret)
        return ret;

-   DRM_INFO("DPM unforce state min=%d, max=%d.\n",
-        pi->sclk_dpm.soft_min_clk,
-        pi->sclk_dpm.soft_max_clk);
+   DRM_DEBUG("DPM unforce state min=%d, max=%d.\n",
+         pi->sclk_dpm.soft_min_clk,
+         pi->sclk_dpm.soft_max_clk);

    return 0;
 }

 static int cz_dpm_force_dpm_level(struct amdgpu_device *adev,
-                 enum amdgpu_dpm_forced_level level)
+                  enum amdgpu_dpm_forced_level level)
 {
    int ret = 0;

    switch (level) {
    case AMDGPU_DPM_FORCED_LEVEL_HIGH:
+       ret = cz_dpm_unforce_dpm_levels(adev);
+       if (ret)
+           return ret;
        ret = cz_dpm_force_highest(adev);
        if (ret)
            return ret;
        break;
    case AMDGPU_DPM_FORCED_LEVEL_LOW:
+       ret = cz_dpm_unforce_dpm_levels(adev);
+       if (ret)
+           return ret;
        ret = cz_dpm_force_lowest(adev);
        if (ret)
            return ret;
···
    default:
        break;
    }
+
+   adev->pm.dpm.forced_level = level;

    return ret;
 }
···
    u32 data, mask;

    data = RREG32(mmCC_RB_BACKEND_DISABLE);
-   if (data & 1)
-       data &= CC_RB_BACKEND_DISABLE__BACKEND_DISABLE_MASK;
-   else
-       data = 0;
+   data &= CC_RB_BACKEND_DISABLE__BACKEND_DISABLE_MASK;

    data |= RREG32(mmGC_USER_RB_BACKEND_DISABLE);
···
    pqm_uninit(&p->pqm);

    pdd = kfd_get_process_device_data(dev, p);
+
+   if (!pdd) {
+       mutex_unlock(&p->mutex);
+       return;
+   }
+
    if (pdd->reset_wavefronts) {
        dbgdev_wave_reset_wavefronts(pdd->dev, p);
        pdd->reset_wavefronts = false;
···
     * We don't call amd_iommu_unbind_pasid() here
     * because the IOMMU called us.
     */
-   if (pdd)
-       pdd->bound = false;
+   pdd->bound = false;

    mutex_unlock(&p->mutex);
 }
-2
drivers/gpu/drm/armada/armada_crtc.c
···

    drm_crtc_vblank_off(crtc);

-   crtc->mode = *adj;
-
    val = dcrtc->dumb_ctrl & ~CFG_DUMB_ENA;
    if (val != dcrtc->dumb_ctrl) {
        dcrtc->dumb_ctrl = val;
+3-2
drivers/gpu/drm/armada/armada_gem.c
···

    if (dobj->obj.import_attach) {
        /* We only ever display imported data */
-       dma_buf_unmap_attachment(dobj->obj.import_attach, dobj->sgt,
-                    DMA_TO_DEVICE);
+       if (dobj->sgt)
+           dma_buf_unmap_attachment(dobj->obj.import_attach,
+                        dobj->sgt, DMA_TO_DEVICE);
        drm_prime_gem_destroy(&dobj->obj, NULL);
    }

+75-46
drivers/gpu/drm/armada/armada_overlay.c
···
  * published by the Free Software Foundation.
  */
 #include <drm/drmP.h>
+#include <drm/drm_plane_helper.h>
 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"
···

    if (fb)
        armada_drm_queue_unref_work(dcrtc->crtc.dev, fb);
-}

-static unsigned armada_limit(int start, unsigned size, unsigned max)
-{
-   int end = start + size;
-   if (end < 0)
-       return 0;
-   if (start < 0)
-       start = 0;
-   return (unsigned)end > max ? max - start : end - start;
+   wake_up(&dplane->vbl.wait);
 }

 static int
···
 {
    struct armada_plane *dplane = drm_to_armada_plane(plane);
    struct armada_crtc *dcrtc = drm_to_armada_crtc(crtc);
+   struct drm_rect src = {
+       .x1 = src_x,
+       .y1 = src_y,
+       .x2 = src_x + src_w,
+       .y2 = src_y + src_h,
+   };
+   struct drm_rect dest = {
+       .x1 = crtc_x,
+       .y1 = crtc_y,
+       .x2 = crtc_x + crtc_w,
+       .y2 = crtc_y + crtc_h,
+   };
+   const struct drm_rect clip = {
+       .x2 = crtc->mode.hdisplay,
+       .y2 = crtc->mode.vdisplay,
+   };
    uint32_t val, ctrl0;
    unsigned idx = 0;
+   bool visible;
    int ret;

-   crtc_w = armada_limit(crtc_x, crtc_w, dcrtc->crtc.mode.hdisplay);
-   crtc_h = armada_limit(crtc_y, crtc_h, dcrtc->crtc.mode.vdisplay);
+   ret = drm_plane_helper_check_update(plane, crtc, fb, &src, &dest, &clip,
+                       0, INT_MAX, true, false, &visible);
+   if (ret)
+       return ret;
+
    ctrl0 = CFG_DMA_FMT(drm_fb_to_armada_fb(fb)->fmt) |
        CFG_DMA_MOD(drm_fb_to_armada_fb(fb)->mod) |
        CFG_CBSH_ENA | CFG_DMA_HSMOOTH | CFG_DMA_ENA;

    /* Does the position/size result in nothing to display? */
-   if (crtc_w == 0 || crtc_h == 0) {
+   if (!visible)
        ctrl0 &= ~CFG_DMA_ENA;
-   }
-
-   /*
-    * FIXME: if the starting point is off screen, we need to
-    * adjust src_x, src_y, src_w, src_h appropriately, and
-    * according to the scale.
-    */

    if (!dcrtc->plane) {
        dcrtc->plane = plane;
···
    /* FIXME: overlay on an interlaced display */
    /* Just updating the position/size? */
    if (plane->fb == fb && dplane->ctrl0 == ctrl0) {
-       val = (src_h & 0xffff0000) | src_w >> 16;
+       val = (drm_rect_height(&src) & 0xffff0000) |
+             drm_rect_width(&src) >> 16;
        dplane->src_hw = val;
        writel_relaxed(val, dcrtc->base + LCD_SPU_DMA_HPXL_VLN);
-       val = crtc_h << 16 | crtc_w;
+
+       val = drm_rect_height(&dest) << 16 | drm_rect_width(&dest);
        dplane->dst_hw = val;
        writel_relaxed(val, dcrtc->base + LCD_SPU_DZM_HPXL_VLN);
-       val = crtc_y << 16 | crtc_x;
+
+       val = dest.y1 << 16 | dest.x1;
        dplane->dst_yx = val;
        writel_relaxed(val, dcrtc->base + LCD_SPU_DMA_OVSA_HPXL_VLN);
+
        return 0;
    } else if (~dplane->ctrl0 & ctrl0 & CFG_DMA_ENA) {
        /* Power up the Y/U/V FIFOs on ENA 0->1 transitions */
···
                   dcrtc->base + LCD_SPU_SRAM_PARA1);
    }

-   ret = wait_event_timeout(dplane->vbl.wait,
-                list_empty(&dplane->vbl.update.node),
-                HZ/25);
-   if (ret < 0)
-       return ret;
+   wait_event_timeout(dplane->vbl.wait,
+              list_empty(&dplane->vbl.update.node),
+              HZ/25);

    if (plane->fb != fb) {
        struct armada_gem_object *obj = drm_fb_obj(fb);
-       uint32_t sy, su, sv;
+       uint32_t addr[3], pixel_format;
+       int i, num_planes, hsub;

        /*
         * Take a reference on the new framebuffer - we want to
···
                        older_fb);
        }

-       src_y >>= 16;
-       src_x >>= 16;
-       sy = obj->dev_addr + fb->offsets[0] + src_y * fb->pitches[0] +
-           src_x * fb->bits_per_pixel / 8;
-       su = obj->dev_addr + fb->offsets[1] + src_y * fb->pitches[1] +
-           src_x;
-       sv = obj->dev_addr + fb->offsets[2] + src_y * fb->pitches[2] +
-           src_x;
+       src_y = src.y1 >> 16;
+       src_x = src.x1 >> 16;

-       armada_reg_queue_set(dplane->vbl.regs, idx, sy,
+       pixel_format = fb->pixel_format;
+       hsub = drm_format_horz_chroma_subsampling(pixel_format);
+       num_planes = drm_format_num_planes(pixel_format);
+
+       /*
+        * Annoyingly, shifting a YUYV-format image by one pixel
+        * causes the U/V planes to toggle.  Toggle the UV swap.
+        * (Unfortunately, this causes momentary colour flickering.)
+        */
+       if (src_x & (hsub - 1) && num_planes == 1)
+           ctrl0 ^= CFG_DMA_MOD(CFG_SWAPUV);
+
+       for (i = 0; i < num_planes; i++)
+           addr[i] = obj->dev_addr + fb->offsets[i] +
+                 src_y * fb->pitches[i] +
+                 src_x * drm_format_plane_cpp(pixel_format, i);
+       for (; i < ARRAY_SIZE(addr); i++)
+           addr[i] = 0;
+
+       armada_reg_queue_set(dplane->vbl.regs, idx, addr[0],
                     LCD_SPU_DMA_START_ADDR_Y0);
-       armada_reg_queue_set(dplane->vbl.regs, idx, su,
+       armada_reg_queue_set(dplane->vbl.regs, idx, addr[1],
                     LCD_SPU_DMA_START_ADDR_U0);
-       armada_reg_queue_set(dplane->vbl.regs, idx, sv,
+       armada_reg_queue_set(dplane->vbl.regs, idx, addr[2],
                     LCD_SPU_DMA_START_ADDR_V0);
-       armada_reg_queue_set(dplane->vbl.regs, idx, sy,
+       armada_reg_queue_set(dplane->vbl.regs, idx, addr[0],
                     LCD_SPU_DMA_START_ADDR_Y1);
-       armada_reg_queue_set(dplane->vbl.regs, idx, su,
+       armada_reg_queue_set(dplane->vbl.regs, idx, addr[1],
                     LCD_SPU_DMA_START_ADDR_U1);
-       armada_reg_queue_set(dplane->vbl.regs, idx, sv,
+       armada_reg_queue_set(dplane->vbl.regs, idx, addr[2],
                     LCD_SPU_DMA_START_ADDR_V1);

        val = fb->pitches[0] << 16 | fb->pitches[0];
···
                     LCD_SPU_DMA_PITCH_UV);
    }

-   val = (src_h & 0xffff0000) | src_w >> 16;
+   val = (drm_rect_height(&src) & 0xffff0000) | drm_rect_width(&src) >> 16;
    if (dplane->src_hw != val) {
        dplane->src_hw = val;
        armada_reg_queue_set(dplane->vbl.regs, idx, val,
                     LCD_SPU_DMA_HPXL_VLN);
    }
-   val = crtc_h << 16 | crtc_w;
+
+   val = drm_rect_height(&dest) << 16 | drm_rect_width(&dest);
    if (dplane->dst_hw != val) {
        dplane->dst_hw = val;
        armada_reg_queue_set(dplane->vbl.regs, idx, val,
                     LCD_SPU_DZM_HPXL_VLN);
    }
-   val = crtc_y << 16 | crtc_x;
+
+   val = dest.y1 << 16 | dest.x1;
    if (dplane->dst_yx != val) {
        dplane->dst_yx = val;
        armada_reg_queue_set(dplane->vbl.regs, idx, val,
                     LCD_SPU_DMA_OVSA_HPXL_VLN);
    }
+
    if (dplane->ctrl0 != ctrl0) {
        dplane->ctrl0 = ctrl0;
        armada_reg_queue_mod(dplane->vbl.regs, idx, ctrl0,
···

 static void armada_plane_destroy(struct drm_plane *plane)
 {
-   kfree(plane);
+   struct armada_plane *dplane = drm_to_armada_plane(plane);
+
+   drm_plane_cleanup(plane);
+
+   kfree(dplane);
 }

 static int armada_plane_set_property(struct drm_plane *plane,
+5-2
drivers/gpu/drm/drm_crtc.c
···
    if (!drm_core_check_feature(dev, DRIVER_MODESET))
        return -EINVAL;

-   /* For some reason crtc x/y offsets are signed internally. */
-   if (crtc_req->x > INT_MAX || crtc_req->y > INT_MAX)
+   /*
+    * Universal plane src offsets are only 16.16, prevent havoc for
+    * drivers using universal plane code internally.
+    */
+   if (crtc_req->x & 0xffff0000 || crtc_req->y & 0xffff0000)
        return -ERANGE;

    drm_modeset_lock_all(dev);
+60
drivers/gpu/drm/drm_ioc32.c
···

 #define DRM_IOCTL_WAIT_VBLANK32	DRM_IOWR(0x3a, drm_wait_vblank32_t)

+#define DRM_IOCTL_MODE_ADDFB232	DRM_IOWR(0xb8, drm_mode_fb_cmd232_t)
+
 typedef struct drm_version_32 {
    int version_major;    /**< Major version */
    int version_minor;    /**< Minor version */
···
    return 0;
 }

+typedef struct drm_mode_fb_cmd232 {
+   u32 fb_id;
+   u32 width;
+   u32 height;
+   u32 pixel_format;
+   u32 flags;
+   u32 handles[4];
+   u32 pitches[4];
+   u32 offsets[4];
+   u64 modifier[4];
+} __attribute__((packed)) drm_mode_fb_cmd232_t;
+
+static int compat_drm_mode_addfb2(struct file *file, unsigned int cmd,
+                 unsigned long arg)
+{
+   struct drm_mode_fb_cmd232 __user *argp = (void __user *)arg;
+   struct drm_mode_fb_cmd232 req32;
+   struct drm_mode_fb_cmd2 __user *req64;
+   int i;
+   int err;
+
+   if (copy_from_user(&req32, argp, sizeof(req32)))
+       return -EFAULT;
+
+   req64 = compat_alloc_user_space(sizeof(*req64));
+
+   if (!access_ok(VERIFY_WRITE, req64, sizeof(*req64))
+       || __put_user(req32.width, &req64->width)
+       || __put_user(req32.height, &req64->height)
+       || __put_user(req32.pixel_format, &req64->pixel_format)
+       || __put_user(req32.flags, &req64->flags))
+       return -EFAULT;
+
+   for (i = 0; i < 4; i++) {
+       if (__put_user(req32.handles[i], &req64->handles[i]))
+           return -EFAULT;
+       if (__put_user(req32.pitches[i], &req64->pitches[i]))
+           return -EFAULT;
+       if (__put_user(req32.offsets[i], &req64->offsets[i]))
+           return -EFAULT;
+       if (__put_user(req32.modifier[i], &req64->modifier[i]))
+           return -EFAULT;
+   }
+
+   err = drm_ioctl(file, DRM_IOCTL_MODE_ADDFB2, (unsigned long)req64);
+   if (err)
+       return err;
+
+   if (__get_user(req32.fb_id, &req64->fb_id))
+       return -EFAULT;
+
+   if (copy_to_user(argp, &req32, sizeof(req32)))
+       return -EFAULT;
+
+   return 0;
+}
+
 static drm_ioctl_compat_t *drm_compat_ioctls[] = {
    [DRM_IOCTL_NR(DRM_IOCTL_VERSION32)] = compat_drm_version,
    [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE32)] = compat_drm_getunique,
···
    [DRM_IOCTL_NR(DRM_IOCTL_UPDATE_DRAW32)] = compat_drm_update_draw,
 #endif
    [DRM_IOCTL_NR(DRM_IOCTL_WAIT_VBLANK32)] = compat_drm_wait_vblank,
+   [DRM_IOCTL_NR(DRM_IOCTL_MODE_ADDFB232)] = compat_drm_mode_addfb2,
 };

 /**
+3-3
drivers/gpu/drm/i915/i915_drv.h
···
 	struct kref ref;
 	int user_handle;
 	uint8_t remap_slice;
+	struct drm_i915_private *i915;
 	struct drm_i915_file_private *file_priv;
 	struct i915_ctx_hang_stats hang_stats;
 	struct i915_hw_ppgtt *ppgtt;
···
 	unsigned int cache_level:3;
 	unsigned int cache_dirty:1;
 
-	unsigned int has_dma_mapping:1;
-
 	unsigned int frontbuffer_bits:INTEL_FRONTBUFFER_BITS;
 
 	unsigned int pin_display;
···
 int i915_debugfs_connector_add(struct drm_connector *connector);
 void intel_display_crc_init(struct drm_device *dev);
 #else
-static inline int i915_debugfs_connector_add(struct drm_connector *connector) {}
+static inline int i915_debugfs_connector_add(struct drm_connector *connector)
+{ return 0; }
 static inline void intel_display_crc_init(struct drm_device *dev) {}
 #endif
+17-18
drivers/gpu/drm/i915/i915_gem.c
···
 	sg_dma_len(sg) = obj->base.size;
 
 	obj->pages = st;
-	obj->has_dma_mapping = true;
 	return 0;
 }
···
 
 	sg_free_table(obj->pages);
 	kfree(obj->pages);
-
-	obj->has_dma_mapping = false;
 }
 
 static void
···
 		obj->base.read_domains = obj->base.write_domain = I915_GEM_DOMAIN_CPU;
 	}
 
+	i915_gem_gtt_finish_object(obj);
+
 	if (i915_gem_object_needs_bit17_swizzle(obj))
 		i915_gem_object_save_bit_17_swizzle(obj);
···
 	struct sg_page_iter sg_iter;
 	struct page *page;
 	unsigned long last_pfn = 0;	/* suppress gcc warning */
+	int ret;
 	gfp_t gfp;
 
 	/* Assert that the object is not currently in any GPU domain. As it
···
 			 */
 			i915_gem_shrink_all(dev_priv);
 			page = shmem_read_mapping_page(mapping, i);
-			if (IS_ERR(page))
+			if (IS_ERR(page)) {
+				ret = PTR_ERR(page);
 				goto err_pages;
+			}
 		}
 #ifdef CONFIG_SWIOTLB
 		if (swiotlb_nr_tbl()) {
···
 	sg_mark_end(sg);
 	obj->pages = st;
 
+	ret = i915_gem_gtt_prepare_object(obj);
+	if (ret)
+		goto err_pages;
+
 	if (i915_gem_object_needs_bit17_swizzle(obj))
 		i915_gem_object_do_bit_17_swizzle(obj);
···
 	 * space and so want to translate the error from shmemfs back to our
 	 * usual understanding of ENOMEM.
 	 */
-	if (PTR_ERR(page) == -ENOSPC)
-		return -ENOMEM;
-	else
-		return PTR_ERR(page);
+	if (ret == -ENOSPC)
+		ret = -ENOMEM;
+
+	return ret;
 }
 
 /* Ensure that the associated pages are gathered from the backing storage
···
 	}
 
 	request->emitted_jiffies = jiffies;
+	ring->last_submitted_seqno = request->seqno;
 	list_add_tail(&request->list, &ring->request_list);
 	request->file_priv = NULL;
···
 
 	/* Since the unbound list is global, only move to that list if
 	 * no more VMAs exist. */
-	if (list_empty(&obj->vma_list)) {
-		i915_gem_gtt_finish_object(obj);
+	if (list_empty(&obj->vma_list))
 		list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
-	}
 
 	/* And finally now the object is completely decoupled from this vma,
 	 * we can drop its hold on the backing storage and allow it to be
···
 		goto err_remove_node;
 	}
 
-	ret = i915_gem_gtt_prepare_object(obj);
-	if (ret)
-		goto err_remove_node;
-
 	trace_i915_vma_bind(vma, flags);
 	ret = i915_vma_bind(vma, obj->cache_level, flags);
 	if (ret)
-		goto err_finish_gtt;
+		goto err_remove_node;
 
 	list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
 	list_add_tail(&vma->mm_list, &vm->inactive_list);
 
 	return vma;
 
-err_finish_gtt:
-	i915_gem_gtt_finish_object(obj);
 err_remove_node:
 	drm_mm_remove_node(&vma->node);
 err_free_vma:
drivers/gpu/drm/i915/intel_ringbuffer.h
···
 	/**
 	 * Do we have some not yet emitted requests outstanding?
 	 */
 	struct drm_i915_gem_request *outstanding_lazy_request;
+	/**
+	 * Seqno of request most recently submitted to request_list.
+	 * Used exclusively by hang checker to avoid grabbing lock while
+	 * inspecting request list.
+	 */
+	u32 last_submitted_seqno;
+
 	bool gpu_caches_dirty;
 
 	wait_queue_head_t irq_queue;
+1-1
drivers/gpu/drm/imx/imx-tve.c
···
 
 	switch (tve->mode) {
 	case TVE_MODE_VGA:
-		imx_drm_set_bus_format_pins(encoder, MEDIA_BUS_FMT_YUV8_1X24,
+		imx_drm_set_bus_format_pins(encoder, MEDIA_BUS_FMT_GBR888_1X24,
 					    tve->hsync_pin, tve->vsync_pin);
 		break;
 	case TVE_MODE_TVOUT:
+15-6
drivers/gpu/drm/imx/parallel-display.c
···
 #include <drm/drm_panel.h>
 #include <linux/videodev2.h>
 #include <video/of_display_timing.h>
+#include <linux/of_graph.h>
 
 #include "imx-drm.h"
 
···
 {
 	struct drm_device *drm = data;
 	struct device_node *np = dev->of_node;
-	struct device_node *panel_node;
+	struct device_node *port;
 	const u8 *edidp;
 	struct imx_parallel_display *imxpd;
 	int ret;
···
 		imxpd->bus_format = MEDIA_BUS_FMT_RGB666_1X24_CPADHI;
 	}
 
-	panel_node = of_parse_phandle(np, "fsl,panel", 0);
-	if (panel_node) {
-		imxpd->panel = of_drm_find_panel(panel_node);
-		if (!imxpd->panel)
-			return -EPROBE_DEFER;
+	/* port@1 is the output port */
+	port = of_graph_get_port_by_id(np, 1);
+	if (port) {
+		struct device_node *endpoint, *remote;
+
+		endpoint = of_get_child_by_name(port, "endpoint");
+		if (endpoint) {
+			remote = of_graph_get_remote_port_parent(endpoint);
+			if (remote)
+				imxpd->panel = of_drm_find_panel(remote);
+			if (!imxpd->panel)
+				return -EPROBE_DEFER;
+		}
 	}
 
 	imxpd->dev = dev;
+1-1
drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
···
 
 	if (wait) {
 		if (!wait_for_completion_timeout(&engine->compl,
-				msecs_to_jiffies(1))) {
+				msecs_to_jiffies(100))) {
 			dev_err(dmm->dev, "timed out waiting for done\n");
 			ret = -ETIMEDOUT;
 		}
drivers/gpu/drm/omapdrm/omap_fb.c
···
 }
 
 /* unpin, no longer being scanned out: */
-int omap_framebuffer_unpin(struct drm_framebuffer *fb)
+void omap_framebuffer_unpin(struct drm_framebuffer *fb)
 {
 	struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
-	int ret, i, n = drm_format_num_planes(fb->pixel_format);
+	int i, n = drm_format_num_planes(fb->pixel_format);
 
 	mutex_lock(&omap_fb->lock);
···
 
 	if (omap_fb->pin_count > 0) {
 		mutex_unlock(&omap_fb->lock);
-		return 0;
+		return;
 	}
 
 	for (i = 0; i < n; i++) {
 		struct plane *plane = &omap_fb->planes[i];
-		ret = omap_gem_put_paddr(plane->bo);
-		if (ret)
-			goto fail;
+		omap_gem_put_paddr(plane->bo);
 		plane->paddr = 0;
 	}
 
 	mutex_unlock(&omap_fb->lock);
-
-	return 0;
-
-fail:
-	mutex_unlock(&omap_fb->lock);
-	return ret;
 }
 
 struct drm_gem_object *omap_framebuffer_bo(struct drm_framebuffer *fb, int p)
+1-1
drivers/gpu/drm/omapdrm/omap_fbdev.c
···
 	fbdev->ywrap_enabled = priv->has_dmm && ywrap_enabled;
 	if (fbdev->ywrap_enabled) {
 		/* need to align pitch to page size if using DMM scrolling */
-		mode_cmd.pitches[0] = ALIGN(mode_cmd.pitches[0], PAGE_SIZE);
+		mode_cmd.pitches[0] = PAGE_ALIGN(mode_cmd.pitches[0]);
 	}
 
 	/* allocate backing bo */
+14-12
drivers/gpu/drm/omapdrm/omap_gem.c
···
 /* Release physical address, when DMA is no longer being performed.. this
  * could potentially unpin and unmap buffers from TILER
  */
-int omap_gem_put_paddr(struct drm_gem_object *obj)
+void omap_gem_put_paddr(struct drm_gem_object *obj)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
-	int ret = 0;
+	int ret;
 
 	mutex_lock(&obj->dev->struct_mutex);
 	if (omap_obj->paddr_cnt > 0) {
···
 		if (ret) {
 			dev_err(obj->dev->dev,
 				"could not unpin pages: %d\n", ret);
-			goto fail;
 		}
 		ret = tiler_release(omap_obj->block);
 		if (ret) {
···
 			omap_obj->block = NULL;
 		}
 	}
-fail:
+
 	mutex_unlock(&obj->dev->struct_mutex);
-	return ret;
 }
 
 /* Get rotated scanout address (only valid if already pinned), at the
···
 
 	omap_obj = kzalloc(sizeof(*omap_obj), GFP_KERNEL);
 	if (!omap_obj)
-		goto fail;
-
-	spin_lock(&priv->list_lock);
-	list_add(&omap_obj->mm_list, &priv->obj_list);
-	spin_unlock(&priv->list_lock);
+		return NULL;
 
 	obj = &omap_obj->base;
···
 		 */
 		omap_obj->vaddr = dma_alloc_writecombine(dev->dev, size,
 				&omap_obj->paddr, GFP_KERNEL);
-		if (omap_obj->vaddr)
-			flags |= OMAP_BO_DMA;
+		if (!omap_obj->vaddr) {
+			kfree(omap_obj);
 
+			return NULL;
+		}
+
+		flags |= OMAP_BO_DMA;
 	}
+
+	spin_lock(&priv->list_lock);
+	list_add(&omap_obj->mm_list, &priv->obj_list);
+	spin_unlock(&priv->list_lock);
 
 	omap_obj->flags = flags;
+26
drivers/gpu/drm/omapdrm/omap_plane.c
···
  * this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_plane_helper.h>
 
···
 	dispc_ovl_enable(omap_plane->id, false);
 }
 
+static int omap_plane_atomic_check(struct drm_plane *plane,
+				   struct drm_plane_state *state)
+{
+	struct drm_crtc_state *crtc_state;
+
+	if (!state->crtc)
+		return 0;
+
+	crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc);
+	if (IS_ERR(crtc_state))
+		return PTR_ERR(crtc_state);
+
+	if (state->crtc_x < 0 || state->crtc_y < 0)
+		return -EINVAL;
+
+	if (state->crtc_x + state->crtc_w > crtc_state->adjusted_mode.hdisplay)
+		return -EINVAL;
+
+	if (state->crtc_y + state->crtc_h > crtc_state->adjusted_mode.vdisplay)
+		return -EINVAL;
+
+	return 0;
+}
+
 static const struct drm_plane_helper_funcs omap_plane_helper_funcs = {
 	.prepare_fb = omap_plane_prepare_fb,
 	.cleanup_fb = omap_plane_cleanup_fb,
+	.atomic_check = omap_plane_atomic_check,
 	.atomic_update = omap_plane_atomic_update,
 	.atomic_disable = omap_plane_atomic_disable,
 };
drivers/gpu/drm/radeon/cik.c
···
 	case 1: /* D1 vblank/vline */
 		switch (src_data) {
 		case 0: /* D1 vblank */
-			if (rdev->irq.stat_regs.cik.disp_int & LB_D1_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[0]) {
-					drm_handle_vblank(rdev->ddev, 0);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[0]))
-					radeon_crtc_handle_vblank(rdev, 0);
-				rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D1 vblank\n");
+			if (!(rdev->irq.stat_regs.cik.disp_int & LB_D1_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[0]) {
+				drm_handle_vblank(rdev->ddev, 0);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[0]))
+				radeon_crtc_handle_vblank(rdev, 0);
+			rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D1 vblank\n");
+
 			break;
 		case 1: /* D1 vline */
-			if (rdev->irq.stat_regs.cik.disp_int & LB_D1_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D1 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int & LB_D1_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D1 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 2: /* D2 vblank/vline */
 		switch (src_data) {
 		case 0: /* D2 vblank */
-			if (rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[1]) {
-					drm_handle_vblank(rdev->ddev, 1);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[1]))
-					radeon_crtc_handle_vblank(rdev, 1);
-				rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D2 vblank\n");
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[1]) {
+				drm_handle_vblank(rdev->ddev, 1);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[1]))
+				radeon_crtc_handle_vblank(rdev, 1);
+			rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D2 vblank\n");
+
 			break;
 		case 1: /* D2 vline */
-			if (rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D2 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D2 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 3: /* D3 vblank/vline */
 		switch (src_data) {
 		case 0: /* D3 vblank */
-			if (rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[2]) {
-					drm_handle_vblank(rdev->ddev, 2);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[2]))
-					radeon_crtc_handle_vblank(rdev, 2);
-				rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D3 vblank\n");
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[2]) {
+				drm_handle_vblank(rdev->ddev, 2);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[2]))
+				radeon_crtc_handle_vblank(rdev, 2);
+			rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D3 vblank\n");
+
 			break;
 		case 1: /* D3 vline */
-			if (rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D3 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D3 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 4: /* D4 vblank/vline */
 		switch (src_data) {
 		case 0: /* D4 vblank */
-			if (rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[3]) {
-					drm_handle_vblank(rdev->ddev, 3);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[3]))
-					radeon_crtc_handle_vblank(rdev, 3);
-				rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D4 vblank\n");
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[3]) {
+				drm_handle_vblank(rdev->ddev, 3);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[3]))
+				radeon_crtc_handle_vblank(rdev, 3);
+			rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D4 vblank\n");
+
 			break;
 		case 1: /* D4 vline */
-			if (rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D4 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D4 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 5: /* D5 vblank/vline */
 		switch (src_data) {
 		case 0: /* D5 vblank */
-			if (rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[4]) {
-					drm_handle_vblank(rdev->ddev, 4);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[4]))
-					radeon_crtc_handle_vblank(rdev, 4);
-				rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D5 vblank\n");
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[4]) {
+				drm_handle_vblank(rdev->ddev, 4);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[4]))
+				radeon_crtc_handle_vblank(rdev, 4);
+			rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D5 vblank\n");
+
 			break;
 		case 1: /* D5 vline */
-			if (rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D5 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D5 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 6: /* D6 vblank/vline */
 		switch (src_data) {
 		case 0: /* D6 vblank */
-			if (rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[5]) {
-					drm_handle_vblank(rdev->ddev, 5);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[5]))
-					radeon_crtc_handle_vblank(rdev, 5);
-				rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D6 vblank\n");
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[5]) {
+				drm_handle_vblank(rdev->ddev, 5);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[5]))
+				radeon_crtc_handle_vblank(rdev, 5);
+			rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D6 vblank\n");
+
 			break;
 		case 1: /* D6 vline */
-			if (rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D6 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D6 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 42: /* HPD hotplug */
 		switch (src_data) {
 		case 0:
-			if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD1\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int & DC_HPD1_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD1\n");
+
 			break;
 		case 1:
-			if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD2\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD2\n");
+
 			break;
 		case 2:
-			if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD3\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD3\n");
+
 			break;
 		case 3:
-			if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD4\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD4\n");
+
 			break;
 		case 4:
-			if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD5\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD5\n");
+
 			break;
 		case 5:
-			if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD6\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD6\n");
+
 			break;
 		case 6:
-			if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_RX_INTERRUPT;
-				queue_dp = true;
-				DRM_DEBUG("IH: HPD_RX 1\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+			queue_dp = true;
+			DRM_DEBUG("IH: HPD_RX 1\n");
+
 			break;
 		case 7:
-			if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
-				queue_dp = true;
-				DRM_DEBUG("IH: HPD_RX 2\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+			queue_dp = true;
+			DRM_DEBUG("IH: HPD_RX 2\n");
+
 			break;
 		case 8:
-			if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
-				queue_dp = true;
-				DRM_DEBUG("IH: HPD_RX 3\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+			queue_dp = true;
+			DRM_DEBUG("IH: HPD_RX 3\n");
+
 			break;
 		case 9:
-			if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
-				queue_dp = true;
-				DRM_DEBUG("IH: HPD_RX 4\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+			queue_dp = true;
+			DRM_DEBUG("IH: HPD_RX 4\n");
+
 			break;
 		case 10:
-			if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
-				queue_dp = true;
-				DRM_DEBUG("IH: HPD_RX 5\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+			queue_dp = true;
+			DRM_DEBUG("IH: HPD_RX 5\n");
+
 			break;
 		case 11:
-			if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
-				rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
-				queue_dp = true;
-				DRM_DEBUG("IH: HPD_RX 6\n");
-			}
+			if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT))
+				DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+			queue_dp = true;
+			DRM_DEBUG("IH: HPD_RX 6\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+217-175
drivers/gpu/drm/radeon/evergreen.c
···
 		return IRQ_NONE;
 
 	rptr = rdev->ih.rptr;
-	DRM_DEBUG("r600_irq_process start: rptr %d, wptr %d\n", rptr, wptr);
+	DRM_DEBUG("evergreen_irq_process start: rptr %d, wptr %d\n", rptr, wptr);
 
 	/* Order reading of wptr vs. reading of IH ring data */
 	rmb();
···
 	case 1: /* D1 vblank/vline */
 		switch (src_data) {
 		case 0: /* D1 vblank */
-			if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[0]) {
-					drm_handle_vblank(rdev->ddev, 0);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[0]))
-					radeon_crtc_handle_vblank(rdev, 0);
-				rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D1 vblank\n");
+			if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[0]) {
+				drm_handle_vblank(rdev->ddev, 0);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[0]))
+				radeon_crtc_handle_vblank(rdev, 0);
+			rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D1 vblank\n");
+
 			break;
 		case 1: /* D1 vline */
-			if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D1 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: D1 vline - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D1 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 2: /* D2 vblank/vline */
 		switch (src_data) {
 		case 0: /* D2 vblank */
-			if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[1]) {
-					drm_handle_vblank(rdev->ddev, 1);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[1]))
-					radeon_crtc_handle_vblank(rdev, 1);
-				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D2 vblank\n");
+			if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: D2 vblank - IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[1]) {
+				drm_handle_vblank(rdev->ddev, 1);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[1]))
+				radeon_crtc_handle_vblank(rdev, 1);
+			rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D2 vblank\n");
+
 			break;
 		case 1: /* D2 vline */
-			if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D2 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: D2 vline - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D2 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 3: /* D3 vblank/vline */
 		switch (src_data) {
 		case 0: /* D3 vblank */
-			if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[2]) {
-					drm_handle_vblank(rdev->ddev, 2);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[2]))
-					radeon_crtc_handle_vblank(rdev, 2);
-				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D3 vblank\n");
+			if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: D3 vblank - IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[2]) {
+				drm_handle_vblank(rdev->ddev, 2);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
 			}
+			if (atomic_read(&rdev->irq.pflip[2]))
+				radeon_crtc_handle_vblank(rdev, 2);
+			rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D3 vblank\n");
+
 			break;
 		case 1: /* D3 vline */
-			if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D3 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: D3 vline - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D3 vline\n");
+
 			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
 	case 4: /* D4 vblank/vline */
 		switch (src_data) {
 		case 0: /* D4 vblank */
-			if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[3]) {
-					drm_handle_vblank(rdev->ddev, 3);
-					rdev->pm.vblank_sync =
true;50395039- wake_up(&rdev->irq.vblank_queue);50405040- }50415041- if (atomic_read(&rdev->irq.pflip[3]))50425042- radeon_crtc_handle_vblank(rdev, 3);50435043- rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;50445044- DRM_DEBUG("IH: D4 vblank\n");50235023+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT))50245024+ DRM_DEBUG("IH: D4 vblank - IH event w/o asserted irq bit?\n");50255025+50265026+ if (rdev->irq.crtc_vblank_int[3]) {50275027+ drm_handle_vblank(rdev->ddev, 3);50285028+ rdev->pm.vblank_sync = true;50295029+ wake_up(&rdev->irq.vblank_queue);50455030 }50315031+ if (atomic_read(&rdev->irq.pflip[3]))50325032+ radeon_crtc_handle_vblank(rdev, 3);50335033+ rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;50345034+ DRM_DEBUG("IH: D4 vblank\n");50355035+50465036 break;50475037 case 1: /* D4 vline */50485048- if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT) {50495049- rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;50505050- DRM_DEBUG("IH: D4 vline\n");50515051- }50385038+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT))50395039+ DRM_DEBUG("IH: D4 vline - IH event w/o asserted irq bit?\n");50405040+50415041+ rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;50425042+ DRM_DEBUG("IH: D4 vline\n");50435043+50525044 break;50535045 default:50545046 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);···50625046 case 5: /* D5 vblank/vline */50635047 switch (src_data) {50645048 case 0: /* D5 vblank */50655065- if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) {50665066- if (rdev->irq.crtc_vblank_int[4]) {50675067- drm_handle_vblank(rdev->ddev, 4);50685068- rdev->pm.vblank_sync = true;50695069- wake_up(&rdev->irq.vblank_queue);50705070- }50715071- if (atomic_read(&rdev->irq.pflip[4]))50725072- radeon_crtc_handle_vblank(rdev, 4);50735073- 
rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;50745074- DRM_DEBUG("IH: D5 vblank\n");50495049+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT))50505050+ DRM_DEBUG("IH: D5 vblank - IH event w/o asserted irq bit?\n");50515051+50525052+ if (rdev->irq.crtc_vblank_int[4]) {50535053+ drm_handle_vblank(rdev->ddev, 4);50545054+ rdev->pm.vblank_sync = true;50555055+ wake_up(&rdev->irq.vblank_queue);50755056 }50575057+ if (atomic_read(&rdev->irq.pflip[4]))50585058+ radeon_crtc_handle_vblank(rdev, 4);50595059+ rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;50605060+ DRM_DEBUG("IH: D5 vblank\n");50615061+50765062 break;50775063 case 1: /* D5 vline */50785078- if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT) {50795079- rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;50805080- DRM_DEBUG("IH: D5 vline\n");50815081- }50645064+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT))50655065+ DRM_DEBUG("IH: D5 vline - IH event w/o asserted irq bit?\n");50665066+50675067+ rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;50685068+ DRM_DEBUG("IH: D5 vline\n");50695069+50825070 break;50835071 default:50845072 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);···50925072 case 6: /* D6 vblank/vline */50935073 switch (src_data) {50945074 case 0: /* D6 vblank */50955095- if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) {50965096- if (rdev->irq.crtc_vblank_int[5]) {50975097- drm_handle_vblank(rdev->ddev, 5);50985098- rdev->pm.vblank_sync = true;50995099- wake_up(&rdev->irq.vblank_queue);51005100- }51015101- if (atomic_read(&rdev->irq.pflip[5]))51025102- radeon_crtc_handle_vblank(rdev, 5);51035103- rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;51045104- DRM_DEBUG("IH: D6 vblank\n");50755075+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & 
LB_D6_VBLANK_INTERRUPT))50765076+ DRM_DEBUG("IH: D6 vblank - IH event w/o asserted irq bit?\n");50775077+50785078+ if (rdev->irq.crtc_vblank_int[5]) {50795079+ drm_handle_vblank(rdev->ddev, 5);50805080+ rdev->pm.vblank_sync = true;50815081+ wake_up(&rdev->irq.vblank_queue);51055082 }50835083+ if (atomic_read(&rdev->irq.pflip[5]))50845084+ radeon_crtc_handle_vblank(rdev, 5);50855085+ rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;50865086+ DRM_DEBUG("IH: D6 vblank\n");50875087+51065088 break;51075089 case 1: /* D6 vline */51085108- if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT) {51095109- rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;51105110- DRM_DEBUG("IH: D6 vline\n");51115111- }50905090+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT))50915091+ DRM_DEBUG("IH: D6 vline - IH event w/o asserted irq bit?\n");50925092+50935093+ rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;50945094+ DRM_DEBUG("IH: D6 vline\n");50955095+51125096 break;51135097 default:51145098 DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);···51325108 case 42: /* HPD hotplug */51335109 switch (src_data) {51345110 case 0:51355135- if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT) {51365136- rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT;51375137- queue_hotplug = true;51385138- DRM_DEBUG("IH: HPD1\n");51395139- }51115111+ if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT))51125112+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51135113+51145114+ rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT;51155115+ queue_hotplug = true;51165116+ DRM_DEBUG("IH: HPD1\n");51405117 break;51415118 case 1:51425142- if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT) {51435143- rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT;51445144- queue_hotplug = true;51455145- DRM_DEBUG("IH: 
HPD2\n");51465146- }51195119+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT))51205120+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51215121+51225122+ rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT;51235123+ queue_hotplug = true;51245124+ DRM_DEBUG("IH: HPD2\n");51475125 break;51485126 case 2:51495149- if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT) {51505150- rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;51515151- queue_hotplug = true;51525152- DRM_DEBUG("IH: HPD3\n");51535153- }51275127+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT))51285128+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51295129+51305130+ rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;51315131+ queue_hotplug = true;51325132+ DRM_DEBUG("IH: HPD3\n");51545133 break;51555134 case 3:51565156- if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT) {51575157- rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;51585158- queue_hotplug = true;51595159- DRM_DEBUG("IH: HPD4\n");51605160- }51355135+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT))51365136+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51375137+51385138+ rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;51395139+ queue_hotplug = true;51405140+ DRM_DEBUG("IH: HPD4\n");51615141 break;51625142 case 4:51635163- if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT) {51645164- rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;51655165- queue_hotplug = true;51665166- DRM_DEBUG("IH: HPD5\n");51675167- }51435143+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT))51445144+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51455145+51465146+ rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;51475147+ queue_hotplug = true;51485148+ DRM_DEBUG("IH: HPD5\n");51685149 
break;51695150 case 5:51705170- if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT) {51715171- rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;51725172- queue_hotplug = true;51735173- DRM_DEBUG("IH: HPD6\n");51745174- }51515151+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT))51525152+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51535153+51545154+ rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;51555155+ queue_hotplug = true;51565156+ DRM_DEBUG("IH: HPD6\n");51755157 break;51765158 case 6:51775177- if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {51785178- rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;51795179- queue_dp = true;51805180- DRM_DEBUG("IH: HPD_RX 1\n");51815181- }51595159+ if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT))51605160+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51615161+51625162+ rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;51635163+ queue_dp = true;51645164+ DRM_DEBUG("IH: HPD_RX 1\n");51825165 break;51835166 case 7:51845184- if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {51855185- rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;51865186- queue_dp = true;51875187- DRM_DEBUG("IH: HPD_RX 2\n");51885188- }51675167+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT))51685168+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51695169+51705170+ rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;51715171+ queue_dp = true;51725172+ DRM_DEBUG("IH: HPD_RX 2\n");51895173 break;51905174 case 8:51915191- if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {51925192- rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;51935193- queue_dp = true;51945194- DRM_DEBUG("IH: HPD_RX 3\n");51955195- }51755175+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & 
DC_HPD3_RX_INTERRUPT))51765176+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51775177+51785178+ rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;51795179+ queue_dp = true;51805180+ DRM_DEBUG("IH: HPD_RX 3\n");51965181 break;51975182 case 9:51985198- if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {51995199- rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;52005200- queue_dp = true;52015201- DRM_DEBUG("IH: HPD_RX 4\n");52025202- }51835183+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT))51845184+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51855185+51865186+ rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;51875187+ queue_dp = true;51885188+ DRM_DEBUG("IH: HPD_RX 4\n");52035189 break;52045190 case 10:52055205- if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {52065206- rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;52075207- queue_dp = true;52085208- DRM_DEBUG("IH: HPD_RX 5\n");52095209- }51915191+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT))51925192+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");51935193+51945194+ rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;51955195+ queue_dp = true;51965196+ DRM_DEBUG("IH: HPD_RX 5\n");52105197 break;52115198 case 11:52125212- if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {52135213- rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;52145214- queue_dp = true;52155215- DRM_DEBUG("IH: HPD_RX 6\n");52165216- }51995199+ if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT))52005200+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");52015201+52025202+ rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;52035203+ queue_dp = true;52045204+ DRM_DEBUG("IH: HPD_RX 6\n");52175205 break;52185206 default:52195207 
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);···52355199 case 44: /* hdmi */52365200 switch (src_data) {52375201 case 0:52385238- if (rdev->irq.stat_regs.evergreen.afmt_status1 & AFMT_AZ_FORMAT_WTRIG) {52395239- rdev->irq.stat_regs.evergreen.afmt_status1 &= ~AFMT_AZ_FORMAT_WTRIG;52405240- queue_hdmi = true;52415241- DRM_DEBUG("IH: HDMI0\n");52425242- }52025202+ if (!(rdev->irq.stat_regs.evergreen.afmt_status1 & AFMT_AZ_FORMAT_WTRIG))52035203+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");52045204+52055205+ rdev->irq.stat_regs.evergreen.afmt_status1 &= ~AFMT_AZ_FORMAT_WTRIG;52065206+ queue_hdmi = true;52075207+ DRM_DEBUG("IH: HDMI0\n");52435208 break;52445209 case 1:52455245- if (rdev->irq.stat_regs.evergreen.afmt_status2 & AFMT_AZ_FORMAT_WTRIG) {52465246- rdev->irq.stat_regs.evergreen.afmt_status2 &= ~AFMT_AZ_FORMAT_WTRIG;52475247- queue_hdmi = true;52485248- DRM_DEBUG("IH: HDMI1\n");52495249- }52105210+ if (!(rdev->irq.stat_regs.evergreen.afmt_status2 & AFMT_AZ_FORMAT_WTRIG))52115211+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");52125212+52135213+ rdev->irq.stat_regs.evergreen.afmt_status2 &= ~AFMT_AZ_FORMAT_WTRIG;52145214+ queue_hdmi = true;52155215+ DRM_DEBUG("IH: HDMI1\n");52505216 break;52515217 case 2:52525252- if (rdev->irq.stat_regs.evergreen.afmt_status3 & AFMT_AZ_FORMAT_WTRIG) {52535253- rdev->irq.stat_regs.evergreen.afmt_status3 &= ~AFMT_AZ_FORMAT_WTRIG;52545254- queue_hdmi = true;52555255- DRM_DEBUG("IH: HDMI2\n");52565256- }52185218+ if (!(rdev->irq.stat_regs.evergreen.afmt_status3 & AFMT_AZ_FORMAT_WTRIG))52195219+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");52205220+52215221+ rdev->irq.stat_regs.evergreen.afmt_status3 &= ~AFMT_AZ_FORMAT_WTRIG;52225222+ queue_hdmi = true;52235223+ DRM_DEBUG("IH: HDMI2\n");52575224 break;52585225 case 3:52595259- if (rdev->irq.stat_regs.evergreen.afmt_status4 & AFMT_AZ_FORMAT_WTRIG) {52605260- rdev->irq.stat_regs.evergreen.afmt_status4 &= ~AFMT_AZ_FORMAT_WTRIG;52615261- queue_hdmi = 
true;52625262- DRM_DEBUG("IH: HDMI3\n");52635263- }52265226+ if (!(rdev->irq.stat_regs.evergreen.afmt_status4 & AFMT_AZ_FORMAT_WTRIG))52275227+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");52285228+52295229+ rdev->irq.stat_regs.evergreen.afmt_status4 &= ~AFMT_AZ_FORMAT_WTRIG;52305230+ queue_hdmi = true;52315231+ DRM_DEBUG("IH: HDMI3\n");52645232 break;52655233 case 4:52665266- if (rdev->irq.stat_regs.evergreen.afmt_status5 & AFMT_AZ_FORMAT_WTRIG) {52675267- rdev->irq.stat_regs.evergreen.afmt_status5 &= ~AFMT_AZ_FORMAT_WTRIG;52685268- queue_hdmi = true;52695269- DRM_DEBUG("IH: HDMI4\n");52705270- }52345234+ if (!(rdev->irq.stat_regs.evergreen.afmt_status5 & AFMT_AZ_FORMAT_WTRIG))52355235+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");52365236+52375237+ rdev->irq.stat_regs.evergreen.afmt_status5 &= ~AFMT_AZ_FORMAT_WTRIG;52385238+ queue_hdmi = true;52395239+ DRM_DEBUG("IH: HDMI4\n");52715240 break;52725241 case 5:52735273- if (rdev->irq.stat_regs.evergreen.afmt_status6 & AFMT_AZ_FORMAT_WTRIG) {52745274- rdev->irq.stat_regs.evergreen.afmt_status6 &= ~AFMT_AZ_FORMAT_WTRIG;52755275- queue_hdmi = true;52765276- DRM_DEBUG("IH: HDMI5\n");52775277- }52425242+ if (!(rdev->irq.stat_regs.evergreen.afmt_status6 & AFMT_AZ_FORMAT_WTRIG))52435243+ DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");52445244+52455245+ rdev->irq.stat_regs.evergreen.afmt_status6 &= ~AFMT_AZ_FORMAT_WTRIG;52465246+ queue_hdmi = true;52475247+ DRM_DEBUG("IH: HDMI5\n");52785248 break;52795249 default:52805250 DRM_ERROR("Unhandled interrupt: %d %d\n", src_id, src_data);
+14-11
drivers/gpu/drm/radeon/ni.c
···
		DRM_ERROR("radeon: failed initializing UVD (%d).\n", r);
	}

-	ring = &rdev->ring[TN_RING_TYPE_VCE1_INDEX];
-	if (ring->ring_size)
-		r = radeon_ring_init(rdev, ring, ring->ring_size, 0, 0x0);
+	if (rdev->family == CHIP_ARUBA) {
+		ring = &rdev->ring[TN_RING_TYPE_VCE1_INDEX];
+		if (ring->ring_size)
+			r = radeon_ring_init(rdev, ring, ring->ring_size, 0, 0x0);

-	ring = &rdev->ring[TN_RING_TYPE_VCE2_INDEX];
-	if (ring->ring_size)
-		r = radeon_ring_init(rdev, ring, ring->ring_size, 0, 0x0);
+		ring = &rdev->ring[TN_RING_TYPE_VCE2_INDEX];
+		if (ring->ring_size)
+			r = radeon_ring_init(rdev, ring, ring->ring_size, 0, 0x0);

-	if (!r)
-		r = vce_v1_0_init(rdev);
-	else if (r != -ENOENT)
-		DRM_ERROR("radeon: failed initializing VCE (%d).\n", r);
+		if (!r)
+			r = vce_v1_0_init(rdev);
+		if (r)
+			DRM_ERROR("radeon: failed initializing VCE (%d).\n", r);
+	}

	r = radeon_ib_pool_init(rdev);
	if (r) {
···
	radeon_irq_kms_fini(rdev);
	uvd_v1_0_fini(rdev);
	radeon_uvd_fini(rdev);
-	radeon_vce_fini(rdev);
+	if (rdev->family == CHIP_ARUBA)
+		radeon_vce_fini(rdev);
	cayman_pcie_gart_fini(rdev);
	r600_vram_scratch_fini(rdev);
	radeon_gem_fini(rdev);
+87-68
drivers/gpu/drm/radeon/r600.c
···
	case 1: /* D1 vblank/vline */
		switch (src_data) {
		case 0: /* D1 vblank */
-			if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[0]) {
-					drm_handle_vblank(rdev->ddev, 0);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[0]))
-					radeon_crtc_handle_vblank(rdev, 0);
-				rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D1 vblank\n");
+			if (!(rdev->irq.stat_regs.r600.disp_int & LB_D1_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[0]) {
+				drm_handle_vblank(rdev->ddev, 0);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
			}
+			if (atomic_read(&rdev->irq.pflip[0]))
+				radeon_crtc_handle_vblank(rdev, 0);
+			rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D1 vblank\n");
+
			break;
		case 1: /* D1 vline */
-			if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D1 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.disp_int & LB_D1_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: D1 vline - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D1 vline\n");
+
			break;
		default:
			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
	case 5: /* D2 vblank/vline */
		switch (src_data) {
		case 0: /* D2 vblank */
-			if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VBLANK_INTERRUPT) {
-				if (rdev->irq.crtc_vblank_int[1]) {
-					drm_handle_vblank(rdev->ddev, 1);
-					rdev->pm.vblank_sync = true;
-					wake_up(&rdev->irq.vblank_queue);
-				}
-				if (atomic_read(&rdev->irq.pflip[1]))
-					radeon_crtc_handle_vblank(rdev, 1);
-				rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VBLANK_INTERRUPT;
-				DRM_DEBUG("IH: D2 vblank\n");
+			if (!(rdev->irq.stat_regs.r600.disp_int & LB_D2_VBLANK_INTERRUPT))
+				DRM_DEBUG("IH: D2 vblank - IH event w/o asserted irq bit?\n");
+
+			if (rdev->irq.crtc_vblank_int[1]) {
+				drm_handle_vblank(rdev->ddev, 1);
+				rdev->pm.vblank_sync = true;
+				wake_up(&rdev->irq.vblank_queue);
			}
+			if (atomic_read(&rdev->irq.pflip[1]))
+				radeon_crtc_handle_vblank(rdev, 1);
+			rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VBLANK_INTERRUPT;
+			DRM_DEBUG("IH: D2 vblank\n");
+
			break;
		case 1: /* D1 vline */
-			if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VLINE_INTERRUPT) {
-				rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VLINE_INTERRUPT;
-				DRM_DEBUG("IH: D2 vline\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.disp_int & LB_D2_VLINE_INTERRUPT))
+				DRM_DEBUG("IH: D2 vline - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VLINE_INTERRUPT;
+			DRM_DEBUG("IH: D2 vline\n");
+
			break;
		default:
			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
	case 19: /* HPD/DAC hotplug */
		switch (src_data) {
		case 0:
-			if (rdev->irq.stat_regs.r600.disp_int & DC_HPD1_INTERRUPT) {
-				rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD1_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD1\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.disp_int & DC_HPD1_INTERRUPT))
+				DRM_DEBUG("IH: HPD1 - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD1_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD1\n");
			break;
		case 1:
-			if (rdev->irq.stat_regs.r600.disp_int & DC_HPD2_INTERRUPT) {
-				rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD2_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD2\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.disp_int & DC_HPD2_INTERRUPT))
+				DRM_DEBUG("IH: HPD2 - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD2_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD2\n");
			break;
		case 4:
-			if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD3_INTERRUPT) {
-				rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD3_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD3\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD3_INTERRUPT))
+				DRM_DEBUG("IH: HPD3 - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD3_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD3\n");
			break;
		case 5:
-			if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD4_INTERRUPT) {
-				rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD4_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD4\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD4_INTERRUPT))
+				DRM_DEBUG("IH: HPD4 - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD4_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD4\n");
			break;
		case 10:
-			if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD5_INTERRUPT) {
-				rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD5_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD5\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD5_INTERRUPT))
+				DRM_DEBUG("IH: HPD5 - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD5_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD5\n");
			break;
		case 12:
-			if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT) {
-				rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD6_INTERRUPT;
-				queue_hotplug = true;
-				DRM_DEBUG("IH: HPD6\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT))
+				DRM_DEBUG("IH: HPD6 - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD6_INTERRUPT;
+			queue_hotplug = true;
+			DRM_DEBUG("IH: HPD6\n");
+
			break;
		default:
			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
···
	case 21: /* hdmi */
		switch (src_data) {
		case 4:
-			if (rdev->irq.stat_regs.r600.hdmi0_status & HDMI0_AZ_FORMAT_WTRIG) {
-				rdev->irq.stat_regs.r600.hdmi0_status &= ~HDMI0_AZ_FORMAT_WTRIG;
-				queue_hdmi = true;
-				DRM_DEBUG("IH: HDMI0\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.hdmi0_status & HDMI0_AZ_FORMAT_WTRIG))
+				DRM_DEBUG("IH: HDMI0 - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.hdmi0_status &= ~HDMI0_AZ_FORMAT_WTRIG;
+			queue_hdmi = true;
+			DRM_DEBUG("IH: HDMI0\n");
+
			break;
		case 5:
-			if (rdev->irq.stat_regs.r600.hdmi1_status & HDMI0_AZ_FORMAT_WTRIG) {
-				rdev->irq.stat_regs.r600.hdmi1_status &= ~HDMI0_AZ_FORMAT_WTRIG;
-				queue_hdmi = true;
-				DRM_DEBUG("IH: HDMI1\n");
-			}
+			if (!(rdev->irq.stat_regs.r600.hdmi1_status & HDMI0_AZ_FORMAT_WTRIG))
+				DRM_DEBUG("IH: HDMI1 - IH event w/o asserted irq bit?\n");
+
+			rdev->irq.stat_regs.r600.hdmi1_status &= ~HDMI0_AZ_FORMAT_WTRIG;
+			queue_hdmi = true;
+			DRM_DEBUG("IH: HDMI1\n");
+
			break;
		default:
			DRM_ERROR("Unhandled interrupt: %d %d\n", src_id, src_data);
+1-1
drivers/gpu/drm/radeon/r600_cp.c
···
	struct drm_buf *buf;
	u32 *buffer;
	const u8 __user *data;
-	int size, pass_size;
+	unsigned int size, pass_size;
	u64 src_offset, dst_offset;

	if (!radeon_check_offset(dev_priv, tex->offset)) {
+44-65
drivers/gpu/drm/radeon/radeon_cursor.c
···
	struct radeon_device *rdev = crtc->dev->dev_private;

	if (ASIC_IS_DCE4(rdev)) {
+		WREG32(EVERGREEN_CUR_SURFACE_ADDRESS_HIGH + radeon_crtc->crtc_offset,
+		       upper_32_bits(radeon_crtc->cursor_addr));
+		WREG32(EVERGREEN_CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset,
+		       lower_32_bits(radeon_crtc->cursor_addr));
		WREG32(RADEON_MM_INDEX, EVERGREEN_CUR_CONTROL + radeon_crtc->crtc_offset);
		WREG32(RADEON_MM_DATA, EVERGREEN_CURSOR_EN |
		       EVERGREEN_CURSOR_MODE(EVERGREEN_CURSOR_24_8_PRE_MULT) |
		       EVERGREEN_CURSOR_URGENT_CONTROL(EVERGREEN_CURSOR_URGENT_1_2));
	} else if (ASIC_IS_AVIVO(rdev)) {
+		if (rdev->family >= CHIP_RV770) {
+			if (radeon_crtc->crtc_id)
+				WREG32(R700_D2CUR_SURFACE_ADDRESS_HIGH,
+				       upper_32_bits(radeon_crtc->cursor_addr));
+			else
+				WREG32(R700_D1CUR_SURFACE_ADDRESS_HIGH,
+				       upper_32_bits(radeon_crtc->cursor_addr));
+		}
+
+		WREG32(AVIVO_D1CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset,
+		       lower_32_bits(radeon_crtc->cursor_addr));
		WREG32(RADEON_MM_INDEX, AVIVO_D1CUR_CONTROL + radeon_crtc->crtc_offset);
		WREG32(RADEON_MM_DATA, AVIVO_D1CURSOR_EN |
		       (AVIVO_D1CURSOR_MODE_24BPP << AVIVO_D1CURSOR_MODE_SHIFT));
	} else {
+		/* offset is from DISP(2)_BASE_ADDRESS */
+		WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset,
+		       radeon_crtc->cursor_addr - radeon_crtc->legacy_display_base_addr);
+
		switch (radeon_crtc->crtc_id) {
		case 0:
			WREG32(RADEON_MM_INDEX, RADEON_CRTC_GEN_CNTL);
···
		       | (x << 16)
		       | y));
		/* offset is from DISP(2)_BASE_ADDRESS */
-		WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, (radeon_crtc->legacy_cursor_offset +
-		       (yorigin * 256)));
+		WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset,
+		       radeon_crtc->cursor_addr - radeon_crtc->legacy_display_base_addr +
+		       yorigin * 256);
	}

	radeon_crtc->cursor_x = x;
···
	return ret;
}

-static int radeon_set_cursor(struct drm_crtc *crtc, struct drm_gem_object *obj)
-{
-	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
-	struct radeon_device *rdev = crtc->dev->dev_private;
-	struct radeon_bo *robj = gem_to_radeon_bo(obj);
-	uint64_t gpu_addr;
-	int ret;
-
-	ret = radeon_bo_reserve(robj, false);
-	if (unlikely(ret != 0))
-		goto fail;
-	/* Only 27 bit offset for legacy cursor */
-	ret = radeon_bo_pin_restricted(robj, RADEON_GEM_DOMAIN_VRAM,
-				       ASIC_IS_AVIVO(rdev) ? 0 : 1 << 27,
-				       &gpu_addr);
-	radeon_bo_unreserve(robj);
-	if (ret)
-		goto fail;
-
-	if (ASIC_IS_DCE4(rdev)) {
-		WREG32(EVERGREEN_CUR_SURFACE_ADDRESS_HIGH + radeon_crtc->crtc_offset,
-		       upper_32_bits(gpu_addr));
-		WREG32(EVERGREEN_CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset,
-		       gpu_addr & 0xffffffff);
-	} else if (ASIC_IS_AVIVO(rdev)) {
-		if (rdev->family >= CHIP_RV770) {
-			if (radeon_crtc->crtc_id)
-				WREG32(R700_D2CUR_SURFACE_ADDRESS_HIGH, upper_32_bits(gpu_addr));
-			else
-				WREG32(R700_D1CUR_SURFACE_ADDRESS_HIGH, upper_32_bits(gpu_addr));
-		}
-		WREG32(AVIVO_D1CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset,
-		       gpu_addr & 0xffffffff);
-	} else {
-		radeon_crtc->legacy_cursor_offset = gpu_addr - radeon_crtc->legacy_display_base_addr;
-		/* offset is from DISP(2)_BASE_ADDRESS */
-		WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, radeon_crtc->legacy_cursor_offset);
-	}
-
-	return 0;
-
-fail:
-	drm_gem_object_unreference_unlocked(obj);
-
-	return ret;
-}
-
int radeon_crtc_cursor_set2(struct drm_crtc *crtc,
			    struct drm_file *file_priv,
			    uint32_t handle,
···
			    int32_t hot_y)
{
	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+	struct radeon_device *rdev = crtc->dev->dev_private;
	struct drm_gem_object *obj;
+	struct radeon_bo *robj;
	int ret;

	if (!handle) {
···
		return -ENOENT;
	}

+	robj = gem_to_radeon_bo(obj);
+	ret = radeon_bo_reserve(robj, false);
+	if (ret != 0) {
+		drm_gem_object_unreference_unlocked(obj);
+		return ret;
+	}
+	/* Only 27 bit offset for legacy cursor */
+	ret = radeon_bo_pin_restricted(robj, RADEON_GEM_DOMAIN_VRAM,
+				       ASIC_IS_AVIVO(rdev) ? 0 : 1 << 27,
+				       &radeon_crtc->cursor_addr);
+	radeon_bo_unreserve(robj);
+	if (ret) {
+		DRM_ERROR("Failed to pin new cursor BO (%d)\n", ret);
+		drm_gem_object_unreference_unlocked(obj);
+		return ret;
+	}
+
	radeon_crtc->cursor_width = width;
	radeon_crtc->cursor_height = height;
···
		radeon_crtc->cursor_hot_y = hot_y;
	}

-	ret = radeon_set_cursor(crtc, obj);
-
-	if (ret)
-		DRM_ERROR("radeon_set_cursor returned %d, not changing cursor\n",
-			  ret);
-	else
-		radeon_show_cursor(crtc);
+	radeon_show_cursor(crtc);

	radeon_lock_cursor(crtc, false);
···
		radeon_bo_unpin(robj);
		radeon_bo_unreserve(robj);
	}
-	if (radeon_crtc->cursor_bo != obj)
-		drm_gem_object_unreference_unlocked(radeon_crtc->cursor_bo);
+	drm_gem_object_unreference_unlocked(radeon_crtc->cursor_bo);
	}

	radeon_crtc->cursor_bo = obj;
···
void radeon_cursor_reset(struct drm_crtc *crtc)
{
	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
-	int ret;

	if (radeon_crtc->cursor_bo) {
		radeon_lock_cursor(crtc, true);
···
		radeon_cursor_move_locked(crtc, radeon_crtc->cursor_x,
					  radeon_crtc->cursor_y);

-		ret = radeon_set_cursor(crtc, radeon_crtc->cursor_bo);
-		if (ret)
-			DRM_ERROR("radeon_set_cursor returned %d, not showing "
-				  "cursor\n", ret);
-		else
-			radeon_show_cursor(crtc);
+		radeon_show_cursor(crtc);

		radeon_lock_cursor(crtc, false);
	}
+52-14
drivers/gpu/drm/radeon/radeon_device.c
···
 }
 
 /**
+ * Determine a sensible default GART size according to ASIC family.
+ *
+ * @family ASIC family name
+ */
+static int radeon_gart_size_auto(enum radeon_family family)
+{
+	/* default to a larger gart size on newer asics */
+	if (family >= CHIP_TAHITI)
+		return 2048;
+	else if (family >= CHIP_RV770)
+		return 1024;
+	else
+		return 512;
+}
+
+/**
  * radeon_check_arguments - validate module params
  *
  * @rdev: radeon_device pointer
···
 	}
 
 	if (radeon_gart_size == -1) {
-		/* default to a larger gart size on newer asics */
-		if (rdev->family >= CHIP_RV770)
-			radeon_gart_size = 1024;
-		else
-			radeon_gart_size = 512;
+		radeon_gart_size = radeon_gart_size_auto(rdev->family);
 	}
 	/* gtt size must be power of two and greater or equal to 32M */
 	if (radeon_gart_size < 32) {
 		dev_warn(rdev->dev, "gart size (%d) too small\n",
 			 radeon_gart_size);
-		if (rdev->family >= CHIP_RV770)
-			radeon_gart_size = 1024;
-		else
-			radeon_gart_size = 512;
+		radeon_gart_size = radeon_gart_size_auto(rdev->family);
 	} else if (!radeon_check_pot_argument(radeon_gart_size)) {
 		dev_warn(rdev->dev, "gart size (%d) must be a power of 2\n",
 			 radeon_gart_size);
-		if (rdev->family >= CHIP_RV770)
-			radeon_gart_size = 1024;
-		else
-			radeon_gart_size = 512;
+		radeon_gart_size = radeon_gart_size_auto(rdev->family);
 	}
 	rdev->mc.gtt_size = (uint64_t)radeon_gart_size << 20;
 
···
 		drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
 	}
 
-	/* unpin the front buffers */
+	/* unpin the front buffers and cursors */
 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+		struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
 		struct radeon_framebuffer *rfb = to_radeon_framebuffer(crtc->primary->fb);
 		struct radeon_bo *robj;
+
+		if (radeon_crtc->cursor_bo) {
+			struct radeon_bo *robj = gem_to_radeon_bo(radeon_crtc->cursor_bo);
+			r = radeon_bo_reserve(robj, false);
+			if (r == 0) {
+				radeon_bo_unpin(robj);
+				radeon_bo_unreserve(robj);
+			}
+		}
 
 		if (rfb == NULL || rfb->obj == NULL) {
 			continue;
···
 {
 	struct drm_connector *connector;
 	struct radeon_device *rdev = dev->dev_private;
+	struct drm_crtc *crtc;
 	int r;
 
 	if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
···
 	}
 
 	radeon_restore_bios_scratch_regs(rdev);
+
+	/* pin cursors */
+	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+		struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+
+		if (radeon_crtc->cursor_bo) {
+			struct radeon_bo *robj = gem_to_radeon_bo(radeon_crtc->cursor_bo);
+			r = radeon_bo_reserve(robj, false);
+			if (r == 0) {
+				/* Only 27 bit offset for legacy cursor */
+				r = radeon_bo_pin_restricted(robj,
+							     RADEON_GEM_DOMAIN_VRAM,
+							     ASIC_IS_AVIVO(rdev) ?
+							     0 : 1 << 27,
+							     &radeon_crtc->cursor_addr);
+				if (r != 0)
+					DRM_ERROR("Failed to pin cursor BO (%d)\n", r);
+				radeon_bo_unreserve(robj);
+			}
+		}
+	}
 
 	/* init dig PHYs, disp eng pll */
 	if (rdev->is_atom_bios) {
+1
drivers/gpu/drm/radeon/radeon_fb.c
drivers/gpu/drm/radeon/radeon_fb.c
···
 	}
 
 	info->par = rfbdev;
+	info->skip_vt_switch = true;
 
 	ret = radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj);
 	if (ret) {
···
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		npages = count;
-		if (pool->npages_free > _manager->options.max_size) {
+		/*
+		 * Wait to have at least NUM_PAGES_TO_ALLOC number of pages
+		 * to free in order to minimize calls to set_memory_wb().
+		 */
+		if (pool->npages_free >= (_manager->options.max_size +
+					  NUM_PAGES_TO_ALLOC))
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
+3
drivers/gpu/ipu-v3/ipu-common.c
···
 		return ret;
 	}
 
+	for (i = 0; i < IPU_NUM_IRQS; i += 32)
+		ipu_cm_write(ipu, 0, IPU_INT_CTRL(i / 32));
+
 	for (i = 0; i < IPU_NUM_IRQS; i += 32) {
 		gc = irq_get_domain_generic_chip(ipu->domain, i);
 		gc->reg_base = ipu->cm_reg;
+1
drivers/i2c/busses/Kconfig
···
 config I2C_MT65XX
 	tristate "MediaTek I2C adapter"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
+	depends on HAS_DMA
 	help
 	  This selects the MediaTek(R) Integrated Inter Circuit bus driver
 	  for MT65xx and MT81xx.
+8-7
drivers/i2c/busses/i2c-jz4780.c
···
 	if (IS_ERR(i2c->clk))
 		return PTR_ERR(i2c->clk);
 
-	clk_prepare_enable(i2c->clk);
+	ret = clk_prepare_enable(i2c->clk);
+	if (ret)
+		return ret;
 
-	if (of_property_read_u32(pdev->dev.of_node, "clock-frequency",
-				 &clk_freq)) {
+	ret = of_property_read_u32(pdev->dev.of_node, "clock-frequency",
+				   &clk_freq);
+	if (ret) {
 		dev_err(&pdev->dev, "clock-frequency not specified in DT");
-		return clk_freq;
+		goto err;
 	}
 
 	i2c->speed = clk_freq / 1000;
···
 	i2c->irq = platform_get_irq(pdev, 0);
 	ret = devm_request_irq(&pdev->dev, i2c->irq, jz4780_i2c_irq, 0,
 			       dev_name(&pdev->dev), i2c);
-	if (ret) {
-		ret = -ENODEV;
+	if (ret)
 		goto err;
-	}
 
 	ret = i2c_add_adapter(&i2c->adap);
 	if (ret < 0) {
···
  */
 void i2c_unregister_device(struct i2c_client *client)
 {
+	if (client->dev.of_node)
+		of_node_clear_flag(client->dev.of_node, OF_POPULATED);
 	device_unregister(&client->dev);
 }
 EXPORT_SYMBOL_GPL(i2c_unregister_device);
···
 
 	dev_dbg(&adap->dev, "of_i2c: walking child nodes\n");
 
-	for_each_available_child_of_node(adap->dev.of_node, node)
+	for_each_available_child_of_node(adap->dev.of_node, node) {
+		if (of_node_test_and_set_flag(node, OF_POPULATED))
+			continue;
 		of_i2c_register_device(adap, node);
+	}
 }
 
 static int of_dev_node_match(struct device *dev, void *data)
···
 		if (adap == NULL)
 			return NOTIFY_OK;	/* not for us */
 
+		if (of_node_test_and_set_flag(rd->dn, OF_POPULATED)) {
+			put_device(&adap->dev);
+			return NOTIFY_OK;
+		}
+
 		client = of_i2c_register_device(adap, rd->dn);
 		put_device(&adap->dev);
···
 		}
 		break;
 	case OF_RECONFIG_CHANGE_REMOVE:
+		/* already depopulated? */
+		if (!of_node_check_flag(rd->dn, OF_POPULATED))
+			return NOTIFY_OK;
+
 		/* find our device by node */
 		client = of_find_i2c_device_by_node(rd->dn);
 		if (client == NULL)
+1-1
drivers/iio/accel/bmc150-accel.c
···
 {
 	int i;
 
-	for (i = from; i >= 0; i++) {
+	for (i = from; i >= 0; i--) {
 		if (data->triggers[i].indio_trig) {
 			iio_trigger_unregister(data->triggers[i].indio_trig);
 			data->triggers[i].indio_trig = NULL;
+1-2
drivers/iio/adc/Kconfig
···
 
 config CC10001_ADC
 	tristate "Cosmic Circuits 10001 ADC driver"
-	depends on HAVE_CLK || REGULATOR
-	depends on HAS_IOMEM
+	depends on HAS_IOMEM && HAVE_CLK && REGULATOR
 	select IIO_BUFFER
 	select IIO_TRIGGERED_BUFFER
 	help
+4-4
drivers/iio/adc/at91_adc.c
···
 	u8	ts_pen_detect_sensitivity;
 
 	/* startup time calculate function */
-	u32	(*calc_startup_ticks)(u8 startup_time, u32 adc_clk_khz);
+	u32	(*calc_startup_ticks)(u32 startup_time, u32 adc_clk_khz);
 
 	u8	num_channels;
 	struct at91_adc_reg_desc registers;
···
 	u8			num_channels;
 	void __iomem		*reg_base;
 	struct at91_adc_reg_desc *registers;
-	u8			startup_time;
+	u32			startup_time;
 	u8			sample_hold_time;
 	bool			sleep_mode;
 	struct iio_trigger	**trig;
···
 	return ret;
 }
 
-static u32 calc_startup_ticks_9260(u8 startup_time, u32 adc_clk_khz)
+static u32 calc_startup_ticks_9260(u32 startup_time, u32 adc_clk_khz)
 {
 	/*
 	 * Number of ticks needed to cover the startup time of the ADC
···
 	return round_up((startup_time * adc_clk_khz / 1000) - 1, 8) / 8;
 }
 
-static u32 calc_startup_ticks_9x5(u8 startup_time, u32 adc_clk_khz)
+static u32 calc_startup_ticks_9x5(u32 startup_time, u32 adc_clk_khz)
 {
 	/*
 	 * For sama5d3x and at91sam9x5, the formula changes to:
···
 	s32 poll_value = 0;
 
 	if (state) {
+		if (!atomic_read(&st->user_requested_state))
+			return 0;
 		if (sensor_hub_device_open(st->hsdev))
 			return -EIO;
···
 
 		poll_value = hid_sensor_read_poll_value(st);
 	} else {
-		if (!atomic_dec_and_test(&st->data_ready))
+		int val;
+
+		val = atomic_dec_if_positive(&st->data_ready);
+		if (val < 0)
 			return 0;
+
 		sensor_hub_device_close(st->hsdev);
 		state_val = hid_sensor_get_usage_index(st->hsdev,
 			st->power_state.report_id,
···
 
 int hid_sensor_power_state(struct hid_sensor_common *st, bool state)
 {
+
 #ifdef CONFIG_PM
 	int ret;
 
+	atomic_set(&st->user_requested_state, state);
 	if (state)
 		ret = pm_runtime_get_sync(&st->pdev->dev);
 	else {
···
 
 	return 0;
 #else
+	atomic_set(&st->user_requested_state, state);
 	return _hid_sensor_power_state(st, state);
 #endif
 }
+2-2
drivers/iio/dac/ad5624r_spi.c
···
 #include "ad5624r.h"
 
 static int ad5624r_spi_write(struct spi_device *spi,
-			     u8 cmd, u8 addr, u16 val, u8 len)
+			     u8 cmd, u8 addr, u16 val, u8 shift)
 {
 	u32 data;
 	u8 msg[3];
···
 	 * 14-, 12-bit input code followed by 0, 2, or 4 don't care bits,
 	 * for the AD5664R, AD5644R, and AD5624R, respectively.
 	 */
-	data = (0 << 22) | (cmd << 19) | (addr << 16) | (val << (16 - len));
+	data = (0 << 22) | (cmd << 19) | (addr << 16) | (val << shift);
 	msg[0] = data >> 16;
 	msg[1] = data >> 8;
 	msg[2] = data;
+18
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
···
 	return -EINVAL;
 }
 
+static int inv_write_raw_get_fmt(struct iio_dev *indio_dev,
+				 struct iio_chan_spec const *chan, long mask)
+{
+	switch (mask) {
+	case IIO_CHAN_INFO_SCALE:
+		switch (chan->type) {
+		case IIO_ANGL_VEL:
+			return IIO_VAL_INT_PLUS_NANO;
+		default:
+			return IIO_VAL_INT_PLUS_MICRO;
+		}
+	default:
+		return IIO_VAL_INT_PLUS_MICRO;
+	}
+
+	return -EINVAL;
+}
 static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val)
 {
 	int result, i;
···
 	.driver_module = THIS_MODULE,
 	.read_raw = &inv_mpu6050_read_raw,
 	.write_raw = &inv_mpu6050_write_raw,
+	.write_raw_get_fmt = &inv_write_raw_get_fmt,
 	.attrs = &inv_attribute_group,
 	.validate_trigger = inv_mpu6050_validate_trigger,
 };
+2
drivers/iio/light/Kconfig
···
 config LTR501
 	tristate "LTR-501ALS-01 light sensor"
 	depends on I2C
+	select REGMAP_I2C
 	select IIO_BUFFER
 	select IIO_TRIGGERED_BUFFER
 	help
···
 config STK3310
 	tristate "STK3310 ALS and proximity sensor"
 	depends on I2C
+	select REGMAP_I2C
 	help
 	  Say yes here to get support for the Sensortek STK3310 ambient light
 	  and proximity sensor. The STK3311 model is also supported by this
+1-1
drivers/iio/light/cm3323.c
···
 	for (i = 0; i < ARRAY_SIZE(cm3323_int_time); i++) {
 		if (val == cm3323_int_time[i].val &&
 		    val2 == cm3323_int_time[i].val2) {
-			reg_conf = data->reg_conf;
+			reg_conf = data->reg_conf & ~CM3323_CONF_IT_MASK;
 			reg_conf |= i << CM3323_CONF_IT_SHIFT;
 
 			ret = i2c_smbus_write_word_data(data->client,
+1-1
drivers/iio/light/ltr501.c
···
 	if (ret < 0)
 		return ret;
 
-	data->als_contr = ret | data->chip_info->als_mode_active;
+	data->als_contr = status | data->chip_info->als_mode_active;
 
 	ret = regmap_read(data->regmap, LTR501_PS_CONTR, &status);
 	if (ret < 0)
+17-36
drivers/iio/light/stk3310.c
···
 #define STK3311_CHIP_ID_VAL			0x1D
 #define STK3310_PSINT_EN			0x01
 #define STK3310_PS_MAX_VAL			0xFFFF
-#define STK3310_THRESH_MAX			0xFFFF
 
 #define STK3310_DRIVER_NAME			"stk3310"
 #define STK3310_REGMAP_NAME			"stk3310_regmap"
···
 	REG_FIELD(STK3310_REG_FLAG, 4, 4);
 static const struct reg_field stk3310_reg_field_flag_nf =
 	REG_FIELD(STK3310_REG_FLAG, 0, 0);
-/*
- * Maximum PS values with regard to scale. Used to export the 'inverse'
- * PS value (high values for far objects, low values for near objects).
- */
+
+/* Estimate maximum proximity values with regard to measurement scale. */
 static const int stk3310_ps_max[4] = {
-	STK3310_PS_MAX_VAL / 64,
-	STK3310_PS_MAX_VAL / 16,
-	STK3310_PS_MAX_VAL / 4,
-	STK3310_PS_MAX_VAL,
+	STK3310_PS_MAX_VAL / 640,
+	STK3310_PS_MAX_VAL / 160,
+	STK3310_PS_MAX_VAL / 40,
+	STK3310_PS_MAX_VAL / 10
 };
 
 static const int stk3310_scale_table[][2] = {
···
 	/* Proximity event */
 	{
 		.type = IIO_EV_TYPE_THRESH,
-		.dir = IIO_EV_DIR_FALLING,
+		.dir = IIO_EV_DIR_RISING,
 		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
 				 BIT(IIO_EV_INFO_ENABLE),
 	},
 	/* Out-of-proximity event */
 	{
 		.type = IIO_EV_TYPE_THRESH,
-		.dir = IIO_EV_DIR_RISING,
+		.dir = IIO_EV_DIR_FALLING,
 		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
 				 BIT(IIO_EV_INFO_ENABLE),
 	},
···
 	u8 reg;
 	u16 buf;
 	int ret;
-	unsigned int index;
 	struct stk3310_data *data = iio_priv(indio_dev);
 
 	if (info != IIO_EV_INFO_VALUE)
 		return -EINVAL;
 
-	/*
-	 * Only proximity interrupts are implemented at the moment.
-	 * Since we're inverting proximity values, the sensor's 'high'
-	 * threshold will become our 'low' threshold, associated with
-	 * 'near' events. Similarly, the sensor's 'low' threshold will
-	 * be our 'high' threshold, associated with 'far' events.
-	 */
+	/* Only proximity interrupts are implemented at the moment. */
 	if (dir == IIO_EV_DIR_RISING)
-		reg = STK3310_REG_THDL_PS;
-	else if (dir == IIO_EV_DIR_FALLING)
 		reg = STK3310_REG_THDH_PS;
+	else if (dir == IIO_EV_DIR_FALLING)
+		reg = STK3310_REG_THDL_PS;
 	else
 		return -EINVAL;
···
 		dev_err(&data->client->dev, "register read failed\n");
 		return ret;
 	}
-	regmap_field_read(data->reg_ps_gain, &index);
-	*val = swab16(stk3310_ps_max[index] - buf);
+	*val = swab16(buf);
 
 	return IIO_VAL_INT;
 }
···
 		return -EINVAL;
 
 	if (dir == IIO_EV_DIR_RISING)
-		reg = STK3310_REG_THDL_PS;
-	else if (dir == IIO_EV_DIR_FALLING)
 		reg = STK3310_REG_THDH_PS;
+	else if (dir == IIO_EV_DIR_FALLING)
+		reg = STK3310_REG_THDL_PS;
 	else
 		return -EINVAL;
 
-	buf = swab16(stk3310_ps_max[index] - val);
+	buf = swab16(val);
 	ret = regmap_bulk_write(data->regmap, reg, &buf, 2);
 	if (ret < 0)
 		dev_err(&client->dev, "failed to set PS threshold!\n");
···
 			return ret;
 		}
 		*val = swab16(buf);
-		if (chan->type == IIO_PROXIMITY) {
-			/*
-			 * Invert the proximity data so we return low values
-			 * for close objects and high values for far ones.
-			 */
-			regmap_field_read(data->reg_ps_gain, &index);
-			*val = stk3310_ps_max[index] - *val;
-		}
 		mutex_unlock(&data->lock);
 		return IIO_VAL_INT;
 	case IIO_CHAN_INFO_INT_TIME:
···
 		}
 		event = IIO_UNMOD_EVENT_CODE(IIO_PROXIMITY, 1,
 					     IIO_EV_TYPE_THRESH,
-					     (dir ? IIO_EV_DIR_RISING :
-						    IIO_EV_DIR_FALLING));
+					     (dir ? IIO_EV_DIR_FALLING :
+						    IIO_EV_DIR_RISING));
 		iio_push_event(indio_dev, event, data->timestamp);
 
 		/* Reset the interrupt flag */
+1-1
drivers/iio/light/tcs3414.c
···
 	if (val != 0)
 		return -EINVAL;
 	for (i = 0; i < ARRAY_SIZE(tcs3414_times); i++) {
-		if (val == tcs3414_times[i] * 1000) {
+		if (val2 == tcs3414_times[i] * 1000) {
 			data->timing &= ~TCS3414_INTEG_MASK;
 			data->timing |= i;
 			return i2c_smbus_write_byte_data(
···
 #define IWPM_PID_UNDEFINED     -1
 #define IWPM_PID_UNAVAILABLE   -2
 
+#define IWPM_REG_UNDEF          0x01
+#define IWPM_REG_VALID          0x02
+#define IWPM_REG_INCOMPL        0x04
+
 struct iwpm_nlmsg_request {
 	struct list_head    inprocess_list;
 	__u32               nlmsg_seq;
···
 	atomic_t refcount;
 	atomic_t nlmsg_seq;
 	int      client_list[RDMA_NL_NUM_CLIENTS];
-	int      reg_list[RDMA_NL_NUM_CLIENTS];
+	u32      reg_list[RDMA_NL_NUM_CLIENTS];
 };
 
 /**
···
 void iwpm_set_valid(u8 nl_client, int valid);
 
 /**
- * iwpm_registered_client - Check if the port mapper client is registered
+ * iwpm_check_registration - Check if the client registration
+ * matches the given one
  * @nl_client: The index of the netlink client
+ * @reg: The given registration type to compare with
  *
  * Call iwpm_register_pid() to register a client
+ * Returns true if the client registration matches reg,
+ * otherwise returns false
  */
-int iwpm_registered_client(u8 nl_client);
+u32 iwpm_check_registration(u8 nl_client, u32 reg);
 
 /**
- * iwpm_set_registered - Set the port mapper client to registered or not
+ * iwpm_set_registration - Set the client registration
  * @nl_client: The index of the netlink client
- * @reg: 1 if registered or 0 if not
+ * @reg: Registration type to set
  */
-void iwpm_set_registered(u8 nl_client, int reg);
+void iwpm_set_registration(u8 nl_client, u32 reg);
+
+/**
+ * iwpm_get_registration
+ * @nl_client: The index of the netlink client
+ *
+ * Returns the client registration type
+ */
+u32 iwpm_get_registration(u8 nl_client);
 
 /**
  * iwpm_send_mapinfo - Send local and mapped IPv4/IPv6 address info of
+17-30
drivers/infiniband/core/mad.c
···
 	bool opa = rdma_cap_opa_mad(mad_agent_priv->qp_info->port_priv->device,
 				    mad_agent_priv->qp_info->port_priv->port_num);
 
-	if (device->node_type == RDMA_NODE_IB_SWITCH &&
+	if (rdma_cap_ib_switch(device) &&
 	    smp->mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
 		port_num = send_wr->wr.ud.port_num;
 	else
···
 		if ((opa_get_smp_direction(opa_smp)
 		     ? opa_smp->route.dr.dr_dlid : opa_smp->route.dr.dr_slid) ==
 		     OPA_LID_PERMISSIVE &&
-		     opa_smi_handle_dr_smp_send(opa_smp, device->node_type,
+		     opa_smi_handle_dr_smp_send(opa_smp,
+						rdma_cap_ib_switch(device),
 						port_num) == IB_SMI_DISCARD) {
 			ret = -EINVAL;
 			dev_err(&device->dev, "OPA Invalid directed route\n");
 			goto out;
 		}
 		opa_drslid = be32_to_cpu(opa_smp->route.dr.dr_slid);
-		if (opa_drslid != OPA_LID_PERMISSIVE &&
+		if (opa_drslid != be32_to_cpu(OPA_LID_PERMISSIVE) &&
 		    opa_drslid & 0xffff0000) {
 			ret = -EINVAL;
 			dev_err(&device->dev, "OPA Invalid dr_slid 0x%x\n",
···
 	} else {
 		if ((ib_get_smp_direction(smp) ? smp->dr_dlid : smp->dr_slid) ==
 		     IB_LID_PERMISSIVE &&
-		     smi_handle_dr_smp_send(smp, device->node_type, port_num) ==
+		     smi_handle_dr_smp_send(smp, rdma_cap_ib_switch(device), port_num) ==
 		     IB_SMI_DISCARD) {
 			ret = -EINVAL;
 			dev_err(&device->dev, "Invalid directed route\n");
···
 	struct ib_smp *smp = (struct ib_smp *)recv->mad;
 
 	if (smi_handle_dr_smp_recv(smp,
-				   port_priv->device->node_type,
+				   rdma_cap_ib_switch(port_priv->device),
 				   port_num,
 				   port_priv->device->phys_port_cnt) ==
 	    IB_SMI_DISCARD)
···
 
 	if (retsmi == IB_SMI_SEND) { /* don't forward */
 		if (smi_handle_dr_smp_send(smp,
-					   port_priv->device->node_type,
+					   rdma_cap_ib_switch(port_priv->device),
 					   port_num) == IB_SMI_DISCARD)
 			return IB_SMI_DISCARD;
 
 		if (smi_check_local_smp(smp, port_priv->device) == IB_SMI_DISCARD)
 			return IB_SMI_DISCARD;
-	} else if (port_priv->device->node_type == RDMA_NODE_IB_SWITCH) {
+	} else if (rdma_cap_ib_switch(port_priv->device)) {
 		/* forward case for switches */
 		memcpy(response, recv, mad_priv_size(response));
 		response->header.recv_wc.wc = &response->header.wc;
···
 	struct opa_smp *smp = (struct opa_smp *)recv->mad;
 
 	if (opa_smi_handle_dr_smp_recv(smp,
-				   port_priv->device->node_type,
+				   rdma_cap_ib_switch(port_priv->device),
 				   port_num,
 				   port_priv->device->phys_port_cnt) ==
 	    IB_SMI_DISCARD)
···
 
 	if (retsmi == IB_SMI_SEND) { /* don't forward */
 		if (opa_smi_handle_dr_smp_send(smp,
-					   port_priv->device->node_type,
+					   rdma_cap_ib_switch(port_priv->device),
 					   port_num) == IB_SMI_DISCARD)
 			return IB_SMI_DISCARD;
 
···
 		    IB_SMI_DISCARD)
 			return IB_SMI_DISCARD;
 
-	} else if (port_priv->device->node_type == RDMA_NODE_IB_SWITCH) {
+	} else if (rdma_cap_ib_switch(port_priv->device)) {
 		/* forward case for switches */
 		memcpy(response, recv, mad_priv_size(response));
 		response->header.recv_wc.wc = &response->header.wc;
···
 		goto out;
 	}
 
-	if (port_priv->device->node_type == RDMA_NODE_IB_SWITCH)
+	if (rdma_cap_ib_switch(port_priv->device))
 		port_num = wc->port_num;
 	else
 		port_num = port_priv->port_num;
···
 
 static void ib_mad_init_device(struct ib_device *device)
 {
-	int start, end, i;
+	int start, i;
 
-	if (device->node_type == RDMA_NODE_IB_SWITCH) {
-		start = 0;
-		end = 0;
-	} else {
-		start = 1;
-		end = device->phys_port_cnt;
-	}
+	start = rdma_start_port(device);
 
-	for (i = start; i <= end; i++) {
+	for (i = start; i <= rdma_end_port(device); i++) {
 		if (!rdma_cap_ib_mad(device, i))
 			continue;
 
···
 
 static void ib_mad_remove_device(struct ib_device *device)
 {
-	int start, end, i;
+	int i;
 
-	if (device->node_type == RDMA_NODE_IB_SWITCH) {
-		start = 0;
-		end = 0;
-	} else {
-		start = 1;
-		end = device->phys_port_cnt;
-	}
-
-	for (i = start; i <= end; i++) {
+	for (i = rdma_start_port(device); i <= rdma_end_port(device); i++) {
 		if (!rdma_cap_ib_mad(device, i))
 			continue;
 
+2-6
drivers/infiniband/core/multicast.c
···
 	if (!dev)
 		return;
 
-	if (device->node_type == RDMA_NODE_IB_SWITCH)
-		dev->start_port = dev->end_port = 0;
-	else {
-		dev->start_port = 1;
-		dev->end_port = device->phys_port_cnt;
-	}
+	dev->start_port = rdma_start_port(device);
+	dev->end_port = rdma_end_port(device);
 
 	for (i = 0; i <= dev->end_port - dev->start_port; i++) {
 		if (!rdma_cap_ib_mcast(device, dev->start_port + i))
+2-2
drivers/infiniband/core/opa_smi.h
···
 
 #include "smi.h"
 
-enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, u8 node_type,
+enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, bool is_switch,
 				       int port_num, int phys_port_cnt);
 int opa_smi_get_fwd_port(struct opa_smp *smp);
 extern enum smi_forward_action opa_smi_check_forward_dr_smp(struct opa_smp *smp);
 extern enum smi_action opa_smi_handle_dr_smp_send(struct opa_smp *smp,
-					      u8 node_type, int port_num);
+					      bool is_switch, int port_num);
 
 /*
  * Return IB_SMI_HANDLE if the SMP should be handled by the local SMA/SM
+2-6
drivers/infiniband/core/sa_query.c
···
 	int s, e, i;
 	int count = 0;
 
-	if (device->node_type == RDMA_NODE_IB_SWITCH)
-		s = e = 0;
-	else {
-		s = 1;
-		e = device->phys_port_cnt;
-	}
+	s = rdma_start_port(device);
+	e = rdma_end_port(device);
 
 	sa_dev = kzalloc(sizeof *sa_dev +
 			 (e - s + 1) * sizeof (struct ib_sa_port),
+18-19
drivers/infiniband/core/smi.c
···
 #include "smi.h"
 #include "opa_smi.h"
 
-static enum smi_action __smi_handle_dr_smp_send(u8 node_type, int port_num,
+static enum smi_action __smi_handle_dr_smp_send(bool is_switch, int port_num,
 						u8 *hop_ptr, u8 hop_cnt,
 						const u8 *initial_path,
 						const u8 *return_path,
···
 
 		/* C14-9:2 */
 		if (*hop_ptr && *hop_ptr < hop_cnt) {
-			if (node_type != RDMA_NODE_IB_SWITCH)
+			if (!is_switch)
 				return IB_SMI_DISCARD;
 
 			/* return_path set when received */
···
 		if (*hop_ptr == hop_cnt) {
 			/* return_path set when received */
 			(*hop_ptr)++;
-			return (node_type == RDMA_NODE_IB_SWITCH ||
+			return (is_switch ||
 				dr_dlid_is_permissive ?
 				IB_SMI_HANDLE : IB_SMI_DISCARD);
 		}
···
 
 		/* C14-13:2 */
 		if (2 <= *hop_ptr && *hop_ptr <= hop_cnt) {
-			if (node_type != RDMA_NODE_IB_SWITCH)
+			if (!is_switch)
 				return IB_SMI_DISCARD;
 
 			(*hop_ptr)--;
···
 		if (*hop_ptr == 1) {
 			(*hop_ptr)--;
 			/* C14-13:3 -- SMPs destined for SM shouldn't be here */
-			return (node_type == RDMA_NODE_IB_SWITCH ||
+			return (is_switch ||
 				dr_slid_is_permissive ?
 				IB_SMI_HANDLE : IB_SMI_DISCARD);
 		}
···
  * Return IB_SMI_DISCARD if the SMP should be discarded
  */
 enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp,
-				       u8 node_type, int port_num)
+				       bool is_switch, int port_num)
 {
-	return __smi_handle_dr_smp_send(node_type, port_num,
+	return __smi_handle_dr_smp_send(is_switch, port_num,
 					&smp->hop_ptr, smp->hop_cnt,
 					smp->initial_path,
 					smp->return_path,
···
 }
 
 enum smi_action opa_smi_handle_dr_smp_send(struct opa_smp *smp,
-				       u8 node_type, int port_num)
+				       bool is_switch, int port_num)
 {
-	return __smi_handle_dr_smp_send(node_type, port_num,
+	return __smi_handle_dr_smp_send(is_switch, port_num,
 					&smp->hop_ptr, smp->hop_cnt,
 					smp->route.dr.initial_path,
 					smp->route.dr.return_path,
···
 					OPA_LID_PERMISSIVE);
 }
 
-static enum smi_action __smi_handle_dr_smp_recv(u8 node_type, int port_num,
+static enum smi_action __smi_handle_dr_smp_recv(bool is_switch, int port_num,
 						int phys_port_cnt,
 						u8 *hop_ptr, u8 hop_cnt,
 						const u8 *initial_path,
···
 
 		/* C14-9:2 -- intermediate hop */
 		if (*hop_ptr && *hop_ptr < hop_cnt) {
-			if (node_type != RDMA_NODE_IB_SWITCH)
+			if (!is_switch)
 				return IB_SMI_DISCARD;
 
 			return_path[*hop_ptr] = port_num;
···
 			return_path[*hop_ptr] = port_num;
 			/* hop_ptr updated when sending */
 
-			return (node_type == RDMA_NODE_IB_SWITCH ||
+			return (is_switch ||
 				dr_dlid_is_permissive ?
 				IB_SMI_HANDLE : IB_SMI_DISCARD);
 		}
···
 
 		/* C14-13:2 */
 		if (2 <= *hop_ptr && *hop_ptr <= hop_cnt) {
-			if (node_type != RDMA_NODE_IB_SWITCH)
+			if (!is_switch)
 				return IB_SMI_DISCARD;
 
 			/* hop_ptr updated when sending */
···
 				return IB_SMI_HANDLE;
 			}
 			/* hop_ptr updated when sending */
-			return (node_type == RDMA_NODE_IB_SWITCH ?
-				IB_SMI_HANDLE : IB_SMI_DISCARD);
+			return (is_switch ? IB_SMI_HANDLE : IB_SMI_DISCARD);
 		}
 
 		/* C14-13:4 -- hop_ptr = 0 -> give to SM */
···
  * Adjust information for a received SMP
  * Return IB_SMI_DISCARD if the SMP should be dropped
  */
-enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, u8 node_type,
+enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, bool is_switch,
 				       int port_num, int phys_port_cnt)
 {
-	return __smi_handle_dr_smp_recv(node_type, port_num, phys_port_cnt,
+	return __smi_handle_dr_smp_recv(is_switch, port_num, phys_port_cnt,
 					&smp->hop_ptr, smp->hop_cnt,
 					smp->initial_path,
 					smp->return_path,
···
  * Adjust information for a received SMP
  * Return IB_SMI_DISCARD if the SMP should be dropped
  */
-enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, u8 node_type,
+enum smi_action opa_smi_handle_dr_smp_recv(struct opa_smp *smp, bool is_switch,
 				       int port_num, int phys_port_cnt)
 {
-	return __smi_handle_dr_smp_recv(node_type, port_num, phys_port_cnt,
+	return __smi_handle_dr_smp_recv(is_switch, port_num, phys_port_cnt,
 					&smp->hop_ptr, smp->hop_cnt,
 					smp->route.dr.initial_path,
 					smp->route.dr.return_path,
+2-2
drivers/infiniband/core/smi.h
···
 	IB_SMI_FORWARD	/* SMP should be forwarded (for switches only) */
 };
 
-enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, u8 node_type,
+enum smi_action smi_handle_dr_smp_recv(struct ib_smp *smp, bool is_switch,
 				       int port_num, int phys_port_cnt);
 int smi_get_fwd_port(struct ib_smp *smp);
 extern enum smi_forward_action smi_check_forward_dr_smp(struct ib_smp *smp);
 extern enum smi_action smi_handle_dr_smp_send(struct ib_smp *smp,
-					      u8 node_type, int port_num);
+					      bool is_switch, int port_num);
 
 /*
  * Return IB_SMI_HANDLE if the SMP should be handled by the local SMA/SM
+1-1
drivers/infiniband/core/sysfs.c
···
 		goto err_put;
 	}
 
-	if (device->node_type == RDMA_NODE_IB_SWITCH) {
+	if (rdma_cap_ib_switch(device)) {
 		ret = add_port(device, 0, port_callback);
 		if (ret)
 			goto err_put;
···
 
 	spin_lock_init(&idev->qp_table.lock);
 	spin_lock_init(&idev->lk_table.lock);
-	idev->sm_lid = __constant_be16_to_cpu(IB_LID_PERMISSIVE);
+	idev->sm_lid = be16_to_cpu(IB_LID_PERMISSIVE);
 	/* Set the prefix to the default value (see ch. 4.1.1) */
-	idev->gid_prefix = __constant_cpu_to_be64(0xfe80000000000000ULL);
+	idev->gid_prefix = cpu_to_be64(0xfe80000000000000ULL);
 
 	ret = ipath_init_qp_table(idev, ib_ipath_qp_table_size);
 	if (ret)
+22-12
drivers/infiniband/hw/mlx4/mad.c
···
 	struct mlx4_ib_dev *dev = to_mdev(ibdev);
 	const struct ib_mad *in_mad = (const struct ib_mad *)in;
 	struct ib_mad *out_mad = (struct ib_mad *)out;
+	enum rdma_link_layer link = rdma_port_get_link_layer(ibdev, port_num);
 
-	BUG_ON(in_mad_size != sizeof(*in_mad) ||
-	       *out_mad_size != sizeof(*out_mad));
+	if (WARN_ON_ONCE(in_mad_size != sizeof(*in_mad) ||
+			 *out_mad_size != sizeof(*out_mad)))
+		return IB_MAD_RESULT_FAILURE;
 
-	switch (rdma_port_get_link_layer(ibdev, port_num)) {
-	case IB_LINK_LAYER_INFINIBAND:
-		if (!mlx4_is_slave(dev->dev))
-			return ib_process_mad(ibdev, mad_flags, port_num, in_wc,
-					      in_grh, in_mad, out_mad);
-	case IB_LINK_LAYER_ETHERNET:
-		return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
-					in_grh, in_mad, out_mad);
-	default:
-		return -EINVAL;
+	/* iboe_process_mad() which uses the HCA flow-counters to implement IB PMA
+	 * queries, should be called only by VFs and for that specific purpose
+	 */
+	if (link == IB_LINK_LAYER_INFINIBAND) {
+		if (mlx4_is_slave(dev->dev) &&
+		    in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT &&
+		    in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS)
+			return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
+						in_grh, in_mad, out_mad);
+
+		return ib_process_mad(ibdev, mad_flags, port_num, in_wc,
+				      in_grh, in_mad, out_mad);
 	}
+
+	if (link == IB_LINK_LAYER_ETHERNET)
+		return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
+					in_grh, in_mad, out_mad);
+
+	return -EINVAL;
 }
 
 static void send_handler(struct ib_mad_agent *agent,
+18-15
drivers/infiniband/hw/mlx4/main.c
···
 	props->hca_core_clock = dev->dev->caps.hca_core_clock * 1000UL;
 	props->timestamp_mask = 0xFFFFFFFFFFFFULL;

-	err = mlx4_get_internal_clock_params(dev->dev, &clock_params);
-	if (err)
-		goto out;
+	if (!mlx4_is_slave(dev->dev))
+		err = mlx4_get_internal_clock_params(dev->dev, &clock_params);

 	if (uhw->outlen >= resp.response_length + sizeof(resp.hca_core_clock_offset)) {
-		resp.hca_core_clock_offset = clock_params.offset % PAGE_SIZE;
 		resp.response_length += sizeof(resp.hca_core_clock_offset);
-		resp.comp_mask |= QUERY_DEVICE_RESP_MASK_TIMESTAMP;
+		if (!err && !mlx4_is_slave(dev->dev)) {
+			resp.comp_mask |= QUERY_DEVICE_RESP_MASK_TIMESTAMP;
+			resp.hca_core_clock_offset = clock_params.offset % PAGE_SIZE;
+		}
 	}

 	if (uhw->outlen) {
···
 	dm = kcalloc(ports, sizeof(*dm), GFP_ATOMIC);
 	if (!dm) {
 		pr_err("failed to allocate memory for tunneling qp update\n");
-		goto out;
+		return;
 	}

 	for (i = 0; i < ports; i++) {
 		dm[i] = kmalloc(sizeof (struct mlx4_ib_demux_work), GFP_ATOMIC);
 		if (!dm[i]) {
 			pr_err("failed to allocate memory for tunneling qp update work struct\n");
-			for (i = 0; i < dev->caps.num_ports; i++) {
-				if (dm[i])
-					kfree(dm[i]);
-			}
+			while (--i >= 0)
+				kfree(dm[i]);
 			goto out;
 		}
-	}
-	/* initialize or tear down tunnel QPs for the slave */
-	for (i = 0; i < ports; i++) {
 		INIT_WORK(&dm[i]->work, mlx4_ib_tunnels_update_work);
 		dm[i]->port = first_port + i + 1;
 		dm[i]->slave = slave;
 		dm[i]->do_init = do_init;
 		dm[i]->dev = ibdev;
-		spin_lock_irqsave(&ibdev->sriov.going_down_lock, flags);
-		if (!ibdev->sriov.is_going_down)
+	}
+	/* initialize or tear down tunnel QPs for the slave */
+	spin_lock_irqsave(&ibdev->sriov.going_down_lock, flags);
+	if (!ibdev->sriov.is_going_down) {
+		for (i = 0; i < ports; i++)
 			queue_work(ibdev->sriov.demux[i].ud_wq, &dm[i]->work);
-		spin_unlock_irqrestore(&ibdev->sriov.going_down_lock, flags);
+		spin_unlock_irqrestore(&ibdev->sriov.going_down_lock, flags);
+	} else {
+		spin_unlock_irqrestore(&ibdev->sriov.going_down_lock, flags);
+		for (i = 0; i < ports; i++)
+			kfree(dm[i]);
 	}
out:
 	kfree(dm);
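The allocation-failure path in the hunk above replaces a full rescan of the array with the usual kernel unwind idiom: walk backwards over exactly the entries that were successfully allocated. A minimal standalone sketch of that `while (--i >= 0)` pattern, with hypothetical `alloc_item`/`free_item` helpers (not the driver's code):

```c
#include <stdlib.h>

/* Hypothetical allocator that can be forced to fail at a given index,
 * plus a live-object counter so leaks are observable. */
static int fail_at = -1;
static int live_items;

static void *alloc_item(int i)
{
	if (i == fail_at)
		return NULL;
	live_items++;
	return malloc(1);
}

static void free_item(void *p)
{
	live_items--;
	free(p);
}

/* Allocate n items; on partial failure, free only what was allocated. */
static int alloc_all(void **v, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		v[i] = alloc_item(i);
		if (!v[i]) {
			while (--i >= 0)	/* unwind: same idiom as the fix */
				free_item(v[i]);
			return -1;
		}
	}
	return 0;
}
```

Because `i` still indexes the failed slot when the loop breaks, pre-decrementing visits only indices `0 .. i-1`, which is precisely the set of successful allocations.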
···
 		return MIPS_CPU_IRQ_BASE + cp0_fdc_irq;
 	}

-	/*
-	 * Some cores claim the FDC is routable but it doesn't actually seem to
-	 * be connected.
-	 */
-	switch (current_cpu_type()) {
-	case CPU_INTERAPTIV:
-	case CPU_PROAPTIV:
-		return -1;
-	}
-
 	return irq_create_mapping(gic_irq_domain,
 				  GIC_LOCAL_TO_HWIRQ(GIC_LOCAL_INT_FDC));
 }
···
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/slab.h>
+#include <linux/vmalloc.h>
 #include <linux/sort.h>
 #include <linux/rbtree.h>
···
 	process_mapping_fn process_prepared_mapping;
 	process_mapping_fn process_prepared_discard;

-	struct dm_bio_prison_cell *cell_sort_array[CELL_SORT_ARRAY_SIZE];
+	struct dm_bio_prison_cell **cell_sort_array;
 };

 static enum pool_mode get_pool_mode(struct pool *pool);
···
 	queue_delayed_work(pool->wq, &pool->waker, COMMIT_PERIOD);
 }

+static void notify_of_pool_mode_change_to_oods(struct pool *pool);
+
 /*
  * We're holding onto IO to allow userland time to react.  After the
  * timeout either the pool will have been resized (and thus back in
- * PM_WRITE mode), or we degrade to PM_READ_ONLY and start erroring IO.
+ * PM_WRITE mode), or we degrade to PM_OUT_OF_DATA_SPACE w/ error_if_no_space.
 */
static void do_no_space_timeout(struct work_struct *ws)
{
 	struct pool *pool = container_of(to_delayed_work(ws), struct pool,
 					 no_space_timeout);

-	if (get_pool_mode(pool) == PM_OUT_OF_DATA_SPACE && !pool->pf.error_if_no_space)
-		set_pool_mode(pool, PM_READ_ONLY);
+	if (get_pool_mode(pool) == PM_OUT_OF_DATA_SPACE && !pool->pf.error_if_no_space) {
+		pool->pf.error_if_no_space = true;
+		notify_of_pool_mode_change_to_oods(pool);
+		error_retry_list(pool);
+	}
}

/*----------------------------------------------------------------*/
···
 	dm_table_event(pool->ti->table);
 	DMINFO("%s: switching pool to %s mode",
 	       dm_device_name(pool->pool_md), new_mode);
+}
+
+static void notify_of_pool_mode_change_to_oods(struct pool *pool)
+{
+	if (!pool->pf.error_if_no_space)
+		notify_of_pool_mode_change(pool, "out-of-data-space (queue IO)");
+	else
+		notify_of_pool_mode_change(pool, "out-of-data-space (error IO)");
}

static bool passdown_enabled(struct pool_c *pt)
···
 	 * frequently seeing this mode.
 	 */
 	if (old_mode != new_mode)
-		notify_of_pool_mode_change(pool, "out-of-data-space");
+		notify_of_pool_mode_change_to_oods(pool);
 	pool->process_bio = process_bio_read_only;
 	pool->process_discard = process_discard_bio;
 	pool->process_cell = process_cell_read_only;
···
{
 	__pool_table_remove(pool);

+	vfree(pool->cell_sort_array);
 	if (dm_pool_metadata_close(pool->pmd) < 0)
 		DMWARN("%s: dm_pool_metadata_close() failed.", __func__);
···
 		goto bad_mapping_pool;
 	}

+	pool->cell_sort_array = vmalloc(sizeof(*pool->cell_sort_array) * CELL_SORT_ARRAY_SIZE);
+	if (!pool->cell_sort_array) {
+		*error = "Error allocating cell sort array";
+		err_p = ERR_PTR(-ENOMEM);
+		goto bad_sort_array;
+	}
+
 	pool->ref_count = 1;
 	pool->last_commit_jiffies = jiffies;
 	pool->pool_md = pool_md;
···
 	return pool;

+bad_sort_array:
+	mempool_destroy(pool->mapping_pool);
bad_mapping_pool:
 	dm_deferred_set_destroy(pool->all_io_ds);
bad_all_io_ds:
···
 * Status line is:
 *    <transaction id> <used metadata sectors>/<total metadata sectors>
 *    <used data sectors>/<total data sectors> <held metadata root>
+ *    <pool mode> <discard config> <no space config> <needs_check>
 */
static void pool_status(struct dm_target *ti, status_type_t type,
			unsigned status_flags, char *result, unsigned maxlen)
···
 			DMEMIT("error_if_no_space ");
 		else
 			DMEMIT("queue_if_no_space ");
+
+		if (dm_pool_metadata_needs_check(pool->pmd))
+			DMEMIT("needs_check ");
+		else
+			DMEMIT("- ");

 		break;
···
 	.name = "thin-pool",
 	.features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE |
 		    DM_TARGET_IMMUTABLE,
-	.version = {1, 15, 0},
+	.version = {1, 16, 0},
 	.module = THIS_MODULE,
 	.ctr = pool_ctr,
 	.dtr = pool_dtr,
···

static struct target_type thin_target = {
 	.name = "thin",
-	.version = {1, 15, 0},
+	.version = {1, 16, 0},
 	.module = THIS_MODULE,
 	.ctr = thin_ctr,
 	.dtr = thin_dtr,
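The new `notify_of_pool_mode_change_to_oods()` helper above reports one of two strings for the same pool mode, depending on how out-of-data-space IO will be handled. A standalone mirror of that string choice (simplified types, not the driver's code):

```c
#include <stdbool.h>

/* Mirror of the wording chosen by notify_of_pool_mode_change_to_oods():
 * while out of data space the pool either queues IO (waiting for a resize)
 * or errors it, and the log message says which. */
static const char *oods_mode_string(bool error_if_no_space)
{
	if (!error_if_no_space)
		return "out-of-data-space (queue IO)";
	return "out-of-data-space (error IO)";
}
```

The distinction matters operationally: after the no-space timeout fires, the patch flips `error_if_no_space` to true, so the reported string changes from the "queue IO" form to the "error IO" form.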
+4-8
drivers/md/dm.c
···
  */
static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
{
-	int nr_requests_pending;
-
 	atomic_dec(&md->pending[rw]);

 	/* nudge anyone waiting on suspend queue */
-	nr_requests_pending = md_in_flight(md);
-	if (!nr_requests_pending)
+	if (!md_in_flight(md))
 		wake_up(&md->wait);

 	/*
···
 	if (run_queue) {
 		if (md->queue->mq_ops)
 			blk_mq_run_hw_queues(md->queue, true);
-		else if (!nr_requests_pending ||
-			 (nr_requests_pending >= md->queue->nr_congestion_on))
+		else
 			blk_run_queue_async(md->queue);
 	}

···

static void cleanup_mapped_device(struct mapped_device *md)
{
-	cleanup_srcu_struct(&md->io_barrier);
-
 	if (md->wq)
 		destroy_workqueue(md->wq);
 	if (md->kworker_task)
···
 		mempool_destroy(md->rq_pool);
 	if (md->bs)
 		bioset_free(md->bs);
+
+	cleanup_srcu_struct(&md->io_barrier);

 	if (md->disk) {
 		spin_lock(&_minor_lock);
+3-3
drivers/md/persistent-data/dm-btree-remove.c
···

 	if (s < 0 && nr_center < -s) {
 		/* not enough in central node */
-		shift(left, center, nr_center);
-		s = nr_center - target;
+		shift(left, center, -nr_center);
+		s += nr_center;
 		shift(left, right, s);
 		nr_right += s;
 	} else
···
 	if (s > 0 && nr_center < s) {
 		/* not enough in central node */
 		shift(center, right, nr_center);
-		s = target - nr_center;
+		s -= nr_center;
 		shift(left, right, s);
 		nr_left -= s;
 	} else
+1-1
drivers/md/persistent-data/dm-btree.c
···
 	int r;
 	struct del_stack *s;

-	s = kmalloc(sizeof(*s), GFP_KERNEL);
+	s = kmalloc(sizeof(*s), GFP_NOIO);
 	if (!s)
 		return -ENOMEM;
 	s->info = info;
+1-7
drivers/memory/omap-gpmc.c
···
 			ret = gpmc_probe_nand_child(pdev, child);
 		else if (of_node_cmp(child->name, "onenand") == 0)
 			ret = gpmc_probe_onenand_child(pdev, child);
-		else if (of_node_cmp(child->name, "ethernet") == 0 ||
-			 of_node_cmp(child->name, "nor") == 0 ||
-			 of_node_cmp(child->name, "uart") == 0)
+		else
 			ret = gpmc_probe_generic_child(pdev, child);
-
-		if (WARN(ret < 0, "%s: probing gpmc child %s failed\n",
-			 __func__, child->full_name))
-			of_node_put(child);
 	}

 	return 0;
+1-1
drivers/mfd/stmpe-i2c.c
···
 *
 * License Terms: GNU General Public License, version 2
 * Author: Rabin Vincent <rabin.vincent@stericsson.com> for ST-Ericsson
- * Author: Viresh Kumar <viresh.linux@gmail.com> for ST Microelectronics
+ * Author: Viresh Kumar <vireshk@kernel.org> for ST Microelectronics
 */

#include <linux/i2c.h>
+2-2
drivers/mfd/stmpe-spi.c
···
 * Copyright (C) ST Microelectronics SA 2011
 *
 * License Terms: GNU General Public License, version 2
- * Author: Viresh Kumar <viresh.linux@gmail.com> for ST Microelectronics
+ * Author: Viresh Kumar <vireshk@kernel.org> for ST Microelectronics
 */

#include <linux/spi/spi.h>
···

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("STMPE MFD SPI Interface Driver");
-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
+5-7
drivers/misc/cxl/api.c
···

 	afu = cxl_pci_to_afu(dev);

+	get_device(&afu->dev);
 	ctx = cxl_context_alloc();
 	if (IS_ERR(ctx))
 		return ctx;
···
 	rc = cxl_context_init(ctx, afu, false, NULL);
 	if (rc) {
 		kfree(ctx);
+		put_device(&afu->dev);
 		return ERR_PTR(-ENOMEM);
 	}
 	cxl_assign_psn_space(ctx);
···
{
 	if (ctx->status != CLOSED)
 		return -EBUSY;
+
+	put_device(&ctx->afu->dev);

 	cxl_context_free(ctx);

···
 	}

 	ctx->status = STARTED;
-	get_device(&ctx->afu->dev);
out:
 	mutex_unlock(&ctx->status_mutex);
 	return rc;
···
/* Stop a context.  Returns 0 on success, otherwise -Errno */
int cxl_stop_context(struct cxl_context *ctx)
{
-	int rc;
-
-	rc = __detach_context(ctx);
-	if (!rc)
-		put_device(&ctx->afu->dev);
-	return rc;
+	return __detach_context(ctx);
}
EXPORT_SYMBOL_GPL(cxl_stop_context);
+11-3
drivers/misc/cxl/context.c
···

 	if (ctx->afu->current_mode == CXL_MODE_DEDICATED) {
 		area = ctx->afu->psn_phys;
-		if (offset > ctx->afu->adapter->ps_size)
+		if (offset >= ctx->afu->adapter->ps_size)
 			return VM_FAULT_SIGBUS;
 	} else {
 		area = ctx->psn_phys;
-		if (offset > ctx->psn_size)
+		if (offset >= ctx->psn_size)
 			return VM_FAULT_SIGBUS;
 	}

···
 */
int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
{
+	u64 start = vma->vm_pgoff << PAGE_SHIFT;
 	u64 len = vma->vm_end - vma->vm_start;
-	len = min(len, ctx->psn_size);
+
+	if (ctx->afu->current_mode == CXL_MODE_DEDICATED) {
+		if (start + len > ctx->afu->adapter->ps_size)
+			return -EINVAL;
+	} else {
+		if (start + len > ctx->psn_size)
+			return -EINVAL;
+	}

 	if (ctx->afu->current_mode != CXL_MODE_DEDICATED) {
 		/* make sure there is a valid per process space for this AFU */
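The fault-handler change above is a classic off-by-one: valid offsets into a region of `size` bytes are `0 .. size-1`, so an offset equal to `size` must be rejected, which requires `>=` rather than `>`. A trivial standalone model of the corrected bounds test (illustration only, not the driver's code):

```c
/* Returns nonzero when 'offset' addresses a byte inside a region of
 * 'size' bytes. Rejecting offset >= size is the corrected check. */
static int offset_in_range(unsigned long long offset, unsigned long long size)
{
	return offset < size;
}
```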
+1-1
drivers/misc/cxl/main.c
···
 	spin_lock(&adapter->afu_list_lock);
 	for (slice = 0; slice < adapter->slices; slice++) {
 		afu = adapter->afu[slice];
-		if (!afu->enabled)
+		if (!afu || !afu->enabled)
 			continue;
 		rcu_read_lock();
 		idr_for_each_entry(&afu->contexts_idr, ctx, id)
+1-1
drivers/misc/cxl/pci.c
···

static void cxl_unmap_slice_regs(struct cxl_afu *afu)
{
-	if (afu->p1n_mmio)
+	if (afu->p2n_mmio)
 		iounmap(afu->p2n_mmio);
 	if (afu->p1n_mmio)
 		iounmap(afu->p1n_mmio);
+2-1
drivers/misc/cxl/vphb.c
···
 	unsigned long addr;

 	phb = pci_bus_to_host(bus);
-	afu = (struct cxl_afu *)phb->private_data;
 	if (phb == NULL)
 		return PCIBIOS_DEVICE_NOT_FOUND;
+	afu = (struct cxl_afu *)phb->private_data;
+
 	if (cxl_pcie_cfg_record(bus->number, devfn) > afu->crs_num)
 		return PCIBIOS_DEVICE_NOT_FOUND;
 	if (offset >= (unsigned long)phb->cfg_data)
···

 	cldev->priv_data = NULL;

-	mutex_lock(&dev->device_lock);
 	/* Need to remove the device here
 	 * since mei_nfc_free will unlink the clients
 	 */
 	mei_cl_remove_device(cldev);
+
+	mutex_lock(&dev->device_lock);
 	mei_nfc_free(ndev);
 	mutex_unlock(&dev->device_lock);
}
+2-2
drivers/mmc/host/sdhci-spear.c
···
 * Support of SDHCI platform devices for spear soc family
 *
 * Copyright (C) 2010 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
 *
 * Inspired by sdhci-pltfm.c
 *
···
module_platform_driver(sdhci_driver);

MODULE_DESCRIPTION("SPEAr Secure Digital Host Controller Interface driver");
-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
MODULE_LICENSE("GPL v2");
+34-17
drivers/net/bonding/bond_main.c
···

}

-static bool bond_should_change_active(struct bonding *bond)
+static struct slave *bond_choose_primary_or_current(struct bonding *bond)
{
 	struct slave *prim = rtnl_dereference(bond->primary_slave);
 	struct slave *curr = rtnl_dereference(bond->curr_active_slave);

-	if (!prim || !curr || curr->link != BOND_LINK_UP)
-		return true;
+	if (!prim || prim->link != BOND_LINK_UP) {
+		if (!curr || curr->link != BOND_LINK_UP)
+			return NULL;
+		return curr;
+	}
+
 	if (bond->force_primary) {
 		bond->force_primary = false;
-		return true;
+		return prim;
 	}
-	if (bond->params.primary_reselect == BOND_PRI_RESELECT_BETTER &&
-	    (prim->speed < curr->speed ||
-	     (prim->speed == curr->speed && prim->duplex <= curr->duplex)))
-		return false;
-	if (bond->params.primary_reselect == BOND_PRI_RESELECT_FAILURE)
-		return false;
-	return true;
+
+	if (!curr || curr->link != BOND_LINK_UP)
+		return prim;
+
+	/* At this point, prim and curr are both up */
+	switch (bond->params.primary_reselect) {
+	case BOND_PRI_RESELECT_ALWAYS:
+		return prim;
+	case BOND_PRI_RESELECT_BETTER:
+		if (prim->speed < curr->speed)
+			return curr;
+		if (prim->speed == curr->speed && prim->duplex <= curr->duplex)
+			return curr;
+		return prim;
+	case BOND_PRI_RESELECT_FAILURE:
+		return curr;
+	default:
+		netdev_err(bond->dev, "impossible primary_reselect %d\n",
+			   bond->params.primary_reselect);
+		return curr;
+	}
}

/**
- * find_best_interface - select the best available slave to be the active one
+ * bond_find_best_slave - select the best available slave to be the active one
 * @bond: our bonding struct
 */
static struct slave *bond_find_best_slave(struct bonding *bond)
{
-	struct slave *slave, *bestslave = NULL, *primary;
+	struct slave *slave, *bestslave = NULL;
 	struct list_head *iter;
 	int mintime = bond->params.updelay;

-	primary = rtnl_dereference(bond->primary_slave);
-	if (primary && primary->link == BOND_LINK_UP &&
-	    bond_should_change_active(bond))
-		return primary;
+	slave = bond_choose_primary_or_current(bond);
+	if (slave)
+		return slave;

 	bond_for_each_slave(bond, slave, iter) {
 		if (slave->link == BOND_LINK_UP)
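The rewritten helper above turns the old boolean "should change?" test into an explicit three-way policy decision over the primary and current slaves. A simplified standalone model of that decision table (types reduced to the fields the policy reads; this is an illustration, not the driver's code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified mirror of bond_choose_primary_or_current(). */
enum reselect { RESELECT_ALWAYS, RESELECT_BETTER, RESELECT_FAILURE };

struct slv {
	bool up;	/* link is BOND_LINK_UP */
	int speed;
	int duplex;
};

/* Returns the preferred slave, or NULL when neither is usable. */
static const struct slv *choose(const struct slv *prim, const struct slv *curr,
				enum reselect policy)
{
	if (!prim || !prim->up)
		return (curr && curr->up) ? curr : NULL;
	if (!curr || !curr->up)
		return prim;

	/* Both up: the reselect policy decides. */
	switch (policy) {
	case RESELECT_ALWAYS:
		return prim;
	case RESELECT_BETTER:
		if (prim->speed < curr->speed)
			return curr;
		if (prim->speed == curr->speed && prim->duplex <= curr->duplex)
			return curr;
		return prim;
	case RESELECT_FAILURE:
	default:
		return curr;
	}
}
```

The `force_primary` short-circuit is omitted here for brevity; in the driver it unconditionally returns the primary when both are considered.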
+8-2
drivers/net/can/c_can/c_can.c
···
{
 	struct c_can_priv *priv = netdev_priv(dev);
 	int err;
+	struct pinctrl *p;

 	/* basic c_can configuration */
 	err = c_can_chip_config(dev);
···

 	priv->can.state = CAN_STATE_ERROR_ACTIVE;

-	/* activate pins */
-	pinctrl_pm_select_default_state(dev->dev.parent);
+	/* Attempt to use "active" if available else use "default" */
+	p = pinctrl_get_select(priv->device, "active");
+	if (!IS_ERR(p))
+		pinctrl_put(p);
+	else
+		pinctrl_pm_select_default_state(priv->device);
+
 	return 0;
}

···
 	unsigned int rx_usecs = pdata->rx_usecs;
 	unsigned int rx_frames = pdata->rx_frames;
 	unsigned int inte;
+	dma_addr_t hdr_dma, buf_dma;

 	if (!rx_usecs && !rx_frames) {
 		/* No coalescing, interrupt for every descriptor */
···
 	 * Set buffer 2 (hi) address to buffer dma address (hi) and
 	 * set control bits OWN and INTE
 	 */
-	rdesc->desc0 = cpu_to_le32(lower_32_bits(rdata->rx.hdr.dma));
-	rdesc->desc1 = cpu_to_le32(upper_32_bits(rdata->rx.hdr.dma));
-	rdesc->desc2 = cpu_to_le32(lower_32_bits(rdata->rx.buf.dma));
-	rdesc->desc3 = cpu_to_le32(upper_32_bits(rdata->rx.buf.dma));
+	hdr_dma = rdata->rx.hdr.dma_base + rdata->rx.hdr.dma_off;
+	buf_dma = rdata->rx.buf.dma_base + rdata->rx.buf.dma_off;
+	rdesc->desc0 = cpu_to_le32(lower_32_bits(hdr_dma));
+	rdesc->desc1 = cpu_to_le32(upper_32_bits(hdr_dma));
+	rdesc->desc2 = cpu_to_le32(lower_32_bits(buf_dma));
+	rdesc->desc3 = cpu_to_le32(upper_32_bits(buf_dma));

 	XGMAC_SET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, INTE, inte);

+11-6
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
···
 	/* Start with the header buffer which may contain just the header
 	 * or the header plus data
 	 */
-	dma_sync_single_for_cpu(pdata->dev, rdata->rx.hdr.dma,
-				rdata->rx.hdr.dma_len, DMA_FROM_DEVICE);
+	dma_sync_single_range_for_cpu(pdata->dev, rdata->rx.hdr.dma_base,
+				      rdata->rx.hdr.dma_off,
+				      rdata->rx.hdr.dma_len, DMA_FROM_DEVICE);

 	packet = page_address(rdata->rx.hdr.pa.pages) +
 		 rdata->rx.hdr.pa.pages_offset;
···
 	len -= copy_len;
 	if (len) {
 		/* Add the remaining data as a frag */
-		dma_sync_single_for_cpu(pdata->dev, rdata->rx.buf.dma,
-					rdata->rx.buf.dma_len, DMA_FROM_DEVICE);
+		dma_sync_single_range_for_cpu(pdata->dev,
+					      rdata->rx.buf.dma_base,
+					      rdata->rx.buf.dma_off,
+					      rdata->rx.buf.dma_len,
+					      DMA_FROM_DEVICE);

 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				rdata->rx.buf.pa.pages,
···
 				if (!skb)
 					error = 1;
 			} else if (rdesc_len) {
-				dma_sync_single_for_cpu(pdata->dev,
-							rdata->rx.buf.dma,
+				dma_sync_single_range_for_cpu(pdata->dev,
+							      rdata->rx.buf.dma_base,
+							      rdata->rx.buf.dma_off,
 							rdata->rx.buf.dma_len,
 							DMA_FROM_DEVICE);

+2-1
drivers/net/ethernet/amd/xgbe/xgbe.h
···
 	struct xgbe_page_alloc pa;
 	struct xgbe_page_alloc pa_unmap;

-	dma_addr_t dma;
+	dma_addr_t dma_base;
+	unsigned long dma_off;
 	unsigned int dma_len;
};

···
 	__raw_writeq(reg, port);
 	port = s->sbm_base + R_MAC_ETHERNET_ADDR;

-#ifdef CONFIG_SB1_PASS_1_WORKAROUNDS
-	/*
-	 * Pass1 SOCs do not receive packets addressed to the
-	 * destination address in the R_MAC_ETHERNET_ADDR register.
-	 * Set the value to zero.
-	 */
-	__raw_writeq(0, port);
-#else
 	__raw_writeq(reg, port);
-#endif

 	/*
 	 * Set the receive filter for no packets, and write values
···
 */
static int efx_process_channel(struct efx_channel *channel, int budget)
{
+	struct efx_tx_queue *tx_queue;
 	int spent;

 	if (unlikely(!channel->enabled))
 		return 0;
+
+	efx_for_each_channel_tx_queue(tx_queue, channel) {
+		tx_queue->pkts_compl = 0;
+		tx_queue->bytes_compl = 0;
+	}

 	spent = efx_nic_process_eventq(channel, budget);
 	if (spent && efx_channel_has_rx_queue(channel)) {
···

 		efx_rx_flush_packet(channel);
 		efx_fast_push_rx_descriptors(rx_queue, true);
+	}
+
+	/* Update BQL */
+	efx_for_each_channel_tx_queue(tx_queue, channel) {
+		if (tx_queue->bytes_compl) {
+			netdev_tx_completed_queue(tx_queue->core_txq,
+				tx_queue->pkts_compl, tx_queue->bytes_compl);
+		}
 	}

 	return spent;
+2
drivers/net/ethernet/sfc/net_driver.h
···
 	unsigned int read_count ____cacheline_aligned_in_smp;
 	unsigned int old_write_count;
 	unsigned int merge_events;
+	unsigned int bytes_compl;
+	unsigned int pkts_compl;

 	/* Members used only on the xmit path */
 	unsigned int insert_count ____cacheline_aligned_in_smp;
···

config MDIO_BUS_MUX_MMIOREG
	tristate "Support for MMIO device-controlled MDIO bus multiplexers"
-	depends on OF_MDIO
+	depends on OF_MDIO && HAS_IOMEM
	select MDIO_BUS_MUX
	help
	  This module provides a driver for MDIO bus multiplexers that
···
 	if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
 		goto err;

-	ret = cdc_ncm_bind_common(dev, intf, data_altsetting);
+	ret = cdc_ncm_bind_common(dev, intf, data_altsetting, 0);
 	if (ret)
 		goto err;

+56-7
drivers/net/usb/cdc_ncm.c
···
 * Original author: Hans Petter Selasky <hans.petter.selasky@stericsson.com>
 *
 * USB Host Driver for Network Control Model (NCM)
- * http://www.usb.org/developers/devclass_docs/NCM10.zip
+ * http://www.usb.org/developers/docs/devclass_docs/NCM10_012011.zip
 *
 * The NCM encoding, decoding and initialization logic
 * derives from FreeBSD 8.x. if_cdce.c and if_cdcereg.h
···
 		ctx->tx_curr_skb = NULL;
 	}

+	kfree(ctx->delayed_ndp16);
+
 	kfree(ctx);
}

-int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting)
+int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting, int drvflags)
{
 	const struct usb_cdc_union_desc *union_desc = NULL;
 	struct cdc_ncm_ctx *ctx;
···
 	/* finish setting up the device specific data */
 	cdc_ncm_setup(dev);

+	/* Device-specific flags */
+	ctx->drvflags = drvflags;
+
+	/* Allocate the delayed NDP if needed. */
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
+		ctx->delayed_ndp16 = kzalloc(ctx->max_ndp_size, GFP_KERNEL);
+		if (!ctx->delayed_ndp16)
+			goto error2;
+		dev_info(&intf->dev, "NDP will be placed at end of frame for this device.");
+	}
+
 	/* override ethtool_ops */
 	dev->net->ethtool_ops = &cdc_ncm_ethtool_ops;

···
 	if (cdc_ncm_select_altsetting(intf) != CDC_NCM_COMM_ALTSETTING_NCM)
 		return -ENODEV;

-	/* The NCM data altsetting is fixed */
-	ret = cdc_ncm_bind_common(dev, intf, CDC_NCM_DATA_ALTSETTING_NCM);
+	/* The NCM data altsetting is fixed, so we hard-coded it.
+	 * Additionally, generic NCM devices are assumed to accept arbitrarily
+	 * placed NDP.
+	 */
+	ret = cdc_ncm_bind_common(dev, intf, CDC_NCM_DATA_ALTSETTING_NCM, 0);

 	/*
 	 * We should get an event when network connection is "connected" or
···
 	struct usb_cdc_ncm_nth16 *nth16 = (void *)skb->data;
 	size_t ndpoffset = le16_to_cpu(nth16->wNdpIndex);

+	/* If NDP should be moved to the end of the NCM package, we can't follow the
+	 * NTH16 header as we would normally do. NDP isn't written to the SKB yet, and
+	 * the wNdpIndex field in the header is actually not consistent with reality. It will be later.
+	 */
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
+		if (ctx->delayed_ndp16->dwSignature == sign)
+			return ctx->delayed_ndp16;
+
 	/* follow the chain of NDPs, looking for a match */
 	while (ndpoffset) {
 		ndp16 = (struct usb_cdc_ncm_ndp16 *)(skb->data + ndpoffset);
···
 	}

 	/* align new NDP */
-	cdc_ncm_align_tail(skb, ctx->tx_ndp_modulus, 0, ctx->tx_max);
+	if (!(ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END))
+		cdc_ncm_align_tail(skb, ctx->tx_ndp_modulus, 0, ctx->tx_max);

 	/* verify that there is room for the NDP and the datagram (reserve) */
 	if ((ctx->tx_max - skb->len - reserve) < ctx->max_ndp_size)
···
 	nth16->wNdpIndex = cpu_to_le16(skb->len);

 	/* push a new empty NDP */
-	ndp16 = (struct usb_cdc_ncm_ndp16 *)memset(skb_put(skb, ctx->max_ndp_size), 0, ctx->max_ndp_size);
+	if (!(ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END))
+		ndp16 = (struct usb_cdc_ncm_ndp16 *)memset(skb_put(skb, ctx->max_ndp_size), 0, ctx->max_ndp_size);
+	else
+		ndp16 = ctx->delayed_ndp16;
+
 	ndp16->dwSignature = sign;
 	ndp16->wLength = cpu_to_le16(sizeof(struct usb_cdc_ncm_ndp16) + sizeof(struct usb_cdc_ncm_dpe16));
 	return ndp16;
···
 	struct sk_buff *skb_out;
 	u16 n = 0, index, ndplen;
 	u8 ready2send = 0;
+	u32 delayed_ndp_size;
+
+	/* When our NDP gets written in cdc_ncm_ndp(), then skb_out->len gets updated
+	 * accordingly. Otherwise, we should check here.
+	 */
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
+		delayed_ndp_size = ctx->max_ndp_size;
+	else
+		delayed_ndp_size = 0;

 	/* if there is a remaining skb, it gets priority */
 	if (skb != NULL) {
···
 		cdc_ncm_align_tail(skb_out,  ctx->tx_modulus, ctx->tx_remainder, ctx->tx_max);

 		/* check if we had enough room left for both NDP and frame */
-		if (!ndp16 || skb_out->len + skb->len > ctx->tx_max) {
+		if (!ndp16 || skb_out->len + skb->len + delayed_ndp_size > ctx->tx_max) {
 			if (n == 0) {
 				/* won't fit, MTU problem? */
 				dev_kfree_skb_any(skb);
···
 		ctx->tx_reason_max_datagram++;	/* count reason for transmitting */
 		/* frame goes out */
 		/* variables will be reset at next call */
+	}
+
+	/* If requested, put NDP at end of frame. */
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
+		nth16 = (struct usb_cdc_ncm_nth16 *)skb_out->data;
+		cdc_ncm_align_tail(skb_out, ctx->tx_ndp_modulus, 0, ctx->tx_max);
+		nth16->wNdpIndex = cpu_to_le16(skb_out->len);
+		memcpy(skb_put(skb_out, ctx->max_ndp_size), ctx->delayed_ndp16, ctx->max_ndp_size);
+
+		/* Zero out delayed NDP - signature checking will naturally fail. */
+		ndp16 = memset(ctx->delayed_ndp16, 0, ctx->max_ndp_size);
 	}

 	/* If collected data size is less or equal ctx->min_tx_pkt
+5-2
drivers/net/usb/huawei_cdc_ncm.c
···
 	struct usb_driver *subdriver = ERR_PTR(-ENODEV);
 	int ret = -ENODEV;
 	struct huawei_cdc_ncm_state *drvstate = (void *)&usbnet_dev->data;
+	int drvflags = 0;

 	/* altsetting should always be 1 for NCM devices - so we hard-coded
-	 * it here
+	 * it here. Some huawei devices will need the NDP part of the NCM package to
+	 * be at the end of the frame.
 	 */
-	ret = cdc_ncm_bind_common(usbnet_dev, intf, 1);
+	drvflags |= CDC_NCM_FLAG_NDP_TO_END;
+	ret = cdc_ncm_bind_common(usbnet_dev, intf, 1, drvflags);
 	if (ret)
 		goto err;

···
 	static const u32 rxprod_reg[2] = {
 		VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2
 	};
-	u32 num_rxd = 0;
+	u32 num_pkts = 0;
 	bool skip_page_frags = false;
 	struct Vmxnet3_RxCompDesc *rcd;
 	struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
···
 		struct Vmxnet3_RxDesc *rxd;
 		u32 idx, ring_idx;
 		struct vmxnet3_cmd_ring	*ring = NULL;
-		if (num_rxd >= quota) {
+		if (num_pkts >= quota) {
 			/* we may stop even before we see the EOP desc of
 			 * the current pkt
 			 */
 			break;
 		}
-		num_rxd++;
 		BUG_ON(rcd->rqID != rq->qid && rcd->rqID != rq->qid2);
 		idx = rcd->rxdIdx;
 		ring_idx = rcd->rqID < adapter->num_rx_queues ? 0 : 1;
···
 			napi_gro_receive(&rq->napi, skb);

 			ctx->skb = NULL;
+			num_pkts++;
 		}

rcd_done:
···
 			  &rq->comp_ring.base[rq->comp_ring.next2proc].rcd, &rxComp);
 	}

-	return num_rxd;
+	return num_pkts;
}

+1-1
drivers/net/wan/z85230.c
···
 *	@dev: The network device to attach
 *	@c: The Z8530 channel to configure in sync DMA mode.
 *
- *	Set up a Z85x30 device for synchronous DMA tranmission. One
+ *	Set up a Z85x30 device for synchronous DMA transmission. One
 *	ISA DMA channel must be available for this to work. The receive
 *	side is run in PIO mode, but then it has the bigger FIFO.
 */
···
 * Driver for the ST Microelectronics SPEAr pinmux
 *
 * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
 *
 * Inspired from:
 * - U300 Pinctl drivers
+1-1
drivers/pinctrl/spear/pinctrl-spear.h
···
 * Driver header file for the ST Microelectronics SPEAr pinmux
 *
 * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
 *
 * This file is licensed under the terms of the GNU General Public
 * License version 2. This program is licensed "as is" without any
+2-2
drivers/pinctrl/spear/pinctrl-spear1310.c
···
 * Driver for the ST Microelectronics SPEAr1310 pinmux
 *
 * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
 *
 * This file is licensed under the terms of the GNU General Public
 * License version 2. This program is licensed "as is" without any
···
}
module_exit(spear1310_pinctrl_exit);

-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
MODULE_DESCRIPTION("ST Microelectronics SPEAr1310 pinctrl driver");
MODULE_LICENSE("GPL v2");
MODULE_DEVICE_TABLE(of, spear1310_pinctrl_of_match);
+2-2
drivers/pinctrl/spear/pinctrl-spear1340.c
···
 * Driver for the ST Microelectronics SPEAr1340 pinmux
 *
 * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
 *
 * This file is licensed under the terms of the GNU General Public
 * License version 2. This program is licensed "as is" without any
···
}
module_exit(spear1340_pinctrl_exit);

-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
MODULE_DESCRIPTION("ST Microelectronics SPEAr1340 pinctrl driver");
MODULE_LICENSE("GPL v2");
MODULE_DEVICE_TABLE(of, spear1340_pinctrl_of_match);
+2 -2
drivers/pinctrl/spear/pinctrl-spear300.c
···
  * Driver for the ST Microelectronics SPEAr300 pinmux
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
···
 }
 module_exit(spear300_pinctrl_exit);

-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
 MODULE_DESCRIPTION("ST Microelectronics SPEAr300 pinctrl driver");
 MODULE_LICENSE("GPL v2");
 MODULE_DEVICE_TABLE(of, spear300_pinctrl_of_match);
+2 -2
drivers/pinctrl/spear/pinctrl-spear310.c
···
  * Driver for the ST Microelectronics SPEAr310 pinmux
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
···
 }
 module_exit(spear310_pinctrl_exit);

-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
 MODULE_DESCRIPTION("ST Microelectronics SPEAr310 pinctrl driver");
 MODULE_LICENSE("GPL v2");
 MODULE_DEVICE_TABLE(of, spear310_pinctrl_of_match);
+2 -2
drivers/pinctrl/spear/pinctrl-spear320.c
···
  * Driver for the ST Microelectronics SPEAr320 pinmux
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
···
 }
 module_exit(spear320_pinctrl_exit);

-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
 MODULE_DESCRIPTION("ST Microelectronics SPEAr320 pinctrl driver");
 MODULE_LICENSE("GPL v2");
 MODULE_DEVICE_TABLE(of, spear320_pinctrl_of_match);
+1 -1
drivers/pinctrl/spear/pinctrl-spear3xx.c
···
  * Driver for the ST Microelectronics SPEAr3xx pinmux
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+1 -1
drivers/pinctrl/spear/pinctrl-spear3xx.h
···
  * Header file for the ST Microelectronics SPEAr3xx pinmux
  *
  * Copyright (C) 2012 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+125 -46
drivers/platform/x86/dell-laptop.c
···
 static struct calling_interface_buffer *buffer;
 static DEFINE_MUTEX(buffer_mutex);

-static int hwswitch_state;
+static void clear_buffer(void)
+{
+	memset(buffer, 0, sizeof(struct calling_interface_buffer));
+}

 static void get_buffer(void)
 {
 	mutex_lock(&buffer_mutex);
-	memset(buffer, 0, sizeof(struct calling_interface_buffer));
+	clear_buffer();
 }

 static void release_buffer(void)
···
 	int disable = blocked ? 1 : 0;
 	unsigned long radio = (unsigned long)data;
 	int hwswitch_bit = (unsigned long)data - 1;
+	int hwswitch;
+	int status;
+	int ret;

 	get_buffer();
+
 	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];
+	status = buffer->output[1];
+
+	if (ret != 0)
+		goto out;
+
+	clear_buffer();
+
+	buffer->input[0] = 0x2;
+	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];
+	hwswitch = buffer->output[1];

 	/* If the hardware switch controls this radio, and the hardware
 	   switch is disabled, always disable the radio */
-	if ((hwswitch_state & BIT(hwswitch_bit)) &&
-	    !(buffer->output[1] & BIT(16)))
+	if (ret == 0 && (hwswitch & BIT(hwswitch_bit)) &&
+	    (status & BIT(0)) && !(status & BIT(16)))
 		disable = 1;
+
+	clear_buffer();

 	buffer->input[0] = (1 | (radio<<8) | (disable << 16));
 	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];

+ out:
 	release_buffer();
-	return 0;
+	return dell_smi_error(ret);
 }

 /* Must be called with the buffer held */
···
 	if (status & BIT(0)) {
 		/* Has hw-switch, sync sw_state to BIOS */
 		int block = rfkill_blocked(rfkill);
+		clear_buffer();
 		buffer->input[0] = (1 | (radio << 8) | (block << 16));
 		dell_send_request(buffer, 17, 11);
 	} else {
···
 static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
-					int status)
+					int status, int hwswitch)
 {
-	if (hwswitch_state & (BIT(radio - 1)))
+	if (hwswitch & (BIT(radio - 1)))
 		rfkill_set_hw_state(rfkill, !(status & BIT(16)));
 }

 static void dell_rfkill_query(struct rfkill *rfkill, void *data)
 {
+	int radio = ((unsigned long)data & 0xF);
+	int hwswitch;
 	int status;
+	int ret;

 	get_buffer();
+
 	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];
 	status = buffer->output[1];

-	dell_rfkill_update_hw_state(rfkill, (unsigned long)data, status);
+	if (ret != 0 || !(status & BIT(0))) {
+		release_buffer();
+		return;
+	}
+
+	clear_buffer();
+
+	buffer->input[0] = 0x2;
+	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];
+	hwswitch = buffer->output[1];

 	release_buffer();
+
+	if (ret != 0)
+		return;
+
+	dell_rfkill_update_hw_state(rfkill, radio, status, hwswitch);
 }

 static const struct rfkill_ops dell_rfkill_ops = {
···
 static int dell_debugfs_show(struct seq_file *s, void *data)
 {
+	int hwswitch_state;
+	int hwswitch_ret;
 	int status;
+	int ret;

 	get_buffer();
+
 	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];
 	status = buffer->output[1];
+
+	clear_buffer();
+
+	buffer->input[0] = 0x2;
+	dell_send_request(buffer, 17, 11);
+	hwswitch_ret = buffer->output[0];
+	hwswitch_state = buffer->output[1];
+
 	release_buffer();

+	seq_printf(s, "return:\t%d\n", ret);
 	seq_printf(s, "status:\t0x%X\n", status);
 	seq_printf(s, "Bit 0 : Hardware switch supported:   %lu\n",
 		   status & BIT(0));
···
 	seq_printf(s, "Bit 21: WiGig is blocked:            %lu\n",
 		  (status & BIT(21)) >> 21);

-	seq_printf(s, "\nhwswitch_state:\t0x%X\n", hwswitch_state);
+	seq_printf(s, "\nhwswitch_return:\t%d\n", hwswitch_ret);
+	seq_printf(s, "hwswitch_state:\t0x%X\n", hwswitch_state);
 	seq_printf(s, "Bit 0 : Wifi controlled by switch:      %lu\n",
 		   hwswitch_state & BIT(0));
 	seq_printf(s, "Bit 1 : Bluetooth controlled by switch: %lu\n",
···
 static void dell_update_rfkill(struct work_struct *ignored)
 {
+	int hwswitch = 0;
 	int status;
+	int ret;

 	get_buffer();
+
 	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];
 	status = buffer->output[1];

+	if (ret != 0)
+		goto out;
+
+	clear_buffer();
+
+	buffer->input[0] = 0x2;
+	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];
+
+	if (ret == 0 && (status & BIT(0)))
+		hwswitch = buffer->output[1];
+
 	if (wifi_rfkill) {
-		dell_rfkill_update_hw_state(wifi_rfkill, 1, status);
+		dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
 		dell_rfkill_update_sw_state(wifi_rfkill, 1, status);
 	}
 	if (bluetooth_rfkill) {
-		dell_rfkill_update_hw_state(bluetooth_rfkill, 2, status);
+		dell_rfkill_update_hw_state(bluetooth_rfkill, 2, status,
+					    hwswitch);
 		dell_rfkill_update_sw_state(bluetooth_rfkill, 2, status);
 	}
 	if (wwan_rfkill) {
-		dell_rfkill_update_hw_state(wwan_rfkill, 3, status);
+		dell_rfkill_update_hw_state(wwan_rfkill, 3, status, hwswitch);
 		dell_rfkill_update_sw_state(wwan_rfkill, 3, status);
 	}

+ out:
 	release_buffer();
 }
 static DECLARE_DELAYED_WORK(dell_rfkill_work, dell_update_rfkill);
···
 	get_buffer();
 	dell_send_request(buffer, 17, 11);
+	ret = buffer->output[0];
 	status = buffer->output[1];
-	buffer->input[0] = 0x2;
-	dell_send_request(buffer, 17, 11);
-	hwswitch_state = buffer->output[1];
 	release_buffer();

-	if (!(status & BIT(0))) {
-		if (force_rfkill) {
-			/* No hwsitch, clear all hw-controlled bits */
-			hwswitch_state &= ~7;
-		} else {
-			/* rfkill is only tested on laptops with a hwswitch */
-			return 0;
-		}
-	}
+	/* dell wireless info smbios call is not supported */
+	if (ret != 0)
+		return 0;
+
+	/* rfkill is only tested on laptops with a hwswitch */
+	if (!(status & BIT(0)) && !force_rfkill)
+		return 0;

 	if ((status & (1<<2|1<<8)) == (1<<2|1<<8)) {
 		wifi_rfkill = rfkill_alloc("dell-wifi", &platform_device->dev,
···
 static int dell_send_intensity(struct backlight_device *bd)
 {
-	int ret = 0;
+	int token;
+	int ret;
+
+	token = find_token_location(BRIGHTNESS_TOKEN);
+	if (token == -1)
+		return -ENODEV;

 	get_buffer();
-	buffer->input[0] = find_token_location(BRIGHTNESS_TOKEN);
+	buffer->input[0] = token;
 	buffer->input[1] = bd->props.brightness;
-
-	if (buffer->input[0] == -1) {
-		ret = -ENODEV;
-		goto out;
-	}

 	if (power_supply_is_system_supplied() > 0)
 		dell_send_request(buffer, 1, 2);
 	else
 		dell_send_request(buffer, 1, 1);

- out:
+	ret = dell_smi_error(buffer->output[0]);
+
 	release_buffer();
 	return ret;
 }

 static int dell_get_intensity(struct backlight_device *bd)
 {
-	int ret = 0;
+	int token;
+	int ret;
+
+	token = find_token_location(BRIGHTNESS_TOKEN);
+	if (token == -1)
+		return -ENODEV;

 	get_buffer();
-	buffer->input[0] = find_token_location(BRIGHTNESS_TOKEN);
-
-	if (buffer->input[0] == -1) {
-		ret = -ENODEV;
-		goto out;
-	}
+	buffer->input[0] = token;

 	if (power_supply_is_system_supplied() > 0)
 		dell_send_request(buffer, 0, 2);
 	else
 		dell_send_request(buffer, 0, 1);

-	ret = buffer->output[1];
+	if (buffer->output[0])
+		ret = dell_smi_error(buffer->output[0]);
+	else
+		ret = buffer->output[1];

- out:
 	release_buffer();
 	return ret;
 }
···
 static int __init dell_init(void)
 {
 	int max_intensity = 0;
+	int token;
 	int ret;

 	if (!dmi_check_system(dell_device_table))
···
 	if (acpi_video_get_backlight_type() != acpi_backlight_vendor)
 		return 0;

-	get_buffer();
-	buffer->input[0] = find_token_location(BRIGHTNESS_TOKEN);
-	if (buffer->input[0] != -1) {
+	token = find_token_location(BRIGHTNESS_TOKEN);
+	if (token != -1) {
+		get_buffer();
+		buffer->input[0] = token;
 		dell_send_request(buffer, 0, 2);
-		max_intensity = buffer->output[3];
+		if (buffer->output[0] == 0)
+			max_intensity = buffer->output[3];
+		release_buffer();
 	}
-	release_buffer();

 	if (max_intensity) {
 		struct backlight_properties props;
+48 -35
drivers/platform/x86/intel_pmc_ipc.c
···
 	struct completion cmd_complete;

 	/* The following PMC BARs share the same ACPI device with the IPC */
-	void *acpi_io_base;
+	resource_size_t acpi_io_base;
 	int acpi_io_size;
 	struct platform_device *tco_dev;

 	/* gcr */
-	void *gcr_base;
+	resource_size_t gcr_base;
 	int gcr_size;

 	/* punit */
-	void *punit_base;
+	resource_size_t punit_base;
 	int punit_size;
-	void *punit_base2;
+	resource_size_t punit_base2;
 	int punit_size2;
 	struct platform_device *punit_dev;
 } ipcdev;
···
 	return ret;
 }

-/*
- * intel_pmc_ipc_simple_command
- * @cmd: command
- * @sub: sub type
+/**
+ * intel_pmc_ipc_simple_command() - Simple IPC command
+ * @cmd:	IPC command code.
+ * @sub:	IPC command sub type.
+ *
+ * Send a simple IPC command to PMC when don't need to specify
+ * input/output data and source/dest pointers.
+ *
+ * Return:	an IPC error code or 0 on success.
  */
 int intel_pmc_ipc_simple_command(int cmd, int sub)
 {
···
 }
 EXPORT_SYMBOL_GPL(intel_pmc_ipc_simple_command);

-/*
- * intel_pmc_ipc_raw_cmd
- * @cmd: command
- * @sub: sub type
- * @in: input data
- * @inlen: input length in bytes
- * @out: output data
- * @outlen: output length in dwords
- * @sptr: data writing to SPTR register
- * @dptr: data writing to DPTR register
+/**
+ * intel_pmc_ipc_raw_cmd() - IPC command with data and pointers
+ * @cmd:	IPC command code.
+ * @sub:	IPC command sub type.
+ * @in:		input data of this IPC command.
+ * @inlen:	input data length in bytes.
+ * @out:	output data of this IPC command.
+ * @outlen:	output data length in dwords.
+ * @sptr:	data writing to SPTR register.
+ * @dptr:	data writing to DPTR register.
+ *
+ * Send an IPC command to PMC with input/output data and source/dest pointers.
+ *
+ * Return:	an IPC error code or 0 on success.
  */
 int intel_pmc_ipc_raw_cmd(u32 cmd, u32 sub, u8 *in, u32 inlen, u32 *out,
 			  u32 outlen, u32 dptr, u32 sptr)
···
 }
 EXPORT_SYMBOL_GPL(intel_pmc_ipc_raw_cmd);

-/*
- * intel_pmc_ipc_command
- * @cmd: command
- * @sub: sub type
- * @in: input data
- * @inlen: input length in bytes
- * @out: output data
- * @outlen: output length in dwords
+/**
+ * intel_pmc_ipc_command() -  IPC command with input/output data
+ * @cmd:	IPC command code.
+ * @sub:	IPC command sub type.
+ * @in:		input data of this IPC command.
+ * @inlen:	input data length in bytes.
+ * @out:	output data of this IPC command.
+ * @outlen:	output data length in dwords.
+ *
+ * Send an IPC command to PMC with input/output data.
+ *
+ * Return:	an IPC error code or 0 on success.
  */
 int intel_pmc_ipc_command(u32 cmd, u32 sub, u8 *in, u32 inlen,
 			  u32 *out, u32 outlen)
···
 	pdev->dev.parent = ipcdev.dev;

 	res = punit_res;
-	res->start = (resource_size_t)ipcdev.punit_base;
+	res->start = ipcdev.punit_base;
 	res->end = res->start + ipcdev.punit_size - 1;

 	res = punit_res + PUNIT_RESOURCE_INTER;
-	res->start = (resource_size_t)ipcdev.punit_base2;
+	res->start = ipcdev.punit_base2;
 	res->end = res->start + ipcdev.punit_size2 - 1;

 	ret = platform_device_add_resources(pdev, punit_res,
···
 	pdev->dev.parent = ipcdev.dev;

 	res = tco_res + TCO_RESOURCE_ACPI_IO;
-	res->start = (resource_size_t)ipcdev.acpi_io_base + TCO_BASE_OFFSET;
+	res->start = ipcdev.acpi_io_base + TCO_BASE_OFFSET;
 	res->end = res->start + TCO_REGS_SIZE - 1;

 	res = tco_res + TCO_RESOURCE_SMI_EN_IO;
-	res->start = (resource_size_t)ipcdev.acpi_io_base + SMI_EN_OFFSET;
+	res->start = ipcdev.acpi_io_base + SMI_EN_OFFSET;
 	res->end = res->start + SMI_EN_SIZE - 1;

 	res = tco_res + TCO_RESOURCE_GCR_MEM;
-	res->start = (resource_size_t)ipcdev.gcr_base;
+	res->start = ipcdev.gcr_base;
 	res->end = res->start + ipcdev.gcr_size - 1;

 	ret = platform_device_add_resources(pdev, tco_res, ARRAY_SIZE(tco_res));
···
 		return -ENXIO;
 	}
 	size = resource_size(res);
-	ipcdev.acpi_io_base = (void *)res->start;
+	ipcdev.acpi_io_base = res->start;
 	ipcdev.acpi_io_size = size;
 	dev_info(&pdev->dev, "io res: %llx %x\n",
 		 (long long)res->start, (int)resource_size(res));
···
 		return -ENXIO;
 	}
 	size = resource_size(res);
-	ipcdev.punit_base = (void *)res->start;
+	ipcdev.punit_base = res->start;
 	ipcdev.punit_size = size;
 	dev_info(&pdev->dev, "punit data res: %llx %x\n",
 		 (long long)res->start, (int)resource_size(res));
···
 		return -ENXIO;
 	}
 	size = resource_size(res);
-	ipcdev.punit_base2 = (void *)res->start;
+	ipcdev.punit_base2 = res->start;
 	ipcdev.punit_size2 = size;
 	dev_info(&pdev->dev, "punit interface res: %llx %x\n",
 		 (long long)res->start, (int)resource_size(res));
···
 	}
 	ipcdev.ipc_base = addr;

-	ipcdev.gcr_base = (void *)(res->start + size);
+	ipcdev.gcr_base = res->start + size;
 	ipcdev.gcr_size = PLAT_RESOURCE_GCR_SIZE;
 	dev_info(&pdev->dev, "ipc res: %llx %x\n",
 		 (long long)res->start, (int)resource_size(res));
···
  * Bjorn Helgaas <bjorn.helgaas@hp.com>
  */

-#include <linux/acpi.h>
 #include <linux/pnp.h>
 #include <linux/device.h>
 #include <linux/init.h>
···
 	{"", 0}
 };

-#ifdef CONFIG_ACPI
-static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc)
-{
-	u8 space_id = io ? ACPI_ADR_SPACE_SYSTEM_IO : ACPI_ADR_SPACE_SYSTEM_MEMORY;
-	return !acpi_reserve_region(start, length, space_id, IORESOURCE_BUSY, desc);
-}
-#else
-static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc)
-{
-	struct resource *res;
-
-	res = io ? request_region(start, length, desc) :
-		request_mem_region(start, length, desc);
-	if (res) {
-		res->flags &= ~IORESOURCE_BUSY;
-		return true;
-	}
-	return false;
-}
-#endif
-
 static void reserve_range(struct pnp_dev *dev, struct resource *r, int port)
 {
 	char *regionid;
 	const char *pnpid = dev_name(&dev->dev);
 	resource_size_t start = r->start, end = r->end;
-	bool reserved;
+	struct resource *res;

 	regionid = kmalloc(16, GFP_KERNEL);
 	if (!regionid)
 		return;

 	snprintf(regionid, 16, "pnp %s", pnpid);
-	reserved = __reserve_range(start, end - start + 1, !!port, regionid);
-	if (!reserved)
+	if (port)
+		res = request_region(start, end - start + 1, regionid);
+	else
+		res = request_mem_region(start, end - start + 1, regionid);
+	if (res)
+		res->flags &= ~IORESOURCE_BUSY;
+	else
 		kfree(regionid);

 	/*
···
 	 * have double reservations.
 	 */
 	dev_info(&dev->dev, "%pR %s reserved\n", r,
-		 reserved ? "has been" : "could not be");
+		 res ? "has been" : "could not be");
 }

 static void reserve_resources_of_dev(struct pnp_dev *dev)
+1 -1
drivers/rtc/rtc-armada38x.c
···
 {
 	struct armada38x_rtc *rtc = dev_get_drvdata(dev);
 	int ret = 0;
-	unsigned long time, flags;
+	unsigned long time;

 	ret = rtc_tm_to_time(tm, &time);
···
 }

 /*
+ * return 1 when device is not eligible for IO
+ */
+static int __dasd_device_is_unusable(struct dasd_device *device,
+				     struct dasd_ccw_req *cqr)
+{
+	int mask = ~(DASD_STOPPED_DC_WAIT | DASD_UNRESUMED_PM);
+
+	if (test_bit(DASD_FLAG_OFFLINE, &device->flags)) {
+		/* dasd is being set offline. */
+		return 1;
+	}
+	if (device->stopped) {
+		if (device->stopped & mask) {
+			/* stopped and CQR will not change that. */
+			return 1;
+		}
+		if (!test_bit(DASD_CQR_VERIFY_PATH, &cqr->flags)) {
+			/* CQR is not able to change device to
+			 * operational. */
+			return 1;
+		}
+		/* CQR required to get device operational. */
+	}
+	return 0;
+}
+
+/*
  * Take a look at the first request on the ccw queue and check
  * if it needs to be started.
  */
···
 	cqr = list_entry(device->ccw_queue.next, struct dasd_ccw_req, devlist);
 	if (cqr->status != DASD_CQR_QUEUED)
 		return;
-	/* when device is stopped, return request to previous layer
-	 * exception: only the disconnect or unresumed bits are set and the
-	 * cqr is a path verification request
-	 */
-	if (device->stopped &&
-	    !(!(device->stopped & ~(DASD_STOPPED_DC_WAIT | DASD_UNRESUMED_PM))
-	      && test_bit(DASD_CQR_VERIFY_PATH, &cqr->flags))) {
+	/* if device is not usable return request to upper layer */
+	if (__dasd_device_is_unusable(device, cqr)) {
 		cqr->intrc = -EAGAIN;
 		cqr->status = DASD_CQR_CLEARED;
 		dasd_schedule_device_bh(device);
···
  *
  * Return: Pointer to the newly allocated QH, or NULL on error
  */
-static struct dwc2_qh *dwc2_hcd_qh_create(struct dwc2_hsotg *hsotg,
+struct dwc2_qh *dwc2_hcd_qh_create(struct dwc2_hsotg *hsotg,
 					  struct dwc2_hcd_urb *urb,
 					  gfp_t mem_flags)
 {
···
  *
  * @hsotg:        The DWC HCD structure
  * @qtd:          The QTD to add
- * @qh:           Out parameter to return queue head
- * @atomic_alloc: Flag to do atomic alloc if needed
+ * @qh:           Queue head to add qtd to
  *
  * Return: 0 if successful, negative error code otherwise
  *
- * Finds the correct QH to place the QTD into. If it does not find a QH, it
- * will create a new QH. If the QH to which the QTD is added is not currently
- * scheduled, it is placed into the proper schedule based on its EP type.
+ * If the QH to which the QTD is added is not currently scheduled, it is placed
+ * into the proper schedule based on its EP type.
  */
 int dwc2_hcd_qtd_add(struct dwc2_hsotg *hsotg, struct dwc2_qtd *qtd,
-		     struct dwc2_qh **qh, gfp_t mem_flags)
+		     struct dwc2_qh *qh)
 {
-	struct dwc2_hcd_urb *urb = qtd->urb;
-	int allocated = 0;
 	int retval;

-	/*
-	 * Get the QH which holds the QTD-list to insert to. Create QH if it
-	 * doesn't exist.
-	 */
-	if (*qh == NULL) {
-		*qh = dwc2_hcd_qh_create(hsotg, urb, mem_flags);
-		if (*qh == NULL)
-			return -ENOMEM;
-		allocated = 1;
+	if (unlikely(!qh)) {
+		dev_err(hsotg->dev, "%s: Invalid QH\n", __func__);
+		retval = -EINVAL;
+		goto fail;
 	}

-	retval = dwc2_hcd_qh_add(hsotg, *qh);
+	retval = dwc2_hcd_qh_add(hsotg, qh);
 	if (retval)
 		goto fail;

-	qtd->qh = *qh;
-	list_add_tail(&qtd->qtd_list_entry, &(*qh)->qtd_list);
+	qtd->qh = qh;
+	list_add_tail(&qtd->qtd_list_entry, &qh->qtd_list);

 	return 0;
-
 fail:
-	if (allocated) {
-		struct dwc2_qtd *qtd2, *qtd2_tmp;
-		struct dwc2_qh *qh_tmp = *qh;
-
-		*qh = NULL;
-		dwc2_hcd_qh_unlink(hsotg, qh_tmp);
-
-		/* Free each QTD in the QH's QTD list */
-		list_for_each_entry_safe(qtd2, qtd2_tmp, &qh_tmp->qtd_list,
-					 qtd_list_entry)
-			dwc2_hcd_qtd_unlink_and_free(hsotg, qtd2, qh_tmp);
-
-		dwc2_hcd_qh_free(hsotg, qh_tmp);
-	}
-
 	return retval;
 }
+4 -2
drivers/usb/dwc3/core.c
···
 	/* Select the HS PHY interface */
 	switch (DWC3_GHWPARAMS3_HSPHY_IFC(dwc->hwparams.hwparams3)) {
 	case DWC3_GHWPARAMS3_HSPHY_IFC_UTMI_ULPI:
-		if (!strncmp(dwc->hsphy_interface, "utmi", 4)) {
+		if (dwc->hsphy_interface &&
+		    !strncmp(dwc->hsphy_interface, "utmi", 4)) {
 			reg &= ~DWC3_GUSB2PHYCFG_ULPI_UTMI;
 			break;
-		} else if (!strncmp(dwc->hsphy_interface, "ulpi", 4)) {
+		} else if (dwc->hsphy_interface &&
+			   !strncmp(dwc->hsphy_interface, "ulpi", 4)) {
 			reg |= DWC3_GUSB2PHYCFG_ULPI_UTMI;
 			dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
 		} else {
+7 -4
drivers/usb/gadget/composite.c
···
 	 * take such requests too, if that's ever needed:  to work
 	 * in config 0, etc.
 	 */
-	list_for_each_entry(f, &cdev->config->functions, list)
-		if (f->req_match && f->req_match(f, ctrl))
-			goto try_fun_setup;
-	f = NULL;
+	if (cdev->config) {
+		list_for_each_entry(f, &cdev->config->functions, list)
+			if (f->req_match && f->req_match(f, ctrl))
+				goto try_fun_setup;
+		f = NULL;
+	}
+
 	switch (ctrl->bRequestType & USB_RECIP_MASK) {
 	case USB_RECIP_INTERFACE:
 		if (!cdev->config || intf >= MAX_CONFIG_INTERFACES)
+1 -1
drivers/usb/gadget/configfs.c
···
 	if (IS_ERR(fi))
 		return ERR_CAST(fi);

-	ret = config_item_set_name(&fi->group.cg_item, name);
+	ret = config_item_set_name(&fi->group.cg_item, "%s", name);
 	if (ret) {
 		usb_put_function_instance(fi);
 		return ERR_PTR(ret);
+4 -2
drivers/usb/gadget/function/f_fs.c
···
 	kiocb->private = p;

-	kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
+	if (p->aio)
+		kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);

 	res = ffs_epfile_io(kiocb->ki_filp, p);
 	if (res == -EIOCBQUEUED)
···
 	kiocb->private = p;

-	kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
+	if (p->aio)
+		kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);

 	res = ffs_epfile_io(kiocb->ki_filp, p);
 	if (res == -EIOCBQUEUED)
+13 -3
drivers/usb/gadget/function/f_mass_storage.c
···
 		return -EINVAL;
 	}

-	curlun = kcalloc(nluns, sizeof(*curlun), GFP_KERNEL);
+	curlun = kcalloc(FSG_MAX_LUNS, sizeof(*curlun), GFP_KERNEL);
 	if (unlikely(!curlun))
 		return -ENOMEM;
···
 	common->luns = curlun;
 	common->nluns = nluns;
-
-	pr_info("Number of LUNs=%d\n", common->nluns);

 	return 0;
 }
···
 	struct fsg_opts *opts = fsg_opts_from_func_inst(fi);
 	struct fsg_common *common = opts->common;
 	struct fsg_dev *fsg;
+	unsigned nluns, i;

 	fsg = kzalloc(sizeof(*fsg), GFP_KERNEL);
 	if (unlikely(!fsg))
 		return ERR_PTR(-ENOMEM);

 	mutex_lock(&opts->lock);
+	if (!opts->refcnt) {
+		for (nluns = i = 0; i < FSG_MAX_LUNS; ++i)
+			if (common->luns[i])
+				nluns = i + 1;
+		if (!nluns)
+			pr_warn("No LUNS defined, continuing anyway\n");
+		else
+			common->nluns = nluns;
+		pr_info("Number of LUNs=%u\n", common->nluns);
+	}
 	opts->refcnt++;
 	mutex_unlock(&opts->lock);
+
 	fsg->function.name = FSG_DRIVER_DESC;
 	fsg->function.bind = fsg_bind;
 	fsg->function.unbind = fsg_unbind;
···
 {
 	unsigned int vbus_value;

+	if (!mxs_phy->regmap_anatop)
+		return false;
+
 	if (mxs_phy->port_id == 0)
 		regmap_read(mxs_phy->regmap_anatop,
 			    ANADIG_USB1_VBUS_DET_STAT,
+1
drivers/usb/serial/cp210x.c
···
 	{ USB_DEVICE(0x1FB9, 0x0602) }, /* Lake Shore Model 648 Magnet Power Supply */
 	{ USB_DEVICE(0x1FB9, 0x0700) }, /* Lake Shore Model 737 VSM Controller */
 	{ USB_DEVICE(0x1FB9, 0x0701) }, /* Lake Shore Model 776 Hall Matrix */
+	{ USB_DEVICE(0x2626, 0xEA60) }, /* Aruba Networks 7xxx USB Serial Console */
 	{ USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
 	{ USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
 	{ USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */
···
  * Watchdog driver for ARM SP805 watchdog module
  *
  * Copyright (C) 2010 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2 or later. This program is licensed "as is" without any
···
 module_amba_driver(sp805_wdt_driver);

-MODULE_AUTHOR("Viresh Kumar <viresh.linux@gmail.com>");
+MODULE_AUTHOR("Viresh Kumar <vireshk@kernel.org>");
 MODULE_DESCRIPTION("ARM SP805 Watchdog Driver");
 MODULE_LICENSE("GPL");
···
 #define BTRFS_INODE_IN_DELALLOC_LIST		9
 #define BTRFS_INODE_READDIO_NEED_LOCK		10
 #define BTRFS_INODE_HAS_PROPS			11
+/* DIO is ready to submit */
+#define BTRFS_INODE_DIO_READY			12
 /*
  * The following 3 bits are meant only for the btree inode.
  * When any of them is set, it means an error happened while writing an
+1
fs/btrfs/ctree.h
···
 	spinlock_t unused_bgs_lock;
 	struct list_head unused_bgs;
 	struct mutex unused_bg_unpin_mutex;
+	struct mutex delete_unused_bgs_mutex;

 	/* For btrfs to record security options */
 	struct security_mnt_opts security_opts;
+40 -1
fs/btrfs/disk-io.c
···
 {
 	struct btrfs_root *root = arg;
 	int again;
+	struct btrfs_trans_handle *trans;

 	do {
 		again = 0;
···
 		}

 		btrfs_run_delayed_iputs(root);
-		btrfs_delete_unused_bgs(root->fs_info);
 		again = btrfs_clean_one_deleted_snapshot(root);
 		mutex_unlock(&root->fs_info->cleaner_mutex);
···
 		 * needn't do anything special here.
 		 */
 		btrfs_run_defrag_inodes(root->fs_info);
+
+		/*
+		 * Acquires fs_info->delete_unused_bgs_mutex to avoid racing
+		 * with relocation (btrfs_relocate_chunk) and relocation
+		 * acquires fs_info->cleaner_mutex (btrfs_relocate_block_group)
+		 * after acquiring fs_info->delete_unused_bgs_mutex. So we
+		 * can't hold, nor need to, fs_info->cleaner_mutex when deleting
+		 * unused block groups.
+		 */
+		btrfs_delete_unused_bgs(root->fs_info);
 sleep:
 		if (!try_to_freeze() && !again) {
 			set_current_state(TASK_INTERRUPTIBLE);
···
 			__set_current_state(TASK_RUNNING);
 		}
 	} while (!kthread_should_stop());
+
+	/*
+	 * Transaction kthread is stopped before us and wakes us up.
+	 * However we might have started a new transaction and COWed some
+	 * tree blocks when deleting unused block groups for example. So
+	 * make sure we commit the transaction we started to have a clean
+	 * shutdown when evicting the btree inode - if it has dirty pages
+	 * when we do the final iput() on it, eviction will trigger a
+	 * writeback for it which will fail with null pointer dereferences
+	 * since work queues and other resources were already released and
+	 * destroyed by the time the iput/eviction/writeback is made.
+	 */
+	trans = btrfs_attach_transaction(root);
+	if (IS_ERR(trans)) {
+		if (PTR_ERR(trans) != -ENOENT)
+			btrfs_err(root->fs_info,
+				  "cleaner transaction attach returned %ld",
+				  PTR_ERR(trans));
+	} else {
+		int ret;
+
+		ret = btrfs_commit_transaction(trans, root);
+		if (ret)
+			btrfs_err(root->fs_info,
+				  "cleaner open transaction commit returned %d",
+				  ret);
+	}
+
 	return 0;
 }
···
 	spin_lock_init(&fs_info->unused_bgs_lock);
 	rwlock_init(&fs_info->tree_mod_log_lock);
 	mutex_init(&fs_info->unused_bg_unpin_mutex);
+	mutex_init(&fs_info->delete_unused_bgs_mutex);
 	mutex_init(&fs_info->reloc_mutex);
 	mutex_init(&fs_info->delalloc_root_mutex);
 	seqlock_init(&fs_info->profiles_lock);
+16
fs/btrfs/extent-tree.c
···
 static inline struct btrfs_delayed_ref_node *
 select_delayed_ref(struct btrfs_delayed_ref_head *head)
 {
+	struct btrfs_delayed_ref_node *ref;
+
 	if (list_empty(&head->ref_list))
 		return NULL;
+
+	/*
+	 * Select a delayed ref of type BTRFS_ADD_DELAYED_REF first.
+	 * This is to prevent a ref count from going down to zero, which deletes
+	 * the extent item from the extent tree, when there still are references
+	 * to add, which would fail because they would not find the extent item.
+	 */
+	list_for_each_entry(ref, &head->ref_list, list) {
+		if (ref->action == BTRFS_ADD_DELAYED_REF)
+			return ref;
+	}
 
 	return list_entry(head->ref_list.next, struct btrfs_delayed_ref_node,
 			  list);
···
 		}
 		spin_unlock(&fs_info->unused_bgs_lock);
 
+		mutex_lock(&root->fs_info->delete_unused_bgs_mutex);
+
 		/* Don't want to race with allocators so take the groups_sem */
 		down_write(&space_info->groups_sem);
 		spin_lock(&block_group->lock);
···
 end_trans:
 		btrfs_end_transaction(trans, root);
 next:
+		mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
 		btrfs_put_block_group(block_group);
 		spin_lock(&fs_info->unused_bgs_lock);
 	}
fs/btrfs/inode.c
···
 	u64 extent_num_bytes = 0;
 	u64 extent_offset = 0;
 	u64 item_end = 0;
-	u64 last_size = (u64)-1;
+	u64 last_size = new_size;
 	u32 found_type = (u8)-1;
 	int found_extent;
 	int del_item;
···
 		btrfs_abort_transaction(trans, root, ret);
 	}
 error:
-	if (last_size != (u64)-1 &&
-	    root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
+	if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
 		btrfs_ordered_update_i_size(inode, last_size, NULL);
 
 	btrfs_free_path(path);
···
 	/*
 	 * Keep looping until we have no more ranges in the io tree.
 	 * We can have ongoing bios started by readpages (called from readahead)
-	 * that didn't get their end io callbacks called yet or they are still
-	 * in progress ((extent_io.c:end_bio_extent_readpage()). This means some
+	 * that have their endio callback (extent_io.c:end_bio_extent_readpage)
+	 * still in progress (unlocked the pages in the bio but had not yet
+	 * unlocked the ranges in the io tree). Therefore this means some
 	 * ranges can still be locked and eviction started because before
 	 * submitting those bios, which are executed by a separate task (work
 	 * queue kthread), inode references (inode->i_count) were not taken
···
 
 	current->journal_info = outstanding_extents;
 	btrfs_free_reserved_data_space(inode, len);
+	set_bit(BTRFS_INODE_DIO_READY, &BTRFS_I(inode)->runtime_flags);
 }
 
 /*
···
 	struct bio *dio_bio;
 	int ret;
 
-	if (err)
-		goto out_done;
 again:
 	ret = btrfs_dec_test_first_ordered_pending(inode, &ordered,
 						   &ordered_offset,
···
 		ordered = NULL;
 		goto again;
 	}
-out_done:
 	dio_bio = dip->dio_bio;
 
 	kfree(dip);
···
 static void btrfs_submit_direct(int rw, struct bio *dio_bio,
 				struct inode *inode, loff_t file_offset)
 {
-	struct btrfs_root *root = BTRFS_I(inode)->root;
-	struct btrfs_dio_private *dip;
-	struct bio *io_bio;
+	struct btrfs_dio_private *dip = NULL;
+	struct bio *io_bio = NULL;
 	struct btrfs_io_bio *btrfs_bio;
 	int skip_sum;
 	int write = rw & REQ_WRITE;
···
 	dip = kzalloc(sizeof(*dip), GFP_NOFS);
 	if (!dip) {
 		ret = -ENOMEM;
-		goto free_io_bio;
+		goto free_ordered;
 	}
 
 	dip->private = dio_bio->bi_private;
···
 
 	if (btrfs_bio->end_io)
 		btrfs_bio->end_io(btrfs_bio, ret);
-free_io_bio:
-	bio_put(io_bio);
 
 free_ordered:
 	/*
-	 * If this is a write, we need to clean up the reserved space and kill
-	 * the ordered extent.
+	 * If we arrived here it means either we failed to submit the dip
+	 * or we either failed to clone the dio_bio or failed to allocate the
+	 * dip. If we cloned the dio_bio and allocated the dip, we can just
+	 * call bio_endio against our io_bio so that we get proper resource
+	 * cleanup if we fail to submit the dip, otherwise, we must do the
+	 * same as btrfs_endio_direct_[write|read] because we can't call these
+	 * callbacks - they require an allocated dip and a clone of dio_bio.
 	 */
-	if (write) {
-		struct btrfs_ordered_extent *ordered;
-		ordered = btrfs_lookup_ordered_extent(inode, file_offset);
-		if (!test_bit(BTRFS_ORDERED_PREALLOC, &ordered->flags) &&
-		    !test_bit(BTRFS_ORDERED_NOCOW, &ordered->flags))
-			btrfs_free_reserved_extent(root, ordered->start,
-						   ordered->disk_len, 1);
-		btrfs_put_ordered_extent(ordered);
-		btrfs_put_ordered_extent(ordered);
+	if (io_bio && dip) {
+		bio_endio(io_bio, ret);
+		/*
+		 * The end io callbacks free our dip, do the final put on io_bio
+		 * and all the cleanup and final put for dio_bio (through
+		 * dio_end_io()).
+		 */
+		dip = NULL;
+		io_bio = NULL;
+	} else {
+		if (write) {
+			struct btrfs_ordered_extent *ordered;
+
+			ordered = btrfs_lookup_ordered_extent(inode,
+							      file_offset);
+			set_bit(BTRFS_ORDERED_IOERR, &ordered->flags);
+			/*
+			 * Decrements our ref on the ordered extent and removes
+			 * the ordered extent from the inode's ordered tree,
+			 * doing all the proper resource cleanup such as for the
+			 * reserved space and waking up any waiters for this
+			 * ordered extent (through btrfs_remove_ordered_extent).
+			 */
+			btrfs_finish_ordered_io(ordered);
+		} else {
+			unlock_extent(&BTRFS_I(inode)->io_tree, file_offset,
+				      file_offset + dio_bio->bi_iter.bi_size - 1);
+		}
+		clear_bit(BIO_UPTODATE, &dio_bio->bi_flags);
+		/*
+		 * Releases and cleans up our dio_bio, no need to bio_put()
+		 * nor bio_endio()/bio_io_error() against dio_bio.
+		 */
+		dio_end_io(dio_bio, ret);
 	}
-	bio_endio(dio_bio, ret);
+	if (io_bio)
+		bio_put(io_bio);
+	kfree(dip);
 }
 
 static ssize_t check_direct_IO(struct btrfs_root *root, struct kiocb *iocb,
···
 				   btrfs_submit_direct, flags);
 	if (iov_iter_rw(iter) == WRITE) {
 		current->journal_info = NULL;
-		if (ret < 0 && ret != -EIOCBQUEUED)
-			btrfs_delalloc_release_space(inode, count);
-		else if (ret >= 0 && (size_t)ret < count)
+		if (ret < 0 && ret != -EIOCBQUEUED) {
+			/*
+			 * If the error comes from submitting stage,
+			 * btrfs_get_blocks_direct() has free'd data space,
+			 * and metadata space will be handled by
+			 * finish_ordered_fn, don't do that again to make
+			 * sure bytes_may_use is correct.
+			 */
+			if (!test_and_clear_bit(BTRFS_INODE_DIO_READY,
+						&BTRFS_I(inode)->runtime_flags))
+				btrfs_delalloc_release_space(inode, count);
+		} else if (ret >= 0 && (size_t)ret < count)
 			btrfs_delalloc_release_space(inode,
 						     count - (size_t)ret);
 	}
+206-55
fs/btrfs/ioctl.c
···
 
 static int btrfs_clone(struct inode *src, struct inode *inode,
-		       u64 off, u64 olen, u64 olen_aligned, u64 destoff);
+		       u64 off, u64 olen, u64 olen_aligned, u64 destoff,
+		       int no_time_update);
 
 /* Mask out flags that are inappropriate for the given type of inode. */
 static inline __u32 btrfs_mask_flags(umode_t mode, __u32 flags)
···
 	return ret;
 }
 
-static struct page *extent_same_get_page(struct inode *inode, u64 off)
+static struct page *extent_same_get_page(struct inode *inode, pgoff_t index)
 {
 	struct page *page;
-	pgoff_t index;
 	struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
-
-	index = off >> PAGE_CACHE_SHIFT;
 
 	page = grab_cache_page(inode->i_mapping, index);
 	if (!page)
···
 	unlock_page(page);
 
 	return page;
+}
+
+static int gather_extent_pages(struct inode *inode, struct page **pages,
+			       int num_pages, u64 off)
+{
+	int i;
+	pgoff_t index = off >> PAGE_CACHE_SHIFT;
+
+	for (i = 0; i < num_pages; i++) {
+		pages[i] = extent_same_get_page(inode, index + i);
+		if (!pages[i])
+			return -ENOMEM;
+	}
+	return 0;
 }
 
 static inline void lock_extent_range(struct inode *inode, u64 off, u64 len)
···
 	}
 }
 
-static void btrfs_double_unlock(struct inode *inode1, u64 loff1,
-				struct inode *inode2, u64 loff2, u64 len)
+static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2)
 {
-	unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1);
-	unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1);
-
 	mutex_unlock(&inode1->i_mutex);
 	mutex_unlock(&inode2->i_mutex);
 }
 
-static void btrfs_double_lock(struct inode *inode1, u64 loff1,
-			      struct inode *inode2, u64 loff2, u64 len)
+static void btrfs_double_inode_lock(struct inode *inode1, struct inode *inode2)
+{
+	if (inode1 < inode2)
+		swap(inode1, inode2);
+
+	mutex_lock_nested(&inode1->i_mutex, I_MUTEX_PARENT);
+	if (inode1 != inode2)
+		mutex_lock_nested(&inode2->i_mutex, I_MUTEX_CHILD);
+}
+
+static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1,
+				       struct inode *inode2, u64 loff2, u64 len)
+{
+	unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1);
+	unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1);
+}
+
+static void btrfs_double_extent_lock(struct inode *inode1, u64 loff1,
+				     struct inode *inode2, u64 loff2, u64 len)
 {
 	if (inode1 < inode2) {
 		swap(inode1, inode2);
 		swap(loff1, loff2);
 	}
-
-	mutex_lock_nested(&inode1->i_mutex, I_MUTEX_PARENT);
 	lock_extent_range(inode1, loff1, len);
-	if (inode1 != inode2) {
-		mutex_lock_nested(&inode2->i_mutex, I_MUTEX_CHILD);
+	if (inode1 != inode2)
 		lock_extent_range(inode2, loff2, len);
+}
+
+struct cmp_pages {
+	int num_pages;
+	struct page **src_pages;
+	struct page **dst_pages;
+};
+
+static void btrfs_cmp_data_free(struct cmp_pages *cmp)
+{
+	int i;
+	struct page *pg;
+
+	for (i = 0; i < cmp->num_pages; i++) {
+		pg = cmp->src_pages[i];
+		if (pg)
+			page_cache_release(pg);
+		pg = cmp->dst_pages[i];
+		if (pg)
+			page_cache_release(pg);
 	}
+	kfree(cmp->src_pages);
+	kfree(cmp->dst_pages);
+}
+
+static int btrfs_cmp_data_prepare(struct inode *src, u64 loff,
+				  struct inode *dst, u64 dst_loff,
+				  u64 len, struct cmp_pages *cmp)
+{
+	int ret;
+	int num_pages = PAGE_CACHE_ALIGN(len) >> PAGE_CACHE_SHIFT;
+	struct page **src_pgarr, **dst_pgarr;
+
+	/*
+	 * We must gather up all the pages before we initiate our
+	 * extent locking. We use an array for the page pointers. Size
+	 * of the array is bounded by len, which is in turn bounded by
+	 * BTRFS_MAX_DEDUPE_LEN.
+	 */
+	src_pgarr = kzalloc(num_pages * sizeof(struct page *), GFP_NOFS);
+	dst_pgarr = kzalloc(num_pages * sizeof(struct page *), GFP_NOFS);
+	if (!src_pgarr || !dst_pgarr) {
+		kfree(src_pgarr);
+		kfree(dst_pgarr);
+		return -ENOMEM;
+	}
+	cmp->num_pages = num_pages;
+	cmp->src_pages = src_pgarr;
+	cmp->dst_pages = dst_pgarr;
+
+	ret = gather_extent_pages(src, cmp->src_pages, cmp->num_pages, loff);
+	if (ret)
+		goto out;
+
+	ret = gather_extent_pages(dst, cmp->dst_pages, cmp->num_pages, dst_loff);
+
+out:
+	if (ret)
+		btrfs_cmp_data_free(cmp);
+	return 0;
 }
 
 static int btrfs_cmp_data(struct inode *src, u64 loff, struct inode *dst,
-			  u64 dst_loff, u64 len)
+			  u64 dst_loff, u64 len, struct cmp_pages *cmp)
 {
 	int ret = 0;
+	int i;
 	struct page *src_page, *dst_page;
 	unsigned int cmp_len = PAGE_CACHE_SIZE;
 	void *addr, *dst_addr;
 
+	i = 0;
 	while (len) {
 		if (len < PAGE_CACHE_SIZE)
 			cmp_len = len;
 
-		src_page = extent_same_get_page(src, loff);
-		if (!src_page)
-			return -EINVAL;
-		dst_page = extent_same_get_page(dst, dst_loff);
-		if (!dst_page) {
-			page_cache_release(src_page);
-			return -EINVAL;
-		}
+		BUG_ON(i >= cmp->num_pages);
+
+		src_page = cmp->src_pages[i];
+		dst_page = cmp->dst_pages[i];
+
 		addr = kmap_atomic(src_page);
 		dst_addr = kmap_atomic(dst_page);
···
 
 		kunmap_atomic(addr);
 		kunmap_atomic(dst_addr);
-		page_cache_release(src_page);
-		page_cache_release(dst_page);
 
 		if (ret)
 			break;
 
-		loff += cmp_len;
-		dst_loff += cmp_len;
 		len -= cmp_len;
+		i++;
 	}
 
 	return ret;
···
 {
 	int ret;
 	u64 len = olen;
+	struct cmp_pages cmp;
+	int same_inode = 0;
+	u64 same_lock_start = 0;
+	u64 same_lock_len = 0;
 
-	/*
-	 * btrfs_clone() can't handle extents in the same file
-	 * yet. Once that works, we can drop this check and replace it
-	 * with a check for the same inode, but overlapping extents.
-	 */
 	if (src == dst)
-		return -EINVAL;
+		same_inode = 1;
 
 	if (len == 0)
 		return 0;
 
-	btrfs_double_lock(src, loff, dst, dst_loff, len);
+	if (same_inode) {
+		mutex_lock(&src->i_mutex);
 
-	ret = extent_same_check_offsets(src, loff, &len, olen);
-	if (ret)
-		goto out_unlock;
+		ret = extent_same_check_offsets(src, loff, &len, olen);
+		if (ret)
+			goto out_unlock;
 
-	ret = extent_same_check_offsets(dst, dst_loff, &len, olen);
-	if (ret)
-		goto out_unlock;
+		/*
+		 * Single inode case wants the same checks, except we
+		 * don't want our length pushed out past i_size as
+		 * comparing that data range makes no sense.
+		 *
+		 * extent_same_check_offsets() will do this for an
+		 * unaligned length at i_size, so catch it here and
+		 * reject the request.
+		 *
+		 * This effectively means we require aligned extents
+		 * for the single-inode case, whereas the other cases
+		 * allow an unaligned length so long as it ends at
+		 * i_size.
+		 */
+		if (len != olen) {
+			ret = -EINVAL;
+			goto out_unlock;
+		}
+
+		/* Check for overlapping ranges */
+		if (dst_loff + len > loff && dst_loff < loff + len) {
+			ret = -EINVAL;
+			goto out_unlock;
+		}
+
+		same_lock_start = min_t(u64, loff, dst_loff);
+		same_lock_len = max_t(u64, loff, dst_loff) + len - same_lock_start;
+	} else {
+		btrfs_double_inode_lock(src, dst);
+
+		ret = extent_same_check_offsets(src, loff, &len, olen);
+		if (ret)
+			goto out_unlock;
+
+		ret = extent_same_check_offsets(dst, dst_loff, &len, olen);
+		if (ret)
+			goto out_unlock;
+	}
 
 	/* don't make the dst file partly checksummed */
 	if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) !=
···
 		goto out_unlock;
 	}
 
-	ret = btrfs_cmp_data(src, loff, dst, dst_loff, len);
-	if (ret == 0)
-		ret = btrfs_clone(src, dst, loff, olen, len, dst_loff);
+	ret = btrfs_cmp_data_prepare(src, loff, dst, dst_loff, olen, &cmp);
+	if (ret)
+		goto out_unlock;
 
+	if (same_inode)
+		lock_extent_range(src, same_lock_start, same_lock_len);
+	else
+		btrfs_double_extent_lock(src, loff, dst, dst_loff, len);
+
+	/* pass original length for comparison so we stay within i_size */
+	ret = btrfs_cmp_data(src, loff, dst, dst_loff, olen, &cmp);
+	if (ret == 0)
+		ret = btrfs_clone(src, dst, loff, olen, len, dst_loff, 1);
+
+	if (same_inode)
+		unlock_extent(&BTRFS_I(src)->io_tree, same_lock_start,
+			      same_lock_start + same_lock_len - 1);
+	else
+		btrfs_double_extent_unlock(src, loff, dst, dst_loff, len);
+
+	btrfs_cmp_data_free(&cmp);
 out_unlock:
-	btrfs_double_unlock(src, loff, dst, dst_loff, len);
+	if (same_inode)
+		mutex_unlock(&src->i_mutex);
+	else
+		btrfs_double_inode_unlock(src, dst);
 
 	return ret;
 }
···
 static long btrfs_ioctl_file_extent_same(struct file *file,
 			struct btrfs_ioctl_same_args __user *argp)
 {
-	struct btrfs_ioctl_same_args *same;
+	struct btrfs_ioctl_same_args *same = NULL;
 	struct btrfs_ioctl_same_extent_info *info;
 	struct inode *src = file_inode(file);
 	u64 off;
···
 
 	if (IS_ERR(same)) {
 		ret = PTR_ERR(same);
+		same = NULL;
 		goto out;
 	}
···
 
 out:
 	mnt_drop_write_file(file);
+	kfree(same);
 	return ret;
 }
···
 				     struct inode *inode,
 				     u64 endoff,
 				     const u64 destoff,
-				     const u64 olen)
+				     const u64 olen,
+				     int no_time_update)
 {
 	struct btrfs_root *root = BTRFS_I(inode)->root;
 	int ret;
 
 	inode_inc_iversion(inode);
-	inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+	if (!no_time_update)
+		inode->i_mtime = inode->i_ctime = CURRENT_TIME;
 	/*
 	 * We round up to the block size at eof when determining which
 	 * extents to clone above, but shouldn't round up the file size.
···
  * @inode: Inode to clone to
  * @off: Offset within source to start clone from
  * @olen: Original length, passed by user, of range to clone
- * @olen_aligned: Block-aligned value of olen, extent_same uses
- *               identical values here
+ * @olen_aligned: Block-aligned value of olen
  * @destoff: Offset within @inode to start clone
+ * @no_time_update: Whether to update mtime/ctime on the target inode
  */
 static int btrfs_clone(struct inode *src, struct inode *inode,
 		       const u64 off, const u64 olen, const u64 olen_aligned,
-		       const u64 destoff)
+		       const u64 destoff, int no_time_update)
 {
 	struct btrfs_root *root = BTRFS_I(inode)->root;
 	struct btrfs_path *path = NULL;
···
 				u64 trim = 0;
 				u64 aligned_end = 0;
 
+				/*
+				 * Don't copy an inline extent into an offset
+				 * greater than zero. Having an inline extent
+				 * at such an offset results in chaos as btrfs
+				 * isn't prepared for such cases. Just skip
+				 * this case for the same reasons as commented
+				 * at btrfs_ioctl_clone().
+				 */
+				if (last_dest_end > 0) {
+					ret = -EOPNOTSUPP;
+					btrfs_end_transaction(trans, root);
+					goto out;
+				}
+
 				if (off > key.offset) {
 					skip = off - key.offset;
 					new_key.offset += skip;
···
 					 root->sectorsize);
 			ret = clone_finish_inode_update(trans, inode,
 							last_dest_end,
-							destoff, olen);
+							destoff, olen,
+							no_time_update);
 			if (ret)
 				goto out;
 			if (new_key.offset + datal >= destoff + len)
···
 		clone_update_extent_map(inode, trans, NULL, last_dest_end,
 					destoff + len - last_dest_end);
 		ret = clone_finish_inode_update(trans, inode, destoff + len,
-						destoff, olen);
+						destoff, olen, no_time_update);
 	}
 
 out:
···
 		lock_extent_range(inode, destoff, len);
 	}
 
-	ret = btrfs_clone(src, inode, off, olen, len, destoff);
+	ret = btrfs_clone(src, inode, off, olen, len, destoff, 0);
 
 	if (same_inode) {
 		u64 lock_start = min_t(u64, off, destoff);
+5
fs/btrfs/ordered-data.c
···
 	trace_btrfs_ordered_extent_put(entry->inode, entry);
 
 	if (atomic_dec_and_test(&entry->refs)) {
+		ASSERT(list_empty(&entry->log_list));
+		ASSERT(list_empty(&entry->trans_list));
+		ASSERT(list_empty(&entry->root_extent_list));
+		ASSERT(RB_EMPTY_NODE(&entry->rb_node));
 		if (entry->inode)
 			btrfs_add_delayed_iput(entry->inode);
 		while (!list_empty(&entry->list)) {
···
 	spin_lock_irq(&tree->lock);
 	node = &entry->rb_node;
 	rb_erase(node, &tree->tree);
+	RB_CLEAR_NODE(node);
 	if (tree->last == node)
 		tree->last = NULL;
 	set_bit(BTRFS_ORDERED_COMPLETE, &entry->flags);
+41-8
fs/btrfs/qgroup.c
···
 	struct btrfs_root *quota_root;
 	struct btrfs_qgroup *qgroup;
 	int ret = 0;
+	/* Sometimes we would want to clear the limit on this qgroup.
+	 * To meet this requirement, we treat the -1 as a special value
+	 * which tells the kernel to clear the limit on this qgroup.
+	 */
+	const u64 CLEAR_VALUE = -1;
 
 	mutex_lock(&fs_info->qgroup_ioctl_lock);
 	quota_root = fs_info->quota_root;
···
 	}
 
 	spin_lock(&fs_info->qgroup_lock);
-	if (limit->flags & BTRFS_QGROUP_LIMIT_MAX_RFER)
-		qgroup->max_rfer = limit->max_rfer;
-	if (limit->flags & BTRFS_QGROUP_LIMIT_MAX_EXCL)
-		qgroup->max_excl = limit->max_excl;
-	if (limit->flags & BTRFS_QGROUP_LIMIT_RSV_RFER)
-		qgroup->rsv_rfer = limit->rsv_rfer;
-	if (limit->flags & BTRFS_QGROUP_LIMIT_RSV_EXCL)
-		qgroup->rsv_excl = limit->rsv_excl;
+	if (limit->flags & BTRFS_QGROUP_LIMIT_MAX_RFER) {
+		if (limit->max_rfer == CLEAR_VALUE) {
+			qgroup->lim_flags &= ~BTRFS_QGROUP_LIMIT_MAX_RFER;
+			limit->flags &= ~BTRFS_QGROUP_LIMIT_MAX_RFER;
+			qgroup->max_rfer = 0;
+		} else {
+			qgroup->max_rfer = limit->max_rfer;
+		}
+	}
+	if (limit->flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) {
+		if (limit->max_excl == CLEAR_VALUE) {
+			qgroup->lim_flags &= ~BTRFS_QGROUP_LIMIT_MAX_EXCL;
+			limit->flags &= ~BTRFS_QGROUP_LIMIT_MAX_EXCL;
+			qgroup->max_excl = 0;
+		} else {
+			qgroup->max_excl = limit->max_excl;
+		}
+	}
+	if (limit->flags & BTRFS_QGROUP_LIMIT_RSV_RFER) {
+		if (limit->rsv_rfer == CLEAR_VALUE) {
+			qgroup->lim_flags &= ~BTRFS_QGROUP_LIMIT_RSV_RFER;
+			limit->flags &= ~BTRFS_QGROUP_LIMIT_RSV_RFER;
+			qgroup->rsv_rfer = 0;
+		} else {
+			qgroup->rsv_rfer = limit->rsv_rfer;
+		}
+	}
+	if (limit->flags & BTRFS_QGROUP_LIMIT_RSV_EXCL) {
+		if (limit->rsv_excl == CLEAR_VALUE) {
+			qgroup->lim_flags &= ~BTRFS_QGROUP_LIMIT_RSV_EXCL;
+			limit->flags &= ~BTRFS_QGROUP_LIMIT_RSV_EXCL;
+			qgroup->rsv_excl = 0;
+		} else {
+			qgroup->rsv_excl = limit->rsv_excl;
+		}
+	}
 	qgroup->lim_flags |= limit->flags;
 
 	spin_unlock(&fs_info->qgroup_lock);
+1-1
fs/btrfs/relocation.c
···
 	if (trans && progress && err == -ENOSPC) {
 		ret = btrfs_force_chunk_alloc(trans, rc->extent_root,
 					      rc->block_group->flags);
-		if (ret == 0) {
+		if (ret == 1) {
 			err = 0;
 			progress = 0;
 			goto restart;
+20-19
fs/btrfs/scrub.c
···
 static noinline_for_stack int scrub_workers_get(struct btrfs_fs_info *fs_info,
 						int is_dev_replace)
 {
-	int ret = 0;
 	unsigned int flags = WQ_FREEZABLE | WQ_UNBOUND;
 	int max_active = fs_info->thread_pool_size;
···
 		fs_info->scrub_workers =
 			btrfs_alloc_workqueue("btrfs-scrub", flags,
 					      max_active, 4);
-		if (!fs_info->scrub_workers) {
-			ret = -ENOMEM;
-			goto out;
-		}
+		if (!fs_info->scrub_workers)
+			goto fail_scrub_workers;
+
 		fs_info->scrub_wr_completion_workers =
 			btrfs_alloc_workqueue("btrfs-scrubwrc", flags,
 					      max_active, 2);
-		if (!fs_info->scrub_wr_completion_workers) {
-			ret = -ENOMEM;
-			goto out;
-		}
+		if (!fs_info->scrub_wr_completion_workers)
+			goto fail_scrub_wr_completion_workers;
+
 		fs_info->scrub_nocow_workers =
 			btrfs_alloc_workqueue("btrfs-scrubnc", flags, 1, 0);
-		if (!fs_info->scrub_nocow_workers) {
-			ret = -ENOMEM;
-			goto out;
-		}
+		if (!fs_info->scrub_nocow_workers)
+			goto fail_scrub_nocow_workers;
 		fs_info->scrub_parity_workers =
 			btrfs_alloc_workqueue("btrfs-scrubparity", flags,
 					      max_active, 2);
-		if (!fs_info->scrub_parity_workers) {
-			ret = -ENOMEM;
-			goto out;
-		}
+		if (!fs_info->scrub_parity_workers)
+			goto fail_scrub_parity_workers;
 	}
 	++fs_info->scrub_workers_refcnt;
-out:
-	return ret;
+	return 0;
+
+fail_scrub_parity_workers:
+	btrfs_destroy_workqueue(fs_info->scrub_nocow_workers);
+fail_scrub_nocow_workers:
+	btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
+fail_scrub_wr_completion_workers:
+	btrfs_destroy_workqueue(fs_info->scrub_workers);
+fail_scrub_workers:
+	return -ENOMEM;
 }
 
 static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
fs/btrfs/tree-log.c
···
 	return 0;
 }
 
+/*
+ * At the moment we always log all xattrs. This is to figure out at log replay
+ * time which xattrs must have their deletion replayed. If a xattr is missing
+ * in the log tree and exists in the fs/subvol tree, we delete it. This is
+ * because if a xattr is deleted, the inode is fsynced and a power failure
+ * happens, causing the log to be replayed the next time the fs is mounted,
+ * we want the xattr to not exist anymore (same behaviour as other filesystems
+ * with a journal, ext3/4, xfs, f2fs, etc).
+ */
+static int btrfs_log_all_xattrs(struct btrfs_trans_handle *trans,
+				struct btrfs_root *root,
+				struct inode *inode,
+				struct btrfs_path *path,
+				struct btrfs_path *dst_path)
+{
+	int ret;
+	struct btrfs_key key;
+	const u64 ino = btrfs_ino(inode);
+	int ins_nr = 0;
+	int start_slot = 0;
+
+	key.objectid = ino;
+	key.type = BTRFS_XATTR_ITEM_KEY;
+	key.offset = 0;
+
+	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+	if (ret < 0)
+		return ret;
+
+	while (true) {
+		int slot = path->slots[0];
+		struct extent_buffer *leaf = path->nodes[0];
+		int nritems = btrfs_header_nritems(leaf);
+
+		if (slot >= nritems) {
+			if (ins_nr > 0) {
+				u64 last_extent = 0;
+
+				ret = copy_items(trans, inode, dst_path, path,
+						 &last_extent, start_slot,
+						 ins_nr, 1, 0);
+				/* can't be 1, extent items aren't processed */
+				ASSERT(ret <= 0);
+				if (ret < 0)
+					return ret;
+				ins_nr = 0;
+			}
+			ret = btrfs_next_leaf(root, path);
+			if (ret < 0)
+				return ret;
+			else if (ret > 0)
+				break;
+			continue;
+		}
+
+		btrfs_item_key_to_cpu(leaf, &key, slot);
+		if (key.objectid != ino || key.type != BTRFS_XATTR_ITEM_KEY)
+			break;
+
+		if (ins_nr == 0)
+			start_slot = slot;
+		ins_nr++;
+		path->slots[0]++;
+		cond_resched();
+	}
+	if (ins_nr > 0) {
+		u64 last_extent = 0;
+
+		ret = copy_items(trans, inode, dst_path, path,
+				 &last_extent, start_slot,
+				 ins_nr, 1, 0);
+		/* can't be 1, extent items aren't processed */
+		ASSERT(ret <= 0);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * If the no holes feature is enabled we need to make sure any hole between the
+ * last extent and the i_size of our inode is explicitly marked in the log. This
+ * is to make sure that doing something like:
+ *
+ * 1) create file with 128Kb of data
+ * 2) truncate file to 64Kb
+ * 3) truncate file to 256Kb
+ * 4) fsync file
+ * 5) <crash/power failure>
+ * 6) mount fs and trigger log replay
+ *
+ * Will give us a file with a size of 256Kb, the first 64Kb of data match what
+ * the file had in its first 64Kb of data at step 1 and the last 192Kb of the
+ * file correspond to a hole. The presence of explicit holes in a log tree is
+ * what guarantees that log replay will remove/adjust file extent items in the
+ * fs/subvol tree.
+ *
+ * Here we do not need to care about holes between extents, that is already done
+ * by copy_items(). We also only need to do this in the full sync path, where we
+ * lookup for extents from the fs/subvol tree only. In the fast path case, we
+ * lookup the list of modified extent maps and if any represents a hole, we
+ * insert a corresponding extent representing a hole in the log tree.
+ */
+static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
+				   struct btrfs_root *root,
+				   struct inode *inode,
+				   struct btrfs_path *path)
+{
+	int ret;
+	struct btrfs_key key;
+	u64 hole_start;
+	u64 hole_size;
+	struct extent_buffer *leaf;
+	struct btrfs_root *log = root->log_root;
+	const u64 ino = btrfs_ino(inode);
+	const u64 i_size = i_size_read(inode);
+
+	if (!btrfs_fs_incompat(root->fs_info, NO_HOLES))
+		return 0;
+
+	key.objectid = ino;
+	key.type = BTRFS_EXTENT_DATA_KEY;
+	key.offset = (u64)-1;
+
+	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+	ASSERT(ret != 0);
+	if (ret < 0)
+		return ret;
+
+	ASSERT(path->slots[0] > 0);
+	path->slots[0]--;
+	leaf = path->nodes[0];
+	btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
+
+	if (key.objectid != ino || key.type != BTRFS_EXTENT_DATA_KEY) {
+		/* inode does not have any extents */
+		hole_start = 0;
+		hole_size = i_size;
+	} else {
+		struct btrfs_file_extent_item *extent;
+		u64 len;
+
+		/*
+		 * If there's an extent beyond i_size, an explicit hole was
+		 * already inserted by copy_items().
+		 */
+		if (key.offset >= i_size)
+			return 0;
+
+		extent = btrfs_item_ptr(leaf, path->slots[0],
+					struct btrfs_file_extent_item);
+
+		if (btrfs_file_extent_type(leaf, extent) ==
+		    BTRFS_FILE_EXTENT_INLINE) {
+			len = btrfs_file_extent_inline_len(leaf,
+							   path->slots[0],
+							   extent);
+			ASSERT(len == i_size);
+			return 0;
+		}
+
+		len = btrfs_file_extent_num_bytes(leaf, extent);
+		/* Last extent goes beyond i_size, no need to log a hole. */
+		if (key.offset + len > i_size)
+			return 0;
+		hole_start = key.offset + len;
+		hole_size = i_size - hole_start;
+	}
+	btrfs_release_path(path);
+
+	/* Last extent ends at i_size. */
+	if (hole_size == 0)
+		return 0;
+
+	hole_size = ALIGN(hole_size, root->sectorsize);
+	ret = btrfs_insert_file_extent(trans, log, ino, hole_start, 0, 0,
+				       hole_size, 0, hole_size, 0, 0, 0);
+	return ret;
+}
+
 /* log a single inode in the tree log.
  * At least one parent directory for this inode must exist in the tree
  * or be logged already.
···
 	u64 ino = btrfs_ino(inode);
 	struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
 	u64 logged_isize = 0;
+	bool need_log_inode_item = true;
 
 	path = btrfs_alloc_path();
 	if (!path)
···
 	} else {
 		if (inode_only == LOG_INODE_ALL)
 			fast_search = true;
-		ret = log_inode_item(trans, log, dst_path, inode);
-		if (ret) {
-			err = ret;
-			goto out_unlock;
-		}
 		goto log_extents;
 	}
···
 			break;
 		if (min_key.type > max_key.type)
 			break;
+
+		if (min_key.type == BTRFS_INODE_ITEM_KEY)
+			need_log_inode_item = false;
+
+		/* Skip xattrs, we log them later with btrfs_log_all_xattrs() */
+		if (min_key.type == BTRFS_XATTR_ITEM_KEY) {
+			if (ins_nr == 0)
+				goto next_slot;
+			ret = copy_items(trans, inode, dst_path, path,
+					 &last_extent, ins_start_slot,
+					 ins_nr, inode_only, logged_isize);
+			if (ret < 0) {
+				err = ret;
+				goto out_unlock;
+			}
+			ins_nr = 0;
+			if (ret) {
+				btrfs_release_path(path);
+				continue;
+			}
+			goto next_slot;
+		}
 
 		src = path->nodes[0];
 		if (ins_nr && ins_start_slot + ins_nr == path->slots[0]) {
···
 		ins_nr = 0;
 	}
 
+	btrfs_release_path(path);
+	btrfs_release_path(dst_path);
+	err = btrfs_log_all_xattrs(trans, root, inode, path, dst_path);
+	if (err)
+		goto out_unlock;
+	if (max_key.type >= BTRFS_EXTENT_DATA_KEY && !fast_search) {
+		btrfs_release_path(path);
+		btrfs_release_path(dst_path);
+		err = btrfs_log_trailing_hole(trans, root, inode, path);
+		if (err)
+			goto out_unlock;
+	}
 log_extents:
 	btrfs_release_path(path);
 	btrfs_release_path(dst_path);
+	if (need_log_inode_item) {
+		err = log_inode_item(trans, log, dst_path, inode);
+		if (err)
+			goto out_unlock;
+	}
 	if (fast_search) {
 		/*
 		 * Some ordered extents started by fsync might have completed
+44-6
fs/btrfs/volumes.c
···
     root = root->fs_info->chunk_root;
     extent_root = root->fs_info->extent_root;
 
+    /*
+     * Prevent races with automatic removal of unused block groups.
+     * After we relocate and before we remove the chunk with offset
+     * chunk_offset, automatic removal of the block group can kick in,
+     * resulting in a failure when calling btrfs_remove_chunk() below.
+     *
+     * Make sure to acquire this mutex before doing a tree search (dev
+     * or chunk trees) to find chunks. Otherwise the cleaner kthread might
+     * call btrfs_remove_chunk() (through btrfs_delete_unused_bgs()) after
+     * we release the path used to search the chunk/dev tree and before
+     * the current task acquires this mutex and calls us.
+     */
+    ASSERT(mutex_is_locked(&root->fs_info->delete_unused_bgs_mutex));
+
     ret = btrfs_can_relocate(extent_root, chunk_offset);
     if (ret)
         return -ENOSPC;
···
     key.type = BTRFS_CHUNK_ITEM_KEY;
 
     while (1) {
+        mutex_lock(&root->fs_info->delete_unused_bgs_mutex);
         ret = btrfs_search_slot(NULL, chunk_root, &key, path, 0, 0);
-        if (ret < 0)
+        if (ret < 0) {
+            mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
             goto error;
+        }
         BUG_ON(ret == 0); /* Corruption */
 
         ret = btrfs_previous_item(chunk_root, path, key.objectid,
                                   key.type);
+        if (ret)
+            mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
         if (ret < 0)
             goto error;
         if (ret > 0)
···
         else
             BUG_ON(ret);
     }
+    mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
 
     if (found_key.offset == 0)
         break;
···
             goto error;
         }
 
+        mutex_lock(&fs_info->delete_unused_bgs_mutex);
         ret = btrfs_search_slot(NULL, chunk_root, &key, path, 0, 0);
-        if (ret < 0)
+        if (ret < 0) {
+            mutex_unlock(&fs_info->delete_unused_bgs_mutex);
             goto error;
+        }
 
         /*
          * this shouldn't happen, it means the last relocate
···
         ret = btrfs_previous_item(chunk_root, path, 0,
                                   BTRFS_CHUNK_ITEM_KEY);
         if (ret) {
+            mutex_unlock(&fs_info->delete_unused_bgs_mutex);
             ret = 0;
             break;
         }
···
         slot = path->slots[0];
         btrfs_item_key_to_cpu(leaf, &found_key, slot);
 
-        if (found_key.objectid != key.objectid)
+        if (found_key.objectid != key.objectid) {
+            mutex_unlock(&fs_info->delete_unused_bgs_mutex);
             break;
+        }
 
         chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk);
···
         ret = should_balance_chunk(chunk_root, leaf, chunk,
                                    found_key.offset);
         btrfs_release_path(path);
-        if (!ret)
+        if (!ret) {
+            mutex_unlock(&fs_info->delete_unused_bgs_mutex);
             goto loop;
+        }
 
         if (counting) {
+            mutex_unlock(&fs_info->delete_unused_bgs_mutex);
             spin_lock(&fs_info->balance_lock);
             bctl->stat.expected++;
             spin_unlock(&fs_info->balance_lock);
···
         ret = btrfs_relocate_chunk(chunk_root,
                                    found_key.objectid,
                                    found_key.offset);
+        mutex_unlock(&fs_info->delete_unused_bgs_mutex);
         if (ret && ret != -ENOSPC)
             goto error;
         if (ret == -ENOSPC) {
···
     key.type = BTRFS_DEV_EXTENT_KEY;
 
     do {
+        mutex_lock(&root->fs_info->delete_unused_bgs_mutex);
         ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
-        if (ret < 0)
+        if (ret < 0) {
+            mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
             goto done;
+        }
 
         ret = btrfs_previous_item(root, path, 0, key.type);
+        if (ret)
+            mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
         if (ret < 0)
             goto done;
         if (ret) {
···
         btrfs_item_key_to_cpu(l, &key, path->slots[0]);
 
         if (key.objectid != device->devid) {
+            mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
             btrfs_release_path(path);
             break;
         }
···
         length = btrfs_dev_extent_length(l, dev_extent);
 
         if (key.offset + length <= new_size) {
+            mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
             btrfs_release_path(path);
             break;
         }
···
         btrfs_release_path(path);
 
         ret = btrfs_relocate_chunk(root, chunk_objectid, chunk_offset);
+        mutex_unlock(&root->fs_info->delete_unused_bgs_mutex);
         if (ret && ret != -ENOSPC)
             goto done;
         if (ret == -ENOSPC)
···
 static void btrfs_end_bio(struct bio *bio, int err)
 {
     struct btrfs_bio *bbio = bio->bi_private;
-    struct btrfs_device *dev = bbio->stripes[0].dev;
     int is_orig_bio = 0;
 
     if (err) {
···
         if (err == -EIO || err == -EREMOTEIO) {
             unsigned int stripe_index =
                 btrfs_io_bio(bio)->stripe_index;
+            struct btrfs_device *dev;
 
             BUG_ON(stripe_index >= bbio->num_stripes);
             dev = bbio->stripes[stripe_index].dev;
+1
fs/compat_ioctl.c
···
 /* 'X' - originally XFS but some now in the VFS */
 COMPATIBLE_IOCTL(FIFREEZE)
 COMPATIBLE_IOCTL(FITHAW)
+COMPATIBLE_IOCTL(FITRIM)
 COMPATIBLE_IOCTL(KDGETKEYCODE)
 COMPATIBLE_IOCTL(KDSETKEYCODE)
 COMPATIBLE_IOCTL(KDGKBTYPE)
fs/dcache.c
···
     /*
      * If we have a d_op->d_delete() operation, we sould not
-     * let the dentry count go to zero, so use "put__or_lock".
+     * let the dentry count go to zero, so use "put_or_lock".
      */
     if (unlikely(dentry->d_flags & DCACHE_OP_DELETE))
         return lockref_put_or_lock(&dentry->d_lockref);
···
      */
     smp_rmb();
     d_flags = ACCESS_ONCE(dentry->d_flags);
-    d_flags &= DCACHE_REFERENCED | DCACHE_LRU_LIST;
+    d_flags &= DCACHE_REFERENCED | DCACHE_LRU_LIST | DCACHE_DISCONNECTED;
 
     /* Nothing to do? Dropping the reference was all we needed? */
     if (d_flags == (DCACHE_REFERENCED | DCACHE_LRU_LIST) && !d_unhashed(dentry))
···
 
     /* Unreachable? Get rid of it */
     if (unlikely(d_unhashed(dentry)))
+        goto kill_it;
+
+    if (unlikely(dentry->d_flags & DCACHE_DISCONNECTED))
         goto kill_it;
 
     if (unlikely(dentry->d_flags & DCACHE_OP_DELETE)) {
-1
fs/ecryptfs/file.c
···
         return rc;
 
     switch (cmd) {
-    case FITRIM:
     case FS_IOC32_GETFLAGS:
     case FS_IOC32_SETFLAGS:
     case FS_IOC32_GETVERSION:
fs/ext4/inode.c
···
                                 unsigned int offset,
                                 unsigned int length)
 {
-    int to_release = 0;
+    int to_release = 0, contiguous_blks = 0;
     struct buffer_head *head, *bh;
     unsigned int curr_off = 0;
     struct inode *inode = page->mapping->host;
···
 
         if ((offset <= curr_off) && (buffer_delay(bh))) {
             to_release++;
+            contiguous_blks++;
             clear_buffer_delay(bh);
+        } else if (contiguous_blks) {
+            lblk = page->index <<
+                   (PAGE_CACHE_SHIFT - inode->i_blkbits);
+            lblk += (curr_off >> inode->i_blkbits) -
+                    contiguous_blks;
+            ext4_es_remove_extent(inode, lblk, contiguous_blks);
+            contiguous_blks = 0;
         }
         curr_off = next_off;
     } while ((bh = bh->b_this_page) != head);
 
-    if (to_release) {
+    if (contiguous_blks) {
         lblk = page->index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
-        ext4_es_remove_extent(inode, lblk, to_release);
+        lblk += (curr_off >> inode->i_blkbits) - contiguous_blks;
+        ext4_es_remove_extent(inode, lblk, contiguous_blks);
     }
 
     /* If we have released all the blocks belonging to a cluster, then we
···
     int inode_size = EXT4_INODE_SIZE(sb);
 
     oi.orig_ino = orig_ino;
-    ino = (orig_ino & ~(inodes_per_block - 1)) + 1;
+    /*
+     * Calculate the first inode in the inode table block. Inode
+     * numbers are one-based. That is, the first inode in a block
+     * (assuming 4k blocks and 256 byte inodes) is (n*16 + 1).
+     */
+    ino = ((orig_ino - 1) & ~(inodes_per_block - 1)) + 1;
     for (i = 0; i < inodes_per_block; i++, ino++, buf += inode_size) {
         if (ino == orig_ino)
             continue;
-1
fs/ext4/ioctl.c
···
         return err;
     }
     case EXT4_IOC_MOVE_EXT:
-    case FITRIM:
     case EXT4_IOC_RESIZE_FS:
     case EXT4_IOC_PRECACHE_EXTENTS:
     case EXT4_IOC_SET_ENCRYPTION_POLICY:
+5-11
fs/ext4/mballoc.c
···
     /*
      * blocks being freed are metadata. these blocks shouldn't
      * be used until this transaction is committed
+     *
+     * We use __GFP_NOFAIL because ext4_free_blocks() is not allowed
+     * to fail.
      */
-retry:
-    new_entry = kmem_cache_alloc(ext4_free_data_cachep, GFP_NOFS);
-    if (!new_entry) {
-        /*
-         * We use a retry loop because
-         * ext4_free_blocks() is not allowed to fail.
-         */
-        cond_resched();
-        congestion_wait(BLK_RW_ASYNC, HZ/50);
-        goto retry;
-    }
+    new_entry = kmem_cache_alloc(ext4_free_data_cachep,
+                                 GFP_NOFS|__GFP_NOFAIL);
     new_entry->efd_start_cluster = bit;
     new_entry->efd_group = block_group;
     new_entry->efd_count = count_clusters;
+14-3
fs/ext4/migrate.c
···
     struct ext4_inode_info *ei = EXT4_I(inode);
     struct ext4_extent *ex;
     unsigned int i, len;
+    ext4_lblk_t start, end;
     ext4_fsblk_t blk;
     handle_t *handle;
     int ret;
···
     if (EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb,
                                    EXT4_FEATURE_RO_COMPAT_BIGALLOC))
         return -EOPNOTSUPP;
+
+    /*
+     * In order to get correct extent info, force all delayed allocation
+     * blocks to be allocated, otherwise delayed allocation blocks may not
+     * be reflected and bypass the checks on extent header.
+     */
+    if (test_opt(inode->i_sb, DELALLOC))
+        ext4_alloc_da_blocks(inode);
 
     handle = ext4_journal_start(inode, EXT4_HT_MIGRATE, 1);
     if (IS_ERR(handle))
···
         goto errout;
     }
     if (eh->eh_entries == 0)
-        blk = len = 0;
+        blk = len = start = end = 0;
     else {
         len = le16_to_cpu(ex->ee_len);
         blk = ext4_ext_pblock(ex);
-        if (len > EXT4_NDIR_BLOCKS) {
+        start = le32_to_cpu(ex->ee_block);
+        end = start + len - 1;
+        if (end >= EXT4_NDIR_BLOCKS) {
             ret = -EOPNOTSUPP;
             goto errout;
         }
···
 
     ext4_clear_inode_flag(inode, EXT4_INODE_EXTENTS);
     memset(ei->i_data, 0, sizeof(ei->i_data));
-    for (i=0; i < len; i++)
+    for (i = start; i <= end; i++)
         ei->i_data[i] = cpu_to_le32(blk++);
     ext4_mark_inode_dirty(handle, inode);
 errout:
+95
fs/hpfs/alloc.c
···
     a->btree.first_free = cpu_to_le16(8);
     return a;
 }
+
+static unsigned find_run(__le32 *bmp, unsigned *idx)
+{
+    unsigned len;
+    while (tstbits(bmp, *idx, 1)) {
+        (*idx)++;
+        if (unlikely(*idx >= 0x4000))
+            return 0;
+    }
+    len = 1;
+    while (!tstbits(bmp, *idx + len, 1))
+        len++;
+    return len;
+}
+
+static int do_trim(struct super_block *s, secno start, unsigned len, secno limit_start, secno limit_end, unsigned minlen, unsigned *result)
+{
+    int err;
+    secno end;
+    if (fatal_signal_pending(current))
+        return -EINTR;
+    end = start + len;
+    if (start < limit_start)
+        start = limit_start;
+    if (end > limit_end)
+        end = limit_end;
+    if (start >= end)
+        return 0;
+    if (end - start < minlen)
+        return 0;
+    err = sb_issue_discard(s, start, end - start, GFP_NOFS, 0);
+    if (err)
+        return err;
+    *result += end - start;
+    return 0;
+}
+
+int hpfs_trim_fs(struct super_block *s, u64 start, u64 end, u64 minlen, unsigned *result)
+{
+    int err = 0;
+    struct hpfs_sb_info *sbi = hpfs_sb(s);
+    unsigned idx, len, start_bmp, end_bmp;
+    __le32 *bmp;
+    struct quad_buffer_head qbh;
+
+    *result = 0;
+    if (!end || end > sbi->sb_fs_size)
+        end = sbi->sb_fs_size;
+    if (start >= sbi->sb_fs_size)
+        return 0;
+    if (minlen > 0x4000)
+        return 0;
+    if (start < sbi->sb_dirband_start + sbi->sb_dirband_size && end > sbi->sb_dirband_start) {
+        hpfs_lock(s);
+        if (s->s_flags & MS_RDONLY) {
+            err = -EROFS;
+            goto unlock_1;
+        }
+        if (!(bmp = hpfs_map_dnode_bitmap(s, &qbh))) {
+            err = -EIO;
+            goto unlock_1;
+        }
+        idx = 0;
+        while ((len = find_run(bmp, &idx)) && !err) {
+            err = do_trim(s, sbi->sb_dirband_start + idx * 4, len * 4, start, end, minlen, result);
+            idx += len;
+        }
+        hpfs_brelse4(&qbh);
+unlock_1:
+        hpfs_unlock(s);
+    }
+    start_bmp = start >> 14;
+    end_bmp = (end + 0x3fff) >> 14;
+    while (start_bmp < end_bmp && !err) {
+        hpfs_lock(s);
+        if (s->s_flags & MS_RDONLY) {
+            err = -EROFS;
+            goto unlock_2;
+        }
+        if (!(bmp = hpfs_map_bitmap(s, start_bmp, &qbh, "trim"))) {
+            err = -EIO;
+            goto unlock_2;
+        }
+        idx = 0;
+        while ((len = find_run(bmp, &idx)) && !err) {
+            err = do_trim(s, (start_bmp << 14) + idx, len, start, end, minlen, result);
+            idx += len;
+        }
+        hpfs_brelse4(&qbh);
+unlock_2:
+        hpfs_unlock(s);
+        start_bmp++;
+    }
+    return err;
+}
fs/jfs/inode.c
···
      * It has been committed since the last change, but was still
      * on the dirty inode list.
      */
-     if (!test_cflag(COMMIT_Dirty, inode)) {
+    if (!test_cflag(COMMIT_Dirty, inode)) {
         /* Make sure committed changes hit the disk */
         jfs_flush_journal(JFS_SBI(inode->i_sb)->log, wait);
         return 0;
-     }
+    }
 
     if (jfs_commit_inode(inode, wait)) {
         jfs_err("jfs_write_inode: jfs_commit_inode failed!");
-3
fs/jfs/ioctl.c
···
     case JFS_IOC_SETFLAGS32:
         cmd = JFS_IOC_SETFLAGS;
         break;
-    case FITRIM:
-        cmd = FITRIM;
-        break;
     }
     return jfs_ioctl(filp, cmd, arg);
 }
+13-14
fs/jfs/namei.c
···
         rc = dtModify(tid, new_dir, &new_dname, &ino,
                       old_ip->i_ino, JFS_RENAME);
         if (rc)
-            goto out4;
+            goto out_tx;
         drop_nlink(new_ip);
         if (S_ISDIR(new_ip->i_mode)) {
             drop_nlink(new_ip);
···
         if ((new_size = commitZeroLink(tid, new_ip)) < 0) {
             txAbort(tid, 1);    /* Marks FS Dirty */
             rc = new_size;
-            goto out4;
+            goto out_tx;
         }
         tblk = tid_to_tblock(tid);
         tblk->xflag |= COMMIT_DELETE;
···
     if (rc) {
         jfs_err("jfs_rename didn't expect dtSearch to fail "
                 "w/rc = %d", rc);
-        goto out4;
+        goto out_tx;
     }
 
     ino = old_ip->i_ino;
···
     if (rc) {
         if (rc == -EIO)
             jfs_err("jfs_rename: dtInsert returned -EIO");
-        goto out4;
+        goto out_tx;
     }
     if (S_ISDIR(old_ip->i_mode))
         inc_nlink(new_dir);
···
         jfs_err("jfs_rename did not expect dtDelete to return rc = %d",
                 rc);
         txAbort(tid, 1);    /* Marks Filesystem dirty */
-        goto out4;
+        goto out_tx;
     }
     if (S_ISDIR(old_ip->i_mode)) {
         drop_nlink(old_dir);
···
 
     rc = txCommit(tid, ipcount, iplist, commit_flag);
 
-      out4:
+      out_tx:
     txEnd(tid);
     if (new_ip)
         mutex_unlock(&JFS_IP(new_ip)->commit_mutex);
···
     }
     if (new_ip && (new_ip->i_nlink == 0))
         set_cflag(COMMIT_Nolink, new_ip);
-      out3:
-    free_UCSname(&new_dname);
-      out2:
-    free_UCSname(&old_dname);
-      out1:
-    if (new_ip && !S_ISDIR(new_ip->i_mode))
-        IWRITE_UNLOCK(new_ip);
     /*
      * Truncating the directory index table is not guaranteed.  It
      * may need to be done iteratively
···
 
         clear_cflag(COMMIT_Stale, old_dir);
     }
-
+    if (new_ip && !S_ISDIR(new_ip->i_mode))
+        IWRITE_UNLOCK(new_ip);
+      out3:
+    free_UCSname(&new_dname);
+      out2:
+    free_UCSname(&old_dname);
+      out1:
     jfs_info("jfs_rename: returning %d", rc);
     return rc;
 }
+18-20
fs/locks.c
···
  * whether or not a lock was successfully freed by testing the return
  * value for -ENOENT.
  */
-static int flock_lock_file(struct file *filp, struct file_lock *request)
+static int flock_lock_inode(struct inode *inode, struct file_lock *request)
 {
     struct file_lock *new_fl = NULL;
     struct file_lock *fl;
     struct file_lock_context *ctx;
-    struct inode *inode = file_inode(filp);
     int error = 0;
     bool found = false;
     LIST_HEAD(dispose);
···
         goto find_conflict;
 
     list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
-        if (filp != fl->fl_file)
+        if (request->fl_file != fl->fl_file)
             continue;
         if (request->fl_type == fl->fl_type)
             goto out;
···
 EXPORT_SYMBOL(posix_lock_file);
 
 /**
- * posix_lock_file_wait - Apply a POSIX-style lock to a file
- * @filp: The file to apply the lock to
+ * posix_lock_inode_wait - Apply a POSIX-style lock to a file
+ * @inode: inode of file to which lock request should be applied
  * @fl: The lock to be applied
  *
- * Add a POSIX style lock to a file.
- * We merge adjacent & overlapping locks whenever possible.
- * POSIX locks are sorted by owner task, then by starting address
+ * Variant of posix_lock_file_wait that does not take a filp, and so can be
+ * used after the filp has already been torn down.
  */
-int posix_lock_file_wait(struct file *filp, struct file_lock *fl)
+int posix_lock_inode_wait(struct inode *inode, struct file_lock *fl)
 {
     int error;
     might_sleep ();
     for (;;) {
-        error = posix_lock_file(filp, fl, NULL);
+        error = __posix_lock_file(inode, fl, NULL);
         if (error != FILE_LOCK_DEFERRED)
             break;
         error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
···
     }
     return error;
 }
-EXPORT_SYMBOL(posix_lock_file_wait);
+EXPORT_SYMBOL(posix_lock_inode_wait);
 
 /**
  * locks_mandatory_locked - Check for an active lock
···
 }
 
 /**
- * flock_lock_file_wait - Apply a FLOCK-style lock to a file
- * @filp: The file to apply the lock to
+ * flock_lock_inode_wait - Apply a FLOCK-style lock to a file
+ * @inode: inode of the file to apply to
  * @fl: The lock to be applied
  *
- * Add a FLOCK style lock to a file.
+ * Apply a FLOCK style lock request to an inode.
  */
-int flock_lock_file_wait(struct file *filp, struct file_lock *fl)
+int flock_lock_inode_wait(struct inode *inode, struct file_lock *fl)
 {
     int error;
     might_sleep();
     for (;;) {
-        error = flock_lock_file(filp, fl);
+        error = flock_lock_inode(inode, fl);
         if (error != FILE_LOCK_DEFERRED)
             break;
         error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
···
     }
     return error;
 }
-
-EXPORT_SYMBOL(flock_lock_file_wait);
+EXPORT_SYMBOL(flock_lock_inode_wait);
 
 /**
  * sys_flock: - flock() system call.
···
         .fl_type = F_UNLCK,
         .fl_end = OFFSET_MAX,
     };
-    struct file_lock_context *flctx = file_inode(filp)->i_flctx;
+    struct inode *inode = file_inode(filp);
+    struct file_lock_context *flctx = inode->i_flctx;
 
     if (list_empty(&flctx->flc_flock))
         return;
···
     if (filp->f_op->flock)
         filp->f_op->flock(filp, F_SETLKW, &fl);
     else
-        flock_lock_file(filp, &fl);
+        flock_lock_inode(inode, &fl);
 
     if (fl.fl_ops && fl.fl_ops->fl_release_private)
         fl.fl_ops->fl_release_private(&fl);
+8-10
fs/nfs/nfs4proc.c
···
     return err;
 }
 
-static int do_vfs_lock(struct file *file, struct file_lock *fl)
+static int do_vfs_lock(struct inode *inode, struct file_lock *fl)
 {
     int res = 0;
     switch (fl->fl_flags & (FL_POSIX|FL_FLOCK)) {
         case FL_POSIX:
-            res = posix_lock_file_wait(file, fl);
+            res = posix_lock_inode_wait(inode, fl);
             break;
         case FL_FLOCK:
-            res = flock_lock_file_wait(file, fl);
+            res = flock_lock_inode_wait(inode, fl);
             break;
         default:
             BUG();
···
     atomic_inc(&lsp->ls_count);
     /* Ensure we don't close file until we're done freeing locks! */
     p->ctx = get_nfs_open_context(ctx);
-    get_file(fl->fl_file);
     memcpy(&p->fl, fl, sizeof(p->fl));
     p->server = NFS_SERVER(inode);
     return p;
···
     nfs_free_seqid(calldata->arg.seqid);
     nfs4_put_lock_state(calldata->lsp);
     put_nfs_open_context(calldata->ctx);
-    fput(calldata->fl.fl_file);
     kfree(calldata);
 }
···
     switch (task->tk_status) {
         case 0:
             renew_lease(calldata->server, calldata->timestamp);
-            do_vfs_lock(calldata->fl.fl_file, &calldata->fl);
+            do_vfs_lock(calldata->lsp->ls_state->inode, &calldata->fl);
             if (nfs4_update_lock_stateid(calldata->lsp,
                     &calldata->res.stateid))
                 break;
···
     mutex_lock(&sp->so_delegreturn_mutex);
     /* Exclude nfs4_reclaim_open_stateid() - note nesting! */
     down_read(&nfsi->rwsem);
-    if (do_vfs_lock(request->fl_file, request) == -ENOENT) {
+    if (do_vfs_lock(inode, request) == -ENOENT) {
         up_read(&nfsi->rwsem);
         mutex_unlock(&sp->so_delegreturn_mutex);
         goto out;
···
                 data->timestamp);
         if (data->arg.new_lock) {
             data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
-            if (do_vfs_lock(data->fl.fl_file, &data->fl) < 0) {
+            if (do_vfs_lock(lsp->ls_state->inode, &data->fl) < 0) {
                 rpc_restart_call_prepare(task);
                 break;
             }
···
     if (status != 0)
         goto out;
     request->fl_flags |= FL_ACCESS;
-    status = do_vfs_lock(request->fl_file, request);
+    status = do_vfs_lock(state->inode, request);
     if (status < 0)
         goto out;
     down_read(&nfsi->rwsem);
···
         /* Yes: cache locks! */
         /* ...but avoid races with delegation recall... */
         request->fl_flags = fl_flags & ~FL_SLEEP;
-        status = do_vfs_lock(request->fl_file, request);
+        status = do_vfs_lock(state->inode, request);
         up_read(&nfsi->rwsem);
         goto out;
     }
fs/notify/mark.c
···
         BUG();
 
     list_del_init(&mark->g_list);
-
     spin_unlock(&mark->lock);
 
     if (inode && (mark->flags & FSNOTIFY_MARK_FLAG_OBJECT_PINNED))
         iput(inode);
-    /* release lock temporarily */
-    mutex_unlock(&group->mark_mutex);
 
     spin_lock(&destroy_lock);
     list_add(&mark->g_list, &destroy_list);
     spin_unlock(&destroy_lock);
     wake_up(&destroy_waitq);
-    /*
-     * We don't necessarily have a ref on mark from caller so the above destroy
-     * may have actually freed it, unless this group provides a 'freeing_mark'
-     * function which must be holding a reference.
-     */
-
-    /*
-     * Some groups like to know that marks are being freed. This is a
-     * callback to the group function to let it know that this mark
-     * is being freed.
-     */
-    if (group->ops->freeing_mark)
-        group->ops->freeing_mark(mark, group);
 
     /*
      * __fsnotify_update_child_dentry_flags(inode);
···
      */
 
     atomic_dec(&group->num_marks);
-
-    mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
 }
 
 void fsnotify_destroy_mark(struct fsnotify_mark *mark,
···
 
 /*
  * Destroy all marks in the given list. The marks must be already detached from
- * the original inode / vfsmount.
+ * the original inode / vfsmount. Note that we can race with
+ * fsnotify_clear_marks_by_group_flags(). However we hold a reference to each
+ * mark so they won't get freed from under us and nobody else touches our
+ * free_list list_head.
  */
 void fsnotify_destroy_marks(struct list_head *to_free)
 {
···
 }
 
 /*
- * clear any marks in a group in which mark->flags & flags is true
+ * Clear any marks in a group in which mark->flags & flags is true.
  */
 void fsnotify_clear_marks_by_group_flags(struct fsnotify_group *group,
                                          unsigned int flags)
···
 {
     struct fsnotify_mark *mark, *next;
     struct list_head private_destroy_list;
+    struct fsnotify_group *group;
 
     for (;;) {
         spin_lock(&destroy_lock);
···
 
         list_for_each_entry_safe(mark, next, &private_destroy_list, g_list) {
             list_del_init(&mark->g_list);
+            group = mark->group;
+            /*
+             * Some groups like to know that marks are being freed.
+             * This is a callback to the group function to let it
+             * know that this mark is being freed.
+             */
+            if (group && group->ops->freeing_mark)
+                group->ops->freeing_mark(mark, group);
             fsnotify_put_mark(mark);
         }
 
-1
fs/ocfs2/ioctl.c
···
     case OCFS2_IOC_GROUP_EXTEND:
     case OCFS2_IOC_GROUP_ADD:
     case OCFS2_IOC_GROUP_ADD64:
-    case FITRIM:
         break;
     case OCFS2_IOC_REFLINK:
         if (copy_from_user(&args, argp, sizeof(args)))
+3
fs/overlayfs/inode.c
···
     struct path realpath;
     enum ovl_path_type type;
 
+    if (d_is_dir(dentry))
+        return d_backing_inode(dentry);
+
     type = ovl_path_real(dentry, &realpath);
     if (ovl_open_need_copy_up(file_flags, type, realpath.dentry)) {
         err = ovl_want_write(dentry);
+6
fs/proc/Kconfig
···
 config PROC_CHILDREN
 	bool "Include /proc/<pid>/task/<tid>/children file"
 	default n
+	help
+	  Provides a fast way to retrieve first level children pids of a task. See
+	  <file:Documentation/filesystems/proc.txt> for more information.
+
+	  Say Y if you are running any user-space software which takes benefit from
+	  this interface. For example, rkt is such a piece of software.
include/asm-generic/mm-arch-hooks.h
+/*
+ * Architecture specific mm hooks
+ */
+
+#ifndef _ASM_GENERIC_MM_ARCH_HOOKS_H
+#define _ASM_GENERIC_MM_ARCH_HOOKS_H
+
+/*
+ * This file should be included through arch/../include/asm/Kbuild for
+ * the architecture which doesn't need specific mm hooks.
+ *
+ * In that case, the generic hooks defined in include/linux/mm-arch-hooks.h
+ * are used.
+ */
+
+#endif /* _ASM_GENERIC_MM_ARCH_HOOKS_H */
+14-10
include/linux/acpi.h
···
                 acpi_fwnode_handle(adev) : NULL)
 #define ACPI_HANDLE(dev)    acpi_device_handle(ACPI_COMPANION(dev))
 
+/**
+ * ACPI_DEVICE_CLASS - macro used to describe an ACPI device with
+ * the PCI-defined class-code information
+ *
+ * @_cls : the class, subclass, prog-if triple for this device
+ * @_msk : the class mask for this device
+ *
+ * This macro is used to create a struct acpi_device_id that matches a
+ * specific PCI class. The .id and .driver_data fields will be left
+ * initialized with the default value.
+ */
+#define ACPI_DEVICE_CLASS(_cls, _msk)   .cls = (_cls), .cls_msk = (_msk),
+
 static inline bool has_acpi_companion(struct device *dev)
 {
     return is_acpi_node(dev->fwnode);
···
 
 int acpi_resources_are_enforced(void);
 
-int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
-                        unsigned long flags, char *desc);
-
 #ifdef CONFIG_HIBERNATION
 void __init acpi_no_s4_hw_signature(void);
 #endif
···
 #define ACPI_COMPANION(dev)             (NULL)
 #define ACPI_COMPANION_SET(dev, adev)   do { } while (0)
 #define ACPI_HANDLE(dev)                (NULL)
+#define ACPI_DEVICE_CLASS(_cls, _msk)   .cls = (0), .cls_msk = (0),
 
 struct fwnode_handle;
···
                                        const char *name)
 {
     return 0;
-}
-
-static inline int acpi_reserve_region(u64 start, unsigned int length,
-                                      u8 space_id, unsigned long flags,
-                                      char *desc)
-{
-    return -ENXIO;
 }
 
 struct acpi_table_header;
+1-1
include/linux/amba/sp810.h
···
  * ARM PrimeXsys System Controller SP810 header file
  *
  * Copyright (C) 2009 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
+3-8
include/linux/blk-cgroup.h
···
 
 	struct blkcg_policy_data	*pd[BLKCG_MAX_POLS];
 
+	struct list_head		all_blkcgs_node;
 #ifdef CONFIG_CGROUP_WRITEBACK
 	struct list_head		cgwb_list;
 #endif
···
  * Policies that need to keep per-blkcg data which is independent
  * from any request_queue associated to it must specify its size
  * with the cpd_size field of the blkcg_policy structure and
- * embed a blkcg_policy_data in it. blkcg core allocates
- * policy-specific per-blkcg structures lazily the first time
- * they are actually needed, so it handles them together with
- * blkgs. cpd_init() is invoked to let each policy handle
- * per-blkcg data.
+ * embed a blkcg_policy_data in it. cpd_init() is invoked to let
+ * each policy handle per-blkcg data.
  */
 struct blkcg_policy_data {
 	/* the policy id this per-policy data belongs to */
 	int				plid;
-
-	/* used during policy activation */
-	struct list_head		alloc_node;
 };
 
 /* association between a blk cgroup and a request queue */
···
 /**
  * struct can_skb_priv - private additional data inside CAN sk_buffs
  * @ifindex:	ifindex of the first interface the CAN frame appeared on
+ * @skbcnt:	atomic counter to have an unique id together with skb pointer
  * @cf:		align to the following CAN frame at skb->data
  */
 struct can_skb_priv {
 	int ifindex;
+	int skbcnt;
 	struct can_frame cf[0];
 };
 
···
  * @base: identifies the first GPIO number handled by this chip;
  * or, if negative during registration, requests dynamic ID allocation.
  * DEPRECATION: providing anything non-negative and nailing the base
- * base offset of GPIO chips is deprecated. Please pass -1 as base to
+ * offset of GPIO chips is deprecated. Please pass -1 as base to
  * let gpiolib select the chip base in all possible cases. We want to
  * get rid of the static GPIO number space in the long run.
  * @ngpio: the number of GPIOs handled by this controller; the last GPIO
···
 	return &mm->page_table_lock;
 }
 
-static inline bool hugepages_supported(void)
-{
-	/*
-	 * Some platform decide whether they support huge pages at boot
-	 * time. On these, such as powerpc, HPAGE_SHIFT is set to 0 when
-	 * there is no such support
-	 */
-	return HPAGE_SHIFT != 0;
-}
+#ifndef hugepages_supported
+/*
+ * Some platform decide whether they support huge pages at boot
+ * time. Some of them, such as powerpc, set HPAGE_SHIFT to 0
+ * when there is no such support
+ */
+#define hugepages_supported() (HPAGE_SHIFT != 0)
+#endif
 
 #else /* CONFIG_HUGETLB_PAGE */
 struct hstate {};
-78
include/linux/init.h
···
 void __init parse_early_options(char *cmdline);
 #endif /* __ASSEMBLY__ */
 
-/**
- * module_init() - driver initialization entry point
- * @x: function to be run at kernel boot time or module insertion
- *
- * module_init() will either be called during do_initcalls() (if
- * builtin) or at module insertion time (if a module). There can only
- * be one per module.
- */
-#define module_init(x)	__initcall(x);
-
-/**
- * module_exit() - driver exit entry point
- * @x: function to be run when driver is removed
- *
- * module_exit() will wrap the driver clean-up code
- * with cleanup_module() when used with rmmod when
- * the driver is a module. If the driver is statically
- * compiled into the kernel, module_exit() has no effect.
- * There can only be one per module.
- */
-#define module_exit(x)	__exitcall(x);
-
 #else /* MODULE */
-
-/*
- * In most cases loadable modules do not need custom
- * initcall levels. There are still some valid cases where
- * a driver may be needed early if built in, and does not
- * matter when built as a loadable module.  Like bus
- * snooping debug drivers.
- */
-#define early_initcall(fn)		module_init(fn)
-#define core_initcall(fn)		module_init(fn)
-#define core_initcall_sync(fn)		module_init(fn)
-#define postcore_initcall(fn)		module_init(fn)
-#define postcore_initcall_sync(fn)	module_init(fn)
-#define arch_initcall(fn)		module_init(fn)
-#define subsys_initcall(fn)		module_init(fn)
-#define subsys_initcall_sync(fn)	module_init(fn)
-#define fs_initcall(fn)			module_init(fn)
-#define fs_initcall_sync(fn)		module_init(fn)
-#define rootfs_initcall(fn)		module_init(fn)
-#define device_initcall(fn)		module_init(fn)
-#define device_initcall_sync(fn)	module_init(fn)
-#define late_initcall(fn)		module_init(fn)
-#define late_initcall_sync(fn)		module_init(fn)
-
-#define console_initcall(fn)		module_init(fn)
-#define security_initcall(fn)		module_init(fn)
-
-/* Each module must use one module_init(). */
-#define module_init(initfn)					\
-	static inline initcall_t __inittest(void)		\
-	{ return initfn; }					\
-	int init_module(void) __attribute__((alias(#initfn)));
-
-/* This is only required if you want to be unloadable. */
-#define module_exit(exitfn)					\
-	static inline exitcall_t __exittest(void)		\
-	{ return exitfn; }					\
-	void cleanup_module(void) __attribute__((alias(#exitfn)));
 
 #define __setup_param(str, unique_id, fn)	/* nothing */
 #define __setup(str, func)			/* nothing */
···
 
 /* Data marked not to be saved by software suspend */
 #define __nosavedata __section(.data..nosave)
-
-/* This means "can be init if no module support, otherwise module load
-   may call it." */
-#ifdef CONFIG_MODULES
-#define __init_or_module
-#define __initdata_or_module
-#define __initconst_or_module
-#define __INIT_OR_MODULE	.text
-#define __INITDATA_OR_MODULE	.data
-#define __INITRODATA_OR_MODULE	.section ".rodata","a",%progbits
-#else
-#define __init_or_module __init
-#define __initdata_or_module __initdata
-#define __initconst_or_module __initconst
-#define __INIT_OR_MODULE __INIT
-#define __INITDATA_OR_MODULE __INITDATA
-#define __INITRODATA_OR_MODULE __INITRODATA
-#endif /*CONFIG_MODULES*/
 
 #ifdef MODULE
 #define __exit_p(x) x
···
 #include <linux/compiler.h>
 #include <linux/cache.h>
 #include <linux/kmod.h>
+#include <linux/init.h>
 #include <linux/elf.h>
 #include <linux/stringify.h>
 #include <linux/kobject.h>
···
 /* These are either module local, or the kernel's dummy ones. */
 extern int init_module(void);
 extern void cleanup_module(void);
+
+#ifndef MODULE
+/**
+ * module_init() - driver initialization entry point
+ * @x: function to be run at kernel boot time or module insertion
+ *
+ * module_init() will either be called during do_initcalls() (if
+ * builtin) or at module insertion time (if a module). There can only
+ * be one per module.
+ */
+#define module_init(x)	__initcall(x);
+
+/**
+ * module_exit() - driver exit entry point
+ * @x: function to be run when driver is removed
+ *
+ * module_exit() will wrap the driver clean-up code
+ * with cleanup_module() when used with rmmod when
+ * the driver is a module. If the driver is statically
+ * compiled into the kernel, module_exit() has no effect.
+ * There can only be one per module.
+ */
+#define module_exit(x)	__exitcall(x);
+
+#else /* MODULE */
+
+/*
+ * In most cases loadable modules do not need custom
+ * initcall levels. There are still some valid cases where
+ * a driver may be needed early if built in, and does not
+ * matter when built as a loadable module.  Like bus
+ * snooping debug drivers.
+ */
+#define early_initcall(fn)		module_init(fn)
+#define core_initcall(fn)		module_init(fn)
+#define core_initcall_sync(fn)		module_init(fn)
+#define postcore_initcall(fn)		module_init(fn)
+#define postcore_initcall_sync(fn)	module_init(fn)
+#define arch_initcall(fn)		module_init(fn)
+#define subsys_initcall(fn)		module_init(fn)
+#define subsys_initcall_sync(fn)	module_init(fn)
+#define fs_initcall(fn)			module_init(fn)
+#define fs_initcall_sync(fn)		module_init(fn)
+#define rootfs_initcall(fn)		module_init(fn)
+#define device_initcall(fn)		module_init(fn)
+#define device_initcall_sync(fn)	module_init(fn)
+#define late_initcall(fn)		module_init(fn)
+#define late_initcall_sync(fn)		module_init(fn)
+
+#define console_initcall(fn)		module_init(fn)
+#define security_initcall(fn)		module_init(fn)
+
+/* Each module must use one module_init(). */
+#define module_init(initfn)					\
+	static inline initcall_t __inittest(void)		\
+	{ return initfn; }					\
+	int init_module(void) __attribute__((alias(#initfn)));
+
+/* This is only required if you want to be unloadable. */
+#define module_exit(exitfn)					\
+	static inline exitcall_t __exittest(void)		\
+	{ return exitfn; }					\
+	void cleanup_module(void) __attribute__((alias(#exitfn)));
+
+#endif
+
+/* This means "can be init if no module support, otherwise module load
+   may call it." */
+#ifdef CONFIG_MODULES
+#define __init_or_module
+#define __initdata_or_module
+#define __initconst_or_module
+#define __INIT_OR_MODULE	.text
+#define __INITDATA_OR_MODULE	.data
+#define __INITRODATA_OR_MODULE	.section ".rodata","a",%progbits
+#else
+#define __init_or_module __init
+#define __initdata_or_module __initdata
+#define __initconst_or_module __initconst
+#define __INIT_OR_MODULE __INIT
+#define __INITDATA_OR_MODULE __INITDATA
+#define __INITRODATA_OR_MODULE __INITRODATA
+#endif /*CONFIG_MODULES*/
 
 /* Archs provide a method of finding the correct exception table. */
 struct exception_table_entry;
···
  * Arasan Compact Flash host controller platform data header file
  *
  * Copyright (C) 2011 ST Microelectronics
- * Viresh Kumar <viresh.linux@gmail.com>
+ * Viresh Kumar <vireshk@kernel.org>
  *
  * This file is licensed under the terms of the GNU General Public
  * License version 2. This program is licensed "as is" without any
···
 /* hung task detection */
 	unsigned long last_switch_count;
 #endif
-/* CPU-specific state of this task */
-	struct thread_struct thread;
 /* filesystem information */
 	struct fs_struct *fs;
 /* open file information */
···
 	unsigned long task_state_change;
 #endif
 	int pagefault_disabled;
+/* CPU-specific state of this task */
+	struct thread_struct thread;
+/*
+ * WARNING: on x86, 'thread_struct' contains a variable-sized
+ * structure. It *MUST* be at the end of 'task_struct'.
+ *
+ * Do not put anything below here!
+ */
 };
+
+#ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT
+extern int arch_task_struct_size __read_mostly;
+#else
+# define arch_task_struct_size (sizeof(struct task_struct))
+#endif
 
 /* Future-safe accessor for struct task_struct's cpus_allowed. */
 #define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed)
···
 	 * for strings that are too long, we should not have created
 	 * any.
 	 */
-	if (unlikely((len == 0) || len > MAX_ARG_STRLEN - 1)) {
-		WARN_ON(1);
+	if (WARN_ON_ONCE(len < 0 || len > MAX_ARG_STRLEN - 1)) {
 		send_sig(SIGKILL, current, 0);
 		return -1;
 	}
+12-1
kernel/cpu.c
···
 #include <linux/suspend.h>
 #include <linux/lockdep.h>
 #include <linux/tick.h>
+#include <linux/irq.h>
 #include <trace/events/power.h>
 
 #include "smpboot.h"
···
 	smpboot_park_threads(cpu);
 
 	/*
+	 * Prevent irq alloc/free while the dying cpu reorganizes the
+	 * interrupt affinities.
+	 */
+	irq_lock_sparse();
+
+	/*
 	 * So now all preempt/rcu users must observe !cpu_active().
 	 */
-
 	err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
 	if (err) {
 		/* CPU didn't die: tell everyone.  Can't complain. */
 		cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
+		irq_unlock_sparse();
 		goto out_release;
 	}
 	BUG_ON(cpu_online(cpu));
···
 		cpu_relax();
 	smp_mb(); /* Read from cpu_dead_idle before __cpu_die(). */
 	per_cpu(cpu_dead_idle, cpu) = false;
+
+	/* Interrupts are moved away from the dying cpu, reenable alloc/free */
+	irq_unlock_sparse();
 
 	hotplug_cpu__broadcast_tick_pull(cpu);
 	/* This actually kills the CPU. */
···
 
 	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
+
 	if (ret != 0)
 		goto out_notify;
 	BUG_ON(!cpu_online(cpu));
···
 	max_threads = clamp_t(u64, threads, MIN_THREADS, MAX_THREADS);
 }
 
+#ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT
+/* Initialized by the architecture: */
+int arch_task_struct_size __read_mostly;
+#endif
+
 void __init fork_init(void)
 {
 #ifndef CONFIG_ARCH_TASK_STRUCT_ALLOCATOR
···
 #endif
 	/* create a slab on which task_structs can be allocated */
 	task_struct_cachep =
-		kmem_cache_create("task_struct", sizeof(struct task_struct),
+		kmem_cache_create("task_struct", arch_task_struct_size,
 			ARCH_MIN_TASKALIGN, SLAB_PANIC | SLAB_NOTRACK, NULL);
 #endif
 
-4
kernel/irq/internals.h
···
 
 #ifdef CONFIG_SPARSE_IRQ
 static inline void irq_mark_irq(unsigned int irq) { }
-extern void irq_lock_sparse(void);
-extern void irq_unlock_sparse(void);
 #else
 extern void irq_mark_irq(unsigned int irq);
-static inline void irq_lock_sparse(void) { }
-static inline void irq_unlock_sparse(void) { }
 #endif
 
 extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
+13-5
kernel/irq/resend.c
···
 	    !desc->irq_data.chip->irq_retrigger(&desc->irq_data)) {
 #ifdef CONFIG_HARDIRQS_SW_RESEND
 		/*
-		 * If the interrupt has a parent irq and runs
-		 * in the thread context of the parent irq,
-		 * retrigger the parent.
+		 * If the interrupt is running in the thread
+		 * context of the parent irq we need to be
+		 * careful, because we cannot trigger it
+		 * directly.
 		 */
-		if (desc->parent_irq &&
-		    irq_settings_is_nested_thread(desc))
+		if (irq_settings_is_nested_thread(desc)) {
+			/*
+			 * If the parent_irq is valid, we
+			 * retrigger the parent, otherwise we
+			 * do nothing.
+			 */
+			if (!desc->parent_irq)
+				return;
 			irq = desc->parent_irq;
+		}
 		/* Set it pending and activate the softirq: */
 		set_bit(irq, irqs_resend);
 		tasklet_schedule(&resend_tasklet);
+1
kernel/module.c
···
 	mutex_lock(&module_mutex);
 	/* Unlink carefully: kallsyms could be walking list. */
 	list_del_rcu(&mod->list);
+	mod_tree_remove(mod);
 	wake_up_all(&module_wq);
 	/* Wait for RCU-sched synchronizing before releasing mod->list. */
 	synchronize_sched();
+1-1
kernel/sched/fair.c
···
 	cfs_rq->throttled = 1;
 	cfs_rq->throttled_clock = rq_clock(rq);
 	raw_spin_lock(&cfs_b->lock);
-	empty = list_empty(&cfs_rq->throttled_list);
+	empty = list_empty(&cfs_b->throttled_cfs_rq);
 
 	/*
 	 * Add to the _head_ of the list, so that an already-started
+9-15
kernel/time/clockevents.c
···
 	/* The clockevent device is getting replaced. Shut it down. */
 
 	case CLOCK_EVT_STATE_SHUTDOWN:
-		return dev->set_state_shutdown(dev);
+		if (dev->set_state_shutdown)
+			return dev->set_state_shutdown(dev);
+		return 0;
 
 	case CLOCK_EVT_STATE_PERIODIC:
 		/* Core internal bug */
 		if (!(dev->features & CLOCK_EVT_FEAT_PERIODIC))
 			return -ENOSYS;
-		return dev->set_state_periodic(dev);
+		if (dev->set_state_periodic)
+			return dev->set_state_periodic(dev);
+		return 0;
 
 	case CLOCK_EVT_STATE_ONESHOT:
 		/* Core internal bug */
 		if (!(dev->features & CLOCK_EVT_FEAT_ONESHOT))
 			return -ENOSYS;
-		return dev->set_state_oneshot(dev);
+		if (dev->set_state_oneshot)
+			return dev->set_state_oneshot(dev);
+		return 0;
 
 	case CLOCK_EVT_STATE_ONESHOT_STOPPED:
 		/* Core internal bug */
···
 
 	if (dev->features & CLOCK_EVT_FEAT_DUMMY)
 		return 0;
-
-	/* New state-specific callbacks */
-	if (!dev->set_state_shutdown)
-		return -EINVAL;
-
-	if ((dev->features & CLOCK_EVT_FEAT_PERIODIC) &&
-	    !dev->set_state_periodic)
-		return -EINVAL;
-
-	if ((dev->features & CLOCK_EVT_FEAT_ONESHOT) &&
-	    !dev->set_state_oneshot)
-		return -EINVAL;
 
 	return 0;
 }
+108-56
kernel/time/tick-broadcast.c
···
 {
 	struct clock_event_device *bc = tick_broadcast_device.evtdev;
 	unsigned long flags;
-	int ret;
+	int ret = 0;
 
 	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
 
···
 		 * If we kept the cpu in the broadcast mask,
 		 * tell the caller to leave the per cpu device
 		 * in shutdown state. The periodic interrupt
-		 * is delivered by the broadcast device.
+		 * is delivered by the broadcast device, if
+		 * the broadcast device exists and is not
+		 * hrtimer based.
 		 */
-		ret = cpumask_test_cpu(cpu, tick_broadcast_mask);
+		if (bc && !(bc->features & CLOCK_EVT_FEAT_HRTIMER))
+			ret = cpumask_test_cpu(cpu, tick_broadcast_mask);
 		break;
 	default:
-		/* Nothing to do */
-		ret = 0;
 		break;
 	}
 }
···
 	 * Check, if the current cpu is in the mask
 	 */
 	if (cpumask_test_cpu(cpu, mask)) {
+		struct clock_event_device *bc = tick_broadcast_device.evtdev;
+
 		cpumask_clear_cpu(cpu, mask);
-		local = true;
+		/*
+		 * We only run the local handler, if the broadcast
+		 * device is not hrtimer based.  Otherwise we run into
+		 * a hrtimer recursion.
+		 *
+		 * local timer_interrupt()
+		 *   local_handler()
+		 *     expire_hrtimers()
+		 *       bc_handler()
+		 *         local_handler()
+		 *           expire_hrtimers()
+		 */
+		local = !(bc->features & CLOCK_EVT_FEAT_HRTIMER);
 	}
 
 	if (!cpumask_empty(mask)) {
···
 	bool bc_local;
 
 	raw_spin_lock(&tick_broadcast_lock);
+
+	/* Handle spurious interrupts gracefully */
+	if (clockevent_state_shutdown(tick_broadcast_device.evtdev)) {
+		raw_spin_unlock(&tick_broadcast_lock);
+		return;
+	}
+
 	bc_local = tick_do_periodic_broadcast();
 
 	if (clockevent_state_oneshot(dev)) {
···
 	case TICK_BROADCAST_ON:
 		cpumask_set_cpu(cpu, tick_broadcast_on);
 		if (!cpumask_test_and_set_cpu(cpu, tick_broadcast_mask)) {
-			if (tick_broadcast_device.mode ==
-			    TICKDEV_MODE_PERIODIC)
+			/*
+			 * Only shutdown the cpu local device, if:
+			 *
+			 * - the broadcast device exists
+			 * - the broadcast device is not a hrtimer based one
+			 * - the broadcast device is in periodic mode to
+			 *   avoid a hickup during switch to oneshot mode
+			 */
+			if (bc && !(bc->features & CLOCK_EVT_FEAT_HRTIMER) &&
+			    tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC)
 				clockevents_shutdown(dev);
 		}
 		break;
···
 		break;
 	}
 
-	if (cpumask_empty(tick_broadcast_mask)) {
-		if (!bc_stopped)
-			clockevents_shutdown(bc);
-	} else if (bc_stopped) {
-		if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC)
-			tick_broadcast_start_periodic(bc);
-		else
-			tick_broadcast_setup_oneshot(bc);
+	if (bc) {
+		if (cpumask_empty(tick_broadcast_mask)) {
+			if (!bc_stopped)
+				clockevents_shutdown(bc);
+		} else if (bc_stopped) {
+			if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC)
+				tick_broadcast_start_periodic(bc);
+			else
+				tick_broadcast_setup_oneshot(bc);
+		}
 	}
 	raw_spin_unlock(&tick_broadcast_lock);
 }
···
 	clockevents_switch_state(dev, CLOCK_EVT_STATE_SHUTDOWN);
 }
 
-/**
- * tick_broadcast_oneshot_control - Enter/exit broadcast oneshot mode
- * @state: The target state (enter/exit)
- *
- * The system enters/leaves a state, where affected devices might stop
- * Returns 0 on success, -EBUSY if the cpu is used to broadcast wakeups.
- *
- * Called with interrupts disabled, so clockevents_lock is not
- * required here because the local clock event device cannot go away
- * under us.
- */
-int tick_broadcast_oneshot_control(enum tick_broadcast_state state)
+int __tick_broadcast_oneshot_control(enum tick_broadcast_state state)
 {
 	struct clock_event_device *bc, *dev;
-	struct tick_device *td;
 	int cpu, ret = 0;
 	ktime_t now;
 
 	/*
-	 * Periodic mode does not care about the enter/exit of power
-	 * states
+	 * If there is no broadcast device, tell the caller not to go
+	 * into deep idle.
 	 */
-	if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC)
-		return 0;
+	if (!tick_broadcast_device.evtdev)
+		return -EBUSY;
 
-	/*
-	 * We are called with preemtion disabled from the depth of the
-	 * idle code, so we can't be moved away.
-	 */
-	td = this_cpu_ptr(&tick_cpu_device);
-	dev = td->evtdev;
-
-	if (!(dev->features & CLOCK_EVT_FEAT_C3STOP))
-		return 0;
+	dev = this_cpu_ptr(&tick_cpu_device)->evtdev;
 
 	raw_spin_lock(&tick_broadcast_lock);
 	bc = tick_broadcast_device.evtdev;
 	cpu = smp_processor_id();
 
 	if (state == TICK_BROADCAST_ENTER) {
+		/*
+		 * If the current CPU owns the hrtimer broadcast
+		 * mechanism, it cannot go deep idle and we do not add
+		 * the CPU to the broadcast mask.  We don't have to go
+		 * through the EXIT path as the local timer is not
+		 * shutdown.
+		 */
+		ret = broadcast_needs_cpu(bc, cpu);
+		if (ret)
+			goto out;
+
+		/*
+		 * If the broadcast device is in periodic mode, we
+		 * return.
+		 */
+		if (tick_broadcast_device.mode == TICKDEV_MODE_PERIODIC) {
+			/* If it is a hrtimer based broadcast, return busy */
+			if (bc->features & CLOCK_EVT_FEAT_HRTIMER)
+				ret = -EBUSY;
+			goto out;
+		}
+
 		if (!cpumask_test_and_set_cpu(cpu, tick_broadcast_oneshot_mask)) {
 			WARN_ON_ONCE(cpumask_test_cpu(cpu, tick_broadcast_pending_mask));
+
+			/* Conditionally shut down the local timer. */
 			broadcast_shutdown_local(bc, dev);
+
 			/*
 			 * We only reprogram the broadcast timer if we
 			 * did not mark ourself in the force mask and
 			 * if the cpu local event is earlier than the
 			 * broadcast event. If the current CPU is in
 			 * the force mask, then we are going to be
-			 * woken by the IPI right away.
+			 * woken by the IPI right away; we return
+			 * busy, so the CPU does not try to go deep
+			 * idle.
 			 */
-			if (!cpumask_test_cpu(cpu, tick_broadcast_force_mask) &&
-			    dev->next_event.tv64 < bc->next_event.tv64)
+			if (cpumask_test_cpu(cpu, tick_broadcast_force_mask)) {
+				ret = -EBUSY;
+			} else if (dev->next_event.tv64 < bc->next_event.tv64) {
 				tick_broadcast_set_event(bc, cpu, dev->next_event);
+				/*
+				 * In case of hrtimer broadcasts the
+				 * programming might have moved the
+				 * timer to this cpu.  If yes, remove
+				 * us from the broadcast mask and
+				 * return busy.
+				 */
+				ret = broadcast_needs_cpu(bc, cpu);
+				if (ret) {
+					cpumask_clear_cpu(cpu,
+						tick_broadcast_oneshot_mask);
+				}
+			}
 		}
-		/*
-		 * If the current CPU owns the hrtimer broadcast
-		 * mechanism, it cannot go deep idle and we remove the
-		 * CPU from the broadcast mask. We don't have to go
-		 * through the EXIT path as the local timer is not
-		 * shutdown.
-		 */
-		ret = broadcast_needs_cpu(bc, cpu);
-		if (ret)
-			cpumask_clear_cpu(cpu, tick_broadcast_oneshot_mask);
 	} else {
 		if (cpumask_test_and_clear_cpu(cpu, tick_broadcast_oneshot_mask)) {
 			clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT);
···
 	raw_spin_unlock(&tick_broadcast_lock);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(tick_broadcast_oneshot_control);
 
 /*
  * Reset the one shot broadcast for a cpu
···
 	return bc ? bc->features & CLOCK_EVT_FEAT_ONESHOT : false;
 }
 
+#else
+int __tick_broadcast_oneshot_control(enum tick_broadcast_state state)
+{
+	struct clock_event_device *bc = tick_broadcast_device.evtdev;
+
+	if (!bc || (bc->features & CLOCK_EVT_FEAT_HRTIMER))
+		return -EBUSY;
+
+	return 0;
+}
 #endif
 
 void __init tick_broadcast_init(void)
+22
kernel/time/tick-common.c
···
 	tick_install_broadcast_device(newdev);
 }
 
+/**
+ * tick_broadcast_oneshot_control - Enter/exit broadcast oneshot mode
+ * @state: The target state (enter/exit)
+ *
+ * The system enters/leaves a state, where affected devices might stop
+ * Returns 0 on success, -EBUSY if the cpu is used to broadcast wakeups.
+ *
+ * Called with interrupts disabled, so clockevents_lock is not
+ * required here because the local clock event device cannot go away
+ * under us.
+ */
+int tick_broadcast_oneshot_control(enum tick_broadcast_state state)
+{
+	struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
+
+	if (!(td->evtdev->features & CLOCK_EVT_FEAT_C3STOP))
+		return 0;
+
+	return __tick_broadcast_oneshot_control(state);
+}
+EXPORT_SYMBOL_GPL(tick_broadcast_oneshot_control);
+
 #ifdef CONFIG_HOTPLUG_CPU
 /*
  * Transfer the do_timer job away from a dying cpu.
···
 
 	TRACE_CONTROL_BIT,
 
+	TRACE_BRANCH_BIT,
 /*
  * Abuse of the trace_recursion.
  * As we need a way to maintain state if we are tracing the function
+10-7
kernel/trace/trace_branch.c
···
 	struct trace_branch *entry;
 	struct ring_buffer *buffer;
 	unsigned long flags;
-	int cpu, pc;
+	int pc;
 	const char *p;
+
+	if (current->trace_recursion & TRACE_BRANCH_BIT)
+		return;
 
 	/*
 	 * I would love to save just the ftrace_likely_data pointer, but
···
 	if (unlikely(!tr))
 		return;
 
-	local_irq_save(flags);
-	cpu = raw_smp_processor_id();
-	data = per_cpu_ptr(tr->trace_buffer.data, cpu);
-	if (atomic_inc_return(&data->disabled) != 1)
+	raw_local_irq_save(flags);
+	current->trace_recursion |= TRACE_BRANCH_BIT;
+	data = this_cpu_ptr(tr->trace_buffer.data);
+	if (atomic_read(&data->disabled))
 		goto out;
 
 	pc = preempt_count();
···
 	__buffer_unlock_commit(buffer, event);
 
  out:
-	atomic_dec(&data->disabled);
-	local_irq_restore(flags);
+	current->trace_recursion &= ~TRACE_BRANCH_BIT;
+	raw_local_irq_restore(flags);
 }
 
 static inline
-4
lib/Kconfig.kasan
···
 	  For better error detection enable CONFIG_STACKTRACE,
 	  and add slub_debug=U to boot cmdline.
 
-config KASAN_SHADOW_OFFSET
-	hex
-	default 0xdffffc0000000000 if X86_64
-
 choice
 	prompt "Instrumentation type"
 	depends on KASAN
+4-1
lib/decompress.c
···
 {
 	const struct compress_format *cf;
 
-	if (len < 2)
+	if (len < 2) {
+		if (name)
+			*name = NULL;
 		return NULL;	/* Need at least this much... */
+	}
 
 	pr_debug("Compressed data magic: %#.2x %#.2x\n", inbuf[0], inbuf[1]);
 
···
 	}
 }
 
+enum {
+	NFNL_BATCH_FAILURE	= (1 << 0),
+	NFNL_BATCH_DONE		= (1 << 1),
+	NFNL_BATCH_REPLAY	= (1 << 2),
+};
+
 static void nfnetlink_rcv_batch(struct sk_buff *skb, struct nlmsghdr *nlh,
 				u_int16_t subsys_id)
 {
···
 	struct net *net = sock_net(skb->sk);
 	const struct nfnetlink_subsystem *ss;
 	const struct nfnl_callback *nc;
-	bool success = true, done = false;
 	static LIST_HEAD(err_list);
+	u32 status;
 	int err;
 
 	if (subsys_id >= NFNL_SUBSYS_COUNT)
 		return netlink_ack(skb, nlh, -EINVAL);
 replay:
+	status = 0;
+
 	skb = netlink_skb_clone(oskb, GFP_KERNEL);
 	if (!skb)
 		return netlink_ack(oskb, nlh, -ENOMEM);
···
 		if (type == NFNL_MSG_BATCH_BEGIN) {
 			/* Malformed: Batch begin twice */
 			nfnl_err_reset(&err_list);
-			success = false;
+			status |= NFNL_BATCH_FAILURE;
 			goto done;
 		} else if (type == NFNL_MSG_BATCH_END) {
-			done = true;
+			status |= NFNL_BATCH_DONE;
 			goto done;
 		} else if (type < NLMSG_MIN_TYPE) {
 			err = -EINVAL;
···
 			 * original skb.
 			 */
 			if (err == -EAGAIN) {
-				nfnl_err_reset(&err_list);
-				ss->abort(oskb);
-				nfnl_unlock(subsys_id);
-				kfree_skb(skb);
-				goto replay;
+				status |= NFNL_BATCH_REPLAY;
+				goto next;
 			}
 		}
 ack:
···
 			 */
 			nfnl_err_reset(&err_list);
 			netlink_ack(skb, nlmsg_hdr(oskb), -ENOMEM);
-			success = false;
+			status |= NFNL_BATCH_FAILURE;
 			goto done;
 		}
 		/* We don't stop processing the batch on errors, thus,
···
 		 * triggers.
 		 */
 		if (err)
-			success = false;
+			status |= NFNL_BATCH_FAILURE;
 	}
-
+next:
 	msglen = NLMSG_ALIGN(nlh->nlmsg_len);
 	if (msglen > skb->len)
 		msglen = skb->len;
 	skb_pull(skb, msglen);
 	}
 done:
-	if (success && done)
-		ss->commit(oskb);
-	else
+	if (status & NFNL_BATCH_REPLAY) {
 		ss->abort(oskb);
+		nfnl_err_reset(&err_list);
+		nfnl_unlock(subsys_id);
+		kfree_skb(skb);
+		goto replay;
+	} else if (status == NFNL_BATCH_DONE) {
+		ss->commit(oskb);
+	} else {
+		ss->abort(oskb);
+	}
 
 	nfnl_err_deliver(&err_list, oskb);
 	nfnl_unlock(subsys_id);
+1-1
net/netlink/af_netlink.c
···
 out:
 	spin_unlock(&netlink_tap_lock);
 
-	if (found && nt->module)
+	if (found)
 		module_put(nt->module);
 
 	return found ? 0 : -ENODEV;
+3-1
net/rds/ib_rdma.c
···
 	}
 
 	ibmr = rds_ib_alloc_fmr(rds_ibdev);
-	if (IS_ERR(ibmr))
+	if (IS_ERR(ibmr)) {
+		rds_ib_dev_put(rds_ibdev);
 		return ibmr;
+	}
 
 	ret = rds_ib_map_fmr(rds_ibdev, ibmr, sg, nents);
 	if (ret == 0)
+1-1
net/rds/transport.c
···
 
 void rds_trans_put(struct rds_transport *trans)
 {
-	if (trans && trans->t_owner)
+	if (trans)
 		module_put(trans->t_owner);
 }
 
···
 	res = tipc_sk_create(sock_net(sock->sk), new_sock, 0, 1);
 	if (res)
 		goto exit;
+	security_sk_clone(sock->sk, new_sock->sk);
 
 	new_sk = new_sock->sk;
 	new_tsock = tipc_sk(new_sk);
+1-1
scripts/checkpatch.pl
···
 # if LONG_LINE is ignored, the other 2 types are also ignored
 #
 
-		if ($length > $max_line_length) {
+		if ($line =~ /^\+/ && $length > $max_line_length) {
 			my $msg_type = "LONG_LINE";
 
 			# Check the allowed long line types first
···
 	int rc = 0;
 
 	if (default_noexec &&
-	    (prot & PROT_EXEC) && (!file || (!shared && (prot & PROT_WRITE)))) {
+	    (prot & PROT_EXEC) && (!file || IS_PRIVATE(file_inode(file)) ||
+				   (!shared && (prot & PROT_WRITE)))) {
 		/*
 		 * We are making executable an anonymous mapping or a
 		 * private file mapping that will also be writable.
+6
security/selinux/ss/ebitmap.c
···
 		if (offset == (u32)-1)
 			return 0;

+		/* don't waste ebitmap space if the netlabel bitmap is empty */
+		if (bitmap == 0) {
+			offset += EBITMAP_UNIT_SIZE;
+			continue;
+		}
+
 		if (e_iter == NULL ||
 		    offset >= e_iter->startbit + EBITMAP_SIZE) {
 			e_prev = e_iter;
+1-1
sound/pci/hda/hda_generic.c
···
 	int err = 0;

 	mutex_lock(&spec->pcm_mutex);
-	if (!spec->indep_hp_enabled)
+	if (spec->indep_hp && !spec->indep_hp_enabled)
 		err = -EBUSY;
 	else
 		spec->active_streams |= 1 << STREAM_INDEP_HP;
···
 	}
 }

+/* Hook to update amp GPIO4 for automute */
+static void alc280_hp_gpio4_automute_hook(struct hda_codec *codec,
+					  struct hda_jack_callback *jack)
+{
+	struct alc_spec *spec = codec->spec;
+
+	snd_hda_gen_hp_automute(codec, jack);
+	/* mute_led_polarity is set to 0, so we pass inverted value here */
+	alc_update_gpio_led(codec, 0x10, !spec->gen.hp_jack_present);
+}
+
+/* Manage GPIOs for HP EliteBook Folio 9480m.
+ *
+ * GPIO4 is the headphone amplifier power control
+ * GPIO3 is the audio output mute indicator LED
+ */
+
+static void alc280_fixup_hp_9480m(struct hda_codec *codec,
+				  const struct hda_fixup *fix,
+				  int action)
+{
+	struct alc_spec *spec = codec->spec;
+	static const struct hda_verb gpio_init[] = {
+		{ 0x01, AC_VERB_SET_GPIO_MASK, 0x18 },
+		{ 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x18 },
+		{}
+	};
+
+	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+		/* Set the hooks to turn the headphone amp on/off
+		 * as needed
+		 */
+		spec->gen.vmaster_mute.hook = alc_fixup_gpio_mute_hook;
+		spec->gen.hp_automute_hook = alc280_hp_gpio4_automute_hook;
+
+		/* The GPIOs are currently off */
+		spec->gpio_led = 0;
+
+		/* GPIO3 is connected to the output mute LED,
+		 * high is on, low is off
+		 */
+		spec->mute_led_polarity = 0;
+		spec->gpio_mute_led_mask = 0x08;
+
+		/* Initialize GPIO configuration */
+		snd_hda_add_verbs(codec, gpio_init);
+	}
+}
+
 /* for hda_fixup_thinkpad_acpi() */
 #include "thinkpad_helper.c"
···
 	ALC286_FIXUP_HP_GPIO_LED,
 	ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY,
 	ALC280_FIXUP_HP_DOCK_PINS,
+	ALC280_FIXUP_HP_9480M,
 	ALC288_FIXUP_DELL_HEADSET_MODE,
 	ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
 	ALC288_FIXUP_DELL_XPS_13_GPIO6,
···
 		.chained = true,
 		.chain_id = ALC280_FIXUP_HP_GPIO4
 	},
+	[ALC280_FIXUP_HP_9480M] = {
+		.type = HDA_FIXUP_FUNC,
+		.v.func = alc280_fixup_hp_9480m,
+	},
 	[ALC288_FIXUP_DELL_HEADSET_MODE] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = alc_fixup_headset_mode_dell_alc288,
···
 	SND_PCI_QUIRK(0x103c, 0x22b7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
 	SND_PCI_QUIRK(0x103c, 0x22bf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
 	SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+	SND_PCI_QUIRK(0x103c, 0x22db, "HP", ALC280_FIXUP_HP_9480M),
 	SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
 	SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
 	/* ALC290 */
+2-7
sound/usb/line6/pcm.c
···
 	int ret = 0;

 	spin_lock_irqsave(&pstr->lock, flags);
-	if (!test_and_set_bit(type, &pstr->running)) {
-		if (pstr->active_urbs || pstr->unlink_urbs) {
-			ret = -EBUSY;
-			goto error;
-		}
-
+	if (!test_and_set_bit(type, &pstr->running) &&
+	    !(pstr->active_urbs || pstr->unlink_urbs)) {
 		pstr->count = 0;
 		/* Submit all currently available URBs */
 		if (direction == SNDRV_PCM_STREAM_PLAYBACK)
···
 		else
 			ret = line6_submit_audio_in_all_urbs(line6pcm);
 	}
- error:
 	if (ret < 0)
 		clear_bit(type, &pstr->running);
 	spin_unlock_irqrestore(&pstr->lock, flags);
···

 #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

+#include <linux/types.h>
+
+static __always_inline void __read_once_size(const volatile void *p, void *res, int size)
+{
+	switch (size) {
+	case 1: *(__u8 *)res = *(volatile __u8 *)p; break;
+	case 2: *(__u16 *)res = *(volatile __u16 *)p; break;
+	case 4: *(__u32 *)res = *(volatile __u32 *)p; break;
+	case 8: *(__u64 *)res = *(volatile __u64 *)p; break;
+	default:
+		barrier();
+		__builtin_memcpy((void *)res, (const void *)p, size);
+		barrier();
+	}
+}
+
+static __always_inline void __write_once_size(volatile void *p, void *res, int size)
+{
+	switch (size) {
+	case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
+	case 2: *(volatile __u16 *)p = *(__u16 *)res; break;
+	case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
+	case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
+	default:
+		barrier();
+		__builtin_memcpy((void *)p, (const void *)res, size);
+		barrier();
+	}
+}
+
+/*
+ * Prevent the compiler from merging or refetching reads or writes. The
+ * compiler is also forbidden from reordering successive instances of
+ * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the
+ * compiler is aware of some particular ordering.  One way to make the
+ * compiler aware of ordering is to put the two invocations of READ_ONCE,
+ * WRITE_ONCE or ACCESS_ONCE() in different C statements.
+ *
+ * In contrast to ACCESS_ONCE these two macros will also work on aggregate
+ * data types like structs or unions.  If the size of the accessed data
+ * type exceeds the word size of the machine (e.g., 32 bits or 64 bits)
+ * READ_ONCE() and WRITE_ONCE() will fall back to memcpy and print a
+ * compile-time warning.
+ *
+ * Their two major use cases are: (1) Mediating communication between
+ * process-level code and irq/NMI handlers, all running on the same CPU,
+ * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
+ * mutilate accesses that either do not require ordering or that interact
+ * with an explicit memory barrier or atomic instruction that provides the
+ * required ordering.
+ */
+
+#define READ_ONCE(x) \
+	({ union { typeof(x) __val; char __c[1]; } __u; __read_once_size(&(x), __u.__c, sizeof(x)); __u.__val; })
+
+#define WRITE_ONCE(x, val) \
+	({ union { typeof(x) __val; char __c[1]; } __u = { .__val = (val) }; __write_once_size(&(x), __u.__c, sizeof(x)); __u.__val; })
+
 #endif /* _TOOLS_LINUX_COMPILER_H */
···
+/*
+  Red Black Trees
+  (C) 1999  Andrea Arcangeli <andrea@suse.de>
+
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 2 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; if not, write to the Free Software
+  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+
+  linux/include/linux/rbtree.h
+
+  To use rbtrees you'll have to implement your own insert and search cores.
+  This will avoid us to use callbacks and to drop drammatically performances.
+  I know it's not the cleaner way, but in C (not in C++) to get
+  performances and genericity...
+
+  See Documentation/rbtree.txt for documentation and samples.
+*/
+
+#ifndef __TOOLS_LINUX_PERF_RBTREE_H
+#define __TOOLS_LINUX_PERF_RBTREE_H
+
+#include <linux/kernel.h>
+#include <linux/stddef.h>
+
+struct rb_node {
+	unsigned long  __rb_parent_color;
+	struct rb_node *rb_right;
+	struct rb_node *rb_left;
+} __attribute__((aligned(sizeof(long))));
+    /* The alignment might seem pointless, but allegedly CRIS needs it */
+
+struct rb_root {
+	struct rb_node *rb_node;
+};
+
+
+#define rb_parent(r)   ((struct rb_node *)((r)->__rb_parent_color & ~3))
+
+#define RB_ROOT	(struct rb_root) { NULL, }
+#define rb_entry(ptr, type, member) container_of(ptr, type, member)
+
+#define RB_EMPTY_ROOT(root)  ((root)->rb_node == NULL)
+
+/* 'empty' nodes are nodes that are known not to be inserted in an rbtree */
+#define RB_EMPTY_NODE(node)  \
+	((node)->__rb_parent_color == (unsigned long)(node))
+#define RB_CLEAR_NODE(node)  \
+	((node)->__rb_parent_color = (unsigned long)(node))
+
+
+extern void rb_insert_color(struct rb_node *, struct rb_root *);
+extern void rb_erase(struct rb_node *, struct rb_root *);
+
+
+/* Find logical next and previous nodes in a tree */
+extern struct rb_node *rb_next(const struct rb_node *);
+extern struct rb_node *rb_prev(const struct rb_node *);
+extern struct rb_node *rb_first(const struct rb_root *);
+extern struct rb_node *rb_last(const struct rb_root *);
+
+/* Postorder iteration - always visit the parent after its children */
+extern struct rb_node *rb_first_postorder(const struct rb_root *);
+extern struct rb_node *rb_next_postorder(const struct rb_node *);
+
+/* Fast replacement of a single node without remove/rebalance/add/rebalance */
+extern void rb_replace_node(struct rb_node *victim, struct rb_node *new,
+			    struct rb_root *root);
+
+static inline void rb_link_node(struct rb_node *node, struct rb_node *parent,
+				struct rb_node **rb_link)
+{
+	node->__rb_parent_color = (unsigned long)parent;
+	node->rb_left = node->rb_right = NULL;
+
+	*rb_link = node;
+}
+
+#define rb_entry_safe(ptr, type, member) \
+	({ typeof(ptr) ____ptr = (ptr); \
+	   ____ptr ? rb_entry(____ptr, type, member) : NULL; \
+	})
+
+
+/*
+ * Handy for checking that we are not deleting an entry that is
+ * already in a list, found in block/{blk-throttle,cfq-iosched}.c,
+ * probably should be moved to lib/rbtree.c...
+ */
+static inline void rb_erase_init(struct rb_node *n, struct rb_root *root)
+{
+	rb_erase(n, root);
+	RB_CLEAR_NODE(n);
+}
+#endif /* __TOOLS_LINUX_PERF_RBTREE_H */
+245
tools/include/linux/rbtree_augmented.h
···
+/*
+  Red Black Trees
+  (C) 1999  Andrea Arcangeli <andrea@suse.de>
+  (C) 2002  David Woodhouse <dwmw2@infradead.org>
+  (C) 2012  Michel Lespinasse <walken@google.com>
+
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 2 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; if not, write to the Free Software
+  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+
+  tools/linux/include/linux/rbtree_augmented.h
+
+  Copied from:
+  linux/include/linux/rbtree_augmented.h
+*/
+
+#ifndef _TOOLS_LINUX_RBTREE_AUGMENTED_H
+#define _TOOLS_LINUX_RBTREE_AUGMENTED_H
+
+#include <linux/compiler.h>
+#include <linux/rbtree.h>
+
+/*
+ * Please note - only struct rb_augment_callbacks and the prototypes for
+ * rb_insert_augmented() and rb_erase_augmented() are intended to be public.
+ * The rest are implementation details you are not expected to depend on.
+ *
+ * See Documentation/rbtree.txt for documentation and samples.
+ */
+
+struct rb_augment_callbacks {
+	void (*propagate)(struct rb_node *node, struct rb_node *stop);
+	void (*copy)(struct rb_node *old, struct rb_node *new);
+	void (*rotate)(struct rb_node *old, struct rb_node *new);
+};
+
+extern void __rb_insert_augmented(struct rb_node *node, struct rb_root *root,
+	void (*augment_rotate)(struct rb_node *old, struct rb_node *new));
+/*
+ * Fixup the rbtree and update the augmented information when rebalancing.
+ *
+ * On insertion, the user must update the augmented information on the path
+ * leading to the inserted node, then call rb_link_node() as usual and
+ * rb_augment_inserted() instead of the usual rb_insert_color() call.
+ * If rb_augment_inserted() rebalances the rbtree, it will callback into
+ * a user provided function to update the augmented information on the
+ * affected subtrees.
+ */
+static inline void
+rb_insert_augmented(struct rb_node *node, struct rb_root *root,
+		    const struct rb_augment_callbacks *augment)
+{
+	__rb_insert_augmented(node, root, augment->rotate);
+}
+
+#define RB_DECLARE_CALLBACKS(rbstatic, rbname, rbstruct, rbfield,	\
+			     rbtype, rbaugmented, rbcompute)		\
+static inline void							\
+rbname ## _propagate(struct rb_node *rb, struct rb_node *stop)		\
+{									\
+	while (rb != stop) {						\
+		rbstruct *node = rb_entry(rb, rbstruct, rbfield);	\
+		rbtype augmented = rbcompute(node);			\
+		if (node->rbaugmented == augmented)			\
+			break;						\
+		node->rbaugmented = augmented;				\
+		rb = rb_parent(&node->rbfield);				\
+	}								\
+}									\
+static inline void							\
+rbname ## _copy(struct rb_node *rb_old, struct rb_node *rb_new)		\
+{									\
+	rbstruct *old = rb_entry(rb_old, rbstruct, rbfield);		\
+	rbstruct *new = rb_entry(rb_new, rbstruct, rbfield);		\
+	new->rbaugmented = old->rbaugmented;				\
+}									\
+static void								\
+rbname ## _rotate(struct rb_node *rb_old, struct rb_node *rb_new)	\
+{									\
+	rbstruct *old = rb_entry(rb_old, rbstruct, rbfield);		\
+	rbstruct *new = rb_entry(rb_new, rbstruct, rbfield);		\
+	new->rbaugmented = old->rbaugmented;				\
+	old->rbaugmented = rbcompute(old);				\
+}									\
+rbstatic const struct rb_augment_callbacks rbname = {			\
+	rbname ## _propagate, rbname ## _copy, rbname ## _rotate	\
+};
+
+
+#define	RB_RED		0
+#define	RB_BLACK	1
+
+#define __rb_parent(pc)    ((struct rb_node *)(pc & ~3))
+
+#define __rb_color(pc)     ((pc) & 1)
+#define __rb_is_black(pc)  __rb_color(pc)
+#define __rb_is_red(pc)    (!__rb_color(pc))
+#define rb_color(rb)       __rb_color((rb)->__rb_parent_color)
+#define rb_is_red(rb)      __rb_is_red((rb)->__rb_parent_color)
+#define rb_is_black(rb)    __rb_is_black((rb)->__rb_parent_color)
+
+static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
+{
+	rb->__rb_parent_color = rb_color(rb) | (unsigned long)p;
+}
+
+static inline void rb_set_parent_color(struct rb_node *rb,
+				       struct rb_node *p, int color)
+{
+	rb->__rb_parent_color = (unsigned long)p | color;
+}
+
+static inline void
+__rb_change_child(struct rb_node *old, struct rb_node *new,
+		  struct rb_node *parent, struct rb_root *root)
+{
+	if (parent) {
+		if (parent->rb_left == old)
+			parent->rb_left = new;
+		else
+			parent->rb_right = new;
+	} else
+		root->rb_node = new;
+}
+
+extern void __rb_erase_color(struct rb_node *parent, struct rb_root *root,
+	void (*augment_rotate)(struct rb_node *old, struct rb_node *new));
+
+static __always_inline struct rb_node *
+__rb_erase_augmented(struct rb_node *node, struct rb_root *root,
+		     const struct rb_augment_callbacks *augment)
+{
+	struct rb_node *child = node->rb_right, *tmp = node->rb_left;
+	struct rb_node *parent, *rebalance;
+	unsigned long pc;
+
+	if (!tmp) {
+		/*
+		 * Case 1: node to erase has no more than 1 child (easy!)
+		 *
+		 * Note that if there is one child it must be red due to 5)
+		 * and node must be black due to 4).  We adjust colors locally
+		 * so as to bypass __rb_erase_color() later on.
+		 */
+		pc = node->__rb_parent_color;
+		parent = __rb_parent(pc);
+		__rb_change_child(node, child, parent, root);
+		if (child) {
+			child->__rb_parent_color = pc;
+			rebalance = NULL;
+		} else
+			rebalance = __rb_is_black(pc) ? parent : NULL;
+		tmp = parent;
+	} else if (!child) {
+		/* Still case 1, but this time the child is node->rb_left */
+		tmp->__rb_parent_color = pc = node->__rb_parent_color;
+		parent = __rb_parent(pc);
+		__rb_change_child(node, tmp, parent, root);
+		rebalance = NULL;
+		tmp = parent;
+	} else {
+		struct rb_node *successor = child, *child2;
+		tmp = child->rb_left;
+		if (!tmp) {
+			/*
+			 * Case 2: node's successor is its right child
+			 *
+			 *    (n)          (s)
+			 *    / \          / \
+			 *  (x) (s)  ->  (x) (c)
+			 *        \
+			 *        (c)
+			 */
+			parent = successor;
+			child2 = successor->rb_right;
+			augment->copy(node, successor);
+		} else {
+			/*
+			 * Case 3: node's successor is leftmost under
+			 * node's right child subtree
+			 *
+			 *    (n)          (s)
+			 *    / \          / \
+			 *  (x) (y)  ->  (x) (y)
+			 *      /            /
+			 *    (p)          (p)
+			 *    /            /
+			 *  (s)          (c)
+			 *    \
+			 *    (c)
+			 */
+			do {
+				parent = successor;
+				successor = tmp;
+				tmp = tmp->rb_left;
+			} while (tmp);
+			parent->rb_left = child2 = successor->rb_right;
+			successor->rb_right = child;
+			rb_set_parent(child, successor);
+			augment->copy(node, successor);
+			augment->propagate(parent, successor);
+		}
+
+		successor->rb_left = tmp = node->rb_left;
+		rb_set_parent(tmp, successor);
+
+		pc = node->__rb_parent_color;
+		tmp = __rb_parent(pc);
+		__rb_change_child(node, successor, tmp, root);
+		if (child2) {
+			successor->__rb_parent_color = pc;
+			rb_set_parent_color(child2, parent, RB_BLACK);
+			rebalance = NULL;
+		} else {
+			unsigned long pc2 = successor->__rb_parent_color;
+			successor->__rb_parent_color = pc;
+			rebalance = __rb_is_black(pc2) ? parent : NULL;
+		}
+		tmp = successor;
+	}
+
+	augment->propagate(tmp, NULL);
+	return rebalance;
+}
+
+static __always_inline void
+rb_erase_augmented(struct rb_node *node, struct rb_root *root,
+		   const struct rb_augment_callbacks *augment)
+{
+	struct rb_node *rebalance = __rb_erase_augmented(node, root, augment);
+	if (rebalance)
+		__rb_erase_color(rebalance, root, augment->rotate);
+}
+
+#endif	/* _TOOLS_LINUX_RBTREE_AUGMENTED_H */
···
+#include <linux/bitops.h>
+#include <asm/types.h>
+
+/**
+ * hweightN - returns the hamming weight of a N-bit word
+ * @x: the word to weigh
+ *
+ * The Hamming Weight of a number is the total number of bits set in it.
+ */
+
+unsigned int __sw_hweight32(unsigned int w)
+{
+#ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER
+	w -= (w >> 1) & 0x55555555;
+	w =  (w & 0x33333333) + ((w >> 2) & 0x33333333);
+	w =  (w + (w >> 4)) & 0x0f0f0f0f;
+	return (w * 0x01010101) >> 24;
+#else
+	unsigned int res = w - ((w >> 1) & 0x55555555);
+	res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
+	res = (res + (res >> 4)) & 0x0F0F0F0F;
+	res = res + (res >> 8);
+	return (res + (res >> 16)) & 0x000000FF;
+#endif
+}
+
+unsigned int __sw_hweight16(unsigned int w)
+{
+	unsigned int res = w - ((w >> 1) & 0x5555);
+	res = (res & 0x3333) + ((res >> 2) & 0x3333);
+	res = (res + (res >> 4)) & 0x0F0F;
+	return (res + (res >> 8)) & 0x00FF;
+}
+
+unsigned int __sw_hweight8(unsigned int w)
+{
+	unsigned int res = w - ((w >> 1) & 0x55);
+	res = (res & 0x33) + ((res >> 2) & 0x33);
+	return (res + (res >> 4)) & 0x0F;
+}
+
+unsigned long __sw_hweight64(__u64 w)
+{
+#if BITS_PER_LONG == 32
+	return __sw_hweight32((unsigned int)(w >> 32)) +
+	       __sw_hweight32((unsigned int)w);
+#elif BITS_PER_LONG == 64
+#ifdef CONFIG_ARCH_HAS_FAST_MULTIPLIER
+	w -= (w >> 1) & 0x5555555555555555ul;
+	w =  (w & 0x3333333333333333ul) + ((w >> 2) & 0x3333333333333333ul);
+	w =  (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0ful;
+	return (w * 0x0101010101010101ul) >> 56;
+#else
+	__u64 res = w - ((w >> 1) & 0x5555555555555555ul);
+	res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
+	res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
+	res = res + (res >> 8);
+	res = res + (res >> 16);
+	return (res + (res >> 32)) & 0x00000000000000FFul;
+#endif
+#endif
+}
+548
tools/lib/rbtree.c
···
+/*
+  Red Black Trees
+  (C) 1999  Andrea Arcangeli <andrea@suse.de>
+  (C) 2002  David Woodhouse <dwmw2@infradead.org>
+  (C) 2012  Michel Lespinasse <walken@google.com>
+
+  This program is free software; you can redistribute it and/or modify
+  it under the terms of the GNU General Public License as published by
+  the Free Software Foundation; either version 2 of the License, or
+  (at your option) any later version.
+
+  This program is distributed in the hope that it will be useful,
+  but WITHOUT ANY WARRANTY; without even the implied warranty of
+  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  GNU General Public License for more details.
+
+  You should have received a copy of the GNU General Public License
+  along with this program; if not, write to the Free Software
+  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+
+  linux/lib/rbtree.c
+*/
+
+#include <linux/rbtree_augmented.h>
+
+/*
+ * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree
+ *
+ *  1) A node is either red or black
+ *  2) The root is black
+ *  3) All leaves (NULL) are black
+ *  4) Both children of every red node are black
+ *  5) Every simple path from root to leaves contains the same number
+ *     of black nodes.
+ *
+ *  4 and 5 give the O(log n) guarantee, since 4 implies you cannot have two
+ *  consecutive red nodes in a path and every red node is therefore followed by
+ *  a black. So if B is the number of black nodes on every simple path (as per
+ *  5), then the longest possible path due to 4 is 2B.
+ *
+ *  We shall indicate color with case, where black nodes are uppercase and red
+ *  nodes will be lowercase.  Unknown color nodes shall be drawn as red within
+ *  parentheses and have some accompanying text comment.
+ */
+
+static inline void rb_set_black(struct rb_node *rb)
+{
+	rb->__rb_parent_color |= RB_BLACK;
+}
+
+static inline struct rb_node *rb_red_parent(struct rb_node *red)
+{
+	return (struct rb_node *)red->__rb_parent_color;
+}
+
+/*
+ * Helper function for rotations:
+ * - old's parent and color get assigned to new
+ * - old gets assigned new as a parent and 'color' as a color.
+ */
+static inline void
+__rb_rotate_set_parents(struct rb_node *old, struct rb_node *new,
+			struct rb_root *root, int color)
+{
+	struct rb_node *parent = rb_parent(old);
+	new->__rb_parent_color = old->__rb_parent_color;
+	rb_set_parent_color(old, new, color);
+	__rb_change_child(old, new, parent, root);
+}
+
+static __always_inline void
+__rb_insert(struct rb_node *node, struct rb_root *root,
+	    void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
+{
+	struct rb_node *parent = rb_red_parent(node), *gparent, *tmp;
+
+	while (true) {
+		/*
+		 * Loop invariant: node is red
+		 *
+		 * If there is a black parent, we are done.
+		 * Otherwise, take some corrective action as we don't
+		 * want a red root or two consecutive red nodes.
+		 */
+		if (!parent) {
+			rb_set_parent_color(node, NULL, RB_BLACK);
+			break;
+		} else if (rb_is_black(parent))
+			break;
+
+		gparent = rb_red_parent(parent);
+
+		tmp = gparent->rb_right;
+		if (parent != tmp) {	/* parent == gparent->rb_left */
+			if (tmp && rb_is_red(tmp)) {
+				/*
+				 * Case 1 - color flips
+				 *
+				 *       G            g
+				 *      / \          / \
+				 *     p   u  -->   P   U
+				 *    /            /
+				 *   n            n
+				 *
+				 * However, since g's parent might be red, and
+				 * 4) does not allow this, we need to recurse
+				 * at g.
+				 */
+				rb_set_parent_color(tmp, gparent, RB_BLACK);
+				rb_set_parent_color(parent, gparent, RB_BLACK);
+				node = gparent;
+				parent = rb_parent(node);
+				rb_set_parent_color(node, parent, RB_RED);
+				continue;
+			}
+
+			tmp = parent->rb_right;
+			if (node == tmp) {
+				/*
+				 * Case 2 - left rotate at parent
+				 *
+				 *      G             G
+				 *     / \           / \
+				 *    p   U  -->    n   U
+				 *     \           /
+				 *      n         p
+				 *
+				 * This still leaves us in violation of 4), the
+				 * continuation into Case 3 will fix that.
+				 */
+				parent->rb_right = tmp = node->rb_left;
+				node->rb_left = parent;
+				if (tmp)
+					rb_set_parent_color(tmp, parent,
+							    RB_BLACK);
+				rb_set_parent_color(parent, node, RB_RED);
+				augment_rotate(parent, node);
+				parent = node;
+				tmp = node->rb_right;
+			}
+
+			/*
+			 * Case 3 - right rotate at gparent
+			 *
+			 *        G           P
+			 *       / \         / \
+			 *      p   U  -->  n   g
+			 *     /                 \
+			 *    n                   U
+			 */
+			gparent->rb_left = tmp;  /* == parent->rb_right */
+			parent->rb_right = gparent;
+			if (tmp)
+				rb_set_parent_color(tmp, gparent, RB_BLACK);
+			__rb_rotate_set_parents(gparent, parent, root, RB_RED);
+			augment_rotate(gparent, parent);
+			break;
+		} else {
+			tmp = gparent->rb_left;
+			if (tmp && rb_is_red(tmp)) {
+				/* Case 1 - color flips */
+				rb_set_parent_color(tmp, gparent, RB_BLACK);
+				rb_set_parent_color(parent, gparent, RB_BLACK);
+				node = gparent;
+				parent = rb_parent(node);
+				rb_set_parent_color(node, parent, RB_RED);
+				continue;
+			}
+
+			tmp = parent->rb_left;
+			if (node == tmp) {
+				/* Case 2 - right rotate at parent */
+				parent->rb_left = tmp = node->rb_right;
+				node->rb_right = parent;
+				if (tmp)
+					rb_set_parent_color(tmp, parent,
+							    RB_BLACK);
+				rb_set_parent_color(parent, node, RB_RED);
+				augment_rotate(parent, node);
+				parent = node;
+				tmp = node->rb_left;
+			}
+
+			/* Case 3 - left rotate at gparent */
+			gparent->rb_right = tmp;  /* == parent->rb_left */
+			parent->rb_left = gparent;
+			if (tmp)
+				rb_set_parent_color(tmp, gparent, RB_BLACK);
+			__rb_rotate_set_parents(gparent, parent, root, RB_RED);
+			augment_rotate(gparent, parent);
+			break;
+		}
+	}
+}
+
+/*
+ * Inline version for rb_erase() use - we want to be able to inline
+ * and eliminate the dummy_rotate callback there
+ */
+static __always_inline void
+____rb_erase_color(struct rb_node *parent, struct rb_root *root,
+	void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
+{
+	struct rb_node *node = NULL, *sibling, *tmp1, *tmp2;
+
+	while (true) {
+		/*
+		 * Loop invariants:
+		 * - node is black (or NULL on first iteration)
+		 * - node is not the root (parent is not NULL)
+		 * - All leaf paths going through parent and node have a
+		 *   black node count that is 1 lower than other leaf paths.
+		 */
+		sibling = parent->rb_right;
+		if (node != sibling) {	/* node == parent->rb_left */
+			if (rb_is_red(sibling)) {
+				/*
+				 * Case 1 - left rotate at parent
+				 *
+				 *     P               S
+				 *    / \             / \
+				 *   N   s    -->    p   Sr
+				 *      / \         / \
+				 *     Sl  Sr      N   Sl
+				 */
+				parent->rb_right = tmp1 = sibling->rb_left;
+				sibling->rb_left = parent;
+				rb_set_parent_color(tmp1, parent, RB_BLACK);
+				__rb_rotate_set_parents(parent, sibling, root,
+							RB_RED);
+				augment_rotate(parent, sibling);
+				sibling = tmp1;
+			}
+			tmp1 = sibling->rb_right;
+			if (!tmp1 || rb_is_black(tmp1)) {
+				tmp2 = sibling->rb_left;
+				if (!tmp2 || rb_is_black(tmp2)) {
+					/*
+					 * Case 2 - sibling color flip
+					 * (p could be either color here)
+					 *
+					 *    (p)           (p)
+					 *    / \           / \
+					 *   N   S    -->  N   s
+					 *      / \           / \
+					 *     Sl  Sr        Sl  Sr
+					 *
+					 * This leaves us violating 5) which
+					 * can be fixed by flipping p to black
+					 * if it was red, or by recursing at p.
+					 * p is red when coming from Case 1.
+					 */
+					rb_set_parent_color(sibling, parent,
+							    RB_RED);
+					if (rb_is_red(parent))
+						rb_set_black(parent);
+					else {
+						node = parent;
+						parent = rb_parent(node);
+						if (parent)
+							continue;
+					}
+					break;
+				}
+				/*
+				 * Case 3 - right rotate at sibling
+				 * (p could be either color here)
+				 *
+				 *   (p)           (p)
+				 *   / \           / \
+				 *  N   S    -->  N   Sl
+				 *     / \             \
+				 *    sl  Sr            s
+				 *                       \
+				 *                        Sr
+				 */
+				sibling->rb_left = tmp1 = tmp2->rb_right;
+				tmp2->rb_right = sibling;
+				parent->rb_right = tmp2;
+				if (tmp1)
+					rb_set_parent_color(tmp1, sibling,
+							    RB_BLACK);
+				augment_rotate(sibling, tmp2);
+				tmp1 = sibling;
+				sibling = tmp2;
+			}
+			/*
+			 * Case 4 - left rotate at parent + color flips
+			 * (p and sl could be either color here.
+			 *  After rotation, p becomes black, s acquires
+			 *  p's color, and sl keeps its color)
+			 *
+			 *      (p)             (s)
+			 *      / \             / \
+			 *     N   S     -->   P   Sr
+			 *        / \         / \
+			 *      (sl) sr      N  (sl)
+			 */
+			parent->rb_right = tmp2 = sibling->rb_left;
+			sibling->rb_left = parent;
+			rb_set_parent_color(tmp1, sibling, RB_BLACK);
+			if (tmp2)
+				rb_set_parent(tmp2, parent);
+			__rb_rotate_set_parents(parent, sibling, root,
+						RB_BLACK);
+			augment_rotate(parent, sibling);
+			break;
+		} else {
+			sibling = parent->rb_left;
+			if (rb_is_red(sibling)) {
+				/* Case 1 - right rotate at parent */
+				parent->rb_left = tmp1 = sibling->rb_right;
+				sibling->rb_right = parent;
+				rb_set_parent_color(tmp1, parent, RB_BLACK);
+				__rb_rotate_set_parents(parent, sibling, root,
+							RB_RED);
+				augment_rotate(parent, sibling);
+				sibling = tmp1;
+			}
+			tmp1 = sibling->rb_left;
+			if (!tmp1 || rb_is_black(tmp1)) {
+				tmp2 = sibling->rb_right;
+				if (!tmp2 || rb_is_black(tmp2)) {
+					/* Case 2 - sibling color flip */
+					rb_set_parent_color(sibling, parent,
+							    RB_RED);
+					if (rb_is_red(parent))
+						rb_set_black(parent);
+					else {
+						node = parent;
+						parent = rb_parent(node);
+						if (parent)
+							continue;
+					}
+					break;
+				}
+				/* Case 3 - right rotate at sibling */
+				sibling->rb_right = tmp1 = tmp2->rb_left;
+				tmp2->rb_left = sibling;
+				parent->rb_left = tmp2;
+				if (tmp1)
+					rb_set_parent_color(tmp1, sibling,
+							    RB_BLACK);
+				augment_rotate(sibling, tmp2);
+				tmp1 = sibling;
+				sibling = tmp2;
+			}
+			/* Case 4 - left rotate at parent + color flips */
+			parent->rb_left = tmp2 = sibling->rb_right;
+			sibling->rb_right = parent;
+			rb_set_parent_color(tmp1, sibling, RB_BLACK);
+			if (tmp2)
+				rb_set_parent(tmp2, parent);
+			__rb_rotate_set_parents(parent, sibling, root,
+						RB_BLACK);
+			augment_rotate(parent, sibling);
+			break;
+		}
+	}
+}
+
+/* Non-inline version for rb_erase_augmented() use */
+void __rb_erase_color(struct rb_node *parent, struct rb_root *root,
+	void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
+{
+	____rb_erase_color(parent, root, augment_rotate);
+}
+
+/*
+ * Non-augmented rbtree manipulation functions.
+ *
+ * We use dummy augmented callbacks here, and have the compiler optimize them
+ * out of the rb_insert_color() and rb_erase() function definitions.
+ */
+
+static inline void dummy_propagate(struct rb_node *node, struct rb_node *stop) {}
+static inline void dummy_copy(struct rb_node *old, struct rb_node *new) {}
+static inline void dummy_rotate(struct rb_node *old, struct rb_node *new) {}
+
+static const struct rb_augment_callbacks dummy_callbacks = {
+	dummy_propagate, dummy_copy, dummy_rotate
+};
+
+void rb_insert_color(struct rb_node *node, struct rb_root *root)
+{
+	__rb_insert(node, root, dummy_rotate);
+}
+
+void rb_erase(struct rb_node *node, struct rb_root *root)
+{
+	struct rb_node *rebalance;
+	rebalance = __rb_erase_augmented(node, root, &dummy_callbacks);
+	if (rebalance)
+		____rb_erase_color(rebalance, root, dummy_rotate);
+}
+
+/*
+ * Augmented rbtree manipulation functions.
+ *
+ * This instantiates the same __always_inline functions as in the non-augmented
+ * case, but this time with user-defined callbacks.
+ */
+
+void __rb_insert_augmented(struct rb_node *node, struct rb_root *root,
+	void (*augment_rotate)(struct rb_node *old, struct rb_node *new))
+{
+	__rb_insert(node, root, augment_rotate);
+}
+
+/*
+ * This function returns the first node (in sort order) of the tree.
+ */
+struct rb_node *rb_first(const struct rb_root *root)
+{
+	struct rb_node	*n;
+
+	n = root->rb_node;
+	if (!n)
+		return NULL;
+	while (n->rb_left)
+		n = n->rb_left;
+	return n;
+}
+
+struct rb_node *rb_last(const struct rb_root *root)
+{
+	struct rb_node	*n;
+
+	n = root->rb_node;
+	if (!n)
+		return NULL;
+	while (n->rb_right)
+		n = n->rb_right;
+	return n;
+}
+
+struct rb_node *rb_next(const struct rb_node *node)
+{
+	struct rb_node *parent;
+
+	if (RB_EMPTY_NODE(node))
+		return NULL;
+
+	/*
+	 * If we have a right-hand child, go down and then left as far
+	 * as we can.
+	 */
+	if (node->rb_right) {
+		node = node->rb_right;
+		while (node->rb_left)
+			node = node->rb_left;
+		return (struct rb_node *)node;
+	}
+
+	/*
+	 * No right-hand children. Everything down and left is smaller than us,
+	 * so any 'next' node must be in the general direction of our parent.
+	 * Go up the tree; any time the ancestor is a right-hand child of its
+	 * parent, keep going up. First time it's a left-hand child of its
+	 * parent, said parent is our 'next' node.
+	 */
+	while ((parent = rb_parent(node)) && node == parent->rb_right)
+		node = parent;
+
+	return parent;
+}
+
+struct rb_node *rb_prev(const struct rb_node *node)
+{
+	struct rb_node *parent;
+
+	if (RB_EMPTY_NODE(node))
+		return NULL;
+
+	/*
+	 * If we have a left-hand child, go down and then right as far
+	 * as we can.
+	 */
+	if (node->rb_left) {
+		node = node->rb_left;
+		while (node->rb_right)
+			node = node->rb_right;
+		return (struct rb_node *)node;
+	}
+
+	/*
+	 * No left-hand children. Go up till we find an ancestor which
+	 * is a right-hand child of its parent.
+	 */
+	while ((parent = rb_parent(node)) && node == parent->rb_left)
+		node = parent;
+
+	return parent;
+}
+
+void rb_replace_node(struct rb_node *victim, struct rb_node *new,
+		     struct rb_root *root)
+{
+	struct rb_node *parent = rb_parent(victim);
+
+	/* Set the surrounding nodes to point to the replacement */
+	__rb_change_child(victim, new, parent, root);
+	if (victim->rb_left)
+		rb_set_parent(victim->rb_left, new);
+	if (victim->rb_right)
+		rb_set_parent(victim->rb_right, new);
+
+	/* Copy the pointers/colour from the victim to the replacement */
+	*new = *victim;
+}
+
+static struct rb_node *rb_left_deepest_node(const struct rb_node *node)
+{
+	for (;;) {
+		if (node->rb_left)
+			node = node->rb_left;
+		else if (node->rb_right)
+			node = node->rb_right;
else520520+ return (struct rb_node *)node;521521+ }522522+}523523+524524+struct rb_node *rb_next_postorder(const struct rb_node *node)525525+{526526+ const struct rb_node *parent;527527+ if (!node)528528+ return NULL;529529+ parent = rb_parent(node);530530+531531+ /* If we're sitting on node, we've already seen our children */532532+ if (parent && node == parent->rb_left && parent->rb_right) {533533+ /* If we are the parent's left node, go to the parent's right534534+ * node then all the way down to the left */535535+ return rb_left_deepest_node(parent->rb_right);536536+ } else537537+ /* Otherwise we are the parent's right node, and the parent538538+ * should be next */539539+ return (struct rb_node *)parent;540540+}541541+542542+struct rb_node *rb_first_postorder(const struct rb_root *root)543543+{544544+ if (!root->rb_node)545545+ return NULL;546546+547547+ return rb_left_deepest_node(root->rb_node);548548+}
···
 	$(Q)$(SHELL_PATH) util/PERF-VERSION-GEN $(OUTPUT)
 	$(Q)touch $(OUTPUT)PERF-VERSION-FILE
 
-CC = $(CROSS_COMPILE)gcc
-LD ?= $(CROSS_COMPILE)ld
-AR = $(CROSS_COMPILE)ar
+# Makefiles suck: This macro sets a default value of $(2) for the
+# variable named by $(1), unless the variable has been set by
+# environment or command line. This is necessary for CC and AR
+# because make sets default values, so the simpler ?= approach
+# won't work as expected.
+define allow-override
+  $(if $(or $(findstring environment,$(origin $(1))),\
+            $(findstring command line,$(origin $(1)))),,\
+    $(eval $(1) = $(2)))
+endef
+
+# Allow setting CC and AR and LD, or setting CROSS_COMPILE as a prefix.
+$(call allow-override,CC,$(CROSS_COMPILE)gcc)
+$(call allow-override,AR,$(CROSS_COMPILE)ar)
+$(call allow-override,LD,$(CROSS_COMPILE)ld)
+
 PKG_CONFIG = $(CROSS_COMPILE)pkg-config
 
 RM = rm -f
+2-2
tools/perf/builtin-stat.c
···
 	return 0;
 }
 
-static void read_counters(bool close)
+static void read_counters(bool close_counters)
 {
 	struct perf_evsel *counter;
···
 		if (process_counter(counter))
 			pr_warning("failed to process counter %s\n", counter->name);
 
-	if (close) {
+	if (close_counters) {
 		perf_evsel__close_fd(counter, perf_evsel__nr_cpus(counter),
 				     thread_map__nr(evsel_list->threads));
 	}
···
 {
 	struct perf_event_mmap_page *pc = userpg;
 
-#if BITS_PER_LONG != 64 && !defined(HAVE_SYNC_COMPARE_AND_SWAP_SUPPORT)
-	pr_err("Cannot use AUX area tracing mmaps\n");
-	return -1;
-#endif
-
 	WARN_ONCE(mm->base, "Uninitialized auxtrace_mmap\n");
 
 	mm->userpg = userpg;
···
 		mm->base = NULL;
 		return 0;
 	}
+
+#if BITS_PER_LONG != 64 && !defined(HAVE_SYNC_COMPARE_AND_SWAP_SUPPORT)
+	pr_err("Cannot use AUX area tracing mmaps\n");
+	return -1;
+#endif
 
 	pc->aux_offset = mp->offset;
 	pc->aux_size = mp->len;
-16
tools/perf/util/include/linux/rbtree.h
···
-#ifndef __TOOLS_LINUX_PERF_RBTREE_H
-#define __TOOLS_LINUX_PERF_RBTREE_H
-#include <stdbool.h>
-#include "../../../../include/linux/rbtree.h"
-
-/*
- * Handy for checking that we are not deleting an entry that is
- * already in a list, found in block/{blk-throttle,cfq-iosched}.c,
- * probably should be moved to lib/rbtree.c...
- */
-static inline void rb_erase_init(struct rb_node *n, struct rb_root *root)
-{
-	rb_erase(n, root);
-	RB_CLEAR_NODE(n);
-}
-#endif /* __TOOLS_LINUX_PERF_RBTREE_H */