Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 50f09a3dd587 ("Merge tag 'char-misc-5.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc") into char-misc-next

We want the char/misc driver fixes in here as well

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3568 -2179
+1
.mailmap
··· 160 160 Jeff Layton <jlayton@kernel.org> <jlayton@redhat.com> 161 161 Jens Axboe <axboe@suse.de> 162 162 Jens Osterkamp <Jens.Osterkamp@de.ibm.com> 163 + Jernej Skrabec <jernej.skrabec@gmail.com> <jernej.skrabec@siol.net> 163 164 Jiri Slaby <jirislaby@kernel.org> <jirislaby@gmail.com> 164 165 Jiri Slaby <jirislaby@kernel.org> <jslaby@novell.com> 165 166 Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.com>
+1 -1
Documentation/ABI/obsolete/sysfs-class-dax
··· 1 1 What: /sys/class/dax/ 2 2 Date: May, 2016 3 3 KernelVersion: v4.7 4 - Contact: linux-nvdimm@lists.01.org 4 + Contact: nvdimm@lists.linux.dev 5 5 Description: Device DAX is the device-centric analogue of Filesystem 6 6 DAX (CONFIG_FS_DAX). It allows memory ranges to be 7 7 allocated and mapped without need of an intervening file
+1 -1
Documentation/ABI/obsolete/sysfs-kernel-fadump_registered
··· 1 - This ABI is renamed and moved to a new location /sys/kernel/fadump/registered.¬ 1 + This ABI is renamed and moved to a new location /sys/kernel/fadump/registered. 2 2 3 3 What: /sys/kernel/fadump_registered 4 4 Date: Feb 2012
+1 -1
Documentation/ABI/obsolete/sysfs-kernel-fadump_release_mem
··· 1 - This ABI is renamed and moved to a new location /sys/kernel/fadump/release_mem.¬ 1 + This ABI is renamed and moved to a new location /sys/kernel/fadump/release_mem. 2 2 3 3 What: /sys/kernel/fadump_release_mem 4 4 Date: Feb 2012
+1 -1
Documentation/ABI/removed/sysfs-bus-nfit
··· 1 1 What: /sys/bus/nd/devices/regionX/nfit/ecc_unit_size 2 2 Date: Aug, 2017 3 3 KernelVersion: v4.14 (Removed v4.18) 4 - Contact: linux-nvdimm@lists.01.org 4 + Contact: nvdimm@lists.linux.dev 5 5 Description: 6 6 (RO) Size of a write request to a DIMM that will not incur a 7 7 read-modify-write cycle at the memory controller.
+20 -20
Documentation/ABI/testing/sysfs-bus-nfit
··· 5 5 What: /sys/bus/nd/devices/nmemX/nfit/serial 6 6 Date: Jun, 2015 7 7 KernelVersion: v4.2 8 - Contact: linux-nvdimm@lists.01.org 8 + Contact: nvdimm@lists.linux.dev 9 9 Description: 10 10 (RO) Serial number of the NVDIMM (non-volatile dual in-line 11 11 memory module), assigned by the module vendor. ··· 14 14 What: /sys/bus/nd/devices/nmemX/nfit/handle 15 15 Date: Apr, 2015 16 16 KernelVersion: v4.2 17 - Contact: linux-nvdimm@lists.01.org 17 + Contact: nvdimm@lists.linux.dev 18 18 Description: 19 19 (RO) The address (given by the _ADR object) of the device on its 20 20 parent bus of the NVDIMM device containing the NVDIMM region. ··· 23 23 What: /sys/bus/nd/devices/nmemX/nfit/device 24 24 Date: Apr, 2015 25 25 KernelVersion: v4.1 26 - Contact: linux-nvdimm@lists.01.org 26 + Contact: nvdimm@lists.linux.dev 27 27 Description: 28 28 (RO) Device id for the NVDIMM, assigned by the module vendor. 29 29 ··· 31 31 What: /sys/bus/nd/devices/nmemX/nfit/rev_id 32 32 Date: Jun, 2015 33 33 KernelVersion: v4.2 34 - Contact: linux-nvdimm@lists.01.org 34 + Contact: nvdimm@lists.linux.dev 35 35 Description: 36 36 (RO) Revision of the NVDIMM, assigned by the module vendor. 37 37 ··· 39 39 What: /sys/bus/nd/devices/nmemX/nfit/phys_id 40 40 Date: Apr, 2015 41 41 KernelVersion: v4.2 42 - Contact: linux-nvdimm@lists.01.org 42 + Contact: nvdimm@lists.linux.dev 43 43 Description: 44 44 (RO) Handle (i.e., instance number) for the SMBIOS (system 45 45 management BIOS) Memory Device structure describing the NVDIMM ··· 49 49 What: /sys/bus/nd/devices/nmemX/nfit/flags 50 50 Date: Jun, 2015 51 51 KernelVersion: v4.2 52 - Contact: linux-nvdimm@lists.01.org 52 + Contact: nvdimm@lists.linux.dev 53 53 Description: 54 54 (RO) The flags in the NFIT memory device sub-structure indicate 55 55 the state of the data on the nvdimm relative to its energy ··· 68 68 What: /sys/bus/nd/devices/nmemX/nfit/formats 69 69 Date: Apr, 2016 70 70 KernelVersion: v4.7 71 - Contact: linux-nvdimm@lists.01.org 71 + Contact: nvdimm@lists.linux.dev 72 72 Description: 73 73 (RO) The interface codes indicate support for persistent memory 74 74 mapped directly into system physical address space and / or a ··· 84 84 What: /sys/bus/nd/devices/nmemX/nfit/vendor 85 85 Date: Apr, 2016 86 86 KernelVersion: v4.7 87 - Contact: linux-nvdimm@lists.01.org 87 + Contact: nvdimm@lists.linux.dev 88 88 Description: 89 89 (RO) Vendor id of the NVDIMM. 90 90 ··· 92 92 What: /sys/bus/nd/devices/nmemX/nfit/dsm_mask 93 93 Date: May, 2016 94 94 KernelVersion: v4.7 95 - Contact: linux-nvdimm@lists.01.org 95 + Contact: nvdimm@lists.linux.dev 96 96 Description: 97 97 (RO) The bitmask indicates the supported device specific control 98 98 functions relative to the NVDIMM command family supported by the ··· 102 102 What: /sys/bus/nd/devices/nmemX/nfit/family 103 103 Date: Apr, 2016 104 104 KernelVersion: v4.7 105 - Contact: linux-nvdimm@lists.01.org 105 + Contact: nvdimm@lists.linux.dev 106 106 Description: 107 107 (RO) Displays the NVDIMM family command sets. Values 108 108 0, 1, 2 and 3 correspond to NVDIMM_FAMILY_INTEL, ··· 118 118 What: /sys/bus/nd/devices/nmemX/nfit/id 119 119 Date: Apr, 2016 120 120 KernelVersion: v4.7 121 - Contact: linux-nvdimm@lists.01.org 121 + Contact: nvdimm@lists.linux.dev 122 122 Description: 123 123 (RO) ACPI specification 6.2 section 5.2.25.9, defines an 124 124 identifier for an NVDIMM, which refelects the id attribute. 
··· 127 127 What: /sys/bus/nd/devices/nmemX/nfit/subsystem_vendor 128 128 Date: Apr, 2016 129 129 KernelVersion: v4.7 130 - Contact: linux-nvdimm@lists.01.org 130 + Contact: nvdimm@lists.linux.dev 131 131 Description: 132 132 (RO) Sub-system vendor id of the NVDIMM non-volatile memory 133 133 subsystem controller. ··· 136 136 What: /sys/bus/nd/devices/nmemX/nfit/subsystem_rev_id 137 137 Date: Apr, 2016 138 138 KernelVersion: v4.7 139 - Contact: linux-nvdimm@lists.01.org 139 + Contact: nvdimm@lists.linux.dev 140 140 Description: 141 141 (RO) Sub-system revision id of the NVDIMM non-volatile memory subsystem 142 142 controller, assigned by the non-volatile memory subsystem ··· 146 146 What: /sys/bus/nd/devices/nmemX/nfit/subsystem_device 147 147 Date: Apr, 2016 148 148 KernelVersion: v4.7 149 - Contact: linux-nvdimm@lists.01.org 149 + Contact: nvdimm@lists.linux.dev 150 150 Description: 151 151 (RO) Sub-system device id for the NVDIMM non-volatile memory 152 152 subsystem controller, assigned by the non-volatile memory ··· 156 156 What: /sys/bus/nd/devices/ndbusX/nfit/revision 157 157 Date: Jun, 2015 158 158 KernelVersion: v4.2 159 - Contact: linux-nvdimm@lists.01.org 159 + Contact: nvdimm@lists.linux.dev 160 160 Description: 161 161 (RO) ACPI NFIT table revision number. 162 162 ··· 164 164 What: /sys/bus/nd/devices/ndbusX/nfit/scrub 165 165 Date: Sep, 2016 166 166 KernelVersion: v4.9 167 - Contact: linux-nvdimm@lists.01.org 167 + Contact: nvdimm@lists.linux.dev 168 168 Description: 169 169 (RW) This shows the number of full Address Range Scrubs (ARS) 170 170 that have been completed since driver load time. Userspace can ··· 177 177 What: /sys/bus/nd/devices/ndbusX/nfit/hw_error_scrub 178 178 Date: Sep, 2016 179 179 KernelVersion: v4.9 180 - Contact: linux-nvdimm@lists.01.org 180 + Contact: nvdimm@lists.linux.dev 181 181 Description: 182 182 (RW) Provides a way to toggle the behavior between just adding 183 183 the address (cache line) where the MCE happened to the poison ··· 196 196 What: /sys/bus/nd/devices/ndbusX/nfit/dsm_mask 197 197 Date: Jun, 2017 198 198 KernelVersion: v4.13 199 - Contact: linux-nvdimm@lists.01.org 199 + Contact: nvdimm@lists.linux.dev 200 200 Description: 201 201 (RO) The bitmask indicates the supported bus specific control 202 202 functions. See the section named 'NVDIMM Root Device _DSMs' in ··· 205 205 What: /sys/bus/nd/devices/ndbusX/nfit/firmware_activate_noidle 206 206 Date: Apr, 2020 207 207 KernelVersion: v5.8 208 - Contact: linux-nvdimm@lists.01.org 208 + Contact: nvdimm@lists.linux.dev 209 209 Description: 210 210 (RW) The Intel platform implementation of firmware activate 211 211 support exposes an option let the platform force idle devices in ··· 225 225 What: /sys/bus/nd/devices/regionX/nfit/range_index 226 226 Date: Jun, 2015 227 227 KernelVersion: v4.2 228 - Contact: linux-nvdimm@lists.01.org 228 + Contact: nvdimm@lists.linux.dev 229 229 Description: 230 230 (RO) A unique number provided by the BIOS to identify an address 231 231 range. Used by NVDIMM Region Mapping Structure to uniquely refer
+2 -2
Documentation/ABI/testing/sysfs-bus-papr-pmem
··· 1 1 What: /sys/bus/nd/devices/nmemX/papr/flags 2 2 Date: Apr, 2020 3 3 KernelVersion: v5.8 4 - Contact: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, linux-nvdimm@lists.01.org, 4 + Contact: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, nvdimm@lists.linux.dev, 5 5 Description: 6 6 (RO) Report flags indicating various states of a 7 7 papr-pmem NVDIMM device. Each flag maps to a one or ··· 36 36 What: /sys/bus/nd/devices/nmemX/papr/perf_stats 37 37 Date: May, 2020 38 38 KernelVersion: v5.9 39 - Contact: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, linux-nvdimm@lists.01.org, 39 + Contact: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, nvdimm@lists.linux.dev, 40 40 Description: 41 41 (RO) Report various performance stats related to papr-scm NVDIMM 42 42 device. Each stat is reported on a new line with each line
+2 -2
Documentation/ABI/testing/sysfs-module
··· 37 37 38 38 What: /sys/module/*/{coresize,initsize} 39 39 Date: Jan 2012 40 - KernelVersion:»·3.3 40 + KernelVersion: 3.3 41 41 Contact: Kay Sievers <kay.sievers@vrfy.org> 42 42 Description: Module size in bytes. 43 43 44 44 What: /sys/module/*/taint 45 45 Date: Jan 2012 46 - KernelVersion:»·3.3 46 + KernelVersion: 3.3 47 47 Contact: Kay Sievers <kay.sievers@vrfy.org> 48 48 Description: Module taint flags: 49 49 == =====================
+5 -4
Documentation/admin-guide/sysctl/kernel.rst
··· 483 483 ======== 484 484 485 485 The full path to the usermode helper for autoloading kernel modules, 486 - by default "/sbin/modprobe". This binary is executed when the kernel 487 - requests a module. For example, if userspace passes an unknown 488 - filesystem type to mount(), then the kernel will automatically request 489 - the corresponding filesystem module by executing this usermode helper. 486 + by default ``CONFIG_MODPROBE_PATH``, which in turn defaults to 487 + "/sbin/modprobe". This binary is executed when the kernel requests a 488 + module. For example, if userspace passes an unknown filesystem type 489 + to mount(), then the kernel will automatically request the 490 + corresponding filesystem module by executing this usermode helper. 490 491 This usermode helper should insert the needed module into the kernel. 491 492 492 493 This sysctl only affects module autoloading. It has no effect on the
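
As a sketch of how this helper is reached from kernel code: request_module() from <linux/kmod.h> is the in-kernel entry point that ultimately executes the modprobe_path binary. The mount() example in the text reduces to a call of roughly this shape (load_fs_module is a hypothetical wrapper, not an in-tree function):

#include <linux/kmod.h>

/* Hypothetical wrapper: asks the kernel.modprobe usermode helper to load
 * the module that provides an unknown filesystem type. */
static int load_fs_module(const char *fstype)
{
        return request_module("fs-%s", fstype);
}
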
+1 -1
Documentation/block/data-integrity.rst
··· 1 - ============== 1 + ============== 2 2 Data Integrity 3 3 ============== 4 4
+15 -15
Documentation/cdrom/cdrom-standard.rst
··· 146 146 *struct file_operations*:: 147 147 148 148 struct file_operations cdrom_fops = { 149 - NULL, /∗ lseek ∗/ 150 - block _read , /∗ read—general block-dev read ∗/ 151 - block _write, /∗ write—general block-dev write ∗/ 152 - NULL, /∗ readdir ∗/ 153 - NULL, /∗ select ∗/ 154 - cdrom_ioctl, /∗ ioctl ∗/ 155 - NULL, /∗ mmap ∗/ 156 - cdrom_open, /∗ open ∗/ 157 - cdrom_release, /∗ release ∗/ 158 - NULL, /∗ fsync ∗/ 159 - NULL, /∗ fasync ∗/ 160 - NULL /∗ revalidate ∗/ 149 + NULL, /* lseek */ 150 + block _read , /* read--general block-dev read */ 151 + block _write, /* write--general block-dev write */ 152 + NULL, /* readdir */ 153 + NULL, /* select */ 154 + cdrom_ioctl, /* ioctl */ 155 + NULL, /* mmap */ 156 + cdrom_open, /* open */ 157 + cdrom_release, /* release */ 158 + NULL, /* fsync */ 159 + NULL, /* fasync */ 160 + NULL /* revalidate */ 161 161 }; 162 162 163 163 Every active CD-ROM device shares this *struct*. The routines ··· 250 250 `cdrom.c`, currently contains the following fields:: 251 251 252 252 struct cdrom_device_info { 253 - const struct cdrom_device_ops * ops; /* device operations for this major */ 253 + const struct cdrom_device_ops * ops; /* device operations for this major */ 254 254 struct list_head list; /* linked list of all device_info */ 255 255 struct gendisk * disk; /* matching block layer disk */ 256 256 void * handle; /* driver-dependent data */ 257 257 258 - int mask; /* mask of capability: disables them */ 258 + int mask; /* mask of capability: disables them */ 259 259 int speed; /* maximum speed for reading data */ 260 260 int capacity; /* number of discs in a jukebox */ 261 261 ··· 569 569 570 570 In the file `cdrom.c` you will encounter many constructions of the type:: 571 571 572 - if (cdo->capability & ∼cdi->mask & CDC _⟨capability⟩) ... 572 + if (cdo->capability & ~cdi->mask & CDC _<capability>) ... 573 573 574 574 There is no *ioctl* to set the mask... The reason is that 575 575 I think it is better to control the **behavior** rather than the
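
A sketch of the capability test quoted at the end of this hunk, `cdo->capability & ~cdi->mask & CDC_<capability>`: a capability is usable only if the driver advertises it and the per-device mask does not disable it. cdrom_can_lock() is a hypothetical helper; CDC_LOCK is a real CDC_* flag from <linux/cdrom.h>:

#include <linux/cdrom.h>
#include <linux/types.h>

/* Hypothetical: the capability must be set in the driver ops and must not
 * be disabled in the device's mask. */
static bool cdrom_can_lock(const struct cdrom_device_info *cdi)
{
        const struct cdrom_device_ops *cdo = cdi->ops;

        return cdo->capability & ~cdi->mask & CDC_LOCK;
}
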
+1 -1
Documentation/driver-api/nvdimm/nvdimm.rst
··· 4 4 5 5 libnvdimm - kernel / libndctl - userspace helper library 6 6 7 - linux-nvdimm@lists.01.org 7 + nvdimm@lists.linux.dev 8 8 9 9 Version 13 10 10
-1
Documentation/driver-api/serial/index.rst
··· 19 19 20 20 moxa-smartio 21 21 n_gsm 22 - rocket 23 22 serial-iso7816 24 23 serial-rs485 25 24
+9 -6
Documentation/driver-api/usb/usb.rst
··· 109 109 USB-Standard Types 110 110 ================== 111 111 112 - In ``drivers/usb/common/common.c`` and ``drivers/usb/common/debug.c`` you 113 - will find the USB data types defined in chapter 9 of the USB specification. 114 - These data types are used throughout USB, and in APIs including this host 115 - side API, gadget APIs, usb character devices and debugfs interfaces. 112 + In ``include/uapi/linux/usb/ch9.h`` you will find the USB data types defined 113 + in chapter 9 of the USB specification. These data types are used throughout 114 + USB, and in APIs including this host side API, gadget APIs, usb character 115 + devices and debugfs interfaces. That file is itself included by 116 + ``include/linux/usb/ch9.h``, which also contains declarations of a few 117 + utility routines for manipulating these data types; the implementations 118 + are in ``drivers/usb/common/common.c``. 116 119 117 120 .. kernel-doc:: drivers/usb/common/common.c 118 121 :export: 119 122 120 - .. kernel-doc:: drivers/usb/common/debug.c 121 - :export: 123 + In addition, some functions useful for creating debugging output are 124 + defined in ``drivers/usb/common/debug.c``. 122 125 123 126 Host-Side Data Types and Macros 124 127 ===============================
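
A minimal sketch of working with the chapter-9 types (is_bulk_in_ep() is a hypothetical predicate; usb_endpoint_dir_in() and usb_endpoint_xfer_bulk() are among the inline helpers shipped alongside these types):

#include <linux/usb/ch9.h>

/* Hypothetical predicate built on struct usb_endpoint_descriptor and the
 * ch9 endpoint helpers. */
static bool is_bulk_in_ep(const struct usb_endpoint_descriptor *epd)
{
        return usb_endpoint_dir_in(epd) && usb_endpoint_xfer_bulk(epd);
}
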
+99 -70
Documentation/filesystems/erofs.rst
··· 50 50 51 51 - Support POSIX.1e ACLs by using xattrs; 52 52 53 - - Support transparent file compression as an option: 54 - LZ4 algorithm with 4 KB fixed-sized output compression for high performance. 53 + - Support transparent data compression as an option: 54 + LZ4 algorithm with the fixed-sized output compression for high performance. 55 55 56 56 The following git tree provides the file system user-space tools under 57 57 development (ex, formatting tool mkfs.erofs): ··· 113 113 114 114 :: 115 115 116 - |-> aligned with 8B 117 - |-> followed closely 118 - + meta_blkaddr blocks |-> another slot 119 - _____________________________________________________________________ 120 - | ... | inode | xattrs | extents | data inline | ... | inode ... 121 - |________|_______|(optional)|(optional)|__(optional)_|_____|__________ 122 - |-> aligned with the inode slot size 123 - . . 124 - . . 125 - . . 126 - . . 127 - . . 128 - . . 129 - .____________________________________________________|-> aligned with 4B 130 - | xattr_ibody_header | shared xattrs | inline xattrs | 131 - |____________________|_______________|_______________| 132 - |-> 12 bytes <-|->x * 4 bytes<-| . 133 - . . . 134 - . . . 135 - . . . 136 - ._______________________________.______________________. 137 - | id | id | id | id | ... | id | ent | ... | ent| ... | 138 - |____|____|____|____|______|____|_____|_____|____|_____| 139 - |-> aligned with 4B 140 - |-> aligned with 4B 116 + |-> aligned with 8B 117 + |-> followed closely 118 + + meta_blkaddr blocks |-> another slot 119 + _____________________________________________________________________ 120 + | ... | inode | xattrs | extents | data inline | ... | inode ... 121 + |________|_______|(optional)|(optional)|__(optional)_|_____|__________ 122 + |-> aligned with the inode slot size 123 + . . 124 + . . 125 + . . 126 + . . 127 + . . 128 + . . 129 + .____________________________________________________|-> aligned with 4B 130 + | xattr_ibody_header | shared xattrs | inline xattrs | 131 + |____________________|_______________|_______________| 132 + |-> 12 bytes <-|->x * 4 bytes<-| . 133 + . . . 134 + . . . 135 + . . . 136 + ._______________________________.______________________. 137 + | id | id | id | id | ... | id | ent | ... | ent| ... | 138 + |____|____|____|____|______|____|_____|_____|____|_____| 139 + |-> aligned with 4B 140 + |-> aligned with 4B 141 141 142 142 Inode could be 32 or 64 bytes, which can be distinguished from a common 143 143 field which all inode versions have -- i_format:: ··· 175 175 Each share xattr can also be directly found by the following formula: 176 176 xattr offset = xattr_blkaddr * block_size + 4 * xattr_id 177 177 178 - :: 178 + :: 179 179 180 - |-> aligned by 4 bytes 181 - + xattr_blkaddr blocks |-> aligned with 4 bytes 182 - _________________________________________________________________________ 183 - | ... | xattr_entry | xattr data | ... | xattr_entry | xattr data ... 184 - |________|_____________|_____________|_____|______________|_______________ 180 + |-> aligned by 4 bytes 181 + + xattr_blkaddr blocks |-> aligned with 4 bytes 182 + _________________________________________________________________________ 183 + | ... | xattr_entry | xattr data | ... | xattr_entry | xattr data ... 
184 + |________|_____________|_____________|_____|______________|_______________ 185 185 186 186 Directories 187 187 ----------- ··· 193 193 194 194 :: 195 195 196 - ___________________________ 197 - / | 198 - / ______________|________________ 199 - / / | nameoff1 | nameoffN-1 200 - ____________.______________._______________v________________v__________ 201 - | dirent | dirent | ... | dirent | filename | filename | ... | filename | 202 - |___.0___|____1___|_____|___N-1__|____0_____|____1_____|_____|___N-1____| 203 - \ ^ 204 - \ | * could have 205 - \ | trailing '\0' 206 - \________________________| nameoff0 207 - 208 - Directory block 196 + ___________________________ 197 + / | 198 + / ______________|________________ 199 + / / | nameoff1 | nameoffN-1 200 + ____________.______________._______________v________________v__________ 201 + | dirent | dirent | ... | dirent | filename | filename | ... | filename | 202 + |___.0___|____1___|_____|___N-1__|____0_____|____1_____|_____|___N-1____| 203 + \ ^ 204 + \ | * could have 205 + \ | trailing '\0' 206 + \________________________| nameoff0 207 + Directory block 209 208 210 209 Note that apart from the offset of the first filename, nameoff0 also indicates 211 210 the total number of directory entries in this block since it is no need to 212 211 introduce another on-disk field at all. 213 212 214 - Compression 215 - ----------- 216 - Currently, EROFS supports 4KB fixed-sized output transparent file compression, 217 - as illustrated below:: 213 + Data compression 214 + ---------------- 215 + EROFS implements LZ4 fixed-sized output compression which generates fixed-sized 216 + compressed data blocks from variable-sized input in contrast to other existing 217 + fixed-sized input solutions. Relatively higher compression ratios can be gotten 218 + by using fixed-sized output compression since nowadays popular data compression 219 + algorithms are mostly LZ77-based and such fixed-sized output approach can be 220 + benefited from the historical dictionary (aka. sliding window). 218 221 219 - |---- Variant-Length Extent ----|-------- VLE --------|----- VLE ----- 220 - clusterofs clusterofs clusterofs 221 - | | | logical data 222 - _________v_______________________________v_____________________v_______________ 223 - ... | . | | . | | . | ... 224 - ____|____.________|_____________|________.____|_____________|__.__________|____ 225 - |-> cluster <-|-> cluster <-|-> cluster <-|-> cluster <-|-> cluster <-| 226 - size size size size size 227 - . . . . 228 - . . . . 229 - . . . . 230 - _______._____________._____________._____________._____________________ 231 - ... | | | | ... physical data 232 - _______|_____________|_____________|_____________|_____________________ 233 - |-> cluster <-|-> cluster <-|-> cluster <-| 234 - size size size 222 + In details, original (uncompressed) data is turned into several variable-sized 223 + extents and in the meanwhile, compressed into physical clusters (pclusters). 224 + In order to record each variable-sized extent, logical clusters (lclusters) are 225 + introduced as the basic unit of compress indexes to indicate whether a new 226 + extent is generated within the range (HEAD) or not (NONHEAD). Lclusters are now 227 + fixed in block size, as illustrated below:: 235 228 236 - Currently each on-disk physical cluster can contain 4KB (un)compressed data 237 - at most. For each logical cluster, there is a corresponding on-disk index to 238 - describe its cluster type, physical cluster address, etc. 
229 + |<- variable-sized extent ->|<- VLE ->| 230 + clusterofs clusterofs clusterofs 231 + | | | 232 + _________v_________________________________v_______________________v________ 233 + ... | . | | . | | . ... 234 + ____|____._________|______________|________.___ _|______________|__.________ 235 + |-> lcluster <-|-> lcluster <-|-> lcluster <-|-> lcluster <-| 236 + (HEAD) (NONHEAD) (HEAD) (NONHEAD) . 237 + . CBLKCNT . . 238 + . . . 239 + . . . 240 + _______._____________________________.______________._________________ 241 + ... | | | | ... 242 + _______|______________|______________|______________|_________________ 243 + |-> big pcluster <-|-> pcluster <-| 239 244 240 - See "struct z_erofs_vle_decompressed_index" in erofs_fs.h for more details. 245 + A physical cluster can be seen as a container of physical compressed blocks 246 + which contains compressed data. Previously, only lcluster-sized (4KB) pclusters 247 + were supported. After big pcluster feature is introduced (available since 248 + Linux v5.13), pcluster can be a multiple of lcluster size. 249 + 250 + For each HEAD lcluster, clusterofs is recorded to indicate where a new extent 251 + starts and blkaddr is used to seek the compressed data. For each NONHEAD 252 + lcluster, delta0 and delta1 are available instead of blkaddr to indicate the 253 + distance to its HEAD lcluster and the next HEAD lcluster. A PLAIN lcluster is 254 + also a HEAD lcluster except that its data is uncompressed. See the comments 255 + around "struct z_erofs_vle_decompressed_index" in erofs_fs.h for more details. 256 + 257 + If big pcluster is enabled, pcluster size in lclusters needs to be recorded as 258 + well. Let the delta0 of the first NONHEAD lcluster store the compressed block 259 + count with a special flag as a new called CBLKCNT NONHEAD lcluster. It's easy 260 + to understand its delta0 is constantly 1, as illustrated below:: 261 + 262 + __________________________________________________________ 263 + | HEAD | NONHEAD | NONHEAD | ... | NONHEAD | HEAD | HEAD | 264 + |__:___|_(CBLKCNT)_|_________|_____|_________|__:___|____:_| 265 + |<----- a big pcluster (with CBLKCNT) ------>|<-- -->| 266 + a lcluster-sized pcluster (without CBLKCNT) ^ 267 + 268 + If another HEAD follows a HEAD lcluster, there is no room to record CBLKCNT, 269 + but it's easy to know the size of such pcluster is 1 lcluster as well.
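
A small sketch of the shared xattr lookup formula quoted above, "xattr offset = xattr_blkaddr * block_size + 4 * xattr_id" (erofs_shared_xattr_off() is an illustrative name, not the in-tree helper):

#include <linux/types.h>

/* Illustrative: byte offset of shared xattr entry #xattr_id. */
static inline u64 erofs_shared_xattr_off(u32 xattr_blkaddr, u32 block_size,
                                         u32 xattr_id)
{
        return (u64)xattr_blkaddr * block_size + 4 * (u64)xattr_id;
}
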
+2 -2
Documentation/hwmon/tmp103.rst
··· 21 21 The TMP103 is a digital output temperature sensor in a four-ball 22 22 wafer chip-scale package (WCSP). The TMP103 is capable of reading 23 23 temperatures to a resolution of 1°C. The TMP103 is specified for 24 - operation over a temperature range of –40°C to +125°C. 24 + operation over a temperature range of -40°C to +125°C. 25 25 26 26 Resolution: 8 Bits 27 - Accuracy: ±1°C Typ (–10°C to +100°C) 27 + Accuracy: ±1°C Typ (-10°C to +100°C) 28 28 29 29 The driver provides the common sysfs-interface for temperatures (see 30 30 Documentation/hwmon/sysfs-interface.rst under Temperatures).
+2 -2
Documentation/networking/device_drivers/ethernet/intel/i40e.rst
··· 173 173 driver. To re-enable ATR, the sideband can be disabled with the ethtool -K 174 174 option. For example:: 175 175 176 - ethtool –K [adapter] ntuple [off|on] 176 + ethtool -K [adapter] ntuple [off|on] 177 177 178 178 If sideband is re-enabled after ATR is re-enabled, ATR remains enabled until a 179 179 TCP-IP flow is added. When all TCP-IP sideband rules are deleted, ATR is ··· 688 688 Totals must be equal or less than port speed. 689 689 690 690 For example: min_rate 1Gbit 3Gbit: Verify bandwidth limit using network 691 - monitoring tools such as ifstat or sar –n DEV [interval] [number of samples] 691 + monitoring tools such as `ifstat` or `sar -n DEV [interval] [number of samples]` 692 692 693 693 2. Enable HW TC offload on interface:: 694 694
+1 -1
Documentation/networking/device_drivers/ethernet/intel/iavf.rst
··· 179 179 Totals must be equal or less than port speed. 180 180 181 181 For example: min_rate 1Gbit 3Gbit: Verify bandwidth limit using network 182 - monitoring tools such as ifstat or sar –n DEV [interval] [number of samples] 182 + monitoring tools such as ``ifstat`` or ``sar -n DEV [interval] [number of samples]`` 183 183 184 184 NOTE: 185 185 Setting up channels via ethtool (ethtool -L) is not supported when the
+1 -1
Documentation/process/kernel-enforcement-statement.rst
··· 1 - .. _process_statement_kernel: 1 + .. _process_statement_kernel: 2 2 3 3 Linux Kernel Enforcement Statement 4 4 ----------------------------------
+1 -1
Documentation/security/tpm/xen-tpmfront.rst
··· 1 - ============================= 1 + ============================= 2 2 Virtual TPM interface for Xen 3 3 ============================= 4 4
+1 -1
Documentation/timers/no_hz.rst
··· 1 - ====================================== 1 + ====================================== 2 2 NO_HZ: Reducing Scheduling-Clock Ticks 3 3 ====================================== 4 4
-50
Documentation/translations/zh_CN/SecurityBugs
··· 1 - Chinese translated version of Documentation/admin-guide/security-bugs.rst 2 - 3 - If you have any comment or update to the content, please contact the 4 - original document maintainer directly. However, if you have a problem 5 - communicating in English you can also ask the Chinese maintainer for 6 - help. Contact the Chinese maintainer if this translation is outdated 7 - or if there is a problem with the translation. 8 - 9 - Chinese maintainer: Harry Wei <harryxiyou@gmail.com> 10 - --------------------------------------------------------------------- 11 - Documentation/admin-guide/security-bugs.rst 的中文翻译 12 - 13 - 如果想评论或更新本文的内容,请直接联系原文档的维护者。如果你使用英文 14 - 交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻 15 - 译存在问题,请联系中文版维护者。 16 - 17 - 中文版维护者: 贾威威 Harry Wei <harryxiyou@gmail.com> 18 - 中文版翻译者: 贾威威 Harry Wei <harryxiyou@gmail.com> 19 - 中文版校译者: 贾威威 Harry Wei <harryxiyou@gmail.com> 20 - 21 - 22 - 以下为正文 23 - --------------------------------------------------------------------- 24 - Linux内核开发者认为安全非常重要。因此,我们想要知道当一个有关于 25 - 安全的漏洞被发现的时候,并且它可能会被尽快的修复或者公开。请把这个安全 26 - 漏洞报告给Linux内核安全团队。 27 - 28 - 1) 联系 29 - 30 - linux内核安全团队可以通过email<security@kernel.org>来联系。这是 31 - 一组独立的安全工作人员,可以帮助改善漏洞报告并且公布和取消一个修复。安 32 - 全团队有可能会从部分的维护者那里引进额外的帮助来了解并且修复安全漏洞。 33 - 当遇到任何漏洞,所能提供的信息越多就越能诊断和修复。如果你不清楚什么 34 - 是有帮助的信息,那就请重温一下admin-guide/reporting-bugs.rst文件中的概述过程。任 35 - 何攻击性的代码都是非常有用的,未经报告者的同意不会被取消,除非它已经 36 - 被公布于众。 37 - 38 - 2) 公开 39 - 40 - Linux内核安全团队的宗旨就是和漏洞提交者一起处理漏洞的解决方案直 41 - 到公开。我们喜欢尽快地完全公开漏洞。当一个漏洞或者修复还没有被完全地理 42 - 解,解决方案没有通过测试或者供应商协调,可以合理地延迟公开。然而,我们 43 - 期望这些延迟尽可能的短些,是可数的几天,而不是几个星期或者几个月。公开 44 - 日期是通过安全团队和漏洞提供者以及供应商洽谈后的结果。公开时间表是从很 45 - 短(特殊的,它已经被公众所知道)到几个星期。作为一个基本的默认政策,我 46 - 们所期望通知公众的日期是7天的安排。 47 - 48 - 3) 保密协议 49 - 50 - Linux内核安全团队不是一个正式的团体,因此不能加入任何的保密协议。
+1 -1
Documentation/usb/gadget_configfs.rst
··· 140 140 Each function provides its specific set of attributes, with either read-only 141 141 or read-write access. Where applicable they need to be written to as 142 142 appropriate. 143 - Please refer to Documentation/ABI/*/configfs-usb-gadget* for more information. 143 + Please refer to Documentation/ABI/testing/configfs-usb-gadget for more information. 144 144 145 145 4. Associating the functions with their configurations 146 146 ------------------------------------------------------
+1 -1
Documentation/usb/mtouchusb.rst
··· 1 - ================ 1 + ================ 2 2 mtouchusb driver 3 3 ================ 4 4
+1 -1
Documentation/usb/usb-serial.rst
··· 1 - ========== 1 + ========== 2 2 USB serial 3 3 ========== 4 4
+1 -1
Documentation/virt/kvm/amd-memory-encryption.rst
··· 22 22 [ecx]: 23 23 Bits[31:0] Number of encrypted guests supported simultaneously 24 24 25 - If support for SEV is present, MSR 0xc001_0010 (MSR_K8_SYSCFG) and MSR 0xc001_0015 25 + If support for SEV is present, MSR 0xc001_0010 (MSR_AMD64_SYSCFG) and MSR 0xc001_0015 26 26 (MSR_K7_HWCR) can be used to determine if it can be enabled:: 27 27 28 28 0xc001_0010:
+2 -2
Documentation/virt/kvm/api.rst
··· 4803 4803 4.126 KVM_X86_SET_MSR_FILTER 4804 4804 ---------------------------- 4805 4805 4806 - :Capability: KVM_X86_SET_MSR_FILTER 4806 + :Capability: KVM_CAP_X86_MSR_FILTER 4807 4807 :Architectures: x86 4808 4808 :Type: vm ioctl 4809 4809 :Parameters: struct kvm_msr_filter ··· 6715 6715 instead get bounced to user space through the KVM_EXIT_X86_RDMSR and 6716 6716 KVM_EXIT_X86_WRMSR exit notifications. 6717 6717 6718 - 8.27 KVM_X86_SET_MSR_FILTER 6718 + 8.27 KVM_CAP_X86_MSR_FILTER 6719 6719 --------------------------- 6720 6720 6721 6721 :Architectures: x86
+3 -3
Documentation/x86/amd-memory-encryption.rst
··· 53 53 system physical addresses, not guest physical 54 54 addresses) 55 55 56 - If support for SME is present, MSR 0xc00100010 (MSR_K8_SYSCFG) can be used to 56 + If support for SME is present, MSR 0xc00100010 (MSR_AMD64_SYSCFG) can be used to 57 57 determine if SME is enabled and/or to enable memory encryption:: 58 58 59 59 0xc0010010: ··· 79 79 The CPU supports SME (determined through CPUID instruction). 80 80 81 81 - Enabled: 82 - Supported and bit 23 of MSR_K8_SYSCFG is set. 82 + Supported and bit 23 of MSR_AMD64_SYSCFG is set. 83 83 84 84 - Active: 85 85 Supported, Enabled and the Linux kernel is actively applying ··· 89 89 SME can also be enabled and activated in the BIOS. If SME is enabled and 90 90 activated in the BIOS, then all memory accesses will be encrypted and it will 91 91 not be necessary to activate the Linux memory encryption support. If the BIOS 92 - merely enables SME (sets bit 23 of the MSR_K8_SYSCFG), then Linux can activate 92 + merely enables SME (sets bit 23 of the MSR_AMD64_SYSCFG), then Linux can activate 93 93 memory encryption by default (CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=y) or 94 94 by supplying mem_encrypt=on on the kernel command line. However, if BIOS does 95 95 not enable SME, then Linux will not be able to activate memory encryption, even
+16 -16
MAINTAINERS
··· 1578 1578 ARM/Allwinner sunXi SoC support 1579 1579 M: Maxime Ripard <mripard@kernel.org> 1580 1580 M: Chen-Yu Tsai <wens@csie.org> 1581 - R: Jernej Skrabec <jernej.skrabec@siol.net> 1581 + R: Jernej Skrabec <jernej.skrabec@gmail.com> 1582 1582 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1583 1583 S: Maintained 1584 1584 T: git git://git.kernel.org/pub/scm/linux/kernel/git/sunxi/linux.git ··· 5089 5089 F: drivers/net/fddi/defza.* 5090 5090 5091 5091 DEINTERLACE DRIVERS FOR ALLWINNER H3 5092 - M: Jernej Skrabec <jernej.skrabec@siol.net> 5092 + M: Jernej Skrabec <jernej.skrabec@gmail.com> 5093 5093 L: linux-media@vger.kernel.org 5094 5094 S: Maintained 5095 5095 T: git git://linuxtv.org/media_tree.git ··· 5237 5237 M: Dan Williams <dan.j.williams@intel.com> 5238 5238 M: Vishal Verma <vishal.l.verma@intel.com> 5239 5239 M: Dave Jiang <dave.jiang@intel.com> 5240 - L: linux-nvdimm@lists.01.org 5240 + L: nvdimm@lists.linux.dev 5241 5241 S: Supported 5242 5242 F: drivers/dax/ ··· 5632 5632 DRM DRIVER FOR ALLWINNER DE2 AND DE3 ENGINE 5633 5633 M: Maxime Ripard <mripard@kernel.org> 5634 5634 M: Chen-Yu Tsai <wens@csie.org> 5635 - R: Jernej Skrabec <jernej.skrabec@siol.net> 5635 + R: Jernej Skrabec <jernej.skrabec@gmail.com> 5636 5636 L: dri-devel@lists.freedesktop.org 5637 5637 S: Supported 5638 5638 T: git git://anongit.freedesktop.org/drm/drm-misc 5639 5639 F: drivers/gpu/drm/sun4i/sun8i* 5640 5640 5641 5641 DRM DRIVER FOR ARM PL111 CLCD 5642 - M: Eric Anholt <eric@anholt.net> 5642 + M: Emma Anholt <emma@anholt.net> 5643 5643 S: Supported 5644 5644 T: git git://anongit.freedesktop.org/drm/drm-misc 5645 5645 F: drivers/gpu/drm/pl111/ ··· 5719 5719 F: drivers/gpu/drm/tiny/gm12u320.c 5720 5720 5721 5721 DRM DRIVER FOR HX8357D PANELS 5722 - M: Eric Anholt <eric@anholt.net> 5722 + M: Emma Anholt <emma@anholt.net> 5723 5723 S: Maintained 5724 5724 T: git git://anongit.freedesktop.org/drm/drm-misc 5725 5725 F: Documentation/devicetree/bindings/display/himax,hx8357d.txt ··· 6023 6023 M: Robert Foss <robert.foss@linaro.org> 6024 6024 R: Laurent Pinchart <Laurent.pinchart@ideasonboard.com> 6025 6025 R: Jonas Karlman <jonas@kwiboo.se> 6026 - R: Jernej Skrabec <jernej.skrabec@siol.net> 6026 + R: Jernej Skrabec <jernej.skrabec@gmail.com> 6027 6027 S: Maintained 6028 6028 T: git git://anongit.freedesktop.org/drm/drm-misc 6029 6029 F: drivers/gpu/drm/bridge/ ··· 6177 6177 F: drivers/gpu/drm/omapdrm/ 6178 6178 6179 6179 DRM DRIVERS FOR V3D 6180 - M: Eric Anholt <eric@anholt.net> 6180 + M: Emma Anholt <emma@anholt.net> 6181 6181 S: Supported 6182 6182 T: git git://anongit.freedesktop.org/drm/drm-misc 6183 6183 F: Documentation/devicetree/bindings/gpu/brcm,bcm-v3d.yaml ··· 6185 6185 F: include/uapi/drm/v3d_drm.h 6186 6186 6187 6187 DRM DRIVERS FOR VC4 6188 - M: Eric Anholt <eric@anholt.net> 6188 + M: Emma Anholt <emma@anholt.net> 6189 6189 M: Maxime Ripard <mripard@kernel.org> 6190 6190 S: Supported 6191 6191 T: git git://github.com/anholt/linux ··· 7006 7006 R: Matthew Wilcox <willy@infradead.org> 7007 7007 R: Jan Kara <jack@suse.cz> 7008 7008 L: linux-fsdevel@vger.kernel.org 7009 - L: linux-nvdimm@lists.01.org 7009 + L: nvdimm@lists.linux.dev 7010 7010 S: Supported 7011 7011 F: fs/dax.c 7012 7012 F: include/linux/dax.h ··· 10378 10378 M: Dan Williams <dan.j.williams@intel.com> 10379 10379 M: Vishal Verma <vishal.l.verma@intel.com> 10380 10380 M: Dave Jiang <dave.jiang@intel.com> 10381 - L: linux-nvdimm@lists.01.org 10381 + L: nvdimm@lists.linux.dev 10382 10382 S: Supported 10383 10383 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 10384 10384 P: Documentation/nvdimm/maintainer-entry-profile.rst ··· 10389 10389 M: Vishal Verma <vishal.l.verma@intel.com> 10390 10390 M: Dan Williams <dan.j.williams@intel.com> 10391 10391 M: Dave Jiang <dave.jiang@intel.com> 10392 - L: linux-nvdimm@lists.01.org 10392 + L: nvdimm@lists.linux.dev 10393 10393 S: Supported 10394 10394 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 10395 10395 P: Documentation/nvdimm/maintainer-entry-profile.rst ··· 10399 10399 M: Dan Williams <dan.j.williams@intel.com> 10400 10400 M: Vishal Verma <vishal.l.verma@intel.com> 10401 10401 M: Dave Jiang <dave.jiang@intel.com> 10402 - L: linux-nvdimm@lists.01.org 10402 + L: nvdimm@lists.linux.dev 10403 10403 S: Supported 10404 10404 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 10405 10405 P: Documentation/nvdimm/maintainer-entry-profile.rst ··· 10407 10407 10408 10408 LIBNVDIMM: DEVICETREE BINDINGS 10409 10409 M: Oliver O'Halloran <oohall@gmail.com> 10410 - L: linux-nvdimm@lists.01.org 10410 + L: nvdimm@lists.linux.dev 10411 10411 S: Supported 10412 10412 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 10413 10413 F: Documentation/devicetree/bindings/pmem/pmem-region.txt ··· 10418 10418 M: Vishal Verma <vishal.l.verma@intel.com> 10419 10419 M: Dave Jiang <dave.jiang@intel.com> 10420 10420 M: Ira Weiny <ira.weiny@intel.com> 10421 - L: linux-nvdimm@lists.01.org 10421 + L: nvdimm@lists.linux.dev 10422 10422 S: Supported 10423 10423 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 10424 10424 P: Documentation/nvdimm/maintainer-entry-profile.rst ··· 15815 15815 F: net/rose/ 15816 15816 15817 15817 ROTATION DRIVER FOR ALLWINNER A83T 15818 - M: Jernej Skrabec <jernej.skrabec@siol.net> 15818 + M: Jernej Skrabec <jernej.skrabec@gmail.com> 15819 15819 L: linux-media@vger.kernel.org 15820 15820 S: Maintained 15821 15821 T: git git://linuxtv.org/media_tree.git
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 13 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc1 5 + EXTRAVERSION = -rc2 6 6 NAME = Frozen Wasteland 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/alpha/kernel/syscalls/syscall.tbl
··· 482 482 550 common process_madvise sys_process_madvise 483 483 551 common epoll_pwait2 sys_epoll_pwait2 484 484 552 common mount_setattr sys_mount_setattr 485 - 553 common quotactl_path sys_quotactl_path 485 + # 553 reserved for quotactl_path 486 486 554 common landlock_create_ruleset sys_landlock_create_ruleset 487 487 555 common landlock_add_rule sys_landlock_add_rule 488 488 556 common landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/arc/Makefile
··· 31 31 32 32 33 33 ifdef CONFIG_ARC_CURR_IN_REG 34 - # For a global register defintion, make sure it gets passed to every file 34 + # For a global register definition, make sure it gets passed to every file 35 35 # We had a customer reported bug where some code built in kernel was NOT using 36 36 # any kernel headers, and missing the r25 global register 37 37 # Can't do unconditionally because of recursive include issues
+2 -2
arch/arc/include/asm/cmpxchg.h
··· 116 116 * 117 117 * Technically the lock is also needed for UP (boils down to irq save/restore) 118 118 * but we can cheat a bit since cmpxchg() atomic_ops_lock() would cause irqs to 119 - * be disabled thus can't possibly be interrpted/preempted/clobbered by xchg() 119 + * be disabled thus can't possibly be interrupted/preempted/clobbered by xchg() 120 120 * Other way around, xchg is one instruction anyways, so can't be interrupted 121 121 * as such 122 122 */ ··· 143 143 /* 144 144 * "atomic" variant of xchg() 145 145 * REQ: It needs to follow the same serialization rules as other atomic_xxx() 146 - * Since xchg() doesn't always do that, it would seem that following defintion 146 + * Since xchg() doesn't always do that, it would seem that following definition 147 147 * is incorrect. But here's the rationale: 148 148 * SMP : Even xchg() takes the atomic_ops_lock, so OK. 149 149 * LLSC: atomic_ops_lock are not relevant at all (even if SMP, since LLSC
+12
arch/arc/include/asm/page.h
··· 7 7 8 8 #include <uapi/asm/page.h> 9 9 10 + #ifdef CONFIG_ARC_HAS_PAE40 11 + 12 + #define MAX_POSSIBLE_PHYSMEM_BITS 40 13 + #define PAGE_MASK_PHYS (0xff00000000ull | PAGE_MASK) 14 + 15 + #else /* CONFIG_ARC_HAS_PAE40 */ 16 + 17 + #define MAX_POSSIBLE_PHYSMEM_BITS 32 18 + #define PAGE_MASK_PHYS PAGE_MASK 19 + 20 + #endif /* CONFIG_ARC_HAS_PAE40 */ 21 + 10 22 #ifndef __ASSEMBLY__ 11 23 12 24 #define clear_page(paddr) memset((paddr), 0, PAGE_SIZE)
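
Why the new mask matters, as a user-space demonstration (values assumed: 4K pages, 40-bit physical addresses): on a 32-bit architecture PAGE_MASK zero-extends to 64 bits, so masking a PAE40 physical address with it silently drops bits 32..39, which PAGE_MASK_PHYS preserves:

#include <stdio.h>
#include <stdint.h>

#define PAGE_MASK       0xfffff000u                    /* 4K pages, 32-bit */
#define PAGE_MASK_PHYS  (0xff00000000ull | PAGE_MASK)  /* keeps bits 32..39 */

int main(void)
{
        uint64_t paddr = 0xab45678abcull;       /* a 40-bit physical address */

        printf("%llx\n", (unsigned long long)(paddr & PAGE_MASK));      /* 45678000 */
        printf("%llx\n", (unsigned long long)(paddr & PAGE_MASK_PHYS)); /* ab45678000 */
        return 0;
}
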
+3 -9
arch/arc/include/asm/pgtable.h
··· 107 107 #define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE) 108 108 109 109 /* Set of bits not changed in pte_modify */ 110 - #define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_SPECIAL) 111 - 110 + #define _PAGE_CHG_MASK (PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \ 111 + _PAGE_SPECIAL) 112 112 /* More Abbrevaited helpers */ 113 113 #define PAGE_U_NONE __pgprot(___DEF) 114 114 #define PAGE_U_R __pgprot(___DEF | _PAGE_READ) ··· 132 132 #define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ) 133 133 #define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ) 134 134 135 - #ifdef CONFIG_ARC_HAS_PAE40 136 - #define PTE_BITS_NON_RWX_IN_PD1 (0xff00000000 | PAGE_MASK | _PAGE_CACHEABLE) 137 - #define MAX_POSSIBLE_PHYSMEM_BITS 40 138 - #else 139 - #define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK | _PAGE_CACHEABLE) 140 - #define MAX_POSSIBLE_PHYSMEM_BITS 32 141 - #endif 135 + #define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE) 142 136 143 137 /************************************************************************** 144 138 * Mapping of vm_flags (Generic VM) to PTE flags (arch specific)
-1
arch/arc/include/uapi/asm/page.h
··· 33 33 34 34 #define PAGE_MASK (~(PAGE_SIZE-1)) 35 35 36 - 37 36 #endif /* _UAPI__ASM_ARC_PAGE_H */
+2 -2
arch/arc/kernel/entry.S
··· 177 177 178 178 ; Do the Sys Call as we normally would. 179 179 ; Validate the Sys Call number 180 - cmp r8, NR_syscalls 180 + cmp r8, NR_syscalls - 1 181 181 mov.hi r0, -ENOSYS 182 182 bhi tracesys_exit 183 183 ··· 255 255 ;============ Normal syscall case 256 256 257 257 ; syscall num shd not exceed the total system calls avail 258 - cmp r8, NR_syscalls 258 + cmp r8, NR_syscalls - 1 259 259 mov.hi r0, -ENOSYS 260 260 bhi .Lret_from_system_call 261 261
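
The off-by-one here in C terms (illustrative; the NR_syscalls value is made up): `bhi` branches only when the compared value is strictly higher, so the old `cmp r8, NR_syscalls` let nr == NR_syscalls through, indexing one past the end of the table:

#include <linux/errno.h>

#define NR_syscalls 447                 /* made-up value for illustration */

extern long (* const sys_call_table[NR_syscalls])(void);

static long dispatch(unsigned long nr)
{
        if (nr > NR_syscalls - 1)       /* was: nr > NR_syscalls */
                return -ENOSYS;
        return sys_call_table[nr]();
}
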
+1
arch/arc/kernel/kgdb.c
··· 140 140 ptr = &remcomInBuffer[1]; 141 141 if (kgdb_hex2long(&ptr, &addr)) 142 142 regs->ret = addr; 143 + fallthrough; 143 144 144 145 case 'D': 145 146 case 'k':
+4 -4
arch/arc/kernel/process.c
··· 50 50 int ret; 51 51 52 52 /* 53 - * This is only for old cores lacking LLOCK/SCOND, which by defintion 53 + * This is only for old cores lacking LLOCK/SCOND, which by definition 54 54 * can't possibly be SMP. Thus doesn't need to be SMP safe. 55 55 * And this also helps reduce the overhead for serializing in 56 56 * the UP case 57 57 */ 58 58 WARN_ON_ONCE(IS_ENABLED(CONFIG_SMP)); 59 59 60 - /* Z indicates to userspace if operation succeded */ 60 + /* Z indicates to userspace if operation succeeded */ 61 61 regs->status32 &= ~STATUS_Z_MASK; 62 62 63 63 ret = access_ok(uaddr, sizeof(*uaddr)); ··· 107 107 108 108 void arch_cpu_idle(void) 109 109 { 110 - /* Re-enable interrupts <= default irq priority before commiting SLEEP */ 110 + /* Re-enable interrupts <= default irq priority before committing SLEEP */ 111 111 const unsigned int arg = 0x10 | ARCV2_IRQ_DEF_PRIO; 112 112 113 113 __asm__ __volatile__( ··· 120 120 121 121 void arch_cpu_idle(void) 122 122 { 123 - /* sleep, but enable both set E1/E2 (levels of interrutps) before committing */ 123 + /* sleep, but enable both set E1/E2 (levels of interrupts) before committing */ 124 124 __asm__ __volatile__("sleep 0x3 \n"); 125 125 } 126 126
+2 -2
arch/arc/kernel/signal.c
··· 259 259 regs->r2 = (unsigned long)&sf->uc; 260 260 261 261 /* 262 - * small optim to avoid unconditonally calling do_sigaltstack 262 + * small optim to avoid unconditionally calling do_sigaltstack 263 263 * in sigreturn path, now that we only have rt_sigreturn 264 264 */ 265 265 magic = MAGIC_SIGALTSTK; ··· 391 391 void do_notify_resume(struct pt_regs *regs) 392 392 { 393 393 /* 394 - * ASM glue gaurantees that this is only called when returning to 394 + * ASM glue guarantees that this is only called when returning to 395 395 * user mode 396 396 */ 397 397 if (test_thread_flag(TIF_NOTIFY_RESUME))
+10 -1
arch/arc/mm/init.c
··· 157 157 min_high_pfn = PFN_DOWN(high_mem_start); 158 158 max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz); 159 159 160 - max_zone_pfn[ZONE_HIGHMEM] = min_low_pfn; 160 + /* 161 + * max_high_pfn should be ok here for both HIGHMEM and HIGHMEM+PAE. 162 + * For HIGHMEM without PAE max_high_pfn should be less than 163 + * min_low_pfn to guarantee that these two regions don't overlap. 164 + * For PAE case highmem is greater than lowmem, so it is natural 165 + * to use max_high_pfn. 166 + * 167 + * In both cases, holes should be handled by pfn_valid(). 168 + */ 169 + max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn; 161 170 162 171 high_memory = (void *)(min_high_pfn << PAGE_SHIFT); 163 172
+3 -2
arch/arc/mm/ioremap.c
··· 53 53 void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size, 54 54 unsigned long flags) 55 55 { 56 + unsigned int off; 56 57 unsigned long vaddr; 57 58 struct vm_struct *area; 58 - phys_addr_t off, end; 59 + phys_addr_t end; 59 60 pgprot_t prot = __pgprot(flags); 60 61 61 62 /* Don't allow wraparound, zero size */ ··· 73 72 74 73 /* Mappings have to be page-aligned */ 75 74 off = paddr & ~PAGE_MASK; 76 - paddr &= PAGE_MASK; 75 + paddr &= PAGE_MASK_PHYS; 77 76 size = PAGE_ALIGN(end + 1) - paddr; 78 77 79 78 /*
+1 -1
arch/arc/mm/tlb.c
··· 576 576 pte_t *ptep) 577 577 { 578 578 unsigned long vaddr = vaddr_unaligned & PAGE_MASK; 579 - phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK; 579 + phys_addr_t paddr = pte_val(*ptep) & PAGE_MASK_PHYS; 580 580 struct page *page = pfn_to_page(pte_pfn(*ptep)); 581 581 582 582 create_tlb(vma, vaddr, ptep);
+1 -1
arch/arm/tools/syscall.tbl
··· 456 456 440 common process_madvise sys_process_madvise 457 457 441 common epoll_pwait2 sys_epoll_pwait2 458 458 442 common mount_setattr sys_mount_setattr 459 - 443 common quotactl_path sys_quotactl_path 459 + # 443 reserved for quotactl_path 460 460 444 common landlock_create_ruleset sys_landlock_create_ruleset 461 461 445 common landlock_add_rule sys_landlock_add_rule 462 462 446 common landlock_restrict_self sys_landlock_restrict_self
+7 -13
arch/arm/xen/mm.c
··· 135 135 return; 136 136 } 137 137 138 - int xen_swiotlb_detect(void) 139 - { 140 - if (!xen_domain()) 141 - return 0; 142 - if (xen_feature(XENFEAT_direct_mapped)) 143 - return 1; 144 - /* legacy case */ 145 - if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain()) 146 - return 1; 147 - return 0; 148 - } 149 - 150 138 static int __init xen_mm_init(void) 151 139 { 152 140 struct gnttab_cache_flush cflush; 141 + int rc; 142 + 153 143 if (!xen_swiotlb_detect()) 154 144 return 0; 155 - xen_swiotlb_init(); 145 + 146 + rc = xen_swiotlb_init(); 147 + /* we can work with the default swiotlb */ 148 + if (rc < 0 && rc != -EEXIST) 149 + return rc; 156 150 157 151 cflush.op = 0; 158 152 cflush.a.dev_bus_addr = 0;
+3
arch/arm64/Makefile
··· 175 175 $(if $(CONFIG_COMPAT_VDSO), \ 176 176 $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 $@) 177 177 178 + archprepare: 179 + $(Q)$(MAKE) $(build)=arch/arm64/tools kapi 180 + 178 181 # We use MRPROPER_FILES and CLEAN_FILES now 179 182 archclean: 180 183 $(Q)$(MAKE) $(clean)=$(boot)
+2
arch/arm64/include/asm/Kbuild
··· 5 5 generic-y += qspinlock.h 6 6 generic-y += set_memory.h 7 7 generic-y += user.h 8 + 9 + generated-y += cpucaps.h
-74
arch/arm64/include/asm/cpucaps.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* 3 - * arch/arm64/include/asm/cpucaps.h 4 - * 5 - * Copyright (C) 2016 ARM Ltd. 6 - */ 7 - #ifndef __ASM_CPUCAPS_H 8 - #define __ASM_CPUCAPS_H 9 - 10 - #define ARM64_WORKAROUND_CLEAN_CACHE 0 11 - #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE 1 12 - #define ARM64_WORKAROUND_845719 2 13 - #define ARM64_HAS_SYSREG_GIC_CPUIF 3 14 - #define ARM64_HAS_PAN 4 15 - #define ARM64_HAS_LSE_ATOMICS 5 16 - #define ARM64_WORKAROUND_CAVIUM_23154 6 17 - #define ARM64_WORKAROUND_834220 7 18 - #define ARM64_HAS_NO_HW_PREFETCH 8 19 - #define ARM64_HAS_VIRT_HOST_EXTN 11 20 - #define ARM64_WORKAROUND_CAVIUM_27456 12 21 - #define ARM64_HAS_32BIT_EL0 13 22 - #define ARM64_SPECTRE_V3A 14 23 - #define ARM64_HAS_CNP 15 24 - #define ARM64_HAS_NO_FPSIMD 16 25 - #define ARM64_WORKAROUND_REPEAT_TLBI 17 26 - #define ARM64_WORKAROUND_QCOM_FALKOR_E1003 18 27 - #define ARM64_WORKAROUND_858921 19 28 - #define ARM64_WORKAROUND_CAVIUM_30115 20 29 - #define ARM64_HAS_DCPOP 21 30 - #define ARM64_SVE 22 31 - #define ARM64_UNMAP_KERNEL_AT_EL0 23 32 - #define ARM64_SPECTRE_V2 24 33 - #define ARM64_HAS_RAS_EXTN 25 34 - #define ARM64_WORKAROUND_843419 26 35 - #define ARM64_HAS_CACHE_IDC 27 36 - #define ARM64_HAS_CACHE_DIC 28 37 - #define ARM64_HW_DBM 29 38 - #define ARM64_SPECTRE_V4 30 39 - #define ARM64_MISMATCHED_CACHE_TYPE 31 40 - #define ARM64_HAS_STAGE2_FWB 32 41 - #define ARM64_HAS_CRC32 33 42 - #define ARM64_SSBS 34 43 - #define ARM64_WORKAROUND_1418040 35 44 - #define ARM64_HAS_SB 36 45 - #define ARM64_WORKAROUND_SPECULATIVE_AT 37 46 - #define ARM64_HAS_ADDRESS_AUTH_ARCH 38 47 - #define ARM64_HAS_ADDRESS_AUTH_IMP_DEF 39 48 - #define ARM64_HAS_GENERIC_AUTH_ARCH 40 49 - #define ARM64_HAS_GENERIC_AUTH_IMP_DEF 41 50 - #define ARM64_HAS_IRQ_PRIO_MASKING 42 51 - #define ARM64_HAS_DCPODP 43 52 - #define ARM64_WORKAROUND_1463225 44 53 - #define ARM64_WORKAROUND_CAVIUM_TX2_219_TVM 45 54 - #define ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM 46 55 - #define ARM64_WORKAROUND_1542419 47 56 - #define ARM64_HAS_E0PD 48 57 - #define ARM64_HAS_RNG 49 58 - #define ARM64_HAS_AMU_EXTN 50 59 - #define ARM64_HAS_ADDRESS_AUTH 51 60 - #define ARM64_HAS_GENERIC_AUTH 52 61 - #define ARM64_HAS_32BIT_EL1 53 62 - #define ARM64_BTI 54 63 - #define ARM64_HAS_ARMv8_4_TTL 55 64 - #define ARM64_HAS_TLB_RANGE 56 65 - #define ARM64_MTE 57 66 - #define ARM64_WORKAROUND_1508412 58 67 - #define ARM64_HAS_LDAPR 59 68 - #define ARM64_KVM_PROTECTED_MODE 60 69 - #define ARM64_WORKAROUND_NVIDIA_CARMEL_CNP 61 70 - #define ARM64_HAS_EPAN 62 71 - 72 - #define ARM64_NCAPS 63 73 - 74 - #endif /* __ASM_CPUCAPS_H */
+1 -2
arch/arm64/include/asm/unistd32.h
··· 893 893 __SYSCALL(__NR_epoll_pwait2, compat_sys_epoll_pwait2) 894 894 #define __NR_mount_setattr 442 895 895 __SYSCALL(__NR_mount_setattr, sys_mount_setattr) 896 - #define __NR_quotactl_path 443 897 - __SYSCALL(__NR_quotactl_path, sys_quotactl_path) 896 + /* 443 is reserved for quotactl_path */ 898 897 #define __NR_landlock_create_ruleset 444 899 898 __SYSCALL(__NR_landlock_create_ruleset, sys_landlock_create_ruleset) 900 899 #define __NR_landlock_add_rule 445
+3 -1
arch/arm64/mm/flush.c
··· 55 55 { 56 56 struct page *page = pte_page(pte); 57 57 58 - if (!test_and_set_bit(PG_dcache_clean, &page->flags)) 58 + if (!test_bit(PG_dcache_clean, &page->flags)) { 59 59 sync_icache_aliases(page_address(page), page_size(page)); 60 + set_bit(PG_dcache_clean, &page->flags); 61 + } 60 62 } 61 63 EXPORT_SYMBOL_GPL(__sync_icache_dcache); 62 64
+2 -1
arch/arm64/mm/init.c
··· 43 43 #include <linux/sizes.h> 44 44 #include <asm/tlb.h> 45 45 #include <asm/alternative.h> 46 + #include <asm/xen/swiotlb-xen.h> 46 47 47 48 /* 48 49 * We need to be able to catch inadvertent references to memstart_addr ··· 483 482 if (swiotlb_force == SWIOTLB_FORCE || 484 483 max_pfn > PFN_DOWN(arm64_dma_phys_limit)) 485 484 swiotlb_init(1); 486 - else 485 + else if (!xen_swiotlb_detect()) 487 486 swiotlb_force = SWIOTLB_NO_FORCE; 488 487 489 488 set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
+12
arch/arm64/mm/proc.S
··· 447 447 mov x10, #(SYS_GCR_EL1_RRND | SYS_GCR_EL1_EXCL_MASK) 448 448 msr_s SYS_GCR_EL1, x10 449 449 450 + /* 451 + * If GCR_EL1.RRND=1 is implemented the same way as RRND=0, then 452 + * RGSR_EL1.SEED must be non-zero for IRG to produce 453 + * pseudorandom numbers. As RGSR_EL1 is UNKNOWN out of reset, we 454 + * must initialize it. 455 + */ 456 + mrs x10, CNTVCT_EL0 457 + ands x10, x10, #SYS_RGSR_EL1_SEED_MASK 458 + csinc x10, x10, xzr, ne 459 + lsl x10, x10, #SYS_RGSR_EL1_SEED_SHIFT 460 + msr_s SYS_RGSR_EL1, x10 461 + 450 462 /* clear any pending tag check faults in TFSR*_EL1 */ 451 463 msr_s SYS_TFSR_EL1, xzr 452 464 msr_s SYS_TFSRE0_EL1, xzr
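
The same seeding sequence as C-level pseudocode (read_cntvct_el0() and write_rgsr_el1() are hypothetical stand-ins for the mrs/msr instructions above; the mask and shift are passed in rather than asserted):

#include <linux/types.h>

u64 read_cntvct_el0(void);      /* stands in for "mrs x10, CNTVCT_EL0" */
void write_rgsr_el1(u64 val);   /* stands in for "msr_s SYS_RGSR_EL1, x10" */

static void seed_rgsr_el1(u64 seed_mask, unsigned int seed_shift)
{
        u64 seed = read_cntvct_el0() & seed_mask;

        if (!seed)      /* "csinc ..., ne": substitute 1 if the bits were 0 */
                seed = 1;
        write_rgsr_el1(seed << seed_shift);
}
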
+22
arch/arm64/tools/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + gen := arch/$(ARCH)/include/generated 4 + kapi := $(gen)/asm 5 + 6 + kapi-hdrs-y := $(kapi)/cpucaps.h 7 + 8 + targets += $(addprefix ../../../,$(gen-y) $(kapi-hdrs-y)) 9 + 10 + PHONY += kapi 11 + 12 + kapi: $(kapi-hdrs-y) $(gen-y) 13 + 14 + # Create output directory if not already present 15 + _dummy := $(shell [ -d '$(kapi)' ] || mkdir -p '$(kapi)') 16 + 17 + quiet_cmd_gen_cpucaps = GEN $@ 18 + cmd_gen_cpucaps = mkdir -p $(dir $@) && \ 19 + $(AWK) -f $(filter-out $(PHONY),$^) > $@ 20 + 21 + $(kapi)/cpucaps.h: $(src)/gen-cpucaps.awk $(src)/cpucaps FORCE 22 + $(call if_changed,gen_cpucaps)
+65
arch/arm64/tools/cpucaps
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # Internal CPU capabilities constants, keep this list sorted 4 + 5 + BTI 6 + HAS_32BIT_EL0 7 + HAS_32BIT_EL1 8 + HAS_ADDRESS_AUTH 9 + HAS_ADDRESS_AUTH_ARCH 10 + HAS_ADDRESS_AUTH_IMP_DEF 11 + HAS_AMU_EXTN 12 + HAS_ARMv8_4_TTL 13 + HAS_CACHE_DIC 14 + HAS_CACHE_IDC 15 + HAS_CNP 16 + HAS_CRC32 17 + HAS_DCPODP 18 + HAS_DCPOP 19 + HAS_E0PD 20 + HAS_EPAN 21 + HAS_GENERIC_AUTH 22 + HAS_GENERIC_AUTH_ARCH 23 + HAS_GENERIC_AUTH_IMP_DEF 24 + HAS_IRQ_PRIO_MASKING 25 + HAS_LDAPR 26 + HAS_LSE_ATOMICS 27 + HAS_NO_FPSIMD 28 + HAS_NO_HW_PREFETCH 29 + HAS_PAN 30 + HAS_RAS_EXTN 31 + HAS_RNG 32 + HAS_SB 33 + HAS_STAGE2_FWB 34 + HAS_SYSREG_GIC_CPUIF 35 + HAS_TLB_RANGE 36 + HAS_VIRT_HOST_EXTN 37 + HW_DBM 38 + KVM_PROTECTED_MODE 39 + MISMATCHED_CACHE_TYPE 40 + MTE 41 + SPECTRE_V2 42 + SPECTRE_V3A 43 + SPECTRE_V4 44 + SSBS 45 + SVE 46 + UNMAP_KERNEL_AT_EL0 47 + WORKAROUND_834220 48 + WORKAROUND_843419 49 + WORKAROUND_845719 50 + WORKAROUND_858921 51 + WORKAROUND_1418040 52 + WORKAROUND_1463225 53 + WORKAROUND_1508412 54 + WORKAROUND_1542419 55 + WORKAROUND_CAVIUM_23154 56 + WORKAROUND_CAVIUM_27456 57 + WORKAROUND_CAVIUM_30115 58 + WORKAROUND_CAVIUM_TX2_219_PRFM 59 + WORKAROUND_CAVIUM_TX2_219_TVM 60 + WORKAROUND_CLEAN_CACHE 61 + WORKAROUND_DEVICE_LOAD_ACQUIRE 62 + WORKAROUND_NVIDIA_CARMEL_CNP 63 + WORKAROUND_QCOM_FALKOR_E1003 64 + WORKAROUND_REPEAT_TLBI 65 + WORKAROUND_SPECULATIVE_AT
+40
arch/arm64/tools/gen-cpucaps.awk
··· 1 + #!/bin/awk -f 2 + # SPDX-License-Identifier: GPL-2.0 3 + # gen-cpucaps.awk: arm64 cpucaps header generator 4 + # 5 + # Usage: awk -f gen-cpucaps.awk cpucaps.txt 6 + 7 + # Log an error and terminate 8 + function fatal(msg) { 9 + print "Error at line " NR ": " msg > "/dev/stderr" 10 + exit 1 11 + } 12 + 13 + # skip blank lines and comment lines 14 + /^$/ { next } 15 + /^#/ { next } 16 + 17 + BEGIN { 18 + print "#ifndef __ASM_CPUCAPS_H" 19 + print "#define __ASM_CPUCAPS_H" 20 + print "" 21 + print "/* Generated file - do not edit */" 22 + cap_num = 0 23 + print "" 24 + } 25 + 26 + /^[vA-Z0-9_]+$/ { 27 + printf("#define ARM64_%-30s\t%d\n", $0, cap_num++) 28 + next 29 + } 30 + 31 + END { 32 + printf("#define ARM64_NCAPS\t\t\t\t%d\n", cap_num) 33 + print "" 34 + print "#endif /* __ASM_CPUCAPS_H */" 35 + } 36 + 37 + # Any lines not handled by previous rules are unexpected 38 + { 39 + fatal("unhandled statement") 40 + }
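
For reference, the header this script generates from the list above would look roughly like this (abridged; each value is the name's position in the sorted list, 61 entries here):

#ifndef __ASM_CPUCAPS_H
#define __ASM_CPUCAPS_H

/* Generated file - do not edit */

#define ARM64_BTI                               0
#define ARM64_HAS_32BIT_EL0                     1
/* ... one define per line of arch/arm64/tools/cpucaps ... */
#define ARM64_WORKAROUND_SPECULATIVE_AT         60
#define ARM64_NCAPS                             61

#endif /* __ASM_CPUCAPS_H */
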
+1 -1
arch/ia64/kernel/syscalls/syscall.tbl
··· 363 363 440 common process_madvise sys_process_madvise 364 364 441 common epoll_pwait2 sys_epoll_pwait2 365 365 442 common mount_setattr sys_mount_setattr 366 - 443 common quotactl_path sys_quotactl_path 366 + # 443 reserved for quotactl_path 367 367 444 common landlock_create_ruleset sys_landlock_create_ruleset 368 368 445 common landlock_add_rule sys_landlock_add_rule 369 369 446 common landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/m68k/kernel/syscalls/syscall.tbl
··· 442 442 440 common process_madvise sys_process_madvise 443 443 441 common epoll_pwait2 sys_epoll_pwait2 444 444 442 common mount_setattr sys_mount_setattr 445 - 443 common quotactl_path sys_quotactl_path 445 + # 443 reserved for quotactl_path 446 446 444 common landlock_create_ruleset sys_landlock_create_ruleset 447 447 445 common landlock_add_rule sys_landlock_add_rule 448 448 446 common landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/microblaze/kernel/syscalls/syscall.tbl
··· 448 448 440 common process_madvise sys_process_madvise 449 449 441 common epoll_pwait2 sys_epoll_pwait2 450 450 442 common mount_setattr sys_mount_setattr 451 - 443 common quotactl_path sys_quotactl_path 451 + # 443 reserved for quotactl_path 452 452 444 common landlock_create_ruleset sys_landlock_create_ruleset 453 453 445 common landlock_add_rule sys_landlock_add_rule 454 454 446 common landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/mips/kernel/syscalls/syscall_n32.tbl
··· 381 381 440 n32 process_madvise sys_process_madvise 382 382 441 n32 epoll_pwait2 compat_sys_epoll_pwait2 383 383 442 n32 mount_setattr sys_mount_setattr 384 - 443 n32 quotactl_path sys_quotactl_path 384 + # 443 reserved for quotactl_path 385 385 444 n32 landlock_create_ruleset sys_landlock_create_ruleset 386 386 445 n32 landlock_add_rule sys_landlock_add_rule 387 387 446 n32 landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/mips/kernel/syscalls/syscall_n64.tbl
··· 357 357 440 n64 process_madvise sys_process_madvise 358 358 441 n64 epoll_pwait2 sys_epoll_pwait2 359 359 442 n64 mount_setattr sys_mount_setattr 360 - 443 n64 quotactl_path sys_quotactl_path 360 + # 443 reserved for quotactl_path 361 361 444 n64 landlock_create_ruleset sys_landlock_create_ruleset 362 362 445 n64 landlock_add_rule sys_landlock_add_rule 363 363 446 n64 landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/mips/kernel/syscalls/syscall_o32.tbl
··· 430 430 440 o32 process_madvise sys_process_madvise 431 431 441 o32 epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 432 432 442 o32 mount_setattr sys_mount_setattr 433 - 443 o32 quotactl_path sys_quotactl_path 433 + # 443 reserved for quotactl_path 434 434 444 o32 landlock_create_ruleset sys_landlock_create_ruleset 435 435 445 o32 landlock_add_rule sys_landlock_add_rule 436 436 446 o32 landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/parisc/kernel/syscalls/syscall.tbl
··· 440 440 440 common process_madvise sys_process_madvise 441 441 441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 442 442 442 common mount_setattr sys_mount_setattr 443 - 443 common quotactl_path sys_quotactl_path 443 + # 443 reserved for quotactl_path 444 444 444 common landlock_create_ruleset sys_landlock_create_ruleset 445 445 445 common landlock_add_rule sys_landlock_add_rule 446 446 446 common landlock_restrict_self sys_landlock_restrict_self
+3
arch/powerpc/include/asm/hvcall.h
··· 448 448 */ 449 449 long plpar_hcall_norets(unsigned long opcode, ...); 450 450 451 + /* Variant which does not do hcall tracing */ 452 + long plpar_hcall_norets_notrace(unsigned long opcode, ...); 453 + 451 454 /** 452 455 * plpar_hcall: - Make a pseries hypervisor call 453 456 * @opcode: The hypervisor call to make.
+7 -2
arch/powerpc/include/asm/interrupt.h
··· 153 153 */ 154 154 static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt_state *state) 155 155 { 156 - if (user_mode(regs)) 157 - kuep_unlock(); 158 156 } 159 157 160 158 static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state) ··· 219 221 */ 220 222 local_paca->irq_soft_mask = IRQS_ALL_DISABLED; 221 223 local_paca->irq_happened |= PACA_IRQ_HARD_DIS; 224 + 225 + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !(regs->msr & MSR_PR) && 226 + regs->nip < (unsigned long)__end_interrupts) { 227 + // Kernel code running below __end_interrupts is 228 + // implicitly soft-masked. 229 + regs->softe = IRQS_ALL_DISABLED; 230 + } 222 231 223 232 /* Don't do any per-CPU operations until interrupt state is fixed */ 224 233
+19 -3
arch/powerpc/include/asm/paravirt.h
··· 28 28 return be32_to_cpu(yield_count); 29 29 } 30 30 31 + /* 32 + * Spinlock code confers and prods, so don't trace the hcalls because the 33 + * tracing code takes spinlocks which can cause recursion deadlocks. 34 + * 35 + * These calls are made while the lock is not held: the lock slowpath yields if 36 + * it cannot acquire the lock, and the unlock slow path might prod if a waiter has 37 + * yielded. So this may not be a problem for simple spin locks because the 38 + * tracing does not technically recurse on the lock, but we avoid it anyway. 39 + * 40 + * However the queued spin lock contended path is more strictly ordered: the 41 + * H_CONFER hcall is made after the task has queued itself on the lock, so then 42 + * recursing on that lock will cause the task to then queue up again behind the 43 + * first instance (or worse: queued spinlocks use tricks that assume a context 44 + * never waits on more than one spinlock, so such recursion may cause random 45 + * corruption in the lock code). 46 + */ 31 47 static inline void yield_to_preempted(int cpu, u32 yield_count) 32 48 { 33 - plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count); 49 + plpar_hcall_norets_notrace(H_CONFER, get_hard_smp_processor_id(cpu), yield_count); 34 50 } 35 51 36 52 static inline void prod_cpu(int cpu) 37 53 { 38 - plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu)); 54 + plpar_hcall_norets_notrace(H_PROD, get_hard_smp_processor_id(cpu)); 39 55 } 40 56 41 57 static inline void yield_to_any(void) 42 58 { 43 - plpar_hcall_norets(H_CONFER, -1, 0); 59 + plpar_hcall_norets_notrace(H_CONFER, -1, 0); 44 60 } 45 61 #else 46 62 static inline bool is_shared_processor(void)
+5 -1
arch/powerpc/include/asm/plpar_wrappers.h
··· 28 28 29 29 static inline long cede_processor(void) 30 30 { 31 - return plpar_hcall_norets(H_CEDE); 31 + /* 32 + * We cannot call tracepoints inside RCU idle regions which 33 + * means we must not trace H_CEDE. 34 + */ 35 + return plpar_hcall_norets_notrace(H_CEDE); 32 36 } 33 37 34 38 static inline long extended_cede_processor(unsigned long latency_hint)
+1 -1
arch/powerpc/include/asm/uaccess.h
··· 157 157 "2: lwz%X1 %L0, %L1\n" \ 158 158 EX_TABLE(1b, %l2) \ 159 159 EX_TABLE(2b, %l2) \ 160 - : "=r" (x) \ 160 + : "=&r" (x) \ 161 161 : "m" (*addr) \ 162 162 : \ 163 163 : label)
+24 -14
arch/powerpc/kernel/exceptions-64e.S
··· 340 340 andi. r10,r10,IRQS_DISABLED; /* yes -> go out of line */ \ 341 341 bne masked_interrupt_book3e_##n 342 342 343 + /* 344 + * Additional regs must be re-loaded from paca before EXCEPTION_COMMON* is 345 + * called, because that does SAVE_NVGPRS which must see the original register 346 + * values, otherwise the scratch values might be restored when exiting the 347 + * interrupt. 348 + */ 343 349 #define PROLOG_ADDITION_2REGS_GEN(n) \ 344 350 std r14,PACA_EXGEN+EX_R14(r13); \ 345 351 std r15,PACA_EXGEN+EX_R15(r13) ··· 541 535 PROLOG_ADDITION_2REGS) 542 536 mfspr r14,SPRN_DEAR 543 537 mfspr r15,SPRN_ESR 538 + std r14,_DAR(r1) 539 + std r15,_DSISR(r1) 540 + ld r14,PACA_EXGEN+EX_R14(r13) 541 + ld r15,PACA_EXGEN+EX_R15(r13) 544 542 EXCEPTION_COMMON(0x300) 545 543 b storage_fault_common 546 544 ··· 554 544 PROLOG_ADDITION_2REGS) 555 545 li r15,0 556 546 mr r14,r10 547 + std r14,_DAR(r1) 548 + std r15,_DSISR(r1) 549 + ld r14,PACA_EXGEN+EX_R14(r13) 550 + ld r15,PACA_EXGEN+EX_R15(r13) 557 551 EXCEPTION_COMMON(0x400) 558 552 b storage_fault_common 559 553 ··· 571 557 PROLOG_ADDITION_2REGS) 572 558 mfspr r14,SPRN_DEAR 573 559 mfspr r15,SPRN_ESR 560 + std r14,_DAR(r1) 561 + std r15,_DSISR(r1) 562 + ld r14,PACA_EXGEN+EX_R14(r13) 563 + ld r15,PACA_EXGEN+EX_R15(r13) 574 564 EXCEPTION_COMMON(0x600) 575 565 b alignment_more /* no room, go out of line */ 576 566 ··· 583 565 NORMAL_EXCEPTION_PROLOG(0x700, BOOKE_INTERRUPT_PROGRAM, 584 566 PROLOG_ADDITION_1REG) 585 567 mfspr r14,SPRN_ESR 586 - EXCEPTION_COMMON(0x700) 587 568 std r14,_DSISR(r1) 588 - addi r3,r1,STACK_FRAME_OVERHEAD 589 569 ld r14,PACA_EXGEN+EX_R14(r13) 570 + EXCEPTION_COMMON(0x700) 571 + addi r3,r1,STACK_FRAME_OVERHEAD 590 572 bl program_check_exception 591 573 REST_NVGPRS(r1) 592 574 b interrupt_return ··· 743 725 * normal exception 744 726 */ 745 727 mfspr r14,SPRN_DBSR 746 - EXCEPTION_COMMON_CRIT(0xd00) 747 728 std r14,_DSISR(r1) 748 - addi r3,r1,STACK_FRAME_OVERHEAD 749 729 ld r14,PACA_EXCRIT+EX_R14(r13) 750 730 ld r15,PACA_EXCRIT+EX_R15(r13) 731 + EXCEPTION_COMMON_CRIT(0xd00) 732 + addi r3,r1,STACK_FRAME_OVERHEAD 751 733 bl DebugException 752 734 REST_NVGPRS(r1) 753 735 b interrupt_return ··· 814 796 * normal exception 815 797 */ 816 798 mfspr r14,SPRN_DBSR 817 - EXCEPTION_COMMON_DBG(0xd08) 818 799 std r14,_DSISR(r1) 819 - addi r3,r1,STACK_FRAME_OVERHEAD 820 800 ld r14,PACA_EXDBG+EX_R14(r13) 821 801 ld r15,PACA_EXDBG+EX_R15(r13) 802 + EXCEPTION_COMMON_DBG(0xd08) 803 + addi r3,r1,STACK_FRAME_OVERHEAD 822 804 bl DebugException 823 805 REST_NVGPRS(r1) 824 806 b interrupt_return ··· 949 931 * original values stashed away in the PACA 950 932 */ 951 933 storage_fault_common: 952 - std r14,_DAR(r1) 953 - std r15,_DSISR(r1) 954 934 addi r3,r1,STACK_FRAME_OVERHEAD 955 - ld r14,PACA_EXGEN+EX_R14(r13) 956 - ld r15,PACA_EXGEN+EX_R15(r13) 957 935 bl do_page_fault 958 936 b interrupt_return 959 937 ··· 958 944 * continues here. 959 945 */ 960 946 alignment_more: 961 - std r14,_DAR(r1) 962 - std r15,_DSISR(r1) 963 947 addi r3,r1,STACK_FRAME_OVERHEAD 964 - ld r14,PACA_EXGEN+EX_R14(r13) 965 - ld r15,PACA_EXGEN+EX_R15(r13) 966 948 bl alignment_exception 967 949 REST_NVGPRS(r1) 968 950 b interrupt_return
+1 -3
arch/powerpc/kernel/interrupt.c
··· 34 34 syscall_fn f; 35 35 36 36 kuep_lock(); 37 - #ifdef CONFIG_PPC32 38 - kuap_save_and_lock(regs); 39 - #endif 40 37 41 38 regs->orig_gpr3 = r3; 42 39 ··· 424 427 425 428 /* Restore user access locks last */ 426 429 kuap_user_restore(regs); 430 + kuep_unlock(); 427 431 428 432 return ret; 429 433 }
+5 -2
arch/powerpc/kernel/legacy_serial.c
··· 356 356 357 357 static int __init ioremap_legacy_serial_console(void) 358 358 { 359 - struct legacy_serial_info *info = &legacy_serial_infos[legacy_serial_console]; 360 - struct plat_serial8250_port *port = &legacy_serial_ports[legacy_serial_console]; 359 + struct plat_serial8250_port *port; 360 + struct legacy_serial_info *info; 361 361 void __iomem *vaddr; 362 362 363 363 if (legacy_serial_console < 0) 364 364 return 0; 365 + 366 + info = &legacy_serial_infos[legacy_serial_console]; 367 + port = &legacy_serial_ports[legacy_serial_console]; 365 368 366 369 if (!info->early_addr) 367 370 return 0;
+2 -2
arch/powerpc/kernel/signal.h
··· 166 166 } 167 167 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ 168 168 #else 169 - #define unsafe_copy_fpr_to_user(to, task, label) do { } while (0) 169 + #define unsafe_copy_fpr_to_user(to, task, label) do { if (0) goto label;} while (0) 170 170 171 - #define unsafe_copy_fpr_from_user(task, from, label) do { } while (0) 171 + #define unsafe_copy_fpr_from_user(task, from, label) do { if (0) goto label;} while (0) 172 172 173 173 static inline unsigned long 174 174 copy_fpr_to_user(void __user *to, struct task_struct *task)
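The odd-looking do { if (0) goto label; } while (0) bodies above replace empty stubs so that the label argument is always referenced: the branch is provably dead and compiles to nothing, but it keeps configurations that compile out the FP code from hitting an unused-label warning (or a build break under -Werror) at call sites whose only use of the label was this macro. A hypothetical illustration of the idiom:

    #define unsafe_op_nop(label) do { if (0) goto label; } while (0)

    long demo(void)
    {
            unsafe_op_nop(efault);  /* possibly the label's only use */
            return 0;
    efault:                         /* still "used"; no code is emitted for it */
            return -EFAULT;
    }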
+1 -1
arch/powerpc/kernel/syscalls/syscall.tbl
··· 522 522 440 common process_madvise sys_process_madvise 523 523 441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 524 524 442 common mount_setattr sys_mount_setattr 525 - 443 common quotactl_path sys_quotactl_path 525 + # 443 reserved for quotactl_path 526 526 444 common landlock_create_ruleset sys_landlock_create_ruleset 527 527 445 common landlock_add_rule sys_landlock_add_rule 528 528 446 common landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/powerpc/kvm/book3s_64_mmu_hv.c
··· 840 840 kvm_unmap_radix(kvm, range->slot, gfn); 841 841 } else { 842 842 for (gfn = range->start; gfn < range->end; gfn++) 843 - kvm_unmap_rmapp(kvm, range->slot, range->start); 843 + kvm_unmap_rmapp(kvm, range->slot, gfn); 844 844 } 845 845 846 846 return false;
+85 -29
arch/powerpc/lib/feature-fixups.c
··· 14 14 #include <linux/string.h> 15 15 #include <linux/init.h> 16 16 #include <linux/sched/mm.h> 17 + #include <linux/stop_machine.h> 17 18 #include <asm/cputable.h> 18 19 #include <asm/code-patching.h> 19 20 #include <asm/page.h> ··· 150 149 151 150 pr_devel("patching dest %lx\n", (unsigned long)dest); 152 151 153 - patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 154 - 155 - if (types & STF_BARRIER_FALLBACK) 152 + // See comment in do_entry_flush_fixups() RE order of patching 153 + if (types & STF_BARRIER_FALLBACK) { 154 + patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 155 + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 156 156 patch_branch((struct ppc_inst *)(dest + 1), 157 - (unsigned long)&stf_barrier_fallback, 158 - BRANCH_SET_LINK); 159 - else 160 - patch_instruction((struct ppc_inst *)(dest + 1), 161 - ppc_inst(instrs[1])); 162 - 163 - patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 157 + (unsigned long)&stf_barrier_fallback, BRANCH_SET_LINK); 158 + } else { 159 + patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1])); 160 + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 161 + patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 162 + } 164 163 } 165 164 166 165 printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i, ··· 228 227 : "unknown"); 229 228 } 230 229 230 + static int __do_stf_barrier_fixups(void *data) 231 + { 232 + enum stf_barrier_type *types = data; 233 + 234 + do_stf_entry_barrier_fixups(*types); 235 + do_stf_exit_barrier_fixups(*types); 236 + 237 + return 0; 238 + } 231 239 232 240 void do_stf_barrier_fixups(enum stf_barrier_type types) 233 241 { 234 - do_stf_entry_barrier_fixups(types); 235 - do_stf_exit_barrier_fixups(types); 242 + /* 243 + * The call to the fallback entry flush, and the fallback/sync-ori exit 244 + * flush can not be safely patched in/out while other CPUs are executing 245 + * them. So call __do_stf_barrier_fixups() on one CPU while all other CPUs 246 + * spin in the stop machine core with interrupts hard disabled. 247 + */ 248 + stop_machine(__do_stf_barrier_fixups, &types, NULL); 236 249 } 237 250 238 251 void do_uaccess_flush_fixups(enum l1d_flush_type types) ··· 299 284 : "unknown"); 300 285 } 301 286 302 - void do_entry_flush_fixups(enum l1d_flush_type types) 287 + static int __do_entry_flush_fixups(void *data) 303 288 { 289 + enum l1d_flush_type types = *(enum l1d_flush_type *)data; 304 290 unsigned int instrs[3], *dest; 305 291 long *start, *end; 306 292 int i; ··· 325 309 if (types & L1D_FLUSH_MTTRIG) 326 310 instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */ 327 311 312 + /* 313 + * If we're patching in or out the fallback flush we need to be careful about the 314 + * order in which we patch instructions. That's because it's possible we could 315 + * take a page fault after patching one instruction, so the sequence of 316 + * instructions must be safe even in a half patched state. 317 + * 318 + * To make that work, when patching in the fallback flush we patch in this order: 319 + * - the mflr (dest) 320 + * - the mtlr (dest + 2) 321 + * - the branch (dest + 1) 322 + * 323 + * That ensures the sequence is safe to execute at any point. In contrast if we 324 + * patch the mtlr last, it's possible we could return from the branch and not 325 + * restore LR, leading to a crash later. 
326 + * 327 + * When patching out the fallback flush (either with nops or another flush type), 328 + * we patch in this order: 329 + * - the branch (dest + 1) 330 + * - the mtlr (dest + 2) 331 + * - the mflr (dest) 332 + * 333 + * Note we are protected by stop_machine() from other CPUs executing the code in a 334 + * semi-patched state. 335 + */ 336 + 328 337 start = PTRRELOC(&__start___entry_flush_fixup); 329 338 end = PTRRELOC(&__stop___entry_flush_fixup); 330 339 for (i = 0; start < end; start++, i++) { ··· 357 316 358 317 pr_devel("patching dest %lx\n", (unsigned long)dest); 359 318 360 - patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 361 - 362 - if (types == L1D_FLUSH_FALLBACK) 363 - patch_branch((struct ppc_inst *)(dest + 1), (unsigned long)&entry_flush_fallback, 364 - BRANCH_SET_LINK); 365 - else 319 + if (types == L1D_FLUSH_FALLBACK) { 320 + patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 321 + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 322 + patch_branch((struct ppc_inst *)(dest + 1), 323 + (unsigned long)&entry_flush_fallback, BRANCH_SET_LINK); 324 + } else { 366 325 patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1])); 367 - 368 - patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 326 + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 327 + patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 328 + } 369 329 } 370 330 371 331 start = PTRRELOC(&__start___scv_entry_flush_fixup); ··· 376 334 377 335 pr_devel("patching dest %lx\n", (unsigned long)dest); 378 336 379 - patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 380 - 381 - if (types == L1D_FLUSH_FALLBACK) 382 - patch_branch((struct ppc_inst *)(dest + 1), (unsigned long)&scv_entry_flush_fallback, 383 - BRANCH_SET_LINK); 384 - else 337 + if (types == L1D_FLUSH_FALLBACK) { 338 + patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 339 + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 340 + patch_branch((struct ppc_inst *)(dest + 1), 341 + (unsigned long)&scv_entry_flush_fallback, BRANCH_SET_LINK); 342 + } else { 385 343 patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1])); 386 - 387 - patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 344 + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 345 + patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 346 + } 388 347 } 389 348 390 349 ··· 397 354 : "ori type" : 398 355 (types & L1D_FLUSH_MTTRIG) ? "mttrig type" 399 356 : "unknown"); 357 + 358 + return 0; 359 + } 360 + 361 + void do_entry_flush_fixups(enum l1d_flush_type types) 362 + { 363 + /* 364 + * The call to the fallback flush can not be safely patched in/out while 365 + * other CPUs are executing it. So call __do_entry_flush_fixups() on one 366 + * CPU while all other CPUs spin in the stop machine core with interrupts 367 + * hard disabled. 368 + */ 369 + stop_machine(__do_entry_flush_fixups, &types, NULL); 400 370 } 401 371 402 372 void do_rfi_flush_fixups(enum l1d_flush_type types)
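The ordering comment in the hunk above is easier to see laid out as the three instruction slots it describes. Assuming the site previously holds nops (or a non-fallback flush sequence occupying the same three slots), enabling the fallback flush walks through only safe intermediate states:

    dest + 0:   mflr                       <- patched 1st
    dest + 1:   bl entry_flush_fallback    <- patched 3rd
    dest + 2:   mtlr                       <- patched 2nd

After the first two steps the site merely saves and restores LR; the fallback can only run once the branch is written, and by then the save/restore pair is already in place. Patching the fallback out reverses the order for the same reason. Note the ordering protects the patching CPU itself, which can still take a page fault mid-sequence, while stop_machine() keeps the other CPUs from executing the site in a semi-patched state.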
+10
arch/powerpc/platforms/pseries/hvCall.S
··· 102 102 #define HCALL_BRANCH(LABEL) 103 103 #endif 104 104 105 + _GLOBAL_TOC(plpar_hcall_norets_notrace) 106 + HMT_MEDIUM 107 + 108 + mfcr r0 109 + stw r0,8(r1) 110 + HVSC /* invoke the hypervisor */ 111 + lwz r0,8(r1) 112 + mtcrf 0xff,r0 113 + blr /* return r3 = status */ 114 + 105 115 _GLOBAL_TOC(plpar_hcall_norets) 106 116 HMT_MEDIUM 107 117
+12 -17
arch/powerpc/platforms/pseries/lpar.c
··· 1829 1829 #endif 1830 1830 1831 1831 /* 1832 - * Since the tracing code might execute hcalls we need to guard against 1833 - * recursion. One example of this are spinlocks calling H_YIELD on 1834 - * shared processor partitions. 1832 + * Keep track of hcall tracing depth and prevent recursion. Warn if any is 1833 + * detected because it may indicate a problem. This will not catch all 1834 + * problems with tracing code making hcalls, because the tracing might have 1835 + * been invoked from a non-hcall, so the first hcall could recurse into it 1836 + * without warning here, but this is better than nothing. 1837 + * 1838 + * Hcalls with specific problems being traced should use the _notrace 1839 + * plpar_hcall variants. 1835 1840 */ 1836 1841 static DEFINE_PER_CPU(unsigned int, hcall_trace_depth); 1837 1842 1838 1843 1839 - void __trace_hcall_entry(unsigned long opcode, unsigned long *args) 1844 + notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args) 1840 1845 { 1841 1846 unsigned long flags; 1842 1847 unsigned int *depth; 1843 - 1844 - /* 1845 - * We cannot call tracepoints inside RCU idle regions which 1846 - * means we must not trace H_CEDE. 1847 - */ 1848 - if (opcode == H_CEDE) 1849 - return; 1850 1848 1851 1849 local_irq_save(flags); 1852 1850 1853 1851 depth = this_cpu_ptr(&hcall_trace_depth); 1854 1852 1855 - if (*depth) 1853 + if (WARN_ON_ONCE(*depth)) 1856 1854 goto out; 1857 1855 1858 1856 (*depth)++; ··· 1862 1864 local_irq_restore(flags); 1863 1865 } 1864 1866 1865 - void __trace_hcall_exit(long opcode, long retval, unsigned long *retbuf) 1867 + notrace void __trace_hcall_exit(long opcode, long retval, unsigned long *retbuf) 1866 1868 { 1867 1869 unsigned long flags; 1868 1870 unsigned int *depth; 1869 - 1870 - if (opcode == H_CEDE) 1871 - return; 1872 1871 1873 1872 local_irq_save(flags); 1874 1873 1875 1874 depth = this_cpu_ptr(&hcall_trace_depth); 1876 1875 1877 - if (*depth) 1876 + if (*depth) /* Don't warn again on the way out */ 1878 1877 goto out; 1879 1878 1880 1879 (*depth)++;
+1 -1
arch/s390/kernel/syscalls/syscall.tbl
··· 445 445 440 common process_madvise sys_process_madvise sys_process_madvise 446 446 441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 447 447 442 common mount_setattr sys_mount_setattr sys_mount_setattr 448 - 443 common quotactl_path sys_quotactl_path sys_quotactl_path 448 + # 443 reserved for quotactl_path 449 449 444 common landlock_create_ruleset sys_landlock_create_ruleset sys_landlock_create_ruleset 450 450 445 common landlock_add_rule sys_landlock_add_rule sys_landlock_add_rule 451 451 446 common landlock_restrict_self sys_landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/sh/kernel/syscalls/syscall.tbl
··· 445 445 440 common process_madvise sys_process_madvise 446 446 441 common epoll_pwait2 sys_epoll_pwait2 447 447 442 common mount_setattr sys_mount_setattr 448 - 443 common quotactl_path sys_quotactl_path 448 + # 443 reserved for quotactl_path 449 449 444 common landlock_create_ruleset sys_landlock_create_ruleset 450 450 445 common landlock_add_rule sys_landlock_add_rule 451 451 446 common landlock_restrict_self sys_landlock_restrict_self
-1
arch/sh/kernel/traps.c
··· 180 180 181 181 BUILD_TRAP_HANDLER(nmi) 182 182 { 183 - unsigned int cpu = smp_processor_id(); 184 183 TRAP_HANDLER_DECL; 185 184 186 185 arch_ftrace_nmi_enter();
+1 -1
arch/sparc/kernel/syscalls/syscall.tbl
··· 488 488 440 common process_madvise sys_process_madvise 489 489 441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 490 490 442 common mount_setattr sys_mount_setattr 491 - 443 common quotactl_path sys_quotactl_path 491 + # 443 reserved for quotactl_path 492 492 444 common landlock_create_ruleset sys_landlock_create_ruleset 493 493 445 common landlock_add_rule sys_landlock_add_rule 494 494 446 common landlock_restrict_self sys_landlock_restrict_self
+4 -3
arch/x86/boot/compressed/Makefile
··· 30 30 31 31 KBUILD_CFLAGS := -m$(BITS) -O2 32 32 KBUILD_CFLAGS += -fno-strict-aliasing -fPIE 33 + KBUILD_CFLAGS += -Wundef 33 34 KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING 34 35 cflags-$(CONFIG_X86_32) := -march=i386 35 36 cflags-$(CONFIG_X86_64) := -mcmodel=small -mno-red-zone ··· 49 48 KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h 50 49 KBUILD_CFLAGS += $(CLANG_FLAGS) 51 50 52 - # sev-es.c indirectly includes inat-table.h which is generated during 51 + # sev.c indirectly includes inat-table.h which is generated during 53 52 # compilation and stored in $(objtree). Add the directory to the includes so 54 53 # that the compiler finds it even with out-of-tree builds (make O=/some/path). 55 - CFLAGS_sev-es.o += -I$(objtree)/arch/x86/lib/ 54 + CFLAGS_sev.o += -I$(objtree)/arch/x86/lib/ 56 55 57 56 KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__ 58 57 GCOV_PROFILE := n ··· 94 93 vmlinux-objs-y += $(obj)/idt_64.o $(obj)/idt_handlers_64.o 95 94 vmlinux-objs-y += $(obj)/mem_encrypt.o 96 95 vmlinux-objs-y += $(obj)/pgtable_64.o 97 - vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/sev-es.o 96 + vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/sev.o 98 97 endif 99 98 100 99 vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
+1 -1
arch/x86/boot/compressed/misc.c
··· 172 172 } 173 173 } 174 174 175 - #if CONFIG_X86_NEED_RELOCS 175 + #ifdef CONFIG_X86_NEED_RELOCS 176 176 static void handle_relocations(void *output, unsigned long output_len, 177 177 unsigned long virt_addr) 178 178 {
+1 -1
arch/x86/boot/compressed/misc.h
··· 79 79 u64 size; 80 80 }; 81 81 82 - #if CONFIG_RANDOMIZE_BASE 82 + #ifdef CONFIG_RANDOMIZE_BASE 83 83 /* kaslr.c */ 84 84 void choose_random_location(unsigned long input, 85 85 unsigned long input_size,
+2 -2
arch/x86/boot/compressed/sev-es.c arch/x86/boot/compressed/sev.c
··· 13 13 #include "misc.h" 14 14 15 15 #include <asm/pgtable_types.h> 16 - #include <asm/sev-es.h> 16 + #include <asm/sev.h> 17 17 #include <asm/trapnr.h> 18 18 #include <asm/trap_pf.h> 19 19 #include <asm/msr-index.h> ··· 117 117 #include "../../lib/insn.c" 118 118 119 119 /* Include code for early handlers */ 120 - #include "../../kernel/sev-es-shared.c" 120 + #include "../../kernel/sev-shared.c" 121 121 122 122 static bool early_setup_sev_es(void) 123 123 {
+1 -1
arch/x86/entry/syscalls/syscall_32.tbl
··· 447 447 440 i386 process_madvise sys_process_madvise 448 448 441 i386 epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 449 449 442 i386 mount_setattr sys_mount_setattr 450 - 443 i386 quotactl_path sys_quotactl_path 450 + # 443 reserved for quotactl_path 451 451 444 i386 landlock_create_ruleset sys_landlock_create_ruleset 452 452 445 i386 landlock_add_rule sys_landlock_add_rule 453 453 446 i386 landlock_restrict_self sys_landlock_restrict_self
+1 -1
arch/x86/entry/syscalls/syscall_64.tbl
··· 364 364 440 common process_madvise sys_process_madvise 365 365 441 common epoll_pwait2 sys_epoll_pwait2 366 366 442 common mount_setattr sys_mount_setattr 367 - 443 common quotactl_path sys_quotactl_path 367 + # 443 reserved for quotactl_path 368 368 444 common landlock_create_ruleset sys_landlock_create_ruleset 369 369 445 common landlock_add_rule sys_landlock_add_rule 370 370 446 common landlock_restrict_self sys_landlock_restrict_self
+12 -3
arch/x86/include/asm/kvm_host.h
··· 113 113 #define VALID_PAGE(x) ((x) != INVALID_PAGE) 114 114 115 115 #define UNMAPPED_GVA (~(gpa_t)0) 116 + #define INVALID_GPA (~(gpa_t)0) 116 117 117 118 /* KVM Hugepage definitions for x86 */ 118 119 #define KVM_MAX_HUGEPAGE_LEVEL PG_LEVEL_1G ··· 200 199 201 200 #define KVM_NR_DB_REGS 4 202 201 202 + #define DR6_BUS_LOCK (1 << 11) 203 203 #define DR6_BD (1 << 13) 204 204 #define DR6_BS (1 << 14) 205 205 #define DR6_BT (1 << 15) ··· 214 212 * DR6_ACTIVE_LOW is also used as the init/reset value for DR6. 215 213 */ 216 214 #define DR6_ACTIVE_LOW 0xffff0ff0 217 - #define DR6_VOLATILE 0x0001e00f 215 + #define DR6_VOLATILE 0x0001e80f 218 216 #define DR6_FIXED_1 (DR6_ACTIVE_LOW & ~DR6_VOLATILE) 219 217 220 218 #define DR7_BP_EN_MASK 0x000000ff ··· 409 407 u32 pkru_mask; 410 408 411 409 u64 *pae_root; 412 - u64 *lm_root; 410 + u64 *pml4_root; 413 411 414 412 /* 415 413 * check zero bits on shadow page table entries, these ··· 1419 1417 bool direct_map; 1420 1418 }; 1421 1419 1420 + extern u32 __read_mostly kvm_nr_uret_msrs; 1422 1421 extern u64 __read_mostly host_efer; 1423 1422 extern bool __read_mostly allow_smaller_maxphyaddr; 1424 1423 extern struct kvm_x86_ops kvm_x86_ops; ··· 1778 1775 unsigned long ipi_bitmap_high, u32 min, 1779 1776 unsigned long icr, int op_64_bit); 1780 1777 1781 - void kvm_define_user_return_msr(unsigned index, u32 msr); 1778 + int kvm_add_user_return_msr(u32 msr); 1779 + int kvm_find_user_return_msr(u32 msr); 1782 1780 int kvm_set_user_return_msr(unsigned index, u64 val, u64 mask); 1781 + 1782 + static inline bool kvm_is_supported_user_return_msr(u32 msr) 1783 + { 1784 + return kvm_find_user_return_msr(msr) >= 0; 1785 + } 1783 1786 1784 1787 u64 kvm_scale_tsc(struct kvm_vcpu *vcpu, u64 tsc); 1785 1788 u64 kvm_read_l1_tsc(struct kvm_vcpu *vcpu, u64 host_tsc);
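The DR6 changes above can be checked by expanding the macros: BUS_LOCK is bit 11, so the volatile mask gains exactly one bit and the fixed-1 value loses it (a worked expansion, values taken straight from the definitions):

    DR6_BUS_LOCK        = 1 << 11                    = 0x00000800
    DR6_VOLATILE (old)                               = 0x0001e00f
    DR6_VOLATILE (new)  = 0x0001e00f | 0x00000800    = 0x0001e80f
    DR6_FIXED_1         = 0xffff0ff0 & ~0x0001e80f   = 0xfffe07f0

That is, bit 11 drops out of the must-be-one set, which is what allows it to carry live bus-lock status instead of always reading as 1.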
+2 -8
arch/x86/include/asm/kvm_para.h
··· 7 7 #include <linux/interrupt.h> 8 8 #include <uapi/asm/kvm_para.h> 9 9 10 - extern void kvmclock_init(void); 11 - 12 10 #ifdef CONFIG_KVM_GUEST 13 11 bool kvm_check_and_clear_guest_paused(void); 14 12 #else ··· 84 86 } 85 87 86 88 #ifdef CONFIG_KVM_GUEST 89 + void kvmclock_init(void); 90 + void kvmclock_disable(void); 87 91 bool kvm_para_available(void); 88 92 unsigned int kvm_arch_para_features(void); 89 93 unsigned int kvm_arch_para_hints(void); 90 94 void kvm_async_pf_task_wait_schedule(u32 token); 91 95 void kvm_async_pf_task_wake(u32 token); 92 96 u32 kvm_read_and_reset_apf_flags(void); 93 - void kvm_disable_steal_time(void); 94 97 bool __kvm_handle_async_pf(struct pt_regs *regs, u32 token); 95 98 96 99 DECLARE_STATIC_KEY_FALSE(kvm_async_pf_enabled); ··· 134 135 static inline u32 kvm_read_and_reset_apf_flags(void) 135 136 { 136 137 return 0; 137 - } 138 - 139 - static inline void kvm_disable_steal_time(void) 140 - { 141 - return; 142 138 } 143 139 144 140 static __always_inline bool kvm_handle_async_pf(struct pt_regs *regs, u32 token)
+3 -3
arch/x86/include/asm/msr-index.h
··· 537 537 /* K8 MSRs */ 538 538 #define MSR_K8_TOP_MEM1 0xc001001a 539 539 #define MSR_K8_TOP_MEM2 0xc001001d 540 - #define MSR_K8_SYSCFG 0xc0010010 541 - #define MSR_K8_SYSCFG_MEM_ENCRYPT_BIT 23 542 - #define MSR_K8_SYSCFG_MEM_ENCRYPT BIT_ULL(MSR_K8_SYSCFG_MEM_ENCRYPT_BIT) 540 + #define MSR_AMD64_SYSCFG 0xc0010010 541 + #define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23 542 + #define MSR_AMD64_SYSCFG_MEM_ENCRYPT BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT) 543 543 #define MSR_K8_INT_PENDING_MSG 0xc0010055 544 544 /* C1E active bits in int pending message */ 545 545 #define K8_INTP_C1E_ACTIVE_MASK 0x18000000
+2
arch/x86/include/asm/processor.h
··· 787 787 788 788 #ifdef CONFIG_CPU_SUP_AMD 789 789 extern u32 amd_get_nodes_per_socket(void); 790 + extern u32 amd_get_highest_perf(void); 790 791 #else 791 792 static inline u32 amd_get_nodes_per_socket(void) { return 0; } 793 + static inline u32 amd_get_highest_perf(void) { return 0; } 792 794 #endif 793 795 794 796 static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
+62
arch/x86/include/asm/sev-common.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * AMD SEV header common between the guest and the hypervisor. 4 + * 5 + * Author: Brijesh Singh <brijesh.singh@amd.com> 6 + */ 7 + 8 + #ifndef __ASM_X86_SEV_COMMON_H 9 + #define __ASM_X86_SEV_COMMON_H 10 + 11 + #define GHCB_MSR_INFO_POS 0 12 + #define GHCB_MSR_INFO_MASK (BIT_ULL(12) - 1) 13 + 14 + #define GHCB_MSR_SEV_INFO_RESP 0x001 15 + #define GHCB_MSR_SEV_INFO_REQ 0x002 16 + #define GHCB_MSR_VER_MAX_POS 48 17 + #define GHCB_MSR_VER_MAX_MASK 0xffff 18 + #define GHCB_MSR_VER_MIN_POS 32 19 + #define GHCB_MSR_VER_MIN_MASK 0xffff 20 + #define GHCB_MSR_CBIT_POS 24 21 + #define GHCB_MSR_CBIT_MASK 0xff 22 + #define GHCB_MSR_SEV_INFO(_max, _min, _cbit) \ 23 + ((((_max) & GHCB_MSR_VER_MAX_MASK) << GHCB_MSR_VER_MAX_POS) | \ 24 + (((_min) & GHCB_MSR_VER_MIN_MASK) << GHCB_MSR_VER_MIN_POS) | \ 25 + (((_cbit) & GHCB_MSR_CBIT_MASK) << GHCB_MSR_CBIT_POS) | \ 26 + GHCB_MSR_SEV_INFO_RESP) 27 + #define GHCB_MSR_INFO(v) ((v) & 0xfffUL) 28 + #define GHCB_MSR_PROTO_MAX(v) (((v) >> GHCB_MSR_VER_MAX_POS) & GHCB_MSR_VER_MAX_MASK) 29 + #define GHCB_MSR_PROTO_MIN(v) (((v) >> GHCB_MSR_VER_MIN_POS) & GHCB_MSR_VER_MIN_MASK) 30 + 31 + #define GHCB_MSR_CPUID_REQ 0x004 32 + #define GHCB_MSR_CPUID_RESP 0x005 33 + #define GHCB_MSR_CPUID_FUNC_POS 32 34 + #define GHCB_MSR_CPUID_FUNC_MASK 0xffffffff 35 + #define GHCB_MSR_CPUID_VALUE_POS 32 36 + #define GHCB_MSR_CPUID_VALUE_MASK 0xffffffff 37 + #define GHCB_MSR_CPUID_REG_POS 30 38 + #define GHCB_MSR_CPUID_REG_MASK 0x3 39 + #define GHCB_CPUID_REQ_EAX 0 40 + #define GHCB_CPUID_REQ_EBX 1 41 + #define GHCB_CPUID_REQ_ECX 2 42 + #define GHCB_CPUID_REQ_EDX 3 43 + #define GHCB_CPUID_REQ(fn, reg) \ 44 + (GHCB_MSR_CPUID_REQ | \ 45 + (((unsigned long)reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS) | \ 46 + (((unsigned long)fn) << GHCB_MSR_CPUID_FUNC_POS)) 47 + 48 + #define GHCB_MSR_TERM_REQ 0x100 49 + #define GHCB_MSR_TERM_REASON_SET_POS 12 50 + #define GHCB_MSR_TERM_REASON_SET_MASK 0xf 51 + #define GHCB_MSR_TERM_REASON_POS 16 52 + #define GHCB_MSR_TERM_REASON_MASK 0xff 53 + #define GHCB_SEV_TERM_REASON(reason_set, reason_val) \ 54 + (((((u64)reason_set) & GHCB_MSR_TERM_REASON_SET_MASK) << GHCB_MSR_TERM_REASON_SET_POS) | \ 55 + ((((u64)reason_val) & GHCB_MSR_TERM_REASON_MASK) << GHCB_MSR_TERM_REASON_POS)) 56 + 57 + #define GHCB_SEV_ES_REASON_GENERAL_REQUEST 0 58 + #define GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED 1 59 + 60 + #define GHCB_RESP_CODE(v) ((v) & GHCB_MSR_INFO_MASK) 61 + 62 + #endif
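A worked example of the request encoding, built only from the macros above (leaf and register chosen arbitrarily for illustration):

    /* Ask the hypervisor for CPUID leaf 1, register EBX. */
    u64 req = GHCB_CPUID_REQ(0x1, GHCB_CPUID_REQ_EBX);
    /*
     *   GHCB_MSR_CPUID_REQ                   0x0000000000000004
     * | (GHCB_CPUID_REQ_EBX & 0x3) << 30   = 0x0000000040000000
     * | (unsigned long)0x1 << 32           = 0x0000000100000000
     *                                      = 0x0000000140000004
     */

The hypervisor replies with GHCB_RESP_CODE(v) == GHCB_MSR_CPUID_RESP (0x005) in the low 12 bits and the requested register value in the upper 32 bits; the consuming side of this protocol is visible in the sev-shared.c hunk further down.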
+4 -26
arch/x86/include/asm/sev-es.h arch/x86/include/asm/sev.h
··· 10 10 11 11 #include <linux/types.h> 12 12 #include <asm/insn.h> 13 + #include <asm/sev-common.h> 13 14 14 - #define GHCB_SEV_INFO 0x001UL 15 - #define GHCB_SEV_INFO_REQ 0x002UL 16 - #define GHCB_INFO(v) ((v) & 0xfffUL) 17 - #define GHCB_PROTO_MAX(v) (((v) >> 48) & 0xffffUL) 18 - #define GHCB_PROTO_MIN(v) (((v) >> 32) & 0xffffUL) 19 - #define GHCB_PROTO_OUR 0x0001UL 20 - #define GHCB_SEV_CPUID_REQ 0x004UL 21 - #define GHCB_CPUID_REQ_EAX 0 22 - #define GHCB_CPUID_REQ_EBX 1 23 - #define GHCB_CPUID_REQ_ECX 2 24 - #define GHCB_CPUID_REQ_EDX 3 25 - #define GHCB_CPUID_REQ(fn, reg) (GHCB_SEV_CPUID_REQ | \ 26 - (((unsigned long)reg & 3) << 30) | \ 27 - (((unsigned long)fn) << 32)) 15 + #define GHCB_PROTO_OUR 0x0001UL 16 + #define GHCB_PROTOCOL_MAX 1ULL 17 + #define GHCB_DEFAULT_USAGE 0ULL 28 18 29 - #define GHCB_PROTOCOL_MAX 0x0001UL 30 - #define GHCB_DEFAULT_USAGE 0x0000UL 31 - 32 - #define GHCB_SEV_CPUID_RESP 0x005UL 33 - #define GHCB_SEV_TERMINATE 0x100UL 34 - #define GHCB_SEV_TERMINATE_REASON(reason_set, reason_val) \ 35 - (((((u64)reason_set) & 0x7) << 12) | \ 36 - ((((u64)reason_val) & 0xff) << 16)) 37 - #define GHCB_SEV_ES_REASON_GENERAL_REQUEST 0 38 - #define GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED 1 39 - 40 - #define GHCB_SEV_GHCB_RESP_CODE(v) ((v) & 0xfff) 41 19 #define VMGEXIT() { asm volatile("rep; vmmcall\n\r"); } 42 20 43 21 enum es_result {
+2
arch/x86/include/asm/vdso/clocksource.h
··· 7 7 VDSO_CLOCKMODE_PVCLOCK, \ 8 8 VDSO_CLOCKMODE_HVCLOCK 9 9 10 + #define HAVE_VDSO_CLOCKMODE_HVCLOCK 11 + 10 12 #endif /* __ASM_VDSO_CLOCKSOURCE_H */
+2
arch/x86/include/uapi/asm/kvm.h
··· 437 437 __u16 flags; 438 438 } smm; 439 439 440 + __u16 pad; 441 + 440 442 __u32 flags; 441 443 __u64 preemption_timer_deadline; 442 444 };
+3 -3
arch/x86/kernel/Makefile
··· 20 20 CFLAGS_REMOVE_ftrace.o = -pg 21 21 CFLAGS_REMOVE_early_printk.o = -pg 22 22 CFLAGS_REMOVE_head64.o = -pg 23 - CFLAGS_REMOVE_sev-es.o = -pg 23 + CFLAGS_REMOVE_sev.o = -pg 24 24 endif 25 25 26 26 KASAN_SANITIZE_head$(BITS).o := n ··· 28 28 KASAN_SANITIZE_dumpstack_$(BITS).o := n 29 29 KASAN_SANITIZE_stacktrace.o := n 30 30 KASAN_SANITIZE_paravirt.o := n 31 - KASAN_SANITIZE_sev-es.o := n 31 + KASAN_SANITIZE_sev.o := n 32 32 33 33 # With some compiler versions the generated code results in boot hangs, caused 34 34 # by several compilation units. To be safe, disable all instrumentation. ··· 148 148 obj-$(CONFIG_UNWINDER_FRAME_POINTER) += unwind_frame.o 149 149 obj-$(CONFIG_UNWINDER_GUESS) += unwind_guess.o 150 150 151 - obj-$(CONFIG_AMD_MEM_ENCRYPT) += sev-es.o 151 + obj-$(CONFIG_AMD_MEM_ENCRYPT) += sev.o 152 152 ### 153 153 # 64 bit specific files 154 154 ifeq ($(CONFIG_X86_64),y)
+18 -2
arch/x86/kernel/cpu/amd.c
··· 593 593 */ 594 594 if (cpu_has(c, X86_FEATURE_SME) || cpu_has(c, X86_FEATURE_SEV)) { 595 595 /* Check if memory encryption is enabled */ 596 - rdmsrl(MSR_K8_SYSCFG, msr); 597 - if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT)) 596 + rdmsrl(MSR_AMD64_SYSCFG, msr); 597 + if (!(msr & MSR_AMD64_SYSCFG_MEM_ENCRYPT)) 598 598 goto clear_all; 599 599 600 600 /* ··· 1165 1165 break; 1166 1166 } 1167 1167 } 1168 + 1169 + u32 amd_get_highest_perf(void) 1170 + { 1171 + struct cpuinfo_x86 *c = &boot_cpu_data; 1172 + 1173 + if (c->x86 == 0x17 && ((c->x86_model >= 0x30 && c->x86_model < 0x40) || 1174 + (c->x86_model >= 0x70 && c->x86_model < 0x80))) 1175 + return 166; 1176 + 1177 + if (c->x86 == 0x19 && ((c->x86_model >= 0x20 && c->x86_model < 0x30) || 1178 + (c->x86_model >= 0x40 && c->x86_model < 0x70))) 1179 + return 166; 1180 + 1181 + return 255; 1182 + } 1183 + EXPORT_SYMBOL_GPL(amd_get_highest_perf);
+1 -1
arch/x86/kernel/cpu/mtrr/cleanup.c
··· 836 836 if (boot_cpu_data.x86 < 0xf) 837 837 return 0; 838 838 /* In case some hypervisor doesn't pass SYSCFG through: */ 839 - if (rdmsr_safe(MSR_K8_SYSCFG, &l, &h) < 0) 839 + if (rdmsr_safe(MSR_AMD64_SYSCFG, &l, &h) < 0) 840 840 return 0; 841 841 /* 842 842 * Memory between 4GB and top of mem is forced WB by this magic bit.
+2 -2
arch/x86/kernel/cpu/mtrr/generic.c
··· 53 53 (boot_cpu_data.x86 >= 0x0f))) 54 54 return; 55 55 56 - rdmsr(MSR_K8_SYSCFG, lo, hi); 56 + rdmsr(MSR_AMD64_SYSCFG, lo, hi); 57 57 if (lo & K8_MTRRFIXRANGE_DRAM_MODIFY) { 58 58 pr_err(FW_WARN "MTRR: CPU %u: SYSCFG[MtrrFixDramModEn]" 59 59 " not cleared by BIOS, clearing this bit\n", 60 60 smp_processor_id()); 61 61 lo &= ~K8_MTRRFIXRANGE_DRAM_MODIFY; 62 - mtrr_wrmsr(MSR_K8_SYSCFG, lo, hi); 62 + mtrr_wrmsr(MSR_AMD64_SYSCFG, lo, hi); 63 63 } 64 64 } 65 65
+1 -1
arch/x86/kernel/head64.c
··· 39 39 #include <asm/realmode.h> 40 40 #include <asm/extable.h> 41 41 #include <asm/trapnr.h> 42 - #include <asm/sev-es.h> 42 + #include <asm/sev.h> 43 43 44 44 /* 45 45 * Manage page tables very early on.
+88 -51
arch/x86/kernel/kvm.c
··· 26 26 #include <linux/kprobes.h> 27 27 #include <linux/nmi.h> 28 28 #include <linux/swait.h> 29 + #include <linux/syscore_ops.h> 29 30 #include <asm/timer.h> 30 31 #include <asm/cpu.h> 31 32 #include <asm/traps.h> ··· 38 37 #include <asm/tlb.h> 39 38 #include <asm/cpuidle_haltpoll.h> 40 39 #include <asm/ptrace.h> 40 + #include <asm/reboot.h> 41 41 #include <asm/svm.h> 42 42 43 43 DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled); ··· 347 345 348 346 wrmsrl(MSR_KVM_ASYNC_PF_EN, pa); 349 347 __this_cpu_write(apf_reason.enabled, 1); 350 - pr_info("KVM setup async PF for cpu %d\n", smp_processor_id()); 348 + pr_info("setup async PF for cpu %d\n", smp_processor_id()); 351 349 } 352 350 353 351 if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) { ··· 373 371 wrmsrl(MSR_KVM_ASYNC_PF_EN, 0); 374 372 __this_cpu_write(apf_reason.enabled, 0); 375 373 376 - pr_info("Unregister pv shared memory for cpu %d\n", smp_processor_id()); 374 + pr_info("disable async PF for cpu %d\n", smp_processor_id()); 377 375 } 378 376 379 - static void kvm_pv_guest_cpu_reboot(void *unused) 377 + static void kvm_disable_steal_time(void) 380 378 { 381 - /* 382 - * We disable PV EOI before we load a new kernel by kexec, 383 - * since MSR_KVM_PV_EOI_EN stores a pointer into old kernel's memory. 384 - * New kernel can re-enable when it boots. 385 - */ 386 - if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) 387 - wrmsrl(MSR_KVM_PV_EOI_EN, 0); 388 - kvm_pv_disable_apf(); 389 - kvm_disable_steal_time(); 390 - } 379 + if (!has_steal_clock) 380 + return; 391 381 392 - static int kvm_pv_reboot_notify(struct notifier_block *nb, 393 - unsigned long code, void *unused) 394 - { 395 - if (code == SYS_RESTART) 396 - on_each_cpu(kvm_pv_guest_cpu_reboot, NULL, 1); 397 - return NOTIFY_DONE; 382 + wrmsr(MSR_KVM_STEAL_TIME, 0, 0); 398 383 } 399 - 400 - static struct notifier_block kvm_pv_reboot_nb = { 401 - .notifier_call = kvm_pv_reboot_notify, 402 - }; 403 384 404 385 static u64 kvm_steal_clock(int cpu) 405 386 { ··· 399 414 } while ((version & 1) || (version != src->version)); 400 415 401 416 return steal; 402 - } 403 - 404 - void kvm_disable_steal_time(void) 405 - { 406 - if (!has_steal_clock) 407 - return; 408 - 409 - wrmsr(MSR_KVM_STEAL_TIME, 0, 0); 410 417 } 411 418 412 419 static inline void __set_percpu_decrypted(void *ptr, unsigned long size) ··· 426 449 __set_percpu_decrypted(&per_cpu(steal_time, cpu), sizeof(steal_time)); 427 450 __set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(kvm_apic_eoi)); 428 451 } 452 + } 453 + 454 + static void kvm_guest_cpu_offline(bool shutdown) 455 + { 456 + kvm_disable_steal_time(); 457 + if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) 458 + wrmsrl(MSR_KVM_PV_EOI_EN, 0); 459 + kvm_pv_disable_apf(); 460 + if (!shutdown) 461 + apf_task_wake_all(); 462 + kvmclock_disable(); 463 + } 464 + 465 + static int kvm_cpu_online(unsigned int cpu) 466 + { 467 + unsigned long flags; 468 + 469 + local_irq_save(flags); 470 + kvm_guest_cpu_init(); 471 + local_irq_restore(flags); 472 + return 0; 429 473 } 430 474 431 475 #ifdef CONFIG_SMP ··· 633 635 kvm_spinlock_init(); 634 636 } 635 637 636 - static void kvm_guest_cpu_offline(void) 637 - { 638 - kvm_disable_steal_time(); 639 - if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) 640 - wrmsrl(MSR_KVM_PV_EOI_EN, 0); 641 - kvm_pv_disable_apf(); 642 - apf_task_wake_all(); 643 - } 644 - 645 - static int kvm_cpu_online(unsigned int cpu) 646 - { 647 - local_irq_disable(); 648 - kvm_guest_cpu_init(); 649 - local_irq_enable(); 650 - return 0; 651 - } 652 - 653 638 static int 
kvm_cpu_down_prepare(unsigned int cpu) 654 639 { 655 - local_irq_disable(); 656 - kvm_guest_cpu_offline(); 657 - local_irq_enable(); 640 + unsigned long flags; 641 + 642 + local_irq_save(flags); 643 + kvm_guest_cpu_offline(false); 644 + local_irq_restore(flags); 658 645 return 0; 659 646 } 660 647 648 + #endif 649 + 650 + static int kvm_suspend(void) 651 + { 652 + kvm_guest_cpu_offline(false); 653 + 654 + return 0; 655 + } 656 + 657 + static void kvm_resume(void) 658 + { 659 + kvm_cpu_online(raw_smp_processor_id()); 660 + } 661 + 662 + static struct syscore_ops kvm_syscore_ops = { 663 + .suspend = kvm_suspend, 664 + .resume = kvm_resume, 665 + }; 666 + 667 + static void kvm_pv_guest_cpu_reboot(void *unused) 668 + { 669 + kvm_guest_cpu_offline(true); 670 + } 671 + 672 + static int kvm_pv_reboot_notify(struct notifier_block *nb, 673 + unsigned long code, void *unused) 674 + { 675 + if (code == SYS_RESTART) 676 + on_each_cpu(kvm_pv_guest_cpu_reboot, NULL, 1); 677 + return NOTIFY_DONE; 678 + } 679 + 680 + static struct notifier_block kvm_pv_reboot_nb = { 681 + .notifier_call = kvm_pv_reboot_notify, 682 + }; 683 + 684 + /* 685 + * After a PV feature is registered, the host will keep writing to the 686 + * registered memory location. If the guest happens to shutdown, this memory 687 + * won't be valid. In cases like kexec, in which you install a new kernel, this 688 + * means a random memory location will be kept being written. 689 + */ 690 + #ifdef CONFIG_KEXEC_CORE 691 + static void kvm_crash_shutdown(struct pt_regs *regs) 692 + { 693 + kvm_guest_cpu_offline(true); 694 + native_machine_crash_shutdown(regs); 695 + } 661 696 #endif 662 697 663 698 static void __init kvm_guest_init(void) ··· 734 703 sev_map_percpu_data(); 735 704 kvm_guest_cpu_init(); 736 705 #endif 706 + 707 + #ifdef CONFIG_KEXEC_CORE 708 + machine_ops.crash_shutdown = kvm_crash_shutdown; 709 + #endif 710 + 711 + register_syscore_ops(&kvm_syscore_ops); 737 712 738 713 /* 739 714 * Hard lockup detection is enabled by default. Disable it, as guests
+1 -25
arch/x86/kernel/kvmclock.c
··· 20 20 #include <asm/hypervisor.h> 21 21 #include <asm/mem_encrypt.h> 22 22 #include <asm/x86_init.h> 23 - #include <asm/reboot.h> 24 23 #include <asm/kvmclock.h> 25 24 26 25 static int kvmclock __initdata = 1; ··· 202 203 } 203 204 #endif 204 205 205 - /* 206 - * After the clock is registered, the host will keep writing to the 207 - * registered memory location. If the guest happens to shutdown, this memory 208 - * won't be valid. In cases like kexec, in which you install a new kernel, this 209 - * means a random memory location will be kept being written. So before any 210 - * kind of shutdown from our side, we unregister the clock by writing anything 211 - * that does not have the 'enable' bit set in the msr 212 - */ 213 - #ifdef CONFIG_KEXEC_CORE 214 - static void kvm_crash_shutdown(struct pt_regs *regs) 206 + void kvmclock_disable(void) 215 207 { 216 208 native_write_msr(msr_kvm_system_time, 0, 0); 217 - kvm_disable_steal_time(); 218 - native_machine_crash_shutdown(regs); 219 - } 220 - #endif 221 - 222 - static void kvm_shutdown(void) 223 - { 224 - native_write_msr(msr_kvm_system_time, 0, 0); 225 - kvm_disable_steal_time(); 226 - native_machine_shutdown(); 227 209 } 228 210 229 211 static void __init kvmclock_init_mem(void) ··· 331 351 #endif 332 352 x86_platform.save_sched_clock_state = kvm_save_sched_clock_state; 333 353 x86_platform.restore_sched_clock_state = kvm_restore_sched_clock_state; 334 - machine_ops.shutdown = kvm_shutdown; 335 - #ifdef CONFIG_KEXEC_CORE 336 - machine_ops.crash_shutdown = kvm_crash_shutdown; 337 - #endif 338 354 kvm_get_preset_lpj(); 339 355 340 356 /*
+1 -1
arch/x86/kernel/mmconf-fam10h_64.c
··· 95 95 return; 96 96 97 97 /* SYS_CFG */ 98 - address = MSR_K8_SYSCFG; 98 + address = MSR_AMD64_SYSCFG; 99 99 rdmsrl(address, val); 100 100 101 101 /* TOP_MEM2 is not enabled? */
+1 -1
arch/x86/kernel/nmi.c
··· 33 33 #include <asm/reboot.h> 34 34 #include <asm/cache.h> 35 35 #include <asm/nospec-branch.h> 36 - #include <asm/sev-es.h> 36 + #include <asm/sev.h> 37 37 38 38 #define CREATE_TRACE_POINTS 39 39 #include <trace/events/nmi.h>
+10 -10
arch/x86/kernel/sev-es-shared.c arch/x86/kernel/sev-shared.c
··· 26 26 27 27 static void __noreturn sev_es_terminate(unsigned int reason) 28 28 { 29 - u64 val = GHCB_SEV_TERMINATE; 29 + u64 val = GHCB_MSR_TERM_REQ; 30 30 31 31 /* 32 32 * Tell the hypervisor what went wrong - only reason-set 0 is 33 33 * currently supported. 34 34 */ 35 - val |= GHCB_SEV_TERMINATE_REASON(0, reason); 35 + val |= GHCB_SEV_TERM_REASON(0, reason); 36 36 37 37 /* Request Guest Termination from Hypervisor */ 38 38 sev_es_wr_ghcb_msr(val); ··· 47 47 u64 val; 48 48 49 49 /* Do the GHCB protocol version negotiation */ 50 - sev_es_wr_ghcb_msr(GHCB_SEV_INFO_REQ); 50 + sev_es_wr_ghcb_msr(GHCB_MSR_SEV_INFO_REQ); 51 51 VMGEXIT(); 52 52 val = sev_es_rd_ghcb_msr(); 53 53 54 - if (GHCB_INFO(val) != GHCB_SEV_INFO) 54 + if (GHCB_MSR_INFO(val) != GHCB_MSR_SEV_INFO_RESP) 55 55 return false; 56 56 57 - if (GHCB_PROTO_MAX(val) < GHCB_PROTO_OUR || 58 - GHCB_PROTO_MIN(val) > GHCB_PROTO_OUR) 57 + if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTO_OUR || 58 + GHCB_MSR_PROTO_MIN(val) > GHCB_PROTO_OUR) 59 59 return false; 60 60 61 61 return true; ··· 153 153 sev_es_wr_ghcb_msr(GHCB_CPUID_REQ(fn, GHCB_CPUID_REQ_EAX)); 154 154 VMGEXIT(); 155 155 val = sev_es_rd_ghcb_msr(); 156 - if (GHCB_SEV_GHCB_RESP_CODE(val) != GHCB_SEV_CPUID_RESP) 156 + if (GHCB_RESP_CODE(val) != GHCB_MSR_CPUID_RESP) 157 157 goto fail; 158 158 regs->ax = val >> 32; 159 159 160 160 sev_es_wr_ghcb_msr(GHCB_CPUID_REQ(fn, GHCB_CPUID_REQ_EBX)); 161 161 VMGEXIT(); 162 162 val = sev_es_rd_ghcb_msr(); 163 - if (GHCB_SEV_GHCB_RESP_CODE(val) != GHCB_SEV_CPUID_RESP) 163 + if (GHCB_RESP_CODE(val) != GHCB_MSR_CPUID_RESP) 164 164 goto fail; 165 165 regs->bx = val >> 32; 166 166 167 167 sev_es_wr_ghcb_msr(GHCB_CPUID_REQ(fn, GHCB_CPUID_REQ_ECX)); 168 168 VMGEXIT(); 169 169 val = sev_es_rd_ghcb_msr(); 170 - if (GHCB_SEV_GHCB_RESP_CODE(val) != GHCB_SEV_CPUID_RESP) 170 + if (GHCB_RESP_CODE(val) != GHCB_MSR_CPUID_RESP) 171 171 goto fail; 172 172 regs->cx = val >> 32; 173 173 174 174 sev_es_wr_ghcb_msr(GHCB_CPUID_REQ(fn, GHCB_CPUID_REQ_EDX)); 175 175 VMGEXIT(); 176 176 val = sev_es_rd_ghcb_msr(); 177 - if (GHCB_SEV_GHCB_RESP_CODE(val) != GHCB_SEV_CPUID_RESP) 177 + if (GHCB_RESP_CODE(val) != GHCB_MSR_CPUID_RESP) 178 178 goto fail; 179 179 regs->dx = val >> 32; 180 180
+2 -2
arch/x86/kernel/sev-es.c arch/x86/kernel/sev.c
··· 22 22 23 23 #include <asm/cpu_entry_area.h> 24 24 #include <asm/stacktrace.h> 25 - #include <asm/sev-es.h> 25 + #include <asm/sev.h> 26 26 #include <asm/insn-eval.h> 27 27 #include <asm/fpu/internal.h> 28 28 #include <asm/processor.h> ··· 459 459 } 460 460 461 461 /* Include code shared with pre-decompression boot stage */ 462 - #include "sev-es-shared.c" 462 + #include "sev-shared.c" 463 463 464 464 void noinstr __sev_es_nmi_complete(void) 465 465 {
+1 -1
arch/x86/kernel/smpboot.c
··· 2043 2043 return false; 2044 2044 } 2045 2045 2046 - highest_perf = perf_caps.highest_perf; 2046 + highest_perf = amd_get_highest_perf(); 2047 2047 nominal_perf = perf_caps.nominal_perf; 2048 2048 2049 2049 if (!highest_perf || !nominal_perf) {
+18 -2
arch/x86/kvm/cpuid.c
··· 458 458 F(AVX512_VPOPCNTDQ) | F(UMIP) | F(AVX512_VBMI2) | F(GFNI) | 459 459 F(VAES) | F(VPCLMULQDQ) | F(AVX512_VNNI) | F(AVX512_BITALG) | 460 460 F(CLDEMOTE) | F(MOVDIRI) | F(MOVDIR64B) | 0 /*WAITPKG*/ | 461 - F(SGX_LC) 461 + F(SGX_LC) | F(BUS_LOCK_DETECT) 462 462 ); 463 463 /* Set LA57 based on hardware capability. */ 464 464 if (cpuid_ecx(7) & F(LA57)) ··· 567 567 F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) | 568 568 F(PMM) | F(PMM_EN) 569 569 ); 570 + 571 + /* 572 + * Hide RDTSCP and RDPID if either feature is reported as supported but 573 + * probing MSR_TSC_AUX failed. This is purely a sanity check and 574 + * should never happen, but the guest will likely crash if RDTSCP or 575 + * RDPID is misreported, and KVM has botched MSR_TSC_AUX emulation in 576 + * the past. For example, the sanity check may fire if this instance of 577 + * KVM is running as L1 on top of an older, broken KVM. 578 + */ 579 + if (WARN_ON((kvm_cpu_cap_has(X86_FEATURE_RDTSCP) || 580 + kvm_cpu_cap_has(X86_FEATURE_RDPID)) && 581 + !kvm_is_supported_user_return_msr(MSR_TSC_AUX))) { 582 + kvm_cpu_cap_clear(X86_FEATURE_RDTSCP); 583 + kvm_cpu_cap_clear(X86_FEATURE_RDPID); 584 + } 570 585 } 571 586 EXPORT_SYMBOL_GPL(kvm_set_cpu_caps); 572 587 ··· 652 637 case 7: 653 638 entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX; 654 639 entry->eax = 0; 655 - entry->ecx = F(RDPID); 640 + if (kvm_cpu_cap_has(X86_FEATURE_RDTSCP)) 641 + entry->ecx = F(RDPID); 656 642 ++array->nent; 657 643 default: 658 644 break;
+1 -1
arch/x86/kvm/emulate.c
··· 4502 4502 * from the register case of group9. 4503 4503 */ 4504 4504 static const struct gprefix pfx_0f_c7_7 = { 4505 - N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdtscp), 4505 + N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdpid), 4506 4506 }; 4507 4507 4508 4508
+1
arch/x86/kvm/kvm_emulate.h
··· 468 468 x86_intercept_clgi, 469 469 x86_intercept_skinit, 470 470 x86_intercept_rdtscp, 471 + x86_intercept_rdpid, 471 472 x86_intercept_icebp, 472 473 x86_intercept_wbinvd, 473 474 x86_intercept_monitor,
+1 -1
arch/x86/kvm/lapic.c
··· 1913 1913 if (!apic->lapic_timer.hv_timer_in_use) 1914 1914 goto out; 1915 1915 WARN_ON(rcuwait_active(&vcpu->wait)); 1916 - cancel_hv_timer(apic); 1917 1916 apic_timer_expired(apic, false); 1917 + cancel_hv_timer(apic); 1918 1918 1919 1919 if (apic_lvtt_period(apic) && apic->lapic_timer.period) { 1920 1920 advance_periodic_target_expiration(apic);
+10 -10
arch/x86/kvm/mmu/mmu.c
··· 3310 3310 if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) { 3311 3311 pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK; 3312 3312 3313 - if (WARN_ON_ONCE(!mmu->lm_root)) { 3313 + if (WARN_ON_ONCE(!mmu->pml4_root)) { 3314 3314 r = -EIO; 3315 3315 goto out_unlock; 3316 3316 } 3317 3317 3318 - mmu->lm_root[0] = __pa(mmu->pae_root) | pm_mask; 3318 + mmu->pml4_root[0] = __pa(mmu->pae_root) | pm_mask; 3319 3319 } 3320 3320 3321 3321 for (i = 0; i < 4; ++i) { ··· 3335 3335 } 3336 3336 3337 3337 if (mmu->shadow_root_level == PT64_ROOT_4LEVEL) 3338 - mmu->root_hpa = __pa(mmu->lm_root); 3338 + mmu->root_hpa = __pa(mmu->pml4_root); 3339 3339 else 3340 3340 mmu->root_hpa = __pa(mmu->pae_root); 3341 3341 ··· 3350 3350 static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu) 3351 3351 { 3352 3352 struct kvm_mmu *mmu = vcpu->arch.mmu; 3353 - u64 *lm_root, *pae_root; 3353 + u64 *pml4_root, *pae_root; 3354 3354 3355 3355 /* 3356 3356 * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP ··· 3369 3369 if (WARN_ON_ONCE(mmu->shadow_root_level != PT64_ROOT_4LEVEL)) 3370 3370 return -EIO; 3371 3371 3372 - if (mmu->pae_root && mmu->lm_root) 3372 + if (mmu->pae_root && mmu->pml4_root) 3373 3373 return 0; 3374 3374 3375 3375 /* 3376 3376 * The special roots should always be allocated in concert. Yell and 3377 3377 * bail if KVM ends up in a state where only one of the roots is valid. 3378 3378 */ 3379 - if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->lm_root)) 3379 + if (WARN_ON_ONCE(!tdp_enabled || mmu->pae_root || mmu->pml4_root)) 3380 3380 return -EIO; 3381 3381 3382 3382 /* ··· 3387 3387 if (!pae_root) 3388 3388 return -ENOMEM; 3389 3389 3390 - lm_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT); 3391 - if (!lm_root) { 3390 + pml4_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT); 3391 + if (!pml4_root) { 3392 3392 free_page((unsigned long)pae_root); 3393 3393 return -ENOMEM; 3394 3394 } 3395 3395 3396 3396 mmu->pae_root = pae_root; 3397 - mmu->lm_root = lm_root; 3397 + mmu->pml4_root = pml4_root; 3398 3398 3399 3399 return 0; 3400 3400 } ··· 5261 5261 if (!tdp_enabled && mmu->pae_root) 5262 5262 set_memory_encrypted((unsigned long)mmu->pae_root, 1); 5263 5263 free_page((unsigned long)mmu->pae_root); 5264 - free_page((unsigned long)mmu->lm_root); 5264 + free_page((unsigned long)mmu->pml4_root); 5265 5265 } 5266 5266 5267 5267 static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
+16 -1
arch/x86/kvm/mmu/tdp_mmu.c
··· 388 388 } 389 389 390 390 /** 391 - * handle_changed_spte - handle bookkeeping associated with an SPTE change 391 + * __handle_changed_spte - handle bookkeeping associated with an SPTE change 392 392 * @kvm: kvm instance 393 393 * @as_id: the address space of the paging structure the SPTE was a part of 394 394 * @gfn: the base GFN that was mapped by the SPTE ··· 443 443 return; 444 444 445 445 trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte); 446 + 447 + if (is_large_pte(old_spte) != is_large_pte(new_spte)) { 448 + if (is_large_pte(old_spte)) 449 + atomic64_sub(1, (atomic64_t*)&kvm->stat.lpages); 450 + else 451 + atomic64_add(1, (atomic64_t*)&kvm->stat.lpages); 452 + } 446 453 447 454 /* 448 455 * The only times a SPTE should be changed from a non-present to ··· 1016 1009 } 1017 1010 1018 1011 if (!is_shadow_present_pte(iter.old_spte)) { 1012 + /* 1013 + * If SPTE has been frozen by another thread, just 1014 + * give up and retry, avoiding unnecessary page table 1015 + * allocation and free. 1016 + */ 1017 + if (is_removed_spte(iter.old_spte)) 1018 + break; 1019 + 1019 1020 sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level); 1020 1021 child_pt = sp->spt; 1021 1022
+19 -4
arch/x86/kvm/svm/nested.c
··· 764 764 nested_svm_copy_common_state(svm->nested.vmcb02.ptr, svm->vmcb01.ptr); 765 765 766 766 svm_switch_vmcb(svm, &svm->vmcb01); 767 - WARN_ON_ONCE(svm->vmcb->control.exit_code != SVM_EXIT_VMRUN); 768 767 769 768 /* 770 769 * On vmexit the GIF is set to false and ··· 871 872 __free_page(virt_to_page(svm->nested.vmcb02.ptr)); 872 873 svm->nested.vmcb02.ptr = NULL; 873 874 875 + /* 876 + * When last_vmcb12_gpa matches the current vmcb12 gpa, 877 + * some vmcb12 fields are not loaded if they are marked clean 878 + * in the vmcb12, since in this case they are up to date already. 879 + * 880 + * When the vmcb02 is freed, this optimization becomes invalid. 881 + */ 882 + svm->nested.last_vmcb12_gpa = INVALID_GPA; 883 + 874 884 svm->nested.initialized = false; 875 885 } 876 886 ··· 892 884 893 885 if (is_guest_mode(vcpu)) { 894 886 svm->nested.nested_run_pending = 0; 887 + svm->nested.vmcb12_gpa = INVALID_GPA; 888 + 895 889 leave_guest_mode(vcpu); 896 890 897 - svm_switch_vmcb(svm, &svm->nested.vmcb02); 891 + svm_switch_vmcb(svm, &svm->vmcb01); 898 892 899 893 nested_svm_uninit_mmu_context(vcpu); 900 894 vmcb_mark_all_dirty(svm->vmcb); ··· 1308 1298 * L2 registers if needed are moved from the current VMCB to VMCB02. 1309 1299 */ 1310 1300 1301 + if (is_guest_mode(vcpu)) 1302 + svm_leave_nested(svm); 1303 + else 1304 + svm->nested.vmcb02.ptr->save = svm->vmcb01.ptr->save; 1305 + 1306 + svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET)); 1307 + 1311 1308 svm->nested.nested_run_pending = 1312 1309 !!(kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING); 1313 1310 1314 1311 svm->nested.vmcb12_gpa = kvm_state->hdr.svm.vmcb_pa; 1315 - if (svm->current_vmcb == &svm->vmcb01) 1316 - svm->nested.vmcb02.ptr->save = svm->vmcb01.ptr->save; 1317 1312 1318 1313 svm->vmcb01.ptr->save.es = save->es; 1319 1314 svm->vmcb01.ptr->save.cs = save->cs;
+14 -18
arch/x86/kvm/svm/sev.c
··· 763 763 } 764 764 765 765 static int __sev_dbg_decrypt_user(struct kvm *kvm, unsigned long paddr, 766 - unsigned long __user dst_uaddr, 766 + void __user *dst_uaddr, 767 767 unsigned long dst_paddr, 768 768 int size, int *err) 769 769 { ··· 787 787 788 788 if (tpage) { 789 789 offset = paddr & 15; 790 - if (copy_to_user((void __user *)(uintptr_t)dst_uaddr, 791 - page_address(tpage) + offset, size)) 790 + if (copy_to_user(dst_uaddr, page_address(tpage) + offset, size)) 792 791 ret = -EFAULT; 793 792 } 794 793 ··· 799 800 } 800 801 801 802 static int __sev_dbg_encrypt_user(struct kvm *kvm, unsigned long paddr, 802 - unsigned long __user vaddr, 803 + void __user *vaddr, 803 804 unsigned long dst_paddr, 804 - unsigned long __user dst_vaddr, 805 + void __user *dst_vaddr, 805 806 int size, int *error) 806 807 { 807 808 struct page *src_tpage = NULL; ··· 809 810 int ret, len = size; 810 811 811 812 /* If source buffer is not aligned then use an intermediate buffer */ 812 - if (!IS_ALIGNED(vaddr, 16)) { 813 + if (!IS_ALIGNED((unsigned long)vaddr, 16)) { 813 814 src_tpage = alloc_page(GFP_KERNEL); 814 815 if (!src_tpage) 815 816 return -ENOMEM; 816 817 817 - if (copy_from_user(page_address(src_tpage), 818 - (void __user *)(uintptr_t)vaddr, size)) { 818 + if (copy_from_user(page_address(src_tpage), vaddr, size)) { 819 819 __free_page(src_tpage); 820 820 return -EFAULT; 821 821 } ··· 828 830 * - copy the source buffer in an intermediate buffer 829 831 * - use the intermediate buffer as source buffer 830 832 */ 831 - if (!IS_ALIGNED(dst_vaddr, 16) || !IS_ALIGNED(size, 16)) { 833 + if (!IS_ALIGNED((unsigned long)dst_vaddr, 16) || !IS_ALIGNED(size, 16)) { 832 834 int dst_offset; 833 835 834 836 dst_tpage = alloc_page(GFP_KERNEL); ··· 853 855 page_address(src_tpage), size); 854 856 else { 855 857 if (copy_from_user(page_address(dst_tpage) + dst_offset, 856 - (void __user *)(uintptr_t)vaddr, size)) { 858 + vaddr, size)) { 857 859 ret = -EFAULT; 858 860 goto e_free; 859 861 } ··· 933 935 if (dec) 934 936 ret = __sev_dbg_decrypt_user(kvm, 935 937 __sme_page_pa(src_p[0]) + s_off, 936 - dst_vaddr, 938 + (void __user *)dst_vaddr, 937 939 __sme_page_pa(dst_p[0]) + d_off, 938 940 len, &argp->error); 939 941 else 940 942 ret = __sev_dbg_encrypt_user(kvm, 941 943 __sme_page_pa(src_p[0]) + s_off, 942 - vaddr, 944 + (void __user *)vaddr, 943 945 __sme_page_pa(dst_p[0]) + d_off, 944 - dst_vaddr, 946 + (void __user *)dst_vaddr, 945 947 len, &argp->error); 946 948 947 949 sev_unpin_memory(kvm, src_p, n); ··· 1762 1764 e_source_unlock: 1763 1765 mutex_unlock(&source_kvm->lock); 1764 1766 e_source_put: 1765 - fput(source_kvm_file); 1767 + if (source_kvm_file) 1768 + fput(source_kvm_file); 1766 1769 return ret; 1767 1770 } 1768 1771 ··· 2197 2198 return -EINVAL; 2198 2199 } 2199 2200 2200 - static void pre_sev_es_run(struct vcpu_svm *svm) 2201 + void sev_es_unmap_ghcb(struct vcpu_svm *svm) 2201 2202 { 2202 2203 if (!svm->ghcb) 2203 2204 return; ··· 2232 2233 { 2233 2234 struct svm_cpu_data *sd = per_cpu(svm_data, cpu); 2234 2235 int asid = sev_get_asid(svm->vcpu.kvm); 2235 - 2236 - /* Perform any SEV-ES pre-run actions */ 2237 - pre_sev_es_run(svm); 2238 2236 2239 2237 /* Assign the asid allocated with this SEV guest */ 2240 2238 svm->asid = asid;
+30 -36
arch/x86/kvm/svm/svm.c
··· 212 212 * RDTSCP and RDPID are not used in the kernel, specifically to allow KVM to 213 213 * defer the restoration of TSC_AUX until the CPU returns to userspace. 214 214 */ 215 - #define TSC_AUX_URET_SLOT 0 215 + static int tsc_aux_uret_slot __read_mostly = -1; 216 216 217 217 static const u32 msrpm_ranges[] = {0, 0xc0000000, 0xc0010000}; 218 218 ··· 444 444 445 445 if (sev_active()) { 446 446 pr_info("KVM is unsupported when running as an SEV guest\n"); 447 + return 0; 448 + } 449 + 450 + if (pgtable_l5_enabled()) { 451 + pr_info("KVM doesn't yet support 5-level paging on AMD SVM\n"); 447 452 return 0; 448 453 } 449 454 ··· 863 858 return; 864 859 865 860 /* If memory encryption is not enabled, use existing mask */ 866 - rdmsrl(MSR_K8_SYSCFG, msr); 867 - if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT)) 861 + rdmsrl(MSR_AMD64_SYSCFG, msr); 862 + if (!(msr & MSR_AMD64_SYSCFG_MEM_ENCRYPT)) 868 863 return; 869 864 870 865 enc_bit = cpuid_ebx(0x8000001f) & 0x3f; ··· 964 959 kvm_tsc_scaling_ratio_frac_bits = 32; 965 960 } 966 961 967 - if (boot_cpu_has(X86_FEATURE_RDTSCP)) 968 - kvm_define_user_return_msr(TSC_AUX_URET_SLOT, MSR_TSC_AUX); 962 + tsc_aux_uret_slot = kvm_add_user_return_msr(MSR_TSC_AUX); 969 963 970 964 /* Check for pause filtering support */ 971 965 if (!boot_cpu_has(X86_FEATURE_PAUSEFILTER)) { ··· 1104 1100 return svm->vmcb->control.tsc_offset; 1105 1101 } 1106 1102 1107 - static void svm_check_invpcid(struct vcpu_svm *svm) 1103 + /* Evaluate instruction intercepts that depend on guest CPUID features. */ 1104 + static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu, 1105 + struct vcpu_svm *svm) 1108 1106 { 1109 1107 /* 1110 1108 * Intercept INVPCID if shadow paging is enabled to sync/free shadow ··· 1118 1112 svm_set_intercept(svm, INTERCEPT_INVPCID); 1119 1113 else 1120 1114 svm_clr_intercept(svm, INTERCEPT_INVPCID); 1115 + } 1116 + 1117 + if (kvm_cpu_cap_has(X86_FEATURE_RDTSCP)) { 1118 + if (guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) 1119 + svm_clr_intercept(svm, INTERCEPT_RDTSCP); 1120 + else 1121 + svm_set_intercept(svm, INTERCEPT_RDTSCP); 1121 1122 } 1122 1123 } 1123 1124 ··· 1248 1235 svm->current_vmcb->asid_generation = 0; 1249 1236 svm->asid = 0; 1250 1237 1251 - svm->nested.vmcb12_gpa = 0; 1252 - svm->nested.last_vmcb12_gpa = 0; 1238 + svm->nested.vmcb12_gpa = INVALID_GPA; 1239 + svm->nested.last_vmcb12_gpa = INVALID_GPA; 1253 1240 vcpu->arch.hflags = 0; 1254 1241 1255 1242 if (!kvm_pause_in_guest(vcpu->kvm)) { ··· 1261 1248 svm_clr_intercept(svm, INTERCEPT_PAUSE); 1262 1249 } 1263 1250 1264 - svm_check_invpcid(svm); 1251 + svm_recalc_instruction_intercepts(vcpu, svm); 1265 1252 1266 1253 /* 1267 1254 * If the host supports V_SPEC_CTRL then disable the interception ··· 1437 1424 struct vcpu_svm *svm = to_svm(vcpu); 1438 1425 struct svm_cpu_data *sd = per_cpu(svm_data, vcpu->cpu); 1439 1426 1427 + if (sev_es_guest(vcpu->kvm)) 1428 + sev_es_unmap_ghcb(svm); 1429 + 1440 1430 if (svm->guest_state_loaded) 1441 1431 return; 1442 1432 ··· 1461 1445 } 1462 1446 } 1463 1447 1464 - if (static_cpu_has(X86_FEATURE_RDTSCP)) 1465 - kvm_set_user_return_msr(TSC_AUX_URET_SLOT, svm->tsc_aux, -1ull); 1448 + if (likely(tsc_aux_uret_slot >= 0)) 1449 + kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull); 1466 1450 1467 1451 svm->guest_state_loaded = true; 1468 1452 } ··· 2671 2655 msr_info->data |= (u64)svm->sysenter_esp_hi << 32; 2672 2656 break; 2673 2657 case MSR_TSC_AUX: 2674 - if (!boot_cpu_has(X86_FEATURE_RDTSCP)) 2675 - return 1; 2676 - if 
(!msr_info->host_initiated && 2677 - !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) 2678 - return 1; 2679 2658 msr_info->data = svm->tsc_aux; 2680 2659 break; 2681 2660 /* ··· 2887 2876 svm->sysenter_esp_hi = guest_cpuid_is_intel(vcpu) ? (data >> 32) : 0; 2888 2877 break; 2889 2878 case MSR_TSC_AUX: 2890 - if (!boot_cpu_has(X86_FEATURE_RDTSCP)) 2891 - return 1; 2892 - 2893 - if (!msr->host_initiated && 2894 - !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) 2895 - return 1; 2896 - 2897 - /* 2898 - * Per Intel's SDM, bits 63:32 are reserved, but AMD's APM has 2899 - * incomplete and conflicting architectural behavior. Current 2900 - * AMD CPUs completely ignore bits 63:32, i.e. they aren't 2901 - * reserved and always read as zeros. Emulate AMD CPU behavior 2902 - * to avoid explosions if the vCPU is migrated from an AMD host 2903 - * to an Intel host. 2904 - */ 2905 - data = (u32)data; 2906 - 2907 2879 /* 2908 2880 * TSC_AUX is usually changed only during boot and never read 2909 2881 * directly. Intercept TSC_AUX instead of exposing it to the 2910 2882 * guest via direct_access_msrs, and switch it via user return. 2911 2883 */ 2912 2884 preempt_disable(); 2913 - r = kvm_set_user_return_msr(TSC_AUX_URET_SLOT, data, -1ull); 2885 + r = kvm_set_user_return_msr(tsc_aux_uret_slot, data, -1ull); 2914 2886 preempt_enable(); 2915 2887 if (r) 2916 2888 return 1; ··· 3078 3084 [SVM_EXIT_STGI] = stgi_interception, 3079 3085 [SVM_EXIT_CLGI] = clgi_interception, 3080 3086 [SVM_EXIT_SKINIT] = skinit_interception, 3087 + [SVM_EXIT_RDTSCP] = kvm_handle_invalid_op, 3081 3088 [SVM_EXIT_WBINVD] = kvm_emulate_wbinvd, 3082 3089 [SVM_EXIT_MONITOR] = kvm_emulate_monitor, 3083 3090 [SVM_EXIT_MWAIT] = kvm_emulate_mwait, ··· 3967 3972 svm->nrips_enabled = kvm_cpu_cap_has(X86_FEATURE_NRIPS) && 3968 3973 guest_cpuid_has(vcpu, X86_FEATURE_NRIPS); 3969 3974 3970 - /* Check again if INVPCID interception if required */ 3971 - svm_check_invpcid(svm); 3975 + svm_recalc_instruction_intercepts(vcpu, svm); 3972 3976 3973 3977 /* For sev guests, the memory encryption bit is not reserved in CR3. */ 3974 3978 if (sev_guest(vcpu->kvm)) {
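The svm.c hunks replace the compile-time TSC_AUX_URET_SLOT with a slot index handed out by kvm_add_user_return_msr() at hardware setup, so a host where the MSR probe fails simply leaves the slot at -1. Condensed kernel-style shape of the pattern (the example_* wrappers are placeholders for the real call sites):

static int tsc_aux_uret_slot __read_mostly = -1;        /* -1: not registered */

static void example_hardware_setup(void)
{
        /* returns the allocated slot index, or -1 if the rdmsr/wrmsr
         * probe of MSR_TSC_AUX failed on this host */
        tsc_aux_uret_slot = kvm_add_user_return_msr(MSR_TSC_AUX);
}

static void example_prepare_guest_switch(struct vcpu_svm *svm)
{
        /* only touch the slot when it was actually registered */
        if (likely(tsc_aux_uret_slot >= 0))
                kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull);
}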
+4 -33
arch/x86/kvm/svm/svm.h
··· 20 20 #include <linux/bits.h> 21 21 22 22 #include <asm/svm.h> 23 + #include <asm/sev-common.h> 23 24 24 25 #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT) 25 26 ··· 526 525 527 526 /* sev.c */ 528 527 529 - #define GHCB_VERSION_MAX 1ULL 530 - #define GHCB_VERSION_MIN 1ULL 528 + #define GHCB_VERSION_MAX 1ULL 529 + #define GHCB_VERSION_MIN 1ULL 531 530 532 - #define GHCB_MSR_INFO_POS 0 533 - #define GHCB_MSR_INFO_MASK (BIT_ULL(12) - 1) 534 - 535 - #define GHCB_MSR_SEV_INFO_RESP 0x001 536 - #define GHCB_MSR_SEV_INFO_REQ 0x002 537 - #define GHCB_MSR_VER_MAX_POS 48 538 - #define GHCB_MSR_VER_MAX_MASK 0xffff 539 - #define GHCB_MSR_VER_MIN_POS 32 540 - #define GHCB_MSR_VER_MIN_MASK 0xffff 541 - #define GHCB_MSR_CBIT_POS 24 542 - #define GHCB_MSR_CBIT_MASK 0xff 543 - #define GHCB_MSR_SEV_INFO(_max, _min, _cbit) \ 544 - ((((_max) & GHCB_MSR_VER_MAX_MASK) << GHCB_MSR_VER_MAX_POS) | \ 545 - (((_min) & GHCB_MSR_VER_MIN_MASK) << GHCB_MSR_VER_MIN_POS) | \ 546 - (((_cbit) & GHCB_MSR_CBIT_MASK) << GHCB_MSR_CBIT_POS) | \ 547 - GHCB_MSR_SEV_INFO_RESP) 548 - 549 - #define GHCB_MSR_CPUID_REQ 0x004 550 - #define GHCB_MSR_CPUID_RESP 0x005 551 - #define GHCB_MSR_CPUID_FUNC_POS 32 552 - #define GHCB_MSR_CPUID_FUNC_MASK 0xffffffff 553 - #define GHCB_MSR_CPUID_VALUE_POS 32 554 - #define GHCB_MSR_CPUID_VALUE_MASK 0xffffffff 555 - #define GHCB_MSR_CPUID_REG_POS 30 556 - #define GHCB_MSR_CPUID_REG_MASK 0x3 557 - 558 - #define GHCB_MSR_TERM_REQ 0x100 559 - #define GHCB_MSR_TERM_REASON_SET_POS 12 560 - #define GHCB_MSR_TERM_REASON_SET_MASK 0xf 561 - #define GHCB_MSR_TERM_REASON_POS 16 562 - #define GHCB_MSR_TERM_REASON_MASK 0xff 563 531 564 532 extern unsigned int max_sev_asid; 565 533 ··· 551 581 void sev_es_create_vcpu(struct vcpu_svm *svm); 552 582 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector); 553 583 void sev_es_prepare_guest_switch(struct vcpu_svm *svm, unsigned int cpu); 584 + void sev_es_unmap_ghcb(struct vcpu_svm *svm); 554 585 555 586 /* vmenter.S */ 556 587
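The GHCB MSR-protocol macros deleted from svm.h above move to <asm/sev-common.h> so guest and host share one copy. For reference, the SEV_INFO response they build packs four fields into a single 64-bit MSR value; a standalone demo of the same packing, using the field positions from the removed macros (bits 63:48 max version, 47:32 min version, 31:24 C-bit position, 11:0 response code 0x001):

#include <stdint.h>
#include <stdio.h>

static uint64_t ghcb_msr_sev_info(uint64_t max, uint64_t min, uint64_t cbit)
{
        return ((max & 0xffff) << 48) |
               ((min & 0xffff) << 32) |
               ((cbit & 0xff) << 24) |
               0x001;                   /* GHCB_MSR_SEV_INFO_RESP */
}

int main(void)
{
        /* GHCB version range 1..1, encryption bit at position 51 */
        printf("SEV_INFO = %#llx\n",
               (unsigned long long)ghcb_msr_sev_info(1, 1, 51));
        return 0;
}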
+3
arch/x86/kvm/vmx/capabilities.h
··· 398 398 { 399 399 u64 debugctl = 0; 400 400 401 + if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT)) 402 + debugctl |= DEBUGCTLMSR_BUS_LOCK_DETECT; 403 + 401 404 if (vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT) 402 405 debugctl |= DEBUGCTLMSR_LBR_MASK; 403 406
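The capabilities.h hunk extends the supported-DEBUGCTL mask so BUS_LOCK_DETECT is only exposed when the host CPU has the feature: each valid bit is ORed in behind its own feature check, and a guest MSR write is rejected if it sets anything outside the mask. Illustrative rendering, with the bit positions taken from the SDM rather than the kernel headers:

#include <stdbool.h>
#include <stdint.h>

#define DEBUGCTLMSR_LBR                 (1ULL << 0)
#define DEBUGCTLMSR_BUS_LOCK_DETECT     (1ULL << 2)

static uint64_t supported_debugctl(bool host_has_bus_lock, bool lbr_enabled)
{
        uint64_t debugctl = 0;

        if (host_has_bus_lock)
                debugctl |= DEBUGCTLMSR_BUS_LOCK_DETECT;
        if (lbr_enabled)
                debugctl |= DEBUGCTLMSR_LBR;
        return debugctl;
}

/* a guest write is valid iff (data & ~supported_debugctl(...)) == 0 */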
+19 -10
arch/x86/kvm/vmx/nested.c
··· 3098 3098 nested_vmx_handle_enlightened_vmptrld(vcpu, false); 3099 3099 3100 3100 if (evmptrld_status == EVMPTRLD_VMFAIL || 3101 - evmptrld_status == EVMPTRLD_ERROR) { 3102 - pr_debug_ratelimited("%s: enlightened vmptrld failed\n", 3103 - __func__); 3104 - vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; 3105 - vcpu->run->internal.suberror = 3106 - KVM_INTERNAL_ERROR_EMULATION; 3107 - vcpu->run->internal.ndata = 0; 3101 + evmptrld_status == EVMPTRLD_ERROR) 3108 3102 return false; 3109 - } 3110 3103 } 3111 3104 3112 3105 return true; ··· 3187 3194 3188 3195 static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu) 3189 3196 { 3190 - if (!nested_get_evmcs_page(vcpu)) 3197 + if (!nested_get_evmcs_page(vcpu)) { 3198 + pr_debug_ratelimited("%s: enlightened vmptrld failed\n", 3199 + __func__); 3200 + vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; 3201 + vcpu->run->internal.suberror = 3202 + KVM_INTERNAL_ERROR_EMULATION; 3203 + vcpu->run->internal.ndata = 0; 3204 + 3191 3205 return false; 3206 + } 3192 3207 3193 3208 if (is_guest_mode(vcpu) && !nested_get_vmcs12_pages(vcpu)) 3194 3209 return false; ··· 4436 4435 /* Similarly, triple faults in L2 should never escape. */ 4437 4436 WARN_ON_ONCE(kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu)); 4438 4437 4439 - kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu); 4438 + if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) { 4439 + /* 4440 + * KVM_REQ_GET_NESTED_STATE_PAGES is also used to map 4441 + * Enlightened VMCS after migration and we still need to 4442 + * do that when something is forcing L2->L1 exit prior to 4443 + * the first L2 run. 4444 + */ 4445 + (void)nested_get_evmcs_page(vcpu); 4446 + } 4440 4447 4441 4448 /* Service the TLB flush request for L2 before switching to L1. */ 4442 4449 if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
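Two things happen in the nested.c hunks above: the internal-error reporting moves out to vmx_get_nested_state_pages() so nested_get_evmcs_page() can be reused silently, and nested_vmx_vmexit() now consumes a pending KVM_REQ_GET_NESTED_STATE_PAGES with kvm_check_request() instead of merely clearing it. kvm_check_request() is a test-and-clear; a minimal model of the semantics (the kernel version is an atomic test_and_clear_bit plus a memory barrier):

#include <stdbool.h>

struct vcpu_model {
        unsigned long requests;
};

static bool check_request(struct vcpu_model *v, int req)
{
        if (v->requests & (1UL << req)) {
                v->requests &= ~(1UL << req);   /* consume the request */
                return true;                    /* caller must act on it now */
        }
        return false;
}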
+110 -110
arch/x86/kvm/vmx/vmx.c
··· 455 455 456 456 static unsigned long host_idt_base; 457 457 458 - /* 459 - * Though SYSCALL is only supported in 64-bit mode on Intel CPUs, kvm 460 - * will emulate SYSCALL in legacy mode if the vendor string in guest 461 - * CPUID.0:{EBX,ECX,EDX} is "AuthenticAMD" or "AMDisbetter!" To 462 - * support this emulation, IA32_STAR must always be included in 463 - * vmx_uret_msrs_list[], even in i386 builds. 464 - */ 465 - static const u32 vmx_uret_msrs_list[] = { 466 - #ifdef CONFIG_X86_64 467 - MSR_SYSCALL_MASK, MSR_LSTAR, MSR_CSTAR, 468 - #endif 469 - MSR_EFER, MSR_TSC_AUX, MSR_STAR, 470 - MSR_IA32_TSX_CTRL, 471 - }; 472 - 473 458 #if IS_ENABLED(CONFIG_HYPERV) 474 459 static bool __read_mostly enlightened_vmcs = true; 475 460 module_param(enlightened_vmcs, bool, 0444); ··· 682 697 return r; 683 698 } 684 699 685 - static inline int __vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr) 686 - { 687 - int i; 688 - 689 - for (i = 0; i < vmx->nr_uret_msrs; ++i) 690 - if (vmx_uret_msrs_list[vmx->guest_uret_msrs[i].slot] == msr) 691 - return i; 692 - return -1; 693 - } 694 - 695 700 struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr) 696 701 { 697 702 int i; 698 703 699 - i = __vmx_find_uret_msr(vmx, msr); 704 + i = kvm_find_user_return_msr(msr); 700 705 if (i >= 0) 701 706 return &vmx->guest_uret_msrs[i]; 702 707 return NULL; ··· 695 720 static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx, 696 721 struct vmx_uret_msr *msr, u64 data) 697 722 { 723 + unsigned int slot = msr - vmx->guest_uret_msrs; 698 724 int ret = 0; 699 725 700 726 u64 old_msr_data = msr->data; 701 727 msr->data = data; 702 - if (msr - vmx->guest_uret_msrs < vmx->nr_active_uret_msrs) { 728 + if (msr->load_into_hardware) { 703 729 preempt_disable(); 704 - ret = kvm_set_user_return_msr(msr->slot, msr->data, msr->mask); 730 + ret = kvm_set_user_return_msr(slot, msr->data, msr->mask); 705 731 preempt_enable(); 706 732 if (ret) 707 733 msr->data = old_msr_data; ··· 1054 1078 return false; 1055 1079 } 1056 1080 1057 - i = __vmx_find_uret_msr(vmx, MSR_EFER); 1081 + i = kvm_find_user_return_msr(MSR_EFER); 1058 1082 if (i < 0) 1059 1083 return false; 1060 1084 ··· 1216 1240 */ 1217 1241 if (!vmx->guest_uret_msrs_loaded) { 1218 1242 vmx->guest_uret_msrs_loaded = true; 1219 - for (i = 0; i < vmx->nr_active_uret_msrs; ++i) 1220 - kvm_set_user_return_msr(vmx->guest_uret_msrs[i].slot, 1243 + for (i = 0; i < kvm_nr_uret_msrs; ++i) { 1244 + if (!vmx->guest_uret_msrs[i].load_into_hardware) 1245 + continue; 1246 + 1247 + kvm_set_user_return_msr(i, 1221 1248 vmx->guest_uret_msrs[i].data, 1222 1249 vmx->guest_uret_msrs[i].mask); 1223 - 1250 + } 1224 1251 } 1225 1252 1226 1253 if (vmx->nested.need_vmcs12_to_shadow_sync) ··· 1730 1751 vmx_clear_hlt(vcpu); 1731 1752 } 1732 1753 1733 - static void vmx_setup_uret_msr(struct vcpu_vmx *vmx, unsigned int msr) 1754 + static void vmx_setup_uret_msr(struct vcpu_vmx *vmx, unsigned int msr, 1755 + bool load_into_hardware) 1734 1756 { 1735 - struct vmx_uret_msr tmp; 1736 - int from, to; 1757 + struct vmx_uret_msr *uret_msr; 1737 1758 1738 - from = __vmx_find_uret_msr(vmx, msr); 1739 - if (from < 0) 1759 + uret_msr = vmx_find_uret_msr(vmx, msr); 1760 + if (!uret_msr) 1740 1761 return; 1741 - to = vmx->nr_active_uret_msrs++; 1742 1762 1743 - tmp = vmx->guest_uret_msrs[to]; 1744 - vmx->guest_uret_msrs[to] = vmx->guest_uret_msrs[from]; 1745 - vmx->guest_uret_msrs[from] = tmp; 1763 + uret_msr->load_into_hardware = load_into_hardware; 1746 1764 } 1747 1765 1748 1766 /* ··· 1749 1773 */ 1750 1774 
static void setup_msrs(struct vcpu_vmx *vmx) 1751 1775 { 1752 - vmx->guest_uret_msrs_loaded = false; 1753 - vmx->nr_active_uret_msrs = 0; 1754 1776 #ifdef CONFIG_X86_64 1777 + bool load_syscall_msrs; 1778 + 1755 1779 /* 1756 1780 * The SYSCALL MSRs are only needed on long mode guests, and only 1757 1781 * when EFER.SCE is set. 1758 1782 */ 1759 - if (is_long_mode(&vmx->vcpu) && (vmx->vcpu.arch.efer & EFER_SCE)) { 1760 - vmx_setup_uret_msr(vmx, MSR_STAR); 1761 - vmx_setup_uret_msr(vmx, MSR_LSTAR); 1762 - vmx_setup_uret_msr(vmx, MSR_SYSCALL_MASK); 1763 - } 1783 + load_syscall_msrs = is_long_mode(&vmx->vcpu) && 1784 + (vmx->vcpu.arch.efer & EFER_SCE); 1785 + 1786 + vmx_setup_uret_msr(vmx, MSR_STAR, load_syscall_msrs); 1787 + vmx_setup_uret_msr(vmx, MSR_LSTAR, load_syscall_msrs); 1788 + vmx_setup_uret_msr(vmx, MSR_SYSCALL_MASK, load_syscall_msrs); 1764 1789 #endif 1765 - if (update_transition_efer(vmx)) 1766 - vmx_setup_uret_msr(vmx, MSR_EFER); 1790 + vmx_setup_uret_msr(vmx, MSR_EFER, update_transition_efer(vmx)); 1767 1791 1768 - if (guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP)) 1769 - vmx_setup_uret_msr(vmx, MSR_TSC_AUX); 1792 + vmx_setup_uret_msr(vmx, MSR_TSC_AUX, 1793 + guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDTSCP) || 1794 + guest_cpuid_has(&vmx->vcpu, X86_FEATURE_RDPID)); 1770 1795 1771 - vmx_setup_uret_msr(vmx, MSR_IA32_TSX_CTRL); 1796 + /* 1797 + * hle=0, rtm=0, tsx_ctrl=1 can be found with some combinations of new 1798 + * kernel and old userspace. If those guests run on a tsx=off host, do 1799 + * allow guests to use TSX_CTRL, but don't change the value in hardware 1800 + * so that TSX remains always disabled. 1801 + */ 1802 + vmx_setup_uret_msr(vmx, MSR_IA32_TSX_CTRL, boot_cpu_has(X86_FEATURE_RTM)); 1772 1803 1773 1804 if (cpu_has_vmx_msr_bitmap()) 1774 1805 vmx_update_msr_bitmap(&vmx->vcpu); 1806 + 1807 + /* 1808 + * The set of MSRs to load may have changed, reload MSRs before the 1809 + * next VM-Enter. 1810 + */ 1811 + vmx->guest_uret_msrs_loaded = false; 1775 1812 } 1776 1813 1777 1814 static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) ··· 1982 1993 else 1983 1994 msr_info->data = vmx->pt_desc.guest.addr_a[index / 2]; 1984 1995 break; 1985 - case MSR_TSC_AUX: 1986 - if (!msr_info->host_initiated && 1987 - !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) 1988 - return 1; 1989 - goto find_uret_msr; 1990 1996 case MSR_IA32_DEBUGCTLMSR: 1991 1997 msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL); 1992 1998 break; ··· 2014 2030 2015 2031 if (!intel_pmu_lbr_is_enabled(vcpu)) 2016 2032 debugctl &= ~DEBUGCTLMSR_LBR_MASK; 2033 + 2034 + if (!guest_cpuid_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT)) 2035 + debugctl &= ~DEBUGCTLMSR_BUS_LOCK_DETECT; 2017 2036 2018 2037 return debugctl; 2019 2038 } ··· 2300 2313 else 2301 2314 vmx->pt_desc.guest.addr_a[index / 2] = data; 2302 2315 break; 2303 - case MSR_TSC_AUX: 2304 - if (!msr_info->host_initiated && 2305 - !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP)) 2306 - return 1; 2307 - /* Check reserved bit, higher 32 bits should be zero */ 2308 - if ((data >> 32) != 0) 2309 - return 1; 2310 - goto find_uret_msr; 2311 2316 case MSR_IA32_PERF_CAPABILITIES: 2312 2317 if (data && !vcpu_to_pmu(vcpu)->version) 2313 2318 return 1; ··· 4348 4369 xsaves_enabled, false); 4349 4370 } 4350 4371 4351 - vmx_adjust_sec_exec_feature(vmx, &exec_control, rdtscp, RDTSCP); 4372 + /* 4373 + * RDPID is also gated by ENABLE_RDTSCP, turn on the control if either 4374 + * feature is exposed to the guest. 
This creates a virtualization hole 4375 + * if both are supported in hardware but only one is exposed to the 4376 + * guest, but letting the guest execute RDTSCP or RDPID when either one 4377 + * is advertised is preferable to emulating the advertised instruction 4378 + * in KVM on #UD, and obviously better than incorrectly injecting #UD. 4379 + */ 4380 + if (cpu_has_vmx_rdtscp()) { 4381 + bool rdpid_or_rdtscp_enabled = 4382 + guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) || 4383 + guest_cpuid_has(vcpu, X86_FEATURE_RDPID); 4384 + 4385 + vmx_adjust_secondary_exec_control(vmx, &exec_control, 4386 + SECONDARY_EXEC_ENABLE_RDTSCP, 4387 + rdpid_or_rdtscp_enabled, false); 4388 + } 4352 4389 vmx_adjust_sec_exec_feature(vmx, &exec_control, invpcid, INVPCID); 4353 4390 4354 4391 vmx_adjust_sec_exec_exiting(vmx, &exec_control, rdrand, RDRAND); ··· 6850 6855 6851 6856 static int vmx_create_vcpu(struct kvm_vcpu *vcpu) 6852 6857 { 6858 + struct vmx_uret_msr *tsx_ctrl; 6853 6859 struct vcpu_vmx *vmx; 6854 6860 int i, cpu, err; 6855 6861 ··· 6873 6877 goto free_vpid; 6874 6878 } 6875 6879 6876 - BUILD_BUG_ON(ARRAY_SIZE(vmx_uret_msrs_list) != MAX_NR_USER_RETURN_MSRS); 6877 - 6878 - for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) { 6879 - u32 index = vmx_uret_msrs_list[i]; 6880 - u32 data_low, data_high; 6881 - int j = vmx->nr_uret_msrs; 6882 - 6883 - if (rdmsr_safe(index, &data_low, &data_high) < 0) 6884 - continue; 6885 - if (wrmsr_safe(index, data_low, data_high) < 0) 6886 - continue; 6887 - 6888 - vmx->guest_uret_msrs[j].slot = i; 6889 - vmx->guest_uret_msrs[j].data = 0; 6890 - switch (index) { 6891 - case MSR_IA32_TSX_CTRL: 6892 - /* 6893 - * TSX_CTRL_CPUID_CLEAR is handled in the CPUID 6894 - * interception. Keep the host value unchanged to avoid 6895 - * changing CPUID bits under the host kernel's feet. 6896 - * 6897 - * hle=0, rtm=0, tsx_ctrl=1 can be found with some 6898 - * combinations of new kernel and old userspace. If 6899 - * those guests run on a tsx=off host, do allow guests 6900 - * to use TSX_CTRL, but do not change the value on the 6901 - * host so that TSX remains always disabled. 6902 - */ 6903 - if (boot_cpu_has(X86_FEATURE_RTM)) 6904 - vmx->guest_uret_msrs[j].mask = ~(u64)TSX_CTRL_CPUID_CLEAR; 6905 - else 6906 - vmx->guest_uret_msrs[j].mask = 0; 6907 - break; 6908 - default: 6909 - vmx->guest_uret_msrs[j].mask = -1ull; 6910 - break; 6911 - } 6912 - ++vmx->nr_uret_msrs; 6880 + for (i = 0; i < kvm_nr_uret_msrs; ++i) { 6881 + vmx->guest_uret_msrs[i].data = 0; 6882 + vmx->guest_uret_msrs[i].mask = -1ull; 6883 + } 6884 + if (boot_cpu_has(X86_FEATURE_RTM)) { 6885 + /* 6886 + * TSX_CTRL_CPUID_CLEAR is handled in the CPUID interception. 6887 + * Keep the host value unchanged to avoid changing CPUID bits 6888 + * under the host kernel's feet. 
6889 + */ 6890 + tsx_ctrl = vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL); 6891 + if (tsx_ctrl) 6892 + tsx_ctrl->mask = ~(u64)TSX_CTRL_CPUID_CLEAR; 6913 6893 } 6914 6894 6915 6895 err = alloc_loaded_vmcs(&vmx->vmcs01); ··· 7316 7344 if (!cpu_has_vmx_xsaves()) 7317 7345 kvm_cpu_cap_clear(X86_FEATURE_XSAVES); 7318 7346 7319 - /* CPUID 0x80000001 */ 7320 - if (!cpu_has_vmx_rdtscp()) 7347 + /* CPUID 0x80000001 and 0x7 (RDPID) */ 7348 + if (!cpu_has_vmx_rdtscp()) { 7321 7349 kvm_cpu_cap_clear(X86_FEATURE_RDTSCP); 7350 + kvm_cpu_cap_clear(X86_FEATURE_RDPID); 7351 + } 7322 7352 7323 7353 if (cpu_has_vmx_waitpkg()) 7324 7354 kvm_cpu_cap_check_and_set(X86_FEATURE_WAITPKG); ··· 7376 7402 /* 7377 7403 * RDPID causes #UD if disabled through secondary execution controls. 7378 7404 * Because it is marked as EmulateOnUD, we need to intercept it here. 7405 + * Note, RDPID is hidden behind ENABLE_RDTSCP. 7379 7406 */ 7380 - case x86_intercept_rdtscp: 7407 + case x86_intercept_rdpid: 7381 7408 if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENABLE_RDTSCP)) { 7382 7409 exception->vector = UD_VECTOR; 7383 7410 exception->error_code_valid = false; ··· 7744 7769 .vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector, 7745 7770 }; 7746 7771 7772 + static __init void vmx_setup_user_return_msrs(void) 7773 + { 7774 + 7775 + /* 7776 + * Though SYSCALL is only supported in 64-bit mode on Intel CPUs, kvm 7777 + * will emulate SYSCALL in legacy mode if the vendor string in guest 7778 + * CPUID.0:{EBX,ECX,EDX} is "AuthenticAMD" or "AMDisbetter!" To 7779 + * support this emulation, MSR_STAR is included in the list for i386, 7780 + * but is never loaded into hardware. MSR_CSTAR is also never loaded 7781 + * into hardware and is here purely for emulation purposes. 7782 + */ 7783 + const u32 vmx_uret_msrs_list[] = { 7784 + #ifdef CONFIG_X86_64 7785 + MSR_SYSCALL_MASK, MSR_LSTAR, MSR_CSTAR, 7786 + #endif 7787 + MSR_EFER, MSR_TSC_AUX, MSR_STAR, 7788 + MSR_IA32_TSX_CTRL, 7789 + }; 7790 + int i; 7791 + 7792 + BUILD_BUG_ON(ARRAY_SIZE(vmx_uret_msrs_list) != MAX_NR_USER_RETURN_MSRS); 7793 + 7794 + for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) 7795 + kvm_add_user_return_msr(vmx_uret_msrs_list[i]); 7796 + } 7797 + 7747 7798 static __init int hardware_setup(void) 7748 7799 { 7749 7800 unsigned long host_bndcfgs; 7750 7801 struct desc_ptr dt; 7751 - int r, i, ept_lpage_level; 7802 + int r, ept_lpage_level; 7752 7803 7753 7804 store_idt(&dt); 7754 7805 host_idt_base = dt.address; 7755 7806 7756 - for (i = 0; i < ARRAY_SIZE(vmx_uret_msrs_list); ++i) 7757 - kvm_define_user_return_msr(i, vmx_uret_msrs_list[i]); 7807 + vmx_setup_user_return_msrs(); 7758 7808 7759 7809 if (setup_vmcs_config(&vmcs_config, &vmx_capability) < 0) 7760 7810 return -EIO;
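Note the one-token fix folded into the vcpu-create hunk above: after the init loop terminates, i equals kvm_nr_uret_msrs, so indexing guest_uret_msrs[i] would write past the initialized entries; the mask must be set through the tsx_ctrl pointer the lookup just returned. The net effect of the whole vmx.c rework: entries in guest_uret_msrs[] never move, the array index is the kvm_user_return_msrs slot, and a per-entry load_into_hardware flag replaces the old "sort actives to the front and count them" scheme. Condensed kernel-style shape of the load loop:

struct vmx_uret_msr {
        bool load_into_hardware;
        u64 data;
        u64 mask;
};

static void load_guest_uret_msrs(struct vmx_uret_msr *msrs, u32 nr)
{
        u32 i;

        for (i = 0; i < nr; ++i) {
                if (!msrs[i].load_into_hardware)
                        continue;
                /* slot index == array index, since entries never move */
                kvm_set_user_return_msr(i, msrs[i].data, msrs[i].mask);
        }
}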
+10 -3
arch/x86/kvm/vmx/vmx.h
··· 36 36 }; 37 37 38 38 struct vmx_uret_msr { 39 - unsigned int slot; /* The MSR's slot in kvm_user_return_msrs. */ 39 + bool load_into_hardware; 40 40 u64 data; 41 41 u64 mask; 42 42 }; ··· 245 245 u32 idt_vectoring_info; 246 246 ulong rflags; 247 247 248 + /* 249 + * User return MSRs are always emulated when enabled in the guest, but 250 + * only loaded into hardware when necessary, e.g. SYSCALL #UDs outside 251 + * of 64-bit mode or if EFER.SCE=0, thus the SYSCALL MSRs don't need to 252 + * be loaded into hardware if those conditions aren't met. 253 + * load_into_hardware marks the entries that must be loaded when running 254 + * the guest; entries never move, so each array index doubles as the 255 + * entry's kvm_user_return_msrs slot. 256 + */ 248 257 struct vmx_uret_msr guest_uret_msrs[MAX_NR_USER_RETURN_MSRS]; 249 - int nr_uret_msrs; 250 - int nr_active_uret_msrs; 251 258 bool guest_uret_msrs_loaded; 252 259 #ifdef CONFIG_X86_64
+113 -42
arch/x86/kvm/x86.c
··· 184 184 */ 185 185 #define KVM_MAX_NR_USER_RETURN_MSRS 16 186 186 187 - struct kvm_user_return_msrs_global { 188 - int nr; 189 - u32 msrs[KVM_MAX_NR_USER_RETURN_MSRS]; 190 - }; 191 - 192 187 struct kvm_user_return_msrs { 193 188 struct user_return_notifier urn; 194 189 bool registered; ··· 193 198 } values[KVM_MAX_NR_USER_RETURN_MSRS]; 194 199 }; 195 200 196 - static struct kvm_user_return_msrs_global __read_mostly user_return_msrs_global; 201 + u32 __read_mostly kvm_nr_uret_msrs; 202 + EXPORT_SYMBOL_GPL(kvm_nr_uret_msrs); 203 + static u32 __read_mostly kvm_uret_msrs_list[KVM_MAX_NR_USER_RETURN_MSRS]; 197 204 static struct kvm_user_return_msrs __percpu *user_return_msrs; 198 205 199 206 #define KVM_SUPPORTED_XCR0 (XFEATURE_MASK_FP | XFEATURE_MASK_SSE \ ··· 327 330 user_return_notifier_unregister(urn); 328 331 } 329 332 local_irq_restore(flags); 330 - for (slot = 0; slot < user_return_msrs_global.nr; ++slot) { 333 + for (slot = 0; slot < kvm_nr_uret_msrs; ++slot) { 331 334 values = &msrs->values[slot]; 332 335 if (values->host != values->curr) { 333 - wrmsrl(user_return_msrs_global.msrs[slot], values->host); 336 + wrmsrl(kvm_uret_msrs_list[slot], values->host); 334 337 values->curr = values->host; 335 338 } 336 339 } 337 340 } 338 341 339 - void kvm_define_user_return_msr(unsigned slot, u32 msr) 342 + static int kvm_probe_user_return_msr(u32 msr) 340 343 { 341 - BUG_ON(slot >= KVM_MAX_NR_USER_RETURN_MSRS); 342 - user_return_msrs_global.msrs[slot] = msr; 343 - if (slot >= user_return_msrs_global.nr) 344 - user_return_msrs_global.nr = slot + 1; 344 + u64 val; 345 + int ret; 346 + 347 + preempt_disable(); 348 + ret = rdmsrl_safe(msr, &val); 349 + if (ret) 350 + goto out; 351 + ret = wrmsrl_safe(msr, val); 352 + out: 353 + preempt_enable(); 354 + return ret; 345 355 } 346 - EXPORT_SYMBOL_GPL(kvm_define_user_return_msr); 356 + 357 + int kvm_add_user_return_msr(u32 msr) 358 + { 359 + BUG_ON(kvm_nr_uret_msrs >= KVM_MAX_NR_USER_RETURN_MSRS); 360 + 361 + if (kvm_probe_user_return_msr(msr)) 362 + return -1; 363 + 364 + kvm_uret_msrs_list[kvm_nr_uret_msrs] = msr; 365 + return kvm_nr_uret_msrs++; 366 + } 367 + EXPORT_SYMBOL_GPL(kvm_add_user_return_msr); 368 + 369 + int kvm_find_user_return_msr(u32 msr) 370 + { 371 + int i; 372 + 373 + for (i = 0; i < kvm_nr_uret_msrs; ++i) { 374 + if (kvm_uret_msrs_list[i] == msr) 375 + return i; 376 + } 377 + return -1; 378 + } 379 + EXPORT_SYMBOL_GPL(kvm_find_user_return_msr); 347 380 348 381 static void kvm_user_return_msr_cpu_online(void) 349 382 { ··· 382 355 u64 value; 383 356 int i; 384 357 385 - for (i = 0; i < user_return_msrs_global.nr; ++i) { 386 - rdmsrl_safe(user_return_msrs_global.msrs[i], &value); 358 + for (i = 0; i < kvm_nr_uret_msrs; ++i) { 359 + rdmsrl_safe(kvm_uret_msrs_list[i], &value); 387 360 msrs->values[i].host = value; 388 361 msrs->values[i].curr = value; 389 362 } ··· 398 371 value = (value & mask) | (msrs->values[slot].host & ~mask); 399 372 if (value == msrs->values[slot].curr) 400 373 return 0; 401 - err = wrmsrl_safe(user_return_msrs_global.msrs[slot], value); 374 + err = wrmsrl_safe(kvm_uret_msrs_list[slot], value); 402 375 if (err) 403 376 return 1; 404 377 ··· 1176 1149 1177 1150 if (!guest_cpuid_has(vcpu, X86_FEATURE_RTM)) 1178 1151 fixed |= DR6_RTM; 1152 + 1153 + if (!guest_cpuid_has(vcpu, X86_FEATURE_BUS_LOCK_DETECT)) 1154 + fixed |= DR6_BUS_LOCK; 1179 1155 return fixed; 1180 1156 } 1181 1157 ··· 1645 1615 * invokes 64-bit SYSENTER. 
1646 1616 */ 1647 1617 data = get_canonical(data, vcpu_virt_addr_bits(vcpu)); 1618 + break; 1619 + case MSR_TSC_AUX: 1620 + if (!kvm_is_supported_user_return_msr(MSR_TSC_AUX)) 1621 + return 1; 1622 + 1623 + if (!host_initiated && 1624 + !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) && 1625 + !guest_cpuid_has(vcpu, X86_FEATURE_RDPID)) 1626 + return 1; 1627 + 1628 + /* 1629 + * Per Intel's SDM, bits 63:32 are reserved, but AMD's APM has 1630 + * incomplete and conflicting architectural behavior. Current 1631 + * AMD CPUs completely ignore bits 63:32, i.e. they aren't 1632 + * reserved and always read as zeros. Enforce Intel's reserved 1633 + * bits check if and only if the guest CPU is Intel, and clear 1634 + * the bits in all other cases. This ensures cross-vendor 1635 + * migration will provide consistent behavior for the guest. 1636 + */ 1637 + if (guest_cpuid_is_intel(vcpu) && (data >> 32) != 0) 1638 + return 1; 1639 + 1640 + data = (u32)data; 1641 + break; 1648 1642 } 1649 1643 1650 1644 msr.data = data; ··· 1704 1650 1705 1651 if (!host_initiated && !kvm_msr_allowed(vcpu, index, KVM_MSR_FILTER_READ)) 1706 1652 return KVM_MSR_RET_FILTERED; 1653 + 1654 + switch (index) { 1655 + case MSR_TSC_AUX: 1656 + if (!kvm_is_supported_user_return_msr(MSR_TSC_AUX)) 1657 + return 1; 1658 + 1659 + if (!host_initiated && 1660 + !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) && 1661 + !guest_cpuid_has(vcpu, X86_FEATURE_RDPID)) 1662 + return 1; 1663 + break; 1664 + } 1707 1665 1708 1666 msr.index = index; 1709 1667 msr.host_initiated = host_initiated; ··· 3468 3402 case MSR_IA32_LASTBRANCHTOIP: 3469 3403 case MSR_IA32_LASTINTFROMIP: 3470 3404 case MSR_IA32_LASTINTTOIP: 3471 - case MSR_K8_SYSCFG: 3405 + case MSR_AMD64_SYSCFG: 3472 3406 case MSR_K8_TSEG_ADDR: 3473 3407 case MSR_K8_TSEG_MASK: 3474 3408 case MSR_VM_HSAVE_PA: ··· 5534 5468 static int kvm_add_msr_filter(struct kvm_x86_msr_filter *msr_filter, 5535 5469 struct kvm_msr_filter_range *user_range) 5536 5470 { 5537 - struct msr_bitmap_range range; 5538 5471 unsigned long *bitmap = NULL; 5539 5472 size_t bitmap_size; 5540 - int r; 5541 5473 5542 5474 if (!user_range->nmsrs) 5543 5475 return 0; 5476 + 5477 + if (user_range->flags & ~(KVM_MSR_FILTER_READ | KVM_MSR_FILTER_WRITE)) 5478 + return -EINVAL; 5479 + 5480 + if (!user_range->flags) 5481 + return -EINVAL; 5544 5482 5545 5483 bitmap_size = BITS_TO_LONGS(user_range->nmsrs) * sizeof(long); 5546 5484 if (!bitmap_size || bitmap_size > KVM_MSR_FILTER_MAX_BITMAP_SIZE) ··· 5554 5484 if (IS_ERR(bitmap)) 5555 5485 return PTR_ERR(bitmap); 5556 5486 5557 - range = (struct msr_bitmap_range) { 5487 + msr_filter->ranges[msr_filter->count] = (struct msr_bitmap_range) { 5558 5488 .flags = user_range->flags, 5559 5489 .base = user_range->base, 5560 5490 .nmsrs = user_range->nmsrs, 5561 5491 .bitmap = bitmap, 5562 5492 }; 5563 5493 5564 - if (range.flags & ~(KVM_MSR_FILTER_READ | KVM_MSR_FILTER_WRITE)) { 5565 - r = -EINVAL; 5566 - goto err; 5567 - } 5568 - 5569 - if (!range.flags) { 5570 - r = -EINVAL; 5571 - goto err; 5572 - } 5573 - 5574 - /* Everything ok, add this range identifier. 
*/ 5575 - msr_filter->ranges[msr_filter->count] = range; 5576 5494 msr_filter->count++; 5577 - 5578 5495 return 0; 5579 - err: 5580 - kfree(bitmap); 5581 - return r; 5582 5496 } 5583 5497 5584 5498 static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp) ··· 5991 5937 continue; 5992 5938 break; 5993 5939 case MSR_TSC_AUX: 5994 - if (!kvm_cpu_cap_has(X86_FEATURE_RDTSCP)) 5940 + if (!kvm_cpu_cap_has(X86_FEATURE_RDTSCP) && 5941 + !kvm_cpu_cap_has(X86_FEATURE_RDPID)) 5995 5942 continue; 5996 5943 break; 5997 5944 case MSR_IA32_UMWAIT_CONTROL: ··· 8095 8040 static DECLARE_WORK(pvclock_gtod_work, pvclock_gtod_update_fn); 8096 8041 8097 8042 /* 8043 + * Indirection to move queue_work() out of the tk_core.seq write held 8044 + * region to prevent possible deadlocks against time accessors which 8045 + * are invoked with work related locks held. 8046 + */ 8047 + static void pvclock_irq_work_fn(struct irq_work *w) 8048 + { 8049 + queue_work(system_long_wq, &pvclock_gtod_work); 8050 + } 8051 + 8052 + static DEFINE_IRQ_WORK(pvclock_irq_work, pvclock_irq_work_fn); 8053 + 8054 + /* 8098 8055 * Notification about pvclock gtod data update. 8099 8056 */ 8100 8057 static int pvclock_gtod_notify(struct notifier_block *nb, unsigned long unused, ··· 8117 8050 8118 8051 update_pvclock_gtod(tk); 8119 8052 8120 - /* disable master clock if host does not trust, or does not 8121 - * use, TSC based clocksource. 8053 + /* 8054 + * Disable master clock if host does not trust, or does not use, 8055 + * TSC based clocksource. Delegate queue_work() to irq_work as 8056 + * this is invoked with tk_core.seq write held. 8122 8057 */ 8123 8058 if (!gtod_is_based_on_tsc(gtod->clock.vclock_mode) && 8124 8059 atomic_read(&kvm_guest_has_master_clock) != 0) 8125 - queue_work(system_long_wq, &pvclock_gtod_work); 8126 - 8060 + irq_work_queue(&pvclock_irq_work); 8127 8061 return 0; 8128 8062 } 8129 8063 ··· 8186 8118 printk(KERN_ERR "kvm: failed to allocate percpu kvm_user_return_msrs\n"); 8187 8119 goto out_free_x86_emulator_cache; 8188 8120 } 8121 + kvm_nr_uret_msrs = 0; 8189 8122 8190 8123 r = kvm_mmu_module_init(); 8191 8124 if (r) ··· 8237 8168 cpuhp_remove_state_nocalls(CPUHP_AP_X86_KVM_CLK_ONLINE); 8238 8169 #ifdef CONFIG_X86_64 8239 8170 pvclock_gtod_unregister_notifier(&pvclock_gtod_notifier); 8171 + irq_work_sync(&pvclock_irq_work); 8172 + cancel_work_sync(&pvclock_gtod_work); 8240 8173 #endif 8241 8174 kvm_x86_ops.hardware_enable = NULL; 8242 8175 kvm_mmu_module_exit();
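Among the x86.c changes above, the pvclock fix is the subtle one: pvclock_gtod_notify() runs with the timekeeper's tk_core.seq write held, so calling queue_work() there can deadlock against time accessors invoked under workqueue-related locks. The fix bounces through irq_work, which runs only after the seqlock is dropped. The added indirection, in isolation:

static void pvclock_irq_work_fn(struct irq_work *w)
{
        /* runs in a context where tk_core.seq is no longer write-held */
        queue_work(system_long_wq, &pvclock_gtod_work);
}

static DEFINE_IRQ_WORK(pvclock_irq_work, pvclock_irq_work_fn);

/* the notifier now calls irq_work_queue(&pvclock_irq_work) instead of
 * queue_work(); teardown pairs it with irq_work_sync() followed by
 * cancel_work_sync(), in that order, so neither can requeue the other. */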
+1 -1
arch/x86/mm/extable.c
··· 5 5 #include <xen/xen.h> 6 6 7 7 #include <asm/fpu/internal.h> 8 - #include <asm/sev-es.h> 8 + #include <asm/sev.h> 9 9 #include <asm/traps.h> 10 10 #include <asm/kdebug.h> 11 11
+3 -3
arch/x86/mm/mem_encrypt_identity.c
··· 529 529 /* 530 530 * No SME if Hypervisor bit is set. This check is here to 531 531 * prevent a guest from trying to enable SME. For running as a 532 - * KVM guest the MSR_K8_SYSCFG will be sufficient, but there 532 + * KVM guest the MSR_AMD64_SYSCFG will be sufficient, but there 533 533 * might be other hypervisors which emulate that MSR as non-zero 534 534 * or even pass it through to the guest. 535 535 * A malicious hypervisor can still trick a guest into this ··· 542 542 return; 543 543 544 544 /* For SME, check the SYSCFG MSR */ 545 - msr = __rdmsr(MSR_K8_SYSCFG); 546 - if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT)) 545 + msr = __rdmsr(MSR_AMD64_SYSCFG); 546 + if (!(msr & MSR_AMD64_SYSCFG_MEM_ENCRYPT)) 547 547 return; 548 548 } else { 549 549 /* SEV state cannot be controlled by a command line option */
+1 -1
arch/x86/pci/amd_bus.c
··· 284 284 285 285 /* need to take out [4G, TOM2) for RAM*/ 286 286 /* SYS_CFG */ 287 - address = MSR_K8_SYSCFG; 287 + address = MSR_AMD64_SYSCFG; 288 288 rdmsrl(address, val); 289 289 /* TOP_MEM2 is enabled? */ 290 290 if (val & (1<<21)) {
+1 -1
arch/x86/platform/efi/efi_64.c
··· 47 47 #include <asm/realmode.h> 48 48 #include <asm/time.h> 49 49 #include <asm/pgalloc.h> 50 - #include <asm/sev-es.h> 50 + #include <asm/sev.h> 51 51 52 52 /* 53 53 * We allocate runtime services regions top-down, starting from -4G, i.e.
+1 -1
arch/x86/realmode/init.c
··· 9 9 #include <asm/realmode.h> 10 10 #include <asm/tlbflush.h> 11 11 #include <asm/crash.h> 12 - #include <asm/sev-es.h> 12 + #include <asm/sev.h> 13 13 14 14 struct real_mode_header *real_mode_header; 15 15 u32 *trampoline_cr4_features;
+2 -2
arch/x86/realmode/rm/trampoline_64.S
··· 123 123 */ 124 124 btl $TH_FLAGS_SME_ACTIVE_BIT, pa_tr_flags 125 125 jnc .Ldone 126 - movl $MSR_K8_SYSCFG, %ecx 126 + movl $MSR_AMD64_SYSCFG, %ecx 127 127 rdmsr 128 - bts $MSR_K8_SYSCFG_MEM_ENCRYPT_BIT, %eax 128 + bts $MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT, %eax 129 129 jc .Ldone 130 130 131 131 /*
+1 -1
arch/xtensa/kernel/syscalls/syscall.tbl
··· 413 413 440 common process_madvise sys_process_madvise 414 414 441 common epoll_pwait2 sys_epoll_pwait2 415 415 442 common mount_setattr sys_mount_setattr 416 - 443 common quotactl_path sys_quotactl_path 416 + # 443 reserved for quotactl_path 417 417 444 common landlock_create_ruleset sys_landlock_create_ruleset 418 418 445 common landlock_add_rule sys_landlock_add_rule 419 419 446 common landlock_restrict_self sys_landlock_restrict_self
+30 -4
block/bfq-iosched.c
··· 372 372 return bic->bfqq[is_sync]; 373 373 } 374 374 375 + static void bfq_put_stable_ref(struct bfq_queue *bfqq); 376 + 375 377 void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync) 376 378 { 379 + /* 380 + * If bfqq != NULL, then a non-stable queue merge between 381 + * bic->bfqq and bfqq is happening here. This causes troubles 382 + * in the following case: bic->bfqq has also been scheduled 383 + * for a possible stable merge with bic->stable_merge_bfqq, 384 + * and bic->stable_merge_bfqq == bfqq happens to 385 + * hold. Troubles occur because bfqq may then undergo a split, 386 + * thereby becoming eligible for a stable merge. Yet, if 387 + * bic->stable_merge_bfqq points exactly to bfqq, then bfqq 388 + * would be stably merged with itself. To avoid this anomaly, 389 + * we cancel the stable merge if 390 + * bic->stable_merge_bfqq == bfqq. 391 + */ 377 392 bic->bfqq[is_sync] = bfqq; 393 + 394 + if (bfqq && bic->stable_merge_bfqq == bfqq) { 395 + /* 396 + * Actually, these same instructions are executed also 397 + * in bfq_setup_cooperator, in case of abort or actual 398 + * execution of a stable merge. We could avoid 399 + * repeating these instructions there too, but if we 400 + * did so, we would nest even more complexity in this 401 + * function. 402 + */ 403 + bfq_put_stable_ref(bic->stable_merge_bfqq); 404 + 405 + bic->stable_merge_bfqq = NULL; 406 + } 378 407 } 379 408 380 409 struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic) ··· 2292 2263 2293 2264 } 2294 2265 2295 - static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio, 2266 + static bool bfq_bio_merge(struct request_queue *q, struct bio *bio, 2296 2267 unsigned int nr_segs) 2297 2268 { 2298 - struct request_queue *q = hctx->queue; 2299 2269 struct bfq_data *bfqd = q->elevator->elevator_data; 2300 2270 struct request *free = NULL; 2301 2271 /* ··· 2658 2630 2659 2631 static bool idling_boosts_thr_without_issues(struct bfq_data *bfqd, 2660 2632 struct bfq_queue *bfqq); 2661 - 2662 - static void bfq_put_stable_ref(struct bfq_queue *bfqq); 2663 2633 2664 2634 /* 2665 2635 * Attempt to schedule a merge of bfqq with the currently in-service
+12 -2
block/blk-iocost.c
··· 1069 1069 1070 1070 lockdep_assert_held(&ioc->lock); 1071 1071 1072 - inuse = clamp_t(u32, inuse, 1, active); 1072 + /* 1073 + * For an active leaf node, its inuse shouldn't be zero or exceed 1074 + * @active. An active internal node's inuse is solely determined by the 1075 + * inuse to active ratio of its children regardless of @inuse. 1076 + */ 1077 + if (list_empty(&iocg->active_list) && iocg->child_active_sum) { 1078 + inuse = DIV64_U64_ROUND_UP(active * iocg->child_inuse_sum, 1079 + iocg->child_active_sum); 1080 + } else { 1081 + inuse = clamp_t(u32, inuse, 1, active); 1082 + } 1073 1083 1074 1084 iocg->last_inuse = iocg->inuse; 1075 1085 if (save) ··· 1096 1086 /* update the level sums */ 1097 1087 parent->child_active_sum += (s32)(active - child->active); 1098 1088 parent->child_inuse_sum += (s32)(inuse - child->inuse); 1099 - /* apply the udpates */ 1089 + /* apply the updates */ 1100 1090 child->active = active; 1101 1091 child->inuse = inuse; 1102 1092
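The blk-iocost fix above changes how an active internal node's inuse weight is derived: it now follows its children's inuse/active ratio, rounded up, instead of being clamped against the passed-in value. A tiny standalone check of the arithmetic; for example active = 100, child_inuse_sum = 50, child_active_sum = 300 gives ceil(100 * 50 / 300) = 17:

#include <stdint.h>
#include <stdio.h>

static uint64_t div64_u64_round_up(uint64_t a, uint64_t b)
{
        return (a + b - 1) / b;         /* DIV64_U64_ROUND_UP() */
}

int main(void)
{
        uint64_t active = 100, child_inuse_sum = 50, child_active_sum = 300;

        printf("inuse = %llu\n", (unsigned long long)
               div64_u64_round_up(active * child_inuse_sum, child_active_sum));
        return 0;
}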
+5 -3
block/blk-mq-sched.c
··· 358 358 unsigned int nr_segs) 359 359 { 360 360 struct elevator_queue *e = q->elevator; 361 - struct blk_mq_ctx *ctx = blk_mq_get_ctx(q); 362 - struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx); 361 + struct blk_mq_ctx *ctx; 362 + struct blk_mq_hw_ctx *hctx; 363 363 bool ret = false; 364 364 enum hctx_type type; 365 365 366 366 if (e && e->type->ops.bio_merge) 367 - return e->type->ops.bio_merge(hctx, bio, nr_segs); 367 + return e->type->ops.bio_merge(q, bio, nr_segs); 368 368 369 + ctx = blk_mq_get_ctx(q); 370 + hctx = blk_mq_map_queue(q, bio->bi_opf, ctx); 369 371 type = hctx->type; 370 372 if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) || 371 373 list_empty_careful(&ctx->rq_lists[type]))
+7 -4
block/blk-mq.c
··· 2232 2232 /* Bypass scheduler for flush requests */ 2233 2233 blk_insert_flush(rq); 2234 2234 blk_mq_run_hw_queue(data.hctx, true); 2235 - } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs || 2236 - !blk_queue_nonrot(q))) { 2235 + } else if (plug && (q->nr_hw_queues == 1 || 2236 + blk_mq_is_sbitmap_shared(rq->mq_hctx->flags) || 2237 + q->mq_ops->commit_rqs || !blk_queue_nonrot(q))) { 2237 2238 /* 2238 2239 * Use plugging if we have a ->commit_rqs() hook as well, as 2239 2240 * we know the driver uses bd->last in a smart fashion. ··· 3286 3285 /* tags can _not_ be used after returning from blk_mq_exit_queue */ 3287 3286 void blk_mq_exit_queue(struct request_queue *q) 3288 3287 { 3289 - struct blk_mq_tag_set *set = q->tag_set; 3288 + struct blk_mq_tag_set *set = q->tag_set; 3290 3289 3291 - blk_mq_del_queue_tag_set(q); 3290 + /* Checks hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED. */ 3292 3291 blk_mq_exit_hw_queues(q, set, set->nr_hw_queues); 3292 + /* May clear BLK_MQ_F_TAG_QUEUE_SHARED in hctx->flags. */ 3293 + blk_mq_del_queue_tag_set(q); 3293 3294 } 3294 3295 3295 3296 static int __blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+3 -2
block/kyber-iosched.c
··· 561 561 } 562 562 } 563 563 564 - static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio, 564 + static bool kyber_bio_merge(struct request_queue *q, struct bio *bio, 565 565 unsigned int nr_segs) 566 566 { 567 + struct blk_mq_ctx *ctx = blk_mq_get_ctx(q); 568 + struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx); 567 569 struct kyber_hctx_data *khd = hctx->sched_data; 568 - struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue); 569 570 struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw[hctx->type]]; 570 571 unsigned int sched_domain = kyber_sched_domain(bio->bi_opf); 571 572 struct list_head *rq_list = &kcq->rq_list[sched_domain];
+1 -2
block/mq-deadline.c
··· 461 461 return ELEVATOR_NO_MERGE; 462 462 } 463 463 464 - static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio, 464 + static bool dd_bio_merge(struct request_queue *q, struct bio *bio, 465 465 unsigned int nr_segs) 466 466 { 467 - struct request_queue *q = hctx->queue; 468 467 struct deadline_data *dd = q->elevator->elevator_data; 469 468 struct request *free = NULL; 470 469 bool ret;
+1 -1
block/partitions/efi.c
··· 682 682 } 683 683 684 684 /** 685 - * efi_partition(struct parsed_partitions *state) 685 + * efi_partition - scan for GPT partitions 686 686 * @state: disk parsed partitions 687 687 * 688 688 * Description: called from check.c, if the disk contains GPT
+1
drivers/acpi/device_pm.c
··· 1313 1313 {"PNP0C0B", }, /* Generic ACPI fan */ 1314 1314 {"INT3404", }, /* Fan */ 1315 1315 {"INTC1044", }, /* Fan for Tiger Lake generation */ 1316 + {"INTC1048", }, /* Fan for Alder Lake generation */ 1316 1317 {} 1317 1318 }; 1318 1319 struct acpi_device *adev = ACPI_COMPANION(dev);
+1
drivers/acpi/internal.h
··· 142 142 int acpi_power_get_inferred_state(struct acpi_device *device, int *state); 143 143 int acpi_power_on_resources(struct acpi_device *device, int state); 144 144 int acpi_power_transition(struct acpi_device *device, int state); 145 + void acpi_turn_off_unused_power_resources(void); 145 146 146 147 /* -------------------------------------------------------------------------- 147 148 Device Power Management
+11 -4
drivers/acpi/nfit/core.c
··· 686 686 return -1; 687 687 } 688 688 689 + static size_t sizeof_spa(struct acpi_nfit_system_address *spa) 690 + { 691 + if (spa->flags & ACPI_NFIT_LOCATION_COOKIE_VALID) 692 + return sizeof(*spa); 693 + return sizeof(*spa) - 8; 694 + } 695 + 689 696 static bool add_spa(struct acpi_nfit_desc *acpi_desc, 690 697 struct nfit_table_prev *prev, 691 698 struct acpi_nfit_system_address *spa) ··· 700 693 struct device *dev = acpi_desc->dev; 701 694 struct nfit_spa *nfit_spa; 702 695 703 - if (spa->header.length != sizeof(*spa)) 696 + if (spa->header.length != sizeof_spa(spa)) 704 697 return false; 705 698 706 699 list_for_each_entry(nfit_spa, &prev->spas, list) { 707 - if (memcmp(nfit_spa->spa, spa, sizeof(*spa)) == 0) { 700 + if (memcmp(nfit_spa->spa, spa, sizeof_spa(spa)) == 0) { 708 701 list_move_tail(&nfit_spa->list, &acpi_desc->spas); 709 702 return true; 710 703 } 711 704 } 712 705 713 - nfit_spa = devm_kzalloc(dev, sizeof(*nfit_spa) + sizeof(*spa), 706 + nfit_spa = devm_kzalloc(dev, sizeof(*nfit_spa) + sizeof_spa(spa), 714 707 GFP_KERNEL); 715 708 if (!nfit_spa) 716 709 return false; 717 710 INIT_LIST_HEAD(&nfit_spa->list); 718 - memcpy(nfit_spa->spa, spa, sizeof(*spa)); 711 + memcpy(nfit_spa->spa, spa, sizeof_spa(spa)); 719 712 list_add_tail(&nfit_spa->list, &acpi_desc->spas); 720 713 dev_dbg(dev, "spa index: %d type: %s\n", 721 714 spa->range_index,
+1 -1
drivers/acpi/power.c
··· 995 995 996 996 mutex_unlock(&power_resource_list_lock); 997 997 } 998 + #endif 998 999 999 1000 void acpi_turn_off_unused_power_resources(void) 1000 1001 { ··· 1016 1015 1017 1016 mutex_unlock(&power_resource_list_lock); 1018 1017 } 1019 - #endif
+3
drivers/acpi/scan.c
··· 700 700 701 701 result = acpi_device_set_name(device, acpi_device_bus_id); 702 702 if (result) { 703 + kfree_const(acpi_device_bus_id->bus_id); 703 704 kfree(acpi_device_bus_id); 704 705 goto err_unlock; 705 706 } ··· 2359 2358 goto out; 2360 2359 } 2361 2360 } 2361 + 2362 + acpi_turn_off_unused_power_resources(); 2362 2363 2363 2364 acpi_scan_initialized = true; 2364 2365
-1
drivers/acpi/sleep.h
··· 8 8 extern struct mutex acpi_device_lock; 9 9 10 10 extern void acpi_resume_power_resources(void); 11 - extern void acpi_turn_off_unused_power_resources(void); 12 11 13 12 static inline acpi_status acpi_set_waking_vector(u32 wakeup_address) 14 13 {
+1 -1
drivers/android/binder.c
··· 4918 4918 uint32_t enable; 4919 4919 4920 4920 if (copy_from_user(&enable, ubuf, sizeof(enable))) { 4921 - ret = -EINVAL; 4921 + ret = -EFAULT; 4922 4922 goto err; 4923 4923 } 4924 4924 binder_inner_proc_lock(proc);
+2 -1
drivers/base/core.c
··· 150 150 fwnode_links_purge_consumers(fwnode); 151 151 } 152 152 153 - static void fw_devlink_purge_absent_suppliers(struct fwnode_handle *fwnode) 153 + void fw_devlink_purge_absent_suppliers(struct fwnode_handle *fwnode) 154 154 { 155 155 struct fwnode_handle *child; 156 156 ··· 164 164 fwnode_for_each_available_child_node(fwnode, child) 165 165 fw_devlink_purge_absent_suppliers(child); 166 166 } 167 + EXPORT_SYMBOL_GPL(fw_devlink_purge_absent_suppliers); 167 168 168 169 #ifdef CONFIG_SRCU 169 170 static DEFINE_MUTEX(device_links_lock);
+7 -3
drivers/base/power/runtime.c
··· 1637 1637 dev->power.request_pending = false; 1638 1638 dev->power.request = RPM_REQ_NONE; 1639 1639 dev->power.deferred_resume = false; 1640 + dev->power.needs_force_resume = 0; 1640 1641 INIT_WORK(&dev->power.work, pm_runtime_work); 1641 1642 1642 1643 dev->power.timer_expires = 0; ··· 1805 1804 * its parent, but set its status to RPM_SUSPENDED anyway in case this 1806 1805 * function will be called again for it in the meantime. 1807 1806 */ 1808 - if (pm_runtime_need_not_resume(dev)) 1807 + if (pm_runtime_need_not_resume(dev)) { 1809 1808 pm_runtime_set_suspended(dev); 1810 - else 1809 + } else { 1811 1810 __update_runtime_status(dev, RPM_SUSPENDED); 1811 + dev->power.needs_force_resume = 1; 1812 + } 1812 1813 1813 1814 return 0; 1814 1815 ··· 1837 1834 int (*callback)(struct device *); 1838 1835 int ret = 0; 1839 1836 1840 - if (!pm_runtime_status_suspended(dev) || pm_runtime_need_not_resume(dev)) 1837 + if (!pm_runtime_status_suspended(dev) || !dev->power.needs_force_resume) 1841 1838 goto out; 1842 1839 1843 1840 /* ··· 1856 1853 1857 1854 pm_runtime_mark_last_busy(dev); 1858 1855 out: 1856 + dev->power.needs_force_resume = 0; 1859 1857 pm_runtime_enable(dev); 1860 1858 return ret; 1861 1859 }
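The runtime-PM hunk above replaces "recompute pm_runtime_need_not_resume() at resume time" with a sticky per-device flag: force_suspend() records whether the device genuinely needs a forced resume, and force_resume() consumes that flag. A minimal model of the handshake, with hypothetical names and the locking and usage counting stripped out:

#include <stdbool.h>

struct dev_power {
        bool suspended;
        bool needs_force_resume;
};

static void force_suspend(struct dev_power *p, bool need_not_resume)
{
        p->suspended = true;
        /* only devices quiesced here, and not covered by their parent,
         * must be brought back by force_resume() */
        p->needs_force_resume = !need_not_resume;
}

static void force_resume(struct dev_power *p)
{
        if (!p->needs_force_resume)
                return;                 /* nothing to undo */
        p->suspended = false;
        p->needs_force_resume = false;  /* consume the flag */
}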
+5 -5
drivers/block/nbd.c
··· 1980 1980 * config ref and try to destroy the workqueue from inside the work 1981 1981 * queue. 1982 1982 */ 1983 - flush_workqueue(nbd->recv_workq); 1983 + if (nbd->recv_workq) 1984 + flush_workqueue(nbd->recv_workq); 1984 1985 if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF, 1985 1986 &nbd->config->runtime_flags)) 1986 1987 nbd_config_put(nbd); ··· 2015 2014 return -EINVAL; 2016 2015 } 2017 2016 mutex_unlock(&nbd_index_mutex); 2018 - if (!refcount_inc_not_zero(&nbd->config_refs)) { 2019 - nbd_put(nbd); 2020 - return 0; 2021 - } 2017 + if (!refcount_inc_not_zero(&nbd->config_refs)) 2018 + goto put_nbd; 2022 2019 nbd_disconnect_and_put(nbd); 2023 2020 nbd_config_put(nbd); 2021 + put_nbd: 2024 2022 nbd_put(nbd); 2025 2023 return 0; 2026 2024 }
+10 -3
drivers/cdrom/gdrom.c
··· 744 744 static int probe_gdrom(struct platform_device *devptr) 745 745 { 746 746 int err; 747 + 748 + /* 749 + * Ensure our "one" device is initialized properly in case of previous 750 + * usages of it 751 + */ 752 + memset(&gd, 0, sizeof(gd)); 753 + 747 754 /* Start the device */ 748 755 if (gdrom_execute_diagnostic() != 1) { 749 756 pr_warn("ATA Probe for GDROM failed\n"); ··· 837 830 if (gdrom_major) 838 831 unregister_blkdev(gdrom_major, GDROM_DEV_NAME); 839 832 unregister_cdrom(gd.cd_info); 833 + kfree(gd.cd_info); 834 + kfree(gd.toc); 840 835 841 836 return 0; 842 837 } ··· 854 845 static int __init init_gdrom(void) 855 846 { 856 847 int rc; 857 - gd.toc = NULL; 848 + 858 849 rc = platform_driver_register(&gdrom_driver); 859 850 if (rc) 860 851 return rc; ··· 870 861 { 871 862 platform_device_unregister(pd); 872 863 platform_driver_unregister(&gdrom_driver); 873 - kfree(gd.toc); 874 - kfree(gd.cd_info); 875 864 } 876 865 877 866 module_init(init_gdrom);
+2
drivers/char/hpet.c
··· 984 984 hdp->hd_phys_address = fixmem32->address; 985 985 hdp->hd_address = ioremap(fixmem32->address, 986 986 HPET_RANGE_SIZE); 987 + if (!hdp->hd_address) 988 + return AE_ERROR; 987 989 988 990 if (hpet_is_known(hdp)) { 989 991 iounmap(hdp->hd_address);
+1
drivers/char/tpm/tpm2-cmd.c
··· 656 656 657 657 if (nr_commands != 658 658 be32_to_cpup((__be32 *)&buf.data[TPM_HEADER_SIZE + 5])) { 659 + rc = -EFAULT; 659 660 tpm_buf_destroy(&buf); 660 661 goto out; 661 662 }
+14 -8
drivers/char/tpm/tpm_tis_core.c
··· 709 709 cap_t cap; 710 710 int ret; 711 711 712 - /* TPM 2.0 */ 713 - if (chip->flags & TPM_CHIP_FLAG_TPM2) 714 - return tpm2_get_tpm_pt(chip, 0x100, &cap2, desc); 715 - 716 - /* TPM 1.2 */ 717 712 ret = request_locality(chip, 0); 718 713 if (ret < 0) 719 714 return ret; 720 715 721 - ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0); 716 + if (chip->flags & TPM_CHIP_FLAG_TPM2) 717 + ret = tpm2_get_tpm_pt(chip, 0x100, &cap2, desc); 718 + else 719 + ret = tpm1_getcap(chip, TPM_CAP_PROP_TIS_TIMEOUT, &cap, desc, 0); 722 720 723 721 release_locality(chip, 0); 724 722 ··· 1125 1127 if (ret) 1126 1128 return ret; 1127 1129 1128 - /* TPM 1.2 requires self-test on resume. This function actually returns 1130 + /* 1131 + * TPM 1.2 requires self-test on resume. This function actually returns 1129 1132 * an error code but for unknown reason it isn't handled. 1130 1133 */ 1131 - if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) 1134 + if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) { 1135 + ret = request_locality(chip, 0); 1136 + if (ret < 0) 1137 + return ret; 1138 + 1132 1139 tpm1_do_selftest(chip); 1140 + 1141 + release_locality(chip, 0); 1142 + } 1133 1143 1134 1144 return 0; 1135 1145 }
+9
drivers/clk/clk.c
··· 4540 4540 struct of_clk_provider *cp; 4541 4541 int ret; 4542 4542 4543 + if (!np) 4544 + return 0; 4545 + 4543 4546 cp = kzalloc(sizeof(*cp), GFP_KERNEL); 4544 4547 if (!cp) 4545 4548 return -ENOMEM; ··· 4581 4578 { 4582 4579 struct of_clk_provider *cp; 4583 4580 int ret; 4581 + 4582 + if (!np) 4583 + return 0; 4584 4584 4585 4585 cp = kzalloc(sizeof(*cp), GFP_KERNEL); 4586 4586 if (!cp) ··· 4681 4675 void of_clk_del_provider(struct device_node *np) 4682 4676 { 4683 4677 struct of_clk_provider *cp; 4678 + 4679 + if (!np) 4680 + return; 4684 4681 4685 4682 mutex_lock(&of_clk_mutex); 4686 4683 list_for_each_entry(cp, &of_clk_providers, link) {
+2 -2
drivers/clocksource/hyperv_timer.c
··· 419 419 hv_set_register(HV_REGISTER_REFERENCE_TSC, tsc_msr); 420 420 } 421 421 422 - #ifdef VDSO_CLOCKMODE_HVCLOCK 422 + #ifdef HAVE_VDSO_CLOCKMODE_HVCLOCK 423 423 static int hv_cs_enable(struct clocksource *cs) 424 424 { 425 425 vclocks_set_used(VDSO_CLOCKMODE_HVCLOCK); ··· 435 435 .flags = CLOCK_SOURCE_IS_CONTINUOUS, 436 436 .suspend= suspend_hv_clock_tsc, 437 437 .resume = resume_hv_clock_tsc, 438 - #ifdef VDSO_CLOCKMODE_HVCLOCK 438 + #ifdef HAVE_VDSO_CLOCKMODE_HVCLOCK 439 439 .enable = hv_cs_enable, 440 440 .vdso_clock_mode = VDSO_CLOCKMODE_HVCLOCK, 441 441 #else
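The hyperv_timer fix above is a classic #ifdef pitfall: VDSO_CLOCKMODE_HVCLOCK is an enumerator, not a macro, so "#ifdef VDSO_CLOCKMODE_HVCLOCK" is always false and the guarded code silently never builds; the guard must test the companion macro instead. A tiny demonstration of the failure mode (the names here are illustrative):

#include <stdio.h>

enum clockmode { VDSO_CLOCKMODE_DEMO = 1 };     /* enumerator, invisible to cpp */
#define HAVE_VDSO_CLOCKMODE_DEMO 1              /* companion macro to test */

int main(void)
{
#ifdef VDSO_CLOCKMODE_DEMO
        puts("never printed: the preprocessor cannot see enumerators");
#endif
#ifdef HAVE_VDSO_CLOCKMODE_DEMO
        puts("printed: guard on the macro instead");
#endif
        return 0;
}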
+5 -1
drivers/cpufreq/acpi-cpufreq.c
··· 646 646 return 0; 647 647 } 648 648 649 - highest_perf = perf_caps.highest_perf; 649 + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) 650 + highest_perf = amd_get_highest_perf(); 651 + else 652 + highest_perf = perf_caps.highest_perf; 653 + 650 654 nominal_perf = perf_caps.nominal_perf; 651 655 652 656 if (!highest_perf || !nominal_perf) {
+13 -1
drivers/cpufreq/intel_pstate.c
··· 3033 3033 {} 3034 3034 }; 3035 3035 3036 + static bool intel_pstate_hwp_is_enabled(void) 3037 + { 3038 + u64 value; 3039 + 3040 + rdmsrl(MSR_PM_ENABLE, value); 3041 + return !!(value & 0x1); 3042 + } 3043 + 3036 3044 static int __init intel_pstate_init(void) 3037 3045 { 3038 3046 const struct x86_cpu_id *id; ··· 3059 3051 * Avoid enabling HWP for processors without EPP support, 3060 3052 * because that means incomplete HWP implementation which is a 3061 3053 * corner case and supporting it is generally problematic. 3054 + * 3055 + * If HWP is enabled already, though, there is no choice but to 3056 + * deal with it. 3062 3057 */ 3063 - if (!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) { 3058 + if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || 3059 + intel_pstate_hwp_is_enabled()) { 3064 3060 hwp_active++; 3065 3061 hwp_mode_bdw = id->driver_data; 3066 3062 intel_pstate.attr = hwp_cpufreq_attrs;
-1
drivers/crypto/cavium/nitrox/nitrox_main.c
··· 442 442 err = pci_request_mem_regions(pdev, nitrox_driver_name); 443 443 if (err) { 444 444 pci_disable_device(pdev); 445 - dev_err(&pdev->dev, "Failed to request mem regions!\n"); 446 445 return err; 447 446 } 448 447 pci_set_master(pdev);
+16 -1
drivers/dma/qcom/hidma_mgmt.c
··· 418 418 hidma_mgmt_of_populate_channels(child); 419 419 } 420 420 #endif 421 - return platform_driver_register(&hidma_mgmt_driver); 421 + /* 422 + * We do not check for return value here, as it is assumed that 423 + * platform_driver_register must not fail. The reason for this is that 424 + * the (potential) hidma_mgmt_of_populate_channels calls above are not 425 + * cleaned up if it does fail, and to do this work is quite 426 + * complicated. In particular, various calls of of_address_to_resource, 427 + * of_irq_to_resource, platform_device_register_full, of_dma_configure, 428 + * and of_msi_configure which then call other functions and so on, must 429 + * be cleaned up - this is not a trivial exercise. 430 + * 431 + * Currently, this module is not intended to be unloaded, and there is 432 + * no module_exit function defined which does the needed cleanup. For 433 + * this reason, we have to assume success here. 434 + */ 435 + platform_driver_register(&hidma_mgmt_driver); 422 436 437 + return 0; 423 438 } 424 439 module_init(hidma_mgmt_init); 425 440 MODULE_LICENSE("GPL v2");
+1 -1
drivers/edac/amd64_edac.c
··· 3083 3083 edac_dbg(0, " TOP_MEM: 0x%016llx\n", pvt->top_mem); 3084 3084 3085 3085 /* Check first whether TOP_MEM2 is enabled: */ 3086 - rdmsrl(MSR_K8_SYSCFG, msr_val); 3086 + rdmsrl(MSR_AMD64_SYSCFG, msr_val); 3087 3087 if (msr_val & BIT(21)) { 3088 3088 rdmsrl(MSR_K8_TOP_MEM2, pvt->top_mem2); 3089 3089 edac_dbg(0, " TOP_MEM2: 0x%016llx\n", pvt->top_mem2);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 1006 1006 struct amdgpu_df df; 1007 1007 1008 1008 struct amdgpu_ip_block ip_blocks[AMDGPU_MAX_IP_NUM]; 1009 + uint32_t harvest_ip_mask; 1009 1010 int num_ip_blocks; 1010 1011 struct mutex mn_lock; 1011 1012 DECLARE_HASHTABLE(mn_hash, 7);
+14 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 1683 1683 if (!ip_block_version) 1684 1684 return -EINVAL; 1685 1685 1686 + switch (ip_block_version->type) { 1687 + case AMD_IP_BLOCK_TYPE_VCN: 1688 + if (adev->harvest_ip_mask & AMD_HARVEST_IP_VCN_MASK) 1689 + return 0; 1690 + break; 1691 + case AMD_IP_BLOCK_TYPE_JPEG: 1692 + if (adev->harvest_ip_mask & AMD_HARVEST_IP_JPEG_MASK) 1693 + return 0; 1694 + break; 1695 + default: 1696 + break; 1697 + } 1698 + 1686 1699 DRM_INFO("add ip block number %d <%s>\n", adev->num_ip_blocks, 1687 1700 ip_block_version->funcs->name); 1688 1701 ··· 3124 3111 return amdgpu_device_asic_has_dc_support(adev->asic_type); 3125 3112 } 3126 3113 3127 - 3128 3114 static void amdgpu_device_xgmi_reset_func(struct work_struct *__work) 3129 3115 { 3130 3116 struct amdgpu_device *adev = ··· 3288 3276 adev->vm_manager.vm_pte_funcs = NULL; 3289 3277 adev->vm_manager.vm_pte_num_scheds = 0; 3290 3278 adev->gmc.gmc_funcs = NULL; 3279 + adev->harvest_ip_mask = 0x0; 3291 3280 adev->fence_context = dma_fence_context_alloc(AMDGPU_MAX_RINGS); 3292 3281 bitmap_zero(adev->gfx.pipe_reserve_bitmap, AMDGPU_MAX_COMPUTE_QUEUES); 3293 3282
+28
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 373 373 return -EINVAL; 374 374 } 375 375 376 + void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev) 377 + { 378 + struct binary_header *bhdr; 379 + struct harvest_table *harvest_info; 380 + int i; 381 + 382 + bhdr = (struct binary_header *)adev->mman.discovery_bin; 383 + harvest_info = (struct harvest_table *)(adev->mman.discovery_bin + 384 + le16_to_cpu(bhdr->table_list[HARVEST_INFO].offset)); 385 + 386 + for (i = 0; i < 32; i++) { 387 + if (le32_to_cpu(harvest_info->list[i].hw_id) == 0) 388 + break; 389 + 390 + switch (le32_to_cpu(harvest_info->list[i].hw_id)) { 391 + case VCN_HWID: 392 + adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK; 393 + adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK; 394 + break; 395 + case DMU_HWID: 396 + adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK; 397 + break; 398 + default: 399 + break; 400 + } 401 + } 402 + } 403 + 376 404 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev) 377 405 { 378 406 struct binary_header *bhdr;
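amdgpu_discovery_harvest_ip() above walks the harvest table in the IP-discovery binary and folds each harvested block into adev->harvest_ip_mask, which amdgpu_device_ip_block_add() then consults to silently skip VCN/JPEG registration. A standalone sketch of the same fold; the HW IDs and mask bits below are placeholders, not the real amdgpu values:

#include <stdint.h>

#define HW_ID_VCN          0x0b         /* placeholder, not the real ID */
#define HW_ID_DMU          0x0c         /* placeholder, not the real ID */

#define HARVEST_VCN_MASK   (1u << 0)
#define HARVEST_JPEG_MASK  (1u << 1)
#define HARVEST_DMU_MASK   (1u << 2)

static uint32_t build_harvest_mask(const uint32_t *hw_ids, int n)
{
        uint32_t mask = 0;
        int i;

        for (i = 0; i < n && hw_ids[i] != 0; i++) {     /* 0 terminates the table */
                switch (hw_ids[i]) {
                case HW_ID_VCN:
                        /* harvesting VCN implies JPEG is gone too */
                        mask |= HARVEST_VCN_MASK | HARVEST_JPEG_MASK;
                        break;
                case HW_ID_DMU:
                        mask |= HARVEST_DMU_MASK;
                        break;
                }
        }
        return mask;
}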
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.h
··· 29 29 30 30 void amdgpu_discovery_fini(struct amdgpu_device *adev); 31 31 int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev); 32 + void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev); 32 33 int amdgpu_discovery_get_ip_version(struct amdgpu_device *adev, int hw_id, 33 34 int *major, int *minor, int *revision); 34 35 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev);
+23 -15
drivers/gpu/drm/amd/amdgpu/nv.c
··· 623 623 .funcs = &nv_common_ip_funcs, 624 624 }; 625 625 626 + static bool nv_is_headless_sku(struct pci_dev *pdev) 627 + { 628 + if ((pdev->device == 0x731E && 629 + (pdev->revision == 0xC6 || pdev->revision == 0xC7)) || 630 + (pdev->device == 0x7340 && pdev->revision == 0xC9) || 631 + (pdev->device == 0x7360 && pdev->revision == 0xC7)) 632 + return true; 633 + return false; 634 + } 635 + 626 636 static int nv_reg_base_init(struct amdgpu_device *adev) 627 637 { 628 638 int r; ··· 643 633 DRM_WARN("failed to init reg base from ip discovery table, " 644 634 "fallback to legacy init method\n"); 645 635 goto legacy_init; 636 + } 637 + 638 + amdgpu_discovery_harvest_ip(adev); 639 + if (nv_is_headless_sku(adev->pdev)) { 640 + adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK; 641 + adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK; 646 642 } 647 643 648 644 return 0; ··· 685 669 void nv_set_virt_ops(struct amdgpu_device *adev) 686 670 { 687 671 adev->virt.ops = &xgpu_nv_virt_ops; 688 - } 689 - 690 - static bool nv_is_headless_sku(struct pci_dev *pdev) 691 - { 692 - if ((pdev->device == 0x731E && 693 - (pdev->revision == 0xC6 || pdev->revision == 0xC7)) || 694 - (pdev->device == 0x7340 && pdev->revision == 0xC9) || 695 - (pdev->device == 0x7360 && pdev->revision == 0xC7)) 696 - return true; 697 - return false; 698 672 } 699 673 700 674 int nv_set_ip_blocks(struct amdgpu_device *adev) ··· 734 728 if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT && 735 729 !amdgpu_sriov_vf(adev)) 736 730 amdgpu_device_ip_block_add(adev, &smu_v11_0_ip_block); 737 - if (!nv_is_headless_sku(adev->pdev)) 738 - amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block); 731 + amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block); 739 732 amdgpu_device_ip_block_add(adev, &jpeg_v2_0_ip_block); 740 733 if (adev->enable_mes) 741 734 amdgpu_device_ip_block_add(adev, &mes_v10_1_ip_block); ··· 757 752 if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT && 758 753 !amdgpu_sriov_vf(adev)) 759 754 amdgpu_device_ip_block_add(adev, &smu_v11_0_ip_block); 760 - if (!nv_is_headless_sku(adev->pdev)) 761 - amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block); 755 + amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block); 762 756 if (!amdgpu_sriov_vf(adev)) 763 757 amdgpu_device_ip_block_add(adev, &jpeg_v2_0_ip_block); 764 758 break; ··· 781 777 amdgpu_device_ip_block_add(adev, &vcn_v3_0_ip_block); 782 778 if (!amdgpu_sriov_vf(adev)) 783 779 amdgpu_device_ip_block_add(adev, &jpeg_v3_0_ip_block); 784 - 785 780 if (adev->enable_mes) 786 781 amdgpu_device_ip_block_add(adev, &mes_v10_1_ip_block); 787 782 break; ··· 1151 1148 /* FIXME: not supported yet */ 1152 1149 return -EINVAL; 1153 1150 } 1151 + 1152 + if (adev->harvest_ip_mask & AMD_HARVEST_IP_VCN_MASK) 1153 + adev->pg_flags &= ~(AMD_PG_SUPPORT_VCN | 1154 + AMD_PG_SUPPORT_VCN_DPG | 1155 + AMD_PG_SUPPORT_JPEG); 1154 1156 1155 1157 if (amdgpu_sriov_vf(adev)) { 1156 1158 amdgpu_virt_init_setting(adev);
+2 -1
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 1401 1401 AMD_CG_SUPPORT_MC_MGCG | 1402 1402 AMD_CG_SUPPORT_MC_LS | 1403 1403 AMD_CG_SUPPORT_SDMA_MGCG | 1404 - AMD_CG_SUPPORT_SDMA_LS; 1404 + AMD_CG_SUPPORT_SDMA_LS | 1405 + AMD_CG_SUPPORT_VCN_MGCG; 1405 1406 1406 1407 adev->pg_flags = AMD_PG_SUPPORT_SDMA | 1407 1408 AMD_PG_SUPPORT_MMHUB |
+9 -4
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
··· 1119 1119 UVD_LMI_STATUS__WRITE_CLEAN_RAW_MASK; 1120 1120 SOC15_WAIT_ON_RREG(UVD, 0, mmUVD_LMI_STATUS, tmp, tmp); 1121 1121 1122 - /* put VCPU into reset */ 1123 - WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 1124 - UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK, 1125 - ~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK); 1122 + /* stall UMC channel */ 1123 + WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2), 1124 + UVD_LMI_CTRL2__STALL_ARB_UMC_MASK, 1125 + ~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK); 1126 1126 1127 1127 tmp = UVD_LMI_STATUS__UMC_READ_CLEAN_RAW_MASK | 1128 1128 UVD_LMI_STATUS__UMC_WRITE_CLEAN_RAW_MASK; ··· 1140 1140 WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 1141 1141 UVD_SOFT_RESET__LMI_SOFT_RESET_MASK, 1142 1142 ~UVD_SOFT_RESET__LMI_SOFT_RESET_MASK); 1143 + 1144 + /* put VCPU into reset */ 1145 + WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 1146 + UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK, 1147 + ~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK); 1143 1148 1144 1149 WREG32_SOC15(UVD, 0, mmUVD_STATUS, 0); 1145 1150
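The reordering is the point of this hunk: the old sequence put the VCPU into reset while the UMC interface could still have traffic in flight. The stop path now quiesces memory before touching any reset bit, roughly:

    /* 1. stall the UMC arbiter so no new memory requests are issued */
    /* 2. poll UVD_LMI_STATUS until UMC reads and writes are clean   */
    /* 3. soft-reset the LMI UMC, then the LMI                       */
    /* 4. only then soft-reset the VCPU and clear UVD_STATUS         */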
+1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
··· 650 650 651 651 /* File created at /sys/class/drm/card0/device/hdcp_srm*/ 652 652 hdcp_work[0].attr = data_attr; 653 + sysfs_bin_attr_init(&hdcp_work[0].attr); 653 654 654 655 if (sysfs_create_bin_file(&adev->dev->kobj, &hdcp_work[0].attr)) 655 656 DRM_WARN("Failed to create device file hdcp_srm");
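sysfs_bin_attr_init() exists for lockdep: every sysfs attribute needs a lockdep class key, and an attribute living outside static storage (as this one does) must have one assigned explicitly, or CONFIG_DEBUG_LOCK_ALLOC kernels splat with a "key not in .data" warning at registration. The general shape, as a sketch with made-up names:

    struct bin_attribute battr = { };	/* not static, so no compile-time key */

    sysfs_bin_attr_init(&battr);		/* must happen before registration */
    battr.attr.name = "example_blob";
    battr.attr.mode = 0664;
    battr.size = blob_size;
    if (sysfs_create_bin_file(kobj, &battr))
    	pr_warn("failed to create example_blob\n");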
+6
drivers/gpu/drm/amd/include/amd_shared.h
··· 216 216 PP_GFX_DCS_MASK = 0x80000, 217 217 }; 218 218 219 + enum amd_harvest_ip_mask { 220 + AMD_HARVEST_IP_VCN_MASK = 0x1, 221 + AMD_HARVEST_IP_JPEG_MASK = 0x2, 222 + AMD_HARVEST_IP_DMU_MASK = 0x4, 223 + }; 224 + 219 225 enum DC_FEATURE_MASK { 220 226 DC_FBC_MASK = 0x1, 221 227 DC_MULTI_MON_PP_MCLK_SWITCH_MASK = 0x2,
+88 -86
drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
··· 4817 4817 u32 reg; 4818 4818 int ret; 4819 4819 4820 - table->initialState.levels[0].mclk.vDLL_CNTL = 4820 + table->initialState.level.mclk.vDLL_CNTL = 4821 4821 cpu_to_be32(si_pi->clock_registers.dll_cntl); 4822 - table->initialState.levels[0].mclk.vMCLK_PWRMGT_CNTL = 4822 + table->initialState.level.mclk.vMCLK_PWRMGT_CNTL = 4823 4823 cpu_to_be32(si_pi->clock_registers.mclk_pwrmgt_cntl); 4824 - table->initialState.levels[0].mclk.vMPLL_AD_FUNC_CNTL = 4824 + table->initialState.level.mclk.vMPLL_AD_FUNC_CNTL = 4825 4825 cpu_to_be32(si_pi->clock_registers.mpll_ad_func_cntl); 4826 - table->initialState.levels[0].mclk.vMPLL_DQ_FUNC_CNTL = 4826 + table->initialState.level.mclk.vMPLL_DQ_FUNC_CNTL = 4827 4827 cpu_to_be32(si_pi->clock_registers.mpll_dq_func_cntl); 4828 - table->initialState.levels[0].mclk.vMPLL_FUNC_CNTL = 4828 + table->initialState.level.mclk.vMPLL_FUNC_CNTL = 4829 4829 cpu_to_be32(si_pi->clock_registers.mpll_func_cntl); 4830 - table->initialState.levels[0].mclk.vMPLL_FUNC_CNTL_1 = 4830 + table->initialState.level.mclk.vMPLL_FUNC_CNTL_1 = 4831 4831 cpu_to_be32(si_pi->clock_registers.mpll_func_cntl_1); 4832 - table->initialState.levels[0].mclk.vMPLL_FUNC_CNTL_2 = 4832 + table->initialState.level.mclk.vMPLL_FUNC_CNTL_2 = 4833 4833 cpu_to_be32(si_pi->clock_registers.mpll_func_cntl_2); 4834 - table->initialState.levels[0].mclk.vMPLL_SS = 4834 + table->initialState.level.mclk.vMPLL_SS = 4835 4835 cpu_to_be32(si_pi->clock_registers.mpll_ss1); 4836 - table->initialState.levels[0].mclk.vMPLL_SS2 = 4836 + table->initialState.level.mclk.vMPLL_SS2 = 4837 4837 cpu_to_be32(si_pi->clock_registers.mpll_ss2); 4838 4838 4839 - table->initialState.levels[0].mclk.mclk_value = 4839 + table->initialState.level.mclk.mclk_value = 4840 4840 cpu_to_be32(initial_state->performance_levels[0].mclk); 4841 4841 4842 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL = 4842 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL = 4843 4843 cpu_to_be32(si_pi->clock_registers.cg_spll_func_cntl); 4844 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_2 = 4844 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_2 = 4845 4845 cpu_to_be32(si_pi->clock_registers.cg_spll_func_cntl_2); 4846 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_3 = 4846 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_3 = 4847 4847 cpu_to_be32(si_pi->clock_registers.cg_spll_func_cntl_3); 4848 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_4 = 4848 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_4 = 4849 4849 cpu_to_be32(si_pi->clock_registers.cg_spll_func_cntl_4); 4850 - table->initialState.levels[0].sclk.vCG_SPLL_SPREAD_SPECTRUM = 4850 + table->initialState.level.sclk.vCG_SPLL_SPREAD_SPECTRUM = 4851 4851 cpu_to_be32(si_pi->clock_registers.cg_spll_spread_spectrum); 4852 - table->initialState.levels[0].sclk.vCG_SPLL_SPREAD_SPECTRUM_2 = 4852 + table->initialState.level.sclk.vCG_SPLL_SPREAD_SPECTRUM_2 = 4853 4853 cpu_to_be32(si_pi->clock_registers.cg_spll_spread_spectrum_2); 4854 4854 4855 - table->initialState.levels[0].sclk.sclk_value = 4855 + table->initialState.level.sclk.sclk_value = 4856 4856 cpu_to_be32(initial_state->performance_levels[0].sclk); 4857 4857 4858 - table->initialState.levels[0].arbRefreshState = 4858 + table->initialState.level.arbRefreshState = 4859 4859 SISLANDS_INITIAL_STATE_ARB_INDEX; 4860 4860 4861 - table->initialState.levels[0].ACIndex = 0; 4861 + table->initialState.level.ACIndex = 0; 4862 4862 4863 4863 ret = si_populate_voltage_value(adev, &eg_pi->vddc_voltage_table, 4864 4864 initial_state->performance_levels[0].vddc, 4865 - &table->initialState.levels[0].vddc); 4865 + &table->initialState.level.vddc); 4866 4866 4867 4867 if (!ret) { 4868 4868 u16 std_vddc; 4869 4869 4870 4870 ret = si_get_std_voltage_value(adev, 4871 - &table->initialState.levels[0].vddc, 4871 + &table->initialState.level.vddc, 4872 4872 &std_vddc); 4873 4873 if (!ret) 4874 4874 si_populate_std_voltage_value(adev, std_vddc, 4875 - table->initialState.levels[0].vddc.index, 4876 - &table->initialState.levels[0].std_vddc); 4875 + table->initialState.level.vddc.index, 4876 + &table->initialState.level.std_vddc); 4877 4877 } 4878 4878 4879 4879 if (eg_pi->vddci_control) 4880 4880 si_populate_voltage_value(adev, 4881 4881 &eg_pi->vddci_voltage_table, 4882 4882 initial_state->performance_levels[0].vddci, 4883 - &table->initialState.levels[0].vddci); 4883 + &table->initialState.level.vddci); 4884 4884 4885 4885 if (si_pi->vddc_phase_shed_control) 4886 4886 si_populate_phase_shedding_value(adev,
··· 4888 4888 initial_state->performance_levels[0].vddc, 4889 4889 initial_state->performance_levels[0].sclk, 4890 4890 initial_state->performance_levels[0].mclk, 4891 - &table->initialState.levels[0].vddc); 4891 + &table->initialState.level.vddc); 4892 4892 4893 - si_populate_initial_mvdd_value(adev, &table->initialState.levels[0].mvdd); 4893 + si_populate_initial_mvdd_value(adev, &table->initialState.level.mvdd); 4894 4894 4895 4895 reg = CG_R(0xffff) | CG_L(0); 4896 - table->initialState.levels[0].aT = cpu_to_be32(reg); 4897 - table->initialState.levels[0].bSP = cpu_to_be32(pi->dsp); 4898 - table->initialState.levels[0].gen2PCIE = (u8)si_pi->boot_pcie_gen; 4896 + table->initialState.level.aT = cpu_to_be32(reg); 4897 + table->initialState.level.bSP = cpu_to_be32(pi->dsp); 4898 + table->initialState.level.gen2PCIE = (u8)si_pi->boot_pcie_gen; 4899 4899 4900 4900 if (adev->gmc.vram_type == AMDGPU_VRAM_TYPE_GDDR5) { 4901 - table->initialState.levels[0].strobeMode = 4901 + table->initialState.level.strobeMode = 4902 4902 si_get_strobe_mode_settings(adev, 4903 4903 initial_state->performance_levels[0].mclk); 4904 4904 4905 4905 if (initial_state->performance_levels[0].mclk > pi->mclk_edc_enable_threshold) 4906 - table->initialState.levels[0].mcFlags = SISLANDS_SMC_MC_EDC_RD_FLAG | SISLANDS_SMC_MC_EDC_WR_FLAG; 4906 + table->initialState.level.mcFlags = SISLANDS_SMC_MC_EDC_RD_FLAG | SISLANDS_SMC_MC_EDC_WR_FLAG; 4907 4907 else 4908 - table->initialState.levels[0].mcFlags = 0; 4908 + table->initialState.level.mcFlags = 0; 4909 4909 } 4910 4910 4911 4911 table->initialState.levelCount = 1; 4912 4912 4913 4913 table->initialState.flags |= PPSMC_SWSTATE_FLAG_DC; 4914 4914 4915 - table->initialState.levels[0].dpm2.MaxPS = 0; 4916 - table->initialState.levels[0].dpm2.NearTDPDec = 0; 4917 - table->initialState.levels[0].dpm2.AboveSafeInc = 0; 4918 - table->initialState.levels[0].dpm2.BelowSafeInc = 0; 4919 - table->initialState.levels[0].dpm2.PwrEfficiencyRatio = 0; 4915 + table->initialState.level.dpm2.MaxPS = 0; 4916 + table->initialState.level.dpm2.NearTDPDec = 0; 4917 + table->initialState.level.dpm2.AboveSafeInc = 0; 4918 + table->initialState.level.dpm2.BelowSafeInc = 0; 4919 + table->initialState.level.dpm2.PwrEfficiencyRatio = 0; 4920 4920 4921 4921 reg = MIN_POWER_MASK | MAX_POWER_MASK; 4922 - table->initialState.levels[0].SQPowerThrottle = cpu_to_be32(reg); 4922 + table->initialState.level.SQPowerThrottle = cpu_to_be32(reg); 4923 4923 4924 4924 reg = MAX_POWER_DELTA_MASK | STI_SIZE_MASK | LTI_RATIO_MASK; 4925 - table->initialState.levels[0].SQPowerThrottle_2 = cpu_to_be32(reg); 4925 + table->initialState.level.SQPowerThrottle_2 = cpu_to_be32(reg); 4926 4926 4927 4927 return 0; 4928 4928 }
··· 4953 4953 4954 4954 if (pi->acpi_vddc) { 4955 4955 ret = si_populate_voltage_value(adev, &eg_pi->vddc_voltage_table, 4956 - pi->acpi_vddc, &table->ACPIState.levels[0].vddc); 4956 + pi->acpi_vddc, &table->ACPIState.level.vddc); 4957 4957 if (!ret) { 4958 4958 u16 std_vddc; 4959 4959 4960 4960 ret = si_get_std_voltage_value(adev, 4961 - &table->ACPIState.levels[0].vddc, &std_vddc); 4961 + &table->ACPIState.level.vddc, &std_vddc); 4962 4962 if (!ret) 4963 4963 si_populate_std_voltage_value(adev, std_vddc, 4964 - table->ACPIState.levels[0].vddc.index, 4965 - &table->ACPIState.levels[0].std_vddc); 4964 + table->ACPIState.level.vddc.index, 4965 + &table->ACPIState.level.std_vddc); 4966 4966 } 4967 - table->ACPIState.levels[0].gen2PCIE = si_pi->acpi_pcie_gen; 4967 + table->ACPIState.level.gen2PCIE = si_pi->acpi_pcie_gen; 4968 4968 4969 4969 if (si_pi->vddc_phase_shed_control) { 4970 4970 si_populate_phase_shedding_value(adev,
··· 4972 4972 pi->acpi_vddc, 4973 4973 0, 4974 4974 0, 4975 - &table->ACPIState.levels[0].vddc); 4975 + &table->ACPIState.level.vddc); 4976 4976 } 4977 4977 } else { 4978 4978 ret = si_populate_voltage_value(adev, &eg_pi->vddc_voltage_table, 4979 - pi->min_vddc_in_table, &table->ACPIState.levels[0].vddc); 4979 + pi->min_vddc_in_table, &table->ACPIState.level.vddc); 4980 4980 if (!ret) { 4981 4981 u16 std_vddc; 4982 4982 4983 4983 ret = si_get_std_voltage_value(adev, 4984 - &table->ACPIState.levels[0].vddc, &std_vddc); 4984 + &table->ACPIState.level.vddc, &std_vddc); 4985 4985 4986 4986 if (!ret) 4987 4987 si_populate_std_voltage_value(adev, std_vddc, 4988 - table->ACPIState.levels[0].vddc.index, 4989 - &table->ACPIState.levels[0].std_vddc); 4988 + table->ACPIState.level.vddc.index, 4989 + &table->ACPIState.level.std_vddc); 4990 4990 } 4991 - table->ACPIState.levels[0].gen2PCIE = 4991 + table->ACPIState.level.gen2PCIE = 4992 4992 (u8)amdgpu_get_pcie_gen_support(adev, 4993 4993 si_pi->sys_pcie_mask, 4994 4994 si_pi->boot_pcie_gen,
··· 5000 5000 pi->min_vddc_in_table, 5001 5001 0, 5002 5002 0, 5003 - &table->ACPIState.levels[0].vddc); 5003 + &table->ACPIState.level.vddc); 5004 5004 5005 5005 if (pi->acpi_vddc) { 5006 5006 if (eg_pi->acpi_vddci) 5007 5007 si_populate_voltage_value(adev, &eg_pi->vddci_voltage_table, 5008 5008 eg_pi->acpi_vddci, 5009 - &table->ACPIState.levels[0].vddci); 5009 + &table->ACPIState.level.vddci); 5010 5010 } 5011 5011 5012 5012 5013 5013 mclk_pwrmgt_cntl |= MRDCK0_RESET | MRDCK1_RESET;
··· 5018 5018 spll_func_cntl_2 &= ~SCLK_MUX_SEL_MASK; 5019 5019 spll_func_cntl_2 |= SCLK_MUX_SEL(4); 5020 5020 5021 - table->ACPIState.levels[0].mclk.vDLL_CNTL = 5021 + table->ACPIState.level.mclk.vDLL_CNTL = 5022 5022 cpu_to_be32(dll_cntl); 5023 - table->ACPIState.levels[0].mclk.vMCLK_PWRMGT_CNTL = 5023 + table->ACPIState.level.mclk.vMCLK_PWRMGT_CNTL = 5024 5024 cpu_to_be32(mclk_pwrmgt_cntl); 5025 - table->ACPIState.levels[0].mclk.vMPLL_AD_FUNC_CNTL = 5025 + table->ACPIState.level.mclk.vMPLL_AD_FUNC_CNTL = 5026 5026 cpu_to_be32(mpll_ad_func_cntl); 5027 - table->ACPIState.levels[0].mclk.vMPLL_DQ_FUNC_CNTL = 5027 + table->ACPIState.level.mclk.vMPLL_DQ_FUNC_CNTL = 5028 5028 cpu_to_be32(mpll_dq_func_cntl); 5029 - table->ACPIState.levels[0].mclk.vMPLL_FUNC_CNTL = 5029 + table->ACPIState.level.mclk.vMPLL_FUNC_CNTL = 5030 5030 cpu_to_be32(mpll_func_cntl); 5031 - table->ACPIState.levels[0].mclk.vMPLL_FUNC_CNTL_1 = 5031 + table->ACPIState.level.mclk.vMPLL_FUNC_CNTL_1 = 5032 5032 cpu_to_be32(mpll_func_cntl_1); 5033 - table->ACPIState.levels[0].mclk.vMPLL_FUNC_CNTL_2 = 5033 + table->ACPIState.level.mclk.vMPLL_FUNC_CNTL_2 = 5034 5034 cpu_to_be32(mpll_func_cntl_2); 5035 - table->ACPIState.levels[0].mclk.vMPLL_SS = 5035 + table->ACPIState.level.mclk.vMPLL_SS = 5036 5036 cpu_to_be32(si_pi->clock_registers.mpll_ss1); 5037 - table->ACPIState.levels[0].mclk.vMPLL_SS2 = 5037 + table->ACPIState.level.mclk.vMPLL_SS2 = 5038 5038 cpu_to_be32(si_pi->clock_registers.mpll_ss2); 5039 5039 5040 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL = 5040 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL = 5041 5041 cpu_to_be32(spll_func_cntl); 5042 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_2 = 5042 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_2 = 5043 5043 cpu_to_be32(spll_func_cntl_2); 5044 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_3 = 5044 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_3 = 5045 5045 cpu_to_be32(spll_func_cntl_3); 5046 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_4 = 5046 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_4 = 5047 5047 cpu_to_be32(spll_func_cntl_4); 5048 5048 5049 - table->ACPIState.levels[0].mclk.mclk_value = 0; 5050 - table->ACPIState.levels[0].sclk.sclk_value = 0; 5049 + table->ACPIState.level.mclk.mclk_value = 0; 5050 + table->ACPIState.level.sclk.sclk_value = 0; 5051 5051 5052 - si_populate_mvdd_value(adev, 0, &table->ACPIState.levels[0].mvdd); 5052 + si_populate_mvdd_value(adev, 0, &table->ACPIState.level.mvdd); 5053 5053 5054 5054 if (eg_pi->dynamic_ac_timing) 5055 - table->ACPIState.levels[0].ACIndex = 0; 5055 + table->ACPIState.level.ACIndex = 0; 5056 5056 5057 - table->ACPIState.levels[0].dpm2.MaxPS = 0; 5058 - table->ACPIState.levels[0].dpm2.NearTDPDec = 0; 5059 - table->ACPIState.levels[0].dpm2.AboveSafeInc = 0; 5060 - table->ACPIState.levels[0].dpm2.BelowSafeInc = 0; 5061 - table->ACPIState.levels[0].dpm2.PwrEfficiencyRatio = 0; 5057 + table->ACPIState.level.dpm2.MaxPS = 0; 5058 + table->ACPIState.level.dpm2.NearTDPDec = 0; 5059 + table->ACPIState.level.dpm2.AboveSafeInc = 0; 5060 + table->ACPIState.level.dpm2.BelowSafeInc = 0; 5061 + table->ACPIState.level.dpm2.PwrEfficiencyRatio = 0; 5062 5062 5063 5063 reg = MIN_POWER_MASK | MAX_POWER_MASK; 5064 - table->ACPIState.levels[0].SQPowerThrottle = cpu_to_be32(reg); 5064 + table->ACPIState.level.SQPowerThrottle = cpu_to_be32(reg); 5065 5065 5066 5066 reg = MAX_POWER_DELTA_MASK | STI_SIZE_MASK | LTI_RATIO_MASK; 5067 - table->ACPIState.levels[0].SQPowerThrottle_2 = cpu_to_be32(reg); 5067 + table->ACPIState.level.SQPowerThrottle_2 = cpu_to_be32(reg); 5068 5068 5069 5069 return 0; 5070 5070 } 5071 5071 5072 5072 static int si_populate_ulv_state(struct amdgpu_device *adev, 5073 - SISLANDS_SMC_SWSTATE *state) 5073 + struct SISLANDS_SMC_SWSTATE_SINGLE *state) 5074 5074 { 5075 5075 struct evergreen_power_info *eg_pi = evergreen_get_pi(adev); 5076 5076 struct si_power_info *si_pi = si_get_pi(adev);
··· 5079 5079 int ret; 5080 5080 5081 5081 ret = si_convert_power_level_to_smc(adev, &ulv->pl, 5082 - &state->levels[0]); 5082 + &state->level); 5083 5083 if (!ret) { 5084 5084 if (eg_pi->sclk_deep_sleep) { 5085 5085 if (sclk_in_sr <= SCLK_MIN_DEEPSLEEP_FREQ) 5086 - state->levels[0].stateFlags |= PPSMC_STATEFLAG_DEEPSLEEP_BYPASS; 5086 + state->level.stateFlags |= PPSMC_STATEFLAG_DEEPSLEEP_BYPASS; 5087 5087 else 5088 - state->levels[0].stateFlags |= PPSMC_STATEFLAG_DEEPSLEEP_THROTTLE; 5088 + state->level.stateFlags |= PPSMC_STATEFLAG_DEEPSLEEP_THROTTLE; 5089 5089 } 5090 5090 if (ulv->one_pcie_lane_in_ulv) 5091 5091 state->flags |= PPSMC_SWSTATE_FLAG_PCIE_X1; 5092 - state->levels[0].arbRefreshState = (u8)(SISLANDS_ULV_STATE_ARB_INDEX); 5093 - state->levels[0].ACIndex = 1; 5094 - state->levels[0].std_vddc = state->levels[0].vddc; 5092 + state->level.arbRefreshState = (u8)(SISLANDS_ULV_STATE_ARB_INDEX); 5093 + state->level.ACIndex = 1; 5094 + state->level.std_vddc = state->level.vddc; 5095 5095 state->levelCount = 1; 5096 5096 5097 5097 state->flags |= PPSMC_SWSTATE_FLAG_DC;
··· 5190 5190 if (ret) 5191 5191 return ret; 5192 5192 5193 - table->driverState = table->initialState; 5193 + table->driverState.flags = table->initialState.flags; 5194 + table->driverState.levelCount = table->initialState.levelCount; 5195 + table->driverState.levels[0] = table->initialState.level; 5194 5196 5195 5197 ret = si_do_program_memory_timing_parameters(adev, amdgpu_boot_state, 5196 5198 SISLANDS_INITIAL_STATE_ARB_INDEX);
··· 5739 5737 if (ulv->supported && ulv->pl.vddc) { 5740 5738 u32 address = si_pi->state_table_start + 5741 5739 offsetof(SISLANDS_SMC_STATETABLE, ULVState); 5742 - SISLANDS_SMC_SWSTATE *smc_state = &si_pi->smc_statetable.ULVState; 5743 - u32 state_size = sizeof(SISLANDS_SMC_SWSTATE); 5740 + struct SISLANDS_SMC_SWSTATE_SINGLE *smc_state = &si_pi->smc_statetable.ULVState; 5741 + u32 state_size = sizeof(struct SISLANDS_SMC_SWSTATE_SINGLE); 5744 5742 5745 5743 memset(smc_state, 0, state_size); 5746 5744
+21 -13
drivers/gpu/drm/amd/pm/powerplay/sislands_smc.h
··· 191 191 192 192 typedef struct SISLANDS_SMC_SWSTATE SISLANDS_SMC_SWSTATE; 193 193 194 + struct SISLANDS_SMC_SWSTATE_SINGLE { 195 + uint8_t flags; 196 + uint8_t levelCount; 197 + uint8_t padding2; 198 + uint8_t padding3; 199 + SISLANDS_SMC_HW_PERFORMANCE_LEVEL level; 200 + }; 201 + 194 202 #define SISLANDS_SMC_VOLTAGEMASK_VDDC 0 195 203 #define SISLANDS_SMC_VOLTAGEMASK_MVDD 1 196 204 #define SISLANDS_SMC_VOLTAGEMASK_VDDCI 2 ··· 216 208 217 209 struct SISLANDS_SMC_STATETABLE 218 210 { 219 - uint8_t thermalProtectType; 220 - uint8_t systemFlags; 221 - uint8_t maxVDDCIndexInPPTable; 222 - uint8_t extraFlags; 223 - uint32_t lowSMIO[SISLANDS_MAX_NO_VREG_STEPS]; 224 - SISLANDS_SMC_VOLTAGEMASKTABLE voltageMaskTable; 225 - SISLANDS_SMC_VOLTAGEMASKTABLE phaseMaskTable; 226 - PP_SIslands_DPM2Parameters dpm2Params; 227 - SISLANDS_SMC_SWSTATE initialState; 228 - SISLANDS_SMC_SWSTATE ACPIState; 229 - SISLANDS_SMC_SWSTATE ULVState; 230 - SISLANDS_SMC_SWSTATE driverState; 231 - SISLANDS_SMC_HW_PERFORMANCE_LEVEL dpmLevels[SISLANDS_MAX_SMC_PERFORMANCE_LEVELS_PER_SWSTATE - 1]; 211 + uint8_t thermalProtectType; 212 + uint8_t systemFlags; 213 + uint8_t maxVDDCIndexInPPTable; 214 + uint8_t extraFlags; 215 + uint32_t lowSMIO[SISLANDS_MAX_NO_VREG_STEPS]; 216 + SISLANDS_SMC_VOLTAGEMASKTABLE voltageMaskTable; 217 + SISLANDS_SMC_VOLTAGEMASKTABLE phaseMaskTable; 218 + PP_SIslands_DPM2Parameters dpm2Params; 219 + struct SISLANDS_SMC_SWSTATE_SINGLE initialState; 220 + struct SISLANDS_SMC_SWSTATE_SINGLE ACPIState; 221 + struct SISLANDS_SMC_SWSTATE_SINGLE ULVState; 222 + SISLANDS_SMC_SWSTATE driverState; 223 + SISLANDS_SMC_HW_PERFORMANCE_LEVEL dpmLevels[SISLANDS_MAX_SMC_PERFORMANCE_LEVELS_PER_SWSTATE]; 232 224 }; 233 225 234 226 typedef struct SISLANDS_SMC_STATETABLE SISLANDS_SMC_STATETABLE;
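The motivation, as far as these hunks show it (the same change is mirrored for radeon in nislands_smc.h below): SISLANDS_SMC_SWSTATE appears to end in a variable-length levels[] array (its definition sits outside this hunk), and driverState's levels were historically allowed to spill into the dpmLevels[] member that followed it, which is why dpmLevels was declared one element short. States that only ever carry a single level now get a fixed-size struct, preserving the firmware-visible layout while dpmLevels regains its full size and array-bounds checking no longer sees out-of-bounds stores:

    /* old layout (conceptual):
     *   SISLANDS_SMC_SWSTATE driverState;      // flexible levels[]
     *   ..._LEVEL dpmLevels[MAX - 1];          // driverState spilled in here
     * new layout:
     *   SISLANDS_SMC_SWSTATE driverState;
     *   ..._LEVEL dpmLevels[MAX];              // full size, no aliasing
     */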
+6 -55
drivers/gpu/drm/i915/display/intel_dp.c
··· 1095 1095 return -EINVAL; 1096 1096 } 1097 1097 1098 - /* Optimize link config in order: max bpp, min lanes, min clock */ 1099 - static int 1100 - intel_dp_compute_link_config_fast(struct intel_dp *intel_dp, 1101 - struct intel_crtc_state *pipe_config, 1102 - const struct link_config_limits *limits) 1103 - { 1104 - const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode; 1105 - int bpp, clock, lane_count; 1106 - int mode_rate, link_clock, link_avail; 1107 - 1108 - for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) { 1109 - int output_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp); 1110 - 1111 - mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock, 1112 - output_bpp); 1113 - 1114 - for (lane_count = limits->min_lane_count; 1115 - lane_count <= limits->max_lane_count; 1116 - lane_count <<= 1) { 1117 - for (clock = limits->min_clock; clock <= limits->max_clock; clock++) { 1118 - link_clock = intel_dp->common_rates[clock]; 1119 - link_avail = intel_dp_max_data_rate(link_clock, 1120 - lane_count); 1121 - 1122 - if (mode_rate <= link_avail) { 1123 - pipe_config->lane_count = lane_count; 1124 - pipe_config->pipe_bpp = bpp; 1125 - pipe_config->port_clock = link_clock; 1126 - 1127 - return 0; 1128 - } 1129 - } 1130 - } 1131 - } 1132 - 1133 - return -EINVAL; 1134 - } 1135 - 1136 1098 static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc) 1137 1099 { 1138 1100 int i, num_bpc; ··· 1344 1382 intel_dp_can_bigjoiner(intel_dp)) 1345 1383 pipe_config->bigjoiner = true; 1346 1384 1347 - if (intel_dp_is_edp(intel_dp)) 1348 - /* 1349 - * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4 1350 - * section A.1: "It is recommended that the minimum number of 1351 - * lanes be used, using the minimum link rate allowed for that 1352 - * lane configuration." 1353 - * 1354 - * Note that we fall back to the max clock and lane count for eDP 1355 - * panels that fail with the fast optimal settings (see 1356 - * intel_dp->use_max_params), in which case the fast vs. wide 1357 - * choice doesn't matter. 1358 - */ 1359 - ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config, &limits); 1360 - else 1361 - /* Optimize for slow and wide. */ 1362 - ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits); 1385 + /* 1386 + * Optimize for slow and wide for everything, because some 1387 + * eDP 1.3 and 1.4 panels don't work well with fast and narrow. 1388 + */ 1389 + ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits); 1363 1390 1364 1391 /* enable compression if the mode doesn't fit available BW */ 1365 1392 drm_dbg_kms(&i915->drm, "Force DSC en = %d\n", intel_dp->force_dsc_en); ··· 2111 2160 * -PCON supports SRC_CTL_MODE (VESA DP2.0-HDMI2.1 PCON Spec Draft-1 Sec-7) 2112 2161 * -sink is HDMI2.1 2113 2162 */ 2114 - if (!(intel_dp->dpcd[2] & DP_PCON_SOURCE_CTL_MODE) || 2163 + if (!(intel_dp->downstream_ports[2] & DP_PCON_SOURCE_CTL_MODE) || 2115 2164 !intel_dp_is_hdmi_2_1_sink(intel_dp) || 2116 2165 intel_dp->frl.is_trained) 2117 2166 return;
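For scale, the budget the remaining _wide helper evaluates is mode_rate = pixel_clock x bpp against link_avail = link_rate x lane_count x 8/10 (8b/10b symbol coding). Rough numbers: a 4K60 mode at about 533 MHz and 24 bpp needs roughly 12.8 Gbit/s; HBR2 over four lanes carries 4 x 5.4 x 0.8 = 17.28 Gbit/s, while a narrow-and-fast configuration such as two lanes of HBR3 gives 2 x 8.1 x 0.8 = 12.96 Gbit/s. Both can carry the same mode, and the hunk simply standardizes on the wide preference that all panels are known to tolerate.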
+1 -1
drivers/gpu/drm/i915/display/intel_overlay.c
··· 383 383 i830_overlay_clock_gating(dev_priv, true); 384 384 } 385 385 386 - static void 386 + __i915_active_call static void 387 387 intel_overlay_last_flip_retire(struct i915_active *active) 388 388 { 389 389 struct intel_overlay *overlay =
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_mman.c
··· 189 189 struct i915_ggtt_view view; 190 190 191 191 if (i915_gem_object_is_tiled(obj)) 192 - chunk = roundup(chunk, tile_row_pages(obj)); 192 + chunk = roundup(chunk, tile_row_pages(obj) ?: 1); 193 193 194 194 view.type = I915_GGTT_VIEW_PARTIAL; 195 195 view.partial.offset = rounddown(page_offset, chunk);
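The "?: 1" is GCC's conditional-with-omitted-operand extension (a ?: b evaluates a once and yields it if nonzero, else b). It matters here because roundup() divides by its second argument:

    /* roundup(x, y) == ((x + y - 1) / y) * y, so y must be nonzero;
     * tile_row_pages() can return 0 for an unexpected stride, and the
     * "?: 1" fallback then degrades the chunk to page granularity
     * instead of dividing by zero. */
    chunk = roundup(chunk, tile_row_pages(obj) ?: 1);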
-1
drivers/gpu/drm/i915/gt/gen8_ppgtt.c
··· 641 641 642 642 err = pin_pt_dma(vm, pde->pt.base); 643 643 if (err) { 644 - i915_gem_object_put(pde->pt.base); 645 644 free_pd(vm, pde); 646 645 return err; 647 646 }
+2 -2
drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
··· 653 653 * banks of memory are paired and unswizzled on the 654 654 * uneven portion, so leave that as unknown. 655 655 */ 656 - if (intel_uncore_read(uncore, C0DRB3) == 657 - intel_uncore_read(uncore, C1DRB3)) { 656 + if (intel_uncore_read16(uncore, C0DRB3) == 657 + intel_uncore_read16(uncore, C1DRB3)) { 658 658 swizzle_x = I915_BIT_6_SWIZZLE_9_10; 659 659 swizzle_y = I915_BIT_6_SWIZZLE_9; 660 660 }
+2 -1
drivers/gpu/drm/i915/i915_active.c
··· 1156 1156 return 0; 1157 1157 } 1158 1158 1159 - static void auto_retire(struct i915_active *ref) 1159 + __i915_active_call static void 1160 + auto_retire(struct i915_active *ref) 1160 1161 { 1161 1162 i915_active_put(ref); 1162 1163 }
+49 -22
drivers/gpu/drm/i915/i915_mm.c
··· 28 28 29 29 #include "i915_drv.h" 30 30 31 - #define EXPECTED_FLAGS (VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP) 31 + struct remap_pfn { 32 + struct mm_struct *mm; 33 + unsigned long pfn; 34 + pgprot_t prot; 35 + 36 + struct sgt_iter sgt; 37 + resource_size_t iobase; 38 + }; 32 39 33 40 #define use_dma(io) ((io) != -1) 41 + 42 + static inline unsigned long sgt_pfn(const struct remap_pfn *r) 43 + { 44 + if (use_dma(r->iobase)) 45 + return (r->sgt.dma + r->sgt.curr + r->iobase) >> PAGE_SHIFT; 46 + else 47 + return r->sgt.pfn + (r->sgt.curr >> PAGE_SHIFT); 48 + } 49 + 50 + static int remap_sg(pte_t *pte, unsigned long addr, void *data) 51 + { 52 + struct remap_pfn *r = data; 53 + 54 + if (GEM_WARN_ON(!r->sgt.sgp)) 55 + return -EINVAL; 56 + 57 + /* Special PTEs are not associated with any struct page */ 58 + set_pte_at(r->mm, addr, pte, 59 + pte_mkspecial(pfn_pte(sgt_pfn(r), r->prot))); 60 + r->pfn++; /* track insertions in case we need to unwind later */ 61 + 62 + r->sgt.curr += PAGE_SIZE; 63 + if (r->sgt.curr >= r->sgt.max) 64 + r->sgt = __sgt_iter(__sg_next(r->sgt.sgp), use_dma(r->iobase)); 65 + 66 + return 0; 67 + } 68 + 69 + #define EXPECTED_FLAGS (VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP) 34 70 35 71 /** 36 72 * remap_io_sg - remap an IO mapping to userspace ··· 82 46 unsigned long addr, unsigned long size, 83 47 struct scatterlist *sgl, resource_size_t iobase) 84 48 { 85 - unsigned long pfn, len, remapped = 0; 49 + struct remap_pfn r = { 50 + .mm = vma->vm_mm, 51 + .prot = vma->vm_page_prot, 52 + .sgt = __sgt_iter(sgl, use_dma(iobase)), 53 + .iobase = iobase, 54 + }; 86 55 int err; 87 56 88 57 /* We rely on prevalidation of the io-mapping to skip track_pfn(). */ ··· 96 55 if (!use_dma(iobase)) 97 56 flush_cache_range(vma, addr, size); 98 57 99 - do { 100 - if (use_dma(iobase)) { 101 - if (!sg_dma_len(sgl)) 102 - break; 103 - pfn = (sg_dma_address(sgl) + iobase) >> PAGE_SHIFT; 104 - len = sg_dma_len(sgl); 105 - } else { 106 - pfn = page_to_pfn(sg_page(sgl)); 107 - len = sgl->length; 108 - } 58 + err = apply_to_page_range(r.mm, addr, size, remap_sg, &r); 59 + if (unlikely(err)) { 60 + zap_vma_ptes(vma, addr, r.pfn << PAGE_SHIFT); 61 + return err; 62 + } 109 63 110 - err = remap_pfn_range(vma, addr + remapped, pfn, len, 111 - vma->vm_page_prot); 112 - if (err) 113 - break; 114 - remapped += len; 115 - } while ((sgl = __sg_next(sgl))); 116 - 117 - if (err) 118 - zap_vma_ptes(vma, addr, remapped); 119 - return err; 64 + return 0; 65 + }
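apply_to_page_range() allocates any missing page-table levels across [addr, addr + size) and invokes the callback once per PTE; that is what lets remap_sg() advance through the scatterlist and the page range in lockstep, and lets the caller unwind exactly r.pfn pages with zap_vma_ptes() if the walk fails midway. The callback contract, as a self-contained sketch (mark_counted is a made-up name):

    static int mark_counted(pte_t *pte, unsigned long addr, void *data)
    {
    	unsigned long *count = data;

    	(*count)++;	/* invoked for every page in the range */
    	return 0;	/* any nonzero value aborts the walk and is returned */
    }

    	unsigned long pages = 0;
    	int err = apply_to_page_range(vma->vm_mm, start, len,
    				      mark_counted, &pages);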
+5 -4
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 1153 1153 { 1154 1154 struct device_node *phandle; 1155 1155 1156 - a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem", "gpu_cx"); 1157 - if (IS_ERR(a6xx_gpu->llc_mmio)) 1158 - return; 1159 - 1160 1156 /* 1161 1157 * There is a different programming path for targets with an mmu500 1162 1158 * attached, so detect if that is the case ··· 1161 1165 a6xx_gpu->have_mmu500 = (phandle && 1162 1166 of_device_is_compatible(phandle, "arm,mmu-500")); 1163 1167 of_node_put(phandle); 1168 + 1169 + if (a6xx_gpu->have_mmu500) 1170 + a6xx_gpu->llc_mmio = NULL; 1171 + else 1172 + a6xx_gpu->llc_mmio = msm_ioremap(pdev, "cx_mem", "gpu_cx"); 1164 1173 1165 1174 a6xx_gpu->llc_slice = llcc_slice_getd(LLCC_GPU); 1166 1175 a6xx_gpu->htw_llc_slice = llcc_slice_getd(LLCC_GPUHTW);
+1
drivers/gpu/drm/msm/dp/dp_audio.c
··· 527 527 dp_audio_setup_acr(audio); 528 528 dp_audio_safe_to_exit_level(audio); 529 529 dp_audio_enable(audio, true); 530 + dp_display_signal_audio_start(dp_display); 530 531 dp_display->audio_enabled = true; 531 532 532 533 end:
+17 -9
drivers/gpu/drm/msm/dp/dp_display.c
··· 178 178 return 0; 179 179 } 180 180 181 + void dp_display_signal_audio_start(struct msm_dp *dp_display) 182 + { 183 + struct dp_display_private *dp; 184 + 185 + dp = container_of(dp_display, struct dp_display_private, dp_display); 186 + 187 + reinit_completion(&dp->audio_comp); 188 + } 189 + 181 190 void dp_display_signal_audio_complete(struct msm_dp *dp_display) 182 191 { 183 192 struct dp_display_private *dp; ··· 595 586 mutex_lock(&dp->event_mutex); 596 587 597 588 state = dp->hpd_state; 598 - if (state == ST_CONNECT_PENDING) { 599 - dp_display_enable(dp, 0); 589 + if (state == ST_CONNECT_PENDING) 600 590 dp->hpd_state = ST_CONNECTED; 601 - } 602 591 603 592 mutex_unlock(&dp->event_mutex); 604 593 ··· 658 651 dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND); 659 652 660 653 /* signal the disconnect event early to ensure proper teardown */ 661 - reinit_completion(&dp->audio_comp); 662 654 dp_display_handle_plugged_change(g_dp_display, false); 663 655 664 656 dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK | ··· 675 669 mutex_lock(&dp->event_mutex); 676 670 677 671 state = dp->hpd_state; 678 - if (state == ST_DISCONNECT_PENDING) { 679 - dp_display_disable(dp, 0); 672 + if (state == ST_DISCONNECT_PENDING) 680 673 dp->hpd_state = ST_DISCONNECTED; 681 - } 682 674 683 675 mutex_unlock(&dp->event_mutex); 684 676 ··· 902 898 /* wait only if audio was enabled */ 903 899 if (dp_display->audio_enabled) { 904 900 /* signal the disconnect event */ 905 - reinit_completion(&dp->audio_comp); 906 901 dp_display_handle_plugged_change(dp_display, false); 907 902 if (!wait_for_completion_timeout(&dp->audio_comp, 908 903 HZ * 5)) ··· 1275 1272 1276 1273 status = dp_catalog_link_is_connected(dp->catalog); 1277 1274 1278 - if (status) 1275 + /* 1276 + * cannot declare the display connected unless 1277 + * the HDMI cable is plugged in and the sink_count 1278 + * of the dongle becomes 1 1279 + */ 1280 + if (status && dp->link->sink_count) 1279 1281 dp->dp_display.is_connected = true; 1280 1282 else 1281 1283 dp->dp_display.is_connected = false;
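The audio_comp rework narrows a race: the completion is now re-armed when audio actually starts (via the new dp_display_signal_audio_start()) rather than immediately before each wait, so a teardown path can no longer re-initialize the completion underneath a concurrent waiter. The resulting lifecycle, condensed from these hunks (the error branch of the timeout is not shown in the diff and is illustrative):

    /* dp_audio.c, stream enable */
    dp_display_signal_audio_start(dp_display);	/* reinit_completion() */

    /* dp_display.c, teardown while dp_display->audio_enabled */
    dp_display_handle_plugged_change(dp_display, false);
    if (!wait_for_completion_timeout(&dp->audio_comp, HZ * 5))
    	pr_err("audio teardown timed out\n");	/* shutdown never signaled */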
+1
drivers/gpu/drm/msm/dp/dp_display.h
··· 34 34 int dp_display_request_irq(struct msm_dp *dp_display); 35 35 bool dp_display_check_video_test(struct msm_dp *dp_display); 36 36 int dp_display_get_test_bpp(struct msm_dp *dp_display); 37 + void dp_display_signal_audio_start(struct msm_dp *dp_display); 37 38 void dp_display_signal_audio_complete(struct msm_dp *dp_display); 38 39 39 40 #endif /* _DP_DISPLAY_H_ */
+1 -1
drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
··· 843 843 if (pixel_clk_provider) 844 844 *pixel_clk_provider = phy->provided_clocks->hws[DSI_PIXEL_PLL_CLK]->clk; 845 845 846 - return -EINVAL; 846 + return 0; 847 847 } 848 848 849 849 void msm_dsi_phy_pll_save_state(struct msm_dsi_phy *phy)
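A classic fallthrough bug: the function filled in its output clock pointers and then unconditionally returned -EINVAL, so every caller treated a successful lookup as a failure. In shape:

    if (pixel_clk_provider)
    	*pixel_clk_provider = phy->provided_clocks->hws[DSI_PIXEL_PLL_CLK]->clk;

    return 0;	/* was: return -EINVAL - success misreported as error */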
+4
drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
··· 405 405 if (!vco_name) 406 406 return -ENOMEM; 407 407 408 + parent_name = devm_kzalloc(dev, 32, GFP_KERNEL); 409 + if (!parent_name) 410 + return -ENOMEM; 411 + 408 412 clk_name = devm_kzalloc(dev, 32, GFP_KERNEL); 409 413 if (!clk_name) 410 414 return -ENOMEM;
+1 -1
drivers/gpu/drm/msm/msm_drv.c
··· 42 42 * - 1.7.0 - Add MSM_PARAM_SUSPENDS to access suspend count 43 43 */ 44 44 #define MSM_VERSION_MAJOR 1 45 - #define MSM_VERSION_MINOR 6 45 + #define MSM_VERSION_MINOR 7 46 46 #define MSM_VERSION_PATCHLEVEL 0 47 47 48 48 static const struct drm_mode_config_funcs mode_config_funcs = {
+15 -1
drivers/gpu/drm/msm/msm_gem.c
··· 190 190 } 191 191 192 192 p = get_pages(obj); 193 + 194 + if (!IS_ERR(p)) { 195 + msm_obj->pin_count++; 196 + update_inactive(msm_obj); 197 + } 198 + 193 199 msm_gem_unlock(obj); 194 200 return p; 195 201 } 196 202 197 203 void msm_gem_put_pages(struct drm_gem_object *obj) 198 204 { 199 - /* when we start tracking the pin count, then do something here */ 205 + struct msm_gem_object *msm_obj = to_msm_bo(obj); 206 + 207 + msm_gem_lock(obj); 208 + msm_obj->pin_count--; 209 + GEM_WARN_ON(msm_obj->pin_count < 0); 210 + update_inactive(msm_obj); 211 + msm_gem_unlock(obj); 200 212 } 201 213 202 214 int msm_gem_mmap_obj(struct drm_gem_object *obj, ··· 658 646 ret = -ENOMEM; 659 647 goto fail; 660 648 } 649 + 650 + update_inactive(msm_obj); 661 651 } 662 652 663 653 return msm_obj->vaddr;
+2 -2
drivers/gpu/drm/msm/msm_gem.h
··· 221 221 /* imported/exported objects are not purgeable: */ 222 222 static inline bool is_unpurgeable(struct msm_gem_object *msm_obj) 223 223 { 224 - return msm_obj->base.dma_buf && msm_obj->base.import_attach; 224 + return msm_obj->base.import_attach || msm_obj->pin_count; 225 225 } 226 226 227 227 static inline bool is_purgeable(struct msm_gem_object *msm_obj) ··· 271 271 272 272 static inline bool is_unevictable(struct msm_gem_object *msm_obj) 273 273 { 274 - return is_unpurgeable(msm_obj) || msm_obj->pin_count || msm_obj->vaddr; 274 + return is_unpurgeable(msm_obj) || msm_obj->vaddr; 275 275 } 276 276 277 277 static inline void mark_evictable(struct msm_gem_object *msm_obj)
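The msm_gem.c and msm_gem.h hunks work as a pair: msm_gem_get_pages() now takes the object lock, increments pin_count, and re-files the object on the appropriate inactive list, while is_unpurgeable() treats any pinned object as off-limits to the shrinker (the old dma_buf && import_attach test also wrongly required an object to be both exported and imported before protecting it). A vmap'ed but unpinned object remains merely unevictable. Callers are expected to bracket access accordingly:

    struct page **p = msm_gem_get_pages(obj);	/* pin_count++ under lock */

    if (!IS_ERR(p)) {
    	/* pages are safe from the shrinker here */
    	msm_gem_put_pages(obj);			/* pin_count-- */
    }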
+73 -71
drivers/gpu/drm/radeon/ni_dpm.c
··· 1687 1687 u32 reg; 1688 1688 int ret; 1689 1689 1690 - table->initialState.levels[0].mclk.vMPLL_AD_FUNC_CNTL = 1690 + table->initialState.level.mclk.vMPLL_AD_FUNC_CNTL = 1691 1691 cpu_to_be32(ni_pi->clock_registers.mpll_ad_func_cntl); 1692 - table->initialState.levels[0].mclk.vMPLL_AD_FUNC_CNTL_2 = 1692 + table->initialState.level.mclk.vMPLL_AD_FUNC_CNTL_2 = 1693 1693 cpu_to_be32(ni_pi->clock_registers.mpll_ad_func_cntl_2); 1694 - table->initialState.levels[0].mclk.vMPLL_DQ_FUNC_CNTL = 1694 + table->initialState.level.mclk.vMPLL_DQ_FUNC_CNTL = 1695 1695 cpu_to_be32(ni_pi->clock_registers.mpll_dq_func_cntl); 1696 - table->initialState.levels[0].mclk.vMPLL_DQ_FUNC_CNTL_2 = 1696 + table->initialState.level.mclk.vMPLL_DQ_FUNC_CNTL_2 = 1697 1697 cpu_to_be32(ni_pi->clock_registers.mpll_dq_func_cntl_2); 1698 - table->initialState.levels[0].mclk.vMCLK_PWRMGT_CNTL = 1698 + table->initialState.level.mclk.vMCLK_PWRMGT_CNTL = 1699 1699 cpu_to_be32(ni_pi->clock_registers.mclk_pwrmgt_cntl); 1700 - table->initialState.levels[0].mclk.vDLL_CNTL = 1700 + table->initialState.level.mclk.vDLL_CNTL = 1701 1701 cpu_to_be32(ni_pi->clock_registers.dll_cntl); 1702 - table->initialState.levels[0].mclk.vMPLL_SS = 1702 + table->initialState.level.mclk.vMPLL_SS = 1703 1703 cpu_to_be32(ni_pi->clock_registers.mpll_ss1); 1704 - table->initialState.levels[0].mclk.vMPLL_SS2 = 1704 + table->initialState.level.mclk.vMPLL_SS2 = 1705 1705 cpu_to_be32(ni_pi->clock_registers.mpll_ss2); 1706 - table->initialState.levels[0].mclk.mclk_value = 1706 + table->initialState.level.mclk.mclk_value = 1707 1707 cpu_to_be32(initial_state->performance_levels[0].mclk); 1708 1708 1709 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL = 1709 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL = 1710 1710 cpu_to_be32(ni_pi->clock_registers.cg_spll_func_cntl); 1711 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_2 = 1711 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_2 = 1712 1712 cpu_to_be32(ni_pi->clock_registers.cg_spll_func_cntl_2); 1713 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_3 = 1713 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_3 = 1714 1714 cpu_to_be32(ni_pi->clock_registers.cg_spll_func_cntl_3); 1715 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_4 = 1715 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_4 = 1716 1716 cpu_to_be32(ni_pi->clock_registers.cg_spll_func_cntl_4); 1717 - table->initialState.levels[0].sclk.vCG_SPLL_SPREAD_SPECTRUM = 1717 + table->initialState.level.sclk.vCG_SPLL_SPREAD_SPECTRUM = 1718 1718 cpu_to_be32(ni_pi->clock_registers.cg_spll_spread_spectrum); 1719 - table->initialState.levels[0].sclk.vCG_SPLL_SPREAD_SPECTRUM_2 = 1719 + table->initialState.level.sclk.vCG_SPLL_SPREAD_SPECTRUM_2 = 1720 1720 cpu_to_be32(ni_pi->clock_registers.cg_spll_spread_spectrum_2); 1721 - table->initialState.levels[0].sclk.sclk_value = 1721 + table->initialState.level.sclk.sclk_value = 1722 1722 cpu_to_be32(initial_state->performance_levels[0].sclk); 1723 - table->initialState.levels[0].arbRefreshState = 1723 + table->initialState.level.arbRefreshState = 1724 1724 NISLANDS_INITIAL_STATE_ARB_INDEX; 1725 1725 1726 - table->initialState.levels[0].ACIndex = 0; 1726 + table->initialState.level.ACIndex = 0; 1727 1727 1728 1728 ret = ni_populate_voltage_value(rdev, &eg_pi->vddc_voltage_table, 1729 1729 initial_state->performance_levels[0].vddc, 1730 - &table->initialState.levels[0].vddc); 1730 + &table->initialState.level.vddc); 1731 1731 if (!ret) { 1732 1732 u16 std_vddc; 1733 1733 1734 1734 ret = ni_get_std_voltage_value(rdev, 1735 - &table->initialState.levels[0].vddc, 1735 + &table->initialState.level.vddc, 1736 1736 &std_vddc); 1737 1737 if (!ret) 1738 1738 ni_populate_std_voltage_value(rdev, std_vddc, 1739 - table->initialState.levels[0].vddc.index, 1740 - &table->initialState.levels[0].std_vddc); 1739 + table->initialState.level.vddc.index, 1740 + &table->initialState.level.std_vddc); 1741 1741 } 1742 1742 1743 1743 if (eg_pi->vddci_control) 1744 1744 ni_populate_voltage_value(rdev, 1745 1745 &eg_pi->vddci_voltage_table, 1746 1746 initial_state->performance_levels[0].vddci, 1747 - &table->initialState.levels[0].vddci); 1747 + &table->initialState.level.vddci); 1748 1748 1749 - ni_populate_initial_mvdd_value(rdev, &table->initialState.levels[0].mvdd); 1749 + ni_populate_initial_mvdd_value(rdev, &table->initialState.level.mvdd); 1750 1750 1751 1751 reg = CG_R(0xffff) | CG_L(0); 1752 - table->initialState.levels[0].aT = cpu_to_be32(reg); 1752 + table->initialState.level.aT = cpu_to_be32(reg); 1753 1753 1754 - table->initialState.levels[0].bSP = cpu_to_be32(pi->dsp); 1754 + table->initialState.level.bSP = cpu_to_be32(pi->dsp); 1755 1755 1756 1756 if (pi->boot_in_gen2) 1757 - table->initialState.levels[0].gen2PCIE = 1; 1757 + table->initialState.level.gen2PCIE = 1; 1758 1758 else 1759 - table->initialState.levels[0].gen2PCIE = 0; 1759 + table->initialState.level.gen2PCIE = 0; 1760 1760 1761 1761 if (pi->mem_gddr5) { 1762 - table->initialState.levels[0].strobeMode = 1762 + table->initialState.level.strobeMode = 1763 1763 cypress_get_strobe_mode_settings(rdev, 1764 1764 initial_state->performance_levels[0].mclk); 1765 1765 1766 1766 if (initial_state->performance_levels[0].mclk > pi->mclk_edc_enable_threshold) 1767 - table->initialState.levels[0].mcFlags = NISLANDS_SMC_MC_EDC_RD_FLAG | NISLANDS_SMC_MC_EDC_WR_FLAG; 1767 + table->initialState.level.mcFlags = NISLANDS_SMC_MC_EDC_RD_FLAG | NISLANDS_SMC_MC_EDC_WR_FLAG; 1768 1768 else 1769 - table->initialState.levels[0].mcFlags = 0; 1769 + table->initialState.level.mcFlags = 0; 1770 1770 1771 1771 } 1772 1772 1773 1773 table->initialState.levelCount = 1; 1774 1774 1775 1775 table->initialState.flags |= PPSMC_SWSTATE_FLAG_DC; 1776 1776 1777 - table->initialState.levels[0].dpm2.MaxPS = 0; 1778 - table->initialState.levels[0].dpm2.NearTDPDec = 0; 1779 - table->initialState.levels[0].dpm2.AboveSafeInc = 0; 1780 - table->initialState.levels[0].dpm2.BelowSafeInc = 0; 1776 + table->initialState.level.dpm2.MaxPS = 0; 1777 + table->initialState.level.dpm2.NearTDPDec = 0; 1778 + table->initialState.level.dpm2.AboveSafeInc = 0; 1779 + table->initialState.level.dpm2.BelowSafeInc = 0; 1780 1780 1781 1781 reg = MIN_POWER_MASK | MAX_POWER_MASK; 1782 - table->initialState.levels[0].SQPowerThrottle = cpu_to_be32(reg); 1782 + table->initialState.level.SQPowerThrottle = cpu_to_be32(reg); 1783 1783 1784 1784 reg = MAX_POWER_DELTA_MASK | STI_SIZE_MASK | LTI_RATIO_MASK; 1785 - table->initialState.levels[0].SQPowerThrottle_2 = cpu_to_be32(reg); 1785 + table->initialState.level.SQPowerThrottle_2 = cpu_to_be32(reg); 1786 1786 1787 1787 return 0; 1788 1788 }
··· 1813 1813 if (pi->acpi_vddc) { 1814 1814 ret = ni_populate_voltage_value(rdev, 1815 1815 &eg_pi->vddc_voltage_table, 1816 - pi->acpi_vddc, &table->ACPIState.levels[0].vddc); 1816 + pi->acpi_vddc, &table->ACPIState.level.vddc); 1817 1817 if (!ret) { 1818 1818 u16 std_vddc; 1819 1819 1820 1820 ret = ni_get_std_voltage_value(rdev, 1821 - &table->ACPIState.levels[0].vddc, &std_vddc); 1821 + &table->ACPIState.level.vddc, &std_vddc); 1822 1822 if (!ret) 1823 1823 ni_populate_std_voltage_value(rdev, std_vddc, 1824 - table->ACPIState.levels[0].vddc.index, 1825 - &table->ACPIState.levels[0].std_vddc); 1824 + table->ACPIState.level.vddc.index, 1825 + &table->ACPIState.level.std_vddc); 1826 1826 } 1827 1827 1828 1828 if (pi->pcie_gen2) { 1829 1829 if (pi->acpi_pcie_gen2) 1830 - table->ACPIState.levels[0].gen2PCIE = 1; 1830 + table->ACPIState.level.gen2PCIE = 1; 1831 1831 else 1832 - table->ACPIState.levels[0].gen2PCIE = 0; 1832 + table->ACPIState.level.gen2PCIE = 0; 1833 1833 } else { 1834 - table->ACPIState.levels[0].gen2PCIE = 0; 1834 + table->ACPIState.level.gen2PCIE = 0; 1835 1835 } 1836 1836 } else { 1837 1837 ret = ni_populate_voltage_value(rdev, 1838 1838 &eg_pi->vddc_voltage_table, 1839 1839 pi->min_vddc_in_table, 1840 - &table->ACPIState.levels[0].vddc); 1840 + &table->ACPIState.level.vddc); 1841 1841 if (!ret) { 1842 1842 u16 std_vddc; 1843 1843 1844 1844 ret = ni_get_std_voltage_value(rdev, 1845 - &table->ACPIState.levels[0].vddc, 1845 + &table->ACPIState.level.vddc, 1846 1846 &std_vddc); 1847 1847 if (!ret) 1848 1848 ni_populate_std_voltage_value(rdev, std_vddc, 1849 - table->ACPIState.levels[0].vddc.index, 1850 - &table->ACPIState.levels[0].std_vddc); 1849 + table->ACPIState.level.vddc.index, 1850 + &table->ACPIState.level.std_vddc); 1851 1851 } 1852 - table->ACPIState.levels[0].gen2PCIE = 0; 1852 + table->ACPIState.level.gen2PCIE = 0; 1853 1853 } 1854 1854 1855 1855 if (eg_pi->acpi_vddci) {
··· 1857 1857 ni_populate_voltage_value(rdev, 1858 1858 &eg_pi->vddci_voltage_table, 1859 1859 eg_pi->acpi_vddci, 1860 - &table->ACPIState.levels[0].vddci); 1860 + &table->ACPIState.level.vddci); 1861 1861 } 1862 1862 1863 1863
··· 1900 1900 spll_func_cntl_2 &= ~SCLK_MUX_SEL_MASK; 1901 1901 spll_func_cntl_2 |= SCLK_MUX_SEL(4); 1902 1902 1903 - table->ACPIState.levels[0].mclk.vMPLL_AD_FUNC_CNTL = cpu_to_be32(mpll_ad_func_cntl); 1904 - table->ACPIState.levels[0].mclk.vMPLL_AD_FUNC_CNTL_2 = cpu_to_be32(mpll_ad_func_cntl_2); 1905 - table->ACPIState.levels[0].mclk.vMPLL_DQ_FUNC_CNTL = cpu_to_be32(mpll_dq_func_cntl); 1906 - table->ACPIState.levels[0].mclk.vMPLL_DQ_FUNC_CNTL_2 = cpu_to_be32(mpll_dq_func_cntl_2); 1907 - table->ACPIState.levels[0].mclk.vMCLK_PWRMGT_CNTL = cpu_to_be32(mclk_pwrmgt_cntl); 1908 - table->ACPIState.levels[0].mclk.vDLL_CNTL = cpu_to_be32(dll_cntl); 1903 + table->ACPIState.level.mclk.vMPLL_AD_FUNC_CNTL = cpu_to_be32(mpll_ad_func_cntl); 1904 + table->ACPIState.level.mclk.vMPLL_AD_FUNC_CNTL_2 = cpu_to_be32(mpll_ad_func_cntl_2); 1905 + table->ACPIState.level.mclk.vMPLL_DQ_FUNC_CNTL = cpu_to_be32(mpll_dq_func_cntl); 1906 + table->ACPIState.level.mclk.vMPLL_DQ_FUNC_CNTL_2 = cpu_to_be32(mpll_dq_func_cntl_2); 1907 + table->ACPIState.level.mclk.vMCLK_PWRMGT_CNTL = cpu_to_be32(mclk_pwrmgt_cntl); 1908 + table->ACPIState.level.mclk.vDLL_CNTL = cpu_to_be32(dll_cntl); 1909 1909 1910 - table->ACPIState.levels[0].mclk.mclk_value = 0; 1910 + table->ACPIState.level.mclk.mclk_value = 0; 1911 1911 1912 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL = cpu_to_be32(spll_func_cntl); 1913 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_2 = cpu_to_be32(spll_func_cntl_2); 1914 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_3 = cpu_to_be32(spll_func_cntl_3); 1915 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_4 = cpu_to_be32(spll_func_cntl_4); 1912 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL = cpu_to_be32(spll_func_cntl); 1913 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_2 = cpu_to_be32(spll_func_cntl_2); 1914 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_3 = cpu_to_be32(spll_func_cntl_3); 1915 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_4 = cpu_to_be32(spll_func_cntl_4); 1916 1916 1917 - table->ACPIState.levels[0].sclk.sclk_value = 0; 1917 + table->ACPIState.level.sclk.sclk_value = 0; 1918 1918 1919 - ni_populate_mvdd_value(rdev, 0, &table->ACPIState.levels[0].mvdd); 1919 + ni_populate_mvdd_value(rdev, 0, &table->ACPIState.level.mvdd); 1920 1920 1921 1921 if (eg_pi->dynamic_ac_timing) 1922 - table->ACPIState.levels[0].ACIndex = 1; 1922 + table->ACPIState.level.ACIndex = 1; 1923 1923 1924 - table->ACPIState.levels[0].dpm2.MaxPS = 0; 1925 - table->ACPIState.levels[0].dpm2.NearTDPDec = 0; 1926 - table->ACPIState.levels[0].dpm2.AboveSafeInc = 0; 1927 - table->ACPIState.levels[0].dpm2.BelowSafeInc = 0; 1924 + table->ACPIState.level.dpm2.MaxPS = 0; 1925 + table->ACPIState.level.dpm2.NearTDPDec = 0; 1926 + table->ACPIState.level.dpm2.AboveSafeInc = 0; 1927 + table->ACPIState.level.dpm2.BelowSafeInc = 0; 1928 1928 1929 1929 reg = MIN_POWER_MASK | MAX_POWER_MASK; 1930 - table->ACPIState.levels[0].SQPowerThrottle = cpu_to_be32(reg); 1930 + table->ACPIState.level.SQPowerThrottle = cpu_to_be32(reg); 1931 1931 1932 1932 reg = MAX_POWER_DELTA_MASK | STI_SIZE_MASK | LTI_RATIO_MASK; 1933 - table->ACPIState.levels[0].SQPowerThrottle_2 = cpu_to_be32(reg); 1933 + table->ACPIState.level.SQPowerThrottle_2 = cpu_to_be32(reg); 1934 1934 1935 1935 return 0; 1936 1936 }
··· 1980 1980 if (ret) 1981 1981 return ret; 1982 1982 1983 - table->driverState = table->initialState; 1983 + table->driverState.flags = table->initialState.flags; 1984 + table->driverState.levelCount = table->initialState.levelCount; 1985 + table->driverState.levels[0] = table->initialState.level; 1984 1986 1985 1987 table->ULVState = table->initialState; 1986 1988
+21 -13
drivers/gpu/drm/radeon/nislands_smc.h
··· 143 143 144 144 typedef struct NISLANDS_SMC_SWSTATE NISLANDS_SMC_SWSTATE; 145 145 146 + struct NISLANDS_SMC_SWSTATE_SINGLE { 147 + uint8_t flags; 148 + uint8_t levelCount; 149 + uint8_t padding2; 150 + uint8_t padding3; 151 + NISLANDS_SMC_HW_PERFORMANCE_LEVEL level; 152 + }; 153 + 146 154 #define NISLANDS_SMC_VOLTAGEMASK_VDDC 0 147 155 #define NISLANDS_SMC_VOLTAGEMASK_MVDD 1 148 156 #define NISLANDS_SMC_VOLTAGEMASK_VDDCI 2 ··· 168 160 169 161 struct NISLANDS_SMC_STATETABLE 170 162 { 171 - uint8_t thermalProtectType; 172 - uint8_t systemFlags; 173 - uint8_t maxVDDCIndexInPPTable; 174 - uint8_t extraFlags; 175 - uint8_t highSMIO[NISLANDS_MAX_NO_VREG_STEPS]; 176 - uint32_t lowSMIO[NISLANDS_MAX_NO_VREG_STEPS]; 177 - NISLANDS_SMC_VOLTAGEMASKTABLE voltageMaskTable; 178 - PP_NIslands_DPM2Parameters dpm2Params; 179 - NISLANDS_SMC_SWSTATE initialState; 180 - NISLANDS_SMC_SWSTATE ACPIState; 181 - NISLANDS_SMC_SWSTATE ULVState; 182 - NISLANDS_SMC_SWSTATE driverState; 183 - NISLANDS_SMC_HW_PERFORMANCE_LEVEL dpmLevels[NISLANDS_MAX_SMC_PERFORMANCE_LEVELS_PER_SWSTATE - 1]; 163 + uint8_t thermalProtectType; 164 + uint8_t systemFlags; 165 + uint8_t maxVDDCIndexInPPTable; 166 + uint8_t extraFlags; 167 + uint8_t highSMIO[NISLANDS_MAX_NO_VREG_STEPS]; 168 + uint32_t lowSMIO[NISLANDS_MAX_NO_VREG_STEPS]; 169 + NISLANDS_SMC_VOLTAGEMASKTABLE voltageMaskTable; 170 + PP_NIslands_DPM2Parameters dpm2Params; 171 + struct NISLANDS_SMC_SWSTATE_SINGLE initialState; 172 + struct NISLANDS_SMC_SWSTATE_SINGLE ACPIState; 173 + struct NISLANDS_SMC_SWSTATE_SINGLE ULVState; 174 + NISLANDS_SMC_SWSTATE driverState; 175 + NISLANDS_SMC_HW_PERFORMANCE_LEVEL dpmLevels[NISLANDS_MAX_SMC_PERFORMANCE_LEVELS_PER_SWSTATE]; 184 176 }; 185 177 186 178 typedef struct NISLANDS_SMC_STATETABLE NISLANDS_SMC_STATETABLE;
+1
drivers/gpu/drm/radeon/radeon.h
··· 1549 1549 void *priv; 1550 1550 u32 new_active_crtcs; 1551 1551 int new_active_crtc_count; 1552 + int high_pixelclock_count; 1552 1553 u32 current_active_crtcs; 1553 1554 int current_active_crtc_count; 1554 1555 bool single_display;
+8
drivers/gpu/drm/radeon/radeon_pm.c
··· 1767 1767 struct drm_device *ddev = rdev->ddev; 1768 1768 struct drm_crtc *crtc; 1769 1769 struct radeon_crtc *radeon_crtc; 1770 + struct radeon_connector *radeon_connector; 1770 1771 1771 1772 if (!rdev->pm.dpm_enabled) 1772 1773 return; ··· 1777 1776 /* update active crtc counts */ 1778 1777 rdev->pm.dpm.new_active_crtcs = 0; 1779 1778 rdev->pm.dpm.new_active_crtc_count = 0; 1779 + rdev->pm.dpm.high_pixelclock_count = 0; 1780 1780 if (rdev->num_crtc && rdev->mode_info.mode_config_initialized) { 1781 1781 list_for_each_entry(crtc, 1782 1782 &ddev->mode_config.crtc_list, head) { ··· 1785 1783 if (crtc->enabled) { 1786 1784 rdev->pm.dpm.new_active_crtcs |= (1 << radeon_crtc->crtc_id); 1787 1785 rdev->pm.dpm.new_active_crtc_count++; 1786 + if (!radeon_crtc->connector) 1787 + continue; 1788 + 1789 + radeon_connector = to_radeon_connector(radeon_crtc->connector); 1790 + if (radeon_connector->pixelclock_for_modeset > 297000) 1791 + rdev->pm.dpm.high_pixelclock_count++; 1788 1792 } 1789 1793 } 1790 1794 }
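297000 kHz is the 297 MHz TMDS clock of common 4K HDMI modes (4K at 30 Hz, or 4K at 60 Hz with 4:2:0 subsampling), so high_pixelclock_count is effectively a census of active 4K-class heads. The si_dpm.c hunk below is the consumer: with more than one such display, the blanking windows are too tight to reclock memory safely, so engine clock switching is pinned:

    if (rdev->pm.dpm.high_pixelclock_count > 1)
    	disable_sclk_switching = true;	/* hold sclk; avoids visible flicker */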
+91 -86
drivers/gpu/drm/radeon/si_dpm.c
··· 2979 2979 (rdev->pdev->device == 0x6605)) { 2980 2980 max_sclk = 75000; 2981 2981 } 2982 + 2983 + if (rdev->pm.dpm.high_pixelclock_count > 1) 2984 + disable_sclk_switching = true; 2982 2985 } 2983 2986 2984 2987 if (rps->vce_active) {
··· 4353 4350 u32 reg; 4354 4351 int ret; 4355 4352 4356 - table->initialState.levels[0].mclk.vDLL_CNTL = 4353 + table->initialState.level.mclk.vDLL_CNTL = 4357 4354 cpu_to_be32(si_pi->clock_registers.dll_cntl); 4358 - table->initialState.levels[0].mclk.vMCLK_PWRMGT_CNTL = 4355 + table->initialState.level.mclk.vMCLK_PWRMGT_CNTL = 4359 4356 cpu_to_be32(si_pi->clock_registers.mclk_pwrmgt_cntl); 4360 - table->initialState.levels[0].mclk.vMPLL_AD_FUNC_CNTL = 4357 + table->initialState.level.mclk.vMPLL_AD_FUNC_CNTL = 4361 4358 cpu_to_be32(si_pi->clock_registers.mpll_ad_func_cntl); 4362 - table->initialState.levels[0].mclk.vMPLL_DQ_FUNC_CNTL = 4359 + table->initialState.level.mclk.vMPLL_DQ_FUNC_CNTL = 4363 4360 cpu_to_be32(si_pi->clock_registers.mpll_dq_func_cntl); 4364 - table->initialState.levels[0].mclk.vMPLL_FUNC_CNTL = 4361 + table->initialState.level.mclk.vMPLL_FUNC_CNTL = 4365 4362 cpu_to_be32(si_pi->clock_registers.mpll_func_cntl); 4366 - table->initialState.levels[0].mclk.vMPLL_FUNC_CNTL_1 = 4363 + table->initialState.level.mclk.vMPLL_FUNC_CNTL_1 = 4367 4364 cpu_to_be32(si_pi->clock_registers.mpll_func_cntl_1); 4368 - table->initialState.levels[0].mclk.vMPLL_FUNC_CNTL_2 = 4365 + table->initialState.level.mclk.vMPLL_FUNC_CNTL_2 = 4369 4366 cpu_to_be32(si_pi->clock_registers.mpll_func_cntl_2); 4370 - table->initialState.levels[0].mclk.vMPLL_SS = 4367 + table->initialState.level.mclk.vMPLL_SS = 4371 4368 cpu_to_be32(si_pi->clock_registers.mpll_ss1); 4372 - table->initialState.levels[0].mclk.vMPLL_SS2 = 4369 + table->initialState.level.mclk.vMPLL_SS2 = 4373 4370 cpu_to_be32(si_pi->clock_registers.mpll_ss2); 4374 4371 4375 - table->initialState.levels[0].mclk.mclk_value = 4372 + table->initialState.level.mclk.mclk_value = 4376 4373 cpu_to_be32(initial_state->performance_levels[0].mclk); 4377 4374 4378 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL = 4375 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL = 4379 4376 cpu_to_be32(si_pi->clock_registers.cg_spll_func_cntl); 4380 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_2 = 4377 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_2 = 4381 4378 cpu_to_be32(si_pi->clock_registers.cg_spll_func_cntl_2); 4382 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_3 = 4379 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_3 = 4383 4380 cpu_to_be32(si_pi->clock_registers.cg_spll_func_cntl_3); 4384 - table->initialState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_4 = 4381 + table->initialState.level.sclk.vCG_SPLL_FUNC_CNTL_4 = 4385 4382 cpu_to_be32(si_pi->clock_registers.cg_spll_func_cntl_4); 4386 - table->initialState.levels[0].sclk.vCG_SPLL_SPREAD_SPECTRUM = 4383 + table->initialState.level.sclk.vCG_SPLL_SPREAD_SPECTRUM = 4387 4384 cpu_to_be32(si_pi->clock_registers.cg_spll_spread_spectrum); 4388 - table->initialState.levels[0].sclk.vCG_SPLL_SPREAD_SPECTRUM_2 = 4385 + table->initialState.level.sclk.vCG_SPLL_SPREAD_SPECTRUM_2 = 4389 4386 cpu_to_be32(si_pi->clock_registers.cg_spll_spread_spectrum_2); 4390 4387 4391 - table->initialState.levels[0].sclk.sclk_value = 4388 + table->initialState.level.sclk.sclk_value = 4392 4389 cpu_to_be32(initial_state->performance_levels[0].sclk); 4393 4390 4394 - table->initialState.levels[0].arbRefreshState = 4391 + table->initialState.level.arbRefreshState = 4395 4392 SISLANDS_INITIAL_STATE_ARB_INDEX; 4396 4393 4397 - table->initialState.levels[0].ACIndex = 0; 4394 + table->initialState.level.ACIndex = 0; 4398 4395 4399 4396 ret = si_populate_voltage_value(rdev, &eg_pi->vddc_voltage_table, 4400 4397 initial_state->performance_levels[0].vddc, 4401 - &table->initialState.levels[0].vddc); 4398 + &table->initialState.level.vddc); 4402 4399 4403 4400 if (!ret) { 4404 4401 u16 std_vddc; 4405 4402 4406 4403 ret = si_get_std_voltage_value(rdev, 4407 - &table->initialState.levels[0].vddc, 4404 + &table->initialState.level.vddc, 4408 4405 &std_vddc); 4409 4406 if (!ret) 4410 4407 si_populate_std_voltage_value(rdev, std_vddc, 4411 - table->initialState.levels[0].vddc.index, 4412 - &table->initialState.levels[0].std_vddc); 4408 + table->initialState.level.vddc.index, 4409 + &table->initialState.level.std_vddc); 4413 4410 } 4414 4411 4415 4412 if (eg_pi->vddci_control) 4416 4413 si_populate_voltage_value(rdev, 4417 4414 &eg_pi->vddci_voltage_table, 4418 4415 initial_state->performance_levels[0].vddci, 4419 - &table->initialState.levels[0].vddci); 4416 + &table->initialState.level.vddci); 4420 4417 4421 4418 if (si_pi->vddc_phase_shed_control) 4422 4419 si_populate_phase_shedding_value(rdev,
··· 4424 4421 initial_state->performance_levels[0].vddc, 4425 4422 initial_state->performance_levels[0].sclk, 4426 4423 initial_state->performance_levels[0].mclk, 4427 - &table->initialState.levels[0].vddc); 4424 + &table->initialState.level.vddc); 4428 4425 4429 - si_populate_initial_mvdd_value(rdev, &table->initialState.levels[0].mvdd); 4426 + si_populate_initial_mvdd_value(rdev, &table->initialState.level.mvdd); 4430 4427 4431 4428 reg = CG_R(0xffff) | CG_L(0); 4432 - table->initialState.levels[0].aT = cpu_to_be32(reg); 4429 + table->initialState.level.aT = cpu_to_be32(reg); 4433 4430 4434 - table->initialState.levels[0].bSP = cpu_to_be32(pi->dsp); 4431 + table->initialState.level.bSP = cpu_to_be32(pi->dsp); 4435 4432 4436 - table->initialState.levels[0].gen2PCIE = (u8)si_pi->boot_pcie_gen; 4433 + table->initialState.level.gen2PCIE = (u8)si_pi->boot_pcie_gen; 4437 4434 4438 4435 if (pi->mem_gddr5) { 4439 - table->initialState.levels[0].strobeMode = 4436 + table->initialState.level.strobeMode = 4440 4437 si_get_strobe_mode_settings(rdev, 4441 4438 initial_state->performance_levels[0].mclk); 4442 4439 4443 4440 if (initial_state->performance_levels[0].mclk > pi->mclk_edc_enable_threshold) 4444 - table->initialState.levels[0].mcFlags = SISLANDS_SMC_MC_EDC_RD_FLAG | SISLANDS_SMC_MC_EDC_WR_FLAG; 4441 + table->initialState.level.mcFlags = SISLANDS_SMC_MC_EDC_RD_FLAG | SISLANDS_SMC_MC_EDC_WR_FLAG; 4445 4442 else 4446 - table->initialState.levels[0].mcFlags = 0; 4443 + table->initialState.level.mcFlags = 0; 4447 4444 } 4448 4445 4449 4446 table->initialState.levelCount = 1; 4450 4447 4451 4448 table->initialState.flags |= PPSMC_SWSTATE_FLAG_DC; 4452 4449 4453 - table->initialState.levels[0].dpm2.MaxPS = 0; 4454 - table->initialState.levels[0].dpm2.NearTDPDec = 0; 4455 - table->initialState.levels[0].dpm2.AboveSafeInc = 0; 4456 - table->initialState.levels[0].dpm2.BelowSafeInc = 0; 4457 - table->initialState.levels[0].dpm2.PwrEfficiencyRatio = 0; 4450 + table->initialState.level.dpm2.MaxPS = 0; 4451 + table->initialState.level.dpm2.NearTDPDec = 0; 4452 + table->initialState.level.dpm2.AboveSafeInc = 0; 4453 + table->initialState.level.dpm2.BelowSafeInc = 0; 4454 + table->initialState.level.dpm2.PwrEfficiencyRatio = 0; 4458 4455 4459 4456 reg = MIN_POWER_MASK | MAX_POWER_MASK; 4460 - table->initialState.levels[0].SQPowerThrottle = cpu_to_be32(reg); 4457 + table->initialState.level.SQPowerThrottle = cpu_to_be32(reg); 4461 4458 4462 4459 reg = MAX_POWER_DELTA_MASK | STI_SIZE_MASK | LTI_RATIO_MASK; 4463 - table->initialState.levels[0].SQPowerThrottle_2 = cpu_to_be32(reg); 4460 + table->initialState.level.SQPowerThrottle_2 = cpu_to_be32(reg); 4464 4461 4465 4462 return 0; 4466 4463 }
··· 4491 4488 4492 4489 if (pi->acpi_vddc) { 4493 4490 ret = si_populate_voltage_value(rdev, &eg_pi->vddc_voltage_table, 4494 - pi->acpi_vddc, &table->ACPIState.levels[0].vddc); 4491 + pi->acpi_vddc, &table->ACPIState.level.vddc); 4495 4492 if (!ret) { 4496 4493 u16 std_vddc; 4497 4494 4498 4495 ret = si_get_std_voltage_value(rdev, 4499 - &table->ACPIState.levels[0].vddc, &std_vddc); 4496 + &table->ACPIState.level.vddc, &std_vddc); 4500 4497 if (!ret) 4501 4498 si_populate_std_voltage_value(rdev, std_vddc, 4502 - table->ACPIState.levels[0].vddc.index, 4503 - &table->ACPIState.levels[0].std_vddc); 4499 + table->ACPIState.level.vddc.index, 4500 + &table->ACPIState.level.std_vddc); 4504 4501 } 4505 - table->ACPIState.levels[0].gen2PCIE = si_pi->acpi_pcie_gen; 4502 + table->ACPIState.level.gen2PCIE = si_pi->acpi_pcie_gen; 4506 4503 4507 4504 if (si_pi->vddc_phase_shed_control) { 4508 4505 si_populate_phase_shedding_value(rdev,
··· 4510 4507 pi->acpi_vddc, 4511 4508 0, 4512 4509 0, 4513 - &table->ACPIState.levels[0].vddc); 4510 + &table->ACPIState.level.vddc); 4514 4511 } 4515 4512 } else { 4516 4513 ret = si_populate_voltage_value(rdev, &eg_pi->vddc_voltage_table, 4517 - pi->min_vddc_in_table, &table->ACPIState.levels[0].vddc); 4514 + pi->min_vddc_in_table, &table->ACPIState.level.vddc); 4518 4515 if (!ret) { 4519 4516 u16 std_vddc; 4520 4517 4521 4518 ret = si_get_std_voltage_value(rdev, 4522 - &table->ACPIState.levels[0].vddc, &std_vddc); 4519 + &table->ACPIState.level.vddc, &std_vddc); 4523 4520 4524 4521 if (!ret) 4525 4522 si_populate_std_voltage_value(rdev, std_vddc, 4526 - table->ACPIState.levels[0].vddc.index, 4527 - &table->ACPIState.levels[0].std_vddc); 4523 + table->ACPIState.level.vddc.index, 4524 + &table->ACPIState.level.std_vddc); 4528 4525 } 4529 - table->ACPIState.levels[0].gen2PCIE = (u8)r600_get_pcie_gen_support(rdev, 4526 + table->ACPIState.level.gen2PCIE = (u8)r600_get_pcie_gen_support(rdev, 4530 4527 si_pi->sys_pcie_mask, 4531 4528 si_pi->boot_pcie_gen, 4532 4529 RADEON_PCIE_GEN1);
··· 4537 4534 pi->min_vddc_in_table, 4538 4535 0, 4539 4536 0, 4540 - &table->ACPIState.levels[0].vddc); 4537 + &table->ACPIState.level.vddc); 4541 4538 4542 4539 if (pi->acpi_vddc) { 4543 4540 if (eg_pi->acpi_vddci) 4544 4541 si_populate_voltage_value(rdev, &eg_pi->vddci_voltage_table, 4545 4542 eg_pi->acpi_vddci, 4546 - &table->ACPIState.levels[0].vddci); 4543 + &table->ACPIState.level.vddci); 4547 4544 } 4548 4545 4549 4546 mclk_pwrmgt_cntl |= MRDCK0_RESET | MRDCK1_RESET;
··· 4555 4552 spll_func_cntl_2 &= ~SCLK_MUX_SEL_MASK; 4556 4553 spll_func_cntl_2 |= SCLK_MUX_SEL(4); 4557 4554 4558 - table->ACPIState.levels[0].mclk.vDLL_CNTL = 4555 + table->ACPIState.level.mclk.vDLL_CNTL = 4559 4556 cpu_to_be32(dll_cntl); 4560 - table->ACPIState.levels[0].mclk.vMCLK_PWRMGT_CNTL = 4557 + table->ACPIState.level.mclk.vMCLK_PWRMGT_CNTL = 4561 4558 cpu_to_be32(mclk_pwrmgt_cntl); 4562 - table->ACPIState.levels[0].mclk.vMPLL_AD_FUNC_CNTL = 4559 + table->ACPIState.level.mclk.vMPLL_AD_FUNC_CNTL = 4563 4560 cpu_to_be32(mpll_ad_func_cntl); 4564 - table->ACPIState.levels[0].mclk.vMPLL_DQ_FUNC_CNTL = 4561 + table->ACPIState.level.mclk.vMPLL_DQ_FUNC_CNTL = 4565 4562 cpu_to_be32(mpll_dq_func_cntl); 4566 - table->ACPIState.levels[0].mclk.vMPLL_FUNC_CNTL = 4563 + table->ACPIState.level.mclk.vMPLL_FUNC_CNTL = 4567 4564 cpu_to_be32(mpll_func_cntl); 4568 - table->ACPIState.levels[0].mclk.vMPLL_FUNC_CNTL_1 = 4565 + table->ACPIState.level.mclk.vMPLL_FUNC_CNTL_1 = 4569 4566 cpu_to_be32(mpll_func_cntl_1); 4570 - table->ACPIState.levels[0].mclk.vMPLL_FUNC_CNTL_2 = 4567 + table->ACPIState.level.mclk.vMPLL_FUNC_CNTL_2 = 4571 4568 cpu_to_be32(mpll_func_cntl_2); 4572 - table->ACPIState.levels[0].mclk.vMPLL_SS = 4569 + table->ACPIState.level.mclk.vMPLL_SS = 4573 4570 cpu_to_be32(si_pi->clock_registers.mpll_ss1); 4574 - table->ACPIState.levels[0].mclk.vMPLL_SS2 = 4571 + table->ACPIState.level.mclk.vMPLL_SS2 = 4575 4572 cpu_to_be32(si_pi->clock_registers.mpll_ss2); 4576 4573 4577 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL = 4574 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL = 4578 4575 cpu_to_be32(spll_func_cntl); 4579 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_2 = 4576 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_2 = 4580 4577 cpu_to_be32(spll_func_cntl_2); 4581 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_3 = 4578 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_3 = 4582 4579 cpu_to_be32(spll_func_cntl_3); 4583 - table->ACPIState.levels[0].sclk.vCG_SPLL_FUNC_CNTL_4 = 4580 + table->ACPIState.level.sclk.vCG_SPLL_FUNC_CNTL_4 = 4584 4581 cpu_to_be32(spll_func_cntl_4); 4585 4582 4586 - table->ACPIState.levels[0].mclk.mclk_value = 0; 4587 - table->ACPIState.levels[0].sclk.sclk_value = 0; 4583 + table->ACPIState.level.mclk.mclk_value = 0; 4584 + table->ACPIState.level.sclk.sclk_value = 0; 4588 4585 4589 - si_populate_mvdd_value(rdev, 0, &table->ACPIState.levels[0].mvdd); 4586 + si_populate_mvdd_value(rdev, 0, &table->ACPIState.level.mvdd); 4590 4587 4591 4588 if (eg_pi->dynamic_ac_timing) 4592 - table->ACPIState.levels[0].ACIndex = 0; 4589 + table->ACPIState.level.ACIndex = 0; 4593 4590 4594 - table->ACPIState.levels[0].dpm2.MaxPS = 0; 4595 - table->ACPIState.levels[0].dpm2.NearTDPDec = 0; 4596 - table->ACPIState.levels[0].dpm2.AboveSafeInc = 0; 4597 - table->ACPIState.levels[0].dpm2.BelowSafeInc = 0; 4598 - table->ACPIState.levels[0].dpm2.PwrEfficiencyRatio = 0; 4591 + table->ACPIState.level.dpm2.MaxPS = 0; 4592 + table->ACPIState.level.dpm2.NearTDPDec = 0; 4593 + table->ACPIState.level.dpm2.AboveSafeInc = 0; 4594 + table->ACPIState.level.dpm2.BelowSafeInc = 0; 4595 + table->ACPIState.level.dpm2.PwrEfficiencyRatio = 0; 4599 4596 4600 4597 reg = MIN_POWER_MASK | MAX_POWER_MASK; 4601 - table->ACPIState.levels[0].SQPowerThrottle = cpu_to_be32(reg); 4598 + table->ACPIState.level.SQPowerThrottle = cpu_to_be32(reg); 4602 4599 4603 4600 reg = MAX_POWER_DELTA_MASK | STI_SIZE_MASK | LTI_RATIO_MASK; 4604 - table->ACPIState.levels[0].SQPowerThrottle_2 = cpu_to_be32(reg); 4601 + table->ACPIState.level.SQPowerThrottle_2 = cpu_to_be32(reg); 4605 4602 4606 4603 return 0; 4607 4604 } 4608 4605 4609 4606 static int si_populate_ulv_state(struct radeon_device *rdev, 4610 - SISLANDS_SMC_SWSTATE *state) 4607 + struct SISLANDS_SMC_SWSTATE_SINGLE *state) 4611 4608 { 4612 4609 struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev); 4613 4610 struct si_power_info *si_pi = si_get_pi(rdev);
··· 4616 4613 int ret; 4617 4614 4618 4615 ret = si_convert_power_level_to_smc(rdev, &ulv->pl, 4619 - &state->levels[0]); 4616 + &state->level); 4620 4617 if (!ret) { 4621 4618 if (eg_pi->sclk_deep_sleep) { 4622
4619 if (sclk_in_sr <= SCLK_MIN_DEEPSLEEP_FREQ) 4623 - state->levels[0].stateFlags |= PPSMC_STATEFLAG_DEEPSLEEP_BYPASS; 4620 + state->level.stateFlags |= PPSMC_STATEFLAG_DEEPSLEEP_BYPASS; 4624 4621 else 4625 - state->levels[0].stateFlags |= PPSMC_STATEFLAG_DEEPSLEEP_THROTTLE; 4622 + state->level.stateFlags |= PPSMC_STATEFLAG_DEEPSLEEP_THROTTLE; 4626 4623 } 4627 4624 if (ulv->one_pcie_lane_in_ulv) 4628 4625 state->flags |= PPSMC_SWSTATE_FLAG_PCIE_X1; 4629 - state->levels[0].arbRefreshState = (u8)(SISLANDS_ULV_STATE_ARB_INDEX); 4630 - state->levels[0].ACIndex = 1; 4631 - state->levels[0].std_vddc = state->levels[0].vddc; 4626 + state->level.arbRefreshState = (u8)(SISLANDS_ULV_STATE_ARB_INDEX); 4627 + state->level.ACIndex = 1; 4628 + state->level.std_vddc = state->level.vddc; 4632 4629 state->levelCount = 1; 4633 4630 4634 4631 state->flags |= PPSMC_SWSTATE_FLAG_DC; ··· 4728 4725 if (ret) 4729 4726 return ret; 4730 4727 4731 - table->driverState = table->initialState; 4728 + table->driverState.flags = table->initialState.flags; 4729 + table->driverState.levelCount = table->initialState.levelCount; 4730 + table->driverState.levels[0] = table->initialState.level; 4732 4731 4733 4732 ret = si_do_program_memory_timing_parameters(rdev, radeon_boot_state, 4734 4733 SISLANDS_INITIAL_STATE_ARB_INDEX); ··· 5280 5275 if (ulv->supported && ulv->pl.vddc) { 5281 5276 u32 address = si_pi->state_table_start + 5282 5277 offsetof(SISLANDS_SMC_STATETABLE, ULVState); 5283 - SISLANDS_SMC_SWSTATE *smc_state = &si_pi->smc_statetable.ULVState; 5284 - u32 state_size = sizeof(SISLANDS_SMC_SWSTATE); 5278 + struct SISLANDS_SMC_SWSTATE_SINGLE *smc_state = &si_pi->smc_statetable.ULVState; 5279 + u32 state_size = sizeof(struct SISLANDS_SMC_SWSTATE_SINGLE); 5285 5280 5286 5281 memset(smc_state, 0, state_size); 5287 5282
+21 -13
drivers/gpu/drm/radeon/sislands_smc.h
··· 191 191 192 192 typedef struct SISLANDS_SMC_SWSTATE SISLANDS_SMC_SWSTATE; 193 193 194 + struct SISLANDS_SMC_SWSTATE_SINGLE { 195 + uint8_t flags; 196 + uint8_t levelCount; 197 + uint8_t padding2; 198 + uint8_t padding3; 199 + SISLANDS_SMC_HW_PERFORMANCE_LEVEL level; 200 + }; 201 + 194 202 #define SISLANDS_SMC_VOLTAGEMASK_VDDC 0 195 203 #define SISLANDS_SMC_VOLTAGEMASK_MVDD 1 196 204 #define SISLANDS_SMC_VOLTAGEMASK_VDDCI 2 ··· 216 208 217 209 struct SISLANDS_SMC_STATETABLE 218 210 { 219 - uint8_t thermalProtectType; 220 - uint8_t systemFlags; 221 - uint8_t maxVDDCIndexInPPTable; 222 - uint8_t extraFlags; 223 - uint32_t lowSMIO[SISLANDS_MAX_NO_VREG_STEPS]; 224 - SISLANDS_SMC_VOLTAGEMASKTABLE voltageMaskTable; 225 - SISLANDS_SMC_VOLTAGEMASKTABLE phaseMaskTable; 226 - PP_SIslands_DPM2Parameters dpm2Params; 227 - SISLANDS_SMC_SWSTATE initialState; 228 - SISLANDS_SMC_SWSTATE ACPIState; 229 - SISLANDS_SMC_SWSTATE ULVState; 230 - SISLANDS_SMC_SWSTATE driverState; 231 - SISLANDS_SMC_HW_PERFORMANCE_LEVEL dpmLevels[SISLANDS_MAX_SMC_PERFORMANCE_LEVELS_PER_SWSTATE - 1]; 211 + uint8_t thermalProtectType; 212 + uint8_t systemFlags; 213 + uint8_t maxVDDCIndexInPPTable; 214 + uint8_t extraFlags; 215 + uint32_t lowSMIO[SISLANDS_MAX_NO_VREG_STEPS]; 216 + SISLANDS_SMC_VOLTAGEMASKTABLE voltageMaskTable; 217 + SISLANDS_SMC_VOLTAGEMASKTABLE phaseMaskTable; 218 + PP_SIslands_DPM2Parameters dpm2Params; 219 + struct SISLANDS_SMC_SWSTATE_SINGLE initialState; 220 + struct SISLANDS_SMC_SWSTATE_SINGLE ACPIState; 221 + struct SISLANDS_SMC_SWSTATE_SINGLE ULVState; 222 + SISLANDS_SMC_SWSTATE driverState; 223 + SISLANDS_SMC_HW_PERFORMANCE_LEVEL dpmLevels[SISLANDS_MAX_SMC_PERFORMANCE_LEVELS_PER_SWSTATE]; 232 224 }; 233 225 234 226 typedef struct SISLANDS_SMC_STATETABLE SISLANDS_SMC_STATETABLE;
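In the sislands_smc.h hunk above, the three single-level SMC states (initial, ACPI, ULV) switch to the new SISLANDS_SMC_SWSTATE_SINGLE while driverState keeps the array-based SISLANDS_SMC_SWSTATE; this looks intended to stop the driver from indexing levels[0] of a trailing one-element array, a pattern newer compilers flag as out of bounds. A minimal sketch (hypothetical stand-in types, not the real layout) of why the struct assignment in si_init_smc_table() had to become a field-by-field copy:

    struct level { int sclk, mclk; };                     /* stand-in */
    struct single { char flags, count; struct level level; };
    struct multi  { char flags, count; struct level levels[4]; };

    /* src and dst are no longer the same type, so "dst = src" is
     * ill-formed; copy the header fields and the one level instead,
     * as the si_dpm.c hunk now does. */
    void copy_state(struct multi *dst, const struct single *src)
    {
            dst->flags = src->flags;
            dst->count = src->count;
            dst->levels[0] = src->level;
    }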
-6
drivers/gpu/drm/vc4/vc4_vec.c
··· 197 197 struct drm_encoder *encoder; 198 198 }; 199 199 200 - static inline struct vc4_vec_connector * 201 - to_vc4_vec_connector(struct drm_connector *connector) 202 - { 203 - return container_of(connector, struct vc4_vec_connector, base); 204 - } 205 - 206 200 enum vc4_vec_tv_mode_id { 207 201 VC4_VEC_TV_MODE_NTSC, 208 202 VC4_VEC_TV_MODE_NTSC_J,
+1 -1
drivers/hwmon/adm9240.c
··· 485 485 reg = ADM9240_REG_IN_MIN(channel); 486 486 break; 487 487 case hwmon_in_max: 488 - reg = ADM9240_REG_IN(channel); 488 + reg = ADM9240_REG_IN_MAX(channel); 489 489 break; 490 490 default: 491 491 return -EOPNOTSUPP;
+2 -2
drivers/hwmon/corsair-psu.c
··· 355 355 return 0444; 356 356 default: 357 357 return 0; 358 - }; 358 + } 359 359 } 360 360 361 361 static umode_t corsairpsu_hwmon_in_is_visible(const struct corsairpsu_data *priv, u32 attr, ··· 376 376 break; 377 377 default: 378 378 break; 379 - }; 379 + } 380 380 381 381 return res; 382 382 }
+2 -9
drivers/hwmon/lm80.c
··· 596 596 struct device *dev = &client->dev; 597 597 struct device *hwmon_dev; 598 598 struct lm80_data *data; 599 - int rv; 600 599 601 600 data = devm_kzalloc(dev, sizeof(struct lm80_data), GFP_KERNEL); 602 601 if (!data) ··· 608 609 lm80_init_client(client); 609 610 610 611 /* A few vars need to be filled upon startup */ 611 - rv = lm80_read_value(client, LM80_REG_FAN_MIN(1)); 612 - if (rv < 0) 613 - return rv; 614 - data->fan[f_min][0] = rv; 615 - rv = lm80_read_value(client, LM80_REG_FAN_MIN(2)); 616 - if (rv < 0) 617 - return rv; 618 - data->fan[f_min][1] = rv; 612 + data->fan[f_min][0] = lm80_read_value(client, LM80_REG_FAN_MIN(1)); 613 + data->fan[f_min][1] = lm80_read_value(client, LM80_REG_FAN_MIN(2)); 619 614 620 615 hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name, 621 616 data, lm80_groups);
+6 -2
drivers/hwmon/ltc2992.c
··· 900 900 901 901 fwnode_for_each_available_child_node(fwnode, child) { 902 902 ret = fwnode_property_read_u32(child, "reg", &addr); 903 - if (ret < 0) 903 + if (ret < 0) { 904 + fwnode_handle_put(child); 904 905 return ret; 906 + } 905 907 906 - if (addr > 1) 908 + if (addr > 1) { 909 + fwnode_handle_put(child); 907 910 return -EINVAL; 911 + } 908 912 909 913 ret = fwnode_property_read_u32(child, "shunt-resistor-micro-ohms", &val); 910 914 if (!ret)
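In the ltc2992 hunk above, note that fwnode_for_each_available_child_node() holds a reference on the child node it hands to the loop body; that reference is normally dropped when the macro advances to the next child, so returning from inside the loop leaks it. Each early exit therefore needs an explicit fwnode_handle_put(child), which is exactly what the two added calls provide. The invariant, as a kernel-style sketch:

    fwnode_for_each_available_child_node(fwnode, child) {
            if (something_failed) {
                    fwnode_handle_put(child);  /* drop the loop's ref */
                    return ret;
            }
            /* ... */
    }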
+3 -2
drivers/hwmon/occ/common.c
··· 217 217 return rc; 218 218 219 219 /* limit the maximum rate of polling the OCC */ 220 - if (time_after(jiffies, occ->last_update + OCC_UPDATE_FREQUENCY)) { 220 + if (time_after(jiffies, occ->next_update)) { 221 221 rc = occ_poll(occ); 222 - occ->last_update = jiffies; 222 + occ->next_update = jiffies + OCC_UPDATE_FREQUENCY; 223 223 } else { 224 224 rc = occ->last_error; 225 225 } ··· 1165 1165 return rc; 1166 1166 } 1167 1167 1168 + occ->next_update = jiffies + OCC_UPDATE_FREQUENCY; 1168 1169 occ_parse_poll_response(occ); 1169 1170 1170 1171 rc = occ_setup_sensor_attrs(occ);
+1 -1
drivers/hwmon/occ/common.h
··· 99 99 u8 poll_cmd_data; /* to perform OCC poll command */ 100 100 int (*send_cmd)(struct occ *occ, u8 *cmd); 101 101 102 - unsigned long last_update; 102 + unsigned long next_update; 103 103 struct mutex lock; /* lock OCC access */ 104 104 105 105 struct device *hwmon;
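The occ change above appears to fix the rate limiter by storing the next allowed poll time rather than the last poll time, and by initialising it in probe so the very first comparison is not made against zero. A standalone sketch of the same scheme; the kernel's time_after() is the wraparound-safe form of the signed comparison below:

    #include <stdbool.h>

    static unsigned long next_update;   /* conceptually, in jiffies */

    static bool should_poll(unsigned long now, unsigned long interval)
    {
            if ((long)(now - next_update) > 0) { /* time_after(now, next_update) */
                    next_update = now + interval;
                    return true;
            }
            return false;
    }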
+25 -2
drivers/hwmon/pmbus/fsp-3y.c
··· 57 57 case YH5151E_PAGE_12V_LOG: 58 58 return YH5151E_PAGE_12V_REAL; 59 59 case YH5151E_PAGE_5V_LOG: 60 - return YH5151E_PAGE_5V_LOG; 60 + return YH5151E_PAGE_5V_REAL; 61 61 case YH5151E_PAGE_3V3_LOG: 62 62 return YH5151E_PAGE_3V3_REAL; 63 63 } ··· 103 103 104 104 static int fsp3y_read_byte_data(struct i2c_client *client, int page, int reg) 105 105 { 106 + const struct pmbus_driver_info *info = pmbus_get_driver_info(client); 107 + struct fsp3y_data *data = to_fsp3y_data(info); 106 108 int rv; 109 + 110 + /* 111 + * YH5151-E outputs vout in linear11. The conversion is done when 112 + * reading. Here, we have to inject pmbus_core with the correct 113 + * exponent (it is -6). 114 + */ 115 + if (data->chip == yh5151e && reg == PMBUS_VOUT_MODE) 116 + return 0x1A; 107 117 108 118 rv = set_page(client, page); 109 119 if (rv < 0) ··· 124 114 125 115 static int fsp3y_read_word_data(struct i2c_client *client, int page, int phase, int reg) 126 116 { 117 + const struct pmbus_driver_info *info = pmbus_get_driver_info(client); 118 + struct fsp3y_data *data = to_fsp3y_data(info); 127 119 int rv; 128 120 129 121 /* ··· 156 144 if (rv < 0) 157 145 return rv; 158 146 159 - return i2c_smbus_read_word_data(client, reg); 147 + rv = i2c_smbus_read_word_data(client, reg); 148 + if (rv < 0) 149 + return rv; 150 + 151 + /* 152 + * YH-5151E is non-compliant and outputs output voltages in linear11 153 + * instead of linear16. 154 + */ 155 + if (data->chip == yh5151e && reg == PMBUS_READ_VOUT) 156 + rv = sign_extend32(rv, 10) & 0xffff; 157 + 158 + return rv; 160 159 } 161 160 162 161 static struct pmbus_driver_info fsp3y_info[] = {
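Background for the fsp-3y hunk above: PMBus VOUT is normally LINEAR16 (mantissa only, with the exponent taken from VOUT_MODE), but this PSU reports it in LINEAR11 (5-bit two's-complement exponent in bits 15:11, 11-bit mantissa in bits 10:0). The faked VOUT_MODE of 0x1A encodes "linear, exponent -6", and sign_extend32(rv, 10) & 0xffff turns the 11-bit mantissa into a 16-bit one; this evidently assumes the device's LINEAR11 exponent is always -6. A standalone decode of a full LINEAR11 word, with a made-up register value:

    #include <stdint.h>
    #include <stdio.h>

    static int sext(unsigned v, unsigned sign_bit)  /* sign-extend at bit */
    {
            unsigned m = 1u << sign_bit;
            return (int)((v ^ m) - m);
    }

    int main(void)
    {
            uint16_t raw = 0xD300;            /* hypothetical sample */
            int exp = sext(raw >> 11, 4);     /* bits 15:11 -> -6 */
            int man = sext(raw & 0x7ff, 10);  /* bits 10:0  -> 768 */
            printf("VOUT = %d x 2^%d\n", man, exp); /* 768 * 2^-6 = 12 V */
            return 0;
    }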
-1
drivers/iio/accel/Kconfig
··· 229 229 config HID_SENSOR_ACCEL_3D 230 230 depends on HID_SENSOR_HUB 231 231 select IIO_BUFFER 232 - select IIO_TRIGGERED_BUFFER 233 232 select HID_SENSOR_IIO_COMMON 234 233 select HID_SENSOR_IIO_TRIGGER 235 234 tristate "HID Accelerometers 3D"
+1
drivers/iio/common/hid-sensors/Kconfig
··· 19 19 tristate "Common module (trigger) for all HID Sensor IIO drivers" 20 20 depends on HID_SENSOR_HUB && HID_SENSOR_IIO_COMMON && IIO_BUFFER 21 21 select IIO_TRIGGER 22 + select IIO_TRIGGERED_BUFFER 22 23 help 23 24 Say yes here to build trigger support for HID sensors. 24 25 Triggers will be send if all requested attributes were read.
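With IIO_TRIGGERED_BUFFER now selected centrally by the HID sensor trigger module in the hunk above, the per-driver "select IIO_TRIGGERED_BUFFER" lines become redundant; that is why the accel hunk before this one and the gyro, humidity, light, magnetometer, orientation, pressure and temperature Kconfig hunks below each drop exactly that one line.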
-1
drivers/iio/gyro/Kconfig
··· 111 111 config HID_SENSOR_GYRO_3D 112 112 depends on HID_SENSOR_HUB 113 113 select IIO_BUFFER 114 - select IIO_TRIGGERED_BUFFER 115 114 select HID_SENSOR_IIO_COMMON 116 115 select HID_SENSOR_IIO_TRIGGER 117 116 tristate "HID Gyroscope 3D"
+11 -2
drivers/iio/gyro/mpu3050-core.c
··· 272 272 case IIO_CHAN_INFO_OFFSET: 273 273 switch (chan->type) { 274 274 case IIO_TEMP: 275 - /* The temperature scaling is (x+23000)/280 Celsius */ 275 + /* 276 + * The temperature scaling is (x+23000)/280 Celsius 277 + * for the "best fit straight line" temperature range 278 + * of -30C..85C. The 23000 includes room temperature 279 + * offset of +35C, 280 is the precision scale and x is 280 + * the 16-bit signed integer reported by hardware. 281 + * 282 + * Temperature value itself represents temperature of 283 + * the sensor die. 284 + */ 276 285 *val = 23000; 277 286 return IIO_VAL_INT; 278 287 default: ··· 338 329 goto out_read_raw_unlock; 339 330 } 340 331 341 - *val = be16_to_cpu(raw_val); 332 + *val = (s16)be16_to_cpu(raw_val); 342 333 ret = IIO_VAL_INT; 343 334 344 335 goto out_read_raw_unlock;
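Two fixes sit in the mpu3050 hunk above: the expanded comment documents the die-temperature scaling T = (x + 23000) / 280 degrees Celsius, and the (s16) cast makes negative raw samples come out negative, since be16_to_cpu() yields an unsigned 16-bit value. A standalone check with a made-up raw sample:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint16_t raw = 0xDCD8;     /* hypothetical byte-swapped sample */
            int16_t x = (int16_t)raw;  /* the fix: -9000, not 56536 */

            printf("temp = %d C\n", (x + 23000) / 280);  /* 50 C */
            return 0;
    }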
-1
drivers/iio/humidity/Kconfig
··· 52 52 tristate "HID Environmental humidity sensor" 53 53 depends on HID_SENSOR_HUB 54 54 select IIO_BUFFER 55 - select IIO_TRIGGERED_BUFFER 56 55 select HID_SENSOR_IIO_COMMON 57 56 select HID_SENSOR_IIO_TRIGGER 58 57 help
+1 -8
drivers/iio/industrialio-core.c
··· 1778 1778 if (!indio_dev->info) 1779 1779 goto out_unlock; 1780 1780 1781 - ret = -EINVAL; 1782 1781 list_for_each_entry(h, &iio_dev_opaque->ioctl_handlers, entry) { 1783 1782 ret = h->ioctl(indio_dev, filp, cmd, arg); 1784 1783 if (ret != IIO_IOCTL_UNHANDLED) ··· 1785 1786 } 1786 1787 1787 1788 if (ret == IIO_IOCTL_UNHANDLED) 1788 - ret = -EINVAL; 1789 + ret = -ENODEV; 1789 1790 1790 1791 out_unlock: 1791 1792 mutex_unlock(&indio_dev->info_exist_lock); ··· 1925 1926 **/ 1926 1927 void iio_device_unregister(struct iio_dev *indio_dev) 1927 1928 { 1928 - struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); 1929 - struct iio_ioctl_handler *h, *t; 1930 - 1931 1929 cdev_device_del(&indio_dev->chrdev, &indio_dev->dev); 1932 1930 1933 1931 mutex_lock(&indio_dev->info_exist_lock); ··· 1934 1938 iio_disable_all_buffers(indio_dev); 1935 1939 1936 1940 indio_dev->info = NULL; 1937 - 1938 - list_for_each_entry_safe(h, t, &iio_dev_opaque->ioctl_handlers, entry) 1939 - list_del(&h->entry); 1940 1941 1941 1942 iio_device_wakeup_eventset(indio_dev); 1942 1943 iio_buffer_wakeup_poll(indio_dev);
-2
drivers/iio/light/Kconfig
··· 256 256 config HID_SENSOR_ALS 257 257 depends on HID_SENSOR_HUB 258 258 select IIO_BUFFER 259 - select IIO_TRIGGERED_BUFFER 260 259 select HID_SENSOR_IIO_COMMON 261 260 select HID_SENSOR_IIO_TRIGGER 262 261 tristate "HID ALS" ··· 269 270 config HID_SENSOR_PROX 270 271 depends on HID_SENSOR_HUB 271 272 select IIO_BUFFER 272 - select IIO_TRIGGERED_BUFFER 273 273 select HID_SENSOR_IIO_COMMON 274 274 select HID_SENSOR_IIO_TRIGGER 275 275 tristate "HID PROX"
+3 -2
drivers/iio/light/gp2ap002.c
··· 582 582 "gp2ap002", indio_dev); 583 583 if (ret) { 584 584 dev_err(dev, "unable to request IRQ\n"); 585 - goto out_disable_vio; 585 + goto out_put_pm; 586 586 } 587 587 gp2ap002->irq = client->irq; 588 588 ··· 612 612 613 613 return 0; 614 614 615 - out_disable_pm: 615 + out_put_pm: 616 616 pm_runtime_put_noidle(dev); 617 + out_disable_pm: 617 618 pm_runtime_disable(dev); 618 619 out_disable_vio: 619 620 regulator_disable(gp2ap002->vio);
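The gp2ap002 hunk above makes the error labels unwind in strict reverse order of acquisition: a failed request_threaded_irq() after the runtime-PM reference was taken must first drop that reference (the new out_put_pm label), then disable runtime PM, then the regulator. A runnable analogue of the ordering rule, with malloc() standing in for the real resources:

    #include <stdlib.h>

    static int setup(int fail_step3)
    {
            void *a, *b;

            a = malloc(16);              /* step 1, e.g. regulator on */
            if (!a)
                    return -1;
            b = malloc(16);              /* step 2, e.g. runtime-PM get */
            if (!b)
                    goto out_free_a;
            if (fail_step3)              /* step 3, e.g. request_irq */
                    goto out_free_b;     /* undo step 2 first, then 1 */
            free(b);                     /* kept in a real driver; freed */
            free(a);                     /* here so the demo leaks nothing */
            return 0;

    out_free_b:
            free(b);
    out_free_a:
            free(a);
            return -1;
    }

    int main(void) { return setup(1) != -1; }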
+8
drivers/iio/light/tsl2583.c
··· 341 341 return lux_val; 342 342 } 343 343 344 + /* Avoid division by zero of lux_value later on */ 345 + if (lux_val == 0) { 346 + dev_err(&chip->client->dev, 347 + "%s: lux_val of 0 will produce out of range trim_value\n", 348 + __func__); 349 + return -ENODATA; 350 + } 351 + 344 352 gain_trim_val = (unsigned int)(((chip->als_settings.als_cal_target) 345 353 * chip->als_settings.als_gain_trim) / lux_val); 346 354 if ((gain_trim_val < 250) || (gain_trim_val > 4000)) {
-1
drivers/iio/magnetometer/Kconfig
··· 95 95 config HID_SENSOR_MAGNETOMETER_3D 96 96 depends on HID_SENSOR_HUB 97 97 select IIO_BUFFER 98 - select IIO_TRIGGERED_BUFFER 99 98 select HID_SENSOR_IIO_COMMON 100 99 select HID_SENSOR_IIO_TRIGGER 101 100 tristate "HID Magenetometer 3D"
-2
drivers/iio/orientation/Kconfig
··· 9 9 config HID_SENSOR_INCLINOMETER_3D 10 10 depends on HID_SENSOR_HUB 11 11 select IIO_BUFFER 12 - select IIO_TRIGGERED_BUFFER 13 12 select HID_SENSOR_IIO_COMMON 14 13 select HID_SENSOR_IIO_TRIGGER 15 14 tristate "HID Inclinometer 3D" ··· 19 20 config HID_SENSOR_DEVICE_ROTATION 20 21 depends on HID_SENSOR_HUB 21 22 select IIO_BUFFER 22 - select IIO_TRIGGERED_BUFFER 23 23 select HID_SENSOR_IIO_COMMON 24 24 select HID_SENSOR_IIO_TRIGGER 25 25 tristate "HID Device Rotation"
-1
drivers/iio/pressure/Kconfig
··· 79 79 config HID_SENSOR_PRESS 80 80 depends on HID_SENSOR_HUB 81 81 select IIO_BUFFER 82 - select IIO_TRIGGERED_BUFFER 83 82 select HID_SENSOR_IIO_COMMON 84 83 select HID_SENSOR_IIO_TRIGGER 85 84 tristate "HID PRESS"
+1
drivers/iio/proximity/pulsedlight-lidar-lite-v2.c
··· 160 160 ret = lidar_write_control(data, LIDAR_REG_CONTROL_ACQUIRE); 161 161 if (ret < 0) { 162 162 dev_err(&client->dev, "cannot send start measurement command"); 163 + pm_runtime_put_noidle(&client->dev); 163 164 return ret; 164 165 } 165 166
-1
drivers/iio/temperature/Kconfig
··· 45 45 tristate "HID Environmental temperature sensor" 46 46 depends on HID_SENSOR_HUB 47 47 select IIO_BUFFER 48 - select IIO_TRIGGERED_BUFFER 49 48 select HID_SENSOR_IIO_COMMON 50 49 select HID_SENSOR_IIO_TRIGGER 51 50 help
+8 -9
drivers/isdn/hardware/mISDN/hfcsusb.c
··· 46 46 static void hfcsusb_stop_endpoint(struct hfcsusb *hw, int channel); 47 47 static int hfcsusb_setup_bch(struct bchannel *bch, int protocol); 48 48 static void deactivate_bchannel(struct bchannel *bch); 49 - static void hfcsusb_ph_info(struct hfcsusb *hw); 49 + static int hfcsusb_ph_info(struct hfcsusb *hw); 50 50 51 51 /* start next background transfer for control channel */ 52 52 static void ··· 241 241 * send full D/B channel status information 242 242 * as MPH_INFORMATION_IND 243 243 */ 244 - static void 244 + static int 245 245 hfcsusb_ph_info(struct hfcsusb *hw) 246 246 { 247 247 struct ph_info *phi; ··· 250 250 251 251 phi = kzalloc(struct_size(phi, bch, dch->dev.nrbchan), GFP_ATOMIC); 252 252 if (!phi) 253 - return; 253 + return -ENOMEM; 254 254 255 255 phi->dch.ch.protocol = hw->protocol; 256 256 phi->dch.ch.Flags = dch->Flags; ··· 263 263 _queue_data(&dch->dev.D, MPH_INFORMATION_IND, MISDN_ID_ANY, 264 264 struct_size(phi, bch, dch->dev.nrbchan), phi, GFP_ATOMIC); 265 265 kfree(phi); 266 + 267 + return 0; 266 268 } 267 269 268 270 /* ··· 349 347 ret = l1_event(dch->l1, hh->prim); 350 348 break; 351 349 case MPH_INFORMATION_REQ: 352 - hfcsusb_ph_info(hw); 353 - ret = 0; 350 + ret = hfcsusb_ph_info(hw); 354 351 break; 355 352 } 356 353 ··· 404 403 hw->name, __func__, cmd); 405 404 return -1; 406 405 } 407 - hfcsusb_ph_info(hw); 408 - return 0; 406 + return hfcsusb_ph_info(hw); 409 407 } 410 408 411 409 static int ··· 746 746 handle_led(hw, (bch->nr == 1) ? LED_B1_OFF : 747 747 LED_B2_OFF); 748 748 } 749 - hfcsusb_ph_info(hw); 750 - return 0; 749 + return hfcsusb_ph_info(hw); 751 750 } 752 751 753 752 static void
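In the hfcsusb hunk above, hfcsusb_ph_info() allocates with GFP_ATOMIC and used to swallow a failed allocation; giving it an int return lets every caller forward -ENOMEM instead of claiming success. The reduced shape of the change, as a kernel-style sketch with hypothetical types:

    static int ph_info(struct hw *hw)
    {
            struct ph_info *phi = kzalloc(sizeof(*phi), GFP_ATOMIC);

            if (!phi)
                    return -ENOMEM;   /* was: a silent void return */
            /* ... fill and queue the status indication ... */
            kfree(phi);
            return 0;
    }

    /* callers become "return ph_info(hw);" rather than calling it
     * and then returning 0 unconditionally. */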
+13 -8
drivers/isdn/hardware/mISDN/mISDNinfineon.c
··· 630 630 release_io(struct inf_hw *hw) 631 631 { 632 632 if (hw->cfg.mode) { 633 - if (hw->cfg.p) { 633 + if (hw->cfg.mode == AM_MEMIO) { 634 634 release_mem_region(hw->cfg.start, hw->cfg.size); 635 - iounmap(hw->cfg.p); 635 + if (hw->cfg.p) 636 + iounmap(hw->cfg.p); 636 637 } else 637 638 release_region(hw->cfg.start, hw->cfg.size); 638 639 hw->cfg.mode = AM_NONE; 639 640 } 640 641 if (hw->addr.mode) { 641 - if (hw->addr.p) { 642 + if (hw->addr.mode == AM_MEMIO) { 642 643 release_mem_region(hw->addr.start, hw->addr.size); 643 - iounmap(hw->addr.p); 644 + if (hw->addr.p) 645 + iounmap(hw->addr.p); 644 646 } else 645 647 release_region(hw->addr.start, hw->addr.size); 646 648 hw->addr.mode = AM_NONE; ··· 672 670 (ulong)hw->cfg.start, (ulong)hw->cfg.size); 673 671 return err; 674 672 } 675 - if (hw->ci->cfg_mode == AM_MEMIO) 676 - hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size); 677 673 hw->cfg.mode = hw->ci->cfg_mode; 674 + if (hw->ci->cfg_mode == AM_MEMIO) { 675 + hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size); 676 + if (!hw->cfg.p) 677 + return -ENOMEM; 678 + } 678 679 if (debug & DEBUG_HW) 679 680 pr_notice("%s: IO cfg %lx (%lu bytes) mode%d\n", 680 681 hw->name, (ulong)hw->cfg.start, ··· 702 697 (ulong)hw->addr.start, (ulong)hw->addr.size); 703 698 return err; 704 699 } 700 + hw->addr.mode = hw->ci->addr_mode; 705 701 if (hw->ci->addr_mode == AM_MEMIO) { 706 702 hw->addr.p = ioremap(hw->addr.start, hw->addr.size); 707 - if (unlikely(!hw->addr.p)) 703 + if (!hw->addr.p) 708 704 return -ENOMEM; 709 705 } 710 - hw->addr.mode = hw->ci->addr_mode; 711 706 if (debug & DEBUG_HW) 712 707 pr_notice("%s: IO addr %lx (%lu bytes) mode%d\n", 713 708 hw->name, (ulong)hw->addr.start,
+1 -1
drivers/leds/leds-lp5523.c
··· 307 307 usleep_range(3000, 6000); 308 308 ret = lp55xx_read(chip, LP5523_REG_STATUS, &status); 309 309 if (ret) 310 - return ret; 310 + goto out; 311 311 status &= LP5523_ENG_STATUS_MASK; 312 312 313 313 if (status != LP5523_ENG_STATUS_MASK) {
+1 -1
drivers/media/dvb-frontends/sp8870.c
··· 281 281 282 282 // read status reg in order to clear pending irqs 283 283 err = sp8870_readreg(state, 0x200); 284 - if (err) 284 + if (err < 0) 285 285 return err; 286 286 287 287 // system controller start
-1
drivers/media/platform/rcar_drif.c
··· 915 915 { 916 916 struct rcar_drif_sdr *sdr = video_drvdata(file); 917 917 918 - memset(f->fmt.sdr.reserved, 0, sizeof(f->fmt.sdr.reserved)); 919 918 f->fmt.sdr.pixelformat = sdr->fmt->pixelformat; 920 919 f->fmt.sdr.buffersize = sdr->fmt->buffersize; 921 920
+1 -5
drivers/media/usb/gspca/cpia1.c
··· 1424 1424 { 1425 1425 struct sd *sd = (struct sd *) gspca_dev; 1426 1426 struct cam *cam; 1427 - int ret; 1428 1427 1429 1428 sd->mainsFreq = FREQ_DEF == V4L2_CID_POWER_LINE_FREQUENCY_60HZ; 1430 1429 reset_camera_params(gspca_dev); ··· 1435 1436 cam->cam_mode = mode; 1436 1437 cam->nmodes = ARRAY_SIZE(mode); 1437 1438 1438 - ret = goto_low_power(gspca_dev); 1439 - if (ret) 1440 - gspca_err(gspca_dev, "Cannot go to low power mode: %d\n", 1441 - ret); 1439 + goto_low_power(gspca_dev); 1442 1440 /* Check the firmware version. */ 1443 1441 sd->params.version.firmwareVersion = 0; 1444 1442 get_version_information(gspca_dev);
+8 -8
drivers/media/usb/gspca/m5602/m5602_mt9m111.c
··· 195 195 int mt9m111_probe(struct sd *sd) 196 196 { 197 197 u8 data[2] = {0x00, 0x00}; 198 - int i, rc = 0; 198 + int i, err; 199 199 struct gspca_dev *gspca_dev = (struct gspca_dev *)sd; 200 200 201 201 if (force_sensor) { ··· 213 213 /* Do the preinit */ 214 214 for (i = 0; i < ARRAY_SIZE(preinit_mt9m111); i++) { 215 215 if (preinit_mt9m111[i][0] == BRIDGE) { 216 - rc |= m5602_write_bridge(sd, 217 - preinit_mt9m111[i][1], 218 - preinit_mt9m111[i][2]); 216 + err = m5602_write_bridge(sd, 217 + preinit_mt9m111[i][1], 218 + preinit_mt9m111[i][2]); 219 219 } else { 220 220 data[0] = preinit_mt9m111[i][2]; 221 221 data[1] = preinit_mt9m111[i][3]; 222 - rc |= m5602_write_sensor(sd, 223 - preinit_mt9m111[i][1], data, 2); 222 + err = m5602_write_sensor(sd, 223 + preinit_mt9m111[i][1], data, 2); 224 224 } 225 + if (err < 0) 226 + return err; 225 227 } 226 - if (rc < 0) 227 - return rc; 228 228 229 229 if (m5602_read_sensor(sd, MT9M111_SC_CHIPVER, data, 2)) 230 230 return -ENODEV;
+7 -7
drivers/media/usb/gspca/m5602/m5602_po1030.c
··· 154 154 155 155 int po1030_probe(struct sd *sd) 156 156 { 157 - int rc = 0; 158 157 u8 dev_id_h = 0, i; 158 + int err; 159 159 struct gspca_dev *gspca_dev = (struct gspca_dev *)sd; 160 160 161 161 if (force_sensor) { ··· 174 174 for (i = 0; i < ARRAY_SIZE(preinit_po1030); i++) { 175 175 u8 data = preinit_po1030[i][2]; 176 176 if (preinit_po1030[i][0] == SENSOR) 177 - rc |= m5602_write_sensor(sd, 178 - preinit_po1030[i][1], &data, 1); 177 + err = m5602_write_sensor(sd, preinit_po1030[i][1], 178 + &data, 1); 179 179 else 180 - rc |= m5602_write_bridge(sd, preinit_po1030[i][1], 181 - data); 180 + err = m5602_write_bridge(sd, preinit_po1030[i][1], 181 + data); 182 + if (err < 0) 183 + return err; 182 184 } 183 - if (rc < 0) 184 - return rc; 185 185 186 186 if (m5602_read_sensor(sd, PO1030_DEVID_H, &dev_id_h, 1)) 187 187 return -ENODEV;
+4 -2
drivers/misc/eeprom/at24.c
··· 763 763 at24->nvmem = devm_nvmem_register(dev, &nvmem_config); 764 764 if (IS_ERR(at24->nvmem)) { 765 765 pm_runtime_disable(dev); 766 - regulator_disable(at24->vcc_reg); 766 + if (!pm_runtime_status_suspended(dev)) 767 + regulator_disable(at24->vcc_reg); 767 768 return PTR_ERR(at24->nvmem); 768 769 } 769 770 ··· 775 774 err = at24_read(at24, 0, &test_byte, 1); 776 775 if (err) { 777 776 pm_runtime_disable(dev); 778 - regulator_disable(at24->vcc_reg); 777 + if (!pm_runtime_status_suspended(dev)) 778 + regulator_disable(at24->vcc_reg); 779 779 return -ENODEV; 780 780 } 781 781
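The at24 guard above exists because runtime PM may already have suspended the EEPROM by the time probe fails; in that case the driver's runtime-suspend callback has switched the vcc regulator off, and disabling it again in the error path would unbalance the regulator enable count. Hence:

    pm_runtime_disable(dev);
    if (!pm_runtime_status_suspended(dev))
            regulator_disable(at24->vcc_reg);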
+1 -1
drivers/misc/habanalabs/common/command_submission.c
··· 2017 2017 if (completion_value >= target_value) { 2018 2018 *status = CS_WAIT_STATUS_COMPLETED; 2019 2019 } else { 2020 - timeout -= jiffies_to_usecs(completion_rc); 2020 + timeout = completion_rc; 2021 2021 goto wait_again; 2022 2022 } 2023 2023 } else {
+35 -26
drivers/misc/habanalabs/common/firmware_if.c
··· 362 362 } 363 363 364 364 if (err_val & CPU_BOOT_ERR0_SECURITY_NOT_RDY) { 365 - dev_warn(hdev->dev, 365 + dev_err(hdev->dev, 366 366 "Device boot warning - security not ready\n"); 367 - /* This is a warning so we don't want it to disable the 368 - * device 369 - */ 370 - err_val &= ~CPU_BOOT_ERR0_SECURITY_NOT_RDY; 367 + err_exists = true; 371 368 } 372 369 373 370 if (err_val & CPU_BOOT_ERR0_SECURITY_FAIL) { ··· 400 403 err_exists = true; 401 404 } 402 405 403 - if (err_exists) 406 + if (err_exists && ((err_val & ~CPU_BOOT_ERR0_ENABLED) & 407 + lower_32_bits(hdev->boot_error_status_mask))) 404 408 return -EIO; 405 409 406 410 return 0; ··· 659 661 return rc; 660 662 } 661 663 662 - int get_used_pll_index(struct hl_device *hdev, enum pll_index input_pll_index, 664 + int get_used_pll_index(struct hl_device *hdev, u32 input_pll_index, 663 665 enum pll_index *pll_index) 664 666 { 665 667 struct asic_fixed_properties *prop = &hdev->asic_prop; 666 668 u8 pll_byte, pll_bit_off; 667 669 bool dynamic_pll; 668 - 669 - if (input_pll_index >= PLL_MAX) { 670 - dev_err(hdev->dev, "PLL index %d is out of range\n", 671 - input_pll_index); 672 - return -EINVAL; 673 - } 670 + int fw_pll_idx; 674 671 675 672 dynamic_pll = prop->fw_security_status_valid && 676 673 (prop->fw_app_security_map & CPU_BOOT_DEV_STS0_DYN_PLL_EN); ··· 673 680 if (!dynamic_pll) { 674 681 /* 675 682 * in case we are working with legacy FW (each asic has unique 676 - * PLL numbering) extract the legacy numbering 683 + * PLL numbering) use the driver based index as they are 684 + * aligned with fw legacy numbering 677 685 */ 678 - *pll_index = hdev->legacy_pll_map[input_pll_index]; 686 + *pll_index = input_pll_index; 679 687 return 0; 680 688 } 681 689 682 - /* PLL map is a u8 array */ 683 - pll_byte = prop->cpucp_info.pll_map[input_pll_index >> 3]; 684 - pll_bit_off = input_pll_index & 0x7; 685 - 686 - if (!(pll_byte & BIT(pll_bit_off))) { 687 - dev_err(hdev->dev, "PLL index %d is not supported\n", 688 - input_pll_index); 690 + /* retrieve a FW compatible PLL index based on 691 + * ASIC specific user request 692 + */ 693 + fw_pll_idx = hdev->asic_funcs->map_pll_idx_to_fw_idx(input_pll_index); 694 + if (fw_pll_idx < 0) { 695 + dev_err(hdev->dev, "Invalid PLL index (%u) error %d\n", 696 + input_pll_index, fw_pll_idx); 689 697 return -EINVAL; 690 698 } 691 699 692 - *pll_index = input_pll_index; 700 + /* PLL map is a u8 array */ 701 + pll_byte = prop->cpucp_info.pll_map[fw_pll_idx >> 3]; 702 + pll_bit_off = fw_pll_idx & 0x7; 703 + 704 + if (!(pll_byte & BIT(pll_bit_off))) { 705 + dev_err(hdev->dev, "PLL index %d is not supported\n", 706 + fw_pll_idx); 707 + return -EINVAL; 708 + } 709 + 710 + *pll_index = fw_pll_idx; 693 711 694 712 return 0; 695 713 } 696 714 697 - int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, enum pll_index pll_index, 715 + int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, u32 pll_index, 698 716 u16 *pll_freq_arr) 699 717 { 700 718 struct cpucp_packet pkt; ··· 848 844 if (rc) { 849 845 dev_err(hdev->dev, "Failed to read preboot version\n"); 850 846 detect_cpu_boot_status(hdev, status); 851 - fw_read_errors(hdev, boot_err0_reg, 852 - cpu_security_boot_status_reg); 847 + 848 + /* If we read all FF, then something is totally wrong, no point 849 + * of reading specific errors 850 + */ 851 + if (status != -1) 852 + fw_read_errors(hdev, boot_err0_reg, 853 + cpu_security_boot_status_reg); 853 854 return -EIO; 854 855 } 855 856
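The firmware_if rework above replaces the per-ASIC legacy_pll_map table with an asic_funcs->map_pll_idx_to_fw_idx() hook, and the range check moves into that hook (hence the dropped PLL_MAX test, with -EINVAL now coming from the mapping). The firmware-side capability test is a plain bitmap probe over a u8 array; a standalone sketch with a made-up map:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint8_t pll_map[8] = { 0x2b }; /* hypothetical bitmap from f/w */
            unsigned fw_idx = 3;           /* from map_pll_idx_to_fw_idx() */

            if (pll_map[fw_idx >> 3] & (1u << (fw_idx & 0x7)))
                    printf("PLL %u supported\n", fw_idx);
            else
                    printf("PLL %u not supported\n", fw_idx);
            return 0;
    }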
+15 -8
drivers/misc/habanalabs/common/habanalabs.h
··· 930 930 * driver is ready to receive asynchronous events. This 931 931 * function should be called during the first init and 932 932 * after every hard-reset of the device 933 + * @get_msi_info: Retrieve asic-specific MSI ID of the f/w async event 934 + * @map_pll_idx_to_fw_idx: convert driver specific per asic PLL index to 935 + * generic f/w compatible PLL Indexes 933 936 */ 934 937 struct hl_asic_funcs { 935 938 int (*early_init)(struct hl_device *hdev); ··· 1057 1054 u32 block_id, u32 block_size); 1058 1055 void (*enable_events_from_fw)(struct hl_device *hdev); 1059 1056 void (*get_msi_info)(u32 *table); 1057 + int (*map_pll_idx_to_fw_idx)(u32 pll_idx); 1060 1058 }; 1061 1059 1062 1060 ··· 1954 1950 * @aggregated_cs_counters: aggregated cs counters among all contexts 1955 1951 * @mmu_priv: device-specific MMU data. 1956 1952 * @mmu_func: device-related MMU functions. 1957 - * @legacy_pll_map: map holding map between dynamic (common) PLL indexes and 1958 - * static (asic specific) PLL indexes. 1959 1953 * @dram_used_mem: current DRAM memory consumption. 1960 1954 * @timeout_jiffies: device CS timeout value. 1961 1955 * @max_power: the max power of the device, as configured by the sysadmin. This ··· 1962 1960 * @clock_gating_mask: is clock gating enabled. bitmask that represents the 1963 1961 * different engines. See debugfs-driver-habanalabs for 1964 1962 * details. 1963 + * @boot_error_status_mask: contains a mask of the device boot error status. 1964 + * Each bit represents a different error, according to 1965 + * the defines in hl_boot_if.h. If the bit is cleared, 1966 + * the error will be ignored by the driver during 1967 + * device initialization. Mainly used to debug and 1968 + * workaround firmware bugs 1965 1969 * @in_reset: is device in reset flow. 1966 1970 * @curr_pll_profile: current PLL profile. 1967 1971 * @card_type: Various ASICs have several card types. 
This indicates the card ··· 2079 2071 struct hl_mmu_priv mmu_priv; 2080 2072 struct hl_mmu_funcs mmu_func[MMU_NUM_PGT_LOCATIONS]; 2081 2073 2082 - enum pll_index *legacy_pll_map; 2083 - 2084 2074 atomic64_t dram_used_mem; 2085 2075 u64 timeout_jiffies; 2086 2076 u64 max_power; 2087 2077 u64 clock_gating_mask; 2078 + u64 boot_error_status_mask; 2088 2079 atomic_t in_reset; 2089 2080 enum hl_pll_frequency curr_pll_profile; 2090 2081 enum cpucp_card_types card_type; ··· 2394 2387 struct hl_info_pci_counters *counters); 2395 2388 int hl_fw_cpucp_total_energy_get(struct hl_device *hdev, 2396 2389 u64 *total_energy); 2397 - int get_used_pll_index(struct hl_device *hdev, enum pll_index input_pll_index, 2390 + int get_used_pll_index(struct hl_device *hdev, u32 input_pll_index, 2398 2391 enum pll_index *pll_index); 2399 - int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, enum pll_index pll_index, 2392 + int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, u32 pll_index, 2400 2393 u16 *pll_freq_arr); 2401 2394 int hl_fw_cpucp_power_get(struct hl_device *hdev, u64 *power); 2402 2395 int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg, ··· 2418 2411 int hl_pci_init(struct hl_device *hdev); 2419 2412 void hl_pci_fini(struct hl_device *hdev); 2420 2413 2421 - long hl_get_frequency(struct hl_device *hdev, enum pll_index pll_index, 2414 + long hl_get_frequency(struct hl_device *hdev, u32 pll_index, 2422 2415 bool curr); 2423 - void hl_set_frequency(struct hl_device *hdev, enum pll_index pll_index, 2416 + void hl_set_frequency(struct hl_device *hdev, u32 pll_index, 2424 2417 u64 freq); 2425 2418 int hl_get_temperature(struct hl_device *hdev, 2426 2419 int sensor_index, u32 attr, long *value);
+7
drivers/misc/habanalabs/common/habanalabs_drv.c
··· 30 30 static int timeout_locked = 30; 31 31 static int reset_on_lockup = 1; 32 32 static int memory_scrub = 1; 33 + static ulong boot_error_status_mask = ULONG_MAX; 33 34 34 35 module_param(timeout_locked, int, 0444); 35 36 MODULE_PARM_DESC(timeout_locked, ··· 43 42 module_param(memory_scrub, int, 0444); 44 43 MODULE_PARM_DESC(memory_scrub, 45 44 "Scrub device memory in various states (0 = no, 1 = yes, default yes)"); 45 + 46 + module_param(boot_error_status_mask, ulong, 0444); 47 + MODULE_PARM_DESC(boot_error_status_mask, 48 + "Mask of the error status during device CPU boot (If bitX is cleared then error X is masked. Default all 1's)"); 46 49 47 50 #define PCI_VENDOR_ID_HABANALABS 0x1da3 48 51 ··· 324 319 hdev->major = hl_major; 325 320 hdev->reset_on_lockup = reset_on_lockup; 326 321 hdev->memory_scrub = memory_scrub; 322 + hdev->boot_error_status_mask = boot_error_status_mask; 323 + 327 324 hdev->pldm = 0; 328 325 329 326 set_driver_behavior_per_device(hdev);
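The new boot_error_status_mask module parameter above feeds hdev->boot_error_status_mask, which fw_read_errors() now applies before deciding whether a boot error is fatal; the all-ones default preserves the old behaviour, and clearing a bit demotes that error to a log-only event, which is useful when working around firmware bugs. The gating condition reduced to a standalone predicate (CPU_BOOT_ERR0_ENABLED is the register's valid bit, assumed here to be bit 31 as in hl_boot_if.h):

    #include <stdbool.h>
    #include <stdint.h>

    #define CPU_BOOT_ERR0_ENABLED (1u << 31)

    static bool boot_should_fail(uint32_t err_val, uint64_t mask)
    {
            return ((err_val & ~CPU_BOOT_ERR0_ENABLED) & (uint32_t)mask) != 0;
    }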
+2 -2
drivers/misc/habanalabs/common/sysfs.c
··· 9 9 10 10 #include <linux/pci.h> 11 11 12 - long hl_get_frequency(struct hl_device *hdev, enum pll_index pll_index, 12 + long hl_get_frequency(struct hl_device *hdev, u32 pll_index, 13 13 bool curr) 14 14 { 15 15 struct cpucp_packet pkt; ··· 44 44 return (long) result; 45 45 } 46 46 47 - void hl_set_frequency(struct hl_device *hdev, enum pll_index pll_index, 47 + void hl_set_frequency(struct hl_device *hdev, u32 pll_index, 48 48 u64 freq) 49 49 { 50 50 struct cpucp_packet pkt;
+23 -36
drivers/misc/habanalabs/gaudi/gaudi.c
··· 105 105 106 106 #define GAUDI_PLL_MAX 10 107 107 108 - /* 109 - * this enum kept here for compatibility with old FW (in which each asic has 110 - * unique PLL numbering 111 - */ 112 - enum gaudi_pll_index { 113 - GAUDI_CPU_PLL = 0, 114 - GAUDI_PCI_PLL, 115 - GAUDI_SRAM_PLL, 116 - GAUDI_HBM_PLL, 117 - GAUDI_NIC_PLL, 118 - GAUDI_DMA_PLL, 119 - GAUDI_MESH_PLL, 120 - GAUDI_MME_PLL, 121 - GAUDI_TPC_PLL, 122 - GAUDI_IF_PLL, 123 - }; 124 - 125 - static enum pll_index gaudi_pll_map[PLL_MAX] = { 126 - [CPU_PLL] = GAUDI_CPU_PLL, 127 - [PCI_PLL] = GAUDI_PCI_PLL, 128 - [SRAM_PLL] = GAUDI_SRAM_PLL, 129 - [HBM_PLL] = GAUDI_HBM_PLL, 130 - [NIC_PLL] = GAUDI_NIC_PLL, 131 - [DMA_PLL] = GAUDI_DMA_PLL, 132 - [MESH_PLL] = GAUDI_MESH_PLL, 133 - [MME_PLL] = GAUDI_MME_PLL, 134 - [TPC_PLL] = GAUDI_TPC_PLL, 135 - [IF_PLL] = GAUDI_IF_PLL, 136 - }; 137 - 138 108 static const char gaudi_irq_name[GAUDI_MSI_ENTRIES][GAUDI_MAX_STRING_LEN] = { 139 109 "gaudi cq 0_0", "gaudi cq 0_1", "gaudi cq 0_2", "gaudi cq 0_3", 140 110 "gaudi cq 1_0", "gaudi cq 1_1", "gaudi cq 1_2", "gaudi cq 1_3", ··· 780 810 freq = 0; 781 811 } 782 812 } else { 783 - rc = hl_fw_cpucp_pll_info_get(hdev, CPU_PLL, pll_freq_arr); 813 + rc = hl_fw_cpucp_pll_info_get(hdev, HL_GAUDI_CPU_PLL, pll_freq_arr); 784 814 785 815 if (rc) 786 816 return rc; ··· 1621 1651 gaudi->max_freq_value = GAUDI_MAX_CLK_FREQ; 1622 1652 1623 1653 hdev->asic_specific = gaudi; 1624 - 1625 - /* store legacy PLL map */ 1626 - hdev->legacy_pll_map = gaudi_pll_map; 1627 1654 1628 1655 /* Create DMA pool for small allocations */ 1629 1656 hdev->dma_pool = dma_pool_create(dev_name(hdev->dev), ··· 5579 5612 struct hl_cs_job *job; 5580 5613 u32 cb_size, ctl, err_cause; 5581 5614 struct hl_cb *cb; 5615 + u64 id; 5582 5616 int rc; 5583 5617 5584 5618 cb = hl_cb_kernel_create(hdev, PAGE_SIZE, false); ··· 5646 5678 } 5647 5679 5648 5680 release_cb: 5681 + id = cb->id; 5649 5682 hl_cb_put(cb); 5650 - hl_cb_destroy(hdev, &hdev->kernel_cb_mgr, cb->id << PAGE_SHIFT); 5683 + hl_cb_destroy(hdev, &hdev->kernel_cb_mgr, id << PAGE_SHIFT); 5651 5684 5652 5685 return rc; 5653 5686 } ··· 8752 8783 WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, GAUDI_EVENT_INTS_REGISTER); 8753 8784 } 8754 8785 8786 + static int gaudi_map_pll_idx_to_fw_idx(u32 pll_idx) 8787 + { 8788 + switch (pll_idx) { 8789 + case HL_GAUDI_CPU_PLL: return CPU_PLL; 8790 + case HL_GAUDI_PCI_PLL: return PCI_PLL; 8791 + case HL_GAUDI_NIC_PLL: return NIC_PLL; 8792 + case HL_GAUDI_DMA_PLL: return DMA_PLL; 8793 + case HL_GAUDI_MESH_PLL: return MESH_PLL; 8794 + case HL_GAUDI_MME_PLL: return MME_PLL; 8795 + case HL_GAUDI_TPC_PLL: return TPC_PLL; 8796 + case HL_GAUDI_IF_PLL: return IF_PLL; 8797 + case HL_GAUDI_SRAM_PLL: return SRAM_PLL; 8798 + case HL_GAUDI_HBM_PLL: return HBM_PLL; 8799 + default: return -EINVAL; 8800 + } 8801 + } 8802 + 8755 8803 static const struct hl_asic_funcs gaudi_funcs = { 8756 8804 .early_init = gaudi_early_init, 8757 8805 .early_fini = gaudi_early_fini, ··· 8852 8866 .ack_protection_bits_errors = gaudi_ack_protection_bits_errors, 8853 8867 .get_hw_block_id = gaudi_get_hw_block_id, 8854 8868 .hw_block_mmap = gaudi_block_mmap, 8855 - .enable_events_from_fw = gaudi_enable_events_from_fw 8869 + .enable_events_from_fw = gaudi_enable_events_from_fw, 8870 + .map_pll_idx_to_fw_idx = gaudi_map_pll_idx_to_fw_idx 8856 8871 }; 8857 8872 8858 8873 /**
+6 -6
drivers/misc/habanalabs/gaudi/gaudi_hwmgr.c
··· 13 13 struct gaudi_device *gaudi = hdev->asic_specific; 14 14 15 15 if (freq == PLL_LAST) 16 - hl_set_frequency(hdev, MME_PLL, gaudi->max_freq_value); 16 + hl_set_frequency(hdev, HL_GAUDI_MME_PLL, gaudi->max_freq_value); 17 17 } 18 18 19 19 int gaudi_get_clk_rate(struct hl_device *hdev, u32 *cur_clk, u32 *max_clk) ··· 23 23 if (!hl_device_operational(hdev, NULL)) 24 24 return -ENODEV; 25 25 26 - value = hl_get_frequency(hdev, MME_PLL, false); 26 + value = hl_get_frequency(hdev, HL_GAUDI_MME_PLL, false); 27 27 28 28 if (value < 0) { 29 29 dev_err(hdev->dev, "Failed to retrieve device max clock %ld\n", ··· 33 33 34 34 *max_clk = (value / 1000 / 1000); 35 35 36 - value = hl_get_frequency(hdev, MME_PLL, true); 36 + value = hl_get_frequency(hdev, HL_GAUDI_MME_PLL, true); 37 37 38 38 if (value < 0) { 39 39 dev_err(hdev->dev, ··· 57 57 if (!hl_device_operational(hdev, NULL)) 58 58 return -ENODEV; 59 59 60 - value = hl_get_frequency(hdev, MME_PLL, false); 60 + value = hl_get_frequency(hdev, HL_GAUDI_MME_PLL, false); 61 61 62 62 gaudi->max_freq_value = value; 63 63 ··· 85 85 86 86 gaudi->max_freq_value = value * 1000 * 1000; 87 87 88 - hl_set_frequency(hdev, MME_PLL, gaudi->max_freq_value); 88 + hl_set_frequency(hdev, HL_GAUDI_MME_PLL, gaudi->max_freq_value); 89 89 90 90 fail: 91 91 return count; ··· 100 100 if (!hl_device_operational(hdev, NULL)) 101 101 return -ENODEV; 102 102 103 - value = hl_get_frequency(hdev, MME_PLL, true); 103 + value = hl_get_frequency(hdev, HL_GAUDI_MME_PLL, true); 104 104 105 105 return sprintf(buf, "%lu\n", (value / 1000 / 1000)); 106 106 }
+18 -29
drivers/misc/habanalabs/goya/goya.c
··· 118 118 #define IS_MME_IDLE(mme_arch_sts) \ 119 119 (((mme_arch_sts) & MME_ARCH_IDLE_MASK) == MME_ARCH_IDLE_MASK) 120 120 121 - /* 122 - * this enum kept here for compatibility with old FW (in which each asic has 123 - * unique PLL numbering 124 - */ 125 - enum goya_pll_index { 126 - GOYA_CPU_PLL = 0, 127 - GOYA_IC_PLL, 128 - GOYA_MC_PLL, 129 - GOYA_MME_PLL, 130 - GOYA_PCI_PLL, 131 - GOYA_EMMC_PLL, 132 - GOYA_TPC_PLL, 133 - }; 134 - 135 - static enum pll_index goya_pll_map[PLL_MAX] = { 136 - [CPU_PLL] = GOYA_CPU_PLL, 137 - [IC_PLL] = GOYA_IC_PLL, 138 - [MC_PLL] = GOYA_MC_PLL, 139 - [MME_PLL] = GOYA_MME_PLL, 140 - [PCI_PLL] = GOYA_PCI_PLL, 141 - [EMMC_PLL] = GOYA_EMMC_PLL, 142 - [TPC_PLL] = GOYA_TPC_PLL, 143 - }; 144 - 145 121 static const char goya_irq_name[GOYA_MSIX_ENTRIES][GOYA_MAX_STRING_LEN] = { 146 122 "goya cq 0", "goya cq 1", "goya cq 2", "goya cq 3", 147 123 "goya cq 4", "goya cpu eq" ··· 751 775 freq = 0; 752 776 } 753 777 } else { 754 - rc = hl_fw_cpucp_pll_info_get(hdev, PCI_PLL, pll_freq_arr); 778 + rc = hl_fw_cpucp_pll_info_get(hdev, HL_GOYA_PCI_PLL, 779 + pll_freq_arr); 755 780 756 781 if (rc) 757 782 return; ··· 873 896 goya->ic_clk = GOYA_PLL_FREQ_LOW; 874 897 875 898 hdev->asic_specific = goya; 876 - 877 - /* store legacy PLL map */ 878 - hdev->legacy_pll_map = goya_pll_map; 879 899 880 900 /* Create DMA pool for small allocations */ 881 901 hdev->dma_pool = dma_pool_create(dev_name(hdev->dev), ··· 5486 5512 GOYA_ASYNC_EVENT_ID_INTS_REGISTER); 5487 5513 } 5488 5514 5515 + static int goya_map_pll_idx_to_fw_idx(u32 pll_idx) 5516 + { 5517 + switch (pll_idx) { 5518 + case HL_GOYA_CPU_PLL: return CPU_PLL; 5519 + case HL_GOYA_PCI_PLL: return PCI_PLL; 5520 + case HL_GOYA_MME_PLL: return MME_PLL; 5521 + case HL_GOYA_TPC_PLL: return TPC_PLL; 5522 + case HL_GOYA_IC_PLL: return IC_PLL; 5523 + case HL_GOYA_MC_PLL: return MC_PLL; 5524 + case HL_GOYA_EMMC_PLL: return EMMC_PLL; 5525 + default: return -EINVAL; 5526 + } 5527 + } 5528 + 5489 5529 static const struct hl_asic_funcs goya_funcs = { 5490 5530 .early_init = goya_early_init, 5491 5531 .early_fini = goya_early_fini, ··· 5583 5595 .ack_protection_bits_errors = goya_ack_protection_bits_errors, 5584 5596 .get_hw_block_id = goya_get_hw_block_id, 5585 5597 .hw_block_mmap = goya_block_mmap, 5586 - .enable_events_from_fw = goya_enable_events_from_fw 5598 + .enable_events_from_fw = goya_enable_events_from_fw, 5599 + .map_pll_idx_to_fw_idx = goya_map_pll_idx_to_fw_idx 5587 5600 }; 5588 5601 5589 5602 /*
+20 -20
drivers/misc/habanalabs/goya/goya_hwmgr.c
··· 13 13 14 14 switch (freq) { 15 15 case PLL_HIGH: 16 - hl_set_frequency(hdev, MME_PLL, hdev->high_pll); 17 - hl_set_frequency(hdev, TPC_PLL, hdev->high_pll); 18 - hl_set_frequency(hdev, IC_PLL, hdev->high_pll); 16 + hl_set_frequency(hdev, HL_GOYA_MME_PLL, hdev->high_pll); 17 + hl_set_frequency(hdev, HL_GOYA_TPC_PLL, hdev->high_pll); 18 + hl_set_frequency(hdev, HL_GOYA_IC_PLL, hdev->high_pll); 19 19 break; 20 20 case PLL_LOW: 21 - hl_set_frequency(hdev, MME_PLL, GOYA_PLL_FREQ_LOW); 22 - hl_set_frequency(hdev, TPC_PLL, GOYA_PLL_FREQ_LOW); 23 - hl_set_frequency(hdev, IC_PLL, GOYA_PLL_FREQ_LOW); 21 + hl_set_frequency(hdev, HL_GOYA_MME_PLL, GOYA_PLL_FREQ_LOW); 22 + hl_set_frequency(hdev, HL_GOYA_TPC_PLL, GOYA_PLL_FREQ_LOW); 23 + hl_set_frequency(hdev, HL_GOYA_IC_PLL, GOYA_PLL_FREQ_LOW); 24 24 break; 25 25 case PLL_LAST: 26 - hl_set_frequency(hdev, MME_PLL, goya->mme_clk); 27 - hl_set_frequency(hdev, TPC_PLL, goya->tpc_clk); 28 - hl_set_frequency(hdev, IC_PLL, goya->ic_clk); 26 + hl_set_frequency(hdev, HL_GOYA_MME_PLL, goya->mme_clk); 27 + hl_set_frequency(hdev, HL_GOYA_TPC_PLL, goya->tpc_clk); 28 + hl_set_frequency(hdev, HL_GOYA_IC_PLL, goya->ic_clk); 29 29 break; 30 30 default: 31 31 dev_err(hdev->dev, "unknown frequency setting\n"); ··· 39 39 if (!hl_device_operational(hdev, NULL)) 40 40 return -ENODEV; 41 41 42 - value = hl_get_frequency(hdev, MME_PLL, false); 42 + value = hl_get_frequency(hdev, HL_GOYA_MME_PLL, false); 43 43 44 44 if (value < 0) { 45 45 dev_err(hdev->dev, "Failed to retrieve device max clock %ld\n", ··· 49 49 50 50 *max_clk = (value / 1000 / 1000); 51 51 52 - value = hl_get_frequency(hdev, MME_PLL, true); 52 + value = hl_get_frequency(hdev, HL_GOYA_MME_PLL, true); 53 53 54 54 if (value < 0) { 55 55 dev_err(hdev->dev, ··· 72 72 if (!hl_device_operational(hdev, NULL)) 73 73 return -ENODEV; 74 74 75 - value = hl_get_frequency(hdev, MME_PLL, false); 75 + value = hl_get_frequency(hdev, HL_GOYA_MME_PLL, false); 76 76 77 77 if (value < 0) 78 78 return value; ··· 105 105 goto fail; 106 106 } 107 107 108 - hl_set_frequency(hdev, MME_PLL, value); 108 + hl_set_frequency(hdev, HL_GOYA_MME_PLL, value); 109 109 goya->mme_clk = value; 110 110 111 111 fail: ··· 121 121 if (!hl_device_operational(hdev, NULL)) 122 122 return -ENODEV; 123 123 124 - value = hl_get_frequency(hdev, TPC_PLL, false); 124 + value = hl_get_frequency(hdev, HL_GOYA_TPC_PLL, false); 125 125 126 126 if (value < 0) 127 127 return value; ··· 154 154 goto fail; 155 155 } 156 156 157 - hl_set_frequency(hdev, TPC_PLL, value); 157 + hl_set_frequency(hdev, HL_GOYA_TPC_PLL, value); 158 158 goya->tpc_clk = value; 159 159 160 160 fail: ··· 170 170 if (!hl_device_operational(hdev, NULL)) 171 171 return -ENODEV; 172 172 173 - value = hl_get_frequency(hdev, IC_PLL, false); 173 + value = hl_get_frequency(hdev, HL_GOYA_IC_PLL, false); 174 174 175 175 if (value < 0) 176 176 return value; ··· 203 203 goto fail; 204 204 } 205 205 206 - hl_set_frequency(hdev, IC_PLL, value); 206 + hl_set_frequency(hdev, HL_GOYA_IC_PLL, value); 207 207 goya->ic_clk = value; 208 208 209 209 fail: ··· 219 219 if (!hl_device_operational(hdev, NULL)) 220 220 return -ENODEV; 221 221 222 - value = hl_get_frequency(hdev, MME_PLL, true); 222 + value = hl_get_frequency(hdev, HL_GOYA_MME_PLL, true); 223 223 224 224 if (value < 0) 225 225 return value; ··· 236 236 if (!hl_device_operational(hdev, NULL)) 237 237 return -ENODEV; 238 238 239 - value = hl_get_frequency(hdev, TPC_PLL, true); 239 + value = hl_get_frequency(hdev, HL_GOYA_TPC_PLL, true); 240 240 241 241 
if (value < 0) 242 242 return value; ··· 253 253 if (!hl_device_operational(hdev, NULL)) 254 254 return -ENODEV; 255 255 256 - value = hl_get_frequency(hdev, IC_PLL, true); 256 + value = hl_get_frequency(hdev, HL_GOYA_IC_PLL, true); 257 257 258 258 if (value < 0) 259 259 return value;
+1 -1
drivers/misc/ics932s401.c
··· 134 134 for (i = 0; i < NUM_MIRRORED_REGS; i++) { 135 135 temp = i2c_smbus_read_word_data(client, regs_to_copy[i]); 136 136 if (temp < 0) 137 - data->regs[regs_to_copy[i]] = 0; 137 + temp = 0; 138 138 data->regs[regs_to_copy[i]] = temp >> 8; 139 139 } 140 140
-3
drivers/net/caif/caif_serial.c
··· 269 269 { 270 270 struct ser_device *ser; 271 271 272 - if (WARN_ON(!dev)) 273 - return -EINVAL; 274 - 275 272 ser = netdev_priv(dev); 276 273 277 274 /* Send flow off once, on high water mark */
+17 -10
drivers/net/ethernet/cavium/liquidio/lio_main.c
··· 1153 1153 * @lio: per-network private data 1154 1154 * @start_stop: whether to start or stop 1155 1155 */ 1156 - static void send_rx_ctrl_cmd(struct lio *lio, int start_stop) 1156 + static int send_rx_ctrl_cmd(struct lio *lio, int start_stop) 1157 1157 { 1158 1158 struct octeon_soft_command *sc; 1159 1159 union octnet_cmd *ncmd; ··· 1161 1161 int retval; 1162 1162 1163 1163 if (oct->props[lio->ifidx].rx_on == start_stop) 1164 - return; 1164 + return 0; 1165 1165 1166 1166 sc = (struct octeon_soft_command *) 1167 1167 octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE, 1168 1168 16, 0); 1169 1169 if (!sc) { 1170 1170 netif_info(lio, rx_err, lio->netdev, 1171 - "Failed to allocate octeon_soft_command\n"); 1172 - return; 1171 + "Failed to allocate octeon_soft_command struct\n"); 1172 + return -ENOMEM; 1173 1173 } 1174 1174 1175 1175 ncmd = (union octnet_cmd *)sc->virtdptr; ··· 1192 1192 if (retval == IQ_SEND_FAILED) { 1193 1193 netif_info(lio, rx_err, lio->netdev, "Failed to send RX Control message\n"); 1194 1194 octeon_free_soft_command(oct, sc); 1195 - return; 1196 1195 } else { 1197 1196 /* Sleep on a wait queue till the cond flag indicates that the 1198 1197 * response arrived or timed-out. 1199 1198 */ 1200 1199 retval = wait_for_sc_completion_timeout(oct, sc, 0); 1201 1200 if (retval) 1202 - return; 1201 + return retval; 1203 1202 1204 1203 oct->props[lio->ifidx].rx_on = start_stop; 1205 1204 WRITE_ONCE(sc->caller_is_done, true); 1206 1205 } 1206 + 1207 + return retval; 1207 1208 } 1208 1209 1209 1210 /** ··· 1779 1778 struct octeon_device_priv *oct_priv = 1780 1779 (struct octeon_device_priv *)oct->priv; 1781 1780 struct napi_struct *napi, *n; 1781 + int ret = 0; 1782 1782 1783 1783 if (oct->props[lio->ifidx].napi_enabled == 0) { 1784 1784 tasklet_disable(&oct_priv->droq_tasklet); ··· 1815 1813 netif_info(lio, ifup, lio->netdev, "Interface Open, ready for traffic\n"); 1816 1814 1817 1815 /* tell Octeon to start forwarding packets to host */ 1818 - send_rx_ctrl_cmd(lio, 1); 1816 + ret = send_rx_ctrl_cmd(lio, 1); 1817 + if (ret) 1818 + return ret; 1819 1819 1820 1820 /* start periodical statistics fetch */ 1821 1821 INIT_DELAYED_WORK(&lio->stats_wk.work, lio_fetch_stats); ··· 1828 1824 dev_info(&oct->pci_dev->dev, "%s interface is opened\n", 1829 1825 netdev->name); 1830 1826 1831 - return 0; 1827 + return ret; 1832 1828 } 1833 1829 1834 1830 /** ··· 1842 1838 struct octeon_device_priv *oct_priv = 1843 1839 (struct octeon_device_priv *)oct->priv; 1844 1840 struct napi_struct *napi, *n; 1841 + int ret = 0; 1845 1842 1846 1843 ifstate_reset(lio, LIO_IFSTATE_RUNNING); 1847 1844 ··· 1859 1854 lio->link_changes++; 1860 1855 1861 1856 /* Tell Octeon that nic interface is down. */ 1862 - send_rx_ctrl_cmd(lio, 0); 1857 + ret = send_rx_ctrl_cmd(lio, 0); 1858 + if (ret) 1859 + return ret; 1863 1860 1864 1861 if (OCTEON_CN23XX_PF(oct)) { 1865 1862 if (!oct->msix_on) ··· 1896 1889 1897 1890 dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name); 1898 1891 1899 - return 0; 1892 + return ret; 1900 1893 } 1901 1894 1902 1895 /**
+20 -7
drivers/net/ethernet/cavium/liquidio/lio_vf_main.c
··· 595 595 * @lio: per-network private data 596 596 * @start_stop: whether to start or stop 597 597 */ 598 - static void send_rx_ctrl_cmd(struct lio *lio, int start_stop) 598 + static int send_rx_ctrl_cmd(struct lio *lio, int start_stop) 599 599 { 600 600 struct octeon_device *oct = (struct octeon_device *)lio->oct_dev; 601 601 struct octeon_soft_command *sc; ··· 603 603 int retval; 604 604 605 605 if (oct->props[lio->ifidx].rx_on == start_stop) 606 - return; 606 + return 0; 607 607 608 608 sc = (struct octeon_soft_command *) 609 609 octeon_alloc_soft_command(oct, OCTNET_CMD_SIZE, 610 610 16, 0); 611 + if (!sc) { 612 + netif_info(lio, rx_err, lio->netdev, 613 + "Failed to allocate octeon_soft_command struct\n"); 614 + return -ENOMEM; 615 + } 611 616 612 617 ncmd = (union octnet_cmd *)sc->virtdptr; 613 618 ··· 640 635 */ 641 636 retval = wait_for_sc_completion_timeout(oct, sc, 0); 642 637 if (retval) 643 - return; 638 + return retval; 644 639 645 640 oct->props[lio->ifidx].rx_on = start_stop; 646 641 WRITE_ONCE(sc->caller_is_done, true); 647 642 } 643 + 644 + return retval; 648 645 } 649 646 650 647 /** ··· 913 906 struct octeon_device_priv *oct_priv = 914 907 (struct octeon_device_priv *)oct->priv; 915 908 struct napi_struct *napi, *n; 909 + int ret = 0; 916 910 917 911 if (!oct->props[lio->ifidx].napi_enabled) { 918 912 tasklet_disable(&oct_priv->droq_tasklet); ··· 940 932 (LIQUIDIO_NDEV_STATS_POLL_TIME_MS)); 941 933 942 934 /* tell Octeon to start forwarding packets to host */ 943 - send_rx_ctrl_cmd(lio, 1); 935 + ret = send_rx_ctrl_cmd(lio, 1); 936 + if (ret) 937 + return ret; 944 938 945 939 dev_info(&oct->pci_dev->dev, "%s interface is opened\n", netdev->name); 946 940 947 - return 0; 941 + return ret; 948 942 } 949 943 950 944 /** ··· 960 950 struct octeon_device_priv *oct_priv = 961 951 (struct octeon_device_priv *)oct->priv; 962 952 struct napi_struct *napi, *n; 953 + int ret = 0; 963 954 964 955 /* tell Octeon to stop forwarding packets to host */ 965 - send_rx_ctrl_cmd(lio, 0); 956 + ret = send_rx_ctrl_cmd(lio, 0); 957 + if (ret) 958 + return ret; 966 959 967 960 netif_info(lio, ifdown, lio->netdev, "Stopping interface!\n"); 968 961 /* Inform that netif carrier is down */ ··· 999 986 1000 987 dev_info(&oct->pci_dev->dev, "%s interface is stopped\n", netdev->name); 1001 988 1002 - return 0; 989 + return ret; 1003 990 } 1004 991 1005 992 /**
+2 -2
drivers/net/ethernet/fujitsu/fmvj18x_cs.c
··· 548 548 549 549 base = ioremap(link->resource[2]->start, resource_size(link->resource[2])); 550 550 if (!base) { 551 - pcmcia_release_window(link, link->resource[2]); 552 - return -ENOMEM; 551 + pcmcia_release_window(link, link->resource[2]); 552 + return -1; 553 553 } 554 554 555 555 pcmcia_map_mem_page(link, link->resource[2], 0);
+2 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
··· 1048 1048 for (i = 0; i < QLCNIC_NUM_ILB_PKT; i++) { 1049 1049 skb = netdev_alloc_skb(adapter->netdev, QLCNIC_ILB_PKT_SIZE); 1050 1050 if (!skb) 1051 - break; 1051 + goto error; 1052 1052 qlcnic_create_loopback_buff(skb->data, adapter->mac_addr); 1053 1053 skb_put(skb, QLCNIC_ILB_PKT_SIZE); 1054 1054 adapter->ahw->diag_cnt = 0; ··· 1072 1072 cnt++; 1073 1073 } 1074 1074 if (cnt != i) { 1075 + error: 1075 1076 dev_err(&adapter->pdev->dev, 1076 1077 "LB Test: failed, TX[%d], RX[%d]\n", i, cnt); 1077 1078 if (mode != QLCNIC_ILB_MODE)
+4 -4
drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
··· 30 30 static int sun7i_gmac_init(struct platform_device *pdev, void *priv) 31 31 { 32 32 struct sunxi_priv_data *gmac = priv; 33 - int ret; 33 + int ret = 0; 34 34 35 35 if (gmac->regulator) { 36 36 ret = regulator_enable(gmac->regulator); ··· 51 51 } else { 52 52 clk_set_rate(gmac->tx_clk, SUN7I_GMAC_MII_RATE); 53 53 ret = clk_prepare(gmac->tx_clk); 54 - if (ret) 55 - return ret; 54 + if (ret && gmac->regulator) 55 + regulator_disable(gmac->regulator); 56 56 } 57 57 58 - return 0; 58 + return ret; 59 59 } 60 60 61 61 static void sun7i_gmac_exit(struct platform_device *pdev, void *priv)
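The dwmac-sunxi fix is about error-path symmetry: once the regulator has been enabled, a later clk_prepare() failure must disable it again before the error is returned, otherwise the supply keeps a stale enable count. A compact userspace model of the undo-on-failure shape (enable_supply and prepare_clock are hypothetical stand-ins, with the clock step forced to fail for demonstration):

#include <errno.h>
#include <stdio.h>

static int enable_supply(void)   { return 0; }
static void disable_supply(void) { }
static int prepare_clock(void)   { return -EIO; /* simulate failure */ }

static int gmac_init(void)
{
	int ret = enable_supply();

	if (ret)
		return ret;

	ret = prepare_clock();
	if (ret)
		disable_supply();	/* undo step 1 before reporting */

	return ret;
}

int main(void)
{
	printf("gmac_init: %d\n", gmac_init());
	return 0;
}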
+20 -12
drivers/net/ethernet/sun/niu.c
··· 8144 8144 "VPD_SCAN: Reading in property [%s] len[%d]\n", 8145 8145 namebuf, prop_len); 8146 8146 for (i = 0; i < prop_len; i++) { 8147 - err = niu_pci_eeprom_read(np, off + i); 8148 - if (err >= 0) 8149 - *prop_buf = err; 8150 - ++prop_buf; 8147 + err = niu_pci_eeprom_read(np, off + i); 8148 + if (err < 0) 8149 + return err; 8150 + *prop_buf++ = err; 8151 8151 } 8152 8152 } 8153 8153 ··· 8158 8158 } 8159 8159 8160 8160 /* ESPC_PIO_EN_ENABLE must be set */ 8161 - static void niu_pci_vpd_fetch(struct niu *np, u32 start) 8161 + static int niu_pci_vpd_fetch(struct niu *np, u32 start) 8162 8162 { 8163 8163 u32 offset; 8164 8164 int err; 8165 8165 8166 8166 err = niu_pci_eeprom_read16_swp(np, start + 1); 8167 8167 if (err < 0) 8168 - return; 8168 + return err; 8169 8169 8170 8170 offset = err + 3; 8171 8171 ··· 8174 8174 u32 end; 8175 8175 8176 8176 err = niu_pci_eeprom_read(np, here); 8177 + if (err < 0) 8178 + return err; 8177 8179 if (err != 0x90) 8178 - return; 8180 + return -EINVAL; 8179 8181 8180 8182 err = niu_pci_eeprom_read16_swp(np, here + 1); 8181 8183 if (err < 0) 8182 - return; 8184 + return err; 8183 8185 8184 8186 here = start + offset + 3; 8185 8187 end = start + offset + err; ··· 8189 8187 offset += err; 8190 8188 8191 8189 err = niu_pci_vpd_scan_props(np, here, end); 8192 - if (err < 0 || err == 1) 8193 - return; 8190 + if (err < 0) 8191 + return err; 8192 + if (err == 1) 8193 + return -EINVAL; 8194 8194 } 8195 + return 0; 8195 8196 } 8196 8197 8197 8198 /* ESPC_PIO_EN_ENABLE must be set */ ··· 9285 9280 offset = niu_pci_vpd_offset(np); 9286 9281 netif_printk(np, probe, KERN_DEBUG, np->dev, 9287 9282 "%s() VPD offset [%08x]\n", __func__, offset); 9288 - if (offset) 9289 - niu_pci_vpd_fetch(np, offset); 9283 + if (offset) { 9284 + err = niu_pci_vpd_fetch(np, offset); 9285 + if (err < 0) 9286 + return err; 9287 + } 9290 9288 nw64(ESPC_PIO_EN, 0); 9291 9289 9292 9290 if (np->flags & NIU_FLAGS_VPD_VALID) {
+4 -1
drivers/net/wireless/ath/ath6kl/debug.c
··· 1027 1027 { 1028 1028 struct ath6kl *ar = file->private_data; 1029 1029 unsigned long lrssi_roam_threshold; 1030 + int ret; 1030 1031 1031 1032 if (kstrtoul_from_user(user_buf, count, 0, &lrssi_roam_threshold)) 1032 1033 return -EINVAL; 1033 1034 1034 1035 ar->lrssi_roam_threshold = lrssi_roam_threshold; 1035 1036 1036 - ath6kl_wmi_set_roam_lrssi_cmd(ar->wmi, ar->lrssi_roam_threshold); 1037 + ret = ath6kl_wmi_set_roam_lrssi_cmd(ar->wmi, ar->lrssi_roam_threshold); 1037 1038 1039 + if (ret) 1040 + return ret; 1038 1041 return count; 1039 1042 } 1040 1043
+2 -6
drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
··· 1217 1217 }, 1218 1218 }; 1219 1219 1220 - void brcmf_sdio_register(void) 1220 + int brcmf_sdio_register(void) 1221 1221 { 1222 - int ret; 1223 - 1224 - ret = sdio_register_driver(&brcmf_sdmmc_driver); 1225 - if (ret) 1226 - brcmf_err("sdio_register_driver failed: %d\n", ret); 1222 + return sdio_register_driver(&brcmf_sdmmc_driver); 1227 1223 } 1228 1224 1229 1225 void brcmf_sdio_exit(void)
+17 -2
drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h
··· 275 275 276 276 #ifdef CONFIG_BRCMFMAC_SDIO 277 277 void brcmf_sdio_exit(void); 278 - void brcmf_sdio_register(void); 278 + int brcmf_sdio_register(void); 279 + #else 280 + static inline void brcmf_sdio_exit(void) { } 281 + static inline int brcmf_sdio_register(void) { return 0; } 279 282 #endif 283 + 280 284 #ifdef CONFIG_BRCMFMAC_USB 281 285 void brcmf_usb_exit(void); 282 - void brcmf_usb_register(void); 286 + int brcmf_usb_register(void); 287 + #else 288 + static inline void brcmf_usb_exit(void) { } 289 + static inline int brcmf_usb_register(void) { return 0; } 290 + #endif 291 + 292 + #ifdef CONFIG_BRCMFMAC_PCIE 293 + void brcmf_pcie_exit(void); 294 + int brcmf_pcie_register(void); 295 + #else 296 + static inline void brcmf_pcie_exit(void) { } 297 + static inline int brcmf_pcie_register(void) { return 0; } 283 298 #endif 284 299 285 300 #endif /* BRCMFMAC_BUS_H */
+18 -24
drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
··· 1518 1518 } 1519 1519 } 1520 1520 1521 - static void brcmf_driver_register(struct work_struct *work) 1522 - { 1523 - #ifdef CONFIG_BRCMFMAC_SDIO 1524 - brcmf_sdio_register(); 1525 - #endif 1526 - #ifdef CONFIG_BRCMFMAC_USB 1527 - brcmf_usb_register(); 1528 - #endif 1529 - #ifdef CONFIG_BRCMFMAC_PCIE 1530 - brcmf_pcie_register(); 1531 - #endif 1532 - } 1533 - static DECLARE_WORK(brcmf_driver_work, brcmf_driver_register); 1534 - 1535 1521 int __init brcmf_core_init(void) 1536 1522 { 1537 - if (!schedule_work(&brcmf_driver_work)) 1538 - return -EBUSY; 1523 + int err; 1539 1524 1525 + err = brcmf_sdio_register(); 1526 + if (err) 1527 + return err; 1528 + 1529 + err = brcmf_usb_register(); 1530 + if (err) 1531 + goto error_usb_register; 1532 + 1533 + err = brcmf_pcie_register(); 1534 + if (err) 1535 + goto error_pcie_register; 1540 1536 return 0; 1537 + 1538 + error_pcie_register: 1539 + brcmf_usb_exit(); 1540 + error_usb_register: 1541 + brcmf_sdio_exit(); 1542 + return err; 1541 1543 } 1542 1544 1543 1545 void __exit brcmf_core_exit(void) 1544 1546 { 1545 - cancel_work_sync(&brcmf_driver_work); 1546 - 1547 - #ifdef CONFIG_BRCMFMAC_SDIO 1548 1547 brcmf_sdio_exit(); 1549 - #endif 1550 - #ifdef CONFIG_BRCMFMAC_USB 1551 1548 brcmf_usb_exit(); 1552 - #endif 1553 - #ifdef CONFIG_BRCMFMAC_PCIE 1554 1549 brcmf_pcie_exit(); 1555 - #endif 1556 1550 } 1557 1551
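Replacing the deferred brcmf_driver_work with direct calls lets brcmf_core_init() return a real error code, at the cost of unwinding earlier registrations by hand; the goto ladder releases them in reverse order of registration. A self-contained model of that ladder, with hypothetical bus names standing in for the sdio/usb/pcie registrations:

#include <errno.h>
#include <stdio.h>

static int sdio_reg(void)    { return 0; }
static void sdio_unreg(void) { }
static int usb_reg(void)     { return 0; }
static void usb_unreg(void)  { }
static int pcie_reg(void)    { return -ENODEV; /* simulate failure */ }

static int core_init(void)
{
	int err;

	err = sdio_reg();
	if (err)
		return err;
	err = usb_reg();
	if (err)
		goto err_usb;
	err = pcie_reg();
	if (err)
		goto err_pcie;
	return 0;

err_pcie:
	usb_unreg();	/* unwind in reverse order of registration */
err_usb:
	sdio_unreg();
	return err;
}

int main(void)
{
	printf("core_init: %d\n", core_init());
	return 0;
}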
+2 -7
drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
··· 2140 2140 }; 2141 2141 2142 2142 2143 - void brcmf_pcie_register(void) 2143 + int brcmf_pcie_register(void) 2144 2144 { 2145 - int err; 2146 - 2147 2145 brcmf_dbg(PCIE, "Enter\n"); 2148 - err = pci_register_driver(&brcmf_pciedrvr); 2149 - if (err) 2150 - brcmf_err(NULL, "PCIE driver registration failed, err=%d\n", 2151 - err); 2146 + return pci_register_driver(&brcmf_pciedrvr); 2152 2147 } 2153 2148 2154 2149
-5
drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.h
··· 11 11 struct brcmf_pciedev_info *devinfo; 12 12 }; 13 13 14 - 15 - void brcmf_pcie_exit(void); 16 - void brcmf_pcie_register(void); 17 - 18 - 19 14 #endif /* BRCMFMAC_PCIE_H */
+2 -6
drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
··· 1584 1584 usb_deregister(&brcmf_usbdrvr); 1585 1585 } 1586 1586 1587 - void brcmf_usb_register(void) 1587 + int brcmf_usb_register(void) 1588 1588 { 1589 - int ret; 1590 - 1591 1589 brcmf_dbg(USB, "Enter\n"); 1592 - ret = usb_register(&brcmf_usbdrvr); 1593 - if (ret) 1594 - brcmf_err("usb_register failed %d\n", ret); 1590 + return usb_register(&brcmf_usbdrvr); 1595 1591 }
+4 -29
drivers/net/wireless/marvell/libertas/mesh.c
··· 801 801 .attrs = mesh_ie_attrs, 802 802 }; 803 803 804 - static void lbs_persist_config_init(struct net_device *dev) 805 - { 806 - int ret; 807 - ret = sysfs_create_group(&(dev->dev.kobj), &boot_opts_group); 808 - if (ret) 809 - pr_err("failed to create boot_opts_group.\n"); 810 - 811 - ret = sysfs_create_group(&(dev->dev.kobj), &mesh_ie_group); 812 - if (ret) 813 - pr_err("failed to create mesh_ie_group.\n"); 814 - } 815 - 816 - static void lbs_persist_config_remove(struct net_device *dev) 817 - { 818 - sysfs_remove_group(&(dev->dev.kobj), &boot_opts_group); 819 - sysfs_remove_group(&(dev->dev.kobj), &mesh_ie_group); 820 - } 821 - 822 804 823 805 /*************************************************************************** 824 806 * Initializing and starting, stopping mesh ··· 996 1014 SET_NETDEV_DEV(priv->mesh_dev, priv->dev->dev.parent); 997 1015 998 1016 mesh_dev->flags |= IFF_BROADCAST | IFF_MULTICAST; 1017 + mesh_dev->sysfs_groups[0] = &lbs_mesh_attr_group; 1018 + mesh_dev->sysfs_groups[1] = &boot_opts_group; 1019 + mesh_dev->sysfs_groups[2] = &mesh_ie_group; 1020 + 999 1021 /* Register virtual mesh interface */ 1000 1022 ret = register_netdev(mesh_dev); 1001 1023 if (ret) { ··· 1007 1021 goto err_free_netdev; 1008 1022 } 1009 1023 1010 - ret = sysfs_create_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group); 1011 - if (ret) 1012 - goto err_unregister; 1013 - 1014 - lbs_persist_config_init(mesh_dev); 1015 - 1016 1024 /* Everything successful */ 1017 1025 ret = 0; 1018 1026 goto done; 1019 - 1020 - err_unregister: 1021 - unregister_netdev(mesh_dev); 1022 1027 1023 1028 err_free_netdev: 1024 1029 free_netdev(mesh_dev); ··· 1031 1054 1032 1055 netif_stop_queue(mesh_dev); 1033 1056 netif_carrier_off(mesh_dev); 1034 - sysfs_remove_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group); 1035 - lbs_persist_config_remove(mesh_dev); 1036 1057 unregister_netdev(mesh_dev); 1037 1058 priv->mesh_dev = NULL; 1038 1059 kfree(mesh_dev->ieee80211_ptr);
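Rather than calling sysfs_create_group() three times after register_netdev() and unwinding each on failure, the mesh device now fills in net_device's sysfs_groups[] array up front, so the driver core creates and removes the attribute groups together with the device itself. A kernel-style sketch of the idiom, where my_attrs, my_attr_group, and my_register are hypothetical names (sysfs_groups[] has four slots):

#include <linux/netdevice.h>
#include <linux/sysfs.h>

static struct attribute *my_attrs[] = { NULL };	/* hypothetical, empty */

static const struct attribute_group my_attr_group = {
	.attrs = my_attrs,
};

static int my_register(struct net_device *dev)
{
	dev->sysfs_groups[0] = &my_attr_group;	/* slots 0..3 available */
	return register_netdev(dev);	/* groups created here, removed
					 * automatically on unregister */
}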
+9 -9
drivers/net/wireless/realtek/rtlwifi/base.c
··· 440 440 static void rtl_fwevt_wq_callback(struct work_struct *work); 441 441 static void rtl_c2hcmd_wq_callback(struct work_struct *work); 442 442 443 - static void _rtl_init_deferred_work(struct ieee80211_hw *hw) 443 + static int _rtl_init_deferred_work(struct ieee80211_hw *hw) 444 444 { 445 445 struct rtl_priv *rtlpriv = rtl_priv(hw); 446 + struct workqueue_struct *wq; 447 + 448 + wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name); 449 + if (!wq) 450 + return -ENOMEM; 446 451 447 452 /* <1> timer */ 448 453 timer_setup(&rtlpriv->works.watchdog_timer, ··· 456 451 rtl_easy_concurrent_retrytimer_callback, 0); 457 452 /* <2> work queue */ 458 453 rtlpriv->works.hw = hw; 459 - rtlpriv->works.rtl_wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name); 460 - if (unlikely(!rtlpriv->works.rtl_wq)) { 461 - pr_err("Failed to allocate work queue\n"); 462 - return; 463 - } 454 + rtlpriv->works.rtl_wq = wq; 464 455 465 456 INIT_DELAYED_WORK(&rtlpriv->works.watchdog_wq, 466 457 rtl_watchdog_wq_callback); ··· 467 466 rtl_swlps_rfon_wq_callback); 468 467 INIT_DELAYED_WORK(&rtlpriv->works.fwevt_wq, rtl_fwevt_wq_callback); 469 468 INIT_DELAYED_WORK(&rtlpriv->works.c2hcmd_wq, rtl_c2hcmd_wq_callback); 469 + return 0; 470 470 } 471 471 472 472 void rtl_deinit_deferred_work(struct ieee80211_hw *hw, bool ips_wq) ··· 566 564 rtlmac->link_state = MAC80211_NOLINK; 567 565 568 566 /* <6> init deferred work */ 569 - _rtl_init_deferred_work(hw); 570 - 571 - return 0; 567 + return _rtl_init_deferred_work(hw); 572 568 } 573 569 EXPORT_SYMBOL_GPL(rtl_init_core); 574 570
+2 -1
drivers/nvme/host/core.c
··· 2901 2901 ctrl->hmmaxd = le16_to_cpu(id->hmmaxd); 2902 2902 } 2903 2903 2904 - ret = nvme_mpath_init(ctrl, id); 2904 + ret = nvme_mpath_init_identify(ctrl, id); 2905 2905 if (ret < 0) 2906 2906 goto out_free; 2907 2907 ··· 4364 4364 min(default_ps_max_latency_us, (unsigned long)S32_MAX)); 4365 4365 4366 4366 nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device)); 4367 + nvme_mpath_init_ctrl(ctrl); 4367 4368 4368 4369 return 0; 4369 4370 out_free_name:
+29 -26
drivers/nvme/host/multipath.c
··· 781 781 put_disk(head->disk); 782 782 } 783 783 784 - int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) 784 + void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl) 785 785 { 786 - int error; 786 + mutex_init(&ctrl->ana_lock); 787 + timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0); 788 + INIT_WORK(&ctrl->ana_work, nvme_ana_work); 789 + } 790 + 791 + int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) 792 + { 793 + size_t max_transfer_size = ctrl->max_hw_sectors << SECTOR_SHIFT; 794 + size_t ana_log_size; 795 + int error = 0; 787 796 788 797 /* check if multipath is enabled and we have the capability */ 789 798 if (!multipath || !ctrl->subsys || ··· 804 795 ctrl->nanagrpid = le32_to_cpu(id->nanagrpid); 805 796 ctrl->anagrpmax = le32_to_cpu(id->anagrpmax); 806 797 807 - mutex_init(&ctrl->ana_lock); 808 - timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0); 809 - ctrl->ana_log_size = sizeof(struct nvme_ana_rsp_hdr) + 810 - ctrl->nanagrpid * sizeof(struct nvme_ana_group_desc); 811 - ctrl->ana_log_size += ctrl->max_namespaces * sizeof(__le32); 812 - 813 - if (ctrl->ana_log_size > ctrl->max_hw_sectors << SECTOR_SHIFT) { 798 + ana_log_size = sizeof(struct nvme_ana_rsp_hdr) + 799 + ctrl->nanagrpid * sizeof(struct nvme_ana_group_desc) + 800 + ctrl->max_namespaces * sizeof(__le32); 801 + if (ana_log_size > max_transfer_size) { 814 802 dev_err(ctrl->device, 815 - "ANA log page size (%zd) larger than MDTS (%d).\n", 816 - ctrl->ana_log_size, 817 - ctrl->max_hw_sectors << SECTOR_SHIFT); 803 + "ANA log page size (%zd) larger than MDTS (%zd).\n", 804 + ana_log_size, max_transfer_size); 818 805 dev_err(ctrl->device, "disabling ANA support.\n"); 819 - return 0; 806 + goto out_uninit; 820 807 } 821 - 822 - INIT_WORK(&ctrl->ana_work, nvme_ana_work); 823 - kfree(ctrl->ana_log_buf); 824 - ctrl->ana_log_buf = kmalloc(ctrl->ana_log_size, GFP_KERNEL); 825 - if (!ctrl->ana_log_buf) { 826 - error = -ENOMEM; 827 - goto out; 808 + if (ana_log_size > ctrl->ana_log_size) { 809 + nvme_mpath_stop(ctrl); 810 + kfree(ctrl->ana_log_buf); 811 + ctrl->ana_log_buf = kmalloc(ana_log_size, GFP_KERNEL); 812 + if (!ctrl->ana_log_buf) 813 + return -ENOMEM; 828 814 } 829 - 815 + ctrl->ana_log_size = ana_log_size; 830 816 error = nvme_read_ana_log(ctrl); 831 817 if (error) 832 - goto out_free_ana_log_buf; 818 + goto out_uninit; 833 819 return 0; 834 - out_free_ana_log_buf: 835 - kfree(ctrl->ana_log_buf); 836 - ctrl->ana_log_buf = NULL; 837 - out: 820 + 821 + out_uninit: 822 + nvme_mpath_uninit(ctrl); 838 823 return error; 839 824 } 840 825
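The split matters for controller resets: nvme_mpath_init_ctrl() does the one-time setup (lock, timer, work item) at controller creation, while nvme_mpath_init_identify() can run again after every Identify and must treat the ANA log buffer as reusable, reallocating only when the required size grows and stopping in-flight ANA work first so nothing reads a freed buffer. A userspace model of the grow-only reallocation, a sketch rather than the driver's exact logic:

#include <stdlib.h>

/* Models the ANA log buffer handling in nvme_mpath_init_identify():
 * keep the existing allocation when it is already big enough. */
struct ctrl_model {
	char   *log_buf;
	size_t  log_size;	/* current allocation size */
};

static int update_log_buffer(struct ctrl_model *c, size_t needed)
{
	if (needed > c->log_size) {
		char *buf = malloc(needed);

		if (!buf)
			return -1;	/* old buffer stays valid */
		free(c->log_buf);	/* safe: no reader active here */
		c->log_buf = buf;
		c->log_size = needed;
	}
	return 0;
}

int main(void)
{
	struct ctrl_model c = { 0, 0 };

	return update_log_buffer(&c, 4096);
}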
+6 -2
drivers/nvme/host/nvme.h
··· 712 712 int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl,struct nvme_ns_head *head); 713 713 void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id); 714 714 void nvme_mpath_remove_disk(struct nvme_ns_head *head); 715 - int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id); 715 + int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id); 716 + void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl); 716 717 void nvme_mpath_uninit(struct nvme_ctrl *ctrl); 717 718 void nvme_mpath_stop(struct nvme_ctrl *ctrl); 718 719 bool nvme_mpath_clear_current_path(struct nvme_ns *ns); ··· 781 780 static inline void nvme_trace_bio_complete(struct request *req) 782 781 { 783 782 } 784 - static inline int nvme_mpath_init(struct nvme_ctrl *ctrl, 783 + static inline void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl) 784 + { 785 + } 786 + static inline int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, 785 787 struct nvme_id_ctrl *id) 786 788 { 787 789 if (ctrl->subsys->cmic & NVME_CTRL_CMIC_ANA)
+2 -5
drivers/nvme/target/admin-cmd.c
··· 975 975 case nvme_admin_keep_alive: 976 976 req->execute = nvmet_execute_keep_alive; 977 977 return 0; 978 + default: 979 + return nvmet_report_invalid_opcode(req); 978 980 } 979 - 980 - pr_debug("unhandled cmd %d on qid %d\n", cmd->common.opcode, 981 - req->sq->qid); 982 - req->error_loc = offsetof(struct nvme_common_command, opcode); 983 - return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; 984 981 }
+1 -1
drivers/nvme/target/discovery.c
··· 379 379 req->execute = nvmet_execute_disc_identify; 380 380 return 0; 381 381 default: 382 - pr_err("unhandled cmd %d\n", cmd->common.opcode); 382 + pr_debug("unhandled cmd %d\n", cmd->common.opcode); 383 383 req->error_loc = offsetof(struct nvme_common_command, opcode); 384 384 return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; 385 385 }
+3 -3
drivers/nvme/target/fabrics-cmd.c
··· 94 94 req->execute = nvmet_execute_prop_get; 95 95 break; 96 96 default: 97 - pr_err("received unknown capsule type 0x%x\n", 97 + pr_debug("received unknown capsule type 0x%x\n", 98 98 cmd->fabrics.fctype); 99 99 req->error_loc = offsetof(struct nvmf_common_command, fctype); 100 100 return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; ··· 284 284 struct nvme_command *cmd = req->cmd; 285 285 286 286 if (!nvme_is_fabrics(cmd)) { 287 - pr_err("invalid command 0x%x on unconnected queue.\n", 287 + pr_debug("invalid command 0x%x on unconnected queue.\n", 288 288 cmd->fabrics.opcode); 289 289 req->error_loc = offsetof(struct nvme_common_command, opcode); 290 290 return NVME_SC_INVALID_OPCODE | NVME_SC_DNR; 291 291 } 292 292 if (cmd->fabrics.fctype != nvme_fabrics_type_connect) { 293 - pr_err("invalid capsule type 0x%x on unconnected queue.\n", 293 + pr_debug("invalid capsule type 0x%x on unconnected queue.\n", 294 294 cmd->fabrics.fctype); 295 295 req->error_loc = offsetof(struct nvmf_common_command, fctype); 296 296 return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
+1 -1
drivers/nvme/target/io-cmd-bdev.c
··· 258 258 259 259 sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba); 260 260 261 - if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) { 261 + if (nvmet_use_inline_bvec(req)) { 262 262 bio = &req->b.inline_bio; 263 263 bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec)); 264 264 } else {
+5 -3
drivers/nvme/target/io-cmd-file.c
··· 49 49 50 50 ns->file = filp_open(ns->device_path, flags, 0); 51 51 if (IS_ERR(ns->file)) { 52 - pr_err("failed to open file %s: (%ld)\n", 53 - ns->device_path, PTR_ERR(ns->file)); 54 - return PTR_ERR(ns->file); 52 + ret = PTR_ERR(ns->file); 53 + pr_err("failed to open file %s: (%d)\n", 54 + ns->device_path, ret); 55 + ns->file = NULL; 56 + return ret; 55 57 } 56 58 57 59 ret = nvmet_file_ns_revalidate(ns);
+6
drivers/nvme/target/nvmet.h
··· 616 616 return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT); 617 617 } 618 618 619 + static inline bool nvmet_use_inline_bvec(struct nvmet_req *req) 620 + { 621 + return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN && 622 + req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC; 623 + } 624 + 619 625 #endif /* _NVMET_H */
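nvmet_use_inline_bvec() exists because the open-coded checks in io-cmd-bdev.c and passthru.c tested only the transfer length; a short transfer scattered across many discontiguous pages can still need more bio_vec entries than the inline array provides, so both limits have to hold. A standalone illustration (the two limit values here are illustrative, not the kernel's):

#include <stdbool.h>
#include <stdio.h>

#define MAX_INLINE_DATA_LEN (4 * 4096)	/* illustrative only */
#define MAX_INLINE_BIOVEC   8		/* illustrative only */

static bool use_inline_bvec(size_t transfer_len, unsigned sg_cnt)
{
	/* Both limits must hold: a short transfer split across many
	 * discontiguous pages can still overflow the inline vector. */
	return transfer_len <= MAX_INLINE_DATA_LEN &&
	       sg_cnt <= MAX_INLINE_BIOVEC;
}

int main(void)
{
	printf("%d\n", use_inline_bvec(4096, 16));	/* 0: too many segments */
	printf("%d\n", use_inline_bvec(8192, 2));	/* 1: fits both limits */
	return 0;
}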
+1 -1
drivers/nvme/target/passthru.c
··· 194 194 if (req->sg_cnt > BIO_MAX_VECS) 195 195 return -EINVAL; 196 196 197 - if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) { 197 + if (nvmet_use_inline_bvec(req)) { 198 198 bio = &req->p.inline_bio; 199 199 bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec)); 200 200 } else {
+2 -2
drivers/nvme/target/rdma.c
··· 700 700 { 701 701 struct nvmet_rdma_rsp *rsp = 702 702 container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe); 703 - struct nvmet_rdma_queue *queue = cq->cq_context; 703 + struct nvmet_rdma_queue *queue = wc->qp->qp_context; 704 704 705 705 nvmet_rdma_release_rsp(rsp); 706 706 ··· 786 786 { 787 787 struct nvmet_rdma_rsp *rsp = 788 788 container_of(wc->wr_cqe, struct nvmet_rdma_rsp, write_cqe); 789 - struct nvmet_rdma_queue *queue = cq->cq_context; 789 + struct nvmet_rdma_queue *queue = wc->qp->qp_context; 790 790 struct rdma_cm_id *cm_id = rsp->queue->cm_id; 791 791 u16 status; 792 792
+8 -9
drivers/rapidio/rio_cm.c
··· 2127 2127 return -ENODEV; 2128 2128 } 2129 2129 2130 + cm->rx_wq = create_workqueue(DRV_NAME "/rxq"); 2131 + if (!cm->rx_wq) { 2132 + rio_release_inb_mbox(mport, cmbox); 2133 + rio_release_outb_mbox(mport, cmbox); 2134 + kfree(cm); 2135 + return -ENOMEM; 2136 + } 2137 + 2130 2138 /* 2131 2139 * Allocate and register inbound messaging buffers to be ready 2132 2140 * to receive channel and system management requests ··· 2145 2137 cm->rx_slots = RIOCM_RX_RING_SIZE; 2146 2138 mutex_init(&cm->rx_lock); 2147 2139 riocm_rx_fill(cm, RIOCM_RX_RING_SIZE); 2148 - cm->rx_wq = create_workqueue(DRV_NAME "/rxq"); 2149 - if (!cm->rx_wq) { 2150 - riocm_error("failed to allocate IBMBOX_%d on %s", 2151 - cmbox, mport->name); 2152 - rio_release_outb_mbox(mport, cmbox); 2153 - kfree(cm); 2154 - return -ENOMEM; 2155 - } 2156 - 2157 2140 INIT_WORK(&cm->rx_work, rio_ibmsg_handler); 2158 2141 2159 2142 cm->tx_slot = 0;
+9 -6
drivers/scsi/ufs/ufs-hisi.c
··· 467 467 host->hba = hba; 468 468 ufshcd_set_variant(hba, host); 469 469 470 - host->rst = devm_reset_control_get(dev, "rst"); 470 + host->rst = devm_reset_control_get(dev, "rst"); 471 471 if (IS_ERR(host->rst)) { 472 472 dev_err(dev, "%s: failed to get reset control\n", __func__); 473 - return PTR_ERR(host->rst); 473 + err = PTR_ERR(host->rst); 474 + goto error; 474 475 } 475 476 476 477 ufs_hisi_set_pm_lvl(hba); 477 478 478 479 err = ufs_hisi_get_resource(host); 479 - if (err) { 480 - ufshcd_set_variant(hba, NULL); 481 - return err; 482 - } 480 + if (err) 481 + goto error; 483 482 484 483 return 0; 484 + 485 + error: 486 + ufshcd_set_variant(hba, NULL); 487 + return err; 485 488 } 486 489 487 490 static int ufs_hi3660_init(struct ufs_hba *hba)
+13 -10
drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
··· 527 527 struct mlme_priv *pmlmepriv = &padapter->mlmepriv; 528 528 struct security_priv *psecuritypriv = &(padapter->securitypriv); 529 529 struct sta_priv *pstapriv = &padapter->stapriv; 530 + char *grpkey = padapter->securitypriv.dot118021XGrpKey[param->u.crypt.idx].skey; 531 + char *txkey = padapter->securitypriv.dot118021XGrptxmickey[param->u.crypt.idx].skey; 532 + char *rxkey = padapter->securitypriv.dot118021XGrprxmickey[param->u.crypt.idx].skey; 530 533 534 param->u.crypt.err = 0; 532 535 param->u.crypt.alg[IEEE_CRYPT_ALG_NAME_LEN - 1] = '\0'; ··· 612 609 { 613 610 if (strcmp(param->u.crypt.alg, "WEP") == 0) 614 611 { 615 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 612 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 616 613 617 614 psecuritypriv->dot118021XGrpPrivacy = _WEP40_; 618 615 if (param->u.crypt.key_len == 13) ··· 625 622 { 626 623 psecuritypriv->dot118021XGrpPrivacy = _TKIP_; 627 624 628 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 625 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 629 626 630 627 /* DEBUG_ERR("set key length :param->u.crypt.key_len =%d\n", param->u.crypt.key_len); */ 631 628 /* set mic key */ 632 - memcpy(psecuritypriv->dot118021XGrptxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[16]), 8); 633 - memcpy(psecuritypriv->dot118021XGrprxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[24]), 8); 629 + memcpy(txkey, &(param->u.crypt.key[16]), 8); 630 + memcpy(rxkey, &(param->u.crypt.key[24]), 8); 634 631 635 632 psecuritypriv->busetkipkey = true; ··· 639 636 { 640 637 psecuritypriv->dot118021XGrpPrivacy = _AES_; 641 638 642 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 639 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 643 640 644 641 else 645 642 { ··· 716 713 { 717 714 if (strcmp(param->u.crypt.alg, "WEP") == 0) 718 715 { 719 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 716 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 720 717 721 718 psecuritypriv->dot118021XGrpPrivacy = _WEP40_; 722 719 if (param->u.crypt.key_len == 13) ··· 728 725 { 729 726 psecuritypriv->dot118021XGrpPrivacy = _TKIP_; 730 727 731 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 728 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 732 729 733 730 /* DEBUG_ERR("set key length :param->u.crypt.key_len =%d\n", param->u.crypt.key_len); */ 734 731 /* set mic key */ 735 - memcpy(psecuritypriv->dot118021XGrptxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[16]), 8); 736 - memcpy(psecuritypriv->dot118021XGrprxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[24]), 8); 732 + memcpy(txkey, &(param->u.crypt.key[16]), 8); 733 + memcpy(rxkey, &(param->u.crypt.key[24]), 8); 737 734 738 735 psecuritypriv->busetkipkey = true; ··· 742 739 { 743 740 psecuritypriv->dot118021XGrpPrivacy = _AES_; 744 741 745 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 742 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 746 743 747 744 else 748 745 {
+12 -9
drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
··· 2963 2963 struct mlme_priv *pmlmepriv = &padapter->mlmepriv; 2964 2964 struct security_priv *psecuritypriv = &(padapter->securitypriv); 2965 2965 struct sta_priv *pstapriv = &padapter->stapriv; 2966 + char *txkey = padapter->securitypriv.dot118021XGrptxmickey[param->u.crypt.idx].skey; 2967 + char *rxkey = padapter->securitypriv.dot118021XGrprxmickey[param->u.crypt.idx].skey; 2968 + char *grpkey = psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey; 2966 2969 2967 2970 param->u.crypt.err = 0; 2968 2971 param->u.crypt.alg[IEEE_CRYPT_ALG_NAME_LEN - 1] = '\0'; ··· 3067 3064 if (!psta && check_fwstate(pmlmepriv, WIFI_AP_STATE)) { /* group key */ 3068 3065 if (param->u.crypt.set_tx == 1) { 3069 3066 if (strcmp(param->u.crypt.alg, "WEP") == 0) { 3070 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3067 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3071 3068 3072 3069 psecuritypriv->dot118021XGrpPrivacy = _WEP40_; 3073 3070 if (param->u.crypt.key_len == 13) ··· 3076 3073 } else if (strcmp(param->u.crypt.alg, "TKIP") == 0) { 3077 3074 psecuritypriv->dot118021XGrpPrivacy = _TKIP_; 3078 3075 3079 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3076 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3080 3077 3081 3078 /* DEBUG_ERR("set key length :param->u.crypt.key_len =%d\n", param->u.crypt.key_len); */ 3082 3079 /* set mic key */ 3083 - memcpy(psecuritypriv->dot118021XGrptxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[16]), 8); 3080 + memcpy(txkey, &(param->u.crypt.key[16]), 8); 3084 3081 memcpy(psecuritypriv->dot118021XGrprxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[24]), 8); 3085 3082 3086 3083 psecuritypriv->busetkipkey = true; ··· 3089 3086 else if (strcmp(param->u.crypt.alg, "CCMP") == 0) { 3090 3087 psecuritypriv->dot118021XGrpPrivacy = _AES_; 3091 3088 3092 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3089 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3093 3090 } else { 3094 3091 psecuritypriv->dot118021XGrpPrivacy = _NO_PRIVACY_; 3095 3092 } ··· 3145 3142 3146 3143 } else { /* group key??? */ 3147 3144 if (strcmp(param->u.crypt.alg, "WEP") == 0) { 3148 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3145 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3149 3146 3150 3147 psecuritypriv->dot118021XGrpPrivacy = _WEP40_; 3151 3148 if (param->u.crypt.key_len == 13) ··· 3153 3150 } else if (strcmp(param->u.crypt.alg, "TKIP") == 0) { 3154 3151 psecuritypriv->dot118021XGrpPrivacy = _TKIP_; 3155 3152 3156 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3153 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3157 3154 3158 3155 /* DEBUG_ERR("set key length :param->u.crypt.key_len =%d\n", param->u.crypt.key_len); */ 3159 3156 /* set mic key */ 3160 - memcpy(psecuritypriv->dot118021XGrptxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[16]), 8); 3161 - memcpy(psecuritypriv->dot118021XGrprxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[24]), 8); 3157 + memcpy(txkey, &(param->u.crypt.key[16]), 8); 3158 + memcpy(rxkey, &(param->u.crypt.key[24]), 8); 3162 3159 3163 3160 psecuritypriv->busetkipkey = true; 3164 3161 3165 3162 } else if (strcmp(param->u.crypt.alg, "CCMP") == 0) { 3166 3163 psecuritypriv->dot118021XGrpPrivacy = _AES_; 3167 3164 3168 - memcpy(psecuritypriv->dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3165 + memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len)); 3169 3166 } else { 3170 3167 psecuritypriv->dot118021XGrpPrivacy = _NO_PRIVACY_; 3171 3168 }
+2
drivers/tty/serial/max310x.c
··· 1519 1519 1520 1520 #ifdef CONFIG_SPI_MASTER 1521 1521 ret = spi_register_driver(&max310x_spi_driver); 1522 + if (ret) 1523 + uart_unregister_driver(&max310x_uart); 1522 1524 #endif 1523 1525 1524 1526 return ret;
-3
drivers/tty/serial/mvebu-uart.c
··· 818 818 return -EINVAL; 819 819 } 820 820 821 - if (!match) 822 - return -ENODEV; 823 - 824 821 /* Assume that all UART ports have a DT alias or none has */ 825 822 id = of_alias_get_id(pdev->dev.of_node, "serial"); 826 823 if (!pdev->dev.of_node || id < 0)
+1 -1
drivers/tty/vt/vt.c
··· 1171 1171 /* Resizes the resolution of the display adapater */ 1172 1172 int err = 0; 1173 1173 1174 - if (vc->vc_mode != KD_GRAPHICS && vc->vc_sw->con_resize) 1174 + if (vc->vc_sw->con_resize) 1175 1175 err = vc->vc_sw->con_resize(vc, width, height, user); 1176 1176 1177 1177 return err;
+49 -12
drivers/tty/vt/vt_ioctl.c
··· 671 671 if (copy_from_user(&v, cs, sizeof(struct vt_consize))) 672 672 return -EFAULT; 673 673 674 - if (v.v_vlin) 675 - pr_info_once("\"struct vt_consize\"->v_vlin is ignored. Please report if you need this.\n"); 676 - if (v.v_clin) 677 - pr_info_once("\"struct vt_consize\"->v_clin is ignored. Please report if you need this.\n"); 674 + /* FIXME: Should check the copies properly */ 675 + if (!v.v_vlin) 676 + v.v_vlin = vc->vc_scan_lines; 678 677 679 - console_lock(); 680 - for (i = 0; i < MAX_NR_CONSOLES; i++) { 681 - vc = vc_cons[i].d; 682 - 683 - if (vc) { 684 - vc->vc_resize_user = 1; 685 - vc_resize(vc, v.v_cols, v.v_rows); 678 + if (v.v_clin) { 679 + int rows = v.v_vlin / v.v_clin; 680 + if (v.v_rows != rows) { 681 + if (v.v_rows) /* Parameters don't add up */ 682 + return -EINVAL; 683 + v.v_rows = rows; 686 684 } 687 685 } 688 - console_unlock(); 686 + 687 + if (v.v_vcol && v.v_ccol) { 688 + int cols = v.v_vcol / v.v_ccol; 689 + if (v.v_cols != cols) { 690 + if (v.v_cols) 691 + return -EINVAL; 692 + v.v_cols = cols; 693 + } 694 + } 695 + 696 + if (v.v_clin > 32) 697 + return -EINVAL; 698 + 699 + for (i = 0; i < MAX_NR_CONSOLES; i++) { 700 + struct vc_data *vcp; 701 + 702 + if (!vc_cons[i].d) 703 + continue; 704 + console_lock(); 705 + vcp = vc_cons[i].d; 706 + if (vcp) { 707 + int ret; 708 + int save_scan_lines = vcp->vc_scan_lines; 709 + int save_cell_height = vcp->vc_cell_height; 710 + 711 + if (v.v_vlin) 712 + vcp->vc_scan_lines = v.v_vlin; 713 + if (v.v_clin) 714 + vcp->vc_cell_height = v.v_clin; 715 + vcp->vc_resize_user = 1; 716 + ret = vc_resize(vcp, v.v_cols, v.v_rows); 717 + if (ret) { 718 + vcp->vc_scan_lines = save_scan_lines; 719 + vcp->vc_cell_height = save_cell_height; 720 + console_unlock(); 721 + return ret; 722 + } 723 + } 724 + console_unlock(); 725 + } 689 726 690 727 return 0; 691 728 }
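The rewritten VT_RESIZEX path cross-checks the caller's geometry before touching any console: rows must equal v_vlin / v_clin and columns v_vcol / v_ccol whenever both operands are given. As a worked example, with v_vlin = 400 scan lines and v_clin = 16 scan lines per cell, the only acceptable v_rows is 400 / 16 = 25; a request with v_rows = 30 now fails with -EINVAL, and v_clin values above 32 are rejected outright. If vc_resize() fails for one console, its saved vc_scan_lines and vc_cell_height are restored so that console is not left with half-applied state.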
+9 -3
drivers/uio/uio_hv_generic.c
··· 291 291 pdata->recv_buf = vzalloc(RECV_BUFFER_SIZE); 292 292 if (pdata->recv_buf == NULL) { 293 293 ret = -ENOMEM; 294 - goto fail_close; 294 + goto fail_free_ring; 295 295 } 296 296 297 297 ret = vmbus_establish_gpadl(channel, pdata->recv_buf, 298 298 RECV_BUFFER_SIZE, &pdata->recv_gpadl); 299 - if (ret) 299 + if (ret) { 300 + vfree(pdata->recv_buf); 300 301 goto fail_close; 302 + } 301 303 302 304 /* put Global Physical Address Label in name */ 303 305 snprintf(pdata->recv_name, sizeof(pdata->recv_name), ··· 318 316 319 317 ret = vmbus_establish_gpadl(channel, pdata->send_buf, 320 318 SEND_BUFFER_SIZE, &pdata->send_gpadl); 321 - if (ret) 319 + if (ret) { 320 + vfree(pdata->send_buf); 322 321 goto fail_close; 322 + } 323 323 324 324 snprintf(pdata->send_name, sizeof(pdata->send_name), 325 325 "send:%u", pdata->send_gpadl); ··· 351 347 352 348 fail_close: 353 349 hv_uio_cleanup(dev, pdata); 350 + fail_free_ring: 351 + vmbus_free_ring(dev->channel); 354 352 355 353 return ret; 356 354 }
+1 -1
drivers/uio/uio_pci_generic.c
··· 84 84 } 85 85 86 86 if (pdev->irq && !pci_intx_mask_supported(pdev)) 87 - return -ENOMEM; 87 + return -ENODEV; 88 88 89 89 gdev = devm_kzalloc(&pdev->dev, sizeof(struct uio_pci_generic_dev), GFP_KERNEL); 90 90 if (!gdev)
+22 -8
drivers/usb/class/cdc-wdm.c
··· 321 321 322 322 } 323 323 324 - static void kill_urbs(struct wdm_device *desc) 324 + static void poison_urbs(struct wdm_device *desc) 325 325 { 326 326 /* the order here is essential */ 327 - usb_kill_urb(desc->command); 328 - usb_kill_urb(desc->validity); 329 - usb_kill_urb(desc->response); 327 + usb_poison_urb(desc->command); 328 + usb_poison_urb(desc->validity); 329 + usb_poison_urb(desc->response); 330 + } 331 + 332 + static void unpoison_urbs(struct wdm_device *desc) 333 + { 334 + /* 335 + * the order here is not essential 336 + * it is symmetrical just to be nice 337 + */ 338 + usb_unpoison_urb(desc->response); 339 + usb_unpoison_urb(desc->validity); 340 + usb_unpoison_urb(desc->command); 330 341 } 331 342 332 343 static void free_urbs(struct wdm_device *desc) ··· 752 741 if (!desc->count) { 753 742 if (!test_bit(WDM_DISCONNECTING, &desc->flags)) { 754 743 dev_dbg(&desc->intf->dev, "wdm_release: cleanup\n"); 755 - kill_urbs(desc); 744 + poison_urbs(desc); 756 745 spin_lock_irq(&desc->iuspin); 757 746 desc->resp_count = 0; 758 747 spin_unlock_irq(&desc->iuspin); 759 748 desc->manage_power(desc->intf, 0); 749 + unpoison_urbs(desc); 760 750 } else { 761 751 /* must avoid dev_printk here as desc->intf is invalid */ 762 752 pr_debug(KBUILD_MODNAME " %s: device gone - cleaning up\n", __func__); ··· 1049 1037 wake_up_all(&desc->wait); 1050 1038 mutex_lock(&desc->rlock); 1051 1039 mutex_lock(&desc->wlock); 1040 + poison_urbs(desc); 1052 1041 cancel_work_sync(&desc->rxwork); 1053 1042 cancel_work_sync(&desc->service_outs_intr); 1054 - kill_urbs(desc); 1055 1043 mutex_unlock(&desc->wlock); 1056 1044 mutex_unlock(&desc->rlock); 1057 1045 ··· 1092 1080 set_bit(WDM_SUSPENDING, &desc->flags); 1093 1081 spin_unlock_irq(&desc->iuspin); 1094 1082 /* callback submits work - order is essential */ 1095 - kill_urbs(desc); 1083 + poison_urbs(desc); 1096 1084 cancel_work_sync(&desc->rxwork); 1097 1085 cancel_work_sync(&desc->service_outs_intr); 1086 + unpoison_urbs(desc); 1098 1087 } 1099 1088 if (!PMSG_IS_AUTO(message)) { 1100 1089 mutex_unlock(&desc->wlock); ··· 1153 1140 wake_up_all(&desc->wait); 1154 1141 mutex_lock(&desc->rlock); 1155 1142 mutex_lock(&desc->wlock); 1156 - kill_urbs(desc); 1143 + poison_urbs(desc); 1157 1144 cancel_work_sync(&desc->rxwork); 1158 1145 cancel_work_sync(&desc->service_outs_intr); 1159 1146 return 0; ··· 1164 1151 struct wdm_device *desc = wdm_find_device(intf); 1165 1152 int rv; 1166 1153 1154 + unpoison_urbs(desc); 1167 1155 clear_bit(WDM_OVERFLOW, &desc->flags); 1168 1156 clear_bit(WDM_RESETTING, &desc->flags); 1169 1157 rv = recover_from_urb_loss(desc);
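usb_kill_urb() only cancels the current submission; nothing stops a completion handler or the rx work item from immediately resubmitting, which is the race this patch closes. usb_poison_urb() cancels and additionally blocks future submissions (usb_submit_urb() then fails with -EPERM) until usb_unpoison_urb() re-arms the URB, which is why the resume and non-disconnect release paths now unpoison. A toy userspace model of the distinction, not the USB core itself:

#include <errno.h>
#include <stdio.h>

struct urb_model {
	int poisoned;
	int in_flight;
};

static int submit(struct urb_model *u)
{
	if (u->poisoned)
		return -EPERM;	/* like usb_submit_urb() on a poisoned URB */
	u->in_flight = 1;
	return 0;
}

static void poison(struct urb_model *u)
{
	u->poisoned = 1;	/* block future submissions... */
	u->in_flight = 0;	/* ...and cancel the current one */
}

static void unpoison(struct urb_model *u)
{
	u->poisoned = 0;
}

int main(void)
{
	struct urb_model u = { 0, 0 };

	poison(&u);
	printf("resubmit while poisoned: %d\n", submit(&u));	/* -EPERM */
	unpoison(&u);
	printf("resubmit after unpoison: %d\n", submit(&u));	/* 0 */
	return 0;
}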
+3 -3
drivers/usb/core/hub.c
··· 3642 3642 * sequence. 3643 3643 */ 3644 3644 status = hub_port_status(hub, port1, &portstatus, &portchange); 3645 - 3646 - /* TRSMRCY = 10 msec */ 3647 - msleep(10); 3648 3645 } 3649 3646 3650 3647 SuspendCleared: ··· 3656 3659 usb_clear_port_feature(hub->hdev, port1, 3657 3660 USB_PORT_FEAT_C_SUSPEND); 3658 3661 } 3662 + 3663 + /* TRSMRCY = 10 msec */ 3664 + msleep(10); 3659 3665 } 3660 3666 3661 3667 if (udev->persist_enabled)
+2
drivers/usb/dwc2/core.h
··· 113 113 * @debugfs: File entry for debugfs file for this endpoint. 114 114 * @dir_in: Set to true if this endpoint is of the IN direction, which 115 115 * means that it is sending data to the Host. 116 + * @map_dir: Set to the value of dir_in when the DMA buffer is mapped. 116 117 * @index: The index for the endpoint registers. 117 118 * @mc: Multi Count - number of transactions per microframe 118 119 * @interval: Interval for periodic endpoints, in frames or microframes. ··· 163 162 unsigned short fifo_index; 164 163 165 164 unsigned char dir_in; 165 + unsigned char map_dir; 166 166 unsigned char index; 167 167 unsigned char mc; 168 168 u16 interval;
+2 -1
drivers/usb/dwc2/gadget.c
··· 422 422 { 423 423 struct usb_request *req = &hs_req->req; 424 424 425 - usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->dir_in); 425 + usb_gadget_unmap_request(&hsotg->gadget, req, hs_ep->map_dir); 426 426 } 427 427 428 428 /* ··· 1242 1242 { 1243 1243 int ret; 1244 1244 1245 + hs_ep->map_dir = hs_ep->dir_in; 1245 1246 ret = usb_gadget_map_request(&hsotg->gadget, req, hs_ep->dir_in); 1246 1247 if (ret) 1247 1248 goto dma_error;
-4
drivers/usb/dwc2/platform.c
··· 776 776 }; 777 777 778 778 module_platform_driver(dwc2_platform_driver); 779 - 780 - MODULE_DESCRIPTION("DESIGNWARE HS OTG Platform Glue"); 781 - MODULE_AUTHOR("Matthijs Kooijman <matthijs@stdin.nl>"); 782 - MODULE_LICENSE("Dual BSD/GPL");
+4 -3
drivers/usb/dwc3/core.h
··· 57 57 #define DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE 3 58 58 #define DWC3_DEVICE_EVENT_WAKEUP 4 59 59 #define DWC3_DEVICE_EVENT_HIBER_REQ 5 60 - #define DWC3_DEVICE_EVENT_EOPF 6 60 + #define DWC3_DEVICE_EVENT_SUSPEND 6 61 61 #define DWC3_DEVICE_EVENT_SOF 7 62 62 #define DWC3_DEVICE_EVENT_ERRATIC_ERROR 9 63 63 #define DWC3_DEVICE_EVENT_CMD_CMPL 10 ··· 460 460 #define DWC3_DEVTEN_CMDCMPLTEN BIT(10) 461 461 #define DWC3_DEVTEN_ERRTICERREN BIT(9) 462 462 #define DWC3_DEVTEN_SOFEN BIT(7) 463 - #define DWC3_DEVTEN_EOPFEN BIT(6) 463 + #define DWC3_DEVTEN_U3L2L1SUSPEN BIT(6) 464 464 #define DWC3_DEVTEN_HIBERNATIONREQEVTEN BIT(5) 465 465 #define DWC3_DEVTEN_WKUPEVTEN BIT(4) 466 466 #define DWC3_DEVTEN_ULSTCNGEN BIT(3) ··· 850 850 * @hwparams6: GHWPARAMS6 851 851 * @hwparams7: GHWPARAMS7 852 852 * @hwparams8: GHWPARAMS8 853 + * @hwparams9: GHWPARAMS9 853 854 */ 854 855 struct dwc3_hwparams { 855 856 u32 hwparams0; ··· 1375 1374 * 3 - ULStChng 1376 1375 * 4 - WkUpEvt 1377 1376 * 5 - Reserved 1378 - * 6 - EOPF 1377 + * 6 - Suspend (EOPF on revisions 2.10a and prior) 1379 1378 * 7 - SOF 1380 1379 * 8 - Reserved 1381 1380 * 9 - ErrticErr
+4 -4
drivers/usb/dwc3/debug.h
··· 221 221 snprintf(str, size, "WakeUp [%s]", 222 222 dwc3_gadget_link_string(state)); 223 223 break; 224 - case DWC3_DEVICE_EVENT_EOPF: 225 - snprintf(str, size, "End-Of-Frame [%s]", 224 + case DWC3_DEVICE_EVENT_SUSPEND: 225 + snprintf(str, size, "Suspend [%s]", 226 226 dwc3_gadget_link_string(state)); 227 227 break; 228 228 case DWC3_DEVICE_EVENT_SOF: ··· 353 353 return "Wake-Up"; 354 354 case DWC3_DEVICE_EVENT_HIBER_REQ: 355 355 return "Hibernation"; 356 - case DWC3_DEVICE_EVENT_EOPF: 357 - return "End of Periodic Frame"; 356 + case DWC3_DEVICE_EVENT_SUSPEND: 357 + return "Suspend"; 358 358 case DWC3_DEVICE_EVENT_SOF: 359 359 return "Start of Frame"; 360 360 case DWC3_DEVICE_EVENT_ERRATIC_ERROR:
+2 -1
drivers/usb/dwc3/dwc3-imx8mp.c
··· 165 165 if (err < 0) 166 166 goto disable_rpm; 167 167 168 - dwc3_np = of_get_child_by_name(node, "dwc3"); 168 + dwc3_np = of_get_compatible_child(node, "snps,dwc3"); 169 169 if (!dwc3_np) { 170 + err = -ENODEV; 170 171 dev_err(dev, "failed to find dwc3 core child\n"); 171 172 goto disable_rpm; 172 173 }
+5
drivers/usb/dwc3/dwc3-omap.c
··· 437 437 438 438 if (extcon_get_state(edev, EXTCON_USB) == true) 439 439 dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_VALID); 440 + else 441 + dwc3_omap_set_mailbox(omap, OMAP_DWC3_VBUS_OFF); 442 + 440 443 if (extcon_get_state(edev, EXTCON_USB_HOST) == true) 441 444 dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_GROUND); 445 + else 446 + dwc3_omap_set_mailbox(omap, OMAP_DWC3_ID_FLOAT); 442 447 443 448 omap->edev = edev; 444 449 }
+1
drivers/usb/dwc3/dwc3-pci.c
··· 123 123 PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"), 124 124 PROPERTY_ENTRY_BOOL("snps,dis_u3_susphy_quirk"), 125 125 PROPERTY_ENTRY_BOOL("snps,dis_u2_susphy_quirk"), 126 + PROPERTY_ENTRY_BOOL("snps,usb2-gadget-lpm-disable"), 126 127 PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"), 127 128 {} 128 129 };
+10 -3
drivers/usb/dwc3/gadget.c
··· 1684 1684 } 1685 1685 } 1686 1686 1687 - return __dwc3_gadget_kick_transfer(dep); 1687 + __dwc3_gadget_kick_transfer(dep); 1688 + 1689 + return 0; 1688 1690 } 1689 1691 1690 1692 static int dwc3_gadget_ep_queue(struct usb_ep *ep, struct usb_request *request, ··· 2324 2322 2325 2323 if (DWC3_VER_IS_PRIOR(DWC3, 250A)) 2326 2324 reg |= DWC3_DEVTEN_ULSTCNGEN; 2325 + 2326 + /* On 2.30a and above this bit enables U3/L2-L1 Suspend Events */ 2327 + if (!DWC3_VER_IS_PRIOR(DWC3, 230A)) 2328 + reg |= DWC3_DEVTEN_U3L2L1SUSPEN; 2327 2329 2328 2330 dwc3_writel(dwc->regs, DWC3_DEVTEN, reg); 2329 2331 } ··· 3746 3740 case DWC3_DEVICE_EVENT_LINK_STATUS_CHANGE: 3747 3741 dwc3_gadget_linksts_change_interrupt(dwc, event->event_info); 3748 3742 break; 3749 - case DWC3_DEVICE_EVENT_EOPF: 3743 + case DWC3_DEVICE_EVENT_SUSPEND: 3750 3744 /* It changed to be suspend event for version 2.30a and above */ 3751 3745 if (!DWC3_VER_IS_PRIOR(DWC3, 230A)) { 3752 3746 /* ··· 4064 4058 4065 4059 void dwc3_gadget_exit(struct dwc3 *dwc) 4066 4060 { 4067 - usb_del_gadget_udc(dwc->gadget); 4061 + usb_del_gadget(dwc->gadget); 4068 4062 dwc3_gadget_free_endpoints(dwc); 4063 + usb_put_gadget(dwc->gadget); 4069 4064 dma_free_coherent(dwc->sysdev, DWC3_BOUNCE_SIZE, dwc->bounce, 4070 4065 dwc->bounce_addr); 4071 4066 kfree(dwc->setup_buf);
+2 -2
drivers/usb/host/fotg210-hcd.c
··· 5568 5568 struct usb_hcd *hcd; 5569 5569 struct resource *res; 5570 5570 int irq; 5571 - int retval = -ENODEV; 5571 + int retval; 5572 5572 struct fotg210_hcd *fotg210; 5573 5573 5574 5574 if (usb_disabled()) ··· 5588 5588 hcd = usb_create_hcd(&fotg210_fotg210_hc_driver, dev, 5589 5589 dev_name(dev)); 5590 5590 if (!hcd) { 5591 - dev_err(dev, "failed to create hcd with err %d\n", retval); 5591 + dev_err(dev, "failed to create hcd\n"); 5592 5592 retval = -ENOMEM; 5593 5593 goto fail_create_hcd; 5594 5594 }
+3 -2
drivers/usb/host/xhci-ext-caps.h
··· 7 7 * Author: Sarah Sharp 8 8 * Some code borrowed from the Linux EHCI driver. 9 9 */ 10 - /* Up to 16 ms to halt an HC */ 11 - #define XHCI_MAX_HALT_USEC (16*1000) 10 + 11 + /* HC should halt within 16 ms, but use 32 ms as some hosts take longer */ 12 + #define XHCI_MAX_HALT_USEC (32 * 1000) 12 13 /* HC not running - set to 1 when run/stop bit is cleared. */ 13 14 #define XHCI_STS_HALT (1<<0) 14 15
+6 -2
drivers/usb/host/xhci-pci.c
··· 57 57 #define PCI_DEVICE_ID_INTEL_CML_XHCI 0xa3af 58 58 #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13 59 59 #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138 60 + #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e 60 61 61 62 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 62 63 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba ··· 167 166 (pdev->device == 0x15e0 || pdev->device == 0x15e1)) 168 167 xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND; 169 168 170 - if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5) 169 + if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5) { 171 170 xhci->quirks |= XHCI_DISABLE_SPARSE; 171 + xhci->quirks |= XHCI_RESET_ON_RESUME; 172 + } 172 173 173 174 if (pdev->vendor == PCI_VENDOR_ID_AMD) 174 175 xhci->quirks |= XHCI_TRUST_TX_LENGTH; ··· 246 243 pdev->device == PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI || 247 244 pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI || 248 245 pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI || 249 - pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI)) 246 + pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI || 247 + pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI)) 250 248 xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; 251 249 252 250 if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+11 -5
drivers/usb/host/xhci-ring.c
··· 862 862 return ret; 863 863 } 864 864 865 - static void xhci_handle_halted_endpoint(struct xhci_hcd *xhci, 865 + static int xhci_handle_halted_endpoint(struct xhci_hcd *xhci, 866 866 struct xhci_virt_ep *ep, unsigned int stream_id, 867 867 struct xhci_td *td, 868 868 enum xhci_ep_reset_type reset_type) ··· 875 875 * Device will be reset soon to recover the link so don't do anything 876 876 */ 877 877 if (ep->vdev->flags & VDEV_PORT_ERROR) 878 - return; 878 + return -ENODEV; 879 879 880 880 /* add td to cancelled list and let reset ep handler take care of it */ 881 881 if (reset_type == EP_HARD_RESET) { ··· 888 888 889 889 if (ep->ep_state & EP_HALTED) { 890 890 xhci_dbg(xhci, "Reset ep command already pending\n"); 891 - return; 891 + return 0; 892 892 } 893 893 894 894 err = xhci_reset_halted_ep(xhci, slot_id, ep->ep_index, reset_type); 895 895 if (err) 896 - return; 896 + return err; 897 897 898 898 ep->ep_state |= EP_HALTED; 899 899 900 900 xhci_ring_cmd_db(xhci); 901 + 902 + return 0; 901 903 } 902 904 903 905 /* ··· 1016 1014 struct xhci_td *td = NULL; 1017 1015 enum xhci_ep_reset_type reset_type; 1018 1016 struct xhci_command *command; 1017 + int err; 1019 1018 1020 1019 if (unlikely(TRB_TO_SUSPEND_PORT(le32_to_cpu(trb->generic.field[3])))) { 1021 1020 if (!xhci->devs[slot_id]) ··· 1061 1058 td->status = -EPROTO; 1062 1059 } 1063 1060 /* reset ep, reset handler cleans up cancelled tds */ 1064 - xhci_handle_halted_endpoint(xhci, ep, 0, td, reset_type); 1061 + err = xhci_handle_halted_endpoint(xhci, ep, 0, td, 1062 + reset_type); 1063 + if (err) 1064 + break; 1065 1065 xhci_stop_watchdog_timer_in_irq(xhci, ep); 1066 1066 return; 1067 1067 case EP_STATE_RUNNING:
+3 -3
drivers/usb/host/xhci.c
··· 1514 1514 * we need to issue an evaluate context command and wait on it. 1515 1515 */ 1516 1516 static int xhci_check_maxpacket(struct xhci_hcd *xhci, unsigned int slot_id, 1517 - unsigned int ep_index, struct urb *urb) 1517 + unsigned int ep_index, struct urb *urb, gfp_t mem_flags) 1518 1518 { 1519 1519 struct xhci_container_ctx *out_ctx; 1520 1520 struct xhci_input_control_ctx *ctrl_ctx; ··· 1545 1545 * changes max packet sizes. 1546 1546 */ 1547 1547 1548 - command = xhci_alloc_command(xhci, true, GFP_KERNEL); 1548 + command = xhci_alloc_command(xhci, true, mem_flags); 1549 1549 if (!command) 1550 1550 return -ENOMEM; 1551 1551 ··· 1639 1639 */ 1640 1640 if (urb->dev->speed == USB_SPEED_FULL) { 1641 1641 ret = xhci_check_maxpacket(xhci, slot_id, 1642 - ep_index, urb); 1642 + ep_index, urb, mem_flags); 1643 1643 if (ret < 0) { 1644 1644 xhci_urb_free_priv(urb_priv); 1645 1645 urb->hcpriv = NULL;
+1 -1
drivers/usb/musb/mediatek.c
··· 518 518 519 519 glue->xceiv = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2); 520 520 if (IS_ERR(glue->xceiv)) { 521 - dev_err(dev, "fail to getting usb-phy %d\n", ret); 522 521 ret = PTR_ERR(glue->xceiv); 522 + dev_err(dev, "fail to getting usb-phy %d\n", ret); 523 523 goto err_unregister_usb_phy; 524 524 } 525 525
+98 -14
drivers/usb/typec/tcpm/tcpm.c
··· 259 259 #define ALTMODE_DISCOVERY_MAX (SVID_DISCOVERY_MAX * MODE_DISCOVERY_MAX) 260 260 261 261 #define GET_SINK_CAP_RETRY_MS 100 262 + #define SEND_DISCOVER_RETRY_MS 100 262 263 263 264 struct pd_mode_data { 264 265 int svid_index; /* current SVID index */ ··· 367 366 struct kthread_work vdm_state_machine; 368 367 struct hrtimer enable_frs_timer; 369 368 struct kthread_work enable_frs; 369 + struct hrtimer send_discover_timer; 370 + struct kthread_work send_discover_work; 370 371 bool state_machine_running; 371 372 bool vdm_sm_running; 372 373 ··· 1181 1178 } 1182 1179 } 1183 1180 1181 + static void mod_send_discover_delayed_work(struct tcpm_port *port, unsigned int delay_ms) 1182 + { 1183 + if (delay_ms) { 1184 + hrtimer_start(&port->send_discover_timer, ms_to_ktime(delay_ms), HRTIMER_MODE_REL); 1185 + } else { 1186 + hrtimer_cancel(&port->send_discover_timer); 1187 + kthread_queue_work(port->wq, &port->send_discover_work); 1188 + } 1189 + } 1190 + 1184 1191 static void tcpm_set_state(struct tcpm_port *port, enum tcpm_state state, 1185 1192 unsigned int delay_ms) 1186 1193 { ··· 1868 1855 res = tcpm_ams_start(port, DISCOVER_IDENTITY); 1869 1856 if (res == 0) 1870 1857 port->send_discover = false; 1858 + else if (res == -EAGAIN) 1859 + mod_send_discover_delayed_work(port, 1860 + SEND_DISCOVER_RETRY_MS); 1871 1861 break; 1872 1862 case CMD_DISCOVER_SVID: 1873 1863 res = tcpm_ams_start(port, DISCOVER_SVIDS); ··· 1896 1880 } 1897 1881 1898 1882 if (res < 0) { 1899 - port->vdm_sm_running = false; 1883 + port->vdm_state = VDM_STATE_ERR_BUSY; 1900 1884 return; 1901 1885 } 1902 1886 } ··· 1912 1896 port->vdo_data[0] = port->vdo_retry; 1913 1897 port->vdo_count = 1; 1914 1898 port->vdm_state = VDM_STATE_READY; 1899 + tcpm_ams_finish(port); 1915 1900 break; 1916 1901 case VDM_STATE_BUSY: 1917 1902 port->vdm_state = VDM_STATE_ERR_TMOUT; ··· 1978 1961 port->vdm_state != VDM_STATE_BUSY && 1979 1962 port->vdm_state != VDM_STATE_SEND_MESSAGE); 1980 1963 1981 - if (port->vdm_state == VDM_STATE_ERR_TMOUT) 1964 + if (port->vdm_state < VDM_STATE_READY) 1982 1965 port->vdm_sm_running = false; 1983 1966 1984 1967 mutex_unlock(&port->lock); ··· 2407 2390 port->nr_sink_caps = cnt; 2408 2391 port->sink_cap_done = true; 2409 2392 if (port->ams == GET_SINK_CAPABILITIES) 2410 - tcpm_pd_handle_state(port, ready_state(port), NONE_AMS, 0); 2393 + tcpm_set_state(port, ready_state(port), 0); 2411 2394 /* Unexpected Sink Capabilities */ 2412 2395 else 2413 2396 tcpm_pd_handle_msg(port, ··· 2569 2552 port->sink_cap_done = true; 2570 2553 tcpm_set_state(port, ready_state(port), 0); 2571 2554 break; 2555 + case SRC_READY: 2556 + case SNK_READY: 2557 + if (port->vdm_state > VDM_STATE_READY) { 2558 + port->vdm_state = VDM_STATE_DONE; 2559 + if (tcpm_vdm_ams(port)) 2560 + tcpm_ams_finish(port); 2561 + mod_vdm_delayed_work(port, 0); 2562 + break; 2563 + } 2564 + fallthrough; 2572 2565 default: 2573 2566 tcpm_pd_handle_state(port, 2574 2567 port->pwr_role == TYPEC_SOURCE ? 
··· 3709 3682 return SNK_UNATTACHED; 3710 3683 } 3711 3684 3712 - static void tcpm_check_send_discover(struct tcpm_port *port) 3713 - { 3714 - if ((port->data_role == TYPEC_HOST || port->negotiated_rev > PD_REV20) && 3715 - port->send_discover && port->pd_capable) 3716 - tcpm_send_vdm(port, USB_SID_PD, CMD_DISCOVER_IDENT, NULL, 0); 3717 - port->send_discover = false; 3718 - } 3719 - 3720 3685 static void tcpm_swap_complete(struct tcpm_port *port, int result) 3721 3686 { 3722 3687 if (port->swap_pending) { ··· 3945 3926 break; 3946 3927 } 3947 3928 3948 - tcpm_check_send_discover(port); 3929 + /* 3930 + * 6.4.4.3.1 Discover Identity 3931 + * "The Discover Identity Command Shall only be sent to SOP when there is an 3932 + * Explicit Contract." 3933 + * For now, this driver only supports SOP for DISCOVER_IDENTITY, thus using 3934 + * port->explicit_contract to decide whether to send the command. 3935 + */ 3936 + if (port->explicit_contract) 3937 + mod_send_discover_delayed_work(port, 0); 3938 + else 3939 + port->send_discover = false; 3940 + 3949 3941 /* 3950 3942 * 6.3.5 3951 3943 * Sending ping messages is not necessary if ··· 4085 4055 if (port->vbus_present) { 4086 4056 u32 current_lim = tcpm_get_current_limit(port); 4087 4057 4088 - if (port->slow_charger_loop || (current_lim > PD_P_SNK_STDBY_MW / 5)) 4058 + if (port->slow_charger_loop && (current_lim > PD_P_SNK_STDBY_MW / 5)) 4089 4059 current_lim = PD_P_SNK_STDBY_MW / 5; 4090 4060 tcpm_set_current_limit(port, current_lim, 5000); 4091 4061 tcpm_set_charge(port, true); ··· 4224 4194 break; 4225 4195 } 4226 4196 4227 - tcpm_check_send_discover(port); 4197 + /* 4198 + * 6.4.4.3.1 Discover Identity 4199 + * "The Discover Identity Command Shall only be sent to SOP when there is an 4200 + * Explicit Contract." 4201 + * For now, this driver only supports SOP for DISCOVER_IDENTITY, thus using 4202 + * port->explicit_contract. 4203 + */ 4204 + if (port->explicit_contract) 4205 + mod_send_discover_delayed_work(port, 0); 4206 + else 4207 + port->send_discover = false; 4208 + 4228 4209 power_supply_changed(port->psy); 4229 4210 break; 4230 4211 ··· 5329 5288 mutex_unlock(&port->lock); 5330 5289 } 5331 5290 5291 + static void tcpm_send_discover_work(struct kthread_work *work) 5292 + { 5293 + struct tcpm_port *port = container_of(work, struct tcpm_port, send_discover_work); 5294 + 5295 + mutex_lock(&port->lock); 5296 + /* No need to send DISCOVER_IDENTITY anymore */ 5297 + if (!port->send_discover) 5298 + goto unlock; 5299 + 5300 + /* Retry if the port is not idle */ 5301 + if ((port->state != SRC_READY && port->state != SNK_READY) || port->vdm_sm_running) { 5302 + mod_send_discover_delayed_work(port, SEND_DISCOVER_RETRY_MS); 5303 + goto unlock; 5304 + } 5305 + 5306 + /* Only send the Message if the port is host for PD rev2.0 */ 5307 + if (port->data_role == TYPEC_HOST || port->negotiated_rev > PD_REV20) 5308 + tcpm_send_vdm(port, USB_SID_PD, CMD_DISCOVER_IDENT, NULL, 0); 5309 + 5310 + unlock: 5311 + mutex_unlock(&port->lock); 5312 + } 5313 + 5332 5314 static int tcpm_dr_set(struct typec_port *p, enum typec_data_role data) 5333 5315 { 5334 5316 struct tcpm_port *port = typec_get_drvdata(p); ··· 5818 5754 if (!fwnode) 5819 5755 return -EINVAL; 5820 5756 5757 + /* 5758 + * This fwnode has a "compatible" property, but is never populated as a 5759 + * struct device. Instead we simply parse it to read the properties. 5760 + * This breaks fw_devlink=on. To maintain backward compatibility 5761 + * with existing DT files, we work around this by deleting any 5762 + * fwnode_links to/from this fwnode. 5763 + */ 5764 + fw_devlink_purge_absent_suppliers(fwnode); 5765 + 5821 5766 /* USB data support is optional */ 5822 5767 ret = fwnode_property_read_string(fwnode, "data-role", &cap_str); 5823 5768 if (ret == 0) { ··· 6166 6093 return HRTIMER_NORESTART; 6167 6094 } 6168 6095 6096 + static enum hrtimer_restart send_discover_timer_handler(struct hrtimer *timer) 6097 + { 6098 + struct tcpm_port *port = container_of(timer, struct tcpm_port, send_discover_timer); 6099 + 6100 + kthread_queue_work(port->wq, &port->send_discover_work); 6101 + return HRTIMER_NORESTART; 6102 + } 6103 + 6169 6104 struct tcpm_port *tcpm_register_port(struct device *dev, struct tcpc_dev *tcpc) 6170 6105 { 6171 6106 struct tcpm_port *port; ··· 6204 6123 kthread_init_work(&port->vdm_state_machine, vdm_state_machine_work); 6205 6124 kthread_init_work(&port->event_work, tcpm_pd_event_handler); 6206 6125 kthread_init_work(&port->enable_frs, tcpm_enable_frs_work); 6126 + kthread_init_work(&port->send_discover_work, tcpm_send_discover_work); 6207 6127 hrtimer_init(&port->state_machine_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 6208 6128 port->state_machine_timer.function = state_machine_timer_handler; 6209 6129 hrtimer_init(&port->vdm_state_machine_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 6210 6130 port->vdm_state_machine_timer.function = vdm_state_machine_timer_handler; 6211 6131 hrtimer_init(&port->enable_frs_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 6212 6132 port->enable_frs_timer.function = enable_frs_timer_handler; 6133 + hrtimer_init(&port->send_discover_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 6134 + port->send_discover_timer.function = send_discover_timer_handler; 6213 6135 6214 6136 spin_lock_init(&port->pd_event_lock); 6215 6137
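The retry machinery replaces a single opportunistic tcpm_check_send_discover() call: mod_send_discover_delayed_work(port, 0) cancels the hrtimer and queues the work immediately, a non-zero delay arms the timer, and tcpm_send_discover_work() re-arms itself every SEND_DISCOVER_RETRY_MS (100 ms) while the port is not in SRC_READY/SNK_READY or the VDM state machine is still running. A simplified userspace model of that retry loop, with illustrative names (port_model, discover_work):

#include <stdio.h>

#define RETRY_MS 100

struct port_model {
	int send_discover;	/* still owe a DISCOVER_IDENTITY? */
	int busy;		/* state machine not in SRC/SNK_READY */
};

/* Returns the delay before the next attempt, or 0 when done. */
static int discover_work(struct port_model *p)
{
	if (!p->send_discover)
		return 0;		/* nothing left to do */
	if (p->busy)
		return RETRY_MS;	/* re-arm the timer, try later */
	printf("send DISCOVER_IDENTITY\n");
	p->send_discover = 0;
	return 0;
}

int main(void)
{
	struct port_model p = { 1, 1 };

	printf("busy: next delay %d ms\n", discover_work(&p));	/* 100 */
	p.busy = 0;
	printf("idle: next delay %d ms\n", discover_work(&p));	/* sends, 0 */
	return 0;
}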
+37 -11
drivers/usb/typec/ucsi/ucsi.c
··· 495 495 } 496 496 } 497 497 498 - static void ucsi_get_pdos(struct ucsi_connector *con, int is_partner) 498 + static int ucsi_get_pdos(struct ucsi_connector *con, int is_partner, 499 + u32 *pdos, int offset, int num_pdos) 499 500 { 500 501 struct ucsi *ucsi = con->ucsi; 501 502 u64 command; ··· 504 503 505 504 command = UCSI_COMMAND(UCSI_GET_PDOS) | UCSI_CONNECTOR_NUMBER(con->num); 506 505 command |= UCSI_GET_PDOS_PARTNER_PDO(is_partner); 507 - command |= UCSI_GET_PDOS_NUM_PDOS(UCSI_MAX_PDOS - 1); 506 + command |= UCSI_GET_PDOS_PDO_OFFSET(offset); 507 + command |= UCSI_GET_PDOS_NUM_PDOS(num_pdos - 1); 508 508 command |= UCSI_GET_PDOS_SRC_PDOS; 509 - ret = ucsi_send_command(ucsi, command, con->src_pdos, 510 - sizeof(con->src_pdos)); 511 - if (ret < 0) { 509 + ret = ucsi_send_command(ucsi, command, pdos + offset, 510 + num_pdos * sizeof(u32)); 511 + if (ret < 0) 512 512 dev_err(ucsi->dev, "UCSI_GET_PDOS failed (%d)\n", ret); 513 - return; 514 - } 515 - con->num_pdos = ret / sizeof(u32); /* number of bytes to 32-bit PDOs */ 516 - if (ret == 0) 513 + if (ret == 0 && offset == 0) 517 514 dev_warn(ucsi->dev, "UCSI_GET_PDOS returned 0 bytes\n"); 515 + 516 + return ret; 517 + } 518 + 519 + static void ucsi_get_src_pdos(struct ucsi_connector *con, int is_partner) 520 + { 521 + int ret; 522 + 523 + /* UCSI max payload means only getting at most 4 PDOs at a time */ 524 + ret = ucsi_get_pdos(con, 1, con->src_pdos, 0, UCSI_MAX_PDOS); 525 + if (ret < 0) 526 + return; 527 + 528 + con->num_pdos = ret / sizeof(u32); /* number of bytes to 32-bit PDOs */ 529 + if (con->num_pdos < UCSI_MAX_PDOS) 530 + return; 531 + 532 + /* get the remaining PDOs, if any */ 533 + ret = ucsi_get_pdos(con, 1, con->src_pdos, UCSI_MAX_PDOS, 534 + PDO_MAX_OBJECTS - UCSI_MAX_PDOS); 535 + if (ret < 0) 536 + return; 537 + 538 + con->num_pdos += ret / sizeof(u32); 518 539 } 519 540 520 541 static void ucsi_pwr_opmode_change(struct ucsi_connector *con) ··· 545 522 case UCSI_CONSTAT_PWR_OPMODE_PD: 546 523 con->rdo = con->status.request_data_obj; 547 524 typec_set_pwr_opmode(con->port, TYPEC_PWR_MODE_PD); 548 - ucsi_get_pdos(con, 1); 525 + ucsi_get_src_pdos(con, 1); 549 526 break; 550 527 case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5: 551 528 con->rdo = 0; ··· 1022 999 .pr_set = ucsi_pr_swap 1023 1000 }; 1024 1001 1002 + /* Caller must call fwnode_handle_put() after use */ 1025 1003 static struct fwnode_handle *ucsi_find_fwnode(struct ucsi_connector *con) 1026 1004 { 1027 1005 struct fwnode_handle *fwnode; ··· 1057 1033 command |= UCSI_CONNECTOR_NUMBER(con->num); 1058 1034 ret = ucsi_send_command(ucsi, command, &con->cap, sizeof(con->cap)); 1059 1035 if (ret < 0) 1060 - goto out; 1036 + goto out_unlock; 1061 1037 1062 1038 if (con->cap.op_mode & UCSI_CONCAP_OPMODE_DRP) 1063 1039 cap->data = TYPEC_PORT_DRD; ··· 1175 1151 trace_ucsi_register_port(con->num, &con->status); 1176 1152 1177 1153 out: 1154 + fwnode_handle_put(cap->fwnode); 1155 + out_unlock: 1178 1156 mutex_unlock(&con->lock); 1179 1157 return ret; 1180 1158 }
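The two-step read exists because a UCSI GET_PDOS response is capped at four PDOs per command, while USB PD allows a source to advertise up to seven (PDO_MAX_OBJECTS). ucsi_get_pdos() returns a byte count, so a full first chunk is 4 * sizeof(u32) = 16 bytes, giving num_pdos = 4; only then does ucsi_get_src_pdos() issue a second command for the remaining PDO_MAX_OBJECTS - UCSI_MAX_PDOS = 3 objects at offset 4. A partner advertising, say, five PDOs answers the first command with 16 bytes and the second with 4, for num_pdos = 5. This is also why src_pdos grows from UCSI_MAX_PDOS to PDO_MAX_OBJECTS entries in the ucsi.h hunk below.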
+4 -2
drivers/usb/typec/ucsi/ucsi.h
··· 8 8 #include <linux/power_supply.h> 9 9 #include <linux/types.h> 10 10 #include <linux/usb/typec.h> 11 + #include <linux/usb/pd.h> 11 12 #include <linux/usb/role.h> 12 13 13 14 /* -------------------------------------------------------------------------- */ ··· 135 134 136 135 /* GET_PDOS command bits */ 137 136 #define UCSI_GET_PDOS_PARTNER_PDO(_r_) ((u64)(_r_) << 23) 137 + #define UCSI_GET_PDOS_PDO_OFFSET(_r_) ((u64)(_r_) << 24) 138 138 #define UCSI_GET_PDOS_NUM_PDOS(_r_) ((u64)(_r_) << 32) 139 + #define UCSI_MAX_PDOS (4) 139 140 #define UCSI_GET_PDOS_SRC_PDOS ((u64)1 << 34) 140 141 141 142 /* -------------------------------------------------------------------------- */ ··· 305 302 306 303 #define UCSI_MAX_SVID 5 307 304 #define UCSI_MAX_ALTMODES (UCSI_MAX_SVID * 6) 308 - #define UCSI_MAX_PDOS (4) 309 305 310 306 #define UCSI_TYPEC_VSAFE5V 5000 311 307 #define UCSI_TYPEC_1_5_CURRENT 1500 ··· 332 330 struct power_supply *psy; 333 331 struct power_supply_desc psy_desc; 334 332 u32 rdo; 335 - u32 src_pdos[UCSI_MAX_PDOS]; 333 + u32 src_pdos[PDO_MAX_OBJECTS]; 336 334 int num_pdos; 337 335 338 336 struct usb_role_switch *usb_role_sw;
+32 -24
drivers/video/console/vgacon.c
··· 380 380 vc_resize(c, vga_video_num_columns, vga_video_num_lines); 381 381 382 382 c->vc_scan_lines = vga_scan_lines; 383 - c->vc_font.height = vga_video_font_height; 383 + c->vc_font.height = c->vc_cell_height = vga_video_font_height; 384 384 c->vc_complement_mask = 0x7700; 385 385 if (vga_512_chars) 386 386 c->vc_hi_font_mask = 0x0800; ··· 515 515 switch (CUR_SIZE(c->vc_cursor_type)) { 516 516 case CUR_UNDERLINE: 517 517 vgacon_set_cursor_size(c->state.x, 518 - c->vc_font.height - 519 - (c->vc_font.height < 518 + c->vc_cell_height - 519 + (c->vc_cell_height < 520 520 10 ? 2 : 3), 521 - c->vc_font.height - 522 - (c->vc_font.height < 521 + c->vc_cell_height - 522 + (c->vc_cell_height < 523 523 10 ? 1 : 2)); 524 524 break; 525 525 case CUR_TWO_THIRDS: 526 526 vgacon_set_cursor_size(c->state.x, 527 - c->vc_font.height / 3, 528 - c->vc_font.height - 529 - (c->vc_font.height < 527 + c->vc_cell_height / 3, 528 + c->vc_cell_height - 529 + (c->vc_cell_height < 530 530 10 ? 1 : 2)); 531 531 break; 532 532 case CUR_LOWER_THIRD: 533 533 vgacon_set_cursor_size(c->state.x, 534 - (c->vc_font.height * 2) / 3, 535 - c->vc_font.height - 536 - (c->vc_font.height < 534 + (c->vc_cell_height * 2) / 3, 535 + c->vc_cell_height - 536 + (c->vc_cell_height < 537 537 10 ? 1 : 2)); 538 538 break; 539 539 case CUR_LOWER_HALF: 540 540 vgacon_set_cursor_size(c->state.x, 541 - c->vc_font.height / 2, 542 - c->vc_font.height - 543 - (c->vc_font.height < 541 + c->vc_cell_height / 2, 542 + c->vc_cell_height - 543 + (c->vc_cell_height < 544 544 10 ? 1 : 2)); 545 545 break; 546 546 case CUR_NONE: ··· 551 551 break; 552 552 default: 553 553 vgacon_set_cursor_size(c->state.x, 1, 554 - c->vc_font.height); 554 + c->vc_cell_height); 555 555 break; 556 556 } 557 557 break; ··· 562 562 unsigned int width, unsigned int height) 563 563 { 564 564 unsigned long flags; 565 - unsigned int scanlines = height * c->vc_font.height; 565 + unsigned int scanlines = height * c->vc_cell_height; 566 566 u8 scanlines_lo = 0, r7 = 0, vsync_end = 0, mode, max_scan; 567 567 568 568 raw_spin_lock_irqsave(&vga_lock, flags); 569 569 570 570 vgacon_xres = width * VGA_FONTWIDTH; 571 - vgacon_yres = height * c->vc_font.height; 571 + vgacon_yres = height * c->vc_cell_height; 572 572 if (vga_video_type >= VIDEO_TYPE_VGAC) { 573 573 outb_p(VGA_CRTC_MAX_SCAN, vga_video_port_reg); 574 574 max_scan = inb_p(vga_video_port_val); ··· 623 623 static int vgacon_switch(struct vc_data *c) 624 624 { 625 625 int x = c->vc_cols * VGA_FONTWIDTH; 626 - int y = c->vc_rows * c->vc_font.height; 626 + int y = c->vc_rows * c->vc_cell_height; 627 627 int rows = screen_info.orig_video_lines * vga_default_font_height/ 628 - c->vc_font.height; 628 + c->vc_cell_height; 629 629 /* 630 630 * We need to save screen size here as it's the only way 631 631 * we can spot the screen has been resized and we need to ··· 1038 1038 cursor_size_lastto = 0; 1039 1039 c->vc_sw->con_cursor(c, CM_DRAW); 1040 1040 } 1041 - c->vc_font.height = fontheight; 1041 + c->vc_font.height = c->vc_cell_height = fontheight; 1042 1042 vc_resize(c, 0, rows); /* Adjust console size */ 1043 1043 } 1044 1044 } ··· 1086 1086 if ((width << 1) * height > vga_vram_size) 1087 1087 return -EINVAL; 1088 1088 1089 + if (user) { 1090 + /* 1091 + * Ho ho! Someone (svgatextmode, eh?) may have reprogrammed 1092 + * the video mode! Set the new defaults then and go away. 
1093 + */ 1094 + screen_info.orig_video_cols = width; 1095 + screen_info.orig_video_lines = height; 1096 + vga_default_font_height = c->vc_cell_height; 1097 + return 0; 1098 + } 1089 1099 if (width % 2 || width > screen_info.orig_video_cols || 1090 1100 height > (screen_info.orig_video_lines * vga_default_font_height)/ 1091 - c->vc_font.height) 1092 - /* let svgatextmode tinker with video timings and 1093 - return success */ 1094 - return (user) ? 0 : -EINVAL; 1101 + c->vc_cell_height) 1102 + return -EINVAL; 1095 1103 1096 1104 if (con_is_visible(c) && !vga_is_gfx) /* who knows */ 1097 1105 vgacon_doresize(c, width, height);
+1 -1
drivers/video/fbdev/core/fbcon.c
··· 2019 2019 return -EINVAL; 2020 2020 2021 2021 pr_debug("resize now %ix%i\n", var.xres, var.yres); 2022 - if (con_is_visible(vc)) { 2022 + if (con_is_visible(vc) && vc->vc_mode == KD_TEXT) { 2023 2023 var.activate = FB_ACTIVATE_NOW | 2024 2024 FB_ACTIVATE_FORCE; 2025 2025 fb_set_var(info, &var);
+12 -9
drivers/video/fbdev/hgafb.c
··· 286 286 287 287 hga_vram = ioremap(0xb0000, hga_vram_len); 288 288 if (!hga_vram) 289 - goto error; 289 + return -ENOMEM; 290 290 291 291 if (request_region(0x3b0, 12, "hgafb")) 292 292 release_io_ports = 1; ··· 346 346 hga_type_name = "Hercules"; 347 347 break; 348 348 } 349 - return 1; 349 + return 0; 350 350 error: 351 351 if (release_io_ports) 352 352 release_region(0x3b0, 12); 353 353 if (release_io_port) 354 354 release_region(0x3bf, 1); 355 - return 0; 355 + 356 + iounmap(hga_vram); 357 + 358 + pr_err("hgafb: HGA card not detected.\n"); 359 + 360 + return -EINVAL; 356 361 } 357 362 358 363 /** ··· 555 550 static int hgafb_probe(struct platform_device *pdev) 556 551 { 557 552 struct fb_info *info; 553 + int ret; 558 554 559 - if (! hga_card_detect()) { 560 - printk(KERN_INFO "hgafb: HGA card not detected.\n"); 561 - if (hga_vram) 562 - iounmap(hga_vram); 563 - return -EINVAL; 564 - } 555 + ret = hga_card_detect(); 556 + if (ret) 557 + return ret; 565 558 566 559 printk(KERN_INFO "hgafb: %s with %ldK of memory detected.\n", 567 560 hga_type_name, hga_vram_len/1024);
+18 -8
drivers/video/fbdev/imsttfb.c
··· 1469 1469 struct imstt_par *par; 1470 1470 struct fb_info *info; 1471 1471 struct device_node *dp; 1472 + int ret = -ENOMEM; 1472 1473 1473 1474 dp = pci_device_to_OF_node(pdev); 1474 1475 if(dp) ··· 1505 1504 default: 1506 1505 printk(KERN_INFO "imsttfb: Device 0x%x unknown, " 1507 1506 "contact maintainer.\n", pdev->device); 1508 - release_mem_region(addr, size); 1509 - framebuffer_release(info); 1510 - return -ENODEV; 1507 + ret = -ENODEV; 1508 + goto error; 1511 1509 } 1512 1510 1513 1511 info->fix.smem_start = addr; 1514 1512 info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ? 1515 1513 0x400000 : 0x800000); 1516 - if (!info->screen_base) { 1517 - release_mem_region(addr, size); 1518 - framebuffer_release(info); 1519 - return -ENOMEM; 1520 - } 1514 + if (!info->screen_base) 1515 + goto error; 1521 1516 info->fix.mmio_start = addr + 0x800000; 1522 1517 par->dc_regs = ioremap(addr + 0x800000, 0x1000); 1518 + if (!par->dc_regs) 1519 + goto error; 1523 1520 par->cmap_regs_phys = addr + 0x840000; 1524 1521 par->cmap_regs = (__u8 *)ioremap(addr + 0x840000, 0x1000); 1522 + if (!par->cmap_regs) 1523 + goto error; 1525 1524 info->pseudo_palette = par->palette; 1526 1525 init_imstt(info); 1527 1526 1528 1527 pci_set_drvdata(pdev, info); 1529 1528 return 0; 1529 + 1530 + error: 1531 + if (par->dc_regs) 1532 + iounmap(par->dc_regs); 1533 + if (info->screen_base) 1534 + iounmap(info->screen_base); 1535 + release_mem_region(addr, size); 1536 + framebuffer_release(info); 1537 + return ret; 1530 1538 } 1531 1539 1532 1540 static void imsttfb_remove(struct pci_dev *pdev)
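Both of these framebuffer fixes converge on the standard C unwind idiom: acquire resources in order and, on failure, jump to cleanup code that releases exactly what has been acquired so far. imsttfb keeps a single label and NULL-checks each resource; the staged-label variant sketched below gets the same effect without the checks. A generic, self-contained illustration:

#include <stdio.h>
#include <stdlib.h>

/* Three acquisitions, one unwind path; each label undoes one more step. */
static int setup(void)
{
	int ret = -1;
	char *buf;
	FILE *f;
	char *scratch;

	buf = malloc(64);
	if (!buf)
		goto out;		/* nothing acquired yet */

	f = fopen("/dev/null", "r");
	if (!f)
		goto err_free_buf;	/* undo step 1 */

	scratch = malloc(64);
	if (!scratch)
		goto err_close_f;	/* undo steps 2, then 1 */

	/* ... real work would go here ... */
	ret = 0;

	/* This demo doesn't keep the resources, so success unwinds too. */
	free(scratch);
err_close_f:
	fclose(f);
err_free_buf:
	free(buf);
out:
	return ret;
}

int main(void)
{
	return setup() ? 1 : 0;
}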
+3 -1
drivers/xen/gntdev.c
··· 1017 1017 err = mmu_interval_notifier_insert_locked( 1018 1018 &map->notifier, vma->vm_mm, vma->vm_start, 1019 1019 vma->vm_end - vma->vm_start, &gntdev_mmu_ops); 1020 - if (err) 1020 + if (err) { 1021 + map->vma = NULL; 1021 1022 goto out_unlock_put; 1023 + } 1022 1024 } 1023 1025 mutex_unlock(&priv->lock); 1024 1026
+5
drivers/xen/swiotlb-xen.c
··· 164 164 int rc = -ENOMEM; 165 165 char *start; 166 166 167 + if (io_tlb_default_mem != NULL) { 168 + pr_warn("swiotlb buffer already initialized\n"); 169 + return -EEXIST; 170 + } 171 + 167 172 retry: 168 173 m_ret = XEN_SWIOTLB_ENOMEM; 169 174 order = get_order(bytes);
+3 -1
drivers/xen/unpopulated-alloc.c
··· 39 39 } 40 40 41 41 pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL); 42 - if (!pgmap) 42 + if (!pgmap) { 43 + ret = -ENOMEM; 43 44 goto err_pgmap; 45 + } 44 46 45 47 pgmap->type = MEMORY_DEVICE_GENERIC; 46 48 pgmap->range = (struct range) {
+1 -1
fs/btrfs/ctree.h
··· 3127 3127 struct btrfs_inode *inode, u64 new_size, 3128 3128 u32 min_type); 3129 3129 3130 - int btrfs_start_delalloc_snapshot(struct btrfs_root *root); 3130 + int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context); 3131 3131 int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, long nr, 3132 3132 bool in_reclaim_context); 3133 3133 int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
+5 -1
fs/btrfs/extent-tree.c
··· 1340 1340 stripe = bbio->stripes; 1341 1341 for (i = 0; i < bbio->num_stripes; i++, stripe++) { 1342 1342 u64 bytes; 1343 + struct btrfs_device *device = stripe->dev; 1343 1344 1344 - if (!stripe->dev->bdev) { 1345 + if (!device->bdev) { 1345 1346 ASSERT(btrfs_test_opt(fs_info, DEGRADED)); 1346 1347 continue; 1347 1348 } 1349 + 1350 + if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) 1351 + continue; 1348 1352 1349 1353 ret = do_discard_extent(stripe, &bytes); 1350 1354 if (!ret) {
+6 -1
fs/btrfs/extent_io.c
··· 5196 5196 u64 start, u64 len) 5197 5197 { 5198 5198 int ret = 0; 5199 - u64 off = start; 5199 + u64 off; 5200 5200 u64 max = start + len; 5201 5201 u32 flags = 0; 5202 5202 u32 found_type; ··· 5231 5231 goto out_free_ulist; 5232 5232 } 5233 5233 5234 + /* 5235 + * We can't initialize that to 'start' as this could miss extents due 5236 + * to extent item merging 5237 + */ 5238 + off = 0; 5234 5239 start = round_down(start, btrfs_inode_sectorsize(inode)); 5235 5240 len = round_up(max, btrfs_inode_sectorsize(inode)) - start; 5236 5241
+25 -10
fs/btrfs/file.c
··· 2067 2067 return ret; 2068 2068 } 2069 2069 2070 + static inline bool skip_inode_logging(const struct btrfs_log_ctx *ctx) 2071 + { 2072 + struct btrfs_inode *inode = BTRFS_I(ctx->inode); 2073 + struct btrfs_fs_info *fs_info = inode->root->fs_info; 2074 + 2075 + if (btrfs_inode_in_log(inode, fs_info->generation) && 2076 + list_empty(&ctx->ordered_extents)) 2077 + return true; 2078 + 2079 + /* 2080 + * If we are doing a fast fsync we cannot bail out if the inode's 2081 + * last_trans is <= the last committed transaction, because we only 2082 + * update the last_trans of the inode during ordered extent completion, 2083 + * and for a fast fsync we don't wait for that, we only wait for the 2084 + * writeback to complete. 2085 + */ 2086 + if (inode->last_trans <= fs_info->last_trans_committed && 2087 + (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC, &inode->runtime_flags) || 2088 + list_empty(&ctx->ordered_extents))) 2089 + return true; 2090 + 2091 + return false; 2092 + } 2093 + 2070 2094 /* 2071 2095 * fsync call for both files and directories. This logs the inode into 2072 2096 * the tree log instead of forcing full commits whenever possible. ··· 2209 2185 2210 2186 atomic_inc(&root->log_batch); 2211 2187 2212 - /* 2213 - * If we are doing a fast fsync we cannot bail out if the inode's 2214 - * last_trans is <= the last committed transaction, because we only 2215 - * update the last_trans of the inode during ordered extent completion, 2216 - * and for a fast fsync we don't wait for that, we only wait for the 2217 - * writeback to complete. 2218 - */ 2219 2188 smp_mb(); 2220 - if (btrfs_inode_in_log(BTRFS_I(inode), fs_info->generation) || 2221 - (BTRFS_I(inode)->last_trans <= fs_info->last_trans_committed && 2222 - (full_sync || list_empty(&ctx.ordered_extents)))) { 2189 + if (skip_inode_logging(&ctx)) { 2223 2190 /* 2224 2191 * We've had everything committed since the last time we were 2225 2192 * modified so clear this flag in case it was set for whatever
+1 -1
fs/btrfs/free-space-cache.c
··· 3949 3949 { 3950 3950 struct btrfs_block_group *block_group; 3951 3951 struct rb_node *node; 3952 - int ret; 3952 + int ret = 0; 3953 3953 3954 3954 btrfs_info(fs_info, "cleaning free space cache v1"); 3955 3955
+3 -2
fs/btrfs/inode.c
··· 3241 3241 inode = list_first_entry(&fs_info->delayed_iputs, 3242 3242 struct btrfs_inode, delayed_iput); 3243 3243 run_delayed_iput_locked(fs_info, inode); 3244 + cond_resched_lock(&fs_info->delayed_iput_lock); 3244 3245 } 3245 3246 spin_unlock(&fs_info->delayed_iput_lock); 3246 3247 } ··· 9679 9678 return ret; 9680 9679 } 9681 9680 9682 - int btrfs_start_delalloc_snapshot(struct btrfs_root *root) 9681 + int btrfs_start_delalloc_snapshot(struct btrfs_root *root, bool in_reclaim_context) 9683 9682 { 9684 9683 struct writeback_control wbc = { 9685 9684 .nr_to_write = LONG_MAX, ··· 9692 9691 if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) 9693 9692 return -EROFS; 9694 9693 9695 - return start_delalloc_inodes(root, &wbc, true, false); 9694 + return start_delalloc_inodes(root, &wbc, true, in_reclaim_context); 9696 9695 } 9697 9696 9698 9697 int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, long nr,
+3 -1
fs/btrfs/ioctl.c
··· 259 259 if (!fa->flags_valid) { 260 260 /* 1 item for the inode */ 261 261 trans = btrfs_start_transaction(root, 1); 262 + if (IS_ERR(trans)) 263 + return PTR_ERR(trans); 262 264 goto update_flags; 263 265 } 264 266 ··· 909 907 */ 910 908 btrfs_drew_read_lock(&root->snapshot_lock); 911 909 912 - ret = btrfs_start_delalloc_snapshot(root); 910 + ret = btrfs_start_delalloc_snapshot(root, false); 913 911 if (ret) 914 912 goto out; 915 913
+1 -1
fs/btrfs/ordered-data.c
··· 984 984 985 985 if (pre) 986 986 ret = clone_ordered_extent(ordered, 0, pre); 987 - if (post) 987 + if (ret == 0 && post) 988 988 ret = clone_ordered_extent(ordered, pre + ordered->disk_num_bytes, 989 989 post); 990 990
+10 -6
fs/btrfs/qgroup.c
··· 3545 3545 struct btrfs_trans_handle *trans; 3546 3546 int ret; 3547 3547 3548 - /* Can't hold an open transaction or we run the risk of deadlocking */ 3549 - ASSERT(current->journal_info == NULL || 3550 - current->journal_info == BTRFS_SEND_TRANS_STUB); 3551 - if (WARN_ON(current->journal_info && 3552 - current->journal_info != BTRFS_SEND_TRANS_STUB)) 3548 + /* 3549 + * Can't hold an open transaction or we run the risk of deadlocking, 3550 + * and we can't be in the context of a send operation either (where 3551 + * current->journal_info is set to BTRFS_SEND_TRANS_STUB), as that 3552 + * would result in a crash when starting a transaction, and it would 3553 + * make no sense anyway (send is a read-only operation). 3554 + */ 3555 + ASSERT(current->journal_info == NULL); 3556 + if (WARN_ON(current->journal_info)) 3553 3557 return 0; 3554 3558 3555 3559 /* ··· 3566 3562 return 0; 3567 3563 } 3568 3564 3569 - ret = btrfs_start_delalloc_snapshot(root); 3565 + ret = btrfs_start_delalloc_snapshot(root, true); 3570 3566 if (ret < 0) 3571 3567 goto out; 3572 3568 btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1);
+2 -2
fs/btrfs/send.c
··· 7170 7170 int i; 7171 7171 7172 7172 if (root) { 7173 - ret = btrfs_start_delalloc_snapshot(root); 7173 + ret = btrfs_start_delalloc_snapshot(root, false); 7174 7174 if (ret) 7175 7175 return ret; 7176 7176 btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX); ··· 7178 7178 7179 7179 for (i = 0; i < sctx->clone_roots_cnt; i++) { 7180 7180 root = sctx->clone_roots[i].root; 7181 - ret = btrfs_start_delalloc_snapshot(root); 7181 + ret = btrfs_start_delalloc_snapshot(root, false); 7182 7182 if (ret) 7183 7183 return ret; 7184 7184 btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
+20 -1
fs/btrfs/tree-log.c
··· 6061 6061 * (since logging them is pointless, a link count of 0 means they 6062 6062 * will never be accessible). 6063 6063 */ 6064 - if (btrfs_inode_in_log(inode, trans->transid) || 6064 + if ((btrfs_inode_in_log(inode, trans->transid) && 6065 + list_empty(&ctx->ordered_extents)) || 6065 6066 inode->vfs_inode.i_nlink == 0) { 6066 6067 ret = BTRFS_NO_LOG_SYNC; 6067 6068 goto end_no_trans; ··· 6462 6461 if (inode->logged_trans < trans->transid && 6463 6462 (!old_dir || old_dir->logged_trans < trans->transid)) 6464 6463 return; 6464 + 6465 + /* 6466 + * If we are doing a rename (old_dir is not NULL) from a directory that 6467 + * was previously logged, make sure the next log attempt on the directory 6468 + * is not skipped and logs the inode again. This is because the log may 6469 + * not currently be authoritative for a range including the old 6470 + * BTRFS_DIR_ITEM_KEY and BTRFS_DIR_INDEX_KEY keys, so we want to make 6471 + * sure after a log replay we do not end up with both the new and old 6472 + * dentries around (in case the inode is a directory we would have a 6473 + * directory with two hard links and 2 inode references for different 6474 + * parents). The next log attempt of old_dir will happen at 6475 + * btrfs_log_all_parents(), called through btrfs_log_inode_parent() 6476 + * below, because we have previously set inode->last_unlink_trans to the 6477 + * current transaction ID, either here or at btrfs_record_unlink_dir() in 6478 + * case inode is a directory. 6479 + */ 6480 + if (old_dir) 6481 + old_dir->logged_trans = 0; 6465 6482 6466 6483 btrfs_init_log_ctx(&ctx, &inode->vfs_inode); 6467 6484 ctx.logging_new_name = true;
+1 -1
fs/btrfs/volumes.c
··· 1459 1459 /* Given hole range was invalid (outside of device) */ 1460 1460 if (ret == -ERANGE) { 1461 1461 *hole_start += *hole_size; 1462 - *hole_size = false; 1462 + *hole_size = 0; 1463 1463 return true; 1464 1464 } 1465 1465
+5
fs/btrfs/zoned.c
··· 1126 1126 goto out; 1127 1127 } 1128 1128 1129 + if (zone.type == BLK_ZONE_TYPE_CONVENTIONAL) { 1130 + ret = -EIO; 1131 + goto out; 1132 + } 1133 + 1129 1134 switch (zone.cond) { 1130 1135 case BLK_ZONE_COND_OFFLINE: 1131 1136 case BLK_ZONE_COND_READONLY:
+23 -12
fs/dax.c
··· 144 144 struct exceptional_entry_key key; 145 145 }; 146 146 147 + /** 148 + * enum dax_wake_mode: waitqueue wakeup behaviour 149 + * @WAKE_ALL: wake all waiters in the waitqueue 150 + * @WAKE_NEXT: wake only the first waiter in the waitqueue 151 + */ 152 + enum dax_wake_mode { 153 + WAKE_ALL, 154 + WAKE_NEXT, 155 + }; 156 + 147 157 static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas, 148 158 void *entry, struct exceptional_entry_key *key) 149 159 { ··· 192 182 * The important information it's conveying is whether the entry at 193 183 * this index used to be a PMD entry. 194 184 */ 195 - static void dax_wake_entry(struct xa_state *xas, void *entry, bool wake_all) 185 + static void dax_wake_entry(struct xa_state *xas, void *entry, 186 + enum dax_wake_mode mode) 196 187 { 197 188 struct exceptional_entry_key key; 198 189 wait_queue_head_t *wq; ··· 207 196 * must be in the waitqueue and the following check will see them. 208 197 */ 209 198 if (waitqueue_active(wq)) 210 - __wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key); 199 + __wake_up(wq, TASK_NORMAL, mode == WAKE_ALL ? 0 : 1, &key); 211 200 } 212 201 213 202 /* ··· 275 264 finish_wait(wq, &ewait.wait); 276 265 } 277 266 278 - static void put_unlocked_entry(struct xa_state *xas, void *entry) 267 + static void put_unlocked_entry(struct xa_state *xas, void *entry, 268 + enum dax_wake_mode mode) 279 269 { 280 - /* If we were the only waiter woken, wake the next one */ 281 270 if (entry && !dax_is_conflict(entry)) 282 - dax_wake_entry(xas, entry, false); 271 + dax_wake_entry(xas, entry, mode); 283 272 } 284 273 285 274 /* ··· 297 286 old = xas_store(xas, entry); 298 287 xas_unlock_irq(xas); 299 288 BUG_ON(!dax_is_locked(old)); 300 - dax_wake_entry(xas, entry, false); 289 + dax_wake_entry(xas, entry, WAKE_NEXT); 301 290 } 302 291 303 292 /* ··· 535 524 536 525 dax_disassociate_entry(entry, mapping, false); 537 526 xas_store(xas, NULL); /* undo the PMD join */ 538 - dax_wake_entry(xas, entry, true); 527 + dax_wake_entry(xas, entry, WAKE_ALL); 539 528 mapping->nrpages -= PG_PMD_NR; 540 529 entry = NULL; 541 530 xas_set(xas, index); ··· 633 622 entry = get_unlocked_entry(&xas, 0); 634 623 if (entry) 635 624 page = dax_busy_page(entry); 636 - put_unlocked_entry(&xas, entry); 625 + put_unlocked_entry(&xas, entry, WAKE_NEXT); 637 626 if (page) 638 627 break; 639 628 if (++scanned % XA_CHECK_SCHED) ··· 675 664 mapping->nrpages -= 1UL << dax_entry_order(entry); 676 665 ret = 1; 677 666 out: 678 - put_unlocked_entry(&xas, entry); 667 + put_unlocked_entry(&xas, entry, WAKE_ALL); 679 668 xas_unlock_irq(&xas); 680 669 return ret; 681 670 } ··· 948 937 xas_lock_irq(xas); 949 938 xas_store(xas, entry); 950 939 xas_clear_mark(xas, PAGECACHE_TAG_DIRTY); 951 - dax_wake_entry(xas, entry, false); 940 + dax_wake_entry(xas, entry, WAKE_NEXT); 952 941 953 942 trace_dax_writeback_one(mapping->host, index, count); 954 943 return ret; 955 944 956 945 put_unlocked: 957 - put_unlocked_entry(xas, entry); 946 + put_unlocked_entry(xas, entry, WAKE_NEXT); 958 947 return ret; 959 948 } 960 949 ··· 1695 1684 /* Did we race with someone splitting entry or so? */ 1696 1685 if (!entry || dax_is_conflict(entry) || 1697 1686 (order == 0 && !dax_is_pte_entry(entry))) { 1698 - put_unlocked_entry(&xas, entry); 1687 + put_unlocked_entry(&xas, entry, WAKE_NEXT); 1699 1688 xas_unlock_irq(&xas); 1700 1689 trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf, 1701 1690 VM_FAULT_NOPAGE);
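The bool-to-enum switch pays off at the call sites: dax_wake_entry(xas, entry, WAKE_NEXT) says what will happen, where a bare false did not. Underneath, the nr_exclusive argument of 0 or 1 passed to __wake_up() is the usual wake-all versus wake-one split, which maps naturally onto condition variables. The pthread sketch below is an analogy to make those semantics concrete, not the kernel waitqueue API; compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

enum wake_mode { WAKE_ALL, WAKE_NEXT };	/* mirrors enum dax_wake_mode */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int generation;

static void *waiter(void *arg)
{
	pthread_mutex_lock(&lock);
	int seen = generation;
	while (generation == seen)	/* guard against spurious wakeups */
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
	printf("waiter %ld woke\n", (long)arg);
	return NULL;
}

static void wake(enum wake_mode mode)
{
	pthread_mutex_lock(&lock);
	generation++;			/* the "event" waiters wait for */
	if (mode == WAKE_ALL)
		pthread_cond_broadcast(&cond);	/* like nr_exclusive == 0 */
	else
		pthread_cond_signal(&cond);	/* like nr_exclusive == 1 */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t t[3];

	for (long i = 0; i < 3; i++)
		pthread_create(&t[i], NULL, waiter, (void *)i);
	sleep(1);
	wake(WAKE_NEXT);	/* exactly one waiter proceeds */
	sleep(1);
	wake(WAKE_ALL);		/* everyone else proceeds */
	for (int i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	return 0;
}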
-4
fs/ecryptfs/crypto.c
··· 296 296 struct extent_crypt_result ecr; 297 297 int rc = 0; 298 298 299 - if (!crypt_stat || !crypt_stat->tfm 300 - || !(crypt_stat->flags & ECRYPTFS_STRUCT_INITIALIZED)) 301 - return -EINVAL; 302 - 303 299 if (unlikely(ecryptfs_verbosity > 0)) { 304 300 ecryptfs_printk(KERN_DEBUG, "Key size [%zd]; key:\n", 305 301 crypt_stat->key_size);
+19 -2
fs/erofs/zmap.c
··· 450 450 lcn = m->lcn + 1; 451 451 if (m->compressedlcs) 452 452 goto out; 453 - if (lcn == initial_lcn) 454 - goto err_bonus_cblkcnt; 455 453 456 454 err = z_erofs_load_cluster_from_disk(m, lcn); 457 455 if (err) 458 456 return err; 459 457 458 + /* 459 + * If the 1st NONHEAD lcluster has already been handled initially w/o 460 + * valid compressedlcs, which means at least it mustn't be CBLKCNT, or 461 + * an internal implementation error is detected. 462 + * 463 + * The following code can also handle it properly anyway, but let's 464 + * BUG_ON in the debugging mode only for developers to notice that. 465 + */ 466 + DBG_BUGON(lcn == initial_lcn && 467 + m->type == Z_EROFS_VLE_CLUSTER_TYPE_NONHEAD); 468 + 460 469 switch (m->type) { 470 + case Z_EROFS_VLE_CLUSTER_TYPE_PLAIN: 471 + case Z_EROFS_VLE_CLUSTER_TYPE_HEAD: 472 + /* 473 + * if the 1st NONHEAD lcluster is actually PLAIN or HEAD type 474 + * rather than CBLKCNT, it's a 1 lcluster-sized pcluster. 475 + */ 476 + m->compressedlcs = 1; 477 + break; 461 478 case Z_EROFS_VLE_CLUSTER_TYPE_NONHEAD: 462 479 if (m->delta[0] != 1) 463 480 goto err_bonus_cblkcnt;
+23 -32
fs/f2fs/compress.c
··· 117 117 f2fs_drop_rpages(cc, len, true); 118 118 } 119 119 120 - static void f2fs_put_rpages_mapping(struct address_space *mapping, 121 - pgoff_t start, int len) 122 - { 123 - int i; 124 - 125 - for (i = 0; i < len; i++) { 126 - struct page *page = find_get_page(mapping, start + i); 127 - 128 - put_page(page); 129 - put_page(page); 130 - } 131 - } 132 - 133 120 static void f2fs_put_rpages_wbc(struct compress_ctx *cc, 134 121 struct writeback_control *wbc, bool redirty, int unlock) 135 122 { ··· 145 158 return cc->rpages ? 0 : -ENOMEM; 146 159 } 147 160 148 - void f2fs_destroy_compress_ctx(struct compress_ctx *cc) 161 + void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse) 149 162 { 150 163 page_array_free(cc->inode, cc->rpages, cc->cluster_size); 151 164 cc->rpages = NULL; 152 165 cc->nr_rpages = 0; 153 166 cc->nr_cpages = 0; 154 - cc->cluster_idx = NULL_CLUSTER; 167 + if (!reuse) 168 + cc->cluster_idx = NULL_CLUSTER; 155 169 } 156 170 157 171 void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page) ··· 1024 1036 } 1025 1037 1026 1038 if (PageUptodate(page)) 1027 - unlock_page(page); 1039 + f2fs_put_page(page, 1); 1028 1040 else 1029 1041 f2fs_compress_ctx_add_page(cc, page); 1030 1042 } ··· 1034 1046 1035 1047 ret = f2fs_read_multi_pages(cc, &bio, cc->cluster_size, 1036 1048 &last_block_in_bio, false, true); 1037 - f2fs_destroy_compress_ctx(cc); 1049 + f2fs_put_rpages(cc); 1050 + f2fs_destroy_compress_ctx(cc, true); 1038 1051 if (ret) 1039 - goto release_pages; 1052 + goto out; 1040 1053 if (bio) 1041 1054 f2fs_submit_bio(sbi, bio, DATA); 1042 1055 1043 1056 ret = f2fs_init_compress_ctx(cc); 1044 1057 if (ret) 1045 - goto release_pages; 1058 + goto out; 1046 1059 } 1047 1060 1048 1061 for (i = 0; i < cc->cluster_size; i++) { 1049 1062 f2fs_bug_on(sbi, cc->rpages[i]); 1050 1063 1051 1064 page = find_lock_page(mapping, start_idx + i); 1052 - f2fs_bug_on(sbi, !page); 1065 + if (!page) { 1066 + /* page can be truncated */ 1067 + goto release_and_retry; 1068 + } 1053 1069 1054 1070 f2fs_wait_on_page_writeback(page, DATA, true, true); 1055 - 1056 1071 f2fs_compress_ctx_add_page(cc, page); 1057 - f2fs_put_page(page, 0); 1058 1072 1059 1073 if (!PageUptodate(page)) { 1074 + release_and_retry: 1075 + f2fs_put_rpages(cc); 1060 1076 f2fs_unlock_rpages(cc, i + 1); 1061 - f2fs_put_rpages_mapping(mapping, start_idx, 1062 - cc->cluster_size); 1063 - f2fs_destroy_compress_ctx(cc); 1077 + f2fs_destroy_compress_ctx(cc, true); 1064 1078 goto retry; 1065 1079 } 1066 1080 } ··· 1093 1103 } 1094 1104 1095 1105 unlock_pages: 1106 + f2fs_put_rpages(cc); 1096 1107 f2fs_unlock_rpages(cc, i); 1097 - release_pages: 1098 - f2fs_put_rpages_mapping(mapping, start_idx, i); 1099 - f2fs_destroy_compress_ctx(cc); 1108 + f2fs_destroy_compress_ctx(cc, true); 1109 + out: 1100 1110 return ret; 1101 1111 } 1102 1112 ··· 1131 1141 set_cluster_dirty(&cc); 1132 1142 1133 1143 f2fs_put_rpages_wbc(&cc, NULL, false, 1); 1134 - f2fs_destroy_compress_ctx(&cc); 1144 + f2fs_destroy_compress_ctx(&cc, false); 1135 1145 1136 1146 return first_index; 1137 1147 } ··· 1351 1361 f2fs_put_rpages(cc); 1352 1362 page_array_free(cc->inode, cc->cpages, cc->nr_cpages); 1353 1363 cc->cpages = NULL; 1354 - f2fs_destroy_compress_ctx(cc); 1364 + f2fs_destroy_compress_ctx(cc, false); 1355 1365 return 0; 1356 1366 1357 1367 out_destroy_crypt: ··· 1362 1372 for (i = 0; i < cc->nr_cpages; i++) { 1363 1373 if (!cc->cpages[i]) 1364 1374 continue; 1365 - f2fs_put_page(cc->cpages[i], 1); 1375 + 
f2fs_compress_free_page(cc->cpages[i]); 1376 + cc->cpages[i] = NULL; 1366 1377 } 1367 1378 out_put_cic: 1368 1379 kmem_cache_free(cic_entry_slab, cic); ··· 1513 1522 err = f2fs_write_raw_pages(cc, submitted, wbc, io_type); 1514 1523 f2fs_put_rpages_wbc(cc, wbc, false, 0); 1515 1524 destroy_out: 1516 - f2fs_destroy_compress_ctx(cc); 1525 + f2fs_destroy_compress_ctx(cc, false); 1517 1526 return err; 1518 1527 } 1519 1528
+28 -11
fs/f2fs/data.c
··· 2287 2287 max_nr_pages, 2288 2288 &last_block_in_bio, 2289 2289 rac != NULL, false); 2290 - f2fs_destroy_compress_ctx(&cc); 2290 + f2fs_destroy_compress_ctx(&cc, false); 2291 2291 if (ret) 2292 2292 goto set_error_page; 2293 2293 } ··· 2332 2332 max_nr_pages, 2333 2333 &last_block_in_bio, 2334 2334 rac != NULL, false); 2335 - f2fs_destroy_compress_ctx(&cc); 2335 + f2fs_destroy_compress_ctx(&cc, false); 2336 2336 } 2337 2337 } 2338 2338 #endif ··· 3033 3033 } 3034 3034 } 3035 3035 if (f2fs_compressed_file(inode)) 3036 - f2fs_destroy_compress_ctx(&cc); 3036 + f2fs_destroy_compress_ctx(&cc, false); 3037 3037 #endif 3038 3038 if (retry) { 3039 3039 index = 0; ··· 3801 3801 block_t pblock; 3802 3802 unsigned long nr_pblocks; 3803 3803 unsigned int blocks_per_sec = BLKS_PER_SEC(sbi); 3804 + unsigned int not_aligned = 0; 3804 3805 int ret = 0; 3805 3806 3806 3807 cur_lblock = 0; ··· 3834 3833 3835 3834 if ((pblock - main_blkaddr) & (blocks_per_sec - 1) || 3836 3835 nr_pblocks & (blocks_per_sec - 1)) { 3837 - f2fs_err(sbi, "Swapfile does not align to section"); 3838 - ret = -EINVAL; 3839 - goto out; 3836 + if (f2fs_is_pinned_file(inode)) { 3837 + f2fs_err(sbi, "Swapfile does not align to section"); 3838 + ret = -EINVAL; 3839 + goto out; 3840 + } 3841 + not_aligned++; 3840 3842 } 3841 3843 3842 3844 cur_lblock += nr_pblocks; 3843 3845 } 3846 + if (not_aligned) 3847 + f2fs_warn(sbi, "Swapfile (%u) is not align to section: \n" 3848 + "\t1) creat(), 2) ioctl(F2FS_IOC_SET_PIN_FILE), 3) fallocate()", 3849 + not_aligned); 3844 3850 out: 3845 3851 return ret; 3846 3852 } ··· 3866 3858 int nr_extents = 0; 3867 3859 unsigned long nr_pblocks; 3868 3860 unsigned int blocks_per_sec = BLKS_PER_SEC(sbi); 3861 + unsigned int not_aligned = 0; 3869 3862 int ret = 0; 3870 3863 3871 3864 /* ··· 3896 3887 /* hole */ 3897 3888 if (!(map.m_flags & F2FS_MAP_FLAGS)) { 3898 3889 f2fs_err(sbi, "Swapfile has holes\n"); 3899 - ret = -ENOENT; 3890 + ret = -EINVAL; 3900 3891 goto out; 3901 3892 } 3902 3893 ··· 3905 3896 3906 3897 if ((pblock - SM_I(sbi)->main_blkaddr) & (blocks_per_sec - 1) || 3907 3898 nr_pblocks & (blocks_per_sec - 1)) { 3908 - f2fs_err(sbi, "Swapfile does not align to section"); 3909 - ret = -EINVAL; 3910 - goto out; 3899 + if (f2fs_is_pinned_file(inode)) { 3900 + f2fs_err(sbi, "Swapfile does not align to section"); 3901 + ret = -EINVAL; 3902 + goto out; 3903 + } 3904 + not_aligned++; 3911 3905 } 3912 3906 3913 3907 if (cur_lblock + nr_pblocks >= sis->max) ··· 3939 3927 sis->max = cur_lblock; 3940 3928 sis->pages = cur_lblock - 1; 3941 3929 sis->highest_bit = cur_lblock - 1; 3930 + 3931 + if (not_aligned) 3932 + f2fs_warn(sbi, "Swapfile (%u) is not align to section: \n" 3933 + "\t1) creat(), 2) ioctl(F2FS_IOC_SET_PIN_FILE), 3) fallocate()", 3934 + not_aligned); 3942 3935 out: 3943 3936 return ret; 3944 3937 } ··· 4052 4035 return ret; 4053 4036 bad_bmap: 4054 4037 f2fs_err(sbi, "Swapfile has holes\n"); 4055 - return -ENOENT; 4038 + return -EINVAL; 4056 4039 } 4057 4040 4058 4041 static int f2fs_swap_activate(struct swap_info_struct *sis, struct file *file,
+1 -1
fs/f2fs/f2fs.h
··· 3956 3956 void f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed); 3957 3957 void f2fs_put_page_dic(struct page *page); 3958 3958 int f2fs_init_compress_ctx(struct compress_ctx *cc); 3959 - void f2fs_destroy_compress_ctx(struct compress_ctx *cc); 3959 + void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse); 3960 3960 void f2fs_init_compress_info(struct f2fs_sb_info *sbi); 3961 3961 int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi); 3962 3962 void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi);
+2 -1
fs/f2fs/file.c
··· 1817 1817 struct f2fs_inode_info *fi = F2FS_I(inode); 1818 1818 u32 masked_flags = fi->i_flags & mask; 1819 1819 1820 - f2fs_bug_on(F2FS_I_SB(inode), (iflags & ~mask)); 1820 + /* mask can be shrunk by flags_valid selector */ 1821 + iflags &= mask; 1821 1822 1822 1823 /* Is it quota file? Do not allow user to mess with it */ 1823 1824 if (IS_NOQUOTA(inode))
+2 -2
fs/f2fs/segment.c
··· 3574 3574 3575 3575 return err; 3576 3576 drop_bio: 3577 - if (fio->bio) { 3577 + if (fio->bio && *(fio->bio)) { 3578 3578 struct bio *bio = *(fio->bio); 3579 3579 3580 3580 bio->bi_status = BLK_STS_IOERR; 3581 3581 bio_endio(bio); 3582 - fio->bio = NULL; 3582 + *(fio->bio) = NULL; 3583 3583 } 3584 3584 return err; 3585 3585 }
+4 -3
fs/hfsplus/extents.c
··· 598 598 res = __hfsplus_ext_cache_extent(&fd, inode, alloc_cnt); 599 599 if (res) 600 600 break; 601 - hfs_brec_remove(&fd); 602 601 603 - mutex_unlock(&fd.tree->tree_lock); 604 602 start = hip->cached_start; 603 + if (blk_cnt <= start) 604 + hfs_brec_remove(&fd); 605 + mutex_unlock(&fd.tree->tree_lock); 605 606 hfsplus_free_extents(sb, hip->cached_extents, 606 607 alloc_cnt - start, alloc_cnt - blk_cnt); 607 608 hfsplus_dump_extent(hip->cached_extents); 609 + mutex_lock(&fd.tree->tree_lock); 608 610 if (blk_cnt > start) { 609 611 hip->extent_state |= HFSPLUS_EXT_DIRTY; 610 612 break; ··· 614 612 alloc_cnt = start; 615 613 hip->cached_start = hip->cached_blocks = 0; 616 614 hip->extent_state &= ~(HFSPLUS_EXT_DIRTY | HFSPLUS_EXT_NEW); 617 - mutex_lock(&fd.tree->tree_lock); 618 615 } 619 616 hfs_find_exit(&fd); 620 617
+5
fs/hugetlbfs/inode.c
··· 131 131 static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma) 132 132 { 133 133 struct inode *inode = file_inode(file); 134 + struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode); 134 135 loff_t len, vma_len; 135 136 int ret; 136 137 struct hstate *h = hstate_file(file); ··· 146 145 */ 147 146 vma->vm_flags |= VM_HUGETLB | VM_DONTEXPAND; 148 147 vma->vm_ops = &hugetlb_vm_ops; 148 + 149 + ret = seal_check_future_write(info->seals, vma); 150 + if (ret) 151 + return ret; 149 152 150 153 /* 151 154 * page based offset in vm_pgoff could be sufficiently large to
+10 -9
fs/io_uring.c
··· 100 100 #define IORING_MAX_RESTRICTIONS (IORING_RESTRICTION_LAST + \ 101 101 IORING_REGISTER_LAST + IORING_OP_LAST) 102 102 103 + #define IORING_MAX_REG_BUFFERS (1U << 14) 104 + 103 105 #define SQE_VALID_FLAGS (IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK| \ 104 106 IOSQE_IO_HARDLINK | IOSQE_ASYNC | \ 105 107 IOSQE_BUFFER_SELECT) ··· 4037 4035 #if defined(CONFIG_EPOLL) 4038 4036 if (sqe->ioprio || sqe->buf_index) 4039 4037 return -EINVAL; 4040 - if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL))) 4038 + if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 4041 4039 return -EINVAL; 4042 4040 4043 4041 req->epoll.epfd = READ_ONCE(sqe->fd); ··· 4152 4150 4153 4151 static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) 4154 4152 { 4155 - if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL))) 4153 + if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 4156 4154 return -EINVAL; 4157 4155 if (sqe->ioprio || sqe->buf_index) 4158 4156 return -EINVAL; ··· 5829 5827 static int io_rsrc_update_prep(struct io_kiocb *req, 5830 5828 const struct io_uring_sqe *sqe) 5831 5829 { 5832 - if (unlikely(req->ctx->flags & IORING_SETUP_SQPOLL)) 5833 - return -EINVAL; 5834 5830 if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT))) 5835 5831 return -EINVAL; 5836 5832 if (sqe->ioprio || sqe->rw_flags) ··· 6354 6354 * We don't expect the list to be empty, that will only happen if we 6355 6355 * race with the completion of the linked work. 6356 6356 */ 6357 - if (prev && req_ref_inc_not_zero(prev)) 6357 + if (prev) { 6358 6358 io_remove_next_linked(prev); 6359 - else 6360 - prev = NULL; 6359 + if (!req_ref_inc_not_zero(prev)) 6360 + prev = NULL; 6361 + } 6361 6362 spin_unlock_irqrestore(&ctx->completion_lock, flags); 6362 6363 6363 6364 if (prev) { 6364 6365 io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME); 6365 6366 io_put_req_deferred(prev, 1); 6367 + io_put_req_deferred(req, 1); 6366 6368 } else { 6367 6369 io_req_complete_post(req, -ETIME, 0); 6368 6370 } 6369 - io_put_req_deferred(req, 1); 6370 6371 return HRTIMER_NORESTART; 6371 6372 } 6372 6373 ··· 8391 8390 8392 8391 if (ctx->user_bufs) 8393 8392 return -EBUSY; 8394 - if (!nr_args || nr_args > UIO_MAXIOV) 8393 + if (!nr_args || nr_args > IORING_MAX_REG_BUFFERS) 8395 8394 return -EINVAL; 8396 8395 ret = io_rsrc_node_switch_start(ctx); 8397 8396 if (ret)
+2 -2
fs/iomap/buffered-io.c
··· 394 394 { 395 395 struct inode *inode = rac->mapping->host; 396 396 loff_t pos = readahead_pos(rac); 397 - loff_t length = readahead_length(rac); 397 + size_t length = readahead_length(rac); 398 398 struct iomap_readpage_ctx ctx = { 399 399 .rac = rac, 400 400 }; ··· 402 402 trace_iomap_readahead(inode, readahead_count(rac)); 403 403 404 404 while (length > 0) { 405 - loff_t ret = iomap_apply(inode, pos, length, 0, ops, 405 + ssize_t ret = iomap_apply(inode, pos, length, 0, ops, 406 406 &ctx, iomap_readahead_actor); 407 407 if (ret <= 0) { 408 408 WARN_ON_ONCE(ret == 0);
+5 -1
fs/namespace.c
··· 3855 3855 if (!(m->mnt_sb->s_type->fs_flags & FS_ALLOW_IDMAP)) 3856 3856 return -EINVAL; 3857 3857 3858 + /* Don't yet support filesystems mountable in user namespaces. */ 3859 + if (m->mnt_sb->s_user_ns != &init_user_ns) 3860 + return -EINVAL; 3861 + 3858 3862 /* We're not controlling the superblock. */ 3859 - if (!ns_capable(m->mnt_sb->s_user_ns, CAP_SYS_ADMIN)) 3863 + if (!capable(CAP_SYS_ADMIN)) 3860 3864 return -EPERM; 3861 3865 3862 3866 /* Mount has already been visible in the filesystem hierarchy. */
+2 -4
fs/quota/dquot.c
··· 288 288 static struct dquot *find_dquot(unsigned int hashent, struct super_block *sb, 289 289 struct kqid qid) 290 290 { 291 - struct hlist_node *node; 292 291 struct dquot *dquot; 293 292 294 - hlist_for_each (node, dquot_hash+hashent) { 295 - dquot = hlist_entry(node, struct dquot, dq_hash); 293 + hlist_for_each_entry(dquot, dquot_hash+hashent, dq_hash) 296 294 if (dquot->dq_sb == sb && qid_eq(dquot->dq_id, qid)) 297 295 return dquot; 298 - } 296 + 299 297 return NULL; 300 298 } 301 299
+3 -3
fs/squashfs/file.c
··· 211 211 * If the skip factor is limited in this way then the file will use multiple 212 212 * slots. 213 213 */ 214 - static inline int calculate_skip(int blocks) 214 + static inline int calculate_skip(u64 blocks) 215 215 { 216 - int skip = blocks / ((SQUASHFS_META_ENTRIES + 1) 216 + u64 skip = blocks / ((SQUASHFS_META_ENTRIES + 1) 217 217 * SQUASHFS_META_INDEXES); 218 - return min(SQUASHFS_CACHED_BLKS - 1, skip + 1); 218 + return min((u64) SQUASHFS_CACHED_BLKS - 1, skip + 1); 219 219 } 220 220 221 221
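Widening calculate_skip() to u64 matters because a block count derived from a 64-bit file size no longer fits in an int for sufficiently large files; the truncated value can go negative, and the min() then yields a bogus, possibly negative, cache-slot skip. A standalone sketch of the failure mode follows; the three constants are modelled on squashfs_fs.h and should be treated as illustrative, and the int truncation behaviour assumes a typical two's-complement target.

#include <stdint.h>
#include <stdio.h>

#define META_ENTRIES 127
#define META_INDEXES 2048
#define CACHED_BLKS  8

static int skip_int(int blocks)		/* the old signature */
{
	int skip = blocks / ((META_ENTRIES + 1) * META_INDEXES);
	return skip + 1 < CACHED_BLKS - 1 ? skip + 1 : CACHED_BLKS - 1;	/* min() */
}

static uint64_t skip_u64(uint64_t blocks)	/* the fixed signature */
{
	uint64_t skip = blocks / ((META_ENTRIES + 1) * META_INDEXES);
	return skip + 1 < CACHED_BLKS - 1 ? skip + 1 : CACHED_BLKS - 1;
}

int main(void)
{
	/* A block count only reachable with a huge (multi-terabyte) file. */
	uint64_t blocks = 1ULL << 31;

	/* Truncating to int makes the count negative and the skip nonsense. */
	printf("int: %d\n", skip_int((int)blocks));	/* -8191: bogus index */
	printf("u64: %llu\n", (unsigned long long)skip_u64(blocks));	/* 7 */
	return 0;
}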
-5
include/linux/blkdev.h
··· 676 676 extern void blk_set_pm_only(struct request_queue *q); 677 677 extern void blk_clear_pm_only(struct request_queue *q); 678 678 679 - static inline bool blk_account_rq(struct request *rq) 680 - { 681 - return (rq->rq_flags & RQF_STARTED) && !blk_rq_is_passthrough(rq); 682 - } 683 - 684 679 #define list_entry_rq(ptr) list_entry((ptr), struct request, queuelist) 685 680 686 681 #define rq_data_dir(rq) (op_is_write(req_op(rq)) ? WRITE : READ)
+1
include/linux/console_struct.h
··· 101 101 unsigned int vc_rows; 102 102 unsigned int vc_size_row; /* Bytes per row */ 103 103 unsigned int vc_scan_lines; /* # of scan lines */ 104 + unsigned int vc_cell_height; /* CRTC character cell height */ 104 105 unsigned long vc_origin; /* [!] Start of real screen */ 105 106 unsigned long vc_scr_end; /* [!] End of real screen */ 106 107 unsigned long vc_visible_origin; /* [!] Top of visible window */
+5
include/linux/dynamic_debug.h
··· 32 32 #define _DPRINTK_FLAGS_INCL_FUNCNAME (1<<2) 33 33 #define _DPRINTK_FLAGS_INCL_LINENO (1<<3) 34 34 #define _DPRINTK_FLAGS_INCL_TID (1<<4) 35 + 36 + #define _DPRINTK_FLAGS_INCL_ANY \ 37 + (_DPRINTK_FLAGS_INCL_MODNAME | _DPRINTK_FLAGS_INCL_FUNCNAME |\ 38 + _DPRINTK_FLAGS_INCL_LINENO | _DPRINTK_FLAGS_INCL_TID) 39 + 35 40 #if defined DEBUG 36 41 #define _DPRINTK_FLAGS_DEFAULT _DPRINTK_FLAGS_PRINT 37 42 #else
+1 -1
include/linux/elevator.h
··· 34 34 void (*depth_updated)(struct blk_mq_hw_ctx *); 35 35 36 36 bool (*allow_merge)(struct request_queue *, struct request *, struct bio *); 37 - bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *, unsigned int); 37 + bool (*bio_merge)(struct request_queue *, struct bio *, unsigned int); 38 38 int (*request_merge)(struct request_queue *q, struct request **, struct bio *); 39 39 void (*request_merged)(struct request_queue *, struct request *, enum elv_merge); 40 40 void (*requests_merged)(struct request_queue *, struct request *, struct request *);
+1
include/linux/fwnode.h
··· 187 187 extern bool fw_devlink_is_strict(void); 188 188 int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup); 189 189 void fwnode_links_purge(struct fwnode_handle *fwnode); 190 + void fw_devlink_purge_absent_suppliers(struct fwnode_handle *fwnode); 190 191 191 192 #endif
-1
include/linux/libnvdimm.h
··· 141 141 142 142 struct nvdimm_bus; 143 143 struct module; 144 - struct device; 145 144 struct nd_blk_region; 146 145 struct nd_blk_region_desc { 147 146 int (*enable)(struct nvdimm_bus *nvdimm_bus, struct device *dev);
+32
include/linux/mm.h
··· 3216 3216 static inline void mem_dump_obj(void *object) {} 3217 3217 #endif 3218 3218 3219 + /** 3220 + * seal_check_future_write - Check for F_SEAL_FUTURE_WRITE flag and handle it 3221 + * @seals: the seals to check 3222 + * @vma: the vma to operate on 3223 + * 3224 + * Check whether F_SEAL_FUTURE_WRITE is set; if so, do proper check/handling on 3225 + * the vma flags. Return 0 if the check passes, or <0 on error. 3226 + */ 3227 + static inline int seal_check_future_write(int seals, struct vm_area_struct *vma) 3228 + { 3229 + if (seals & F_SEAL_FUTURE_WRITE) { 3230 + /* 3231 + * New PROT_WRITE and MAP_SHARED mmaps are not allowed when 3232 + * the "future write" seal is active. 3233 + */ 3234 + if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE)) 3235 + return -EPERM; 3236 + 3237 + /* 3238 + * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as 3239 + * MAP_SHARED and read-only, take care to not allow mprotect to 3240 + * revert protections on such mappings. Do this only for shared 3241 + * mappings. For private mappings, we don't need to mask 3242 + * VM_MAYWRITE as we still want them to be COW-writable. 3243 + */ 3244 + if (vma->vm_flags & VM_SHARED) 3245 + vma->vm_flags &= ~(VM_MAYWRITE); 3246 + } 3247 + 3248 + return 0; 3249 + } 3250 + 3219 3251 #endif /* __KERNEL__ */ 3220 3252 #endif /* _LINUX_MM_H */
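Factoring seal_check_future_write() into one helper lets shmem_mmap() and the hugetlbfs mmap path (see the fs/hugetlbfs/inode.c hunk earlier) apply identical F_SEAL_FUTURE_WRITE rules. The userspace-visible behaviour can be exercised directly. The sketch below assumes Linux 5.1 or later for the seal and a glibc that exposes memfd_create(); the fallback #define carries the uapi value for older headers.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef F_SEAL_FUTURE_WRITE
#define F_SEAL_FUTURE_WRITE 0x0010	/* uapi value; older libc headers lack it */
#endif

int main(void)
{
	int fd = memfd_create("seal-demo", MFD_ALLOW_SEALING);

	if (fd < 0 || ftruncate(fd, 4096) < 0)
		return 1;

	if (fcntl(fd, F_ADD_SEALS, F_SEAL_FUTURE_WRITE) < 0)
		return 1;

	/* New shared+writable mappings are refused with EPERM... */
	void *w = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	printf("shared rw: %s\n", w == MAP_FAILED ? strerror(errno) : "mapped?!");

	/* ...read-only shared mappings still work (and stay read-only)... */
	void *r = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	printf("shared ro: %s\n", r == MAP_FAILED ? strerror(errno) : "ok");

	/* ...and private mappings remain COW-writable, as the comment says. */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	printf("private rw: %s\n", p == MAP_FAILED ? strerror(errno) : "ok");
	return 0;
}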
+2 -2
include/linux/mm_types.h
··· 97 97 }; 98 98 struct { /* page_pool used by netstack */ 99 99 /** 100 - * @dma_addr: might require a 64-bit value even on 100 + * @dma_addr: might require a 64-bit value on 101 101 * 32-bit architectures. 102 102 */ 103 - dma_addr_t dma_addr; 103 + unsigned long dma_addr[2]; 104 104 }; 105 105 struct { /* slab, slob and slub */ 106 106 union {
+3 -3
include/linux/pagemap.h
··· 997 997 * readahead_length - The number of bytes in this readahead request. 998 998 * @rac: The readahead request. 999 999 */ 1000 - static inline loff_t readahead_length(struct readahead_control *rac) 1000 + static inline size_t readahead_length(struct readahead_control *rac) 1001 1001 { 1002 - return (loff_t)rac->_nr_pages * PAGE_SIZE; 1002 + return rac->_nr_pages * PAGE_SIZE; 1003 1003 } 1004 1004 1005 1005 /** ··· 1024 1024 * readahead_batch_length - The number of bytes in the current batch. 1025 1025 * @rac: The readahead request. 1026 1026 */ 1027 - static inline loff_t readahead_batch_length(struct readahead_control *rac) 1027 + static inline size_t readahead_batch_length(struct readahead_control *rac) 1028 1028 { 1029 1029 return rac->_batch_count * PAGE_SIZE; 1030 1030 }
+1
include/linux/pm.h
··· 601 601 unsigned int idle_notification:1; 602 602 unsigned int request_pending:1; 603 603 unsigned int deferred_resume:1; 604 + unsigned int needs_force_resume:1; 604 605 unsigned int runtime_auto:1; 605 606 bool ignore_children:1; 606 607 unsigned int no_callbacks:1;
+1 -1
include/linux/randomize_kstack.h
··· 38 38 u32 offset = raw_cpu_read(kstack_offset); \ 39 39 u8 *ptr = __builtin_alloca(KSTACK_OFFSET_MAX(offset)); \ 40 40 /* Keep allocation even after "ptr" loses scope. */ \ 41 - asm volatile("" : "=o"(*ptr) :: "memory"); \ 41 + asm volatile("" :: "r"(ptr) : "memory"); \ 42 42 } \ 43 43 } while (0) 44 44
+11 -1
include/net/page_pool.h
··· 198 198 199 199 static inline dma_addr_t page_pool_get_dma_addr(struct page *page) 200 200 { 201 - return page->dma_addr; 201 + dma_addr_t ret = page->dma_addr[0]; 202 + if (sizeof(dma_addr_t) > sizeof(unsigned long)) 203 + ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16; 204 + return ret; 205 + } 206 + 207 + static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr) 208 + { 209 + page->dma_addr[0] = addr; 210 + if (sizeof(dma_addr_t) > sizeof(unsigned long)) 211 + page->dma_addr[1] = upper_32_bits(addr); 202 212 } 203 213 204 214 static inline bool is_page_pool_compiled_in(void)
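Two notes on this change: splitting the address across two unsigned longs (see the mm_types.h hunk above) keeps struct page usable on 32-bit builds with a 64-bit dma_addr_t, and the odd-looking << 16 << 16 exists because the high-word branch is dead code when dma_addr_t is only 32 bits wide yet still gets type-checked, where a literal shift by 32 of a 32-bit type would be undefined and provoke a warning. A userspace model of the same round trip; the typedefs fake a 32-bit kernel and are purely illustrative:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;	/* pretend: 64-bit DMA addresses... */
typedef uint32_t kernel_ulong_t;	/* ...on a 32-bit "unsigned long" arch */

struct fake_page { kernel_ulong_t dma_addr[2]; };

static void set_dma_addr(struct fake_page *page, dma_addr_t addr)
{
	page->dma_addr[0] = (kernel_ulong_t)addr;
	if (sizeof(dma_addr_t) > sizeof(kernel_ulong_t))
		page->dma_addr[1] = (kernel_ulong_t)(addr >> 32);	/* upper_32_bits() */
}

static dma_addr_t get_dma_addr(const struct fake_page *page)
{
	dma_addr_t ret = page->dma_addr[0];

	/*
	 * "<< 16 << 16" instead of "<< 32": when dma_addr_t is only 32 bits
	 * wide this branch is unreachable, but the compiler still checks it,
	 * and a 32-bit shift of a 32-bit type would warn (and be undefined).
	 * Two 16-bit shifts stay well-defined either way.
	 */
	if (sizeof(dma_addr_t) > sizeof(kernel_ulong_t))
		ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
	return ret;
}

int main(void)
{
	struct fake_page page;

	set_dma_addr(&page, 0x123456789abcdef0ULL);
	printf("round trip: %llx\n", (unsigned long long)get_dma_addr(&page));
	return 0;
}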
+1 -1
include/uapi/linux/fs.h
··· 185 185 #define BLKROTATIONAL _IO(0x12,126) 186 186 #define BLKZEROOUT _IO(0x12,127) 187 187 /* 188 - * A jump here: 130-131 are reserved for zoned block devices 188 + * A jump here: 130-136 are reserved for zoned block devices 189 189 * (see uapi/linux/blkzoned.h) 190 190 */ 191 191
+33
include/uapi/misc/habanalabs.h
··· 239 239 GAUDI_ENGINE_ID_SIZE 240 240 }; 241 241 242 + /* 243 + * ASIC specific PLL index 244 + * 245 + * Used to retrieve frequency info of different IPs via 246 + * HL_INFO_PLL_FREQUENCY under HL_IOCTL_INFO IOCTL. The enums need to be 247 + * used as an index in struct hl_pll_frequency_info. 248 + */ 249 + 250 + enum hl_goya_pll_index { 251 + HL_GOYA_CPU_PLL = 0, 252 + HL_GOYA_IC_PLL, 253 + HL_GOYA_MC_PLL, 254 + HL_GOYA_MME_PLL, 255 + HL_GOYA_PCI_PLL, 256 + HL_GOYA_EMMC_PLL, 257 + HL_GOYA_TPC_PLL, 258 + HL_GOYA_PLL_MAX 259 + }; 260 + 261 + enum hl_gaudi_pll_index { 262 + HL_GAUDI_CPU_PLL = 0, 263 + HL_GAUDI_PCI_PLL, 264 + HL_GAUDI_SRAM_PLL, 265 + HL_GAUDI_HBM_PLL, 266 + HL_GAUDI_NIC_PLL, 267 + HL_GAUDI_DMA_PLL, 268 + HL_GAUDI_MESH_PLL, 269 + HL_GAUDI_MME_PLL, 270 + HL_GAUDI_TPC_PLL, 271 + HL_GAUDI_IF_PLL, 272 + HL_GAUDI_PLL_MAX 273 + }; 274 + 242 275 enum hl_device_status { 243 276 HL_DEVICE_STATUS_OPERATIONAL, 244 277 HL_DEVICE_STATUS_IN_RESET,
+14 -1
include/xen/arm/swiotlb-xen.h
··· 2 2 #ifndef _ASM_ARM_SWIOTLB_XEN_H 3 3 #define _ASM_ARM_SWIOTLB_XEN_H 4 4 5 - extern int xen_swiotlb_detect(void); 5 + #include <xen/features.h> 6 + #include <xen/xen.h> 7 + 8 + static inline int xen_swiotlb_detect(void) 9 + { 10 + if (!xen_domain()) 11 + return 0; 12 + if (xen_feature(XENFEAT_direct_mapped)) 13 + return 1; 14 + /* legacy case */ 15 + if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain()) 16 + return 1; 17 + return 0; 18 + } 6 19 7 20 #endif /* _ASM_ARM_SWIOTLB_XEN_H */
+17 -1
kernel/ptrace.c
··· 170 170 spin_unlock(&child->sighand->siglock); 171 171 } 172 172 173 + static bool looks_like_a_spurious_pid(struct task_struct *task) 174 + { 175 + if (task->exit_code != ((PTRACE_EVENT_EXEC << 8) | SIGTRAP)) 176 + return false; 177 + 178 + if (task_pid_vnr(task) == task->ptrace_message) 179 + return false; 180 + /* 181 + * The tracee changed its pid but the PTRACE_EVENT_EXEC event 182 + * was not wait()'ed, most probably debugger targets the old 183 + * leader which was destroyed in de_thread(). 184 + */ 185 + return true; 186 + } 187 + 173 188 /* Ensure that nothing can wake it up, even SIGKILL */ 174 189 static bool ptrace_freeze_traced(struct task_struct *task) 175 190 { ··· 195 180 return ret; 196 181 197 182 spin_lock_irq(&task->sighand->siglock); 198 - if (task_is_traced(task) && !__fatal_signal_pending(task)) { 183 + if (task_is_traced(task) && !looks_like_a_spurious_pid(task) && 184 + !__fatal_signal_pending(task)) { 199 185 task->state = __TASK_TRACED; 200 186 ret = true; 201 187 }
+1 -1
kernel/resource.c
··· 1805 1805 REGION_DISJOINT) 1806 1806 continue; 1807 1807 1808 - if (!__request_region_locked(res, &iomem_resource, addr, size, 1808 + if (__request_region_locked(res, &iomem_resource, addr, size, 1809 1809 name, 0)) 1810 1810 break; 1811 1811
+1 -1
kernel/sched/fair.c
··· 6217 6217 } 6218 6218 6219 6219 if (has_idle_core) 6220 - set_idle_cores(this, false); 6220 + set_idle_cores(target, false); 6221 6221 6222 6222 if (sched_feat(SIS_PROP) && !has_idle_core) { 6223 6223 time = cpu_clock(this) - time;
+1 -1
kernel/time/alarmtimer.c
··· 92 92 if (rtcdev) 93 93 return -EBUSY; 94 94 95 - if (!rtc->ops->set_alarm) 95 + if (!test_bit(RTC_FEATURE_ALARM, rtc->features)) 96 96 return -1; 97 97 if (!device_may_wakeup(rtc->dev.parent)) 98 98 return -1;
+27 -4
kernel/trace/trace.c
··· 3704 3704 goto print; 3705 3705 3706 3706 while (*p) { 3707 + bool star = false; 3708 + int len = 0; 3709 + 3707 3710 j = 0; 3708 3711 3709 3712 /* We only care about %s and variants */ ··· 3728 3725 /* Need to test cases like %08.*s */ 3729 3726 for (j = 1; p[i+j]; j++) { 3730 3727 if (isdigit(p[i+j]) || 3731 - p[i+j] == '*' || 3732 3728 p[i+j] == '.') 3733 3729 continue; 3730 + if (p[i+j] == '*') { 3731 + star = true; 3732 + continue; 3733 + } 3734 3734 break; 3735 3735 } 3736 3736 if (p[i+j] == 's') 3737 3737 break; 3738 + star = false; 3738 3739 } 3739 3740 j = 0; 3740 3741 } ··· 3750 3743 strncpy(iter->fmt, p, i); 3751 3744 iter->fmt[i] = '\0'; 3752 3745 trace_seq_vprintf(&iter->seq, iter->fmt, ap); 3746 + 3747 + if (star) 3748 + len = va_arg(ap, int); 3753 3749 3754 3750 /* The ap now points to the string data of the %s */ 3755 3751 str = va_arg(ap, const char *); ··· 3772 3762 int ret; 3773 3763 3774 3764 /* Try to safely read the string */ 3775 - ret = strncpy_from_kernel_nofault(iter->fmt, str, 3776 - iter->fmt_size); 3765 + if (star) { 3766 + if (len + 1 > iter->fmt_size) 3767 + len = iter->fmt_size - 1; 3768 + if (len < 0) 3769 + len = 0; 3770 + ret = copy_from_kernel_nofault(iter->fmt, str, len); 3771 + iter->fmt[len] = 0; 3772 + star = false; 3773 + } else { 3774 + ret = strncpy_from_kernel_nofault(iter->fmt, str, 3775 + iter->fmt_size); 3776 + } 3777 3777 if (ret < 0) 3778 3778 trace_seq_printf(&iter->seq, "(0x%px)", str); 3779 3779 else ··· 3795 3775 strncpy(iter->fmt, p + i, j + 1); 3796 3776 iter->fmt[j+1] = '\0'; 3797 3777 } 3798 - trace_seq_printf(&iter->seq, iter->fmt, str); 3778 + if (star) 3779 + trace_seq_printf(&iter->seq, iter->fmt, len, str); 3780 + else 3781 + trace_seq_printf(&iter->seq, iter->fmt, str); 3799 3782 3800 3783 p += i + j + 1; 3801 3784 }
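The tracer walks the va_list itself before handing the string over, so for %*s or %.*s it must consume the int feeding the width or precision before the char pointer; pull the arguments in the wrong order and every later one is misread. A small userspace reminder of that calling convention:

#include <stdarg.h>
#include <stdio.h>

/*
 * For "%*s" / "%.*s" the int supplying the width or precision sits in the
 * va_list *before* the string pointer, so it has to be consumed first.
 */
static void show(const char *fmt, ...)
{
	va_list ap;
	int len;
	const char *str;

	va_start(ap, fmt);
	len = va_arg(ap, int);			/* the '*' argument */
	str = va_arg(ap, const char *);		/* only now the string */
	va_end(ap);

	printf("consumed len=%d -> \"", len);
	printf(fmt, len, str);			/* replay both, like trace_seq_printf(fmt, len, str) */
	printf("\"\n");
}

int main(void)
{
	show("%.*s", 3, "abcdef");	/* precision: prints "abc" */
	show("%*s", 8, "abc");		/* width: right-justifies in 8 columns */
	return 0;
}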
+12 -8
lib/dynamic_debug.c
··· 586 586 return 0; 587 587 } 588 588 589 - static char *dynamic_emit_prefix(const struct _ddebug *desc, char *buf) 589 + static char *__dynamic_emit_prefix(const struct _ddebug *desc, char *buf) 590 590 { 591 591 int pos_after_tid; 592 592 int pos = 0; 593 - 594 - *buf = '\0'; 595 593 596 594 if (desc->flags & _DPRINTK_FLAGS_INCL_TID) { 597 595 if (in_interrupt()) ··· 616 618 return buf; 617 619 } 618 620 621 + static inline char *dynamic_emit_prefix(struct _ddebug *desc, char *buf) 622 + { 623 + if (unlikely(desc->flags & _DPRINTK_FLAGS_INCL_ANY)) 624 + return __dynamic_emit_prefix(desc, buf); 625 + return buf; 626 + } 627 + 619 628 void __dynamic_pr_debug(struct _ddebug *descriptor, const char *fmt, ...) 620 629 { 621 630 va_list args; 622 631 struct va_format vaf; 623 - char buf[PREFIX_SIZE]; 632 + char buf[PREFIX_SIZE] = ""; 624 633 625 634 BUG_ON(!descriptor); 626 635 BUG_ON(!fmt); ··· 660 655 if (!dev) { 661 656 printk(KERN_DEBUG "(NULL device *): %pV", &vaf); 662 657 } else { 663 - char buf[PREFIX_SIZE]; 658 + char buf[PREFIX_SIZE] = ""; 664 659 665 660 dev_printk_emit(LOGLEVEL_DEBUG, dev, "%s%s %s: %pV", 666 661 dynamic_emit_prefix(descriptor, buf), ··· 689 684 vaf.va = &args; 690 685 691 686 if (dev && dev->dev.parent) { 692 - char buf[PREFIX_SIZE]; 687 + char buf[PREFIX_SIZE] = ""; 693 688 694 689 dev_printk_emit(LOGLEVEL_DEBUG, dev->dev.parent, 695 690 "%s%s %s %s%s: %pV", ··· 725 720 vaf.va = &args; 726 721 727 722 if (ibdev && ibdev->dev.parent) { 728 - char buf[PREFIX_SIZE]; 723 + char buf[PREFIX_SIZE] = ""; 729 724 730 725 dev_printk_emit(LOGLEVEL_DEBUG, ibdev->dev.parent, 731 726 "%s%s %s %s: %pV", ··· 920 915 921 916 static int ddebug_proc_open(struct inode *inode, struct file *file) 922 917 { 923 - vpr_info("called\n"); 924 918 return seq_open_private(file, &ddebug_proc_seqops, 925 919 sizeof(struct ddebug_iter)); 926 920 }
+23 -6
lib/test_kasan.c
··· 654 654 655 655 static void kasan_global_oob(struct kunit *test) 656 656 { 657 - volatile int i = 3; 658 - char *p = &global_array[ARRAY_SIZE(global_array) + i]; 657 + /* 658 + * Deliberate out-of-bounds access. To prevent CONFIG_UBSAN_LOCAL_BOUNDS 659 + * from failing here and panicing the kernel, access the array via a 660 + * volatile pointer, which will prevent the compiler from being able to 661 + * determine the array bounds. 662 + * 663 + * This access uses a volatile pointer to char (char *volatile) rather 664 + * than the more conventional pointer to volatile char (volatile char *) 665 + * because we want to prevent the compiler from making inferences about 666 + * the pointer itself (i.e. its array bounds), not the data that it 667 + * refers to. 668 + */ 669 + char *volatile array = global_array; 670 + char *p = &array[ARRAY_SIZE(global_array) + 3]; 659 671 660 672 /* Only generic mode instruments globals. */ 661 673 KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); ··· 715 703 static void kasan_stack_oob(struct kunit *test) 716 704 { 717 705 char stack_array[10]; 718 - volatile int i = OOB_TAG_OFF; 719 - char *p = &stack_array[ARRAY_SIZE(stack_array) + i]; 706 + /* See comment in kasan_global_oob. */ 707 + char *volatile array = stack_array; 708 + char *p = &array[ARRAY_SIZE(stack_array) + OOB_TAG_OFF]; 720 709 721 710 KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_STACK); 722 711 ··· 728 715 { 729 716 volatile int i = 10; 730 717 char alloca_array[i]; 731 - char *p = alloca_array - 1; 718 + /* See comment in kasan_global_oob. */ 719 + char *volatile array = alloca_array; 720 + char *p = array - 1; 732 721 733 722 /* Only generic mode instruments dynamic allocas. */ 734 723 KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC); ··· 743 728 { 744 729 volatile int i = 10; 745 730 char alloca_array[i]; 746 - char *p = alloca_array + i; 731 + /* See comment in kasan_global_oob. */ 732 + char *volatile array = alloca_array; 733 + char *p = array + i; 747 734 748 735 /* Only generic mode instruments dynamic allocas. */ 749 736 KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);
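The comment above rewards a close read: the qualifier's position relative to the * decides whether the pointed-to data or the pointer itself is volatile, and only the latter stops the compiler from reasoning about the pointer's provenance, and hence about the array bounds. A tiny sketch of the two declarations; the out-of-bounds address is deliberate, exactly as in the KASAN tests:

#include <stdio.h>

static char global_array[10];

int main(void)
{
	/* Qualifier left of '*': the pointed-to data is volatile. */
	volatile char *vdata = global_array;

	/* Qualifier right of '*': the pointer itself is volatile. */
	char *volatile vptr = global_array;

	/*
	 * Because vptr must be re-read on every use, the compiler can no
	 * longer prove it still points at global_array, so it cannot fold
	 * away (or flag, with CONFIG_UBSAN_LOCAL_BOUNDS-style checking) the
	 * deliberate out-of-bounds offset computed from it.
	 */
	char *oob = &vptr[sizeof(global_array) + 3];

	printf("%p %p %p\n", (void *)vdata, (void *)vptr, (void *)oob);
	return 0;
}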
+1
mm/hugetlb.c
··· 4056 4056 * See Documentation/vm/mmu_notifier.rst 4057 4057 */ 4058 4058 huge_ptep_set_wrprotect(src, addr, src_pte); 4059 + entry = huge_pte_wrprotect(entry); 4059 4060 } 4060 4061 4061 4062 page_dup_rmap(ptepage, true);
+3 -3
mm/ioremap.c
··· 16 16 #include "pgalloc-track.h" 17 17 18 18 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP 19 - static bool __ro_after_init iomap_max_page_shift = PAGE_SHIFT; 19 + static unsigned int __ro_after_init iomap_max_page_shift = BITS_PER_LONG - 1; 20 20 21 21 static int __init set_nohugeiomap(char *str) 22 22 { 23 - iomap_max_page_shift = P4D_SHIFT; 23 + iomap_max_page_shift = PAGE_SHIFT; 24 24 return 0; 25 25 } 26 26 early_param("nohugeiomap", set_nohugeiomap); 27 27 #else /* CONFIG_HAVE_ARCH_HUGE_VMAP */ 28 - static const bool iomap_max_page_shift = PAGE_SHIFT; 28 + static const unsigned int iomap_max_page_shift = PAGE_SHIFT; 29 29 #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */ 30 30 31 31 int ioremap_page_range(unsigned long addr,
+2 -1
mm/ksm.c
··· 776 776 struct page *page; 777 777 778 778 stable_node = rmap_item->head; 779 - page = get_ksm_page(stable_node, GET_KSM_PAGE_NOLOCK); 779 + page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK); 780 780 if (!page) 781 781 goto out; 782 782 783 783 hlist_del(&rmap_item->hlist); 784 + unlock_page(page); 784 785 put_page(page); 785 786 786 787 if (!hlist_empty(&stable_node->hlist))
+15 -19
mm/shmem.c
··· 2258 2258 static int shmem_mmap(struct file *file, struct vm_area_struct *vma) 2259 2259 { 2260 2260 struct shmem_inode_info *info = SHMEM_I(file_inode(file)); 2261 + int ret; 2261 2262 2262 - if (info->seals & F_SEAL_FUTURE_WRITE) { 2263 - /* 2264 - * New PROT_WRITE and MAP_SHARED mmaps are not allowed when 2265 - * "future write" seal active. 2266 - */ 2267 - if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE)) 2268 - return -EPERM; 2269 - 2270 - /* 2271 - * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as 2272 - * MAP_SHARED and read-only, take care to not allow mprotect to 2273 - * revert protections on such mappings. Do this only for shared 2274 - * mappings. For private mappings, don't need to mask 2275 - * VM_MAYWRITE as we still want them to be COW-writable. 2276 - */ 2277 - if (vma->vm_flags & VM_SHARED) 2278 - vma->vm_flags &= ~(VM_MAYWRITE); 2279 - } 2263 + ret = seal_check_future_write(info->seals, vma); 2264 + if (ret) 2265 + return ret; 2280 2266 2281 2267 /* arm64 - allow memory tagging on RAM-based files */ 2282 2268 vma->vm_flags |= VM_MTE_ALLOWED; ··· 2361 2375 pgoff_t offset, max_off; 2362 2376 2363 2377 ret = -ENOMEM; 2364 - if (!shmem_inode_acct_block(inode, 1)) 2378 + if (!shmem_inode_acct_block(inode, 1)) { 2379 + /* 2380 + * We may have got a page, returned -ENOENT triggering a retry, 2381 + * and now we find ourselves with -ENOMEM. Release the page, to 2382 + * avoid a BUG_ON in our caller. 2383 + */ 2384 + if (unlikely(*pagep)) { 2385 + put_page(*pagep); 2386 + *pagep = NULL; 2387 + } 2365 2388 goto out; 2389 + } 2366 2390 2367 2391 if (!*pagep) { 2368 2392 page = shmem_alloc_page(gfp, info, pgoff);
+10
mm/slab_common.c
··· 318 318 const char *cache_name; 319 319 int err; 320 320 321 + #ifdef CONFIG_SLUB_DEBUG 322 + /* 323 + * If no slub_debug was enabled globally, the static key is not yet 324 + * enabled by setup_slub_debug(). Enable it if the cache is being 325 + * created with any of the debugging flags passed explicitly. 326 + */ 327 + if (flags & SLAB_DEBUG_FLAGS) 328 + static_branch_enable(&slub_debug_enabled); 329 + #endif 330 + 321 331 mutex_lock(&slab_mutex); 322 332 323 333 err = kmem_cache_sanity_check(name, size);
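slub_debug_enabled is a static key, so the hunk above only flips a branch that is patched out of the fast path by default. A kernel-style sketch of the idiom (illustrative names, not the actual mm/slub.c definitions):

	#include <linux/jump_label.h>

	DEFINE_STATIC_KEY_FALSE(debug_enabled);	/* branch compiled out by default */

	void enable_debug_path(void)
	{
		static_branch_enable(&debug_enabled);	/* live-patches the branch sites */
	}

	void hot_path(void)
	{
		if (static_branch_unlikely(&debug_enabled)) {
			/* rarely enabled debug work; a NOP while the key is off */
		}
	}

Moving the static_branch_enable() into kmem_cache_create() ensures the key is on before any cache created with explicit debug flags starts allocating.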
-9
mm/slub.c
··· 3828 3828 3829 3829 static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags) 3830 3830 { 3831 - #ifdef CONFIG_SLUB_DEBUG 3832 - /* 3833 - * If no slub_debug was enabled globally, the static key is not yet 3834 - * enabled by setup_slub_debug(). Enable it if the cache is being 3835 - * created with any of the debugging flags passed explicitly. 3836 - */ 3837 - if (flags & SLAB_DEBUG_FLAGS) 3838 - static_branch_enable(&slub_debug_enabled); 3839 - #endif 3840 3831 s->flags = kmem_cache_flags(s->size, flags, s->name); 3841 3832 #ifdef CONFIG_SLAB_FREELIST_HARDENED 3842 3833 s->random = get_random_long();
+7 -5
net/core/page_pool.c
··· 174 174 struct page *page, 175 175 unsigned int dma_sync_size) 176 176 { 177 + dma_addr_t dma_addr = page_pool_get_dma_addr(page); 178 + 177 179 dma_sync_size = min(dma_sync_size, pool->p.max_len); 178 - dma_sync_single_range_for_device(pool->p.dev, page->dma_addr, 180 + dma_sync_single_range_for_device(pool->p.dev, dma_addr, 179 181 pool->p.offset, dma_sync_size, 180 182 pool->p.dma_dir); 181 183 } ··· 197 195 if (dma_mapping_error(pool->p.dev, dma)) 198 196 return false; 199 197 200 - page->dma_addr = dma; 198 + page_pool_set_dma_addr(page, dma); 201 199 202 200 if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) 203 201 page_pool_dma_sync_for_device(pool, page, pool->p.max_len); ··· 333 331 */ 334 332 goto skip_dma_unmap; 335 333 336 - dma = page->dma_addr; 334 + dma = page_pool_get_dma_addr(page); 337 335 338 - /* When page is unmapped, it cannot be returned our pool */ 336 + /* When page is unmapped, it cannot be returned to our pool */ 339 337 dma_unmap_page_attrs(pool->p.dev, dma, 340 338 PAGE_SIZE << pool->p.order, pool->p.dma_dir, 341 339 DMA_ATTR_SKIP_CPU_SYNC); 342 - page->dma_addr = 0; 340 + page_pool_set_dma_addr(page, 0); 343 341 skip_dma_unmap: 344 342 /* This may be the last page returned, releasing the pool, so 345 343 * it is not safe to reference pool afterwards.
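page_pool stops touching page->dma_addr directly and goes through accessors instead. A simplified sketch in the spirit of the 5.13 helpers in include/net/page_pool.h, where the address is split across two unsigned longs so a 64-bit dma_addr_t still fits in struct page on 32-bit systems (see the tree for the authoritative version):

	static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
	{
		dma_addr_t ret = page->dma_addr[0];

		/* Reassemble the high half on 32-bit kernels with 64-bit DMA.
		 * The double shift avoids an undefined 32-bit shift by 32. */
		if (sizeof(dma_addr_t) > sizeof(unsigned long))
			ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
		return ret;
	}

	static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
	{
		page->dma_addr[0] = addr;
		if (sizeof(dma_addr_t) > sizeof(unsigned long))
			page->dma_addr[1] = upper_32_bits(addr);
	}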
+8 -7
net/smc/smc_ism.c
··· 402 402 return NULL; 403 403 } 404 404 405 + smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)", 406 + WQ_MEM_RECLAIM, name); 407 + if (!smcd->event_wq) { 408 + kfree(smcd->conn); 409 + kfree(smcd); 410 + return NULL; 411 + } 412 + 405 413 smcd->dev.parent = parent; 406 414 smcd->dev.release = smcd_release; 407 415 device_initialize(&smcd->dev); ··· 423 415 INIT_LIST_HEAD(&smcd->vlan); 424 416 INIT_LIST_HEAD(&smcd->lgr_list); 425 417 init_waitqueue_head(&smcd->lgrs_deleted); 426 - smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)", 427 - WQ_MEM_RECLAIM, name); 428 - if (!smcd->event_wq) { 429 - kfree(smcd->conn); 430 - kfree(smcd); 431 - return NULL; 432 - } 433 418 return smcd; 434 419 } 435 420 EXPORT_SYMBOL_GPL(smcd_alloc_dev);
+5 -3
security/keys/trusted-keys/trusted_tpm1.c
··· 493 493 494 494 ret = tpm_get_random(chip, td->nonceodd, TPM_NONCE_SIZE); 495 495 if (ret < 0) 496 - return ret; 496 + goto out; 497 497 498 - if (ret != TPM_NONCE_SIZE) 499 - return -EIO; 498 + if (ret != TPM_NONCE_SIZE) { 499 + ret = -EIO; 500 + goto out; 501 + } 500 502 501 503 ordinal = htonl(TPM_ORD_SEAL); 502 504 datsize = htonl(datalen);
+3 -3
security/keys/trusted-keys/trusted_tpm2.c
··· 336 336 rc = -EPERM; 337 337 } 338 338 if (blob_len < 0) 339 - return blob_len; 340 - 341 - payload->blob_len = blob_len; 339 + rc = blob_len; 340 + else 341 + payload->blob_len = blob_len; 342 342 343 343 tpm_put_ops(chip); 344 344 return rc;
+2 -11
sound/isa/gus/gus_main.c
··· 77 77 78 78 static void snd_gus_init_control(struct snd_gus_card *gus) 79 79 { 80 - int ret; 81 - 82 - if (!gus->ace_flag) { 83 - ret = 84 - snd_ctl_add(gus->card, 85 - snd_ctl_new1(&snd_gus_joystick_control, 86 - gus)); 87 - if (ret) 88 - snd_printk(KERN_ERR "gus: snd_ctl_add failed: %d\n", 89 - ret); 90 - } 80 + if (!gus->ace_flag) 81 + snd_ctl_add(gus->card, snd_ctl_new1(&snd_gus_joystick_control, gus)); 91 82 } 92 83 93 84 /*
+3 -7
sound/isa/sb/sb16_main.c
··· 846 846 snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_sb16_playback_ops); 847 847 snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_sb16_capture_ops); 848 848 849 - if (chip->dma16 >= 0 && chip->dma8 != chip->dma16) { 850 - err = snd_ctl_add(card, snd_ctl_new1( 851 - &snd_sb16_dma_control, chip)); 852 - if (err) 853 - return err; 854 - } else { 849 + if (chip->dma16 >= 0 && chip->dma8 != chip->dma16) 850 + snd_ctl_add(card, snd_ctl_new1(&snd_sb16_dma_control, chip)); 851 + else 855 852 pcm->info_flags = SNDRV_PCM_INFO_HALF_DUPLEX; 856 - } 857 853 858 854 snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, 859 855 card->dev, 64*1024, 128*1024);
+5 -5
sound/isa/sb/sb8.c
··· 93 93 acard = card->private_data; 94 94 card->private_free = snd_sb8_free; 95 95 96 - /* block the 0x388 port to avoid PnP conflicts */ 96 + /* 97 + * Block the 0x388 port to avoid PnP conflicts. 98 + * No need to check this value after request_region, 99 + * as we never do anything with it. 100 + */ 97 101 acard->fm_res = request_region(0x388, 4, "SoundBlaster FM"); 98 - if (!acard->fm_res) { 99 - err = -EBUSY; 100 - goto _err; 101 - } 102 102 103 103 if (port[dev] != SNDRV_AUTO_PORT) { 104 104 if ((err = snd_sbdsp_create(card, port[dev], irq[dev],
+13 -15
sound/soc/codecs/cs43130.c
··· 1735 1735 static DEVICE_ATTR(hpload_ac_l, 0444, cs43130_show_ac_l, NULL); 1736 1736 static DEVICE_ATTR(hpload_ac_r, 0444, cs43130_show_ac_r, NULL); 1737 1737 1738 + static struct attribute *hpload_attrs[] = { 1739 + &dev_attr_hpload_dc_l.attr, 1740 + &dev_attr_hpload_dc_r.attr, 1741 + &dev_attr_hpload_ac_l.attr, 1742 + &dev_attr_hpload_ac_r.attr, 1743 + }; 1744 + ATTRIBUTE_GROUPS(hpload); 1745 + 1738 1746 static struct reg_sequence hp_en_cal_seq[] = { 1739 1747 {CS43130_INT_MASK_4, CS43130_INT_MASK_ALL}, 1740 1748 {CS43130_HP_MEAS_LOAD_1, 0}, ··· 2310 2302 2311 2303 cs43130->hpload_done = false; 2312 2304 if (cs43130->dc_meas) { 2313 - ret = device_create_file(component->dev, &dev_attr_hpload_dc_l); 2314 - if (ret < 0) 2315 - return ret; 2316 - 2317 - ret = device_create_file(component->dev, &dev_attr_hpload_dc_r); 2318 - if (ret < 0) 2319 - return ret; 2320 - 2321 - ret = device_create_file(component->dev, &dev_attr_hpload_ac_l); 2322 - if (ret < 0) 2323 - return ret; 2324 - 2325 - ret = device_create_file(component->dev, &dev_attr_hpload_ac_r); 2326 - if (ret < 0) 2305 + ret = sysfs_create_groups(&component->dev->kobj, hpload_groups); 2306 + if (ret) 2327 2307 return ret; 2328 2308 2329 2309 cs43130->wq = create_singlethread_workqueue("cs43130_hp"); 2330 - if (!cs43130->wq) 2310 + if (!cs43130->wq) { 2311 + sysfs_remove_groups(&component->dev->kobj, hpload_groups); 2331 2312 return -ENOMEM; 2313 + } 2332 2314 INIT_WORK(&cs43130->work, cs43130_imp_meas); 2333 2315 } 2334 2316
+38 -11
sound/soc/codecs/rt5645.c
··· 3388 3388 { 3389 3389 struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component); 3390 3390 struct rt5645_priv *rt5645 = snd_soc_component_get_drvdata(component); 3391 + int ret = 0; 3391 3392 3392 3393 rt5645->component = component; 3393 3394 3394 3395 switch (rt5645->codec_type) { 3395 3396 case CODEC_TYPE_RT5645: 3396 - snd_soc_dapm_new_controls(dapm, 3397 + ret = snd_soc_dapm_new_controls(dapm, 3397 3398 rt5645_specific_dapm_widgets, 3398 3399 ARRAY_SIZE(rt5645_specific_dapm_widgets)); 3399 - snd_soc_dapm_add_routes(dapm, 3400 + if (ret < 0) 3401 + goto exit; 3402 + 3403 + ret = snd_soc_dapm_add_routes(dapm, 3400 3404 rt5645_specific_dapm_routes, 3401 3405 ARRAY_SIZE(rt5645_specific_dapm_routes)); 3406 + if (ret < 0) 3407 + goto exit; 3408 + 3402 3409 if (rt5645->v_id < 3) { 3403 - snd_soc_dapm_add_routes(dapm, 3410 + ret = snd_soc_dapm_add_routes(dapm, 3404 3411 rt5645_old_dapm_routes, 3405 3412 ARRAY_SIZE(rt5645_old_dapm_routes)); 3413 + if (ret < 0) 3414 + goto exit; 3406 3415 } 3407 3416 break; 3408 3417 case CODEC_TYPE_RT5650: 3409 - snd_soc_dapm_new_controls(dapm, 3418 + ret = snd_soc_dapm_new_controls(dapm, 3410 3419 rt5650_specific_dapm_widgets, 3411 3420 ARRAY_SIZE(rt5650_specific_dapm_widgets)); 3412 - snd_soc_dapm_add_routes(dapm, 3421 + if (ret < 0) 3422 + goto exit; 3423 + 3424 + ret = snd_soc_dapm_add_routes(dapm, 3413 3425 rt5650_specific_dapm_routes, 3414 3426 ARRAY_SIZE(rt5650_specific_dapm_routes)); 3427 + if (ret < 0) 3428 + goto exit; 3415 3429 break; 3416 3430 } 3417 3431 ··· 3433 3419 3434 3420 /* for JD function */ 3435 3421 if (rt5645->pdata.jd_mode) { 3436 - snd_soc_dapm_force_enable_pin(dapm, "JD Power"); 3437 - snd_soc_dapm_force_enable_pin(dapm, "LDO2"); 3438 - snd_soc_dapm_sync(dapm); 3422 + ret = snd_soc_dapm_force_enable_pin(dapm, "JD Power"); 3423 + if (ret < 0) 3424 + goto exit; 3425 + 3426 + ret = snd_soc_dapm_force_enable_pin(dapm, "LDO2"); 3427 + if (ret < 0) 3428 + goto exit; 3429 + 3430 + ret = snd_soc_dapm_sync(dapm); 3431 + if (ret < 0) 3432 + goto exit; 3439 3433 } 3440 3434 3441 3435 if (rt5645->pdata.long_name) ··· 3454 3432 GFP_KERNEL); 3455 3433 3456 3434 if (!rt5645->eq_param) 3457 - return -ENOMEM; 3458 - 3459 - return 0; 3435 + ret = -ENOMEM; 3436 + exit: 3437 + /* 3438 + * If there was an error above, everything will be cleaned up by the 3439 + * caller if we return an error here. This will be done with a later 3440 + * call to rt5645_remove(). 3441 + */ 3442 + return ret; 3460 3443 } 3461 3444 3462 3445 static void rt5645_remove(struct snd_soc_component *component)
+1
tools/arch/powerpc/include/uapi/asm/errno.h
··· 2 2 #ifndef _ASM_POWERPC_ERRNO_H 3 3 #define _ASM_POWERPC_ERRNO_H 4 4 5 + #undef EDEADLOCK 5 6 #include <asm-generic/errno.h> 6 7 7 8 #undef EDEADLOCK
+8 -1
tools/arch/x86/include/asm/cpufeatures.h
··· 84 84 85 85 /* CPU types for specific tunings: */ 86 86 #define X86_FEATURE_K8 ( 3*32+ 4) /* "" Opteron, Athlon64 */ 87 - #define X86_FEATURE_K7 ( 3*32+ 5) /* "" Athlon */ 87 + /* FREE, was #define X86_FEATURE_K7 ( 3*32+ 5) "" Athlon */ 88 88 #define X86_FEATURE_P3 ( 3*32+ 6) /* "" P3 */ 89 89 #define X86_FEATURE_P4 ( 3*32+ 7) /* "" P4 */ 90 90 #define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* TSC ticks at a constant rate */ ··· 236 236 #define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */ 237 237 #define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */ 238 238 #define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */ 239 + #define X86_FEATURE_PVUNLOCK ( 8*32+20) /* "" PV unlock function */ 240 + #define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* "" PV vcpu_is_preempted function */ 239 241 240 242 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */ 241 243 #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ ··· 292 290 #define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */ 293 291 #define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */ 294 292 #define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */ 293 + #define X86_FEATURE_SGX1 (11*32+ 8) /* "" Basic SGX */ 294 + #define X86_FEATURE_SGX2 (11*32+ 9) /* "" SGX Enclave Dynamic Memory Management (EDMM) */ 295 295 296 296 /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ 297 297 #define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */ ··· 340 336 #define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */ 341 337 #define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */ 342 338 #define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */ 339 + #define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* Virtual SPEC_CTRL */ 343 340 #define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* "" SVME addr check */ 344 341 345 342 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */ ··· 359 354 #define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */ 360 355 #define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */ 361 356 #define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */ 357 + #define X86_FEATURE_BUS_LOCK_DETECT (16*32+24) /* Bus Lock detect */ 362 358 #define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */ 363 359 #define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */ 364 360 #define X86_FEATURE_MOVDIR64B (16*32+28) /* MOVDIR64B instruction */ ··· 380 374 #define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */ 381 375 #define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */ 382 376 #define X86_FEATURE_SERIALIZE (18*32+14) /* SERIALIZE instruction */ 377 + #define X86_FEATURE_HYBRID_CPU (18*32+15) /* "" This part has CPUs of more than one type */ 383 378 #define X86_FEATURE_TSXLDTRK (18*32+16) /* TSX Suspend Load Address Tracking */ 384 379 #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */ 385 380 #define X86_FEATURE_ARCH_LBR (18*32+19) /* Intel ARCH LBR */
+7 -3
tools/arch/x86/include/asm/msr-index.h
··· 185 185 #define MSR_PEBS_DATA_CFG 0x000003f2 186 186 #define MSR_IA32_DS_AREA 0x00000600 187 187 #define MSR_IA32_PERF_CAPABILITIES 0x00000345 188 + #define PERF_CAP_METRICS_IDX 15 189 + #define PERF_CAP_PT_IDX 16 190 + 188 191 #define MSR_PEBS_LD_LAT_THRESHOLD 0x000003f6 189 192 190 193 #define MSR_IA32_RTIT_CTL 0x00000570 ··· 268 265 #define DEBUGCTLMSR_LBR (1UL << 0) /* last branch recording */ 269 266 #define DEBUGCTLMSR_BTF_SHIFT 1 270 267 #define DEBUGCTLMSR_BTF (1UL << 1) /* single-step on branches */ 268 + #define DEBUGCTLMSR_BUS_LOCK_DETECT (1UL << 2) 271 269 #define DEBUGCTLMSR_TR (1UL << 6) 272 270 #define DEBUGCTLMSR_BTS (1UL << 7) 273 271 #define DEBUGCTLMSR_BTINT (1UL << 8) ··· 537 533 /* K8 MSRs */ 538 534 #define MSR_K8_TOP_MEM1 0xc001001a 539 535 #define MSR_K8_TOP_MEM2 0xc001001d 540 - #define MSR_K8_SYSCFG 0xc0010010 541 - #define MSR_K8_SYSCFG_MEM_ENCRYPT_BIT 23 542 - #define MSR_K8_SYSCFG_MEM_ENCRYPT BIT_ULL(MSR_K8_SYSCFG_MEM_ENCRYPT_BIT) 536 + #define MSR_AMD64_SYSCFG 0xc0010010 537 + #define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23 538 + #define MSR_AMD64_SYSCFG_MEM_ENCRYPT BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT) 543 539 #define MSR_K8_INT_PENDING_MSG 0xc0010055 544 540 /* C1E active bits in int pending message */ 545 541 #define K8_INTP_C1E_ACTIVE_MASK 0x18000000
+1
tools/arch/x86/include/uapi/asm/vmx.h
··· 27 27 28 28 29 29 #define VMX_EXIT_REASONS_FAILED_VMENTRY 0x80000000 30 + #define VMX_EXIT_REASONS_SGX_ENCLAVE_MODE 0x08000000 30 31 31 32 #define EXIT_REASON_EXCEPTION_NMI 0 32 33 #define EXIT_REASON_EXTERNAL_INTERRUPT 1
+1 -1
tools/arch/x86/lib/memcpy_64.S
··· 4 4 #include <linux/linkage.h> 5 5 #include <asm/errno.h> 6 6 #include <asm/cpufeatures.h> 7 - #include <asm/alternative-asm.h> 7 + #include <asm/alternative.h> 8 8 #include <asm/export.h> 9 9 10 10 .pushsection .noinstr.text, "ax"
+1 -1
tools/arch/x86/lib/memset_64.S
··· 3 3 4 4 #include <linux/linkage.h> 5 5 #include <asm/cpufeatures.h> 6 - #include <asm/alternative-asm.h> 6 + #include <asm/alternative.h> 7 7 #include <asm/export.h> 8 8 9 9 /*
tools/include/asm/alternative-asm.h → tools/include/asm/alternative.h
+10 -1
tools/include/uapi/asm-generic/unistd.h
··· 863 863 __SC_COMP(__NR_epoll_pwait2, sys_epoll_pwait2, compat_sys_epoll_pwait2) 864 864 #define __NR_mount_setattr 442 865 865 __SYSCALL(__NR_mount_setattr, sys_mount_setattr) 866 + #define __NR_quotactl_path 443 867 + __SYSCALL(__NR_quotactl_path, sys_quotactl_path) 868 + 869 + #define __NR_landlock_create_ruleset 444 870 + __SYSCALL(__NR_landlock_create_ruleset, sys_landlock_create_ruleset) 871 + #define __NR_landlock_add_rule 445 872 + __SYSCALL(__NR_landlock_add_rule, sys_landlock_add_rule) 873 + #define __NR_landlock_restrict_self 446 874 + __SYSCALL(__NR_landlock_restrict_self, sys_landlock_restrict_self) 866 875 867 876 #undef __NR_syscalls 868 - #define __NR_syscalls 443 877 + #define __NR_syscalls 447 869 878 870 879 /* 871 880 * 32 bit systems traditionally used different
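The three landlock_* entries are the user-visible face of the new Landlock LSM. A minimal probe sketch (the fallback number 444 matches the asm-generic table above and the x86_64 table later in this series; LANDLOCK_CREATE_RULESET_VERSION is the flag <linux/landlock.h> uses for the version query):

	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_landlock_create_ruleset
	#define __NR_landlock_create_ruleset 444
	#endif
	#define LANDLOCK_CREATE_RULESET_VERSION (1U << 0)	/* from <linux/landlock.h> */

	int main(void)
	{
		/* NULL attr + size 0 + the version flag asks for the ABI level
		 * instead of creating a ruleset; ENOSYS means no Landlock. */
		long abi = syscall(__NR_landlock_create_ruleset, NULL, 0,
				   LANDLOCK_CREATE_RULESET_VERSION);

		if (abi < 0)
			perror("landlock_create_ruleset");
		else
			printf("Landlock ABI version: %ld\n", abi);
		return 0;
	}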
+121 -4
tools/include/uapi/drm/drm.h
··· 625 625 __u64 size; 626 626 }; 627 627 628 + /** 629 + * DRM_CAP_DUMB_BUFFER 630 + * 631 + * If set to 1, the driver supports creating dumb buffers via the 632 + * &DRM_IOCTL_MODE_CREATE_DUMB ioctl. 633 + */ 628 634 #define DRM_CAP_DUMB_BUFFER 0x1 635 + /** 636 + * DRM_CAP_VBLANK_HIGH_CRTC 637 + * 638 + * If set to 1, the kernel supports specifying a CRTC index in the high bits of 639 + * &drm_wait_vblank_request.type. 640 + * 641 + * Starting kernel version 2.6.39, this capability is always set to 1. 642 + */ 629 643 #define DRM_CAP_VBLANK_HIGH_CRTC 0x2 644 + /** 645 + * DRM_CAP_DUMB_PREFERRED_DEPTH 646 + * 647 + * The preferred bit depth for dumb buffers. 648 + * 649 + * The bit depth is the number of bits used to indicate the color of a single 650 + * pixel excluding any padding. This is different from the number of bits per 651 + * pixel. For instance, XRGB8888 has a bit depth of 24 but has 32 bits per 652 + * pixel. 653 + * 654 + * Note that this preference only applies to dumb buffers, it's irrelevant for 655 + * other types of buffers. 656 + */ 630 657 #define DRM_CAP_DUMB_PREFERRED_DEPTH 0x3 658 + /** 659 + * DRM_CAP_DUMB_PREFER_SHADOW 660 + * 661 + * If set to 1, the driver prefers userspace to render to a shadow buffer 662 + * instead of directly rendering to a dumb buffer. For best speed, userspace 663 + * should do streaming ordered memory copies into the dumb buffer and never 664 + * read from it. 665 + * 666 + * Note that this preference only applies to dumb buffers, it's irrelevant for 667 + * other types of buffers. 668 + */ 631 669 #define DRM_CAP_DUMB_PREFER_SHADOW 0x4 670 + /** 671 + * DRM_CAP_PRIME 672 + * 673 + * Bitfield of supported PRIME sharing capabilities. See &DRM_PRIME_CAP_IMPORT 674 + * and &DRM_PRIME_CAP_EXPORT. 675 + * 676 + * PRIME buffers are exposed as dma-buf file descriptors. See 677 + * Documentation/gpu/drm-mm.rst, section "PRIME Buffer Sharing". 678 + */ 632 679 #define DRM_CAP_PRIME 0x5 680 + /** 681 + * DRM_PRIME_CAP_IMPORT 682 + * 683 + * If this bit is set in &DRM_CAP_PRIME, the driver supports importing PRIME 684 + * buffers via the &DRM_IOCTL_PRIME_FD_TO_HANDLE ioctl. 685 + */ 633 686 #define DRM_PRIME_CAP_IMPORT 0x1 687 + /** 688 + * DRM_PRIME_CAP_EXPORT 689 + * 690 + * If this bit is set in &DRM_CAP_PRIME, the driver supports exporting PRIME 691 + * buffers via the &DRM_IOCTL_PRIME_HANDLE_TO_FD ioctl. 692 + */ 634 693 #define DRM_PRIME_CAP_EXPORT 0x2 694 + /** 695 + * DRM_CAP_TIMESTAMP_MONOTONIC 696 + * 697 + * If set to 0, the kernel will report timestamps with ``CLOCK_REALTIME`` in 698 + * struct drm_event_vblank. If set to 1, the kernel will report timestamps with 699 + * ``CLOCK_MONOTONIC``. See ``clock_gettime(2)`` for the definition of these 700 + * clocks. 701 + * 702 + * Starting from kernel version 2.6.39, the default value for this capability 703 + * is 1. Starting kernel version 4.15, this capability is always set to 1. 704 + */ 635 705 #define DRM_CAP_TIMESTAMP_MONOTONIC 0x6 706 + /** 707 + * DRM_CAP_ASYNC_PAGE_FLIP 708 + * 709 + * If set to 1, the driver supports &DRM_MODE_PAGE_FLIP_ASYNC. 710 + */ 636 711 #define DRM_CAP_ASYNC_PAGE_FLIP 0x7 637 - /* 638 - * The CURSOR_WIDTH and CURSOR_HEIGHT capabilities return a valid widthxheight 639 - * combination for the hardware cursor. The intention is that a hardware 640 - * agnostic userspace can query a cursor plane size to use. 
712 + /** 713 + * DRM_CAP_CURSOR_WIDTH 714 + * 715 + * The ``CURSOR_WIDTH`` and ``CURSOR_HEIGHT`` capabilities return a valid 716 + * width x height combination for the hardware cursor. The intention is that a 717 + * hardware agnostic userspace can query a cursor plane size to use. 641 718 * 642 719 * Note that the cross-driver contract is to merely return a valid size; 643 720 * drivers are free to attach another meaning on top, eg. i915 returns the 644 721 * maximum plane size. 645 722 */ 646 723 #define DRM_CAP_CURSOR_WIDTH 0x8 724 + /** 725 + * DRM_CAP_CURSOR_HEIGHT 726 + * 727 + * See &DRM_CAP_CURSOR_WIDTH. 728 + */ 647 729 #define DRM_CAP_CURSOR_HEIGHT 0x9 730 + /** 731 + * DRM_CAP_ADDFB2_MODIFIERS 732 + * 733 + * If set to 1, the driver supports supplying modifiers in the 734 + * &DRM_IOCTL_MODE_ADDFB2 ioctl. 735 + */ 648 736 #define DRM_CAP_ADDFB2_MODIFIERS 0x10 737 + /** 738 + * DRM_CAP_PAGE_FLIP_TARGET 739 + * 740 + * If set to 1, the driver supports the &DRM_MODE_PAGE_FLIP_TARGET_ABSOLUTE and 741 + * &DRM_MODE_PAGE_FLIP_TARGET_RELATIVE flags in 742 + * &drm_mode_crtc_page_flip_target.flags for the &DRM_IOCTL_MODE_PAGE_FLIP 743 + * ioctl. 744 + */ 649 745 #define DRM_CAP_PAGE_FLIP_TARGET 0x11 746 + /** 747 + * DRM_CAP_CRTC_IN_VBLANK_EVENT 748 + * 749 + * If set to 1, the kernel supports reporting the CRTC ID in 750 + * &drm_event_vblank.crtc_id for the &DRM_EVENT_VBLANK and 751 + * &DRM_EVENT_FLIP_COMPLETE events. 752 + * 753 + * Starting kernel version 4.12, this capability is always set to 1. 754 + */ 650 755 #define DRM_CAP_CRTC_IN_VBLANK_EVENT 0x12 756 + /** 757 + * DRM_CAP_SYNCOBJ 758 + * 759 + * If set to 1, the driver supports sync objects. See 760 + * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects". 761 + */ 651 762 #define DRM_CAP_SYNCOBJ 0x13 763 + /** 764 + * DRM_CAP_SYNCOBJ_TIMELINE 765 + * 766 + * If set to 1, the driver supports timeline operations on sync objects. See 767 + * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects". 768 + */ 652 769 #define DRM_CAP_SYNCOBJ_TIMELINE 0x14 653 770 654 771 /* DRM_IOCTL_GET_CAP ioctl argument type */
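All of the DRM_CAP_* values documented above are read through the same ioctl. A short sketch of the query (assumes kernel uapi or libdrm headers providing <drm/drm.h>; /dev/dri/card0 is just a typical node):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <drm/drm.h>

	int main(void)
	{
		struct drm_get_cap cap = { .capability = DRM_CAP_DUMB_BUFFER };
		int fd = open("/dev/dri/card0", O_RDWR);

		if (fd < 0)
			return 1;
		if (ioctl(fd, DRM_IOCTL_GET_CAP, &cap) == 0)
			printf("dumb buffers supported: %llu\n",
			       (unsigned long long)cap.value);
		close(fd);
		return 0;
	}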
+1
tools/include/uapi/drm/i915_drm.h
··· 943 943 __u64 offset; 944 944 }; 945 945 946 + /* DRM_IOCTL_I915_GEM_EXECBUFFER was removed in Linux 5.13 */ 946 947 struct drm_i915_gem_execbuffer { 947 948 /** 948 949 * List of buffers to be validated with their relocations to be
+45
tools/include/uapi/linux/kvm.h
··· 1078 1078 #define KVM_CAP_DIRTY_LOG_RING 192 1079 1079 #define KVM_CAP_X86_BUS_LOCK_EXIT 193 1080 1080 #define KVM_CAP_PPC_DAWR1 194 1081 + #define KVM_CAP_SET_GUEST_DEBUG2 195 1082 + #define KVM_CAP_SGX_ATTRIBUTE 196 1083 + #define KVM_CAP_VM_COPY_ENC_CONTEXT_FROM 197 1084 + #define KVM_CAP_PTP_KVM 198 1081 1085 1082 1086 #ifdef KVM_CAP_IRQ_ROUTING 1083 1087 ··· 1675 1671 KVM_SEV_CERT_EXPORT, 1676 1672 /* Attestation report */ 1677 1673 KVM_SEV_GET_ATTESTATION_REPORT, 1674 + /* Guest Migration Extension */ 1675 + KVM_SEV_SEND_CANCEL, 1678 1676 1679 1677 KVM_SEV_NR_MAX, 1680 1678 }; ··· 1733 1727 __u8 mnonce[16]; 1734 1728 __u64 uaddr; 1735 1729 __u32 len; 1730 + }; 1731 + 1732 + struct kvm_sev_send_start { 1733 + __u32 policy; 1734 + __u64 pdh_cert_uaddr; 1735 + __u32 pdh_cert_len; 1736 + __u64 plat_certs_uaddr; 1737 + __u32 plat_certs_len; 1738 + __u64 amd_certs_uaddr; 1739 + __u32 amd_certs_len; 1740 + __u64 session_uaddr; 1741 + __u32 session_len; 1742 + }; 1743 + 1744 + struct kvm_sev_send_update_data { 1745 + __u64 hdr_uaddr; 1746 + __u32 hdr_len; 1747 + __u64 guest_uaddr; 1748 + __u32 guest_len; 1749 + __u64 trans_uaddr; 1750 + __u32 trans_len; 1751 + }; 1752 + 1753 + struct kvm_sev_receive_start { 1754 + __u32 handle; 1755 + __u32 policy; 1756 + __u64 pdh_uaddr; 1757 + __u32 pdh_len; 1758 + __u64 session_uaddr; 1759 + __u32 session_len; 1760 + }; 1761 + 1762 + struct kvm_sev_receive_update_data { 1763 + __u64 hdr_uaddr; 1764 + __u32 hdr_len; 1765 + __u64 guest_uaddr; 1766 + __u32 guest_len; 1767 + __u64 trans_uaddr; 1768 + __u32 trans_len; 1736 1769 }; 1737 1770 1738 1771 #define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
+21 -5
tools/include/uapi/linux/perf_event.h
··· 127 127 PERF_COUNT_SW_EMULATION_FAULTS = 8, 128 128 PERF_COUNT_SW_DUMMY = 9, 129 129 PERF_COUNT_SW_BPF_OUTPUT = 10, 130 + PERF_COUNT_SW_CGROUP_SWITCHES = 11, 130 131 131 132 PERF_COUNT_SW_MAX, /* non-ABI */ 132 133 }; ··· 327 326 #define PERF_ATTR_SIZE_VER4 104 /* add: sample_regs_intr */ 328 327 #define PERF_ATTR_SIZE_VER5 112 /* add: aux_watermark */ 329 328 #define PERF_ATTR_SIZE_VER6 120 /* add: aux_sample_size */ 329 + #define PERF_ATTR_SIZE_VER7 128 /* add: sig_data */ 330 330 331 331 /* 332 332 * Hardware event_id to monitor via a performance monitoring event: ··· 406 404 cgroup : 1, /* include cgroup events */ 407 405 text_poke : 1, /* include text poke events */ 408 406 build_id : 1, /* use build id in mmap2 events */ 409 - __reserved_1 : 29; 407 + inherit_thread : 1, /* children only inherit if cloned with CLONE_THREAD */ 408 + remove_on_exec : 1, /* event is removed from task on exec */ 409 + sigtrap : 1, /* send synchronous SIGTRAP on event */ 410 + __reserved_1 : 26; 410 411 411 412 union { 412 413 __u32 wakeup_events; /* wakeup every n events */ ··· 461 456 __u16 __reserved_2; 462 457 __u32 aux_sample_size; 463 458 __u32 __reserved_3; 459 + 460 + /* 461 + * User provided data if sigtrap=1, passed back to user via 462 + * siginfo_t::si_perf, e.g. to permit user to identify the event. 463 + */ 464 + __u64 sig_data; 464 465 }; 465 466 466 467 /* ··· 1182 1171 /** 1183 1172 * PERF_RECORD_AUX::flags bits 1184 1173 */ 1185 - #define PERF_AUX_FLAG_TRUNCATED 0x01 /* record was truncated to fit */ 1186 - #define PERF_AUX_FLAG_OVERWRITE 0x02 /* snapshot from overwrite mode */ 1187 - #define PERF_AUX_FLAG_PARTIAL 0x04 /* record contains gaps */ 1188 - #define PERF_AUX_FLAG_COLLISION 0x08 /* sample collided with another */ 1174 + #define PERF_AUX_FLAG_TRUNCATED 0x01 /* record was truncated to fit */ 1175 + #define PERF_AUX_FLAG_OVERWRITE 0x02 /* snapshot from overwrite mode */ 1176 + #define PERF_AUX_FLAG_PARTIAL 0x04 /* record contains gaps */ 1177 + #define PERF_AUX_FLAG_COLLISION 0x08 /* sample collided with another */ 1178 + #define PERF_AUX_FLAG_PMU_FORMAT_TYPE_MASK 0xff00 /* PMU specific trace format type */ 1179 + 1180 + /* CoreSight PMU AUX buffer formats */ 1181 + #define PERF_AUX_FLAG_CORESIGHT_FORMAT_CORESIGHT 0x0000 /* Default for backward compatibility */ 1182 + #define PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW 0x0100 /* Raw format of the source */ 1189 1183 1190 1184 #define PERF_FLAG_FD_NO_GROUP (1UL << 0) 1191 1185 #define PERF_FLAG_FD_OUTPUT (1UL << 1)
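Together, the new sigtrap, remove_on_exec and sig_data fields let a task ask for a synchronous SIGTRAP when one of its own events overflows. A setup sketch under 5.13-era uapi headers (the kernel rejects sigtrap=1 without remove_on_exec=1):

	#include <linux/perf_event.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
				    int cpu, int group_fd, unsigned long flags)
	{
		return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
	}

	int open_sigtrap_event(void)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);	/* PERF_ATTR_SIZE_VER7 layout */
		attr.type = PERF_TYPE_HARDWARE;
		attr.config = PERF_COUNT_HW_INSTRUCTIONS;
		attr.sample_period = 1000000;
		attr.sigtrap = 1;		/* SIGTRAP on overflow */
		attr.remove_on_exec = 1;	/* required alongside sigtrap */
		attr.sig_data = 0xdeadbeef;	/* comes back as si_perf */

		/* Measure the calling thread on any CPU. */
		return perf_event_open(&attr, 0, -1, -1, 0);
	}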
+4
tools/include/uapi/linux/prctl.h
··· 255 255 # define SYSCALL_DISPATCH_FILTER_ALLOW 0 256 256 # define SYSCALL_DISPATCH_FILTER_BLOCK 1 257 257 258 + /* Set/get enabled arm64 pointer authentication keys */ 259 + #define PR_PAC_SET_ENABLED_KEYS 60 260 + #define PR_PAC_GET_ENABLED_KEYS 61 261 + 258 262 #endif /* _LINUX_PRCTL_H */
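PR_PAC_SET_ENABLED_KEYS takes a mask of keys to affect in arg2 and their new on/off state in arg3; PR_PAC_GET_ENABLED_KEYS returns the current mask. A sketch (arm64 only, so non-arm64 kernels return EINVAL; the PR_PAC_API*KEY constants were introduced earlier alongside PR_PAC_RESET_KEYS):

	#include <stdio.h>
	#include <sys/prctl.h>

	#ifndef PR_PAC_SET_ENABLED_KEYS
	#define PR_PAC_SET_ENABLED_KEYS 60
	#define PR_PAC_GET_ENABLED_KEYS 61
	#endif

	int main(void)
	{
		/* Disable the IB pointer-authentication key for this task:
		 * arg2 selects the keys to change, arg3 gives their new state. */
		if (prctl(PR_PAC_SET_ENABLED_KEYS, PR_PAC_APIBKEY, 0, 0, 0))
			perror("PR_PAC_SET_ENABLED_KEYS");

		printf("enabled keys: %d\n",
		       prctl(PR_PAC_GET_ENABLED_KEYS, 0, 0, 0, 0));
		return 0;
	}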
+1 -1
tools/kvm/kvm_stat/kvm_stat.txt
··· 111 111 --tracepoints:: 112 112 retrieve statistics from tracepoints 113 113 114 - *z*:: 114 + -z:: 115 115 --skip-zero-records:: 116 116 omit records with all zeros in logging mode 117 117
+2 -1
tools/objtool/arch/x86/decode.c
··· 19 19 #include <objtool/elf.h> 20 20 #include <objtool/arch.h> 21 21 #include <objtool/warn.h> 22 + #include <objtool/endianness.h> 22 23 #include <arch/elf.h> 23 24 24 25 static int is_x86_64(const struct elf *elf) ··· 726 725 return -1; 727 726 } 728 727 729 - alt->cpuid = cpuid; 728 + alt->cpuid = bswap_if_needed(cpuid); 730 729 alt->instrlen = orig_len; 731 730 alt->replacementlen = repl_len; 732 731
+1
tools/objtool/elf.c
··· 762 762 data->d_buf = &sym->sym; 763 763 data->d_size = sizeof(sym->sym); 764 764 data->d_align = 1; 765 + data->d_type = ELF_T_SYM; 765 766 766 767 sym->idx = symtab->len / sizeof(sym->sym); 767 768
+1
tools/perf/Makefile.config
··· 540 540 ifdef LIBBPF_DYNAMIC 541 541 ifeq ($(feature-libbpf), 1) 542 542 EXTLIBS += -lbpf 543 + $(call detected,CONFIG_LIBBPF_DYNAMIC) 543 544 else 544 545 dummy := $(error Error: No libbpf devel library found, please install libbpf-devel); 545 546 endif
+1 -1
tools/perf/arch/arm64/util/kvm-stat.c
··· 71 71 .name = "vmexit", 72 72 .ops = &exit_events, 73 73 }, 74 - { NULL }, 74 + { NULL, NULL }, 75 75 }; 76 76 77 77 const char * const kvm_skip_events[] = {
+5
tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
··· 356 356 439 n64 faccessat2 sys_faccessat2 357 357 440 n64 process_madvise sys_process_madvise 358 358 441 n64 epoll_pwait2 sys_epoll_pwait2 359 + 442 n64 mount_setattr sys_mount_setattr 360 + 443 n64 quotactl_path sys_quotactl_path 361 + 444 n64 landlock_create_ruleset sys_landlock_create_ruleset 362 + 445 n64 landlock_add_rule sys_landlock_add_rule 363 + 446 n64 landlock_restrict_self sys_landlock_restrict_self
+4
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 522 522 440 common process_madvise sys_process_madvise 523 523 441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 524 524 442 common mount_setattr sys_mount_setattr 525 + 443 common quotactl_path sys_quotactl_path 526 + 444 common landlock_create_ruleset sys_landlock_create_ruleset 527 + 445 common landlock_add_rule sys_landlock_add_rule 528 + 446 common landlock_restrict_self sys_landlock_restrict_self
+4
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 445 445 440 common process_madvise sys_process_madvise sys_process_madvise 446 446 441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2 447 447 442 common mount_setattr sys_mount_setattr sys_mount_setattr 448 + 443 common quotactl_path sys_quotactl_path sys_quotactl_path 449 + 444 common landlock_create_ruleset sys_landlock_create_ruleset sys_landlock_create_ruleset 450 + 445 common landlock_add_rule sys_landlock_add_rule sys_landlock_add_rule 451 + 446 common landlock_restrict_self sys_landlock_restrict_self sys_landlock_restrict_self
+4
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 364 364 440 common process_madvise sys_process_madvise 365 365 441 common epoll_pwait2 sys_epoll_pwait2 366 366 442 common mount_setattr sys_mount_setattr 367 + 443 common quotactl_path sys_quotactl_path 368 + 444 common landlock_create_ruleset sys_landlock_create_ruleset 369 + 445 common landlock_add_rule sys_landlock_add_rule 370 + 446 common landlock_restrict_self sys_landlock_restrict_self 367 371 368 372 # 369 373 # Due to a historical design error, certain syscalls are numbered differently
+4 -2
tools/perf/pmu-events/jevents.c
··· 1123 1123 mapfile = strdup(fpath); 1124 1124 return 0; 1125 1125 } 1126 - 1127 - pr_info("%s: Ignoring file %s\n", prog, fpath); 1126 + if (is_json_file(bname)) 1127 + pr_debug("%s: ArchStd json is preprocessed %s\n", prog, fpath); 1128 + else 1129 + pr_info("%s: Ignoring file %s\n", prog, fpath); 1128 1130 return 0; 1129 1131 } 1130 1132
+1 -1
tools/perf/tests/attr/base-record
··· 5 5 flags=0|8 6 6 cpu=* 7 7 type=0|1 8 - size=120 8 + size=128 9 9 config=0 10 10 sample_period=* 11 11 sample_type=263
+1 -1
tools/perf/tests/attr/base-stat
··· 5 5 flags=0|8 6 6 cpu=* 7 7 type=0 8 - size=120 8 + size=128 9 9 config=0 10 10 sample_period=0 11 11 sample_type=65536
+1 -1
tools/perf/tests/attr/system-wide-dummy
··· 7 7 pid=-1 8 8 flags=8 9 9 type=1 10 - size=120 10 + size=128 11 11 config=9 12 12 sample_period=4000 13 13 sample_type=455
+7
tools/perf/util/Build
··· 145 145 perf-$(CONFIG_LIBELF) += probe-file.o 146 146 perf-$(CONFIG_LIBELF) += probe-event.o 147 147 148 + ifdef CONFIG_LIBBPF_DYNAMIC 149 + hashmap := 1 150 + endif 148 151 ifndef CONFIG_LIBBPF 152 + hashmap := 1 153 + endif 154 + 155 + ifdef hashmap 149 156 perf-y += hashmap.o 150 157 endif 151 158
+7 -1
tools/perf/util/record.c
··· 157 157 static int record_opts__config_freq(struct record_opts *opts) 158 158 { 159 159 bool user_freq = opts->user_freq != UINT_MAX; 160 + bool user_interval = opts->user_interval != ULLONG_MAX; 160 161 unsigned int max_rate; 161 162 162 - if (opts->user_interval != ULLONG_MAX) 163 + if (user_interval && user_freq) { 164 + pr_err("cannot set frequency and period at the same time\n"); 165 + return -1; 166 + } 167 + 168 + if (user_interval) 163 169 opts->default_interval = opts->user_interval; 164 170 if (user_freq) 165 171 opts->freq = opts->user_freq;
+2 -2
tools/perf/util/session.c
··· 904 904 struct perf_record_record_cpu_map *mask; 905 905 unsigned i; 906 906 907 - data->type = bswap_64(data->type); 907 + data->type = bswap_16(data->type); 908 908 909 909 switch (data->type) { 910 910 case PERF_CPU_MAP__CPUS: ··· 937 937 { 938 938 u64 size; 939 939 940 - size = event->stat_config.nr * sizeof(event->stat_config.data[0]); 940 + size = bswap_64(event->stat_config.nr) * sizeof(event->stat_config.data[0]); 941 941 size += 1; /* nr item itself */ 942 942 mem_bswap_64(&event->stat_config.nr, size); 943 943 }
+1 -1
tools/testing/nvdimm/test/iomap.c
··· 62 62 } 63 63 EXPORT_SYMBOL(get_nfit_res); 64 64 65 - void __iomem *__nfit_test_ioremap(resource_size_t offset, unsigned long size, 65 + static void __iomem *__nfit_test_ioremap(resource_size_t offset, unsigned long size, 66 66 void __iomem *(*fallback_fn)(resource_size_t, unsigned long)) 67 67 { 68 68 struct nfit_test_resource *nfit_res = get_nfit_res(offset);
+25 -17
tools/testing/nvdimm/test/nfit.c
··· 1871 1871 } 1872 1872 } 1873 1873 1874 + static size_t sizeof_spa(struct acpi_nfit_system_address *spa) 1875 + { 1876 + /* until spa location cookie support is added... */ 1877 + return sizeof(*spa) - 8; 1878 + } 1879 + 1874 1880 static int nfit_test0_alloc(struct nfit_test *t) 1875 1881 { 1876 - size_t nfit_size = sizeof(struct acpi_nfit_system_address) * NUM_SPA 1882 + struct acpi_nfit_system_address *spa = NULL; 1883 + size_t nfit_size = sizeof_spa(spa) * NUM_SPA 1877 1884 + sizeof(struct acpi_nfit_memory_map) * NUM_MEM 1878 1885 + sizeof(struct acpi_nfit_control_region) * NUM_DCR 1879 1886 + offsetof(struct acpi_nfit_control_region, ··· 1944 1937 1945 1938 static int nfit_test1_alloc(struct nfit_test *t) 1946 1939 { 1947 - size_t nfit_size = sizeof(struct acpi_nfit_system_address) * 2 1940 + struct acpi_nfit_system_address *spa = NULL; 1941 + size_t nfit_size = sizeof_spa(spa) * 2 1948 1942 + sizeof(struct acpi_nfit_memory_map) * 2 1949 1943 + offsetof(struct acpi_nfit_control_region, window_size) * 2; 1950 1944 int i; ··· 2008 2000 */ 2009 2001 spa = nfit_buf; 2010 2002 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2011 - spa->header.length = sizeof(*spa); 2003 + spa->header.length = sizeof_spa(spa); 2012 2004 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_PM), 16); 2013 2005 spa->range_index = 0+1; 2014 2006 spa->address = t->spa_set_dma[0]; ··· 2022 2014 */ 2023 2015 spa = nfit_buf + offset; 2024 2016 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2025 - spa->header.length = sizeof(*spa); 2017 + spa->header.length = sizeof_spa(spa); 2026 2018 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_PM), 16); 2027 2019 spa->range_index = 1+1; 2028 2020 spa->address = t->spa_set_dma[1]; ··· 2032 2024 /* spa2 (dcr0) dimm0 */ 2033 2025 spa = nfit_buf + offset; 2034 2026 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2035 - spa->header.length = sizeof(*spa); 2027 + spa->header.length = sizeof_spa(spa); 2036 2028 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_DCR), 16); 2037 2029 spa->range_index = 2+1; 2038 2030 spa->address = t->dcr_dma[0]; ··· 2042 2034 /* spa3 (dcr1) dimm1 */ 2043 2035 spa = nfit_buf + offset; 2044 2036 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2045 - spa->header.length = sizeof(*spa); 2037 + spa->header.length = sizeof_spa(spa); 2046 2038 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_DCR), 16); 2047 2039 spa->range_index = 3+1; 2048 2040 spa->address = t->dcr_dma[1]; ··· 2052 2044 /* spa4 (dcr2) dimm2 */ 2053 2045 spa = nfit_buf + offset; 2054 2046 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2055 - spa->header.length = sizeof(*spa); 2047 + spa->header.length = sizeof_spa(spa); 2056 2048 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_DCR), 16); 2057 2049 spa->range_index = 4+1; 2058 2050 spa->address = t->dcr_dma[2]; ··· 2062 2054 /* spa5 (dcr3) dimm3 */ 2063 2055 spa = nfit_buf + offset; 2064 2056 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2065 - spa->header.length = sizeof(*spa); 2057 + spa->header.length = sizeof_spa(spa); 2066 2058 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_DCR), 16); 2067 2059 spa->range_index = 5+1; 2068 2060 spa->address = t->dcr_dma[3]; ··· 2072 2064 /* spa6 (bdw for dcr0) dimm0 */ 2073 2065 spa = nfit_buf + offset; 2074 2066 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2075 - spa->header.length = sizeof(*spa); 2067 + spa->header.length = sizeof_spa(spa); 2076 2068 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_BDW), 16); 2077 2069 spa->range_index = 6+1; 2078 2070 spa->address = t->dimm_dma[0]; ··· 2082 2074 /* 
spa7 (bdw for dcr1) dimm1 */ 2083 2075 spa = nfit_buf + offset; 2084 2076 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2085 - spa->header.length = sizeof(*spa); 2077 + spa->header.length = sizeof_spa(spa); 2086 2078 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_BDW), 16); 2087 2079 spa->range_index = 7+1; 2088 2080 spa->address = t->dimm_dma[1]; ··· 2092 2084 /* spa8 (bdw for dcr2) dimm2 */ 2093 2085 spa = nfit_buf + offset; 2094 2086 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2095 - spa->header.length = sizeof(*spa); 2087 + spa->header.length = sizeof_spa(spa); 2096 2088 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_BDW), 16); 2097 2089 spa->range_index = 8+1; 2098 2090 spa->address = t->dimm_dma[2]; ··· 2102 2094 /* spa9 (bdw for dcr3) dimm3 */ 2103 2095 spa = nfit_buf + offset; 2104 2096 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2105 - spa->header.length = sizeof(*spa); 2097 + spa->header.length = sizeof_spa(spa); 2106 2098 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_BDW), 16); 2107 2099 spa->range_index = 9+1; 2108 2100 spa->address = t->dimm_dma[3]; ··· 2589 2581 /* spa10 (dcr4) dimm4 */ 2590 2582 spa = nfit_buf + offset; 2591 2583 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2592 - spa->header.length = sizeof(*spa); 2584 + spa->header.length = sizeof_spa(spa); 2593 2585 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_DCR), 16); 2594 2586 spa->range_index = 10+1; 2595 2587 spa->address = t->dcr_dma[4]; ··· 2603 2595 */ 2604 2596 spa = nfit_buf + offset; 2605 2597 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2606 - spa->header.length = sizeof(*spa); 2598 + spa->header.length = sizeof_spa(spa); 2607 2599 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_PM), 16); 2608 2600 spa->range_index = 11+1; 2609 2601 spa->address = t->spa_set_dma[2]; ··· 2613 2605 /* spa12 (bdw for dcr4) dimm4 */ 2614 2606 spa = nfit_buf + offset; 2615 2607 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2616 - spa->header.length = sizeof(*spa); 2608 + spa->header.length = sizeof_spa(spa); 2617 2609 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_BDW), 16); 2618 2610 spa->range_index = 12+1; 2619 2611 spa->address = t->dimm_dma[4]; ··· 2747 2739 /* spa0 (flat range with no bdw aliasing) */ 2748 2740 spa = nfit_buf + offset; 2749 2741 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2750 - spa->header.length = sizeof(*spa); 2742 + spa->header.length = sizeof_spa(spa); 2751 2743 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_PM), 16); 2752 2744 spa->range_index = 0+1; 2753 2745 spa->address = t->spa_set_dma[0]; ··· 2757 2749 /* virtual cd region */ 2758 2750 spa = nfit_buf + offset; 2759 2751 spa->header.type = ACPI_NFIT_TYPE_SYSTEM_ADDRESS; 2760 - spa->header.length = sizeof(*spa); 2752 + spa->header.length = sizeof_spa(spa); 2761 2753 memcpy(spa->range_guid, to_nfit_uuid(NFIT_SPA_VCD), 16); 2762 2754 spa->range_index = 0; 2763 2755 spa->address = t->spa_set_dma[1];
+1
tools/testing/selftests/arm64/bti/test.c
··· 6 6 7 7 #include "system.h" 8 8 9 + #include <stddef.h> 9 10 #include <linux/errno.h> 10 11 #include <linux/auxvec.h> 11 12 #include <linux/signal.h>
+2 -2
tools/testing/selftests/kvm/lib/x86_64/handlers.S
··· 54 54 .align 8 55 55 56 56 /* Fetch current address and append it to idt_handlers. */ 57 - current_handler = . 57 + 666 : 58 58 .pushsection .rodata 59 - .quad current_handler 59 + .quad 666b 60 60 .popsection 61 61 62 62 .if ! \has_error
+71 -19
tools/testing/selftests/kvm/x86_64/evmcs_test.c
··· 18 18 #include "vmx.h" 19 19 20 20 #define VCPU_ID 5 21 + #define NMI_VECTOR 2 22 + 23 + static int ud_count; 24 + 25 + void enable_x2apic(void) 26 + { 27 + uint32_t spiv_reg = APIC_BASE_MSR + (APIC_SPIV >> 4); 28 + 29 + wrmsr(MSR_IA32_APICBASE, rdmsr(MSR_IA32_APICBASE) | 30 + MSR_IA32_APICBASE_ENABLE | MSR_IA32_APICBASE_EXTD); 31 + wrmsr(spiv_reg, rdmsr(spiv_reg) | APIC_SPIV_APIC_ENABLED); 32 + } 33 + 34 + static void guest_ud_handler(struct ex_regs *regs) 35 + { 36 + ud_count++; 37 + regs->rip += 3; /* VMLAUNCH */ 38 + } 39 + 40 + static void guest_nmi_handler(struct ex_regs *regs) 41 + { 42 + } 21 43 22 44 void l2_guest_code(void) 23 45 { ··· 47 25 48 26 GUEST_SYNC(8); 49 27 28 + /* Forced exit to L1 upon restore */ 29 + GUEST_SYNC(9); 30 + 50 31 /* Done, exit to L1 and never come back. */ 51 32 vmcall(); 52 33 } 53 34 54 - void l1_guest_code(struct vmx_pages *vmx_pages) 35 + void guest_code(struct vmx_pages *vmx_pages) 55 36 { 56 37 #define L2_GUEST_STACK_SIZE 64 57 38 unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; 39 + 40 + enable_x2apic(); 41 + 42 + GUEST_SYNC(1); 43 + GUEST_SYNC(2); 58 44 59 45 enable_vp_assist(vmx_pages->vp_assist_gpa, vmx_pages->vp_assist); 60 46 ··· 85 55 current_evmcs->revision_id = EVMCS_VERSION; 86 56 GUEST_SYNC(6); 87 57 58 + current_evmcs->pin_based_vm_exec_control |= 59 + PIN_BASED_NMI_EXITING; 88 60 GUEST_ASSERT(!vmlaunch()); 89 61 GUEST_ASSERT(vmptrstz() == vmx_pages->enlightened_vmcs_gpa); 90 - GUEST_SYNC(9); 62 + 63 + /* 64 + * NMI forces L2->L1 exit; on resuming L2 we hope that EVMCS is 65 + * up-to-date (RIP points where it should and not at the beginning 66 + * of l2_guest_code()). GUEST_SYNC(9) checks that. 67 + */ 91 68 GUEST_ASSERT(!vmresume()); 92 - GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL); 69 + 93 70 GUEST_SYNC(10); 94 - } 95 71 96 - void guest_code(struct vmx_pages *vmx_pages) 97 - { 98 - GUEST_SYNC(1); 99 - GUEST_SYNC(2); 100 - 101 - if (vmx_pages) 102 - l1_guest_code(vmx_pages); 103 - 104 - GUEST_DONE(); 72 + GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_VMCALL); 73 + GUEST_SYNC(11); 105 74 106 75 /* Try enlightened vmptrld with an incorrect GPA */ 107 76 evmcs_vmptrld(0xdeadbeef, vmx_pages->enlightened_vmcs); 108 77 GUEST_ASSERT(vmlaunch()); 78 + GUEST_ASSERT(ud_count == 1); 79 + GUEST_DONE(); 80 + } 81 + 82 + void inject_nmi(struct kvm_vm *vm) 83 + { 84 + struct kvm_vcpu_events events; 85 + 86 + vcpu_events_get(vm, VCPU_ID, &events); 87 + 88 + events.nmi.pending = 1; 89 + events.flags |= KVM_VCPUEVENT_VALID_NMI_PENDING; 90 + 91 + vcpu_events_set(vm, VCPU_ID, &events); 109 92 } 110 93 111 94 int main(int argc, char *argv[]) ··· 152 109 vcpu_alloc_vmx(vm, &vmx_pages_gva); 153 110 vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva); 154 111 112 + vm_init_descriptor_tables(vm); 113 + vcpu_init_descriptor_tables(vm, VCPU_ID); 114 + vm_handle_exception(vm, UD_VECTOR, guest_ud_handler); 115 + vm_handle_exception(vm, NMI_VECTOR, guest_nmi_handler); 116 + 117 + pr_info("Running L1 which uses EVMCS to run L2\n"); 118 + 155 119 for (stage = 1;; stage++) { 156 120 _vcpu_run(vm, VCPU_ID); 157 121 TEST_ASSERT(run->exit_reason == KVM_EXIT_IO, ··· 174 124 case UCALL_SYNC: 175 125 break; 176 126 case UCALL_DONE: 177 - goto part1_done; 127 + goto done; 178 128 default: 179 129 TEST_FAIL("Unknown ucall %lu", uc.cmd); 180 130 } ··· 204 154 TEST_ASSERT(!memcmp(&regs1, &regs2, sizeof(regs2)), 205 155 "Unexpected register values after vcpu_load_state; rdi: %lx rsi: %lx", 206 156 (ulong) regs2.rdi, (ulong) regs2.rsi); 157 + 158 + /* Force 
immediate L2->L1 exit before resuming */ 159 + if (stage == 8) { 160 + pr_info("Injecting NMI into L1 before L2 had a chance to run after restore\n"); 161 + inject_nmi(vm); 162 + } 207 163 } 208 164 209 - part1_done: 210 - _vcpu_run(vm, VCPU_ID); 211 - TEST_ASSERT(run->exit_reason == KVM_EXIT_SHUTDOWN, 212 - "Unexpected successful VMEnter with invalid eVMCS pointer!"); 213 - 165 + done: 214 166 kvm_vm_free(vm); 215 167 }
+4 -3
virt/kvm/kvm_main.c
··· 2893 2893 if (val < grow_start) 2894 2894 val = grow_start; 2895 2895 2896 - if (val > halt_poll_ns) 2897 - val = halt_poll_ns; 2896 + if (val > vcpu->kvm->max_halt_poll_ns) 2897 + val = vcpu->kvm->max_halt_poll_ns; 2898 2898 2899 2899 vcpu->halt_poll_ns = val; 2900 2900 out: ··· 2973 2973 goto out; 2974 2974 } 2975 2975 poll_end = cur = ktime_get(); 2976 - } while (single_task_running() && ktime_before(cur, stop)); 2976 + } while (single_task_running() && !need_resched() && 2977 + ktime_before(cur, stop)); 2977 2978 } 2978 2979 2979 2980 prepare_to_rcuwait(&vcpu->wait);