Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
"Updates to the usual drivers (scsi_debug, ufs, lpfc, st, fnic, mpi3mr,
mpt3sas) and the removal of cxlflash.

The only non-trivial core change is an addition to unit attention
handling to recognize UAs for power on/reset and new media so the tape
driver can use it"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (107 commits)
scsi: st: Tighten the page format heuristics with MODE SELECT
scsi: st: ERASE does not change tape location
scsi: st: Fix array overflow in st_setup()
scsi: target: tcm_loop: Fix wrong abort tag
scsi: lpfc: Restore clearing of NLP_UNREG_INP in ndlp->nlp_flag
scsi: hisi_sas: Fixed failure to issue vendor specific commands
scsi: fnic: Remove unnecessary NUL-terminations
scsi: fnic: Remove redundant flush_workqueue() calls
scsi: core: Use a switch statement when attaching VPD pages
scsi: ufs: renesas: Add initialization code for R-Car S4-8 ES1.2
scsi: ufs: renesas: Add reusable functions
scsi: ufs: renesas: Refactor 0x10ad/0x10af PHY settings
scsi: ufs: renesas: Remove register control helper function
scsi: ufs: renesas: Add register read to remove save/set/restore
scsi: ufs: renesas: Replace init data by init code
scsi: ufs: dt-bindings: renesas,ufs: Add calibration data
scsi: mpi3mr: Task Abort EH Support
scsi: storvsc: Don't report the host packet status as the hv status
scsi: isci: Make most module parameters static
scsi: megaraid_sas: Make most module parameters static
...

+3514 -12683
+45
Documentation/ABI/testing/sysfs-driver-ufs
···
 		Symbol - HCMID. This file shows the UFSHCD manufacturer id.
 		The Manufacturer ID is defined by JEDEC in JEDEC-JEP106.
 		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/critical_health
+What:		/sys/bus/platform/devices/*.ufs/critical_health
+Date:		February 2025
+Contact:	Avri Altman <avri.altman@wdc.com>
+Description:	Report the number of times a critical health event has been
+		reported by a UFS device. Further insight into the specific
+		issue can be gained by reading one of: bPreEOLInfo,
+		bDeviceLifeTimeEstA, bDeviceLifeTimeEstB,
+		bWriteBoosterBufferLifeTimeEst, and bRPMBLifeTimeEst.
+
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/clkscale_enable
+What:		/sys/bus/platform/devices/*.ufs/clkscale_enable
+Date:		January 2025
+Contact:	Ziqi Chen <quic_ziqichen@quicinc.com>
+Description:
+		This attribute shows whether UFS clock scaling is enabled.
+		It can also be used to enable/disable clock scaling by
+		writing 1 or 0 to this attribute.
+
+		The attribute is read/write.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/clkgate_enable
+What:		/sys/bus/platform/devices/*.ufs/clkgate_enable
+Date:		January 2025
+Contact:	Ziqi Chen <quic_ziqichen@quicinc.com>
+Description:
+		This attribute shows whether UFS clock gating is enabled.
+		It can also be used to enable/disable clock gating by
+		writing 1 or 0 to this attribute.
+
+		The attribute is read/write.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/clkgate_delay_ms
+What:		/sys/bus/platform/devices/*.ufs/clkgate_delay_ms
+Date:		January 2025
+Contact:	Ziqi Chen <quic_ziqichen@quicinc.com>
+Description:
+		This attribute shows and sets the number of milliseconds of
+		idle time before the UFS driver starts to perform clock
+		gating. This can prevent the UFS from frequently performing
+		clock gating/ungating.
+
+		The attribute is read/write.
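These UFS attributes follow the usual single-value sysfs convention: read the file for the current setting, write `0`/`1` (or a millisecond count for `clkgate_delay_ms`) to change it. A minimal sketch of that pattern — the helper names are mine, and the demo runs against a temporary directory standing in for a real `/sys/bus/platform/devices/*.ufs` node rather than actual sysfs:

```python
from pathlib import Path
import tempfile

def read_attr(device_dir, name):
    """Read a single-value sysfs attribute, stripping the trailing newline."""
    return (Path(device_dir) / name).read_text().strip()

def write_attr(device_dir, name, value):
    """Write a value to a sysfs attribute; the kernel expects plain text."""
    (Path(device_dir) / name).write_text(f"{value}\n")

# Demo against a temp dir; on real hardware device_dir would be a path
# such as /sys/bus/platform/devices/<addr>.ufs (illustrative only).
with tempfile.TemporaryDirectory() as device_dir:
    write_attr(device_dir, "clkgate_delay_ms", 50)  # gate clocks after 50 ms idle
    write_attr(device_dir, "clkscale_enable", 0)    # disable clock scaling
    print(read_attr(device_dir, "clkgate_delay_ms"))  # -> 50
```

On real hardware the writes require root, and `critical_health` is read-only per the ABI text above.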
-433
Documentation/arch/powerpc/cxlflash.rst
···
- ================================
- Coherent Accelerator (CXL) Flash
- ================================
-
- Introduction
- ============
-
- The IBM Power architecture provides support for CAPI (Coherent
- Accelerator Power Interface), which is available to certain PCIe slots
- on Power 8 systems. CAPI can be thought of as a special tunneling
- protocol through PCIe that allows PCIe adapters to look like special
- purpose co-processors which can read or write an application's
- memory and generate page faults. As a result, the host interface to
- an adapter running in CAPI mode does not require the data buffers to
- be mapped to the device's memory (IOMMU bypass) nor does it require
- memory to be pinned.
-
- On Linux, Coherent Accelerator (CXL) kernel services present CAPI
- devices as a PCI device by implementing a virtual PCI host bridge.
- This abstraction simplifies the infrastructure and programming
- model, allowing drivers to look similar to other native PCI
- device drivers.
-
- CXL provides a mechanism by which user space applications can
- talk directly to a device (network or storage), bypassing the typical
- kernel/device driver stack. The CXL Flash Adapter Driver enables a
- user space application direct access to Flash storage.
-
- The CXL Flash Adapter Driver is a kernel module that sits in the
- SCSI stack as a low level device driver (below the SCSI disk and
- protocol drivers) for the IBM CXL Flash Adapter. This driver is
- responsible for the initialization of the adapter, setting up the
- special path for user space access, and performing error recovery. It
- communicates directly with the Flash Accelerator Functional Unit (AFU)
- as described in Documentation/arch/powerpc/cxl.rst.
-
- The cxlflash driver supports two mutually exclusive modes of
- operation at the device (LUN) level:
-
- - Any flash device (LUN) can be configured to be accessed as a
-   regular disk device (i.e.: /dev/sdc). This is the default mode.
-
- - Any flash device (LUN) can be configured to be accessed from
-   user space with a special block library. This mode further
-   specifies the means of accessing the device and provides for
-   either raw access to the entire LUN (referred to as direct
-   or physical LUN access) or access to a kernel/AFU-mediated
-   partition of the LUN (referred to as virtual LUN access). The
-   segmentation of a disk device into virtual LUNs is assisted
-   by special translation services provided by the Flash AFU.
-
- Overview
- ========
-
- The Coherent Accelerator Interface Architecture (CAIA) introduces
- the concept of a master context. A master typically has special
- privileges granted to it by the kernel or hypervisor, allowing it to
- perform AFU-wide management and control. The master may or may not be
- involved directly in each user I/O, but at a minimum is involved in the
- initial setup before the user application is allowed to send requests
- directly to the AFU.
-
- The CXL Flash Adapter Driver establishes a master context with the
- AFU. It uses memory mapped I/O (MMIO) for this control and setup. The
- Adapter Problem Space Memory Map looks like this::
-
-      +-------------------------------+
-      |    512 * 64 KB User MMIO      |
-      |        (per context)          |
-      |       User Accessible         |
-      +-------------------------------+
-      |    512 * 128 B per context    |
-      |    Provisioning and Control   |
-      |   Trusted Process accessible  |
-      +-------------------------------+
-      |         64 KB Global          |
-      |   Trusted Process accessible  |
-      +-------------------------------+
-
- This driver configures itself into the SCSI software stack as an
- adapter driver. The driver is the only entity that is considered a
- Trusted Process to program the Provisioning and Control and Global
- areas in the MMIO Space shown above. The master context driver
- discovers all LUNs attached to the CXL Flash adapter and instantiates
- SCSI block devices (/dev/sdb, /dev/sdc, etc.) for each unique LUN
- seen from each path.
-
- Once these SCSI block devices are instantiated, an application
- written to a specification provided by the block library may get
- access to the Flash from user space (without requiring a system call).
-
- This master context driver also provides a series of ioctls for this
- block library to enable this user space access. The driver supports
- two modes for accessing the block device.
-
- The first mode is called virtual mode. In this mode a single SCSI
- block device (/dev/sdb) may be carved up into any number of distinct
- virtual LUNs. The virtual LUNs may be resized as long as the sum of
- the sizes of all the virtual LUNs, along with the associated
- meta-data, does not exceed the physical capacity.
-
- The second mode is called physical mode. In this mode a single
- block device (/dev/sdb) may be opened directly by the block library
- and the entire space of the LUN is available to the application.
-
- Only the physical mode provides persistence of the data, i.e. the
- data written to the block device survives application exit and
- restart and also reboot. Virtual LUNs do not persist (i.e. they do
- not survive after the application terminates or the system reboots).
-
-
- Block library API
- =================
-
- Applications intending to get access to the CXL Flash from user
- space should use the block library, as it abstracts the details of
- interfacing directly with the cxlflash driver that are necessary for
- performing administrative actions (i.e.: setup, tear down, resize).
- The block library can be thought of as a 'user' of services,
- implemented as IOCTLs, that are provided by the cxlflash driver
- specifically for devices (LUNs) operating in user space access
- mode. While it is not a requirement that applications understand
- the interface between the block library and the cxlflash driver,
- a high-level overview of each supported service (IOCTL) is provided
- below.
-
- The block library can be found on GitHub:
- http://github.com/open-power/capiflash
-
-
- CXL Flash Driver LUN IOCTLs
- ===========================
-
- Users, such as the block library, that wish to interface with a flash
- device (LUN) via user space access need to use the services provided
- by the cxlflash driver. As these services are implemented as ioctls,
- a file descriptor handle must first be obtained in order to establish
- the communication channel between a user and the kernel. This file
- descriptor is obtained by opening the device special file associated
- with the SCSI disk device (/dev/sdb) that was created during LUN
- discovery. Given the location of the cxlflash driver within the
- SCSI protocol stack, this open is not actually seen by the cxlflash
- driver. Upon successful open, the user receives a file descriptor
- (herein referred to as fd1) that should be used for issuing the
- subsequent ioctls listed below.
-
- The structure definitions for these IOCTLs are available in:
- uapi/scsi/cxlflash_ioctl.h
-
- DK_CXLFLASH_ATTACH
- ------------------
-
- This ioctl obtains, initializes, and starts a context using the CXL
- kernel services. These services specify a context id (u16) by which
- to uniquely identify the context and its allocated resources. The
- services additionally provide a second file descriptor (herein
- referred to as fd2) that is used by the block library to initiate
- memory mapped I/O (via mmap()) to the CXL flash device and poll for
- completion events. This file descriptor is intentionally installed by
- this driver and not the CXL kernel services to allow for intermediary
- notification and access in the event of a non-user-initiated close(),
- such as a killed process. This design point is described in further
- detail in the description for the DK_CXLFLASH_DETACH ioctl.
-
- There are a few important aspects regarding the "tokens" (context id
- and fd2) that are provided back to the user:
-
- - These tokens are only valid for the process under which they
-   were created. The child of a forked process cannot continue
-   to use the context id or file descriptor created by its parent
-   (see DK_CXLFLASH_VLUN_CLONE for further details).
-
- - These tokens are only valid for the lifetime of the context and
-   the process under which they were created. Once either is
-   destroyed, the tokens are to be considered stale and subsequent
-   usage will result in errors.
-
- - A valid adapter file descriptor (fd2 >= 0) is only returned on
-   the initial attach for a context. Subsequent attaches to an
-   existing context (DK_CXLFLASH_ATTACH_REUSE_CONTEXT flag present)
-   do not provide the adapter file descriptor as it was previously
-   made known to the application.
-
- - When a context is no longer needed, the user shall detach from
-   the context via the DK_CXLFLASH_DETACH ioctl. When this ioctl
-   returns with a valid adapter file descriptor and the return flag
-   DK_CXLFLASH_APP_CLOSE_ADAP_FD is present, the application _must_
-   close the adapter file descriptor following a successful detach.
-
- - When this ioctl returns with a valid fd2 and the return flag
-   DK_CXLFLASH_APP_CLOSE_ADAP_FD is present, the application _must_
-   close fd2 in the following circumstances:
-
-   + Following a successful detach of the last user of the context
-   + Following a successful recovery on the context's original fd2
-   + In the child process of a fork(), following a clone ioctl,
-     on the fd2 associated with the source context
-
- - At any time, a close on fd2 will invalidate the tokens. Applications
-   should exercise caution to only close fd2 when appropriate (outlined
-   in the previous bullet) to avoid premature loss of I/O.
-
- DK_CXLFLASH_USER_DIRECT
- -----------------------
- This ioctl is responsible for transitioning the LUN to direct
- (physical) mode access and configuring the AFU for direct access from
- user space on a per-context basis. Additionally, the block size and
- last logical block address (LBA) are returned to the user.
-
- As mentioned previously, when operating in user space access mode,
- LUNs may be accessed in whole or in part. Only one mode is allowed
- at a time and if one mode is active (outstanding references exist),
- requests to use the LUN in a different mode are denied.
-
- The AFU is configured for direct access from user space by adding an
- entry to the AFU's resource handle table. The index of the entry is
- treated as a resource handle that is returned to the user. The user
- is then able to use the handle to reference the LUN during I/O.
-
- DK_CXLFLASH_USER_VIRTUAL
- ------------------------
- This ioctl is responsible for transitioning the LUN to virtual mode
- of access and configuring the AFU for virtual access from user space
- on a per-context basis. Additionally, the block size and last logical
- block address (LBA) are returned to the user.
-
- As mentioned previously, when operating in user space access mode,
- LUNs may be accessed in whole or in part. Only one mode is allowed
- at a time and if one mode is active (outstanding references exist),
- requests to use the LUN in a different mode are denied.
-
- The AFU is configured for virtual access from user space by adding
- an entry to the AFU's resource handle table. The index of the entry
- is treated as a resource handle that is returned to the user. The
- user is then able to use the handle to reference the LUN during I/O.
-
- By default, the virtual LUN is created with a size of 0. The user
- would then need to use the DK_CXLFLASH_VLUN_RESIZE ioctl to grow
- the virtual LUN to a desired size. To avoid having to perform this
- resize for the initial creation of the virtual LUN, the user has the
- option of specifying a size as part of the DK_CXLFLASH_USER_VIRTUAL
- ioctl, such that when success is returned to the user, the
- resource handle that is provided is already referencing provisioned
- storage. This is reflected by the last LBA being a non-zero value.
-
- When a LUN is accessible from more than one port, this ioctl will
- return with the DK_CXLFLASH_ALL_PORTS_ACTIVE return flag set. This
- provides the user with a hint that I/O can be retried in the event
- of an I/O error as the LUN can be reached over multiple paths.
-
- DK_CXLFLASH_VLUN_RESIZE
- -----------------------
- This ioctl is responsible for resizing a previously created virtual
- LUN and will fail if invoked upon a LUN that is not in virtual
- mode. Upon success, an updated last LBA is returned to the user
- indicating the new size of the virtual LUN associated with the
- resource handle.
-
- The partitioning of virtual LUNs is jointly mediated by the cxlflash
- driver and the AFU. An allocation table is kept for each LUN that is
- operating in the virtual mode and used to program a LUN translation
- table that the AFU references when provided with a resource handle.
-
- This ioctl can return -EAGAIN if an AFU sync operation takes too long.
- In addition to returning a failure to the user, cxlflash will also
- schedule an asynchronous AFU reset. Should the user choose to retry the
- operation, it is expected to succeed. If this ioctl fails with -EAGAIN,
- the user can either retry the operation or treat it as a failure.
-
- DK_CXLFLASH_RELEASE
- -------------------
- This ioctl is responsible for releasing a previously obtained
- reference to either a physical or virtual LUN. This can be
- thought of as the inverse of the DK_CXLFLASH_USER_DIRECT or
- DK_CXLFLASH_USER_VIRTUAL ioctls. Upon success, the resource handle
- is no longer valid and the entry in the resource handle table is
- made available to be used again.
-
- As part of the release process for virtual LUNs, the virtual LUN
- is first resized to 0 to clear out and free the translation tables
- associated with the virtual LUN reference.
-
- DK_CXLFLASH_DETACH
- ------------------
- This ioctl is responsible for unregistering a context with the
- cxlflash driver and releasing outstanding resources that were
- not explicitly released via the DK_CXLFLASH_RELEASE ioctl. Upon
- success, all "tokens" which had been provided to the user from the
- DK_CXLFLASH_ATTACH onward are no longer valid.
-
- When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
- attach, the application _must_ close the fd2 associated with the context
- following the detach of the final user of the context.
-
- DK_CXLFLASH_VLUN_CLONE
- ----------------------
- This ioctl is responsible for cloning a previously created
- context to a more recently created context. It exists solely to
- support maintaining user space access to storage after a process
- forks. Upon success, the child process (which invoked the ioctl)
- will have access to the same LUNs via the same resource handle(s)
- as the parent, but under a different context.
-
- Context sharing across processes is not supported with CXL and
- therefore each fork must be met with establishing a new context
- for the child process. This ioctl simplifies the state management
- and playback required by a user in such a scenario. When a process
- forks, the child process can clone the parent's context by first
- creating a context (via DK_CXLFLASH_ATTACH) and then using this ioctl
- to perform the clone from the parent to the child.
-
- The clone itself is fairly simple. The resource handle and LUN
- translation tables are copied from the parent context to the child's
- and then synced with the AFU.
-
- When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
- attach, the application _must_ close the fd2 associated with the source
- context (still resident/accessible in the parent process) following the
- clone. This is to avoid a stale entry in the file descriptor table of the
- child process.
-
- This ioctl can return -EAGAIN if an AFU sync operation takes too long.
- In addition to returning a failure to the user, cxlflash will also
- schedule an asynchronous AFU reset. Should the user choose to retry the
- operation, it is expected to succeed. If this ioctl fails with -EAGAIN,
- the user can either retry the operation or treat it as a failure.
-
- DK_CXLFLASH_VERIFY
- ------------------
- This ioctl is used to detect various changes such as the capacity of
- the disk changing, the number of LUNs visible changing, etc. In cases
- where the changes affect the application (such as a LUN resize), the
- cxlflash driver will report the changed state to the application.
-
- The user calls in when they want to validate that a LUN hasn't been
- changed in response to a check condition. As the user is operating out
- of band from the kernel, they will see these types of events without
- the kernel's knowledge. When encountered, the user's architected
- behavior is to call into this ioctl, indicating what they want to
- verify and passing along any appropriate information. For now, only
- verifying a LUN change (i.e.: size different) with sense data is
- supported.
-
- DK_CXLFLASH_RECOVER_AFU
- -----------------------
- This ioctl is used to drive recovery (if such an action is warranted)
- of a specified user context. Any state associated with the user context
- is re-established upon successful recovery.
-
- User contexts are put into an error condition when the device needs to
- be reset or is terminating. Users are notified of this error condition
- by seeing all 0xF's on an MMIO read. Upon encountering this, the
- architected behavior for a user is to call into this ioctl to recover
- their context. A user may also call into this ioctl at any time to
- check if the device is operating normally. If a failure is returned
- from this ioctl, the user is expected to gracefully clean up their
- context via the release/detach ioctls. Until they do, the context they
- hold is not relinquished. The user may also optionally exit the process,
- at which time the context/resources they held will be freed as part of
- the release fop.
-
- When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
- attach, the application _must_ unmap and close the fd2 associated with
- the original context following this ioctl returning success and
- indicating that the context was recovered
- (DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET).
-
- DK_CXLFLASH_MANAGE_LUN
- ----------------------
- This ioctl is used to switch a LUN from a mode where it is available
- for file-system access (legacy) to a mode where it is set aside for
- exclusive user space access (superpipe). In case a LUN is visible
- across multiple ports and adapters, this ioctl is used to uniquely
- identify each LUN by its World Wide Node Name (WWNN).
-
-
- CXL Flash Driver Host IOCTLs
- ============================
-
- Each host adapter instance that is supported by the cxlflash driver
- has a special character device associated with it to enable a set of
- host management functions. These character devices are hosted in a
- class dedicated to cxlflash and can be accessed via `/dev/cxlflash/*`.
-
- Applications can be written to perform various functions using the
- host ioctl APIs below.
-
- The structure definitions for these IOCTLs are available in:
- uapi/scsi/cxlflash_ioctl.h
-
- HT_CXLFLASH_LUN_PROVISION
- -------------------------
- This ioctl is used to create and delete persistent LUNs on cxlflash
- devices that lack an external LUN management interface. It is only
- valid when used with AFUs that support the LUN provision capability.
-
- When sufficient space is available, LUNs can be created by specifying
- the target port to host the LUN and a desired size in 4K blocks. Upon
- success, the LUN ID and WWID of the created LUN will be returned and
- the SCSI bus can be scanned to detect the change in LUN topology. Note
- that partial allocations are not supported. Should a creation fail due
- to a space issue, the target port can be queried for its current LUN
- geometry.
-
- To remove a LUN, the device must first be disassociated from the Linux
- SCSI subsystem. The LUN deletion can then be initiated by specifying a
- target port and LUN ID. Upon success, the LUN geometry associated with
- the port will be updated to reflect the new number of provisioned LUNs
- and available capacity.
-
- To query the LUN geometry of a port, the target port is specified and,
- upon success, the following information is presented:
-
- - Maximum number of provisioned LUNs allowed for the port
- - Current number of provisioned LUNs for the port
- - Maximum total capacity of provisioned LUNs for the port (4K blocks)
- - Current total capacity of provisioned LUNs for the port (4K blocks)
-
- With this information, the number of available LUNs and capacity
- can be calculated.
-
- HT_CXLFLASH_AFU_DEBUG
- ---------------------
- This ioctl is used to debug AFUs by supporting a command pass-through
- interface. It is only valid when used with AFUs that support the AFU
- debug capability.
-
- With the exception of buffer management, AFU debug commands are opaque
- to cxlflash and treated as pass-through. For debug commands that do
- require data transfer, the user supplies an adequately sized data buffer
- and must specify the data transfer direction with respect to the host.
- There is a maximum transfer size of 256K imposed. Note that partial read
- completions are not supported - when errors are experienced with a host
- read data transfer, the data buffer is not copied back to the user.
-1
Documentation/arch/powerpc/index.rst
···
   cpu_families
   cpu_features
   cxl
-  cxlflash
   dawr-power9
   dexcr
   dscr
+12
Documentation/devicetree/bindings/ufs/renesas,ufs.yaml
···
   resets:
     maxItems: 1
 
+  nvmem-cells:
+    maxItems: 1
+
+  nvmem-cell-names:
+    items:
+      - const: calibration
+
+dependencies:
+  nvmem-cells: [ nvmem-cell-names ]
+
 required:
   - compatible
   - reg
···
       freq-table-hz = <200000000 200000000>, <38400000 38400000>;
       power-domains = <&sysc R8A779F0_PD_ALWAYS_ON>;
       resets = <&cpg 1514>;
+      nvmem-cells = <&ufs_tune>;
+      nvmem-cell-names = "calibration";
     };
+105
Documentation/devicetree/bindings/ufs/rockchip,rk3576-ufshc.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/ufs/rockchip,rk3576-ufshc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Rockchip UFS Host Controller
+
+maintainers:
+  - Shawn Lin <shawn.lin@rock-chips.com>
+
+allOf:
+  - $ref: ufs-common.yaml
+
+properties:
+  compatible:
+    const: rockchip,rk3576-ufshc
+
+  reg:
+    maxItems: 5
+
+  reg-names:
+    items:
+      - const: hci
+      - const: mphy
+      - const: hci_grf
+      - const: mphy_grf
+      - const: hci_apb
+
+  clocks:
+    maxItems: 4
+
+  clock-names:
+    items:
+      - const: core
+      - const: pclk
+      - const: pclk_mphy
+      - const: ref_out
+
+  power-domains:
+    maxItems: 1
+
+  resets:
+    maxItems: 4
+
+  reset-names:
+    items:
+      - const: biu
+      - const: sys
+      - const: ufs
+      - const: grf
+
+  reset-gpios:
+    maxItems: 1
+    description: |
+      GPIO specifiers for host to reset the whole UFS device including PHY and
+      memory. This gpio is active low and should choose the one whose high output
+      voltage is lower than 1.5V based on the UFS spec.
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - clocks
+  - clock-names
+  - interrupts
+  - power-domains
+  - resets
+  - reset-names
+  - reset-gpios
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/rockchip,rk3576-cru.h>
+    #include <dt-bindings/reset/rockchip,rk3576-cru.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/power/rockchip,rk3576-power.h>
+    #include <dt-bindings/pinctrl/rockchip.h>
+    #include <dt-bindings/gpio/gpio.h>
+
+    soc {
+      #address-cells = <2>;
+      #size-cells = <2>;
+
+      ufshc: ufshc@2a2d0000 {
+        compatible = "rockchip,rk3576-ufshc";
+        reg = <0x0 0x2a2d0000 0x0 0x10000>,
+              <0x0 0x2b040000 0x0 0x10000>,
+              <0x0 0x2601f000 0x0 0x1000>,
+              <0x0 0x2603c000 0x0 0x1000>,
+              <0x0 0x2a2e0000 0x0 0x10000>;
+        reg-names = "hci", "mphy", "hci_grf", "mphy_grf", "hci_apb";
+        clocks = <&cru ACLK_UFS_SYS>, <&cru PCLK_USB_ROOT>, <&cru PCLK_MPHY>,
+                 <&cru CLK_REF_UFS_CLKOUT>;
+        clock-names = "core", "pclk", "pclk_mphy", "ref_out";
+        interrupts = <GIC_SPI 361 IRQ_TYPE_LEVEL_HIGH>;
+        power-domains = <&power RK3576_PD_USB>;
+        resets = <&cru SRST_A_UFS_BIU>, <&cru SRST_A_UFS_SYS>, <&cru SRST_A_UFS>,
+                 <&cru SRST_P_UFS_GRF>;
+        reset-names = "biu", "sys", "ufs", "grf";
+        reset-gpios = <&gpio4 RK_PD0 GPIO_ACTIVE_LOW>;
+      };
+    };
+5
Documentation/scsi/st.rst
···
 bit definitions are the same as those used with MTSETDRVBUFFER in setting the
 options.
 
+Each directory contains the entry 'position_lost_in_reset'. If this value is
+one, reading from and writing to the device are blocked after a device reset.
+Most devices rewind the tape after a reset, so reads and writes would not
+access the tape position the user expects.
+
 A link named 'tape' is made from the SCSI device directory to the class
 directory corresponding to the mode 0 auto-rewind device (e.g., st0).
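A user-space backup tool that tracks tape position could consult this flag before resuming I/O after a reset. A hedged sketch of that check — the function name is mine, the sysfs directory (e.g. something under /sys for drive st0) is assumed, and the demo uses a temporary directory instead of real sysfs:

```python
from pathlib import Path
import tempfile

def position_lost_in_reset(stats_dir):
    """Return True if the st driver reports the tape position was lost
    after a reset (attribute reads as "1"), in which case reads/writes
    are blocked until the tape is repositioned."""
    flag = Path(stats_dir) / "position_lost_in_reset"
    if not flag.exists():  # attribute absent on kernels without this feature
        return False
    return flag.read_text().strip() == "1"

# Demo with a fake sysfs directory standing in for the st device directory.
with tempfile.TemporaryDirectory() as d:
    print(position_lost_in_reset(d))  # missing attribute -> False
    (Path(d) / "position_lost_in_reset").write_text("1\n")
    print(position_lost_in_reset(d))  # -> True
```

Treating a missing attribute as "position not lost" matches the pre-existing behavior on older kernels, where the driver gave no such indication.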
+1 -1
Documentation/userspace-api/ioctl/ioctl-number.rst
···
 0xC0  00-0F  linux/usb/iowarrior.h
 0xCA  00-0F  uapi/misc/cxl.h
 0xCA  10-2F  uapi/misc/ocxl.h
-0xCA  80-BF  uapi/scsi/cxlflash_ioctl.h
+0xCA  80-BF  uapi/scsi/cxlflash_ioctl.h  Dead since 6.14
 0xCB  00-1F  CBM serial IEC bus  in development:
                                 <mailto:michael.klein@puffin.lb.shuttle.de>
 0xCC  00-0F  drivers/misc/ibmvmc.h  pseries VMC driver
-9
MAINTAINERS
···
 F:	include/misc/cxl*
 F:	include/uapi/misc/cxl.h
 
-CXLFLASH (IBM Coherent Accelerator Processor Interface CAPI Flash) SCSI DRIVER
-M:	Manoj N. Kumar <manoj@linux.ibm.com>
-M:	Uma Krishnan <ukrishn@linux.ibm.com>
-L:	linux-scsi@vger.kernel.org
-S:	Obsolete
-F:	Documentation/arch/powerpc/cxlflash.rst
-F:	drivers/scsi/cxlflash/
-F:	include/uapi/scsi/cxlflash_ioctl.h
-
 CYBERPRO FB DRIVER
 M:	Russell King <linux@armlinux.org.uk>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+24
arch/arm64/boot/dts/rockchip/rk3576.dtsi
··· 1221 1221 }; 1222 1222 }; 1223 1223 1224 + ufshc: ufshc@2a2d0000 { 1225 + compatible = "rockchip,rk3576-ufshc"; 1226 + reg = <0x0 0x2a2d0000 0x0 0x10000>, 1227 + <0x0 0x2b040000 0x0 0x10000>, 1228 + <0x0 0x2601f000 0x0 0x1000>, 1229 + <0x0 0x2603c000 0x0 0x1000>, 1230 + <0x0 0x2a2e0000 0x0 0x10000>; 1231 + reg-names = "hci", "mphy", "hci_grf", "mphy_grf", "hci_apb"; 1232 + clocks = <&cru ACLK_UFS_SYS>, <&cru PCLK_USB_ROOT>, <&cru PCLK_MPHY>, 1233 + <&cru CLK_REF_UFS_CLKOUT>; 1234 + clock-names = "core", "pclk", "pclk_mphy", "ref_out"; 1235 + assigned-clocks = <&cru CLK_REF_OSC_MPHY>; 1236 + assigned-clock-parents = <&cru CLK_REF_MPHY_26M>; 1237 + interrupts = <GIC_SPI 361 IRQ_TYPE_LEVEL_HIGH>; 1238 + power-domains = <&power RK3576_PD_USB>; 1239 + pinctrl-0 = <&ufs_refclk>; 1240 + pinctrl-names = "default"; 1241 + resets = <&cru SRST_A_UFS_BIU>, <&cru SRST_A_UFS_SYS>, 1242 + <&cru SRST_A_UFS>, <&cru SRST_P_UFS_GRF>; 1243 + reset-names = "biu", "sys", "ufs", "grf"; 1244 + reset-gpios = <&gpio4 RK_PD0 GPIO_ACTIVE_LOW>; 1245 + status = "disabled"; 1246 + }; 1247 + 1224 1248 sdmmc: mmc@2a310000 { 1225 1249 compatible = "rockchip,rk3576-dw-mshc"; 1226 1250 reg = <0x0 0x2a310000 0x0 0x4000>;
+2 -62
drivers/message/fusion/mptscsih.c
··· 1843 1843 return FAILED; 1844 1844 } 1845 1845 1846 - /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 1847 - /** 1848 - * mptscsih_target_reset - Perform a SCSI TARGET_RESET! 1849 - * @SCpnt: Pointer to scsi_cmnd structure, IO which reset is due to 1850 - * 1851 - * (linux scsi_host_template.eh_target_reset_handler routine) 1852 - * 1853 - * Returns SUCCESS or FAILED. 1854 - **/ 1855 - int 1856 - mptscsih_target_reset(struct scsi_cmnd * SCpnt) 1857 - { 1858 - MPT_SCSI_HOST *hd; 1859 - int retval; 1860 - VirtDevice *vdevice; 1861 - MPT_ADAPTER *ioc; 1862 - 1863 - /* If we can't locate our host adapter structure, return FAILED status. 1864 - */ 1865 - if ((hd = shost_priv(SCpnt->device->host)) == NULL){ 1866 - printk(KERN_ERR MYNAM ": target reset: " 1867 - "Can't locate host! (sc=%p)\n", SCpnt); 1868 - return FAILED; 1869 - } 1870 - 1871 - ioc = hd->ioc; 1872 - printk(MYIOC_s_INFO_FMT "attempting target reset! (sc=%p)\n", 1873 - ioc->name, SCpnt); 1874 - scsi_print_command(SCpnt); 1875 - 1876 - vdevice = SCpnt->device->hostdata; 1877 - if (!vdevice || !vdevice->vtarget) { 1878 - retval = 0; 1879 - goto out; 1880 - } 1881 - 1882 - /* Target reset to hidden raid component is not supported 1883 - */ 1884 - if (vdevice->vtarget->tflags & MPT_TARGET_FLAGS_RAID_COMPONENT) { 1885 - retval = FAILED; 1886 - goto out; 1887 - } 1888 - 1889 - retval = mptscsih_IssueTaskMgmt(hd, 1890 - MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 1891 - vdevice->vtarget->channel, 1892 - vdevice->vtarget->id, 0, 0, 1893 - mptscsih_get_tm_timeout(ioc)); 1894 - 1895 - out: 1896 - printk (MYIOC_s_INFO_FMT "target reset: %s (sc=%p)\n", 1897 - ioc->name, ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt); 1898 - 1899 - if (retval == 0) 1900 - return SUCCESS; 1901 - else 1902 - return FAILED; 1903 - } 1904 - 1905 1846 1906 1847 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 1907 1848 /** ··· 2856 2915 timeout = 10; 2857 2916 break; 2858 2917 2859 - case RESERVE: 2918 + case RESERVE_6: 2860 2919 cmdLen = 6; 2861 2920 dir = MPI_SCSIIO_CONTROL_READ; 2862 2921 CDB[0] = cmd; 2863 2922 timeout = 10; 2864 2923 break; 2865 2924 2866 - case RELEASE: 2925 + case RELEASE_6: 2867 2926 cmdLen = 6; 2868 2927 dir = MPI_SCSIIO_CONTROL_READ; 2869 2928 CDB[0] = cmd; ··· 3247 3306 EXPORT_SYMBOL(mptscsih_sdev_configure); 3248 3307 EXPORT_SYMBOL(mptscsih_abort); 3249 3308 EXPORT_SYMBOL(mptscsih_dev_reset); 3250 - EXPORT_SYMBOL(mptscsih_target_reset); 3251 3309 EXPORT_SYMBOL(mptscsih_bus_reset); 3252 3310 EXPORT_SYMBOL(mptscsih_host_reset); 3253 3311 EXPORT_SYMBOL(mptscsih_bios_param);
-1
drivers/message/fusion/mptscsih.h
··· 121 121 struct queue_limits *lim); 122 122 extern int mptscsih_abort(struct scsi_cmnd * SCpnt); 123 123 extern int mptscsih_dev_reset(struct scsi_cmnd * SCpnt); 124 - extern int mptscsih_target_reset(struct scsi_cmnd * SCpnt); 125 124 extern int mptscsih_bus_reset(struct scsi_cmnd * SCpnt); 126 125 extern int mptscsih_host_reset(struct scsi_cmnd *SCpnt); 127 126 extern int mptscsih_bios_param(struct scsi_device * sdev, struct block_device *bdev, sector_t capacity, int geom[]);
+1 -2
drivers/scsi/Kconfig
··· 303 303 config ISCSI_TCP 304 304 tristate "iSCSI Initiator over TCP/IP" 305 305 depends on SCSI && INET 306 + select CRC32 306 307 select CRYPTO 307 308 select CRYPTO_MD5 308 - select CRYPTO_CRC32C 309 309 select SCSI_ISCSI_ATTRS 310 310 help 311 311 The iSCSI Driver provides a host with the ability to access storage ··· 336 336 source "drivers/scsi/bnx2i/Kconfig" 337 337 source "drivers/scsi/bnx2fc/Kconfig" 338 338 source "drivers/scsi/be2iscsi/Kconfig" 339 - source "drivers/scsi/cxlflash/Kconfig" 340 339 341 340 config SGIWD93_SCSI 342 341 tristate "SGI WD93C93 SCSI Driver"
-1
drivers/scsi/Makefile
··· 96 96 obj-$(CONFIG_SCSI_ZALON) += zalon7xx.o 97 97 obj-$(CONFIG_SCSI_DC395x) += dc395x.o 98 98 obj-$(CONFIG_SCSI_AM53C974) += esp_scsi.o am53c974.o 99 - obj-$(CONFIG_CXLFLASH) += cxlflash/ 100 99 obj-$(CONFIG_MEGARAID_LEGACY) += megaraid.o 101 100 obj-$(CONFIG_MEGARAID_NEWGEN) += megaraid/ 102 101 obj-$(CONFIG_MEGARAID_SAS) += megaraid/
+2 -2
drivers/scsi/aacraid/aachba.c
··· 3221 3221 break; 3222 3222 } 3223 3223 fallthrough; 3224 - case RESERVE: 3225 - case RELEASE: 3224 + case RESERVE_6: 3225 + case RELEASE_6: 3226 3226 case REZERO_UNIT: 3227 3227 case REASSIGN_BLOCKS: 3228 3228 case SEEK_10:
+1 -1
drivers/scsi/aacraid/linit.c
··· 2029 2029 dev_err(&pdev->dev, "aacraid: PCI error - resume\n"); 2030 2030 } 2031 2031 2032 - static struct pci_error_handlers aac_pci_err_handler = { 2032 + static const struct pci_error_handlers aac_pci_err_handler = { 2033 2033 .error_detected = aac_pci_error_detected, 2034 2034 .mmio_enabled = aac_pci_mmio_enabled, 2035 2035 .slot_reset = aac_pci_slot_reset,
+1 -1
drivers/scsi/arm/acornscsi.c
··· 591 591 case CHANGE_DEFINITION: case COMPARE: case COPY: 592 592 case COPY_VERIFY: case LOG_SELECT: case MODE_SELECT: 593 593 case MODE_SELECT_10: case SEND_DIAGNOSTIC: case WRITE_BUFFER: 594 - case FORMAT_UNIT: case REASSIGN_BLOCKS: case RESERVE: 594 + case FORMAT_UNIT: case REASSIGN_BLOCKS: case RESERVE_6: 595 595 case SEARCH_EQUAL: case SEARCH_HIGH: case SEARCH_LOW: 596 596 case WRITE_6: case WRITE_10: case WRITE_VERIFY: 597 597 case UPDATE_BLOCK: case WRITE_LONG: case WRITE_SAME:
+1 -1
drivers/scsi/be2iscsi/be_main.c
··· 5776 5776 } 5777 5777 5778 5778 5779 - static struct pci_error_handlers beiscsi_eeh_handlers = { 5779 + static const struct pci_error_handlers beiscsi_eeh_handlers = { 5780 5780 .error_detected = beiscsi_eeh_err_detected, 5781 5781 .slot_reset = beiscsi_eeh_reset, 5782 5782 .resume = beiscsi_eeh_resume,
+1 -1
drivers/scsi/bfa/bfad.c
··· 1642 1642 /* 1643 1643 * PCI error recovery handlers. 1644 1644 */ 1645 - static struct pci_error_handlers bfad_err_handler = { 1645 + static const struct pci_error_handlers bfad_err_handler = { 1646 1646 .error_detected = bfad_pci_error_detected, 1647 1647 .slot_reset = bfad_pci_slot_reset, 1648 1648 .mmio_enabled = bfad_pci_mmio_enabled,
+1 -1
drivers/scsi/csiostor/csio_init.c
··· 1162 1162 dev_err(&pdev->dev, "resume of device failed: %d\n", rv); 1163 1163 } 1164 1164 1165 - static struct pci_error_handlers csio_err_handler = { 1165 + static const struct pci_error_handlers csio_err_handler = { 1166 1166 .error_detected = csio_pci_error_detected, 1167 1167 .slot_reset = csio_pci_slot_reset, 1168 1168 .resume = csio_pci_resume,
-15
drivers/scsi/cxlflash/Kconfig
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - # 3 - # IBM CXL-attached Flash Accelerator SCSI Driver 4 - # 5 - 6 - config CXLFLASH 7 - tristate "Support for IBM CAPI Flash (DEPRECATED)" 8 - depends on PCI && SCSI && (CXL || OCXL) && EEH 9 - select IRQ_POLL 10 - help 11 - The cxlflash driver is deprecated and will be removed in a future 12 - kernel release. 13 - 14 - Allows CAPI Accelerated IO to Flash 15 - If unsure, say N.
-5
drivers/scsi/cxlflash/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - obj-$(CONFIG_CXLFLASH) += cxlflash.o 3 - cxlflash-y += main.o superpipe.o lunmgt.o vlun.o 4 - cxlflash-$(CONFIG_CXL) += cxl_hw.o 5 - cxlflash-$(CONFIG_OCXL) += ocxl_hw.o
-48
drivers/scsi/cxlflash/backend.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 - * Uma Krishnan <ukrishn@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2018 IBM Corporation 9 - */ 10 - 11 - #ifndef _CXLFLASH_BACKEND_H 12 - #define _CXLFLASH_BACKEND_H 13 - 14 - extern const struct cxlflash_backend_ops cxlflash_cxl_ops; 15 - extern const struct cxlflash_backend_ops cxlflash_ocxl_ops; 16 - 17 - struct cxlflash_backend_ops { 18 - struct module *module; 19 - void __iomem * (*psa_map)(void *ctx_cookie); 20 - void (*psa_unmap)(void __iomem *addr); 21 - int (*process_element)(void *ctx_cookie); 22 - int (*map_afu_irq)(void *ctx_cookie, int num, irq_handler_t handler, 23 - void *cookie, char *name); 24 - void (*unmap_afu_irq)(void *ctx_cookie, int num, void *cookie); 25 - u64 (*get_irq_objhndl)(void *ctx_cookie, int irq); 26 - int (*start_context)(void *ctx_cookie); 27 - int (*stop_context)(void *ctx_cookie); 28 - int (*afu_reset)(void *ctx_cookie); 29 - void (*set_master)(void *ctx_cookie); 30 - void * (*get_context)(struct pci_dev *dev, void *afu_cookie); 31 - void * (*dev_context_init)(struct pci_dev *dev, void *afu_cookie); 32 - int (*release_context)(void *ctx_cookie); 33 - void (*perst_reloads_same_image)(void *afu_cookie, bool image); 34 - ssize_t (*read_adapter_vpd)(struct pci_dev *dev, void *buf, 35 - size_t count); 36 - int (*allocate_afu_irqs)(void *ctx_cookie, int num); 37 - void (*free_afu_irqs)(void *ctx_cookie); 38 - void * (*create_afu)(struct pci_dev *dev); 39 - void (*destroy_afu)(void *afu_cookie); 40 - struct file * (*get_fd)(void *ctx_cookie, struct file_operations *fops, 41 - int *fd); 42 - void * (*fops_get_context)(struct file *file); 43 - int (*start_work)(void *ctx_cookie, u64 irqs); 44 - int (*fd_mmap)(struct file *file, struct vm_area_struct *vm); 45 - int (*fd_release)(struct inode *inode, struct file *file); 46 - }; 47 - 48 - #endif /* _CXLFLASH_BACKEND_H */
-340
drivers/scsi/cxlflash/common.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 6 - * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2015 IBM Corporation 9 - */ 10 - 11 - #ifndef _CXLFLASH_COMMON_H 12 - #define _CXLFLASH_COMMON_H 13 - 14 - #include <linux/async.h> 15 - #include <linux/cdev.h> 16 - #include <linux/irq_poll.h> 17 - #include <linux/list.h> 18 - #include <linux/rwsem.h> 19 - #include <linux/types.h> 20 - #include <scsi/scsi.h> 21 - #include <scsi/scsi_cmnd.h> 22 - #include <scsi/scsi_device.h> 23 - 24 - #include "backend.h" 25 - 26 - extern const struct file_operations cxlflash_cxl_fops; 27 - 28 - #define MAX_CONTEXT CXLFLASH_MAX_CONTEXT /* num contexts per afu */ 29 - #define MAX_FC_PORTS CXLFLASH_MAX_FC_PORTS /* max ports per AFU */ 30 - #define LEGACY_FC_PORTS 2 /* legacy ports per AFU */ 31 - 32 - #define CHAN2PORTBANK(_x) ((_x) >> ilog2(CXLFLASH_NUM_FC_PORTS_PER_BANK)) 33 - #define CHAN2BANKPORT(_x) ((_x) & (CXLFLASH_NUM_FC_PORTS_PER_BANK - 1)) 34 - 35 - #define CHAN2PORTMASK(_x) (1 << (_x)) /* channel to port mask */ 36 - #define PORTMASK2CHAN(_x) (ilog2((_x))) /* port mask to channel */ 37 - #define PORTNUM2CHAN(_x) ((_x) - 1) /* port number to channel */ 38 - 39 - #define CXLFLASH_BLOCK_SIZE 4096 /* 4K blocks */ 40 - #define CXLFLASH_MAX_XFER_SIZE 16777216 /* 16MB transfer */ 41 - #define CXLFLASH_MAX_SECTORS (CXLFLASH_MAX_XFER_SIZE/512) /* SCSI wants 42 - * max_sectors 43 - * in units of 44 - * 512 byte 45 - * sectors 46 - */ 47 - 48 - #define MAX_RHT_PER_CONTEXT (PAGE_SIZE / sizeof(struct sisl_rht_entry)) 49 - 50 - /* AFU command retry limit */ 51 - #define MC_RETRY_CNT 5 /* Sufficient for SCSI and certain AFU errors */ 52 - 53 - /* Command management definitions */ 54 - #define CXLFLASH_MAX_CMDS 256 55 - #define CXLFLASH_MAX_CMDS_PER_LUN CXLFLASH_MAX_CMDS 56 - 57 - /* RRQ for master issued cmds */ 58 - 
#define NUM_RRQ_ENTRY CXLFLASH_MAX_CMDS 59 - 60 - /* SQ for master issued cmds */ 61 - #define NUM_SQ_ENTRY CXLFLASH_MAX_CMDS 62 - 63 - /* Hardware queue definitions */ 64 - #define CXLFLASH_DEF_HWQS 1 65 - #define CXLFLASH_MAX_HWQS 8 66 - #define PRIMARY_HWQ 0 67 - 68 - 69 - static inline void check_sizes(void) 70 - { 71 - BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_NUM_FC_PORTS_PER_BANK); 72 - BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_MAX_CMDS); 73 - } 74 - 75 - /* AFU defines a fixed size of 4K for command buffers (borrow 4K page define) */ 76 - #define CMD_BUFSIZE SIZE_4K 77 - 78 - enum cxlflash_lr_state { 79 - LINK_RESET_INVALID, 80 - LINK_RESET_REQUIRED, 81 - LINK_RESET_COMPLETE 82 - }; 83 - 84 - enum cxlflash_init_state { 85 - INIT_STATE_NONE, 86 - INIT_STATE_PCI, 87 - INIT_STATE_AFU, 88 - INIT_STATE_SCSI, 89 - INIT_STATE_CDEV 90 - }; 91 - 92 - enum cxlflash_state { 93 - STATE_PROBING, /* Initial state during probe */ 94 - STATE_PROBED, /* Temporary state, probe completed but EEH occurred */ 95 - STATE_NORMAL, /* Normal running state, everything good */ 96 - STATE_RESET, /* Reset state, trying to reset/recover */ 97 - STATE_FAILTERM /* Failed/terminating state, error out users/threads */ 98 - }; 99 - 100 - enum cxlflash_hwq_mode { 101 - HWQ_MODE_RR, /* Roundrobin (default) */ 102 - HWQ_MODE_TAG, /* Distribute based on block MQ tag */ 103 - HWQ_MODE_CPU, /* CPU affinity */ 104 - MAX_HWQ_MODE 105 - }; 106 - 107 - /* 108 - * Each context has its own set of resource handles that is visible 109 - * only from that context. 
110 - */ 111 - 112 - struct cxlflash_cfg { 113 - struct afu *afu; 114 - 115 - const struct cxlflash_backend_ops *ops; 116 - struct pci_dev *dev; 117 - struct pci_device_id *dev_id; 118 - struct Scsi_Host *host; 119 - int num_fc_ports; 120 - struct cdev cdev; 121 - struct device *chardev; 122 - 123 - ulong cxlflash_regs_pci; 124 - 125 - struct work_struct work_q; 126 - enum cxlflash_init_state init_state; 127 - enum cxlflash_lr_state lr_state; 128 - int lr_port; 129 - atomic_t scan_host_needed; 130 - 131 - void *afu_cookie; 132 - 133 - atomic_t recovery_threads; 134 - struct mutex ctx_recovery_mutex; 135 - struct mutex ctx_tbl_list_mutex; 136 - struct rw_semaphore ioctl_rwsem; 137 - struct ctx_info *ctx_tbl[MAX_CONTEXT]; 138 - struct list_head ctx_err_recovery; /* contexts w/ recovery pending */ 139 - struct file_operations cxl_fops; 140 - 141 - /* Parameters that are LUN table related */ 142 - int last_lun_index[MAX_FC_PORTS]; 143 - int promote_lun_index; 144 - struct list_head lluns; /* list of llun_info structs */ 145 - 146 - wait_queue_head_t tmf_waitq; 147 - spinlock_t tmf_slock; 148 - bool tmf_active; 149 - bool ws_unmap; /* Write-same unmap supported */ 150 - wait_queue_head_t reset_waitq; 151 - enum cxlflash_state state; 152 - async_cookie_t async_reset_cookie; 153 - }; 154 - 155 - struct afu_cmd { 156 - struct sisl_ioarcb rcb; /* IOARCB (cache line aligned) */ 157 - struct sisl_ioasa sa; /* IOASA must follow IOARCB */ 158 - struct afu *parent; 159 - struct scsi_cmnd *scp; 160 - struct completion cevent; 161 - struct list_head queue; 162 - u32 hwq_index; 163 - 164 - u8 cmd_tmf:1, 165 - cmd_aborted:1; 166 - 167 - struct list_head list; /* Pending commands link */ 168 - 169 - /* As per the SISLITE spec the IOARCB EA has to be 16-byte aligned. 170 - * However for performance reasons the IOARCB/IOASA should be 171 - * cache line aligned. 
172 - */ 173 - } __aligned(cache_line_size()); 174 - 175 - static inline struct afu_cmd *sc_to_afuc(struct scsi_cmnd *sc) 176 - { 177 - return PTR_ALIGN(scsi_cmd_priv(sc), __alignof__(struct afu_cmd)); 178 - } 179 - 180 - static inline struct afu_cmd *sc_to_afuci(struct scsi_cmnd *sc) 181 - { 182 - struct afu_cmd *afuc = sc_to_afuc(sc); 183 - 184 - INIT_LIST_HEAD(&afuc->queue); 185 - return afuc; 186 - } 187 - 188 - static inline struct afu_cmd *sc_to_afucz(struct scsi_cmnd *sc) 189 - { 190 - struct afu_cmd *afuc = sc_to_afuc(sc); 191 - 192 - memset(afuc, 0, sizeof(*afuc)); 193 - return sc_to_afuci(sc); 194 - } 195 - 196 - struct hwq { 197 - /* Stuff requiring alignment go first. */ 198 - struct sisl_ioarcb sq[NUM_SQ_ENTRY]; /* 16K SQ */ 199 - u64 rrq_entry[NUM_RRQ_ENTRY]; /* 2K RRQ */ 200 - 201 - /* Beware of alignment till here. Preferably introduce new 202 - * fields after this point 203 - */ 204 - struct afu *afu; 205 - void *ctx_cookie; 206 - struct sisl_host_map __iomem *host_map; /* MC host map */ 207 - struct sisl_ctrl_map __iomem *ctrl_map; /* MC control map */ 208 - ctx_hndl_t ctx_hndl; /* master's context handle */ 209 - u32 index; /* Index of this hwq */ 210 - int num_irqs; /* Number of interrupts requested for context */ 211 - struct list_head pending_cmds; /* Commands pending completion */ 212 - 213 - atomic_t hsq_credits; 214 - spinlock_t hsq_slock; /* Hardware send queue lock */ 215 - struct sisl_ioarcb *hsq_start; 216 - struct sisl_ioarcb *hsq_end; 217 - struct sisl_ioarcb *hsq_curr; 218 - spinlock_t hrrq_slock; 219 - u64 *hrrq_start; 220 - u64 *hrrq_end; 221 - u64 *hrrq_curr; 222 - bool toggle; 223 - bool hrrq_online; 224 - 225 - s64 room; 226 - 227 - struct irq_poll irqpoll; 228 - } __aligned(cache_line_size()); 229 - 230 - struct afu { 231 - struct hwq hwqs[CXLFLASH_MAX_HWQS]; 232 - int (*send_cmd)(struct afu *afu, struct afu_cmd *cmd); 233 - int (*context_reset)(struct hwq *hwq); 234 - 235 - /* AFU HW */ 236 - struct cxlflash_afu_map __iomem 
*afu_map; /* entire MMIO map */ 237 - 238 - atomic_t cmds_active; /* Number of currently active AFU commands */ 239 - struct mutex sync_active; /* Mutex to serialize AFU commands */ 240 - u64 hb; 241 - u32 internal_lun; /* User-desired LUN mode for this AFU */ 242 - 243 - u32 num_hwqs; /* Number of hardware queues */ 244 - u32 desired_hwqs; /* Desired h/w queues, effective on AFU reset */ 245 - enum cxlflash_hwq_mode hwq_mode; /* Steering mode for h/w queues */ 246 - u32 hwq_rr_count; /* Count to distribute traffic for roundrobin */ 247 - 248 - char version[16]; 249 - u64 interface_version; 250 - 251 - u32 irqpoll_weight; 252 - struct cxlflash_cfg *parent; /* Pointer back to parent cxlflash_cfg */ 253 - }; 254 - 255 - static inline struct hwq *get_hwq(struct afu *afu, u32 index) 256 - { 257 - WARN_ON(index >= CXLFLASH_MAX_HWQS); 258 - 259 - return &afu->hwqs[index]; 260 - } 261 - 262 - static inline bool afu_is_irqpoll_enabled(struct afu *afu) 263 - { 264 - return !!afu->irqpoll_weight; 265 - } 266 - 267 - static inline bool afu_has_cap(struct afu *afu, u64 cap) 268 - { 269 - u64 afu_cap = afu->interface_version >> SISL_INTVER_CAP_SHIFT; 270 - 271 - return afu_cap & cap; 272 - } 273 - 274 - static inline bool afu_is_ocxl_lisn(struct afu *afu) 275 - { 276 - return afu_has_cap(afu, SISL_INTVER_CAP_OCXL_LISN); 277 - } 278 - 279 - static inline bool afu_is_afu_debug(struct afu *afu) 280 - { 281 - return afu_has_cap(afu, SISL_INTVER_CAP_AFU_DEBUG); 282 - } 283 - 284 - static inline bool afu_is_lun_provision(struct afu *afu) 285 - { 286 - return afu_has_cap(afu, SISL_INTVER_CAP_LUN_PROVISION); 287 - } 288 - 289 - static inline bool afu_is_sq_cmd_mode(struct afu *afu) 290 - { 291 - return afu_has_cap(afu, SISL_INTVER_CAP_SQ_CMD_MODE); 292 - } 293 - 294 - static inline bool afu_is_ioarrin_cmd_mode(struct afu *afu) 295 - { 296 - return afu_has_cap(afu, SISL_INTVER_CAP_IOARRIN_CMD_MODE); 297 - } 298 - 299 - static inline u64 lun_to_lunid(u64 lun) 300 - { 301 - __be64 lun_id; 
302 - 303 - int_to_scsilun(lun, (struct scsi_lun *)&lun_id); 304 - return be64_to_cpu(lun_id); 305 - } 306 - 307 - static inline struct fc_port_bank __iomem *get_fc_port_bank( 308 - struct cxlflash_cfg *cfg, int i) 309 - { 310 - struct afu *afu = cfg->afu; 311 - 312 - return &afu->afu_map->global.bank[CHAN2PORTBANK(i)]; 313 - } 314 - 315 - static inline __be64 __iomem *get_fc_port_regs(struct cxlflash_cfg *cfg, int i) 316 - { 317 - struct fc_port_bank __iomem *fcpb = get_fc_port_bank(cfg, i); 318 - 319 - return &fcpb->fc_port_regs[CHAN2BANKPORT(i)][0]; 320 - } 321 - 322 - static inline __be64 __iomem *get_fc_port_luns(struct cxlflash_cfg *cfg, int i) 323 - { 324 - struct fc_port_bank __iomem *fcpb = get_fc_port_bank(cfg, i); 325 - 326 - return &fcpb->fc_port_luns[CHAN2BANKPORT(i)][0]; 327 - } 328 - 329 - int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t c, res_hndl_t r, u8 mode); 330 - void cxlflash_list_init(void); 331 - void cxlflash_term_global_luns(void); 332 - void cxlflash_free_errpage(void); 333 - int cxlflash_ioctl(struct scsi_device *sdev, unsigned int cmd, 334 - void __user *arg); 335 - void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *cfg); 336 - int cxlflash_mark_contexts_error(struct cxlflash_cfg *cfg); 337 - void cxlflash_term_local_luns(struct cxlflash_cfg *cfg); 338 - void cxlflash_restore_luntable(struct cxlflash_cfg *cfg); 339 - 340 - #endif /* ifndef _CXLFLASH_COMMON_H */
-177
drivers/scsi/cxlflash/cxl_hw.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 - * Uma Krishnan <ukrishn@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2018 IBM Corporation 9 - */ 10 - 11 - #include <misc/cxl.h> 12 - 13 - #include "backend.h" 14 - 15 - /* 16 - * The following routines map the cxlflash backend operations to existing CXL 17 - * kernel API function and are largely simple shims that provide an abstraction 18 - * for converting generic context and AFU cookies into cxl_context or cxl_afu 19 - * pointers. 20 - */ 21 - 22 - static void __iomem *cxlflash_psa_map(void *ctx_cookie) 23 - { 24 - return cxl_psa_map(ctx_cookie); 25 - } 26 - 27 - static void cxlflash_psa_unmap(void __iomem *addr) 28 - { 29 - cxl_psa_unmap(addr); 30 - } 31 - 32 - static int cxlflash_process_element(void *ctx_cookie) 33 - { 34 - return cxl_process_element(ctx_cookie); 35 - } 36 - 37 - static int cxlflash_map_afu_irq(void *ctx_cookie, int num, 38 - irq_handler_t handler, void *cookie, char *name) 39 - { 40 - return cxl_map_afu_irq(ctx_cookie, num, handler, cookie, name); 41 - } 42 - 43 - static void cxlflash_unmap_afu_irq(void *ctx_cookie, int num, void *cookie) 44 - { 45 - cxl_unmap_afu_irq(ctx_cookie, num, cookie); 46 - } 47 - 48 - static u64 cxlflash_get_irq_objhndl(void *ctx_cookie, int irq) 49 - { 50 - /* Dummy fop for cxl */ 51 - return 0; 52 - } 53 - 54 - static int cxlflash_start_context(void *ctx_cookie) 55 - { 56 - return cxl_start_context(ctx_cookie, 0, NULL); 57 - } 58 - 59 - static int cxlflash_stop_context(void *ctx_cookie) 60 - { 61 - return cxl_stop_context(ctx_cookie); 62 - } 63 - 64 - static int cxlflash_afu_reset(void *ctx_cookie) 65 - { 66 - return cxl_afu_reset(ctx_cookie); 67 - } 68 - 69 - static void cxlflash_set_master(void *ctx_cookie) 70 - { 71 - cxl_set_master(ctx_cookie); 72 - } 73 - 74 - static void *cxlflash_get_context(struct pci_dev *dev, 
void *afu_cookie) 75 - { 76 - return cxl_get_context(dev); 77 - } 78 - 79 - static void *cxlflash_dev_context_init(struct pci_dev *dev, void *afu_cookie) 80 - { 81 - return cxl_dev_context_init(dev); 82 - } 83 - 84 - static int cxlflash_release_context(void *ctx_cookie) 85 - { 86 - return cxl_release_context(ctx_cookie); 87 - } 88 - 89 - static void cxlflash_perst_reloads_same_image(void *afu_cookie, bool image) 90 - { 91 - cxl_perst_reloads_same_image(afu_cookie, image); 92 - } 93 - 94 - static ssize_t cxlflash_read_adapter_vpd(struct pci_dev *dev, 95 - void *buf, size_t count) 96 - { 97 - return cxl_read_adapter_vpd(dev, buf, count); 98 - } 99 - 100 - static int cxlflash_allocate_afu_irqs(void *ctx_cookie, int num) 101 - { 102 - return cxl_allocate_afu_irqs(ctx_cookie, num); 103 - } 104 - 105 - static void cxlflash_free_afu_irqs(void *ctx_cookie) 106 - { 107 - cxl_free_afu_irqs(ctx_cookie); 108 - } 109 - 110 - static void *cxlflash_create_afu(struct pci_dev *dev) 111 - { 112 - return cxl_pci_to_afu(dev); 113 - } 114 - 115 - static void cxlflash_destroy_afu(void *afu) 116 - { 117 - /* Dummy fop for cxl */ 118 - } 119 - 120 - static struct file *cxlflash_get_fd(void *ctx_cookie, 121 - struct file_operations *fops, int *fd) 122 - { 123 - return cxl_get_fd(ctx_cookie, fops, fd); 124 - } 125 - 126 - static void *cxlflash_fops_get_context(struct file *file) 127 - { 128 - return cxl_fops_get_context(file); 129 - } 130 - 131 - static int cxlflash_start_work(void *ctx_cookie, u64 irqs) 132 - { 133 - struct cxl_ioctl_start_work work = { 0 }; 134 - 135 - work.num_interrupts = irqs; 136 - work.flags = CXL_START_WORK_NUM_IRQS; 137 - 138 - return cxl_start_work(ctx_cookie, &work); 139 - } 140 - 141 - static int cxlflash_fd_mmap(struct file *file, struct vm_area_struct *vm) 142 - { 143 - return cxl_fd_mmap(file, vm); 144 - } 145 - 146 - static int cxlflash_fd_release(struct inode *inode, struct file *file) 147 - { 148 - return cxl_fd_release(inode, file); 149 - } 150 - 151 - 
const struct cxlflash_backend_ops cxlflash_cxl_ops = { 152 - .module = THIS_MODULE, 153 - .psa_map = cxlflash_psa_map, 154 - .psa_unmap = cxlflash_psa_unmap, 155 - .process_element = cxlflash_process_element, 156 - .map_afu_irq = cxlflash_map_afu_irq, 157 - .unmap_afu_irq = cxlflash_unmap_afu_irq, 158 - .get_irq_objhndl = cxlflash_get_irq_objhndl, 159 - .start_context = cxlflash_start_context, 160 - .stop_context = cxlflash_stop_context, 161 - .afu_reset = cxlflash_afu_reset, 162 - .set_master = cxlflash_set_master, 163 - .get_context = cxlflash_get_context, 164 - .dev_context_init = cxlflash_dev_context_init, 165 - .release_context = cxlflash_release_context, 166 - .perst_reloads_same_image = cxlflash_perst_reloads_same_image, 167 - .read_adapter_vpd = cxlflash_read_adapter_vpd, 168 - .allocate_afu_irqs = cxlflash_allocate_afu_irqs, 169 - .free_afu_irqs = cxlflash_free_afu_irqs, 170 - .create_afu = cxlflash_create_afu, 171 - .destroy_afu = cxlflash_destroy_afu, 172 - .get_fd = cxlflash_get_fd, 173 - .fops_get_context = cxlflash_fops_get_context, 174 - .start_work = cxlflash_start_work, 175 - .fd_mmap = cxlflash_fd_mmap, 176 - .fd_release = cxlflash_fd_release, 177 - };
-278
drivers/scsi/cxlflash/lunmgt.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 6 - * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2015 IBM Corporation 9 - */ 10 - 11 - #include <linux/unaligned.h> 12 - 13 - #include <linux/interrupt.h> 14 - #include <linux/pci.h> 15 - 16 - #include <scsi/scsi_host.h> 17 - #include <uapi/scsi/cxlflash_ioctl.h> 18 - 19 - #include "sislite.h" 20 - #include "common.h" 21 - #include "vlun.h" 22 - #include "superpipe.h" 23 - 24 - /** 25 - * create_local() - allocate and initialize a local LUN information structure 26 - * @sdev: SCSI device associated with LUN. 27 - * @wwid: World Wide Node Name for LUN. 28 - * 29 - * Return: Allocated local llun_info structure on success, NULL on failure 30 - */ 31 - static struct llun_info *create_local(struct scsi_device *sdev, u8 *wwid) 32 - { 33 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 34 - struct device *dev = &cfg->dev->dev; 35 - struct llun_info *lli = NULL; 36 - 37 - lli = kzalloc(sizeof(*lli), GFP_KERNEL); 38 - if (unlikely(!lli)) { 39 - dev_err(dev, "%s: could not allocate lli\n", __func__); 40 - goto out; 41 - } 42 - 43 - lli->sdev = sdev; 44 - lli->host_no = sdev->host->host_no; 45 - lli->in_table = false; 46 - 47 - memcpy(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN); 48 - out: 49 - return lli; 50 - } 51 - 52 - /** 53 - * create_global() - allocate and initialize a global LUN information structure 54 - * @sdev: SCSI device associated with LUN. 55 - * @wwid: World Wide Node Name for LUN. 
56 - * 57 - * Return: Allocated global glun_info structure on success, NULL on failure 58 - */ 59 - static struct glun_info *create_global(struct scsi_device *sdev, u8 *wwid) 60 - { 61 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 62 - struct device *dev = &cfg->dev->dev; 63 - struct glun_info *gli = NULL; 64 - 65 - gli = kzalloc(sizeof(*gli), GFP_KERNEL); 66 - if (unlikely(!gli)) { 67 - dev_err(dev, "%s: could not allocate gli\n", __func__); 68 - goto out; 69 - } 70 - 71 - mutex_init(&gli->mutex); 72 - memcpy(gli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN); 73 - out: 74 - return gli; 75 - } 76 - 77 - /** 78 - * lookup_local() - find a local LUN information structure by WWID 79 - * @cfg: Internal structure associated with the host. 80 - * @wwid: WWID associated with LUN. 81 - * 82 - * Return: Found local lun_info structure on success, NULL on failure 83 - */ 84 - static struct llun_info *lookup_local(struct cxlflash_cfg *cfg, u8 *wwid) 85 - { 86 - struct llun_info *lli, *temp; 87 - 88 - list_for_each_entry_safe(lli, temp, &cfg->lluns, list) 89 - if (!memcmp(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN)) 90 - return lli; 91 - 92 - return NULL; 93 - } 94 - 95 - /** 96 - * lookup_global() - find a global LUN information structure by WWID 97 - * @wwid: WWID associated with LUN. 98 - * 99 - * Return: Found global lun_info structure on success, NULL on failure 100 - */ 101 - static struct glun_info *lookup_global(u8 *wwid) 102 - { 103 - struct glun_info *gli, *temp; 104 - 105 - list_for_each_entry_safe(gli, temp, &global.gluns, list) 106 - if (!memcmp(gli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN)) 107 - return gli; 108 - 109 - return NULL; 110 - } 111 - 112 - /** 113 - * find_and_create_lun() - find or create a local LUN information structure 114 - * @sdev: SCSI device associated with LUN. 115 - * @wwid: WWID associated with LUN. 116 - * 117 - * The LUN is kept both in a local list (per adapter) and in a global list 118 - * (across all adapters). 
Certain attributes of the LUN are local to the 119 - * adapter (such as index, port selection mask, etc.). 120 - * 121 - * The block allocation map is shared across all adapters (i.e. associated 122 - * wih the global list). Since different attributes are associated with 123 - * the per adapter and global entries, allocate two separate structures for each 124 - * LUN (one local, one global). 125 - * 126 - * Keep a pointer back from the local to the global entry. 127 - * 128 - * This routine assumes the caller holds the global mutex. 129 - * 130 - * Return: Found/Allocated local lun_info structure on success, NULL on failure 131 - */ 132 - static struct llun_info *find_and_create_lun(struct scsi_device *sdev, u8 *wwid) 133 - { 134 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 135 - struct device *dev = &cfg->dev->dev; 136 - struct llun_info *lli = NULL; 137 - struct glun_info *gli = NULL; 138 - 139 - if (unlikely(!wwid)) 140 - goto out; 141 - 142 - lli = lookup_local(cfg, wwid); 143 - if (lli) 144 - goto out; 145 - 146 - lli = create_local(sdev, wwid); 147 - if (unlikely(!lli)) 148 - goto out; 149 - 150 - gli = lookup_global(wwid); 151 - if (gli) { 152 - lli->parent = gli; 153 - list_add(&lli->list, &cfg->lluns); 154 - goto out; 155 - } 156 - 157 - gli = create_global(sdev, wwid); 158 - if (unlikely(!gli)) { 159 - kfree(lli); 160 - lli = NULL; 161 - goto out; 162 - } 163 - 164 - lli->parent = gli; 165 - list_add(&lli->list, &cfg->lluns); 166 - 167 - list_add(&gli->list, &global.gluns); 168 - 169 - out: 170 - dev_dbg(dev, "%s: returning lli=%p, gli=%p\n", __func__, lli, gli); 171 - return lli; 172 - } 173 - 174 - /** 175 - * cxlflash_term_local_luns() - Delete all entries from local LUN list, free. 176 - * @cfg: Internal structure associated with the host. 
177 - */
178 - void cxlflash_term_local_luns(struct cxlflash_cfg *cfg)
179 - {
180 - 	struct llun_info *lli, *temp;
181 -
182 - 	mutex_lock(&global.mutex);
183 - 	list_for_each_entry_safe(lli, temp, &cfg->lluns, list) {
184 - 		list_del(&lli->list);
185 - 		kfree(lli);
186 - 	}
187 - 	mutex_unlock(&global.mutex);
188 - }
189 -
190 - /**
191 - * cxlflash_list_init() - initializes the global LUN list
192 - */
193 - void cxlflash_list_init(void)
194 - {
195 - 	INIT_LIST_HEAD(&global.gluns);
196 - 	mutex_init(&global.mutex);
197 - 	global.err_page = NULL;
198 - }
199 -
200 - /**
201 - * cxlflash_term_global_luns() - frees resources associated with global LUN list
202 - */
203 - void cxlflash_term_global_luns(void)
204 - {
205 - 	struct glun_info *gli, *temp;
206 -
207 - 	mutex_lock(&global.mutex);
208 - 	list_for_each_entry_safe(gli, temp, &global.gluns, list) {
209 - 		list_del(&gli->list);
210 - 		cxlflash_ba_terminate(&gli->blka.ba_lun);
211 - 		kfree(gli);
212 - 	}
213 - 	mutex_unlock(&global.mutex);
214 - }
215 -
216 - /**
217 - * cxlflash_manage_lun() - handles LUN management activities
218 - * @sdev: SCSI device associated with LUN.
219 - * @arg: Manage ioctl data structure.
220 - *
221 - * This routine is used to notify the driver about a LUN's WWID and associate
222 - * SCSI devices (sdev) with a global LUN instance. Additionally it serves to
223 - * change a LUN's operating mode: legacy or superpipe.
224 - *
225 - * Return: 0 on success, -errno on failure
226 - */
227 - int cxlflash_manage_lun(struct scsi_device *sdev, void *arg)
228 - {
229 - 	struct dk_cxlflash_manage_lun *manage = arg;
230 - 	struct cxlflash_cfg *cfg = shost_priv(sdev->host);
231 - 	struct device *dev = &cfg->dev->dev;
232 - 	struct llun_info *lli = NULL;
233 - 	int rc = 0;
234 - 	u64 flags = manage->hdr.flags;
235 - 	u32 chan = sdev->channel;
236 -
237 - 	mutex_lock(&global.mutex);
238 - 	lli = find_and_create_lun(sdev, manage->wwid);
239 - 	dev_dbg(dev, "%s: WWID=%016llx%016llx, flags=%016llx lli=%p\n",
240 - 		__func__, get_unaligned_be64(&manage->wwid[0]),
241 - 		get_unaligned_be64(&manage->wwid[8]), manage->hdr.flags, lli);
242 - 	if (unlikely(!lli)) {
243 - 		rc = -ENOMEM;
244 - 		goto out;
245 - 	}
246 -
247 - 	if (flags & DK_CXLFLASH_MANAGE_LUN_ENABLE_SUPERPIPE) {
248 - 		/*
249 - 		 * Update port selection mask based upon channel, store off LUN
250 - 		 * in unpacked, AFU-friendly format, and hang LUN reference in
251 - 		 * the sdev.
252 - 		 */
253 - 		lli->port_sel |= CHAN2PORTMASK(chan);
254 - 		lli->lun_id[chan] = lun_to_lunid(sdev->lun);
255 - 		sdev->hostdata = lli;
256 - 	} else if (flags & DK_CXLFLASH_MANAGE_LUN_DISABLE_SUPERPIPE) {
257 - 		if (lli->parent->mode != MODE_NONE)
258 - 			rc = -EBUSY;
259 - 		else {
260 - 			/*
261 - 			 * Clean up local LUN for this port and reset table
262 - 			 * tracking when no more references exist.
263 - 			 */
264 - 			sdev->hostdata = NULL;
265 - 			lli->port_sel &= ~CHAN2PORTMASK(chan);
266 - 			if (lli->port_sel == 0U)
267 - 				lli->in_table = false;
268 - 		}
269 - 	}
270 -
271 - 	dev_dbg(dev, "%s: port_sel=%08x chan=%u lun_id=%016llx\n",
272 - 		__func__, lli->port_sel, chan, lli->lun_id[chan]);
273 -
274 - out:
275 - 	mutex_unlock(&global.mutex);
276 - 	dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
277 - 	return rc;
278 - }
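The find-or-create flow described in the removed `find_and_create_lun()` above (a per-adapter local list, a shared global list, and a back-pointer from each local entry to its global entry) can be sketched in plain userspace C. The names (`llun`, `glun`, `find_or_create`) and the fixed `WWID_LEN` are illustrative stand-ins for the driver's structures, and the caller is assumed to hold whatever lock guards both lists:

```c
#include <stdlib.h>
#include <string.h>

#define WWID_LEN 16

/* Global entry: state shared across all adapters (stand-in for glun_info). */
struct glun {
	unsigned char wwid[WWID_LEN];
	struct glun *next;
};

/* Local entry: per-adapter state plus a back-pointer to the global entry. */
struct llun {
	unsigned char wwid[WWID_LEN];
	struct glun *parent;
	struct llun *next;
};

static struct glun *lookup_global(struct glun *head, const unsigned char *wwid)
{
	for (; head; head = head->next)
		if (!memcmp(head->wwid, wwid, WWID_LEN))
			return head;
	return NULL;
}

static struct llun *lookup_local(struct llun *head, const unsigned char *wwid)
{
	for (; head; head = head->next)
		if (!memcmp(head->wwid, wwid, WWID_LEN))
			return head;
	return NULL;
}

/*
 * Find-or-create: reuse an existing local entry; otherwise allocate a new
 * local entry and link it to a (possibly freshly created) global entry.
 * The caller is assumed to hold the lock guarding both lists.
 */
static struct llun *find_or_create(struct llun **lluns, struct glun **gluns,
				   const unsigned char *wwid)
{
	struct llun *lli = lookup_local(*lluns, wwid);
	struct glun *gli;

	if (lli)
		return lli;

	lli = calloc(1, sizeof(*lli));
	if (!lli)
		return NULL;
	memcpy(lli->wwid, wwid, WWID_LEN);

	gli = lookup_global(*gluns, wwid);
	if (!gli) {
		gli = calloc(1, sizeof(*gli));
		if (!gli) {
			free(lli);
			return NULL;
		}
		memcpy(gli->wwid, wwid, WWID_LEN);
		gli->next = *gluns;
		*gluns = gli;
	}

	lli->parent = gli;
	lli->next = *lluns;
	*lluns = lli;
	return lli;
}
```

Two adapters that see the same WWID end up with distinct local entries whose `parent` pointers alias a single global entry, which is what lets the block allocation map be shared host-wide.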
-3970
drivers/scsi/cxlflash/main.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later
2 - /*
3 - * CXL Flash Device Driver
4 - *
5 - * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
6 - * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
7 - *
8 - * Copyright (C) 2015 IBM Corporation
9 - */
10 -
11 - #include <linux/delay.h>
12 - #include <linux/list.h>
13 - #include <linux/module.h>
14 - #include <linux/pci.h>
15 -
16 - #include <linux/unaligned.h>
17 -
18 - #include <scsi/scsi_cmnd.h>
19 - #include <scsi/scsi_host.h>
20 - #include <uapi/scsi/cxlflash_ioctl.h>
21 -
22 - #include "main.h"
23 - #include "sislite.h"
24 - #include "common.h"
25 -
26 - MODULE_DESCRIPTION(CXLFLASH_ADAPTER_NAME);
27 - MODULE_AUTHOR("Manoj N. Kumar <manoj@linux.vnet.ibm.com>");
28 - MODULE_AUTHOR("Matthew R. Ochs <mrochs@linux.vnet.ibm.com>");
29 - MODULE_LICENSE("GPL");
30 -
31 - static char *cxlflash_devnode(const struct device *dev, umode_t *mode);
32 - static const struct class cxlflash_class = {
33 - 	.name = "cxlflash",
34 - 	.devnode = cxlflash_devnode,
35 - };
36 -
37 - static u32 cxlflash_major;
38 - static DECLARE_BITMAP(cxlflash_minor, CXLFLASH_MAX_ADAPTERS);
39 -
40 - /**
41 - * process_cmd_err() - command error handler
42 - * @cmd: AFU command that experienced the error.
43 - * @scp: SCSI command associated with the AFU command in error.
44 - *
45 - * Translates error bits from AFU command to SCSI command results.
46 - */
47 - static void process_cmd_err(struct afu_cmd *cmd, struct scsi_cmnd *scp)
48 - {
49 - 	struct afu *afu = cmd->parent;
50 - 	struct cxlflash_cfg *cfg = afu->parent;
51 - 	struct device *dev = &cfg->dev->dev;
52 - 	struct sisl_ioasa *ioasa;
53 - 	u32 resid;
54 -
55 - 	ioasa = &(cmd->sa);
56 -
57 - 	if (ioasa->rc.flags & SISL_RC_FLAGS_UNDERRUN) {
58 - 		resid = ioasa->resid;
59 - 		scsi_set_resid(scp, resid);
60 - 		dev_dbg(dev, "%s: cmd underrun cmd = %p scp = %p, resid = %d\n",
61 - 			__func__, cmd, scp, resid);
62 - 	}
63 -
64 - 	if (ioasa->rc.flags & SISL_RC_FLAGS_OVERRUN) {
65 - 		dev_dbg(dev, "%s: cmd overrun cmd = %p scp = %p\n",
66 - 			__func__, cmd, scp);
67 - 		scp->result = (DID_ERROR << 16);
68 - 	}
69 -
70 - 	dev_dbg(dev, "%s: cmd failed afu_rc=%02x scsi_rc=%02x fc_rc=%02x "
71 - 		"afu_extra=%02x scsi_extra=%02x fc_extra=%02x\n", __func__,
72 - 		ioasa->rc.afu_rc, ioasa->rc.scsi_rc, ioasa->rc.fc_rc,
73 - 		ioasa->afu_extra, ioasa->scsi_extra, ioasa->fc_extra);
74 -
75 - 	if (ioasa->rc.scsi_rc) {
76 - 		/* We have a SCSI status */
77 - 		if (ioasa->rc.flags & SISL_RC_FLAGS_SENSE_VALID) {
78 - 			memcpy(scp->sense_buffer, ioasa->sense_data,
79 - 			       SISL_SENSE_DATA_LEN);
80 - 			scp->result = ioasa->rc.scsi_rc;
81 - 		} else
82 - 			scp->result = ioasa->rc.scsi_rc | (DID_ERROR << 16);
83 - 	}
84 -
85 - 	/*
86 - 	 * We encountered an error. Set scp->result based on nature
87 - 	 * of error.
88 - 	 */
89 - 	if (ioasa->rc.fc_rc) {
90 - 		/* We have an FC status */
91 - 		switch (ioasa->rc.fc_rc) {
92 - 		case SISL_FC_RC_LINKDOWN:
93 - 			scp->result = (DID_REQUEUE << 16);
94 - 			break;
95 - 		case SISL_FC_RC_RESID:
96 - 			/* This indicates an FCP resid underrun */
97 - 			if (!(ioasa->rc.flags & SISL_RC_FLAGS_OVERRUN)) {
98 - 				/* If the SISL_RC_FLAGS_OVERRUN flag was set,
99 - 				 * then we will handle this error elsewhere.
100 - 				 * If not then we must handle it here.
101 - 				 * This is probably an AFU bug.
102 - 				 */
103 - 				scp->result = (DID_ERROR << 16);
104 - 			}
105 - 			break;
106 - 		case SISL_FC_RC_RESIDERR:
107 - 			/* Resid mismatch between adapter and device */
108 - 		case SISL_FC_RC_TGTABORT:
109 - 		case SISL_FC_RC_ABORTOK:
110 - 		case SISL_FC_RC_ABORTFAIL:
111 - 		case SISL_FC_RC_NOLOGI:
112 - 		case SISL_FC_RC_ABORTPEND:
113 - 		case SISL_FC_RC_WRABORTPEND:
114 - 		case SISL_FC_RC_NOEXP:
115 - 		case SISL_FC_RC_INUSE:
116 - 			scp->result = (DID_ERROR << 16);
117 - 			break;
118 - 		}
119 - 	}
120 -
121 - 	if (ioasa->rc.afu_rc) {
122 - 		/* We have an AFU error */
123 - 		switch (ioasa->rc.afu_rc) {
124 - 		case SISL_AFU_RC_NO_CHANNELS:
125 - 			scp->result = (DID_NO_CONNECT << 16);
126 - 			break;
127 - 		case SISL_AFU_RC_DATA_DMA_ERR:
128 - 			switch (ioasa->afu_extra) {
129 - 			case SISL_AFU_DMA_ERR_PAGE_IN:
130 - 				/* Retry */
131 - 				scp->result = (DID_IMM_RETRY << 16);
132 - 				break;
133 - 			case SISL_AFU_DMA_ERR_INVALID_EA:
134 - 			default:
135 - 				scp->result = (DID_ERROR << 16);
136 - 			}
137 - 			break;
138 - 		case SISL_AFU_RC_OUT_OF_DATA_BUFS:
139 - 			/* Retry */
140 - 			scp->result = (DID_ERROR << 16);
141 - 			break;
142 - 		default:
143 - 			scp->result = (DID_ERROR << 16);
144 - 		}
145 - 	}
146 - }
147 -
148 - /**
149 - * cmd_complete() - command completion handler
150 - * @cmd: AFU command that has completed.
151 - *
152 - * For SCSI commands this routine prepares and submits commands that have
153 - * either completed or timed out to the SCSI stack. For internal commands
154 - * (TMF or AFU), this routine simply notifies the originator that the
155 - * command has completed.
156 - */
157 - static void cmd_complete(struct afu_cmd *cmd)
158 - {
159 - 	struct scsi_cmnd *scp;
160 - 	ulong lock_flags;
161 - 	struct afu *afu = cmd->parent;
162 - 	struct cxlflash_cfg *cfg = afu->parent;
163 - 	struct device *dev = &cfg->dev->dev;
164 - 	struct hwq *hwq = get_hwq(afu, cmd->hwq_index);
165 -
166 - 	spin_lock_irqsave(&hwq->hsq_slock, lock_flags);
167 - 	list_del(&cmd->list);
168 - 	spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags);
169 -
170 - 	if (cmd->scp) {
171 - 		scp = cmd->scp;
172 - 		if (unlikely(cmd->sa.ioasc))
173 - 			process_cmd_err(cmd, scp);
174 - 		else
175 - 			scp->result = (DID_OK << 16);
176 -
177 - 		dev_dbg_ratelimited(dev, "%s:scp=%p result=%08x ioasc=%08x\n",
178 - 				    __func__, scp, scp->result, cmd->sa.ioasc);
179 - 		scsi_done(scp);
180 - 	} else if (cmd->cmd_tmf) {
181 - 		spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
182 - 		cfg->tmf_active = false;
183 - 		wake_up_all_locked(&cfg->tmf_waitq);
184 - 		spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
185 - 	} else
186 - 		complete(&cmd->cevent);
187 - }
188 -
189 - /**
190 - * flush_pending_cmds() - flush all pending commands on this hardware queue
191 - * @hwq: Hardware queue to flush.
192 - *
193 - * The hardware send queue lock associated with this hardware queue must be
194 - * held when calling this routine.
195 - */
196 - static void flush_pending_cmds(struct hwq *hwq)
197 - {
198 - 	struct cxlflash_cfg *cfg = hwq->afu->parent;
199 - 	struct afu_cmd *cmd, *tmp;
200 - 	struct scsi_cmnd *scp;
201 - 	ulong lock_flags;
202 -
203 - 	list_for_each_entry_safe(cmd, tmp, &hwq->pending_cmds, list) {
204 - 		/* Bypass command when on a doneq, cmd_complete() will handle */
205 - 		if (!list_empty(&cmd->queue))
206 - 			continue;
207 -
208 - 		list_del(&cmd->list);
209 -
210 - 		if (cmd->scp) {
211 - 			scp = cmd->scp;
212 - 			scp->result = (DID_IMM_RETRY << 16);
213 - 			scsi_done(scp);
214 - 		} else {
215 - 			cmd->cmd_aborted = true;
216 -
217 - 			if (cmd->cmd_tmf) {
218 - 				spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
219 - 				cfg->tmf_active = false;
220 - 				wake_up_all_locked(&cfg->tmf_waitq);
221 - 				spin_unlock_irqrestore(&cfg->tmf_slock,
222 - 						       lock_flags);
223 - 			} else
224 - 				complete(&cmd->cevent);
225 - 		}
226 - 	}
227 - }
228 -
229 - /**
230 - * context_reset() - reset context via specified register
231 - * @hwq: Hardware queue owning the context to be reset.
232 - * @reset_reg: MMIO register to perform reset.
233 - *
234 - * When the reset is successful, the SISLite specification guarantees that
235 - * the AFU has aborted all currently pending I/O. Accordingly, these commands
236 - * must be flushed.
237 - *
238 - * Return: 0 on success, -errno on failure
239 - */
240 - static int context_reset(struct hwq *hwq, __be64 __iomem *reset_reg)
241 - {
242 - 	struct cxlflash_cfg *cfg = hwq->afu->parent;
243 - 	struct device *dev = &cfg->dev->dev;
244 - 	int rc = -ETIMEDOUT;
245 - 	int nretry = 0;
246 - 	u64 val = 0x1;
247 - 	ulong lock_flags;
248 -
249 - 	dev_dbg(dev, "%s: hwq=%p\n", __func__, hwq);
250 -
251 - 	spin_lock_irqsave(&hwq->hsq_slock, lock_flags);
252 -
253 - 	writeq_be(val, reset_reg);
254 - 	do {
255 - 		val = readq_be(reset_reg);
256 - 		if ((val & 0x1) == 0x0) {
257 - 			rc = 0;
258 - 			break;
259 - 		}
260 -
261 - 		/* Double delay each time */
262 - 		udelay(1 << nretry);
263 - 	} while (nretry++ < MC_ROOM_RETRY_CNT);
264 -
265 - 	if (!rc)
266 - 		flush_pending_cmds(hwq);
267 -
268 - 	spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags);
269 -
270 - 	dev_dbg(dev, "%s: returning rc=%d, val=%016llx nretry=%d\n",
271 - 		__func__, rc, val, nretry);
272 - 	return rc;
273 - }
274 -
275 - /**
276 - * context_reset_ioarrin() - reset context via IOARRIN register
277 - * @hwq: Hardware queue owning the context to be reset.
278 - *
279 - * Return: 0 on success, -errno on failure
280 - */
281 - static int context_reset_ioarrin(struct hwq *hwq)
282 - {
283 - 	return context_reset(hwq, &hwq->host_map->ioarrin);
284 - }
285 -
286 - /**
287 - * context_reset_sq() - reset context via SQ_CONTEXT_RESET register
288 - * @hwq: Hardware queue owning the context to be reset.
289 - *
290 - * Return: 0 on success, -errno on failure
291 - */
292 - static int context_reset_sq(struct hwq *hwq)
293 - {
294 - 	return context_reset(hwq, &hwq->host_map->sq_ctx_reset);
295 - }
296 -
297 - /**
298 - * send_cmd_ioarrin() - sends an AFU command via IOARRIN register
299 - * @afu: AFU associated with the host.
300 - * @cmd: AFU command to send.
301 - *
302 - * Return:
303 - * 0 on success, SCSI_MLQUEUE_HOST_BUSY on failure
304 - */
305 - static int send_cmd_ioarrin(struct afu *afu, struct afu_cmd *cmd)
306 - {
307 - 	struct cxlflash_cfg *cfg = afu->parent;
308 - 	struct device *dev = &cfg->dev->dev;
309 - 	struct hwq *hwq = get_hwq(afu, cmd->hwq_index);
310 - 	int rc = 0;
311 - 	s64 room;
312 - 	ulong lock_flags;
313 -
314 - 	/*
315 - 	 * To avoid the performance penalty of MMIO, spread the update of
316 - 	 * 'room' over multiple commands.
317 - 	 */
318 - 	spin_lock_irqsave(&hwq->hsq_slock, lock_flags);
319 - 	if (--hwq->room < 0) {
320 - 		room = readq_be(&hwq->host_map->cmd_room);
321 - 		if (room <= 0) {
322 - 			dev_dbg_ratelimited(dev, "%s: no cmd_room to send "
323 - 					    "0x%02X, room=0x%016llX\n",
324 - 					    __func__, cmd->rcb.cdb[0], room);
325 - 			hwq->room = 0;
326 - 			rc = SCSI_MLQUEUE_HOST_BUSY;
327 - 			goto out;
328 - 		}
329 - 		hwq->room = room - 1;
330 - 	}
331 -
332 - 	list_add(&cmd->list, &hwq->pending_cmds);
333 - 	writeq_be((u64)&cmd->rcb, &hwq->host_map->ioarrin);
334 - out:
335 - 	spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags);
336 - 	dev_dbg_ratelimited(dev, "%s: cmd=%p len=%u ea=%016llx rc=%d\n",
337 - 		__func__, cmd, cmd->rcb.data_len, cmd->rcb.data_ea, rc);
338 - 	return rc;
339 - }
340 -
341 - /**
342 - * send_cmd_sq() - sends an AFU command via SQ ring
343 - * @afu: AFU associated with the host.
344 - * @cmd: AFU command to send.
345 - *
346 - * Return:
347 - * 0 on success, SCSI_MLQUEUE_HOST_BUSY on failure
348 - */
349 - static int send_cmd_sq(struct afu *afu, struct afu_cmd *cmd)
350 - {
351 - 	struct cxlflash_cfg *cfg = afu->parent;
352 - 	struct device *dev = &cfg->dev->dev;
353 - 	struct hwq *hwq = get_hwq(afu, cmd->hwq_index);
354 - 	int rc = 0;
355 - 	int newval;
356 - 	ulong lock_flags;
357 -
358 - 	newval = atomic_dec_if_positive(&hwq->hsq_credits);
359 - 	if (newval <= 0) {
360 - 		rc = SCSI_MLQUEUE_HOST_BUSY;
361 - 		goto out;
362 - 	}
363 -
364 - 	cmd->rcb.ioasa = &cmd->sa;
365 -
366 - 	spin_lock_irqsave(&hwq->hsq_slock, lock_flags);
367 -
368 - 	*hwq->hsq_curr = cmd->rcb;
369 - 	if (hwq->hsq_curr < hwq->hsq_end)
370 - 		hwq->hsq_curr++;
371 - 	else
372 - 		hwq->hsq_curr = hwq->hsq_start;
373 -
374 - 	list_add(&cmd->list, &hwq->pending_cmds);
375 - 	writeq_be((u64)hwq->hsq_curr, &hwq->host_map->sq_tail);
376 -
377 - 	spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags);
378 - out:
379 - 	dev_dbg(dev, "%s: cmd=%p len=%u ea=%016llx ioasa=%p rc=%d curr=%p "
380 - 		"head=%016llx tail=%016llx\n", __func__, cmd, cmd->rcb.data_len,
381 - 		cmd->rcb.data_ea, cmd->rcb.ioasa, rc, hwq->hsq_curr,
382 - 		readq_be(&hwq->host_map->sq_head),
383 - 		readq_be(&hwq->host_map->sq_tail));
384 - 	return rc;
385 - }
386 -
387 - /**
388 - * wait_resp() - polls for a response or timeout to a sent AFU command
389 - * @afu: AFU associated with the host.
390 - * @cmd: AFU command that was sent.
391 - *
392 - * Return: 0 on success, -errno on failure
393 - */
394 - static int wait_resp(struct afu *afu, struct afu_cmd *cmd)
395 - {
396 - 	struct cxlflash_cfg *cfg = afu->parent;
397 - 	struct device *dev = &cfg->dev->dev;
398 - 	int rc = 0;
399 - 	ulong timeout = msecs_to_jiffies(cmd->rcb.timeout * 2 * 1000);
400 -
401 - 	timeout = wait_for_completion_timeout(&cmd->cevent, timeout);
402 - 	if (!timeout)
403 - 		rc = -ETIMEDOUT;
404 -
405 - 	if (cmd->cmd_aborted)
406 - 		rc = -EAGAIN;
407 -
408 - 	if (unlikely(cmd->sa.ioasc != 0)) {
409 - 		dev_err(dev, "%s: cmd %02x failed, ioasc=%08x\n",
410 - 			__func__, cmd->rcb.cdb[0], cmd->sa.ioasc);
411 - 		rc = -EIO;
412 - 	}
413 -
414 - 	return rc;
415 - }
416 -
417 - /**
418 - * cmd_to_target_hwq() - selects a target hardware queue for a SCSI command
419 - * @host: SCSI host associated with device.
420 - * @scp: SCSI command to send.
421 - * @afu: AFU associated with the host.
422 - *
423 - * Hashes a command based upon the hardware queue mode.
424 - *
425 - * Return: Trusted index of target hardware queue
426 - */
427 - static u32 cmd_to_target_hwq(struct Scsi_Host *host, struct scsi_cmnd *scp,
428 - 			     struct afu *afu)
429 - {
430 - 	u32 tag;
431 - 	u32 hwq = 0;
432 -
433 - 	if (afu->num_hwqs == 1)
434 - 		return 0;
435 -
436 - 	switch (afu->hwq_mode) {
437 - 	case HWQ_MODE_RR:
438 - 		hwq = afu->hwq_rr_count++ % afu->num_hwqs;
439 - 		break;
440 - 	case HWQ_MODE_TAG:
441 - 		tag = blk_mq_unique_tag(scsi_cmd_to_rq(scp));
442 - 		hwq = blk_mq_unique_tag_to_hwq(tag);
443 - 		break;
444 - 	case HWQ_MODE_CPU:
445 - 		hwq = smp_processor_id() % afu->num_hwqs;
446 - 		break;
447 - 	default:
448 - 		WARN_ON_ONCE(1);
449 - 	}
450 -
451 - 	return hwq;
452 - }
453 -
454 - /**
455 - * send_tmf() - sends a Task Management Function (TMF)
456 - * @cfg: Internal structure associated with the host.
457 - * @sdev: SCSI device destined for TMF.
458 - * @tmfcmd: TMF command to send.
459 - *
460 - * Return:
461 - * 0 on success, SCSI_MLQUEUE_HOST_BUSY or -errno on failure
462 - */
463 - static int send_tmf(struct cxlflash_cfg *cfg, struct scsi_device *sdev,
464 - 		    u64 tmfcmd)
465 - {
466 - 	struct afu *afu = cfg->afu;
467 - 	struct afu_cmd *cmd = NULL;
468 - 	struct device *dev = &cfg->dev->dev;
469 - 	struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ);
470 - 	bool needs_deletion = false;
471 - 	char *buf = NULL;
472 - 	ulong lock_flags;
473 - 	int rc = 0;
474 - 	ulong to;
475 -
476 - 	buf = kzalloc(sizeof(*cmd) + __alignof__(*cmd) - 1, GFP_KERNEL);
477 - 	if (unlikely(!buf)) {
478 - 		dev_err(dev, "%s: no memory for command\n", __func__);
479 - 		rc = -ENOMEM;
480 - 		goto out;
481 - 	}
482 -
483 - 	cmd = (struct afu_cmd *)PTR_ALIGN(buf, __alignof__(*cmd));
484 - 	INIT_LIST_HEAD(&cmd->queue);
485 -
486 - 	/* When Task Management Function is active do not send another */
487 - 	spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
488 - 	if (cfg->tmf_active)
489 - 		wait_event_interruptible_lock_irq(cfg->tmf_waitq,
490 - 						  !cfg->tmf_active,
491 - 						  cfg->tmf_slock);
492 - 	cfg->tmf_active = true;
493 - 	spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
494 -
495 - 	cmd->parent = afu;
496 - 	cmd->cmd_tmf = true;
497 - 	cmd->hwq_index = hwq->index;
498 -
499 - 	cmd->rcb.ctx_id = hwq->ctx_hndl;
500 - 	cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
501 - 	cmd->rcb.port_sel = CHAN2PORTMASK(sdev->channel);
502 - 	cmd->rcb.lun_id = lun_to_lunid(sdev->lun);
503 - 	cmd->rcb.req_flags = (SISL_REQ_FLAGS_PORT_LUN_ID |
504 - 			      SISL_REQ_FLAGS_SUP_UNDERRUN |
505 - 			      SISL_REQ_FLAGS_TMF_CMD);
506 - 	memcpy(cmd->rcb.cdb, &tmfcmd, sizeof(tmfcmd));
507 -
508 - 	rc = afu->send_cmd(afu, cmd);
509 - 	if (unlikely(rc)) {
510 - 		spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
511 - 		cfg->tmf_active = false;
512 - 		spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
513 - 		goto out;
514 - 	}
515 -
516 - 	spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
517 - 	to = msecs_to_jiffies(5000);
518 - 	to = wait_event_interruptible_lock_irq_timeout(cfg->tmf_waitq,
519 - 						       !cfg->tmf_active,
520 - 						       cfg->tmf_slock,
521 - 						       to);
522 - 	if (!to) {
523 - 		dev_err(dev, "%s: TMF timed out\n", __func__);
524 - 		rc = -ETIMEDOUT;
525 - 		needs_deletion = true;
526 - 	} else if (cmd->cmd_aborted) {
527 - 		dev_err(dev, "%s: TMF aborted\n", __func__);
528 - 		rc = -EAGAIN;
529 - 	} else if (cmd->sa.ioasc) {
530 - 		dev_err(dev, "%s: TMF failed ioasc=%08x\n",
531 - 			__func__, cmd->sa.ioasc);
532 - 		rc = -EIO;
533 - 	}
534 - 	cfg->tmf_active = false;
535 - 	spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
536 -
537 - 	if (needs_deletion) {
538 - 		spin_lock_irqsave(&hwq->hsq_slock, lock_flags);
539 - 		list_del(&cmd->list);
540 - 		spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags);
541 - 	}
542 - out:
543 - 	kfree(buf);
544 - 	return rc;
545 - }
546 -
547 - /**
548 - * cxlflash_driver_info() - information handler for this host driver
549 - * @host: SCSI host associated with device.
550 - *
551 - * Return: A string describing the device.
552 - */
553 - static const char *cxlflash_driver_info(struct Scsi_Host *host)
554 - {
555 - 	return CXLFLASH_ADAPTER_NAME;
556 - }
557 -
558 - /**
559 - * cxlflash_queuecommand() - sends a mid-layer request
560 - * @host: SCSI host associated with device.
561 - * @scp: SCSI command to send.
562 - *
563 - * Return: 0 on success, SCSI_MLQUEUE_HOST_BUSY on failure
564 - */
565 - static int cxlflash_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scp)
566 - {
567 - 	struct cxlflash_cfg *cfg = shost_priv(host);
568 - 	struct afu *afu = cfg->afu;
569 - 	struct device *dev = &cfg->dev->dev;
570 - 	struct afu_cmd *cmd = sc_to_afuci(scp);
571 - 	struct scatterlist *sg = scsi_sglist(scp);
572 - 	int hwq_index = cmd_to_target_hwq(host, scp, afu);
573 - 	struct hwq *hwq = get_hwq(afu, hwq_index);
574 - 	u16 req_flags = SISL_REQ_FLAGS_SUP_UNDERRUN;
575 - 	ulong lock_flags;
576 - 	int rc = 0;
577 -
578 - 	dev_dbg_ratelimited(dev, "%s: (scp=%p) %d/%d/%d/%llu "
579 - 			    "cdb=(%08x-%08x-%08x-%08x)\n",
580 - 			    __func__, scp, host->host_no, scp->device->channel,
581 - 			    scp->device->id, scp->device->lun,
582 - 			    get_unaligned_be32(&((u32 *)scp->cmnd)[0]),
583 - 			    get_unaligned_be32(&((u32 *)scp->cmnd)[1]),
584 - 			    get_unaligned_be32(&((u32 *)scp->cmnd)[2]),
585 - 			    get_unaligned_be32(&((u32 *)scp->cmnd)[3]));
586 -
587 - 	/*
588 - 	 * If a Task Management Function is active, wait for it to complete
589 - 	 * before continuing with regular commands.
590 - 	 */
591 - 	spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
592 - 	if (cfg->tmf_active) {
593 - 		spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
594 - 		rc = SCSI_MLQUEUE_HOST_BUSY;
595 - 		goto out;
596 - 	}
597 - 	spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
598 -
599 - 	switch (cfg->state) {
600 - 	case STATE_PROBING:
601 - 	case STATE_PROBED:
602 - 	case STATE_RESET:
603 - 		dev_dbg_ratelimited(dev, "%s: device is in reset\n", __func__);
604 - 		rc = SCSI_MLQUEUE_HOST_BUSY;
605 - 		goto out;
606 - 	case STATE_FAILTERM:
607 - 		dev_dbg_ratelimited(dev, "%s: device has failed\n", __func__);
608 - 		scp->result = (DID_NO_CONNECT << 16);
609 - 		scsi_done(scp);
610 - 		rc = 0;
611 - 		goto out;
612 - 	default:
613 - 		atomic_inc(&afu->cmds_active);
614 - 		break;
615 - 	}
616 -
617 - 	if (likely(sg)) {
618 - 		cmd->rcb.data_len = sg->length;
619 - 		cmd->rcb.data_ea = (uintptr_t)sg_virt(sg);
620 - 	}
621 -
622 - 	cmd->scp = scp;
623 - 	cmd->parent = afu;
624 - 	cmd->hwq_index = hwq_index;
625 -
626 - 	cmd->sa.ioasc = 0;
627 - 	cmd->rcb.ctx_id = hwq->ctx_hndl;
628 - 	cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
629 - 	cmd->rcb.port_sel = CHAN2PORTMASK(scp->device->channel);
630 - 	cmd->rcb.lun_id = lun_to_lunid(scp->device->lun);
631 -
632 - 	if (scp->sc_data_direction == DMA_TO_DEVICE)
633 - 		req_flags |= SISL_REQ_FLAGS_HOST_WRITE;
634 -
635 - 	cmd->rcb.req_flags = req_flags;
636 - 	memcpy(cmd->rcb.cdb, scp->cmnd, sizeof(cmd->rcb.cdb));
637 -
638 - 	rc = afu->send_cmd(afu, cmd);
639 - 	atomic_dec(&afu->cmds_active);
640 - out:
641 - 	return rc;
642 - }
643 -
644 - /**
645 - * cxlflash_wait_for_pci_err_recovery() - wait for error recovery during probe
646 - * @cfg: Internal structure associated with the host.
647 - */
648 - static void cxlflash_wait_for_pci_err_recovery(struct cxlflash_cfg *cfg)
649 - {
650 - 	struct pci_dev *pdev = cfg->dev;
651 -
652 - 	if (pci_channel_offline(pdev))
653 - 		wait_event_timeout(cfg->reset_waitq,
654 - 				   !pci_channel_offline(pdev),
655 - 				   CXLFLASH_PCI_ERROR_RECOVERY_TIMEOUT);
656 - }
657 -
658 - /**
659 - * free_mem() - free memory associated with the AFU
660 - * @cfg: Internal structure associated with the host.
661 - */
662 - static void free_mem(struct cxlflash_cfg *cfg)
663 - {
664 - 	struct afu *afu = cfg->afu;
665 -
666 - 	if (cfg->afu) {
667 - 		free_pages((ulong)afu, get_order(sizeof(struct afu)));
668 - 		cfg->afu = NULL;
669 - 	}
670 - }
671 -
672 - /**
673 - * cxlflash_reset_sync() - synchronizing point for asynchronous resets
674 - * @cfg: Internal structure associated with the host.
675 - */
676 - static void cxlflash_reset_sync(struct cxlflash_cfg *cfg)
677 - {
678 - 	if (cfg->async_reset_cookie == 0)
679 - 		return;
680 -
681 - 	/* Wait until all async calls prior to this cookie have completed */
682 - 	async_synchronize_cookie(cfg->async_reset_cookie + 1);
683 - 	cfg->async_reset_cookie = 0;
684 - }
685 -
686 - /**
687 - * stop_afu() - stops the AFU command timers and unmaps the MMIO space
688 - * @cfg: Internal structure associated with the host.
689 - *
690 - * Safe to call with AFU in a partially allocated/initialized state.
691 - *
692 - * Cancels scheduled worker threads, waits for any active internal AFU
693 - * commands to timeout, disables IRQ polling and then unmaps the MMIO space.
694 - */
695 - static void stop_afu(struct cxlflash_cfg *cfg)
696 - {
697 - 	struct afu *afu = cfg->afu;
698 - 	struct hwq *hwq;
699 - 	int i;
700 -
701 - 	cancel_work_sync(&cfg->work_q);
702 - 	if (!current_is_async())
703 - 		cxlflash_reset_sync(cfg);
704 -
705 - 	if (likely(afu)) {
706 - 		while (atomic_read(&afu->cmds_active))
707 - 			ssleep(1);
708 -
709 - 		if (afu_is_irqpoll_enabled(afu)) {
710 - 			for (i = 0; i < afu->num_hwqs; i++) {
711 - 				hwq = get_hwq(afu, i);
712 -
713 - 				irq_poll_disable(&hwq->irqpoll);
714 - 			}
715 - 		}
716 -
717 - 		if (likely(afu->afu_map)) {
718 - 			cfg->ops->psa_unmap(afu->afu_map);
719 - 			afu->afu_map = NULL;
720 - 		}
721 - 	}
722 - }
723 -
724 - /**
725 - * term_intr() - disables all AFU interrupts
726 - * @cfg: Internal structure associated with the host.
727 - * @level: Depth of allocation, where to begin waterfall tear down.
728 - * @index: Index of the hardware queue.
729 - *
730 - * Safe to call with AFU/MC in partially allocated/initialized state.
731 - */
732 - static void term_intr(struct cxlflash_cfg *cfg, enum undo_level level,
733 - 		      u32 index)
734 - {
735 - 	struct afu *afu = cfg->afu;
736 - 	struct device *dev = &cfg->dev->dev;
737 - 	struct hwq *hwq;
738 -
739 - 	if (!afu) {
740 - 		dev_err(dev, "%s: returning with NULL afu\n", __func__);
741 - 		return;
742 - 	}
743 -
744 - 	hwq = get_hwq(afu, index);
745 -
746 - 	if (!hwq->ctx_cookie) {
747 - 		dev_err(dev, "%s: returning with NULL MC\n", __func__);
748 - 		return;
749 - 	}
750 -
751 - 	switch (level) {
752 - 	case UNMAP_THREE:
753 - 		/* SISL_MSI_ASYNC_ERROR is setup only for the primary HWQ */
754 - 		if (index == PRIMARY_HWQ)
755 - 			cfg->ops->unmap_afu_irq(hwq->ctx_cookie, 3, hwq);
756 - 		fallthrough;
757 - 	case UNMAP_TWO:
758 - 		cfg->ops->unmap_afu_irq(hwq->ctx_cookie, 2, hwq);
759 - 		fallthrough;
760 - 	case UNMAP_ONE:
761 - 		cfg->ops->unmap_afu_irq(hwq->ctx_cookie, 1, hwq);
762 - 		fallthrough;
763 - 	case FREE_IRQ:
764 - 		cfg->ops->free_afu_irqs(hwq->ctx_cookie);
765 - 		fallthrough;
766 - 	case UNDO_NOOP:
767 - 		/* No action required */
768 - 		break;
769 - 	}
770 - }
771 -
772 - /**
773 - * term_mc() - terminates the master context
774 - * @cfg: Internal structure associated with the host.
775 - * @index: Index of the hardware queue.
776 - *
777 - * Safe to call with AFU/MC in partially allocated/initialized state.
778 - */
779 - static void term_mc(struct cxlflash_cfg *cfg, u32 index)
780 - {
781 - 	struct afu *afu = cfg->afu;
782 - 	struct device *dev = &cfg->dev->dev;
783 - 	struct hwq *hwq;
784 - 	ulong lock_flags;
785 -
786 - 	if (!afu) {
787 - 		dev_err(dev, "%s: returning with NULL afu\n", __func__);
788 - 		return;
789 - 	}
790 -
791 - 	hwq = get_hwq(afu, index);
792 -
793 - 	if (!hwq->ctx_cookie) {
794 - 		dev_err(dev, "%s: returning with NULL MC\n", __func__);
795 - 		return;
796 - 	}
797 -
798 - 	WARN_ON(cfg->ops->stop_context(hwq->ctx_cookie));
799 - 	if (index != PRIMARY_HWQ)
800 - 		WARN_ON(cfg->ops->release_context(hwq->ctx_cookie));
801 - 	hwq->ctx_cookie = NULL;
802 -
803 - 	spin_lock_irqsave(&hwq->hrrq_slock, lock_flags);
804 - 	hwq->hrrq_online = false;
805 - 	spin_unlock_irqrestore(&hwq->hrrq_slock, lock_flags);
806 -
807 - 	spin_lock_irqsave(&hwq->hsq_slock, lock_flags);
808 - 	flush_pending_cmds(hwq);
809 - 	spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags);
810 - }
811 -
812 - /**
813 - * term_afu() - terminates the AFU
814 - * @cfg: Internal structure associated with the host.
815 - *
816 - * Safe to call with AFU/MC in partially allocated/initialized state.
817 - */
818 - static void term_afu(struct cxlflash_cfg *cfg)
819 - {
820 - 	struct device *dev = &cfg->dev->dev;
821 - 	int k;
822 -
823 - 	/*
824 - 	 * Tear down is carefully orchestrated to ensure
825 - 	 * no interrupts can come in when the problem state
826 - 	 * area is unmapped.
827 - 	 *
828 - 	 * 1) Disable all AFU interrupts for each master
829 - 	 * 2) Unmap the problem state area
830 - 	 * 3) Stop each master context
831 - 	 */
832 - 	for (k = cfg->afu->num_hwqs - 1; k >= 0; k--)
833 - 		term_intr(cfg, UNMAP_THREE, k);
834 -
835 - 	stop_afu(cfg);
836 -
837 - 	for (k = cfg->afu->num_hwqs - 1; k >= 0; k--)
838 - 		term_mc(cfg, k);
839 -
840 - 	dev_dbg(dev, "%s: returning\n", __func__);
841 - }
842 -
843 - /**
844 - * notify_shutdown() - notifies device of pending shutdown
845 - * @cfg: Internal structure associated with the host.
846 - * @wait: Whether to wait for shutdown processing to complete.
847 - *
848 - * This function will notify the AFU that the adapter is being shutdown
849 - * and will wait for shutdown processing to complete if wait is true.
850 - * This notification should flush pending I/Os to the device and halt
851 - * further I/Os until the next AFU reset is issued and device restarted.
852 - */
853 - static void notify_shutdown(struct cxlflash_cfg *cfg, bool wait)
854 - {
855 - 	struct afu *afu = cfg->afu;
856 - 	struct device *dev = &cfg->dev->dev;
857 - 	struct dev_dependent_vals *ddv;
858 - 	__be64 __iomem *fc_port_regs;
859 - 	u64 reg, status;
860 - 	int i, retry_cnt = 0;
861 -
862 - 	ddv = (struct dev_dependent_vals *)cfg->dev_id->driver_data;
863 - 	if (!(ddv->flags & CXLFLASH_NOTIFY_SHUTDOWN))
864 - 		return;
865 -
866 - 	if (!afu || !afu->afu_map) {
867 - 		dev_dbg(dev, "%s: Problem state area not mapped\n", __func__);
868 - 		return;
869 - 	}
870 -
871 - 	/* Notify AFU */
872 - 	for (i = 0; i < cfg->num_fc_ports; i++) {
873 - 		fc_port_regs = get_fc_port_regs(cfg, i);
874 -
875 - 		reg = readq_be(&fc_port_regs[FC_CONFIG2 / 8]);
876 - 		reg |= SISL_FC_SHUTDOWN_NORMAL;
877 - 		writeq_be(reg, &fc_port_regs[FC_CONFIG2 / 8]);
878 - 	}
879 -
880 - 	if (!wait)
881 - 		return;
882 -
883 - 	/* Wait up to 1.5 seconds for shutdown processing to complete */
884 - 	for (i = 0; i < cfg->num_fc_ports; i++) {
885 - 		fc_port_regs = get_fc_port_regs(cfg, i);
886 - 		retry_cnt = 0;
887 -
888 - 		while (true) {
889 - 			status = readq_be(&fc_port_regs[FC_STATUS / 8]);
890 - 			if (status & SISL_STATUS_SHUTDOWN_COMPLETE)
891 - 				break;
892 - 			if (++retry_cnt >= MC_RETRY_CNT) {
893 - 				dev_dbg(dev, "%s: port %d shutdown processing "
894 - 					"not yet completed\n", __func__, i);
895 - 				break;
896 - 			}
897 - 			msleep(100 * retry_cnt);
898 - 		}
899 - 	}
900 - }
901 -
902 - /**
903 - * cxlflash_get_minor() - gets the first available minor number
904 - *
905 - * Return: Unique minor number that can be used to create the character device.
906 - */
907 - static int cxlflash_get_minor(void)
908 - {
909 - 	int minor;
910 - 	long bit;
911 -
912 - 	bit = find_first_zero_bit(cxlflash_minor, CXLFLASH_MAX_ADAPTERS);
913 - 	if (bit >= CXLFLASH_MAX_ADAPTERS)
914 - 		return -1;
915 -
916 - 	minor = bit & MINORMASK;
917 - 	set_bit(minor, cxlflash_minor);
918 - 	return minor;
919 - }
920 -
921 - /**
922 - * cxlflash_put_minor() - releases the minor number
923 - * @minor: Minor number that is no longer needed.
924 - */
925 - static void cxlflash_put_minor(int minor)
926 - {
927 - 	clear_bit(minor, cxlflash_minor);
928 - }
929 -
930 - /**
931 - * cxlflash_release_chrdev() - release the character device for the host
932 - * @cfg: Internal structure associated with the host.
933 - */
934 - static void cxlflash_release_chrdev(struct cxlflash_cfg *cfg)
935 - {
936 - 	device_unregister(cfg->chardev);
937 - 	cfg->chardev = NULL;
938 - 	cdev_del(&cfg->cdev);
939 - 	cxlflash_put_minor(MINOR(cfg->cdev.dev));
940 - }
941 -
942 - /**
943 - * cxlflash_remove() - PCI entry point to tear down host
944 - * @pdev: PCI device associated with the host.
945 - *
946 - * Safe to use as a cleanup in partially allocated/initialized state. Note that
947 - * the reset_waitq is flushed as part of the stop/termination of user contexts.
948 -  */
949 - static void cxlflash_remove(struct pci_dev *pdev)
950 - {
951 - 	struct cxlflash_cfg *cfg = pci_get_drvdata(pdev);
952 - 	struct device *dev = &pdev->dev;
953 - 	ulong lock_flags;
954 -
955 - 	if (!pci_is_enabled(pdev)) {
956 - 		dev_dbg(dev, "%s: Device is disabled\n", __func__);
957 - 		return;
958 - 	}
959 -
960 - 	/* Yield to running recovery threads before continuing with remove */
961 - 	wait_event(cfg->reset_waitq, cfg->state != STATE_RESET &&
962 - 				     cfg->state != STATE_PROBING);
963 - 	spin_lock_irqsave(&cfg->tmf_slock, lock_flags);
964 - 	if (cfg->tmf_active)
965 - 		wait_event_interruptible_lock_irq(cfg->tmf_waitq,
966 - 						  !cfg->tmf_active,
967 - 						  cfg->tmf_slock);
968 - 	spin_unlock_irqrestore(&cfg->tmf_slock, lock_flags);
969 -
970 - 	/* Notify AFU and wait for shutdown processing to complete */
971 - 	notify_shutdown(cfg, true);
972 -
973 - 	cfg->state = STATE_FAILTERM;
974 - 	cxlflash_stop_term_user_contexts(cfg);
975 -
976 - 	switch (cfg->init_state) {
977 - 	case INIT_STATE_CDEV:
978 - 		cxlflash_release_chrdev(cfg);
979 - 		fallthrough;
980 - 	case INIT_STATE_SCSI:
981 - 		cxlflash_term_local_luns(cfg);
982 - 		scsi_remove_host(cfg->host);
983 - 		fallthrough;
984 - 	case INIT_STATE_AFU:
985 - 		term_afu(cfg);
986 - 		fallthrough;
987 - 	case INIT_STATE_PCI:
988 - 		cfg->ops->destroy_afu(cfg->afu_cookie);
989 - 		pci_disable_device(pdev);
990 - 		fallthrough;
991 - 	case INIT_STATE_NONE:
992 - 		free_mem(cfg);
993 - 		scsi_host_put(cfg->host);
994 - 		break;
995 - 	}
996 -
997 - 	dev_dbg(dev, "%s: returning\n", __func__);
998 - }
999 -
1000 - /**
1001 -  * alloc_mem() - allocates the AFU and its command pool
1002 -  * @cfg:	Internal structure associated with the host.
1003 -  *
1004 -  * A partially allocated state remains on failure.
1005 -  *
1006 -  * Return:
1007 -  *	0 on success
1008 -  *	-ENOMEM on failure to allocate memory
1009 -  */
1010 - static int alloc_mem(struct cxlflash_cfg *cfg)
1011 - {
1012 - 	int rc = 0;
1013 - 	struct device *dev = &cfg->dev->dev;
1014 -
1015 - 	/* AFU is ~28k, i.e. only one 64k page or up to seven 4k pages */
1016 - 	cfg->afu = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
1017 - 					    get_order(sizeof(struct afu)));
1018 - 	if (unlikely(!cfg->afu)) {
1019 - 		dev_err(dev, "%s: cannot get %d free pages\n",
1020 - 			__func__, get_order(sizeof(struct afu)));
1021 - 		rc = -ENOMEM;
1022 - 		goto out;
1023 - 	}
1024 - 	cfg->afu->parent = cfg;
1025 - 	cfg->afu->desired_hwqs = CXLFLASH_DEF_HWQS;
1026 - 	cfg->afu->afu_map = NULL;
1027 - out:
1028 - 	return rc;
1029 - }
1030 -
1031 - /**
1032 -  * init_pci() - initializes the host as a PCI device
1033 -  * @cfg:	Internal structure associated with the host.
1034 -  *
1035 -  * Return: 0 on success, -errno on failure
1036 -  */
1037 - static int init_pci(struct cxlflash_cfg *cfg)
1038 - {
1039 - 	struct pci_dev *pdev = cfg->dev;
1040 - 	struct device *dev = &cfg->dev->dev;
1041 - 	int rc = 0;
1042 -
1043 - 	rc = pci_enable_device(pdev);
1044 - 	if (rc || pci_channel_offline(pdev)) {
1045 - 		if (pci_channel_offline(pdev)) {
1046 - 			cxlflash_wait_for_pci_err_recovery(cfg);
1047 - 			rc = pci_enable_device(pdev);
1048 - 		}
1049 -
1050 - 		if (rc) {
1051 - 			dev_err(dev, "%s: Cannot enable adapter\n", __func__);
1052 - 			cxlflash_wait_for_pci_err_recovery(cfg);
1053 - 			goto out;
1054 - 		}
1055 - 	}
1056 -
1057 - out:
1058 - 	dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
1059 - 	return rc;
1060 - }
1061 -
1062 - /**
1063 -  * init_scsi() - adds the host to the SCSI stack and kicks off host scan
1064 -  * @cfg:	Internal structure associated with the host.
1065 -  *
1066 -  * Return: 0 on success, -errno on failure
1067 -  */
1068 - static int init_scsi(struct cxlflash_cfg *cfg)
1069 - {
1070 - 	struct pci_dev *pdev = cfg->dev;
1071 - 	struct device *dev = &cfg->dev->dev;
1072 - 	int rc = 0;
1073 -
1074 - 	rc = scsi_add_host(cfg->host, &pdev->dev);
1075 - 	if (rc) {
1076 - 		dev_err(dev, "%s: scsi_add_host failed rc=%d\n", __func__, rc);
1077 - 		goto out;
1078 - 	}
1079 -
1080 - 	scsi_scan_host(cfg->host);
1081 -
1082 - out:
1083 - 	dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
1084 - 	return rc;
1085 - }
1086 -
1087 - /**
1088 -  * set_port_online() - transitions the specified host FC port to online state
1089 -  * @fc_regs:	Top of MMIO region defined for specified port.
1090 -  *
1091 -  * The provided MMIO region must be mapped prior to call. Online state means
1092 -  * that the FC link layer has synced, completed the handshaking process, and
1093 -  * is ready for login to start.
1094 -  */
1095 - static void set_port_online(__be64 __iomem *fc_regs)
1096 - {
1097 - 	u64 cmdcfg;
1098 -
1099 - 	cmdcfg = readq_be(&fc_regs[FC_MTIP_CMDCONFIG / 8]);
1100 - 	cmdcfg &= (~FC_MTIP_CMDCONFIG_OFFLINE);	/* clear OFF_LINE */
1101 - 	cmdcfg |= (FC_MTIP_CMDCONFIG_ONLINE);	/* set ON_LINE */
1102 - 	writeq_be(cmdcfg, &fc_regs[FC_MTIP_CMDCONFIG / 8]);
1103 - }
1104 -
1105 - /**
1106 -  * set_port_offline() - transitions the specified host FC port to offline state
1107 -  * @fc_regs:	Top of MMIO region defined for specified port.
1108 -  *
1109 -  * The provided MMIO region must be mapped prior to call.
1110 -  */
1111 - static void set_port_offline(__be64 __iomem *fc_regs)
1112 - {
1113 - 	u64 cmdcfg;
1114 -
1115 - 	cmdcfg = readq_be(&fc_regs[FC_MTIP_CMDCONFIG / 8]);
1116 - 	cmdcfg &= (~FC_MTIP_CMDCONFIG_ONLINE);	/* clear ON_LINE */
1117 - 	cmdcfg |= (FC_MTIP_CMDCONFIG_OFFLINE);	/* set OFF_LINE */
1118 - 	writeq_be(cmdcfg, &fc_regs[FC_MTIP_CMDCONFIG / 8]);
1119 - }
1120 -
1121 - /**
1122 -  * wait_port_online() - waits for the specified host FC port come online
1123 -  * @fc_regs:	Top of MMIO region defined for specified port.
1124 -  * @delay_us:	Number of microseconds to delay between reading port status.
1125 -  * @nretry:	Number of cycles to retry reading port status.
1126 -  *
1127 -  * The provided MMIO region must be mapped prior to call. This will timeout
1128 -  * when the cable is not plugged in.
1129 -  *
1130 -  * Return:
1131 -  *	TRUE (1) when the specified port is online
1132 -  *	FALSE (0) when the specified port fails to come online after timeout
1133 -  */
1134 - static bool wait_port_online(__be64 __iomem *fc_regs, u32 delay_us, u32 nretry)
1135 - {
1136 - 	u64 status;
1137 -
1138 - 	WARN_ON(delay_us < 1000);
1139 -
1140 - 	do {
1141 - 		msleep(delay_us / 1000);
1142 - 		status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]);
1143 - 		if (status == U64_MAX)
1144 - 			nretry /= 2;
1145 - 	} while ((status & FC_MTIP_STATUS_MASK) != FC_MTIP_STATUS_ONLINE &&
1146 - 		 nretry--);
1147 -
1148 - 	return ((status & FC_MTIP_STATUS_MASK) == FC_MTIP_STATUS_ONLINE);
1149 - }
1150 -
1151 - /**
1152 -  * wait_port_offline() - waits for the specified host FC port go offline
1153 -  * @fc_regs:	Top of MMIO region defined for specified port.
1154 -  * @delay_us:	Number of microseconds to delay between reading port status.
1155 -  * @nretry:	Number of cycles to retry reading port status.
1156 -  *
1157 -  * The provided MMIO region must be mapped prior to call.
1158 -  *
1159 -  * Return:
1160 -  *	TRUE (1) when the specified port is offline
1161 -  *	FALSE (0) when the specified port fails to go offline after timeout
1162 -  */
1163 - static bool wait_port_offline(__be64 __iomem *fc_regs, u32 delay_us, u32 nretry)
1164 - {
1165 - 	u64 status;
1166 -
1167 - 	WARN_ON(delay_us < 1000);
1168 -
1169 - 	do {
1170 - 		msleep(delay_us / 1000);
1171 - 		status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]);
1172 - 		if (status == U64_MAX)
1173 - 			nretry /= 2;
1174 - 	} while ((status & FC_MTIP_STATUS_MASK) != FC_MTIP_STATUS_OFFLINE &&
1175 - 		 nretry--);
1176 -
1177 - 	return ((status & FC_MTIP_STATUS_MASK) == FC_MTIP_STATUS_OFFLINE);
1178 - }
1179 -
1180 - /**
1181 -  * afu_set_wwpn() - configures the WWPN for the specified host FC port
1182 -  * @afu:	AFU associated with the host that owns the specified FC port.
1183 -  * @port:	Port number being configured.
1184 -  * @fc_regs:	Top of MMIO region defined for specified port.
1185 -  * @wwpn:	The world-wide-port-number previously discovered for port.
1186 -  *
1187 -  * The provided MMIO region must be mapped prior to call. As part of the
1188 -  * sequence to configure the WWPN, the port is toggled offline and then back
1189 -  * online. This toggling action can cause this routine to delay up to a few
1190 -  * seconds. When configured to use the internal LUN feature of the AFU, a
1191 -  * failure to come online is overridden.
1192 -  */
1193 - static void afu_set_wwpn(struct afu *afu, int port, __be64 __iomem *fc_regs,
1194 - 			 u64 wwpn)
1195 - {
1196 - 	struct cxlflash_cfg *cfg = afu->parent;
1197 - 	struct device *dev = &cfg->dev->dev;
1198 -
1199 - 	set_port_offline(fc_regs);
1200 - 	if (!wait_port_offline(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US,
1201 - 			       FC_PORT_STATUS_RETRY_CNT)) {
1202 - 		dev_dbg(dev, "%s: wait on port %d to go offline timed out\n",
1203 - 			__func__, port);
1204 - 	}
1205 -
1206 - 	writeq_be(wwpn, &fc_regs[FC_PNAME / 8]);
1207 -
1208 - 	set_port_online(fc_regs);
1209 - 	if (!wait_port_online(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US,
1210 - 			      FC_PORT_STATUS_RETRY_CNT)) {
1211 - 		dev_dbg(dev, "%s: wait on port %d to go online timed out\n",
1212 - 			__func__, port);
1213 - 	}
1214 - }
1215 -
1216 - /**
1217 -  * afu_link_reset() - resets the specified host FC port
1218 -  * @afu:	AFU associated with the host that owns the specified FC port.
1219 -  * @port:	Port number being configured.
1220 -  * @fc_regs:	Top of MMIO region defined for specified port.
1221 -  *
1222 -  * The provided MMIO region must be mapped prior to call. The sequence to
1223 -  * reset the port involves toggling it offline and then back online. This
1224 -  * action can cause this routine to delay up to a few seconds. An effort
1225 -  * is made to maintain link with the device by switching to host to use
1226 -  * the alternate port exclusively while the reset takes place.
1227 -  * failure to come online is overridden.
1228 -  */
1229 - static void afu_link_reset(struct afu *afu, int port, __be64 __iomem *fc_regs)
1230 - {
1231 - 	struct cxlflash_cfg *cfg = afu->parent;
1232 - 	struct device *dev = &cfg->dev->dev;
1233 - 	u64 port_sel;
1234 -
1235 - 	/* first switch the AFU to the other links, if any */
1236 - 	port_sel = readq_be(&afu->afu_map->global.regs.afu_port_sel);
1237 - 	port_sel &= ~(1ULL << port);
1238 - 	writeq_be(port_sel, &afu->afu_map->global.regs.afu_port_sel);
1239 - 	cxlflash_afu_sync(afu, 0, 0, AFU_GSYNC);
1240 -
1241 - 	set_port_offline(fc_regs);
1242 - 	if (!wait_port_offline(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US,
1243 - 			       FC_PORT_STATUS_RETRY_CNT))
1244 - 		dev_err(dev, "%s: wait on port %d to go offline timed out\n",
1245 - 			__func__, port);
1246 -
1247 - 	set_port_online(fc_regs);
1248 - 	if (!wait_port_online(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US,
1249 - 			      FC_PORT_STATUS_RETRY_CNT))
1250 - 		dev_err(dev, "%s: wait on port %d to go online timed out\n",
1251 - 			__func__, port);
1252 -
1253 - 	/* switch back to include this port */
1254 - 	port_sel |= (1ULL << port);
1255 - 	writeq_be(port_sel, &afu->afu_map->global.regs.afu_port_sel);
1256 - 	cxlflash_afu_sync(afu, 0, 0, AFU_GSYNC);
1257 -
1258 - 	dev_dbg(dev, "%s: returning port_sel=%016llx\n", __func__, port_sel);
1259 - }
1260 -
1261 - /**
1262 -  * afu_err_intr_init() - clears and initializes the AFU for error interrupts
1263 -  * @afu:	AFU associated with the host.
1264 -  */
1265 - static void afu_err_intr_init(struct afu *afu)
1266 - {
1267 - 	struct cxlflash_cfg *cfg = afu->parent;
1268 - 	__be64 __iomem *fc_port_regs;
1269 - 	int i;
1270 - 	struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ);
1271 - 	u64 reg;
1272 -
1273 - 	/* global async interrupts: AFU clears afu_ctrl on context exit
1274 - 	 * if async interrupts were sent to that context. This prevents
1275 - 	 * the AFU form sending further async interrupts when
1276 - 	 * there is
1277 - 	 * nobody to receive them.
1278 - 	 */
1279 -
1280 - 	/* mask all */
1281 - 	writeq_be(-1ULL, &afu->afu_map->global.regs.aintr_mask);
1282 - 	/* set LISN# to send and point to primary master context */
1283 - 	reg = ((u64) (((hwq->ctx_hndl << 8) | SISL_MSI_ASYNC_ERROR)) << 40);
1284 -
1285 - 	if (afu->internal_lun)
1286 - 		reg |= 1;	/* Bit 63 indicates local lun */
1287 - 	writeq_be(reg, &afu->afu_map->global.regs.afu_ctrl);
1288 - 	/* clear all */
1289 - 	writeq_be(-1ULL, &afu->afu_map->global.regs.aintr_clear);
1290 - 	/* unmask bits that are of interest */
1291 - 	/* note: afu can send an interrupt after this step */
1292 - 	writeq_be(SISL_ASTATUS_MASK, &afu->afu_map->global.regs.aintr_mask);
1293 - 	/* clear again in case a bit came on after previous clear but before */
1294 - 	/* unmask */
1295 - 	writeq_be(-1ULL, &afu->afu_map->global.regs.aintr_clear);
1296 -
1297 - 	/* Clear/Set internal lun bits */
1298 - 	fc_port_regs = get_fc_port_regs(cfg, 0);
1299 - 	reg = readq_be(&fc_port_regs[FC_CONFIG2 / 8]);
1300 - 	reg &= SISL_FC_INTERNAL_MASK;
1301 - 	if (afu->internal_lun)
1302 - 		reg |= ((u64)(afu->internal_lun - 1) << SISL_FC_INTERNAL_SHIFT);
1303 - 	writeq_be(reg, &fc_port_regs[FC_CONFIG2 / 8]);
1304 -
1305 - 	/* now clear FC errors */
1306 - 	for (i = 0; i < cfg->num_fc_ports; i++) {
1307 - 		fc_port_regs = get_fc_port_regs(cfg, i);
1308 -
1309 - 		writeq_be(0xFFFFFFFFU, &fc_port_regs[FC_ERROR / 8]);
1310 - 		writeq_be(0, &fc_port_regs[FC_ERRCAP / 8]);
1311 - 	}
1312 -
1313 - 	/* sync interrupts for master's IOARRIN write */
1314 - 	/* note that unlike asyncs, there can be no pending sync interrupts */
1315 - 	/* at this time (this is a fresh context and master has not written */
1316 - 	/* IOARRIN yet), so there is nothing to clear. */
1317 -
1318 - 	/* set LISN#, it is always sent to the context that wrote IOARRIN */
1319 - 	for (i = 0; i < afu->num_hwqs; i++) {
1320 - 		hwq = get_hwq(afu, i);
1321 -
1322 - 		reg = readq_be(&hwq->host_map->ctx_ctrl);
1323 - 		WARN_ON((reg & SISL_CTX_CTRL_LISN_MASK) != 0);
1324 - 		reg |= SISL_MSI_SYNC_ERROR;
1325 - 		writeq_be(reg, &hwq->host_map->ctx_ctrl);
1326 - 		writeq_be(SISL_ISTATUS_MASK, &hwq->host_map->intr_mask);
1327 - 	}
1328 - }
1329 -
1330 - /**
1331 -  * cxlflash_sync_err_irq() - interrupt handler for synchronous errors
1332 -  * @irq:	Interrupt number.
1333 -  * @data:	Private data provided at interrupt registration, the AFU.
1334 -  *
1335 -  * Return: Always return IRQ_HANDLED.
1336 -  */
1337 - static irqreturn_t cxlflash_sync_err_irq(int irq, void *data)
1338 - {
1339 - 	struct hwq *hwq = (struct hwq *)data;
1340 - 	struct cxlflash_cfg *cfg = hwq->afu->parent;
1341 - 	struct device *dev = &cfg->dev->dev;
1342 - 	u64 reg;
1343 - 	u64 reg_unmasked;
1344 -
1345 - 	reg = readq_be(&hwq->host_map->intr_status);
1346 - 	reg_unmasked = (reg & SISL_ISTATUS_UNMASK);
1347 -
1348 - 	if (reg_unmasked == 0UL) {
1349 - 		dev_err(dev, "%s: spurious interrupt, intr_status=%016llx\n",
1350 - 			__func__, reg);
1351 - 		goto cxlflash_sync_err_irq_exit;
1352 - 	}
1353 -
1354 - 	dev_err(dev, "%s: unexpected interrupt, intr_status=%016llx\n",
1355 - 		__func__, reg);
1356 -
1357 - 	writeq_be(reg_unmasked, &hwq->host_map->intr_clear);
1358 -
1359 - cxlflash_sync_err_irq_exit:
1360 - 	return IRQ_HANDLED;
1361 - }
1362 -
1363 - /**
1364 -  * process_hrrq() - process the read-response queue
1365 -  * @hwq:	HWQ associated with the host.
1366 -  * @doneq:	Queue of commands harvested from the RRQ.
1367 -  * @budget:	Threshold of RRQ entries to process.
1368 -  *
1369 -  * This routine must be called holding the disabled RRQ spin lock.
1370 -  *
1371 -  * Return: The number of entries processed.
1372 -  */
1373 - static int process_hrrq(struct hwq *hwq, struct list_head *doneq, int budget)
1374 - {
1375 - 	struct afu *afu = hwq->afu;
1376 - 	struct afu_cmd *cmd;
1377 - 	struct sisl_ioasa *ioasa;
1378 - 	struct sisl_ioarcb *ioarcb;
1379 - 	bool toggle = hwq->toggle;
1380 - 	int num_hrrq = 0;
1381 - 	u64 entry,
1382 - 	    *hrrq_start = hwq->hrrq_start,
1383 - 	    *hrrq_end = hwq->hrrq_end,
1384 - 	    *hrrq_curr = hwq->hrrq_curr;
1385 -
1386 - 	/* Process ready RRQ entries up to the specified budget (if any) */
1387 - 	while (true) {
1388 - 		entry = *hrrq_curr;
1389 -
1390 - 		if ((entry & SISL_RESP_HANDLE_T_BIT) != toggle)
1391 - 			break;
1392 -
1393 - 		entry &= ~SISL_RESP_HANDLE_T_BIT;
1394 -
1395 - 		if (afu_is_sq_cmd_mode(afu)) {
1396 - 			ioasa = (struct sisl_ioasa *)entry;
1397 - 			cmd = container_of(ioasa, struct afu_cmd, sa);
1398 - 		} else {
1399 - 			ioarcb = (struct sisl_ioarcb *)entry;
1400 - 			cmd = container_of(ioarcb, struct afu_cmd, rcb);
1401 - 		}
1402 -
1403 - 		list_add_tail(&cmd->queue, doneq);
1404 -
1405 - 		/* Advance to next entry or wrap and flip the toggle bit */
1406 - 		if (hrrq_curr < hrrq_end)
1407 - 			hrrq_curr++;
1408 - 		else {
1409 - 			hrrq_curr = hrrq_start;
1410 - 			toggle ^= SISL_RESP_HANDLE_T_BIT;
1411 - 		}
1412 -
1413 - 		atomic_inc(&hwq->hsq_credits);
1414 - 		num_hrrq++;
1415 -
1416 - 		if (budget > 0 && num_hrrq >= budget)
1417 - 			break;
1418 - 	}
1419 -
1420 - 	hwq->hrrq_curr = hrrq_curr;
1421 - 	hwq->toggle = toggle;
1422 -
1423 - 	return num_hrrq;
1424 - }
1425 -
1426 - /**
1427 -  * process_cmd_doneq() - process a queue of harvested RRQ commands
1428 -  * @doneq:	Queue of completed commands.
1429 -  *
1430 -  * Note that upon return the queue can no longer be trusted.
1431 -  */
1432 - static void process_cmd_doneq(struct list_head *doneq)
1433 - {
1434 - 	struct afu_cmd *cmd, *tmp;
1435 -
1436 - 	WARN_ON(list_empty(doneq));
1437 -
1438 - 	list_for_each_entry_safe(cmd, tmp, doneq, queue)
1439 - 		cmd_complete(cmd);
1440 - }
1441 -
1442 - /**
1443 -  * cxlflash_irqpoll() - process a queue of harvested RRQ commands
1444 -  * @irqpoll:	IRQ poll structure associated with queue to poll.
1445 -  * @budget:	Threshold of RRQ entries to process per poll.
1446 -  *
1447 -  * Return: The number of entries processed.
1448 -  */
1449 - static int cxlflash_irqpoll(struct irq_poll *irqpoll, int budget)
1450 - {
1451 - 	struct hwq *hwq = container_of(irqpoll, struct hwq, irqpoll);
1452 - 	unsigned long hrrq_flags;
1453 - 	LIST_HEAD(doneq);
1454 - 	int num_entries = 0;
1455 -
1456 - 	spin_lock_irqsave(&hwq->hrrq_slock, hrrq_flags);
1457 -
1458 - 	num_entries = process_hrrq(hwq, &doneq, budget);
1459 - 	if (num_entries < budget)
1460 - 		irq_poll_complete(irqpoll);
1461 -
1462 - 	spin_unlock_irqrestore(&hwq->hrrq_slock, hrrq_flags);
1463 -
1464 - 	process_cmd_doneq(&doneq);
1465 - 	return num_entries;
1466 - }
1467 -
1468 - /**
1469 -  * cxlflash_rrq_irq() - interrupt handler for read-response queue (normal path)
1470 -  * @irq:	Interrupt number.
1471 -  * @data:	Private data provided at interrupt registration, the AFU.
1472 -  *
1473 -  * Return: IRQ_HANDLED or IRQ_NONE when no ready entries found.
1474 -  */
1475 - static irqreturn_t cxlflash_rrq_irq(int irq, void *data)
1476 - {
1477 - 	struct hwq *hwq = (struct hwq *)data;
1478 - 	struct afu *afu = hwq->afu;
1479 - 	unsigned long hrrq_flags;
1480 - 	LIST_HEAD(doneq);
1481 - 	int num_entries = 0;
1482 -
1483 - 	spin_lock_irqsave(&hwq->hrrq_slock, hrrq_flags);
1484 -
1485 - 	/* Silently drop spurious interrupts when queue is not online */
1486 - 	if (!hwq->hrrq_online) {
1487 - 		spin_unlock_irqrestore(&hwq->hrrq_slock, hrrq_flags);
1488 - 		return IRQ_HANDLED;
1489 - 	}
1490 -
1491 - 	if (afu_is_irqpoll_enabled(afu)) {
1492 - 		irq_poll_sched(&hwq->irqpoll);
1493 - 		spin_unlock_irqrestore(&hwq->hrrq_slock, hrrq_flags);
1494 - 		return IRQ_HANDLED;
1495 - 	}
1496 -
1497 - 	num_entries = process_hrrq(hwq, &doneq, -1);
1498 - 	spin_unlock_irqrestore(&hwq->hrrq_slock, hrrq_flags);
1499 -
1500 - 	if (num_entries == 0)
1501 - 		return IRQ_NONE;
1502 -
1503 - 	process_cmd_doneq(&doneq);
1504 - 	return IRQ_HANDLED;
1505 - }
1506 -
1507 - /*
1508 -  * Asynchronous interrupt information table
1509 -  *
1510 -  * NOTE:
1511 -  *	- Order matters here as this array is indexed by bit position.
1512 -  *
1513 -  *	- The checkpatch script considers the BUILD_SISL_ASTATUS_FC_PORT macro
1514 -  *	  as complex and complains due to a lack of parentheses/braces.
1515 -  */
1516 - #define ASTATUS_FC(_a, _b, _c, _d)					 \
1517 - 	{ SISL_ASTATUS_FC##_a##_##_b, _c, _a, (_d) }
1518 -
1519 - #define BUILD_SISL_ASTATUS_FC_PORT(_a)					 \
1520 - 	ASTATUS_FC(_a, LINK_UP, "link up", 0),				 \
1521 - 	ASTATUS_FC(_a, LINK_DN, "link down", 0),			 \
1522 - 	ASTATUS_FC(_a, LOGI_S, "login succeeded", SCAN_HOST),		 \
1523 - 	ASTATUS_FC(_a, LOGI_F, "login failed", CLR_FC_ERROR),		 \
1524 - 	ASTATUS_FC(_a, LOGI_R, "login timed out, retrying", LINK_RESET), \
1525 - 	ASTATUS_FC(_a, CRC_T, "CRC threshold exceeded", LINK_RESET),	 \
1526 - 	ASTATUS_FC(_a, LOGO, "target initiated LOGO", 0),		 \
1527 - 	ASTATUS_FC(_a, OTHER, "other error", CLR_FC_ERROR | LINK_RESET)
1528 -
1529 - static const struct asyc_intr_info ainfo[] = {
1530 - 	BUILD_SISL_ASTATUS_FC_PORT(1),
1531 - 	BUILD_SISL_ASTATUS_FC_PORT(0),
1532 - 	BUILD_SISL_ASTATUS_FC_PORT(3),
1533 - 	BUILD_SISL_ASTATUS_FC_PORT(2)
1534 - };
1535 -
1536 - /**
1537 -  * cxlflash_async_err_irq() - interrupt handler for asynchronous errors
1538 -  * @irq:	Interrupt number.
1539 -  * @data:	Private data provided at interrupt registration, the AFU.
1540 -  *
1541 -  * Return: Always return IRQ_HANDLED.
1542 -  */
1543 - static irqreturn_t cxlflash_async_err_irq(int irq, void *data)
1544 - {
1545 - 	struct hwq *hwq = (struct hwq *)data;
1546 - 	struct afu *afu = hwq->afu;
1547 - 	struct cxlflash_cfg *cfg = afu->parent;
1548 - 	struct device *dev = &cfg->dev->dev;
1549 - 	const struct asyc_intr_info *info;
1550 - 	struct sisl_global_map __iomem *global = &afu->afu_map->global;
1551 - 	__be64 __iomem *fc_port_regs;
1552 - 	u64 reg_unmasked;
1553 - 	u64 reg;
1554 - 	u64 bit;
1555 - 	u8 port;
1556 -
1557 - 	reg = readq_be(&global->regs.aintr_status);
1558 - 	reg_unmasked = (reg & SISL_ASTATUS_UNMASK);
1559 -
1560 - 	if (unlikely(reg_unmasked == 0)) {
1561 - 		dev_err(dev, "%s: spurious interrupt, aintr_status=%016llx\n",
1562 - 			__func__, reg);
1563 - 		goto out;
1564 - 	}
1565 -
1566 - 	/* FYI, it is 'okay' to clear AFU status before FC_ERROR */
1567 - 	writeq_be(reg_unmasked, &global->regs.aintr_clear);
1568 -
1569 - 	/* Check each bit that is on */
1570 - 	for_each_set_bit(bit, (ulong *)&reg_unmasked, BITS_PER_LONG) {
1571 - 		if (unlikely(bit >= ARRAY_SIZE(ainfo))) {
1572 - 			WARN_ON_ONCE(1);
1573 - 			continue;
1574 - 		}
1575 -
1576 - 		info = &ainfo[bit];
1577 - 		if (unlikely(info->status != 1ULL << bit)) {
1578 - 			WARN_ON_ONCE(1);
1579 - 			continue;
1580 - 		}
1581 -
1582 - 		port = info->port;
1583 - 		fc_port_regs = get_fc_port_regs(cfg, port);
1584 -
1585 - 		dev_err(dev, "%s: FC Port %d -> %s, fc_status=%016llx\n",
1586 - 			__func__, port, info->desc,
1587 - 			readq_be(&fc_port_regs[FC_STATUS / 8]));
1588 -
1589 - 		/*
1590 - 		 * Do link reset first, some OTHER errors will set FC_ERROR
1591 - 		 * again if cleared before or w/o a reset
1592 - 		 */
1593 - 		if (info->action & LINK_RESET) {
1594 - 			dev_err(dev, "%s: FC Port %d: resetting link\n",
1595 - 				__func__, port);
1596 - 			cfg->lr_state = LINK_RESET_REQUIRED;
1597 - 			cfg->lr_port = port;
1598 - 			schedule_work(&cfg->work_q);
1599 - 		}
1600 -
1601 - 		if (info->action & CLR_FC_ERROR) {
1602 - 			reg = readq_be(&fc_port_regs[FC_ERROR / 8]);
1603 -
1604 - 			/*
1605 - 			 * Since all errors are unmasked, FC_ERROR and FC_ERRCAP
1606 - 			 * should be the same and tracing one is sufficient.
1607 - 			 */
1608 -
1609 - 			dev_err(dev, "%s: fc %d: clearing fc_error=%016llx\n",
1610 - 				__func__, port, reg);
1611 -
1612 - 			writeq_be(reg, &fc_port_regs[FC_ERROR / 8]);
1613 - 			writeq_be(0, &fc_port_regs[FC_ERRCAP / 8]);
1614 - 		}
1615 -
1616 - 		if (info->action & SCAN_HOST) {
1617 - 			atomic_inc(&cfg->scan_host_needed);
1618 - 			schedule_work(&cfg->work_q);
1619 - 		}
1620 - 	}
1621 -
1622 - out:
1623 - 	return IRQ_HANDLED;
1624 - }
1625 -
1626 - /**
1627 -  * read_vpd() - obtains the WWPNs from VPD
1628 -  * @cfg:	Internal structure associated with the host.
1629 -  * @wwpn:	Array of size MAX_FC_PORTS to pass back WWPNs
1630 -  *
1631 -  * Return: 0 on success, -errno on failure
1632 -  */
1633 - static int read_vpd(struct cxlflash_cfg *cfg, u64 wwpn[])
1634 - {
1635 - 	struct device *dev = &cfg->dev->dev;
1636 - 	struct pci_dev *pdev = cfg->dev;
1637 - 	int i, k, rc = 0;
1638 - 	unsigned int kw_size;
1639 - 	ssize_t vpd_size;
1640 - 	char vpd_data[CXLFLASH_VPD_LEN];
1641 - 	char tmp_buf[WWPN_BUF_LEN] = { 0 };
1642 - 	const struct dev_dependent_vals *ddv = (struct dev_dependent_vals *)
1643 - 						cfg->dev_id->driver_data;
1644 - 	const bool wwpn_vpd_required = ddv->flags & CXLFLASH_WWPN_VPD_REQUIRED;
1645 - 	const char *wwpn_vpd_tags[MAX_FC_PORTS] = { "V5", "V6", "V7", "V8" };
1646 -
1647 - 	/* Get the VPD data from the device */
1648 - 	vpd_size = cfg->ops->read_adapter_vpd(pdev, vpd_data, sizeof(vpd_data));
1649 - 	if (unlikely(vpd_size <= 0)) {
1650 - 		dev_err(dev, "%s: Unable to read VPD (size = %ld)\n",
1651 - 			__func__, vpd_size);
1652 - 		rc = -ENODEV;
1653 - 		goto out;
1654 - 	}
1655 -
1656 - 	/*
1657 - 	 * Find the offset of the WWPN tag within the read only
1658 - 	 * VPD data and validate the found field (partials are
1659 - 	 * no good to us). Convert the ASCII data to an integer
1660 - 	 * value. Note that we must copy to a temporary buffer
1661 - 	 * because the conversion service requires that the ASCII
1662 - 	 * string be terminated.
1663 - 	 *
1664 - 	 * Allow for WWPN not being found for all devices, setting
1665 - 	 * the returned WWPN to zero when not found. Notify with a
1666 - 	 * log error for cards that should have had WWPN keywords
1667 - 	 * in the VPD - cards requiring WWPN will not have their
1668 - 	 * ports programmed and operate in an undefined state.
1669 - 	 */
1670 - 	for (k = 0; k < cfg->num_fc_ports; k++) {
1671 - 		i = pci_vpd_find_ro_info_keyword(vpd_data, vpd_size,
1672 - 						 wwpn_vpd_tags[k], &kw_size);
1673 - 		if (i == -ENOENT) {
1674 - 			if (wwpn_vpd_required)
1675 - 				dev_err(dev, "%s: Port %d WWPN not found\n",
1676 - 					__func__, k);
1677 - 			wwpn[k] = 0ULL;
1678 - 			continue;
1679 - 		}
1680 -
1681 - 		if (i < 0 || kw_size != WWPN_LEN) {
1682 - 			dev_err(dev, "%s: Port %d WWPN incomplete or bad VPD\n",
1683 - 				__func__, k);
1684 - 			rc = -ENODEV;
1685 - 			goto out;
1686 - 		}
1687 -
1688 - 		memcpy(tmp_buf, &vpd_data[i], WWPN_LEN);
1689 - 		rc = kstrtoul(tmp_buf, WWPN_LEN, (ulong *)&wwpn[k]);
1690 - 		if (unlikely(rc)) {
1691 - 			dev_err(dev, "%s: WWPN conversion failed for port %d\n",
1692 - 				__func__, k);
1693 - 			rc = -ENODEV;
1694 - 			goto out;
1695 - 		}
1696 -
1697 - 		dev_dbg(dev, "%s: wwpn%d=%016llx\n", __func__, k, wwpn[k]);
1698 - 	}
1699 -
1700 - out:
1701 - 	dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
1702 - 	return rc;
1703 - }
1704 -
1705 - /**
1706 -  * init_pcr() - initialize the provisioning and control registers
1707 -  * @cfg:	Internal structure associated with the host.
1708 -  *
1709 -  * Also sets up fast access to the mapped registers and initializes AFU
1710 -  * command fields that never change.
1711 -  */
1712 - static void init_pcr(struct cxlflash_cfg *cfg)
1713 - {
1714 - 	struct afu *afu = cfg->afu;
1715 - 	struct sisl_ctrl_map __iomem *ctrl_map;
1716 - 	struct hwq *hwq;
1717 - 	void *cookie;
1718 - 	int i;
1719 -
1720 - 	for (i = 0; i < MAX_CONTEXT; i++) {
1721 - 		ctrl_map = &afu->afu_map->ctrls[i].ctrl;
1722 - 		/* Disrupt any clients that could be running */
1723 - 		/* e.g. clients that survived a master restart */
1724 - 		writeq_be(0, &ctrl_map->rht_start);
1725 - 		writeq_be(0, &ctrl_map->rht_cnt_id);
1726 - 		writeq_be(0, &ctrl_map->ctx_cap);
1727 - 	}
1728 -
1729 - 	/* Copy frequently used fields into hwq */
1730 - 	for (i = 0; i < afu->num_hwqs; i++) {
1731 - 		hwq = get_hwq(afu, i);
1732 - 		cookie = hwq->ctx_cookie;
1733 -
1734 - 		hwq->ctx_hndl = (u16) cfg->ops->process_element(cookie);
1735 - 		hwq->host_map = &afu->afu_map->hosts[hwq->ctx_hndl].host;
1736 - 		hwq->ctrl_map = &afu->afu_map->ctrls[hwq->ctx_hndl].ctrl;
1737 -
1738 - 		/* Program the Endian Control for the master context */
1739 - 		writeq_be(SISL_ENDIAN_CTRL, &hwq->host_map->endian_ctrl);
1740 - 	}
1741 - }
1742 -
1743 - /**
1744 -  * init_global() - initialize AFU global registers
1745 -  * @cfg:	Internal structure associated with the host.
1746 -  */
1747 - static int init_global(struct cxlflash_cfg *cfg)
1748 - {
1749 - 	struct afu *afu = cfg->afu;
1750 - 	struct device *dev = &cfg->dev->dev;
1751 - 	struct hwq *hwq;
1752 - 	struct sisl_host_map __iomem *hmap;
1753 - 	__be64 __iomem *fc_port_regs;
1754 - 	u64 wwpn[MAX_FC_PORTS];	/* wwpn of AFU ports */
1755 - 	int i = 0, num_ports = 0;
1756 - 	int rc = 0;
1757 - 	int j;
1758 - 	void *ctx;
1759 - 	u64 reg;
1760 -
1761 - 	rc = read_vpd(cfg, &wwpn[0]);
1762 - 	if (rc) {
1763 - 		dev_err(dev, "%s: could not read vpd rc=%d\n", __func__, rc);
1764 - 		goto out;
1765 - 	}
1766 -
1767 - 	/* Set up RRQ and SQ in HWQ for master issued cmds */
1768 - 	for (i = 0; i < afu->num_hwqs; i++) {
1769 - 		hwq = get_hwq(afu, i);
1770 - 		hmap = hwq->host_map;
1771 -
1772 - 		writeq_be((u64) hwq->hrrq_start, &hmap->rrq_start);
1773 - 		writeq_be((u64) hwq->hrrq_end, &hmap->rrq_end);
1774 - 		hwq->hrrq_online = true;
1775 -
1776 - 		if (afu_is_sq_cmd_mode(afu)) {
1777 - 			writeq_be((u64)hwq->hsq_start, &hmap->sq_start);
1778 - 			writeq_be((u64)hwq->hsq_end, &hmap->sq_end);
1779 - 		}
1780 - 	}
1781 -
1782 - 	/* AFU configuration */
1783 - 	reg = readq_be(&afu->afu_map->global.regs.afu_config);
1784 - 	reg |= SISL_AFUCONF_AR_ALL|SISL_AFUCONF_ENDIAN;
1785 - 	/* enable all auto retry options and control endianness */
1786 - 	/* leave others at default: */
1787 - 	/* CTX_CAP write protected, mbox_r does not clear on read and */
1788 - 	/* checker on if dual afu */
1789 - 	writeq_be(reg, &afu->afu_map->global.regs.afu_config);
1790 -
1791 - 	/* Global port select: select either port */
1792 - 	if (afu->internal_lun) {
1793 - 		/* Only use port 0 */
1794 - 		writeq_be(PORT0, &afu->afu_map->global.regs.afu_port_sel);
1795 - 		num_ports = 0;
1796 - 	} else {
1797 - 		writeq_be(PORT_MASK(cfg->num_fc_ports),
1798 - 			  &afu->afu_map->global.regs.afu_port_sel);
1799 - 		num_ports = cfg->num_fc_ports;
1800 - 	}
1801 -
1802 - 	for (i = 0; i < num_ports; i++) {
1803 - 		fc_port_regs = get_fc_port_regs(cfg, i);
1804 -
1805 - 		/* Unmask all errors (but they are still masked at AFU) */
1806 - 		writeq_be(0, &fc_port_regs[FC_ERRMSK / 8]);
1807 - 		/* Clear CRC error cnt & set a threshold */
1808 - 		(void)readq_be(&fc_port_regs[FC_CNT_CRCERR / 8]);
1809 - 		writeq_be(MC_CRC_THRESH, &fc_port_regs[FC_CRC_THRESH / 8]);
1810 -
1811 - 		/* Set WWPNs. If already programmed, wwpn[i] is 0 */
1812 - 		if (wwpn[i] != 0)
1813 - 			afu_set_wwpn(afu, i, &fc_port_regs[0], wwpn[i]);
1814 - 		/* Programming WWPN back to back causes additional
1815 - 		 * offline/online transitions and a PLOGI
1816 - 		 */
1817 - 		msleep(100);
1818 - 	}
1819 -
1820 - 	if (afu_is_ocxl_lisn(afu)) {
1821 - 		/* Set up the LISN effective address for each master */
1822 - 		for (i = 0; i < afu->num_hwqs; i++) {
1823 - 			hwq = get_hwq(afu, i);
1824 - 			ctx = hwq->ctx_cookie;
1825 -
1826 - 			for (j = 0; j < hwq->num_irqs; j++) {
1827 - 				reg = cfg->ops->get_irq_objhndl(ctx, j);
1828 - 				writeq_be(reg, &hwq->ctrl_map->lisn_ea[j]);
1829 - 			}
1830 -
1831 - 			reg = hwq->ctx_hndl;
1832 - 			writeq_be(SISL_LISN_PASID(reg, reg),
1833 - 				  &hwq->ctrl_map->lisn_pasid[0]);
1834 - 			writeq_be(SISL_LISN_PASID(0UL, reg),
1835 - 				  &hwq->ctrl_map->lisn_pasid[1]);
1836 - 		}
1837 - 	}
1838 -
1839 - 	/* Set up master's own CTX_CAP to allow real mode, host translation */
1840 - 	/* tables, afu cmds and read/write GSCSI cmds. */
1841 - 	/* First, unlock ctx_cap write by reading mbox */
1842 - 	for (i = 0; i < afu->num_hwqs; i++) {
1843 - 		hwq = get_hwq(afu, i);
1844 -
1845 - 		(void)readq_be(&hwq->ctrl_map->mbox_r);	/* unlock ctx_cap */
1846 - 		writeq_be((SISL_CTX_CAP_REAL_MODE | SISL_CTX_CAP_HOST_XLATE |
1847 - 			SISL_CTX_CAP_READ_CMD | SISL_CTX_CAP_WRITE_CMD |
1848 - 			SISL_CTX_CAP_AFU_CMD | SISL_CTX_CAP_GSCSI_CMD),
1849 - 			&hwq->ctrl_map->ctx_cap);
1850 - 	}
1851 -
1852 - 	/*
1853 - 	 * Determine write-same unmap support for host by evaluating the unmap
1854 - 	 * sector support bit of the context control register associated with
1855 - 	 * the primary hardware queue. Note that while this status is reflected
1856 - 	 * in a context register, the outcome can be assumed to be host-wide.
1857 - 	 */
1858 - 	hwq = get_hwq(afu, PRIMARY_HWQ);
1859 - 	reg = readq_be(&hwq->host_map->ctx_ctrl);
1860 - 	if (reg & SISL_CTX_CTRL_UNMAP_SECTOR)
1861 - 		cfg->ws_unmap = true;
1862 -
1863 - 	/* Initialize heartbeat */
1864 - 	afu->hb = readq_be(&afu->afu_map->global.regs.afu_hb);
1865 - out:
1866 - 	return rc;
1867 - }
1868 -
1869 - /**
1870 -  * start_afu() - initializes and starts the AFU
1871 -  * @cfg:	Internal structure associated with the host.
1872 -  */
1873 - static int start_afu(struct cxlflash_cfg *cfg)
1874 - {
1875 - 	struct afu *afu = cfg->afu;
1876 - 	struct device *dev = &cfg->dev->dev;
1877 - 	struct hwq *hwq;
1878 - 	int rc = 0;
1879 - 	int i;
1880 -
1881 - 	init_pcr(cfg);
1882 -
1883 - 	/* Initialize each HWQ */
1884 - 	for (i = 0; i < afu->num_hwqs; i++) {
1885 - 		hwq = get_hwq(afu, i);
1886 -
1887 - 		/* After an AFU reset, RRQ entries are stale, clear them */
1888 - 		memset(&hwq->rrq_entry, 0, sizeof(hwq->rrq_entry));
1889 -
1890 - 		/* Initialize RRQ pointers */
1891 - 		hwq->hrrq_start = &hwq->rrq_entry[0];
1892 - 		hwq->hrrq_end = &hwq->rrq_entry[NUM_RRQ_ENTRY - 1];
1893 - 		hwq->hrrq_curr = hwq->hrrq_start;
1894 - 		hwq->toggle = 1;
1895 -
1896 - 		/* Initialize spin locks */
1897 - 		spin_lock_init(&hwq->hrrq_slock);
1898 - 		spin_lock_init(&hwq->hsq_slock);
1899 -
1900 - 		/* Initialize SQ */
1901 - 		if (afu_is_sq_cmd_mode(afu)) {
1902 - 			memset(&hwq->sq, 0, sizeof(hwq->sq));
1903 - 			hwq->hsq_start = &hwq->sq[0];
1904 - 			hwq->hsq_end = &hwq->sq[NUM_SQ_ENTRY - 1];
1905 - 			hwq->hsq_curr = hwq->hsq_start;
1906 -
1907 - 			atomic_set(&hwq->hsq_credits, NUM_SQ_ENTRY - 1);
1908 - 		}
1909 -
1910 - 		/* Initialize IRQ poll */
1911 - 		if (afu_is_irqpoll_enabled(afu))
1912 - 			irq_poll_init(&hwq->irqpoll, afu->irqpoll_weight,
1913 - 				      cxlflash_irqpoll);
1914 -
1915 - 	}
1916 -
1917 - 	rc = init_global(cfg);
1918 -
1919 - 	dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
1920 - 	return rc;
1921 - }
1922 -
1923 - /**
1924 -  * init_intr() - setup interrupt handlers for the master context
1925 -  * @cfg:	Internal structure associated with the host.
1926 -  * @hwq:	Hardware queue to initialize.
1927 -  *
1928 -  * Return: 0 on success, -errno on failure
1929 -  */
1930 - static enum undo_level init_intr(struct cxlflash_cfg *cfg,
1931 - 				 struct hwq *hwq)
1932 - {
1933 - 	struct device *dev = &cfg->dev->dev;
1934 - 	void *ctx = hwq->ctx_cookie;
1935 - 	int rc = 0;
1936 - 	enum undo_level level = UNDO_NOOP;
1937 - 	bool is_primary_hwq = (hwq->index == PRIMARY_HWQ);
1938 - 	int num_irqs = hwq->num_irqs;
1939 -
1940 - 	rc = cfg->ops->allocate_afu_irqs(ctx, num_irqs);
1941 - 	if (unlikely(rc)) {
1942 - 		dev_err(dev, "%s: allocate_afu_irqs failed rc=%d\n",
1943 - 			__func__, rc);
1944 - 		level = UNDO_NOOP;
1945 - 		goto out;
1946 - 	}
1947 -
1948 - 	rc = cfg->ops->map_afu_irq(ctx, 1, cxlflash_sync_err_irq, hwq,
1949 - 				   "SISL_MSI_SYNC_ERROR");
1950 - 	if (unlikely(rc <= 0)) {
1951 - 		dev_err(dev, "%s: SISL_MSI_SYNC_ERROR map failed\n", __func__);
1952 - 		level = FREE_IRQ;
1953 - 		goto out;
1954 - 	}
1955 -
1956 - 	rc = cfg->ops->map_afu_irq(ctx, 2, cxlflash_rrq_irq, hwq,
1957 - 				   "SISL_MSI_RRQ_UPDATED");
1958 - 	if (unlikely(rc <= 0)) {
1959 - 		dev_err(dev, "%s: SISL_MSI_RRQ_UPDATED map failed\n", __func__);
1960 - 		level = UNMAP_ONE;
1961 - 		goto out;
1962 - 	}
1963 -
1964 - 	/* SISL_MSI_ASYNC_ERROR is setup only for the primary HWQ */
1965 - 	if (!is_primary_hwq)
1966 - 		goto out;
1967 -
1968 - 	rc = cfg->ops->map_afu_irq(ctx, 3, cxlflash_async_err_irq, hwq,
1969 - 				   "SISL_MSI_ASYNC_ERROR");
1970 - 	if (unlikely(rc <= 0)) {
1971 - 		dev_err(dev, "%s: SISL_MSI_ASYNC_ERROR map failed\n", __func__);
1972 - 		level = UNMAP_TWO;
1973 - 		goto out;
1974 - 	}
1975 - out:
1976 - 	return level;
1977 - }
1978 -
1979 - /**
1980 -  * init_mc() - create and register as the master context
1981 -  * @cfg:	Internal structure associated with the host.
1982 -  * @index:	HWQ Index of the master context.
1983 - *
1984 - * Return: 0 on success, -errno on failure
1985 - */
1986 - static int init_mc(struct cxlflash_cfg *cfg, u32 index)
1987 - {
1988 - void *ctx;
1989 - struct device *dev = &cfg->dev->dev;
1990 - struct hwq *hwq = get_hwq(cfg->afu, index);
1991 - int rc = 0;
1992 - int num_irqs;
1993 - enum undo_level level;
1994 -
1995 - hwq->afu = cfg->afu;
1996 - hwq->index = index;
1997 - INIT_LIST_HEAD(&hwq->pending_cmds);
1998 -
1999 - if (index == PRIMARY_HWQ) {
2000 - ctx = cfg->ops->get_context(cfg->dev, cfg->afu_cookie);
2001 - num_irqs = 3;
2002 - } else {
2003 - ctx = cfg->ops->dev_context_init(cfg->dev, cfg->afu_cookie);
2004 - num_irqs = 2;
2005 - }
2006 - if (IS_ERR_OR_NULL(ctx)) {
2007 - rc = -ENOMEM;
2008 - goto err1;
2009 - }
2010 -
2011 - WARN_ON(hwq->ctx_cookie);
2012 - hwq->ctx_cookie = ctx;
2013 - hwq->num_irqs = num_irqs;
2014 -
2015 - /* Set it up as a master with the CXL */
2016 - cfg->ops->set_master(ctx);
2017 -
2018 - /* Reset AFU when initializing primary context */
2019 - if (index == PRIMARY_HWQ) {
2020 - rc = cfg->ops->afu_reset(ctx);
2021 - if (unlikely(rc)) {
2022 - dev_err(dev, "%s: AFU reset failed rc=%d\n",
2023 - __func__, rc);
2024 - goto err1;
2025 - }
2026 - }
2027 -
2028 - level = init_intr(cfg, hwq);
2029 - if (unlikely(level)) {
2030 - dev_err(dev, "%s: interrupt init failed rc=%d\n", __func__, rc);
2031 - goto err2;
2032 - }
2033 -
2034 - /* Finally, activate the context by starting it */
2035 - rc = cfg->ops->start_context(hwq->ctx_cookie);
2036 - if (unlikely(rc)) {
2037 - dev_err(dev, "%s: start context failed rc=%d\n", __func__, rc);
2038 - level = UNMAP_THREE;
2039 - goto err2;
2040 - }
2041 -
2042 - out:
2043 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
2044 - return rc;
2045 - err2:
2046 - term_intr(cfg, level, index);
2047 - if (index != PRIMARY_HWQ)
2048 - cfg->ops->release_context(ctx);
2049 - err1:
2050 - hwq->ctx_cookie = NULL;
2051 - goto out;
2052 - }
2053 -
2054 - /**
2055 - * get_num_afu_ports() - determines and configures the number of AFU ports
2056 - * @cfg: Internal structure associated with the host.
2057 - *
2058 - * This routine determines the number of AFU ports by converting the global
2059 - * port selection mask. The converted value is only valid following an AFU
2060 - * reset (explicit or power-on). This routine must be invoked shortly after
2061 - * mapping as other routines are dependent on the number of ports during the
2062 - * initialization sequence.
2063 - *
2064 - * To support legacy AFUs that might not have reflected an initial global
2065 - * port mask (value read is 0), default to the number of ports originally
2066 - * supported by the cxlflash driver (2) before hardware with other port
2067 - * offerings was introduced.
2068 - */
2069 - static void get_num_afu_ports(struct cxlflash_cfg *cfg)
2070 - {
2071 - struct afu *afu = cfg->afu;
2072 - struct device *dev = &cfg->dev->dev;
2073 - u64 port_mask;
2074 - int num_fc_ports = LEGACY_FC_PORTS;
2075 -
2076 - port_mask = readq_be(&afu->afu_map->global.regs.afu_port_sel);
2077 - if (port_mask != 0ULL)
2078 - num_fc_ports = min(ilog2(port_mask) + 1, MAX_FC_PORTS);
2079 -
2080 - dev_dbg(dev, "%s: port_mask=%016llx num_fc_ports=%d\n",
2081 - __func__, port_mask, num_fc_ports);
2082 -
2083 - cfg->num_fc_ports = num_fc_ports;
2084 - cfg->host->max_channel = PORTNUM2CHAN(num_fc_ports);
2085 - }
2086 -
2087 - /**
2088 - * init_afu() - setup as master context and start AFU
2089 - * @cfg: Internal structure associated with the host.
2090 - *
2091 - * This routine is a higher level of control for configuring the
2092 - * AFU on probe and reset paths.
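The mask-to-port-count conversion in the removed get_num_afu_ports() above can be illustrated stand-alone. This is a hedged sketch, not driver code: ports_from_mask() is a hypothetical helper, the open-coded bit scan replaces the kernel's ilog2(), and the two constants are reproduced from the listing.

```c
/* Stand-alone sketch of the conversion in get_num_afu_ports(): the
 * index of the highest bit set in the port selection mask gives the
 * port count, clamped to the 4-port maximum, with a legacy 2-port
 * fallback when the mask reads back as zero. */
#define LEGACY_FC_PORTS 2
#define MAX_FC_PORTS 4

static int ports_from_mask(unsigned long long port_mask)
{
	int num_fc_ports = LEGACY_FC_PORTS;
	int highest = -1;

	/* ilog2() equivalent: index of the most significant set bit */
	while (port_mask) {
		highest++;
		port_mask >>= 1;
	}
	if (highest >= 0)
		num_fc_ports = (highest + 1 < MAX_FC_PORTS) ?
			       highest + 1 : MAX_FC_PORTS;
	return num_fc_ports;
}
```

For example, a mask of 0x3 (two ports selected) yields 2, while a wider legacy-style mask of 0xFF is clamped to the 4-port maximum.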
2093 - *
2094 - * Return: 0 on success, -errno on failure
2095 - */
2096 - static int init_afu(struct cxlflash_cfg *cfg)
2097 - {
2098 - u64 reg;
2099 - int rc = 0;
2100 - struct afu *afu = cfg->afu;
2101 - struct device *dev = &cfg->dev->dev;
2102 - struct hwq *hwq;
2103 - int i;
2104 -
2105 - cfg->ops->perst_reloads_same_image(cfg->afu_cookie, true);
2106 -
2107 - mutex_init(&afu->sync_active);
2108 - afu->num_hwqs = afu->desired_hwqs;
2109 - for (i = 0; i < afu->num_hwqs; i++) {
2110 - rc = init_mc(cfg, i);
2111 - if (rc) {
2112 - dev_err(dev, "%s: init_mc failed rc=%d index=%d\n",
2113 - __func__, rc, i);
2114 - goto err1;
2115 - }
2116 - }
2117 -
2118 - /* Map the entire MMIO space of the AFU using the first context */
2119 - hwq = get_hwq(afu, PRIMARY_HWQ);
2120 - afu->afu_map = cfg->ops->psa_map(hwq->ctx_cookie);
2121 - if (!afu->afu_map) {
2122 - dev_err(dev, "%s: psa_map failed\n", __func__);
2123 - rc = -ENOMEM;
2124 - goto err1;
2125 - }
2126 -
2127 - /* No byte reverse on reading afu_version or string will be backwards */
2128 - reg = readq(&afu->afu_map->global.regs.afu_version);
2129 - memcpy(afu->version, &reg, sizeof(reg));
2130 - afu->interface_version =
2131 - readq_be(&afu->afu_map->global.regs.interface_version);
2132 - if ((afu->interface_version + 1) == 0) {
2133 - dev_err(dev, "Back level AFU, please upgrade. AFU version %s "
2134 - "interface version %016llx\n", afu->version,
2135 - afu->interface_version);
2136 - rc = -EINVAL;
2137 - goto err1;
2138 - }
2139 -
2140 - if (afu_is_sq_cmd_mode(afu)) {
2141 - afu->send_cmd = send_cmd_sq;
2142 - afu->context_reset = context_reset_sq;
2143 - } else {
2144 - afu->send_cmd = send_cmd_ioarrin;
2145 - afu->context_reset = context_reset_ioarrin;
2146 - }
2147 -
2148 - dev_dbg(dev, "%s: afu_ver=%s interface_ver=%016llx\n", __func__,
2149 - afu->version, afu->interface_version);
2150 -
2151 - get_num_afu_ports(cfg);
2152 -
2153 - rc = start_afu(cfg);
2154 - if (rc) {
2155 - dev_err(dev, "%s: start_afu failed, rc=%d\n", __func__, rc);
2156 - goto err1;
2157 - }
2158 -
2159 - afu_err_intr_init(cfg->afu);
2160 - for (i = 0; i < afu->num_hwqs; i++) {
2161 - hwq = get_hwq(afu, i);
2162 -
2163 - hwq->room = readq_be(&hwq->host_map->cmd_room);
2164 - }
2165 -
2166 - /* Restore the LUN mappings */
2167 - cxlflash_restore_luntable(cfg);
2168 - out:
2169 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
2170 - return rc;
2171 -
2172 - err1:
2173 - for (i = afu->num_hwqs - 1; i >= 0; i--) {
2174 - term_intr(cfg, UNMAP_THREE, i);
2175 - term_mc(cfg, i);
2176 - }
2177 - goto out;
2178 - }
2179 -
2180 - /**
2181 - * afu_reset() - resets the AFU
2182 - * @cfg: Internal structure associated with the host.
2183 - *
2184 - * Return: 0 on success, -errno on failure
2185 - */
2186 - static int afu_reset(struct cxlflash_cfg *cfg)
2187 - {
2188 - struct device *dev = &cfg->dev->dev;
2189 - int rc = 0;
2190 -
2191 - /* Stop the context before the reset. Since the context is
2192 - * no longer available restart it after the reset is complete
2193 - */
2194 - term_afu(cfg);
2195 -
2196 - rc = init_afu(cfg);
2197 -
2198 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
2199 - return rc;
2200 - }
2201 -
2202 - /**
2203 - * drain_ioctls() - wait until all currently executing ioctls have completed
2204 - * @cfg: Internal structure associated with the host.
2205 - *
2206 - * Obtain write access to read/write semaphore that wraps ioctl
2207 - * handling to 'drain' ioctls currently executing.
2208 - */
2209 - static void drain_ioctls(struct cxlflash_cfg *cfg)
2210 - {
2211 - down_write(&cfg->ioctl_rwsem);
2212 - up_write(&cfg->ioctl_rwsem);
2213 - }
2214 -
2215 - /**
2216 - * cxlflash_async_reset_host() - asynchronous host reset handler
2217 - * @data: Private data provided while scheduling reset.
2218 - * @cookie: Cookie that can be used for checkpointing.
2219 - */
2220 - static void cxlflash_async_reset_host(void *data, async_cookie_t cookie)
2221 - {
2222 - struct cxlflash_cfg *cfg = data;
2223 - struct device *dev = &cfg->dev->dev;
2224 - int rc = 0;
2225 -
2226 - if (cfg->state != STATE_RESET) {
2227 - dev_dbg(dev, "%s: Not performing a reset, state=%d\n",
2228 - __func__, cfg->state);
2229 - goto out;
2230 - }
2231 -
2232 - drain_ioctls(cfg);
2233 - cxlflash_mark_contexts_error(cfg);
2234 - rc = afu_reset(cfg);
2235 - if (rc)
2236 - cfg->state = STATE_FAILTERM;
2237 - else
2238 - cfg->state = STATE_NORMAL;
2239 - wake_up_all(&cfg->reset_waitq);
2240 -
2241 - out:
2242 - scsi_unblock_requests(cfg->host);
2243 - }
2244 -
2245 - /**
2246 - * cxlflash_schedule_async_reset() - schedule an asynchronous host reset
2247 - * @cfg: Internal structure associated with the host.
2248 - */
2249 - static void cxlflash_schedule_async_reset(struct cxlflash_cfg *cfg)
2250 - {
2251 - struct device *dev = &cfg->dev->dev;
2252 -
2253 - if (cfg->state != STATE_NORMAL) {
2254 - dev_dbg(dev, "%s: Not performing reset state=%d\n",
2255 - __func__, cfg->state);
2256 - return;
2257 - }
2258 -
2259 - cfg->state = STATE_RESET;
2260 - scsi_block_requests(cfg->host);
2261 - cfg->async_reset_cookie = async_schedule(cxlflash_async_reset_host,
2262 - cfg);
2263 - }
2264 -
2265 - /**
2266 - * send_afu_cmd() - builds and sends an internal AFU command
2267 - * @afu: AFU associated with the host.
2268 - * @rcb: Pre-populated IOARCB describing command to send.
2269 - *
2270 - * The AFU can only take one internal AFU command at a time. This limitation is
2271 - * enforced by using a mutex to provide exclusive access to the AFU during the
2272 - * operation. This design point requires calling threads to not be on interrupt
2273 - * context due to the possibility of sleeping during concurrent AFU operations.
2274 - *
2275 - * The command status is optionally passed back to the caller when the caller
2276 - * populates the IOASA field of the IOARCB with a pointer to an IOASA structure.
2277 - *
2278 - * Return:
2279 - * 0 on success, -errno on failure
2280 - */
2281 - static int send_afu_cmd(struct afu *afu, struct sisl_ioarcb *rcb)
2282 - {
2283 - struct cxlflash_cfg *cfg = afu->parent;
2284 - struct device *dev = &cfg->dev->dev;
2285 - struct afu_cmd *cmd = NULL;
2286 - struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ);
2287 - ulong lock_flags;
2288 - char *buf = NULL;
2289 - int rc = 0;
2290 - int nretry = 0;
2291 -
2292 - if (cfg->state != STATE_NORMAL) {
2293 - dev_dbg(dev, "%s: Sync not required state=%u\n",
2294 - __func__, cfg->state);
2295 - return 0;
2296 - }
2297 -
2298 - mutex_lock(&afu->sync_active);
2299 - atomic_inc(&afu->cmds_active);
2300 - buf = kmalloc(sizeof(*cmd) + __alignof__(*cmd) - 1, GFP_KERNEL);
2301 - if (unlikely(!buf)) {
2302 - dev_err(dev, "%s: no memory for command\n", __func__);
2303 - rc = -ENOMEM;
2304 - goto out;
2305 - }
2306 -
2307 - cmd = (struct afu_cmd *)PTR_ALIGN(buf, __alignof__(*cmd));
2308 -
2309 - retry:
2310 - memset(cmd, 0, sizeof(*cmd));
2311 - memcpy(&cmd->rcb, rcb, sizeof(*rcb));
2312 - INIT_LIST_HEAD(&cmd->queue);
2313 - init_completion(&cmd->cevent);
2314 - cmd->parent = afu;
2315 - cmd->hwq_index = hwq->index;
2316 - cmd->rcb.ctx_id = hwq->ctx_hndl;
2317 -
2318 - dev_dbg(dev, "%s: afu=%p cmd=%p type=%02x nretry=%d\n",
2319 - __func__, afu, cmd, cmd->rcb.cdb[0], nretry);
2320 -
2321 - rc = afu->send_cmd(afu, cmd);
2322 - if (unlikely(rc)) {
2323 - rc = -ENOBUFS;
2324 - goto out;
2325 - }
2326 -
2327 - rc = wait_resp(afu, cmd);
2328 - switch (rc) {
2329 - case -ETIMEDOUT:
2330 - rc = afu->context_reset(hwq);
2331 - if (rc) {
2332 - /* Delete the command from pending_cmds list */
2333 - spin_lock_irqsave(&hwq->hsq_slock, lock_flags);
2334 - list_del(&cmd->list);
2335 - spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags);
2336 -
2337 - cxlflash_schedule_async_reset(cfg);
2338 - break;
2339 - }
2340 - fallthrough; /* to retry */
2341 - case -EAGAIN:
2342 - if (++nretry < 2)
2343 - goto retry;
2344 - fallthrough; /* to exit */
2345 - default:
2346 - break;
2347 - }
2348 -
2349 - if (rcb->ioasa)
2350 - *rcb->ioasa = cmd->sa;
2351 - out:
2352 - atomic_dec(&afu->cmds_active);
2353 - mutex_unlock(&afu->sync_active);
2354 - kfree(buf);
2355 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
2356 - return rc;
2357 - }
2358 -
2359 - /**
2360 - * cxlflash_afu_sync() - builds and sends an AFU sync command
2361 - * @afu: AFU associated with the host.
2362 - * @ctx: Identifies context requesting sync.
2363 - * @res: Identifies resource requesting sync.
2364 - * @mode: Type of sync to issue (lightweight, heavyweight, global).
2365 - *
2366 - * AFU sync operations are only necessary and allowed when the device is
2367 - * operating normally. When not operating normally, sync requests can occur as
2368 - * part of cleaning up resources associated with an adapter prior to removal.
2369 - * In this scenario, these requests are simply ignored (safe due to the AFU
2370 - * going away).
2371 - *
2372 - * Return:
2373 - * 0 on success, -errno on failure
2374 - */
2375 - int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t ctx, res_hndl_t res, u8 mode)
2376 - {
2377 - struct cxlflash_cfg *cfg = afu->parent;
2378 - struct device *dev = &cfg->dev->dev;
2379 - struct sisl_ioarcb rcb = { 0 };
2380 -
2381 - dev_dbg(dev, "%s: afu=%p ctx=%u res=%u mode=%u\n",
2382 - __func__, afu, ctx, res, mode);
2383 -
2384 - rcb.req_flags = SISL_REQ_FLAGS_AFU_CMD;
2385 - rcb.msi = SISL_MSI_RRQ_UPDATED;
2386 - rcb.timeout = MC_AFU_SYNC_TIMEOUT;
2387 -
2388 - rcb.cdb[0] = SISL_AFU_CMD_SYNC;
2389 - rcb.cdb[1] = mode;
2390 - put_unaligned_be16(ctx, &rcb.cdb[2]);
2391 - put_unaligned_be32(res, &rcb.cdb[4]);
2392 -
2393 - return send_afu_cmd(afu, &rcb);
2394 - }
2395 -
2396 - /**
2397 - * cxlflash_eh_abort_handler() - abort a SCSI command
2398 - * @scp: SCSI command to abort.
2399 - *
2400 - * CXL Flash devices do not support a single command abort. Reset the context
2401 - * as per SISLite specification. Flush any pending commands in the hardware
2402 - * queue before the reset.
2403 - *
2404 - * Return: SUCCESS/FAILED as defined in scsi/scsi.h
2405 - */
2406 - static int cxlflash_eh_abort_handler(struct scsi_cmnd *scp)
2407 - {
2408 - int rc = FAILED;
2409 - struct Scsi_Host *host = scp->device->host;
2410 - struct cxlflash_cfg *cfg = shost_priv(host);
2411 - struct afu_cmd *cmd = sc_to_afuc(scp);
2412 - struct device *dev = &cfg->dev->dev;
2413 - struct afu *afu = cfg->afu;
2414 - struct hwq *hwq = get_hwq(afu, cmd->hwq_index);
2415 -
2416 - dev_dbg(dev, "%s: (scp=%p) %d/%d/%d/%llu "
2417 - "cdb=(%08x-%08x-%08x-%08x)\n", __func__, scp, host->host_no,
2418 - scp->device->channel, scp->device->id, scp->device->lun,
2419 - get_unaligned_be32(&((u32 *)scp->cmnd)[0]),
2420 - get_unaligned_be32(&((u32 *)scp->cmnd)[1]),
2421 - get_unaligned_be32(&((u32 *)scp->cmnd)[2]),
2422 - get_unaligned_be32(&((u32 *)scp->cmnd)[3]));
2423 -
2424 - /* When the state is not normal, another reset/reload is in progress.
2425 - * Return failed and the mid-layer will invoke host reset handler.
2426 - */
2427 - if (cfg->state != STATE_NORMAL) {
2428 - dev_dbg(dev, "%s: Invalid state for abort, state=%d\n",
2429 - __func__, cfg->state);
2430 - goto out;
2431 - }
2432 -
2433 - rc = afu->context_reset(hwq);
2434 - if (unlikely(rc))
2435 - goto out;
2436 -
2437 - rc = SUCCESS;
2438 -
2439 - out:
2440 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
2441 - return rc;
2442 - }
2443 -
2444 - /**
2445 - * cxlflash_eh_device_reset_handler() - reset a single LUN
2446 - * @scp: SCSI command to send.
2447 - *
2448 - * Return:
2449 - * SUCCESS as defined in scsi/scsi.h
2450 - * FAILED as defined in scsi/scsi.h
2451 - */
2452 - static int cxlflash_eh_device_reset_handler(struct scsi_cmnd *scp)
2453 - {
2454 - int rc = SUCCESS;
2455 - struct scsi_device *sdev = scp->device;
2456 - struct Scsi_Host *host = sdev->host;
2457 - struct cxlflash_cfg *cfg = shost_priv(host);
2458 - struct device *dev = &cfg->dev->dev;
2459 - int rcr = 0;
2460 -
2461 - dev_dbg(dev, "%s: %d/%d/%d/%llu\n", __func__,
2462 - host->host_no, sdev->channel, sdev->id, sdev->lun);
2463 - retry:
2464 - switch (cfg->state) {
2465 - case STATE_NORMAL:
2466 - rcr = send_tmf(cfg, sdev, TMF_LUN_RESET);
2467 - if (unlikely(rcr))
2468 - rc = FAILED;
2469 - break;
2470 - case STATE_RESET:
2471 - wait_event(cfg->reset_waitq, cfg->state != STATE_RESET);
2472 - goto retry;
2473 - default:
2474 - rc = FAILED;
2475 - break;
2476 - }
2477 -
2478 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
2479 - return rc;
2480 - }
2481 -
2482 - /**
2483 - * cxlflash_eh_host_reset_handler() - reset the host adapter
2484 - * @scp: SCSI command from stack identifying host.
2485 - *
2486 - * Following a reset, the state is evaluated again in case an EEH occurred
2487 - * during the reset. In such a scenario, the host reset will either yield
2488 - * until the EEH recovery is complete or return success or failure based
2489 - * upon the current device state.
2490 - *
2491 - * Return:
2492 - * SUCCESS as defined in scsi/scsi.h
2493 - * FAILED as defined in scsi/scsi.h
2494 - */
2495 - static int cxlflash_eh_host_reset_handler(struct scsi_cmnd *scp)
2496 - {
2497 - int rc = SUCCESS;
2498 - int rcr = 0;
2499 - struct Scsi_Host *host = scp->device->host;
2500 - struct cxlflash_cfg *cfg = shost_priv(host);
2501 - struct device *dev = &cfg->dev->dev;
2502 -
2503 - dev_dbg(dev, "%s: %d\n", __func__, host->host_no);
2504 -
2505 - switch (cfg->state) {
2506 - case STATE_NORMAL:
2507 - cfg->state = STATE_RESET;
2508 - drain_ioctls(cfg);
2509 - cxlflash_mark_contexts_error(cfg);
2510 - rcr = afu_reset(cfg);
2511 - if (rcr) {
2512 - rc = FAILED;
2513 - cfg->state = STATE_FAILTERM;
2514 - } else
2515 - cfg->state = STATE_NORMAL;
2516 - wake_up_all(&cfg->reset_waitq);
2517 - ssleep(1);
2518 - fallthrough;
2519 - case STATE_RESET:
2520 - wait_event(cfg->reset_waitq, cfg->state != STATE_RESET);
2521 - if (cfg->state == STATE_NORMAL)
2522 - break;
2523 - fallthrough;
2524 - default:
2525 - rc = FAILED;
2526 - break;
2527 - }
2528 -
2529 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
2530 - return rc;
2531 - }
2532 -
2533 - /**
2534 - * cxlflash_change_queue_depth() - change the queue depth for the device
2535 - * @sdev: SCSI device destined for queue depth change.
2536 - * @qdepth: Requested queue depth value to set.
2537 - *
2538 - * The requested queue depth is capped to the maximum supported value.
2539 - *
2540 - * Return: The actual queue depth set.
2541 - */
2542 - static int cxlflash_change_queue_depth(struct scsi_device *sdev, int qdepth)
2543 - {
2544 -
2545 - if (qdepth > CXLFLASH_MAX_CMDS_PER_LUN)
2546 - qdepth = CXLFLASH_MAX_CMDS_PER_LUN;
2547 -
2548 - scsi_change_queue_depth(sdev, qdepth);
2549 - return sdev->queue_depth;
2550 - }
2551 -
2552 - /**
2553 - * cxlflash_show_port_status() - queries and presents the current port status
2554 - * @port: Desired port for status reporting.
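The capping rule in cxlflash_change_queue_depth() above reduces to a one-sided clamp. A minimal stand-alone sketch, assuming the driver's per-LUN command limit is 256 (the value is not shown in this fragment, so treat it as an assumption):

```c
/* Sketch of the queue-depth capping rule: a requested depth above the
 * per-LUN command limit is silently clamped; smaller values pass
 * through unchanged. The limit value here is an assumption. */
#define CXLFLASH_MAX_CMDS_PER_LUN 256 /* assumed value for illustration */

static int clamp_qdepth(int qdepth)
{
	if (qdepth > CXLFLASH_MAX_CMDS_PER_LUN)
		qdepth = CXLFLASH_MAX_CMDS_PER_LUN;
	return qdepth;
}
```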
2555 - * @cfg: Internal structure associated with the host.
2556 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2557 - *
2558 - * Return: The size of the ASCII string returned in @buf or -EINVAL.
2559 - */
2560 - static ssize_t cxlflash_show_port_status(u32 port,
2561 - struct cxlflash_cfg *cfg,
2562 - char *buf)
2563 - {
2564 - struct device *dev = &cfg->dev->dev;
2565 - char *disp_status;
2566 - u64 status;
2567 - __be64 __iomem *fc_port_regs;
2568 -
2569 - WARN_ON(port >= MAX_FC_PORTS);
2570 -
2571 - if (port >= cfg->num_fc_ports) {
2572 - dev_info(dev, "%s: Port %d not supported on this card.\n",
2573 - __func__, port);
2574 - return -EINVAL;
2575 - }
2576 -
2577 - fc_port_regs = get_fc_port_regs(cfg, port);
2578 - status = readq_be(&fc_port_regs[FC_MTIP_STATUS / 8]);
2579 - status &= FC_MTIP_STATUS_MASK;
2580 -
2581 - if (status == FC_MTIP_STATUS_ONLINE)
2582 - disp_status = "online";
2583 - else if (status == FC_MTIP_STATUS_OFFLINE)
2584 - disp_status = "offline";
2585 - else
2586 - disp_status = "unknown";
2587 -
2588 - return scnprintf(buf, PAGE_SIZE, "%s\n", disp_status);
2589 - }
2590 -
2591 - /**
2592 - * port0_show() - queries and presents the current status of port 0
2593 - * @dev: Generic device associated with the host owning the port.
2594 - * @attr: Device attribute representing the port.
2595 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2596 - *
2597 - * Return: The size of the ASCII string returned in @buf.
2598 - */
2599 - static ssize_t port0_show(struct device *dev,
2600 - struct device_attribute *attr,
2601 - char *buf)
2602 - {
2603 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2604 -
2605 - return cxlflash_show_port_status(0, cfg, buf);
2606 - }
2607 -
2608 - /**
2609 - * port1_show() - queries and presents the current status of port 1
2610 - * @dev: Generic device associated with the host owning the port.
2611 - * @attr: Device attribute representing the port.
2612 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2613 - *
2614 - * Return: The size of the ASCII string returned in @buf.
2615 - */
2616 - static ssize_t port1_show(struct device *dev,
2617 - struct device_attribute *attr,
2618 - char *buf)
2619 - {
2620 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2621 -
2622 - return cxlflash_show_port_status(1, cfg, buf);
2623 - }
2624 -
2625 - /**
2626 - * port2_show() - queries and presents the current status of port 2
2627 - * @dev: Generic device associated with the host owning the port.
2628 - * @attr: Device attribute representing the port.
2629 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2630 - *
2631 - * Return: The size of the ASCII string returned in @buf.
2632 - */
2633 - static ssize_t port2_show(struct device *dev,
2634 - struct device_attribute *attr,
2635 - char *buf)
2636 - {
2637 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2638 -
2639 - return cxlflash_show_port_status(2, cfg, buf);
2640 - }
2641 -
2642 - /**
2643 - * port3_show() - queries and presents the current status of port 3
2644 - * @dev: Generic device associated with the host owning the port.
2645 - * @attr: Device attribute representing the port.
2646 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2647 - *
2648 - * Return: The size of the ASCII string returned in @buf.
2649 - */
2650 - static ssize_t port3_show(struct device *dev,
2651 - struct device_attribute *attr,
2652 - char *buf)
2653 - {
2654 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2655 -
2656 - return cxlflash_show_port_status(3, cfg, buf);
2657 - }
2658 -
2659 - /**
2660 - * lun_mode_show() - presents the current LUN mode of the host
2661 - * @dev: Generic device associated with the host.
2662 - * @attr: Device attribute representing the LUN mode.
2663 - * @buf: Buffer of length PAGE_SIZE to report back the LUN mode in ASCII.
2664 - *
2665 - * Return: The size of the ASCII string returned in @buf.
2666 - */
2667 - static ssize_t lun_mode_show(struct device *dev,
2668 - struct device_attribute *attr, char *buf)
2669 - {
2670 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2671 - struct afu *afu = cfg->afu;
2672 -
2673 - return scnprintf(buf, PAGE_SIZE, "%u\n", afu->internal_lun);
2674 - }
2675 -
2676 - /**
2677 - * lun_mode_store() - sets the LUN mode of the host
2678 - * @dev: Generic device associated with the host.
2679 - * @attr: Device attribute representing the LUN mode.
2680 - * @buf: Buffer of length PAGE_SIZE containing the LUN mode in ASCII.
2681 - * @count: Length of data resizing in @buf.
2682 - *
2683 - * The CXL Flash AFU supports a dummy LUN mode where the external
2684 - * links and storage are not required. Space on the FPGA is used
2685 - * to create 1 or 2 small LUNs which are presented to the system
2686 - * as if they were a normal storage device. This feature is useful
2687 - * during development and also provides manufacturing with a way
2688 - * to test the AFU without an actual device.
2689 - *
2690 - * 0 = external LUN[s] (default)
2691 - * 1 = internal LUN (1 x 64K, 512B blocks, id 0)
2692 - * 2 = internal LUN (1 x 64K, 4K blocks, id 0)
2693 - * 3 = internal LUN (2 x 32K, 512B blocks, ids 0,1)
2694 - * 4 = internal LUN (2 x 32K, 4K blocks, ids 0,1)
2695 - *
2696 - * Return: The size of the ASCII string returned in @buf.
2697 - */
2698 - static ssize_t lun_mode_store(struct device *dev,
2699 - struct device_attribute *attr,
2700 - const char *buf, size_t count)
2701 - {
2702 - struct Scsi_Host *shost = class_to_shost(dev);
2703 - struct cxlflash_cfg *cfg = shost_priv(shost);
2704 - struct afu *afu = cfg->afu;
2705 - int rc;
2706 - u32 lun_mode;
2707 -
2708 - rc = kstrtouint(buf, 10, &lun_mode);
2709 - if (!rc && (lun_mode < 5) && (lun_mode != afu->internal_lun)) {
2710 - afu->internal_lun = lun_mode;
2711 -
2712 - /*
2713 - * When configured for internal LUN, there is only one channel,
2714 - * channel number 0, else there will be one less than the number
2715 - * of fc ports for this card.
2716 - */
2717 - if (afu->internal_lun)
2718 - shost->max_channel = 0;
2719 - else
2720 - shost->max_channel = PORTNUM2CHAN(cfg->num_fc_ports);
2721 -
2722 - afu_reset(cfg);
2723 - scsi_scan_host(cfg->host);
2724 - }
2725 -
2726 - return count;
2727 - }
2728 -
2729 - /**
2730 - * ioctl_version_show() - presents the current ioctl version of the host
2731 - * @dev: Generic device associated with the host.
2732 - * @attr: Device attribute representing the ioctl version.
2733 - * @buf: Buffer of length PAGE_SIZE to report back the ioctl version.
2734 - *
2735 - * Return: The size of the ASCII string returned in @buf.
2736 - */
2737 - static ssize_t ioctl_version_show(struct device *dev,
2738 - struct device_attribute *attr, char *buf)
2739 - {
2740 - ssize_t bytes = 0;
2741 -
2742 - bytes = scnprintf(buf, PAGE_SIZE,
2743 - "disk: %u\n", DK_CXLFLASH_VERSION_0);
2744 - bytes += scnprintf(buf + bytes, PAGE_SIZE - bytes,
2745 - "host: %u\n", HT_CXLFLASH_VERSION_0);
2746 -
2747 - return bytes;
2748 - }
2749 -
2750 - /**
2751 - * cxlflash_show_port_lun_table() - queries and presents the port LUN table
2752 - * @port: Desired port for status reporting.
2753 - * @cfg: Internal structure associated with the host.
2754 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2755 - *
2756 - * Return: The size of the ASCII string returned in @buf or -EINVAL.
2757 - */
2758 - static ssize_t cxlflash_show_port_lun_table(u32 port,
2759 - struct cxlflash_cfg *cfg,
2760 - char *buf)
2761 - {
2762 - struct device *dev = &cfg->dev->dev;
2763 - __be64 __iomem *fc_port_luns;
2764 - int i;
2765 - ssize_t bytes = 0;
2766 -
2767 - WARN_ON(port >= MAX_FC_PORTS);
2768 -
2769 - if (port >= cfg->num_fc_ports) {
2770 - dev_info(dev, "%s: Port %d not supported on this card.\n",
2771 - __func__, port);
2772 - return -EINVAL;
2773 - }
2774 -
2775 - fc_port_luns = get_fc_port_luns(cfg, port);
2776 -
2777 - for (i = 0; i < CXLFLASH_NUM_VLUNS; i++)
2778 - bytes += scnprintf(buf + bytes, PAGE_SIZE - bytes,
2779 - "%03d: %016llx\n",
2780 - i, readq_be(&fc_port_luns[i]));
2781 - return bytes;
2782 - }
2783 -
2784 - /**
2785 - * port0_lun_table_show() - presents the current LUN table of port 0
2786 - * @dev: Generic device associated with the host owning the port.
2787 - * @attr: Device attribute representing the port.
2788 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2789 - *
2790 - * Return: The size of the ASCII string returned in @buf.
2791 - */
2792 - static ssize_t port0_lun_table_show(struct device *dev,
2793 - struct device_attribute *attr,
2794 - char *buf)
2795 - {
2796 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2797 -
2798 - return cxlflash_show_port_lun_table(0, cfg, buf);
2799 - }
2800 -
2801 - /**
2802 - * port1_lun_table_show() - presents the current LUN table of port 1
2803 - * @dev: Generic device associated with the host owning the port.
2804 - * @attr: Device attribute representing the port.
2805 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2806 - *
2807 - * Return: The size of the ASCII string returned in @buf.
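The loop in cxlflash_show_port_lun_table() above shows the standard sysfs accumulation pattern: each formatting call writes at buf + bytes with the remaining space, so the output can never overrun the page-sized buffer. A user-space sketch of the same pattern, with snprintf() standing in for the kernel's scnprintf() and hypothetical sizes (four vLUN entries, the 22-byte "%03d: %016llx\n" lines fit easily in the 4 KiB buffer, so no overflow handling is shown):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

enum { NUM_VLUNS = 4, BUF_SIZE = 4096 }; /* hypothetical sizes */

/* Append one formatted line per LUN entry, always writing at
 * buf + bytes with the space that remains, as the sysfs show
 * routine does with scnprintf(). Returns total bytes written. */
static size_t format_lun_table(const unsigned long long *luns, char *buf)
{
	size_t bytes = 0;
	int i;

	for (i = 0; i < NUM_VLUNS; i++)
		bytes += (size_t)snprintf(buf + bytes, BUF_SIZE - bytes,
					  "%03d: %016llx\n", i, luns[i]);
	return bytes;
}
```

Note one difference from the kernel routine: snprintf() returns the would-be length on truncation, while scnprintf() returns the bytes actually written, which is what makes the kernel pattern overflow-safe without extra checks.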
2808 - */
2809 - static ssize_t port1_lun_table_show(struct device *dev,
2810 - struct device_attribute *attr,
2811 - char *buf)
2812 - {
2813 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2814 -
2815 - return cxlflash_show_port_lun_table(1, cfg, buf);
2816 - }
2817 -
2818 - /**
2819 - * port2_lun_table_show() - presents the current LUN table of port 2
2820 - * @dev: Generic device associated with the host owning the port.
2821 - * @attr: Device attribute representing the port.
2822 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2823 - *
2824 - * Return: The size of the ASCII string returned in @buf.
2825 - */
2826 - static ssize_t port2_lun_table_show(struct device *dev,
2827 - struct device_attribute *attr,
2828 - char *buf)
2829 - {
2830 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2831 -
2832 - return cxlflash_show_port_lun_table(2, cfg, buf);
2833 - }
2834 -
2835 - /**
2836 - * port3_lun_table_show() - presents the current LUN table of port 3
2837 - * @dev: Generic device associated with the host owning the port.
2838 - * @attr: Device attribute representing the port.
2839 - * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII.
2840 - *
2841 - * Return: The size of the ASCII string returned in @buf.
2842 - */
2843 - static ssize_t port3_lun_table_show(struct device *dev,
2844 - struct device_attribute *attr,
2845 - char *buf)
2846 - {
2847 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev));
2848 -
2849 - return cxlflash_show_port_lun_table(3, cfg, buf);
2850 - }
2851 -
2852 - /**
2853 - * irqpoll_weight_show() - presents the current IRQ poll weight for the host
2854 - * @dev: Generic device associated with the host.
2855 - * @attr: Device attribute representing the IRQ poll weight.
2856 - * @buf: Buffer of length PAGE_SIZE to report back the current IRQ poll
2857 - * weight in ASCII.
2858 - *
2859 - * An IRQ poll weight of 0 indicates polling is disabled.
2860 - * 2861 - * Return: The size of the ASCII string returned in @buf. 2862 - */ 2863 - static ssize_t irqpoll_weight_show(struct device *dev, 2864 - struct device_attribute *attr, char *buf) 2865 - { 2866 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2867 - struct afu *afu = cfg->afu; 2868 - 2869 - return scnprintf(buf, PAGE_SIZE, "%u\n", afu->irqpoll_weight); 2870 - } 2871 - 2872 - /** 2873 - * irqpoll_weight_store() - sets the current IRQ poll weight for the host 2874 - * @dev: Generic device associated with the host. 2875 - * @attr: Device attribute representing the IRQ poll weight. 2876 - * @buf: Buffer of length PAGE_SIZE containing the desired IRQ poll 2877 - * weight in ASCII. 2878 - * @count: Length of data resizing in @buf. 2879 - * 2880 - * An IRQ poll weight of 0 indicates polling is disabled. 2881 - * 2882 - * Return: The size of the ASCII string returned in @buf. 2883 - */ 2884 - static ssize_t irqpoll_weight_store(struct device *dev, 2885 - struct device_attribute *attr, 2886 - const char *buf, size_t count) 2887 - { 2888 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2889 - struct device *cfgdev = &cfg->dev->dev; 2890 - struct afu *afu = cfg->afu; 2891 - struct hwq *hwq; 2892 - u32 weight; 2893 - int rc, i; 2894 - 2895 - rc = kstrtouint(buf, 10, &weight); 2896 - if (rc) 2897 - return -EINVAL; 2898 - 2899 - if (weight > 256) { 2900 - dev_info(cfgdev, 2901 - "Invalid IRQ poll weight. 
It must be 256 or less.\n"); 2902 - return -EINVAL; 2903 - } 2904 - 2905 - if (weight == afu->irqpoll_weight) { 2906 - dev_info(cfgdev, 2907 - "Specified IRQ poll weight is the same as the current weight.\n"); 2908 - return -EINVAL; 2909 - } 2910 - 2911 - if (afu_is_irqpoll_enabled(afu)) { 2912 - for (i = 0; i < afu->num_hwqs; i++) { 2913 - hwq = get_hwq(afu, i); 2914 - 2915 - irq_poll_disable(&hwq->irqpoll); 2916 - } 2917 - } 2918 - 2919 - afu->irqpoll_weight = weight; 2920 - 2921 - if (weight > 0) { 2922 - for (i = 0; i < afu->num_hwqs; i++) { 2923 - hwq = get_hwq(afu, i); 2924 - 2925 - irq_poll_init(&hwq->irqpoll, weight, cxlflash_irqpoll); 2926 - } 2927 - } 2928 - 2929 - return count; 2930 - } 2931 - 2932 - /** 2933 - * num_hwqs_show() - presents the number of hardware queues for the host 2934 - * @dev: Generic device associated with the host. 2935 - * @attr: Device attribute representing the number of hardware queues. 2936 - * @buf: Buffer of length PAGE_SIZE to report back the number of hardware 2937 - * queues in ASCII. 2938 - * 2939 - * Return: The size of the ASCII string returned in @buf. 2940 - */ 2941 - static ssize_t num_hwqs_show(struct device *dev, 2942 - struct device_attribute *attr, char *buf) 2943 - { 2944 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2945 - struct afu *afu = cfg->afu; 2946 - 2947 - return scnprintf(buf, PAGE_SIZE, "%u\n", afu->num_hwqs); 2948 - } 2949 - 2950 - /** 2951 - * num_hwqs_store() - sets the number of hardware queues for the host 2952 - * @dev: Generic device associated with the host. 2953 - * @attr: Device attribute representing the number of hardware queues. 2954 - * @buf: Buffer of length PAGE_SIZE containing the number of hardware 2955 - * queues in ASCII. 2956 - * @count: Length of data residing in @buf. 2957 - * 2958 - * n > 0: num_hwqs = n 2959 - * n = 0: num_hwqs = num_online_cpus() 2960 - * n < 0: num_hwqs = num_online_cpus() / abs(n) 2961 - * 2962 - * Return: @count on success, -errno on failure. 
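The n > 0 / n = 0 / n < 0 mapping in the num_hwqs_store() comment can be sketched as a standalone function. This is a userspace sketch only: `map_num_hwqs` is a hypothetical helper, and its `online_cpus`/`max_hwqs` parameters stand in for the driver's `num_online_cpus()` and `CXLFLASH_MAX_HWQS` clamp.

```c
#include <stdlib.h>

/* Userspace sketch of the num_hwqs_store() mapping described above.
 * online_cpus stands in for num_online_cpus() and max_hwqs for the
 * CXLFLASH_MAX_HWQS clamp; both are parameters here for illustration. */
static int map_num_hwqs(int nhwqs, int online_cpus, int max_hwqs)
{
	int num_hwqs;

	if (nhwqs >= 1)
		num_hwqs = nhwqs;			/* n > 0: use n queues */
	else if (nhwqs == 0)
		num_hwqs = online_cpus;			/* n = 0: one per online CPU */
	else
		num_hwqs = online_cpus / abs(nhwqs);	/* n < 0: CPUs / |n| */

	return num_hwqs < max_hwqs ? num_hwqs : max_hwqs;
}
```

Note that on an 8-CPU host, writing -2 yields 4 queues, while writing a value above the clamp is capped rather than rejected.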
2963 - */ 2964 - static ssize_t num_hwqs_store(struct device *dev, 2965 - struct device_attribute *attr, 2966 - const char *buf, size_t count) 2967 - { 2968 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 2969 - struct afu *afu = cfg->afu; 2970 - int rc; 2971 - int nhwqs, num_hwqs; 2972 - 2973 - rc = kstrtoint(buf, 10, &nhwqs); 2974 - if (rc) 2975 - return -EINVAL; 2976 - 2977 - if (nhwqs >= 1) 2978 - num_hwqs = nhwqs; 2979 - else if (nhwqs == 0) 2980 - num_hwqs = num_online_cpus(); 2981 - else 2982 - num_hwqs = num_online_cpus() / abs(nhwqs); 2983 - 2984 - afu->desired_hwqs = min(num_hwqs, CXLFLASH_MAX_HWQS); 2985 - WARN_ON_ONCE(afu->desired_hwqs == 0); 2986 - 2987 - retry: 2988 - switch (cfg->state) { 2989 - case STATE_NORMAL: 2990 - cfg->state = STATE_RESET; 2991 - drain_ioctls(cfg); 2992 - cxlflash_mark_contexts_error(cfg); 2993 - rc = afu_reset(cfg); 2994 - if (rc) 2995 - cfg->state = STATE_FAILTERM; 2996 - else 2997 - cfg->state = STATE_NORMAL; 2998 - wake_up_all(&cfg->reset_waitq); 2999 - break; 3000 - case STATE_RESET: 3001 - wait_event(cfg->reset_waitq, cfg->state != STATE_RESET); 3002 - if (cfg->state == STATE_NORMAL) 3003 - goto retry; 3004 - fallthrough; 3005 - default: 3006 - /* Ideally should not happen */ 3007 - dev_err(dev, "%s: Device is not ready, state=%d\n", 3008 - __func__, cfg->state); 3009 - break; 3010 - } 3011 - 3012 - return count; 3013 - } 3014 - 3015 - static const char *hwq_mode_name[MAX_HWQ_MODE] = { "rr", "tag", "cpu" }; 3016 - 3017 - /** 3018 - * hwq_mode_show() - presents the HWQ steering mode for the host 3019 - * @dev: Generic device associated with the host. 3020 - * @attr: Device attribute representing the HWQ steering mode. 3021 - * @buf: Buffer of length PAGE_SIZE to report back the HWQ steering mode 3022 - * as a character string. 3023 - * 3024 - * Return: The size of the ASCII string returned in @buf. 
3025 - */ 3026 - static ssize_t hwq_mode_show(struct device *dev, 3027 - struct device_attribute *attr, char *buf) 3028 - { 3029 - struct cxlflash_cfg *cfg = shost_priv(class_to_shost(dev)); 3030 - struct afu *afu = cfg->afu; 3031 - 3032 - return scnprintf(buf, PAGE_SIZE, "%s\n", hwq_mode_name[afu->hwq_mode]); 3033 - } 3034 - 3035 - /** 3036 - * hwq_mode_store() - sets the HWQ steering mode for the host 3037 - * @dev: Generic device associated with the host. 3038 - * @attr: Device attribute representing the HWQ steering mode. 3039 - * @buf: Buffer of length PAGE_SIZE containing the HWQ steering mode 3040 - * as a character string. 3041 - * @count: Length of data residing in @buf. 3042 - * 3043 - * rr = Round-Robin 3044 - * tag = Block MQ Tagging 3045 - * cpu = CPU Affinity 3046 - * 3047 - * Return: @count on success, -errno on failure. 3048 - */ 3049 - static ssize_t hwq_mode_store(struct device *dev, 3050 - struct device_attribute *attr, 3051 - const char *buf, size_t count) 3052 - { 3053 - struct Scsi_Host *shost = class_to_shost(dev); 3054 - struct cxlflash_cfg *cfg = shost_priv(shost); 3055 - struct device *cfgdev = &cfg->dev->dev; 3056 - struct afu *afu = cfg->afu; 3057 - int i; 3058 - u32 mode = MAX_HWQ_MODE; 3059 - 3060 - for (i = 0; i < MAX_HWQ_MODE; i++) { 3061 - if (!strncmp(hwq_mode_name[i], buf, strlen(hwq_mode_name[i]))) { 3062 - mode = i; 3063 - break; 3064 - } 3065 - } 3066 - 3067 - if (mode >= MAX_HWQ_MODE) { 3068 - dev_info(cfgdev, "Invalid HWQ steering mode.\n"); 3069 - return -EINVAL; 3070 - } 3071 - 3072 - afu->hwq_mode = mode; 3073 - 3074 - return count; 3075 - } 3076 - 3077 - /** 3078 - * mode_show() - presents the current mode of the device 3079 - * @dev: Generic device associated with the device. 3080 - * @attr: Device attribute representing the device mode. 3081 - * @buf: Buffer of length PAGE_SIZE to report back the dev mode in ASCII. 3082 - * 3083 - * Return: The size of the ASCII string returned in @buf. 
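The prefix match that hwq_mode_store() performs against `hwq_mode_name[]` can be reproduced in isolation. A standalone sketch (the `parse_hwq_mode` helper is hypothetical; only the name table and the strncmp logic come from the driver):

```c
#include <string.h>

#define MAX_HWQ_MODE 3

/* Name table as in the driver; parse_hwq_mode() is an illustrative
 * wrapper around the strncmp loop in hwq_mode_store(). It returns the
 * mode index, or MAX_HWQ_MODE when nothing matches (which the store
 * path then rejects with -EINVAL). */
static const char *hwq_mode_name[MAX_HWQ_MODE] = { "rr", "tag", "cpu" };

static unsigned int parse_hwq_mode(const char *buf)
{
	unsigned int i;

	for (i = 0; i < MAX_HWQ_MODE; i++)
		if (!strncmp(hwq_mode_name[i], buf, strlen(hwq_mode_name[i])))
			return i;

	return MAX_HWQ_MODE;
}
```

Because only the leading strlen(name) bytes are compared, the trailing newline that `echo tag > hwq_mode` writes is tolerated.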
3084 - */ 3085 - static ssize_t mode_show(struct device *dev, 3086 - struct device_attribute *attr, char *buf) 3087 - { 3088 - struct scsi_device *sdev = to_scsi_device(dev); 3089 - 3090 - return scnprintf(buf, PAGE_SIZE, "%s\n", 3091 - sdev->hostdata ? "superpipe" : "legacy"); 3092 - } 3093 - 3094 - /* 3095 - * Host attributes 3096 - */ 3097 - static DEVICE_ATTR_RO(port0); 3098 - static DEVICE_ATTR_RO(port1); 3099 - static DEVICE_ATTR_RO(port2); 3100 - static DEVICE_ATTR_RO(port3); 3101 - static DEVICE_ATTR_RW(lun_mode); 3102 - static DEVICE_ATTR_RO(ioctl_version); 3103 - static DEVICE_ATTR_RO(port0_lun_table); 3104 - static DEVICE_ATTR_RO(port1_lun_table); 3105 - static DEVICE_ATTR_RO(port2_lun_table); 3106 - static DEVICE_ATTR_RO(port3_lun_table); 3107 - static DEVICE_ATTR_RW(irqpoll_weight); 3108 - static DEVICE_ATTR_RW(num_hwqs); 3109 - static DEVICE_ATTR_RW(hwq_mode); 3110 - 3111 - static struct attribute *cxlflash_host_attrs[] = { 3112 - &dev_attr_port0.attr, 3113 - &dev_attr_port1.attr, 3114 - &dev_attr_port2.attr, 3115 - &dev_attr_port3.attr, 3116 - &dev_attr_lun_mode.attr, 3117 - &dev_attr_ioctl_version.attr, 3118 - &dev_attr_port0_lun_table.attr, 3119 - &dev_attr_port1_lun_table.attr, 3120 - &dev_attr_port2_lun_table.attr, 3121 - &dev_attr_port3_lun_table.attr, 3122 - &dev_attr_irqpoll_weight.attr, 3123 - &dev_attr_num_hwqs.attr, 3124 - &dev_attr_hwq_mode.attr, 3125 - NULL 3126 - }; 3127 - 3128 - ATTRIBUTE_GROUPS(cxlflash_host); 3129 - 3130 - /* 3131 - * Device attributes 3132 - */ 3133 - static DEVICE_ATTR_RO(mode); 3134 - 3135 - static struct attribute *cxlflash_dev_attrs[] = { 3136 - &dev_attr_mode.attr, 3137 - NULL 3138 - }; 3139 - 3140 - ATTRIBUTE_GROUPS(cxlflash_dev); 3141 - 3142 - /* 3143 - * Host template 3144 - */ 3145 - static struct scsi_host_template driver_template = { 3146 - .module = THIS_MODULE, 3147 - .name = CXLFLASH_ADAPTER_NAME, 3148 - .info = cxlflash_driver_info, 3149 - .ioctl = cxlflash_ioctl, 3150 - .proc_name = CXLFLASH_NAME, 
3151 - .queuecommand = cxlflash_queuecommand, 3152 - .eh_abort_handler = cxlflash_eh_abort_handler, 3153 - .eh_device_reset_handler = cxlflash_eh_device_reset_handler, 3154 - .eh_host_reset_handler = cxlflash_eh_host_reset_handler, 3155 - .change_queue_depth = cxlflash_change_queue_depth, 3156 - .cmd_per_lun = CXLFLASH_MAX_CMDS_PER_LUN, 3157 - .can_queue = CXLFLASH_MAX_CMDS, 3158 - .cmd_size = sizeof(struct afu_cmd) + __alignof__(struct afu_cmd) - 1, 3159 - .this_id = -1, 3160 - .sg_tablesize = 1, /* No scatter gather support */ 3161 - .max_sectors = CXLFLASH_MAX_SECTORS, 3162 - .shost_groups = cxlflash_host_groups, 3163 - .sdev_groups = cxlflash_dev_groups, 3164 - }; 3165 - 3166 - /* 3167 - * Device dependent values 3168 - */ 3169 - static struct dev_dependent_vals dev_corsa_vals = { CXLFLASH_MAX_SECTORS, 3170 - CXLFLASH_WWPN_VPD_REQUIRED }; 3171 - static struct dev_dependent_vals dev_flash_gt_vals = { CXLFLASH_MAX_SECTORS, 3172 - CXLFLASH_NOTIFY_SHUTDOWN }; 3173 - static struct dev_dependent_vals dev_briard_vals = { CXLFLASH_MAX_SECTORS, 3174 - (CXLFLASH_NOTIFY_SHUTDOWN | 3175 - CXLFLASH_OCXL_DEV) }; 3176 - 3177 - /* 3178 - * PCI device binding table 3179 - */ 3180 - static const struct pci_device_id cxlflash_pci_table[] = { 3181 - {PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CORSA, 3182 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, (kernel_ulong_t)&dev_corsa_vals}, 3183 - {PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_FLASH_GT, 3184 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, (kernel_ulong_t)&dev_flash_gt_vals}, 3185 - {PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_BRIARD, 3186 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, (kernel_ulong_t)&dev_briard_vals}, 3187 - {} 3188 - }; 3189 - 3190 - MODULE_DEVICE_TABLE(pci, cxlflash_pci_table); 3191 - 3192 - /** 3193 - * cxlflash_worker_thread() - work thread handler for the AFU 3194 - * @work: Work structure contained within cxlflash associated with host. 
3195 - * 3196 - * Handles the following events: 3197 - * - Link reset which cannot be performed on interrupt context due to 3198 - * blocking up to a few seconds 3199 - * - Rescan the host 3200 - */ 3201 - static void cxlflash_worker_thread(struct work_struct *work) 3202 - { 3203 - struct cxlflash_cfg *cfg = container_of(work, struct cxlflash_cfg, 3204 - work_q); 3205 - struct afu *afu = cfg->afu; 3206 - struct device *dev = &cfg->dev->dev; 3207 - __be64 __iomem *fc_port_regs; 3208 - int port; 3209 - ulong lock_flags; 3210 - 3211 - /* Avoid MMIO if the device has failed */ 3212 - 3213 - if (cfg->state != STATE_NORMAL) 3214 - return; 3215 - 3216 - spin_lock_irqsave(cfg->host->host_lock, lock_flags); 3217 - 3218 - if (cfg->lr_state == LINK_RESET_REQUIRED) { 3219 - port = cfg->lr_port; 3220 - if (port < 0) 3221 - dev_err(dev, "%s: invalid port index %d\n", 3222 - __func__, port); 3223 - else { 3224 - spin_unlock_irqrestore(cfg->host->host_lock, 3225 - lock_flags); 3226 - 3227 - /* The reset can block... */ 3228 - fc_port_regs = get_fc_port_regs(cfg, port); 3229 - afu_link_reset(afu, port, fc_port_regs); 3230 - spin_lock_irqsave(cfg->host->host_lock, lock_flags); 3231 - } 3232 - 3233 - cfg->lr_state = LINK_RESET_COMPLETE; 3234 - } 3235 - 3236 - spin_unlock_irqrestore(cfg->host->host_lock, lock_flags); 3237 - 3238 - if (atomic_dec_if_positive(&cfg->scan_host_needed) >= 0) 3239 - scsi_scan_host(cfg->host); 3240 - } 3241 - 3242 - /** 3243 - * cxlflash_chr_open() - character device open handler 3244 - * @inode: Device inode associated with this character device. 3245 - * @file: File pointer for this device. 3246 - * 3247 - * Only users with admin privileges are allowed to open the character device. 
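The rescan check at the end of cxlflash_worker_thread() relies on atomic_dec_if_positive(), which decrements the counter only if it is positive and signals (via a negative result) that nothing was pending. A C11 userspace sketch of that primitive, under the assumption that the kernel semantics are "return old value minus one, decrementing only when old was positive":

```c
#include <stdatomic.h>

/* C11 sketch of the kernel's atomic_dec_if_positive(): decrement v only
 * if it is currently positive, and return the old value minus one, so a
 * result >= 0 means "we consumed a pending request". */
static int dec_if_positive(atomic_int *v)
{
	int old = atomic_load(v);

	do {
		if (old <= 0)
			break;	/* nothing pending; leave v untouched */
	} while (!atomic_compare_exchange_weak(v, &old, old - 1));
	/* on CAS failure, old is reloaded with the current value */

	return old - 1;
}
```

This is why the worker can safely race with error paths that bump `scan_host_needed`: at most one decrement succeeds per increment, so scsi_scan_host() runs once per request.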
3248 - * 3249 - * Return: 0 on success, -errno on failure 3250 - */ 3251 - static int cxlflash_chr_open(struct inode *inode, struct file *file) 3252 - { 3253 - struct cxlflash_cfg *cfg; 3254 - 3255 - if (!capable(CAP_SYS_ADMIN)) 3256 - return -EACCES; 3257 - 3258 - cfg = container_of(inode->i_cdev, struct cxlflash_cfg, cdev); 3259 - file->private_data = cfg; 3260 - 3261 - return 0; 3262 - } 3263 - 3264 - /** 3265 - * decode_hioctl() - translates encoded host ioctl to easily identifiable string 3266 - * @cmd: The host ioctl command to decode. 3267 - * 3268 - * Return: A string identifying the decoded host ioctl. 3269 - */ 3270 - static char *decode_hioctl(unsigned int cmd) 3271 - { 3272 - switch (cmd) { 3273 - case HT_CXLFLASH_LUN_PROVISION: 3274 - return __stringify_1(HT_CXLFLASH_LUN_PROVISION); 3275 - } 3276 - 3277 - return "UNKNOWN"; 3278 - } 3279 - 3280 - /** 3281 - * cxlflash_lun_provision() - host LUN provisioning handler 3282 - * @cfg: Internal structure associated with the host. 3283 - * @arg: Kernel copy of userspace ioctl data structure. 
3284 - * 3285 - * Return: 0 on success, -errno on failure 3286 - */ 3287 - static int cxlflash_lun_provision(struct cxlflash_cfg *cfg, void *arg) 3288 - { 3289 - struct ht_cxlflash_lun_provision *lunprov = arg; 3290 - struct afu *afu = cfg->afu; 3291 - struct device *dev = &cfg->dev->dev; 3292 - struct sisl_ioarcb rcb; 3293 - struct sisl_ioasa asa; 3294 - __be64 __iomem *fc_port_regs; 3295 - u16 port = lunprov->port; 3296 - u16 scmd = lunprov->hdr.subcmd; 3297 - u16 type; 3298 - u64 reg; 3299 - u64 size; 3300 - u64 lun_id; 3301 - int rc = 0; 3302 - 3303 - if (!afu_is_lun_provision(afu)) { 3304 - rc = -ENOTSUPP; 3305 - goto out; 3306 - } 3307 - 3308 - if (port >= cfg->num_fc_ports) { 3309 - rc = -EINVAL; 3310 - goto out; 3311 - } 3312 - 3313 - switch (scmd) { 3314 - case HT_CXLFLASH_LUN_PROVISION_SUBCMD_CREATE_LUN: 3315 - type = SISL_AFU_LUN_PROVISION_CREATE; 3316 - size = lunprov->size; 3317 - lun_id = 0; 3318 - break; 3319 - case HT_CXLFLASH_LUN_PROVISION_SUBCMD_DELETE_LUN: 3320 - type = SISL_AFU_LUN_PROVISION_DELETE; 3321 - size = 0; 3322 - lun_id = lunprov->lun_id; 3323 - break; 3324 - case HT_CXLFLASH_LUN_PROVISION_SUBCMD_QUERY_PORT: 3325 - fc_port_regs = get_fc_port_regs(cfg, port); 3326 - 3327 - reg = readq_be(&fc_port_regs[FC_MAX_NUM_LUNS / 8]); 3328 - lunprov->max_num_luns = reg; 3329 - reg = readq_be(&fc_port_regs[FC_CUR_NUM_LUNS / 8]); 3330 - lunprov->cur_num_luns = reg; 3331 - reg = readq_be(&fc_port_regs[FC_MAX_CAP_PORT / 8]); 3332 - lunprov->max_cap_port = reg; 3333 - reg = readq_be(&fc_port_regs[FC_CUR_CAP_PORT / 8]); 3334 - lunprov->cur_cap_port = reg; 3335 - 3336 - goto out; 3337 - default: 3338 - rc = -EINVAL; 3339 - goto out; 3340 - } 3341 - 3342 - memset(&rcb, 0, sizeof(rcb)); 3343 - memset(&asa, 0, sizeof(asa)); 3344 - rcb.req_flags = SISL_REQ_FLAGS_AFU_CMD; 3345 - rcb.lun_id = lun_id; 3346 - rcb.msi = SISL_MSI_RRQ_UPDATED; 3347 - rcb.timeout = MC_LUN_PROV_TIMEOUT; 3348 - rcb.ioasa = &asa; 3349 - 3350 - rcb.cdb[0] = SISL_AFU_CMD_LUN_PROVISION; 
3351 - rcb.cdb[1] = type; 3352 - rcb.cdb[2] = port; 3353 - put_unaligned_be64(size, &rcb.cdb[8]); 3354 - 3355 - rc = send_afu_cmd(afu, &rcb); 3356 - if (rc) { 3357 - dev_err(dev, "%s: send_afu_cmd failed rc=%d asc=%08x afux=%x\n", 3358 - __func__, rc, asa.ioasc, asa.afu_extra); 3359 - goto out; 3360 - } 3361 - 3362 - if (scmd == HT_CXLFLASH_LUN_PROVISION_SUBCMD_CREATE_LUN) { 3363 - lunprov->lun_id = (u64)asa.lunid_hi << 32 | asa.lunid_lo; 3364 - memcpy(lunprov->wwid, asa.wwid, sizeof(lunprov->wwid)); 3365 - } 3366 - out: 3367 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 3368 - return rc; 3369 - } 3370 - 3371 - /** 3372 - * cxlflash_afu_debug() - host AFU debug handler 3373 - * @cfg: Internal structure associated with the host. 3374 - * @arg: Kernel copy of userspace ioctl data structure. 3375 - * 3376 - * For debug requests requiring a data buffer, always provide an aligned 3377 - * (cache line) buffer to the AFU to appease any alignment requirements. 3378 - * 3379 - * Return: 0 on success, -errno on failure 3380 - */ 3381 - static int cxlflash_afu_debug(struct cxlflash_cfg *cfg, void *arg) 3382 - { 3383 - struct ht_cxlflash_afu_debug *afu_dbg = arg; 3384 - struct afu *afu = cfg->afu; 3385 - struct device *dev = &cfg->dev->dev; 3386 - struct sisl_ioarcb rcb; 3387 - struct sisl_ioasa asa; 3388 - char *buf = NULL; 3389 - char *kbuf = NULL; 3390 - void __user *ubuf = (__force void __user *)afu_dbg->data_ea; 3391 - u16 req_flags = SISL_REQ_FLAGS_AFU_CMD; 3392 - u32 ulen = afu_dbg->data_len; 3393 - bool is_write = afu_dbg->hdr.flags & HT_CXLFLASH_HOST_WRITE; 3394 - int rc = 0; 3395 - 3396 - if (!afu_is_afu_debug(afu)) { 3397 - rc = -ENOTSUPP; 3398 - goto out; 3399 - } 3400 - 3401 - if (ulen) { 3402 - req_flags |= SISL_REQ_FLAGS_SUP_UNDERRUN; 3403 - 3404 - if (ulen > HT_CXLFLASH_AFU_DEBUG_MAX_DATA_LEN) { 3405 - rc = -EINVAL; 3406 - goto out; 3407 - } 3408 - 3409 - buf = kmalloc(ulen + cache_line_size() - 1, GFP_KERNEL); 3410 - if (unlikely(!buf)) { 3411 - rc = 
-ENOMEM; 3412 - goto out; 3413 - } 3414 - 3415 - kbuf = PTR_ALIGN(buf, cache_line_size()); 3416 - 3417 - if (is_write) { 3418 - req_flags |= SISL_REQ_FLAGS_HOST_WRITE; 3419 - 3420 - if (copy_from_user(kbuf, ubuf, ulen)) { 3421 - rc = -EFAULT; 3422 - goto out; 3423 - } 3424 - } 3425 - } 3426 - 3427 - memset(&rcb, 0, sizeof(rcb)); 3428 - memset(&asa, 0, sizeof(asa)); 3429 - 3430 - rcb.req_flags = req_flags; 3431 - rcb.msi = SISL_MSI_RRQ_UPDATED; 3432 - rcb.timeout = MC_AFU_DEBUG_TIMEOUT; 3433 - rcb.ioasa = &asa; 3434 - 3435 - if (ulen) { 3436 - rcb.data_len = ulen; 3437 - rcb.data_ea = (uintptr_t)kbuf; 3438 - } 3439 - 3440 - rcb.cdb[0] = SISL_AFU_CMD_DEBUG; 3441 - memcpy(&rcb.cdb[4], afu_dbg->afu_subcmd, 3442 - HT_CXLFLASH_AFU_DEBUG_SUBCMD_LEN); 3443 - 3444 - rc = send_afu_cmd(afu, &rcb); 3445 - if (rc) { 3446 - dev_err(dev, "%s: send_afu_cmd failed rc=%d asc=%08x afux=%x\n", 3447 - __func__, rc, asa.ioasc, asa.afu_extra); 3448 - goto out; 3449 - } 3450 - 3451 - if (ulen && !is_write) { 3452 - if (copy_to_user(ubuf, kbuf, ulen)) 3453 - rc = -EFAULT; 3454 - } 3455 - out: 3456 - kfree(buf); 3457 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 3458 - return rc; 3459 - } 3460 - 3461 - /** 3462 - * cxlflash_chr_ioctl() - character device IOCTL handler 3463 - * @file: File pointer for this device. 3464 - * @cmd: IOCTL command. 3465 - * @arg: Userspace ioctl data structure. 3466 - * 3467 - * A read/write semaphore is used to implement a 'drain' of currently 3468 - * running ioctls. The read semaphore is taken at the beginning of each 3469 - * ioctl thread and released upon concluding execution. Additionally the 3470 - * semaphore should be released and then reacquired in any ioctl execution 3471 - * path which will wait for an event to occur that is outside the scope of 3472 - * the ioctl (i.e. an adapter reset). To drain the ioctls currently running, 3473 - * a thread simply needs to acquire the write semaphore. 
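cxlflash_afu_debug() above over-allocates by cache_line_size() - 1 bytes and rounds the pointer up with PTR_ALIGN so the AFU always sees a cache-line-aligned buffer. A userspace sketch of that pattern (the 128-byte "cache line" and both helper names are assumptions for illustration):

```c
#include <stdint.h>
#include <stdlib.h>

/* Userspace sketch of the over-allocate-and-align pattern used by
 * cxlflash_afu_debug(): request len + align - 1 bytes, then round the
 * pointer up to the next multiple of align (PTR_ALIGN in the kernel).
 * The 128-byte "cache line" is an assumption for illustration. */
#define FAKE_CACHE_LINE 128

static void *ptr_align(void *p, size_t align)
{
	return (void *)(((uintptr_t)p + align - 1) & ~(uintptr_t)(align - 1));
}

/* Returns the aligned pointer; *raw receives the pointer to free(). */
static char *alloc_cacheline_aligned(size_t len, void **raw)
{
	*raw = malloc(len + FAKE_CACHE_LINE - 1);
	if (!*raw)
		return NULL;
	return ptr_align(*raw, FAKE_CACHE_LINE);
}
```

As in the driver, the original (unaligned) pointer is the one that must be freed; the aligned pointer lies at most align - 1 bytes past it.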
3474 - * 3475 - * Return: 0 on success, -errno on failure 3476 - */ 3477 - static long cxlflash_chr_ioctl(struct file *file, unsigned int cmd, 3478 - unsigned long arg) 3479 - { 3480 - typedef int (*hioctl) (struct cxlflash_cfg *, void *); 3481 - 3482 - struct cxlflash_cfg *cfg = file->private_data; 3483 - struct device *dev = &cfg->dev->dev; 3484 - char buf[sizeof(union cxlflash_ht_ioctls)]; 3485 - void __user *uarg = (void __user *)arg; 3486 - struct ht_cxlflash_hdr *hdr; 3487 - size_t size = 0; 3488 - bool known_ioctl = false; 3489 - int idx = 0; 3490 - int rc = 0; 3491 - hioctl do_ioctl = NULL; 3492 - 3493 - static const struct { 3494 - size_t size; 3495 - hioctl ioctl; 3496 - } ioctl_tbl[] = { /* NOTE: order matters here */ 3497 - { sizeof(struct ht_cxlflash_lun_provision), cxlflash_lun_provision }, 3498 - { sizeof(struct ht_cxlflash_afu_debug), cxlflash_afu_debug }, 3499 - }; 3500 - 3501 - /* Hold read semaphore so we can drain if needed */ 3502 - down_read(&cfg->ioctl_rwsem); 3503 - 3504 - dev_dbg(dev, "%s: cmd=%u idx=%d tbl_size=%lu\n", 3505 - __func__, cmd, idx, sizeof(ioctl_tbl)); 3506 - 3507 - switch (cmd) { 3508 - case HT_CXLFLASH_LUN_PROVISION: 3509 - case HT_CXLFLASH_AFU_DEBUG: 3510 - known_ioctl = true; 3511 - idx = _IOC_NR(cmd) - _IOC_NR(HT_CXLFLASH_LUN_PROVISION); 3512 - size = ioctl_tbl[idx].size; 3513 - do_ioctl = ioctl_tbl[idx].ioctl; 3514 - 3515 - if (likely(do_ioctl)) 3516 - break; 3517 - 3518 - fallthrough; 3519 - default: 3520 - rc = -EINVAL; 3521 - goto out; 3522 - } 3523 - 3524 - if (unlikely(copy_from_user(&buf, uarg, size))) { 3525 - dev_err(dev, "%s: copy_from_user() fail " 3526 - "size=%lu cmd=%d (%s) uarg=%p\n", 3527 - __func__, size, cmd, decode_hioctl(cmd), uarg); 3528 - rc = -EFAULT; 3529 - goto out; 3530 - } 3531 - 3532 - hdr = (struct ht_cxlflash_hdr *)&buf; 3533 - if (hdr->version != HT_CXLFLASH_VERSION_0) { 3534 - dev_dbg(dev, "%s: Version %u not supported for %s\n", 3535 - __func__, hdr->version, decode_hioctl(cmd)); 3536 - rc 
= -EINVAL; 3537 - goto out; 3538 - } 3539 - 3540 - if (hdr->rsvd[0] || hdr->rsvd[1] || hdr->return_flags) { 3541 - dev_dbg(dev, "%s: Reserved/rflags populated\n", __func__); 3542 - rc = -EINVAL; 3543 - goto out; 3544 - } 3545 - 3546 - rc = do_ioctl(cfg, (void *)&buf); 3547 - if (likely(!rc)) 3548 - if (unlikely(copy_to_user(uarg, &buf, size))) { 3549 - dev_err(dev, "%s: copy_to_user() fail " 3550 - "size=%lu cmd=%d (%s) uarg=%p\n", 3551 - __func__, size, cmd, decode_hioctl(cmd), uarg); 3552 - rc = -EFAULT; 3553 - } 3554 - 3555 - /* fall through to exit */ 3556 - 3557 - out: 3558 - up_read(&cfg->ioctl_rwsem); 3559 - if (unlikely(rc && known_ioctl)) 3560 - dev_err(dev, "%s: ioctl %s (%08X) returned rc=%d\n", 3561 - __func__, decode_hioctl(cmd), cmd, rc); 3562 - else 3563 - dev_dbg(dev, "%s: ioctl %s (%08X) returned rc=%d\n", 3564 - __func__, decode_hioctl(cmd), cmd, rc); 3565 - return rc; 3566 - } 3567 - 3568 - /* 3569 - * Character device file operations 3570 - */ 3571 - static const struct file_operations cxlflash_chr_fops = { 3572 - .owner = THIS_MODULE, 3573 - .open = cxlflash_chr_open, 3574 - .unlocked_ioctl = cxlflash_chr_ioctl, 3575 - .compat_ioctl = compat_ptr_ioctl, 3576 - }; 3577 - 3578 - /** 3579 - * init_chrdev() - initialize the character device for the host 3580 - * @cfg: Internal structure associated with the host. 
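cxlflash_chr_ioctl() dispatches through ioctl_tbl[] by subtracting the first command's _IOC_NR from the incoming command's _IOC_NR, which works because the HT commands are numbered consecutively (hence the "order matters here" note). A sketch with made-up encodings — `IOC_NR`, `CMD_LUN_PROVISION`, and `CMD_AFU_DEBUG` are illustrative, not the real HT_CXLFLASH values:

```c
/* Sketch of the table-index computation in cxlflash_chr_ioctl(): ioctl
 * command numbers are consecutive, so subtracting the first command's
 * number yields the table slot. Encodings below are made up. */
#define IOC_NR(cmd)		((cmd) & 0xff)	/* like the kernel's _IOC_NR() */
#define CMD_LUN_PROVISION	0xc021u		/* illustrative encoding */
#define CMD_AFU_DEBUG		0xc022u		/* illustrative encoding */

static int ioctl_table_index(unsigned int cmd)
{
	return IOC_NR(cmd) - IOC_NR(CMD_LUN_PROVISION);
}
```

Adding a third HT command would only require appending a table entry, provided its ioctl number stays consecutive.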
3581 - * 3582 - * Return: 0 on success, -errno on failure 3583 - */ 3584 - static int init_chrdev(struct cxlflash_cfg *cfg) 3585 - { 3586 - struct device *dev = &cfg->dev->dev; 3587 - struct device *char_dev; 3588 - dev_t devno; 3589 - int minor; 3590 - int rc = 0; 3591 - 3592 - minor = cxlflash_get_minor(); 3593 - if (unlikely(minor < 0)) { 3594 - dev_err(dev, "%s: Exhausted allowed adapters\n", __func__); 3595 - rc = -ENOSPC; 3596 - goto out; 3597 - } 3598 - 3599 - devno = MKDEV(cxlflash_major, minor); 3600 - cdev_init(&cfg->cdev, &cxlflash_chr_fops); 3601 - 3602 - rc = cdev_add(&cfg->cdev, devno, 1); 3603 - if (rc) { 3604 - dev_err(dev, "%s: cdev_add failed rc=%d\n", __func__, rc); 3605 - goto err1; 3606 - } 3607 - 3608 - char_dev = device_create(&cxlflash_class, NULL, devno, 3609 - NULL, "cxlflash%d", minor); 3610 - if (IS_ERR(char_dev)) { 3611 - rc = PTR_ERR(char_dev); 3612 - dev_err(dev, "%s: device_create failed rc=%d\n", 3613 - __func__, rc); 3614 - goto err2; 3615 - } 3616 - 3617 - cfg->chardev = char_dev; 3618 - out: 3619 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 3620 - return rc; 3621 - err2: 3622 - cdev_del(&cfg->cdev); 3623 - err1: 3624 - cxlflash_put_minor(minor); 3625 - goto out; 3626 - } 3627 - 3628 - /** 3629 - * cxlflash_probe() - PCI entry point to add host 3630 - * @pdev: PCI device associated with the host. 3631 - * @dev_id: PCI device id associated with device. 3632 - * 3633 - * The device will initially start out in a 'probing' state and 3634 - * transition to the 'normal' state at the end of a successful 3635 - * probe. Should an EEH event occur during probe, the notification 3636 - * thread (error_detected()) will wait until the probe handler 3637 - * is nearly complete. At that time, the device will be moved to 3638 - * a 'probed' state and the EEH thread woken up to drive the slot 3639 - * reset and recovery (device moves to 'normal' state). Meanwhile, 3640 - * the probe will be allowed to exit successfully. 
3641 - * 3642 - * Return: 0 on success, -errno on failure 3643 - */ 3644 - static int cxlflash_probe(struct pci_dev *pdev, 3645 - const struct pci_device_id *dev_id) 3646 - { 3647 - struct Scsi_Host *host; 3648 - struct cxlflash_cfg *cfg = NULL; 3649 - struct device *dev = &pdev->dev; 3650 - struct dev_dependent_vals *ddv; 3651 - int rc = 0; 3652 - int k; 3653 - 3654 - dev_err_once(&pdev->dev, "DEPRECATION: cxlflash is deprecated and will be removed in a future kernel release\n"); 3655 - 3656 - dev_dbg(&pdev->dev, "%s: Found CXLFLASH with IRQ: %d\n", 3657 - __func__, pdev->irq); 3658 - 3659 - ddv = (struct dev_dependent_vals *)dev_id->driver_data; 3660 - driver_template.max_sectors = ddv->max_sectors; 3661 - 3662 - host = scsi_host_alloc(&driver_template, sizeof(struct cxlflash_cfg)); 3663 - if (!host) { 3664 - dev_err(dev, "%s: scsi_host_alloc failed\n", __func__); 3665 - rc = -ENOMEM; 3666 - goto out; 3667 - } 3668 - 3669 - host->max_id = CXLFLASH_MAX_NUM_TARGETS_PER_BUS; 3670 - host->max_lun = CXLFLASH_MAX_NUM_LUNS_PER_TARGET; 3671 - host->unique_id = host->host_no; 3672 - host->max_cmd_len = CXLFLASH_MAX_CDB_LEN; 3673 - 3674 - cfg = shost_priv(host); 3675 - cfg->state = STATE_PROBING; 3676 - cfg->host = host; 3677 - rc = alloc_mem(cfg); 3678 - if (rc) { 3679 - dev_err(dev, "%s: alloc_mem failed\n", __func__); 3680 - rc = -ENOMEM; 3681 - scsi_host_put(cfg->host); 3682 - goto out; 3683 - } 3684 - 3685 - cfg->init_state = INIT_STATE_NONE; 3686 - cfg->dev = pdev; 3687 - cfg->cxl_fops = cxlflash_cxl_fops; 3688 - cfg->ops = cxlflash_assign_ops(ddv); 3689 - WARN_ON_ONCE(!cfg->ops); 3690 - 3691 - /* 3692 - * Promoted LUNs move to the top of the LUN table. The rest stay on 3693 - * the bottom half. The bottom half grows from the end (index = 255), 3694 - * whereas the top half grows from the beginning (index = 0). 3695 - * 3696 - * Initialize the last LUN index for all possible ports. 
3697 - */ 3698 - cfg->promote_lun_index = 0; 3699 - 3700 - for (k = 0; k < MAX_FC_PORTS; k++) 3701 - cfg->last_lun_index[k] = CXLFLASH_NUM_VLUNS/2 - 1; 3702 - 3703 - cfg->dev_id = (struct pci_device_id *)dev_id; 3704 - 3705 - init_waitqueue_head(&cfg->tmf_waitq); 3706 - init_waitqueue_head(&cfg->reset_waitq); 3707 - 3708 - INIT_WORK(&cfg->work_q, cxlflash_worker_thread); 3709 - cfg->lr_state = LINK_RESET_INVALID; 3710 - cfg->lr_port = -1; 3711 - spin_lock_init(&cfg->tmf_slock); 3712 - mutex_init(&cfg->ctx_tbl_list_mutex); 3713 - mutex_init(&cfg->ctx_recovery_mutex); 3714 - init_rwsem(&cfg->ioctl_rwsem); 3715 - INIT_LIST_HEAD(&cfg->ctx_err_recovery); 3716 - INIT_LIST_HEAD(&cfg->lluns); 3717 - 3718 - pci_set_drvdata(pdev, cfg); 3719 - 3720 - rc = init_pci(cfg); 3721 - if (rc) { 3722 - dev_err(dev, "%s: init_pci failed rc=%d\n", __func__, rc); 3723 - goto out_remove; 3724 - } 3725 - cfg->init_state = INIT_STATE_PCI; 3726 - 3727 - cfg->afu_cookie = cfg->ops->create_afu(pdev); 3728 - if (unlikely(!cfg->afu_cookie)) { 3729 - dev_err(dev, "%s: create_afu failed\n", __func__); 3730 - rc = -ENOMEM; 3731 - goto out_remove; 3732 - } 3733 - 3734 - rc = init_afu(cfg); 3735 - if (rc && !wq_has_sleeper(&cfg->reset_waitq)) { 3736 - dev_err(dev, "%s: init_afu failed rc=%d\n", __func__, rc); 3737 - goto out_remove; 3738 - } 3739 - cfg->init_state = INIT_STATE_AFU; 3740 - 3741 - rc = init_scsi(cfg); 3742 - if (rc) { 3743 - dev_err(dev, "%s: init_scsi failed rc=%d\n", __func__, rc); 3744 - goto out_remove; 3745 - } 3746 - cfg->init_state = INIT_STATE_SCSI; 3747 - 3748 - rc = init_chrdev(cfg); 3749 - if (rc) { 3750 - dev_err(dev, "%s: init_chrdev failed rc=%d\n", __func__, rc); 3751 - goto out_remove; 3752 - } 3753 - cfg->init_state = INIT_STATE_CDEV; 3754 - 3755 - if (wq_has_sleeper(&cfg->reset_waitq)) { 3756 - cfg->state = STATE_PROBED; 3757 - wake_up_all(&cfg->reset_waitq); 3758 - } else 3759 - cfg->state = STATE_NORMAL; 3760 - out: 3761 - dev_dbg(dev, "%s: returning rc=%d\n", 
__func__, rc); 3762 - return rc; 3763 - 3764 - out_remove: 3765 - cfg->state = STATE_PROBED; 3766 - cxlflash_remove(pdev); 3767 - goto out; 3768 - } 3769 - 3770 - /** 3771 - * cxlflash_pci_error_detected() - called when a PCI error is detected 3772 - * @pdev: PCI device struct. 3773 - * @state: PCI channel state. 3774 - * 3775 - * When an EEH occurs during an active reset, wait until the reset is 3776 - * complete and then take action based upon the device state. 3777 - * 3778 - * Return: PCI_ERS_RESULT_NEED_RESET or PCI_ERS_RESULT_DISCONNECT 3779 - */ 3780 - static pci_ers_result_t cxlflash_pci_error_detected(struct pci_dev *pdev, 3781 - pci_channel_state_t state) 3782 - { 3783 - int rc = 0; 3784 - struct cxlflash_cfg *cfg = pci_get_drvdata(pdev); 3785 - struct device *dev = &cfg->dev->dev; 3786 - 3787 - dev_dbg(dev, "%s: pdev=%p state=%u\n", __func__, pdev, state); 3788 - 3789 - switch (state) { 3790 - case pci_channel_io_frozen: 3791 - wait_event(cfg->reset_waitq, cfg->state != STATE_RESET && 3792 - cfg->state != STATE_PROBING); 3793 - if (cfg->state == STATE_FAILTERM) 3794 - return PCI_ERS_RESULT_DISCONNECT; 3795 - 3796 - cfg->state = STATE_RESET; 3797 - scsi_block_requests(cfg->host); 3798 - drain_ioctls(cfg); 3799 - rc = cxlflash_mark_contexts_error(cfg); 3800 - if (unlikely(rc)) 3801 - dev_err(dev, "%s: Failed to mark user contexts rc=%d\n", 3802 - __func__, rc); 3803 - term_afu(cfg); 3804 - return PCI_ERS_RESULT_NEED_RESET; 3805 - case pci_channel_io_perm_failure: 3806 - cfg->state = STATE_FAILTERM; 3807 - wake_up_all(&cfg->reset_waitq); 3808 - scsi_unblock_requests(cfg->host); 3809 - return PCI_ERS_RESULT_DISCONNECT; 3810 - default: 3811 - break; 3812 - } 3813 - return PCI_ERS_RESULT_NEED_RESET; 3814 - } 3815 - 3816 - /** 3817 - * cxlflash_pci_slot_reset() - called when PCI slot has been reset 3818 - * @pdev: PCI device struct. 
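The cxlflash_pci_error_detected() handler above reduces to a small decision: a frozen channel quiesces the host and requests a reset unless the device has already failed terminally, and a permanent failure disconnects. A table-style sketch (the enum names are illustrative stand-ins for pci_channel_state_t and pci_ers_result_t, and the wait/quiesce side effects are elided):

```c
/* Sketch of the EEH decision in cxlflash_pci_error_detected(); names
 * are illustrative, not the kernel's. */
enum chan_state { CHAN_NORMAL, CHAN_FROZEN, CHAN_PERM_FAILURE };
enum ers_result { ERS_NEED_RESET, ERS_DISCONNECT };

static enum ers_result eeh_decision(enum chan_state state, int dev_failed)
{
	switch (state) {
	case CHAN_FROZEN:
		/* The driver first waits out any in-progress reset, then
		 * quiesces (block requests, drain ioctls, term AFU). */
		return dev_failed ? ERS_DISCONNECT : ERS_NEED_RESET;
	case CHAN_PERM_FAILURE:
		return ERS_DISCONNECT;
	default:
		return ERS_NEED_RESET;
	}
}
```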
3819 - * 3820 - * This routine is called by the pci error recovery code after the PCI 3821 - * slot has been reset, just before we should resume normal operations. 3822 - * 3823 - * Return: PCI_ERS_RESULT_RECOVERED or PCI_ERS_RESULT_DISCONNECT 3824 - */ 3825 - static pci_ers_result_t cxlflash_pci_slot_reset(struct pci_dev *pdev) 3826 - { 3827 - int rc = 0; 3828 - struct cxlflash_cfg *cfg = pci_get_drvdata(pdev); 3829 - struct device *dev = &cfg->dev->dev; 3830 - 3831 - dev_dbg(dev, "%s: pdev=%p\n", __func__, pdev); 3832 - 3833 - rc = init_afu(cfg); 3834 - if (unlikely(rc)) { 3835 - dev_err(dev, "%s: EEH recovery failed rc=%d\n", __func__, rc); 3836 - return PCI_ERS_RESULT_DISCONNECT; 3837 - } 3838 - 3839 - return PCI_ERS_RESULT_RECOVERED; 3840 - } 3841 - 3842 - /** 3843 - * cxlflash_pci_resume() - called when normal operation can resume 3844 - * @pdev: PCI device struct 3845 - */ 3846 - static void cxlflash_pci_resume(struct pci_dev *pdev) 3847 - { 3848 - struct cxlflash_cfg *cfg = pci_get_drvdata(pdev); 3849 - struct device *dev = &cfg->dev->dev; 3850 - 3851 - dev_dbg(dev, "%s: pdev=%p\n", __func__, pdev); 3852 - 3853 - cfg->state = STATE_NORMAL; 3854 - wake_up_all(&cfg->reset_waitq); 3855 - scsi_unblock_requests(cfg->host); 3856 - } 3857 - 3858 - /** 3859 - * cxlflash_devnode() - provides devtmpfs for devices in the cxlflash class 3860 - * @dev: Character device. 3861 - * @mode: Mode that can be used to verify access. 3862 - * 3863 - * Return: Allocated string describing the devtmpfs structure. 
3864 - */ 3865 - static char *cxlflash_devnode(const struct device *dev, umode_t *mode) 3866 - { 3867 - return kasprintf(GFP_KERNEL, "cxlflash/%s", dev_name(dev)); 3868 - } 3869 - 3870 - /** 3871 - * cxlflash_class_init() - create character device class 3872 - * 3873 - * Return: 0 on success, -errno on failure 3874 - */ 3875 - static int cxlflash_class_init(void) 3876 - { 3877 - dev_t devno; 3878 - int rc = 0; 3879 - 3880 - rc = alloc_chrdev_region(&devno, 0, CXLFLASH_MAX_ADAPTERS, "cxlflash"); 3881 - if (unlikely(rc)) { 3882 - pr_err("%s: alloc_chrdev_region failed rc=%d\n", __func__, rc); 3883 - goto out; 3884 - } 3885 - 3886 - cxlflash_major = MAJOR(devno); 3887 - 3888 - rc = class_register(&cxlflash_class); 3889 - if (rc) { 3890 - pr_err("%s: class_register failed rc=%d\n", __func__, rc); 3891 - goto err; 3892 - } 3893 - 3894 - out: 3895 - pr_debug("%s: returning rc=%d\n", __func__, rc); 3896 - return rc; 3897 - err: 3898 - unregister_chrdev_region(devno, CXLFLASH_MAX_ADAPTERS); 3899 - goto out; 3900 - } 3901 - 3902 - /** 3903 - * cxlflash_class_exit() - destroy character device class 3904 - */ 3905 - static void cxlflash_class_exit(void) 3906 - { 3907 - dev_t devno = MKDEV(cxlflash_major, 0); 3908 - 3909 - class_unregister(&cxlflash_class); 3910 - unregister_chrdev_region(devno, CXLFLASH_MAX_ADAPTERS); 3911 - } 3912 - 3913 - static const struct pci_error_handlers cxlflash_err_handler = { 3914 - .error_detected = cxlflash_pci_error_detected, 3915 - .slot_reset = cxlflash_pci_slot_reset, 3916 - .resume = cxlflash_pci_resume, 3917 - }; 3918 - 3919 - /* 3920 - * PCI device structure 3921 - */ 3922 - static struct pci_driver cxlflash_driver = { 3923 - .name = CXLFLASH_NAME, 3924 - .id_table = cxlflash_pci_table, 3925 - .probe = cxlflash_probe, 3926 - .remove = cxlflash_remove, 3927 - .shutdown = cxlflash_remove, 3928 - .err_handler = &cxlflash_err_handler, 3929 - }; 3930 - 3931 - /** 3932 - * init_cxlflash() - module entry point 3933 - * 3934 - * Return: 0 on 
success, -errno on failure 3935 - */ 3936 - static int __init init_cxlflash(void) 3937 - { 3938 - int rc; 3939 - 3940 - check_sizes(); 3941 - cxlflash_list_init(); 3942 - rc = cxlflash_class_init(); 3943 - if (unlikely(rc)) 3944 - goto out; 3945 - 3946 - rc = pci_register_driver(&cxlflash_driver); 3947 - if (unlikely(rc)) 3948 - goto err; 3949 - out: 3950 - pr_debug("%s: returning rc=%d\n", __func__, rc); 3951 - return rc; 3952 - err: 3953 - cxlflash_class_exit(); 3954 - goto out; 3955 - } 3956 - 3957 - /** 3958 - * exit_cxlflash() - module exit point 3959 - */ 3960 - static void __exit exit_cxlflash(void) 3961 - { 3962 - cxlflash_term_global_luns(); 3963 - cxlflash_free_errpage(); 3964 - 3965 - pci_unregister_driver(&cxlflash_driver); 3966 - cxlflash_class_exit(); 3967 - } 3968 - 3969 - module_init(init_cxlflash); 3970 - module_exit(exit_cxlflash);
-129
drivers/scsi/cxlflash/main.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */
2 - /*
3 - * CXL Flash Device Driver
4 - *
5 - * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
6 - *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
7 - *
8 - * Copyright (C) 2015 IBM Corporation
9 - */
10 -
11 - #ifndef _CXLFLASH_MAIN_H
12 - #define _CXLFLASH_MAIN_H
13 -
14 - #include <linux/list.h>
15 - #include <linux/types.h>
16 - #include <scsi/scsi.h>
17 - #include <scsi/scsi_device.h>
18 -
19 - #include "backend.h"
20 -
21 - #define CXLFLASH_NAME		"cxlflash"
22 - #define CXLFLASH_ADAPTER_NAME	"IBM POWER CXL Flash Adapter"
23 - #define CXLFLASH_MAX_ADAPTERS	32
24 -
25 - #define PCI_DEVICE_ID_IBM_CORSA		0x04F0
26 - #define PCI_DEVICE_ID_IBM_FLASH_GT	0x0600
27 - #define PCI_DEVICE_ID_IBM_BRIARD	0x0624
28 -
29 - /* Since there is only one target, make it 0 */
30 - #define CXLFLASH_TARGET		0
31 - #define CXLFLASH_MAX_CDB_LEN	16
32 -
33 - /* Really only one target per bus since the Texan is directly attached */
34 - #define CXLFLASH_MAX_NUM_TARGETS_PER_BUS	1
35 - #define CXLFLASH_MAX_NUM_LUNS_PER_TARGET	65536
36 -
37 - #define CXLFLASH_PCI_ERROR_RECOVERY_TIMEOUT	(120 * HZ)
38 -
39 - /* FC defines */
40 - #define FC_MTIP_CMDCONFIG 0x010
41 - #define FC_MTIP_STATUS 0x018
42 - #define FC_MAX_NUM_LUNS 0x080 /* Max LUNs host can provision for port */
43 - #define FC_CUR_NUM_LUNS 0x088 /* Cur number LUNs provisioned for port */
44 - #define FC_MAX_CAP_PORT 0x090 /* Max capacity all LUNs for port (4K blocks) */
45 - #define FC_CUR_CAP_PORT 0x098 /* Cur capacity all LUNs for port (4K blocks) */
46 -
47 - #define FC_PNAME 0x300
48 - #define FC_CONFIG 0x320
49 - #define FC_CONFIG2 0x328
50 - #define FC_STATUS 0x330
51 - #define FC_ERROR 0x380
52 - #define FC_ERRCAP 0x388
53 - #define FC_ERRMSK 0x390
54 - #define FC_CNT_CRCERR 0x538
55 - #define FC_CRC_THRESH 0x580
56 -
57 - #define FC_MTIP_CMDCONFIG_ONLINE	0x20ULL
58 - #define FC_MTIP_CMDCONFIG_OFFLINE	0x40ULL
59 -
60 - #define FC_MTIP_STATUS_MASK	0x30ULL
61 - #define FC_MTIP_STATUS_ONLINE	0x20ULL
62 - #define FC_MTIP_STATUS_OFFLINE	0x10ULL
63 -
64 - /* TIMEOUT and RETRY definitions */
65 -
66 - /* AFU command timeout values */
67 - #define MC_AFU_SYNC_TIMEOUT	5	/* 5 secs */
68 - #define MC_LUN_PROV_TIMEOUT	5	/* 5 secs */
69 - #define MC_AFU_DEBUG_TIMEOUT	5	/* 5 secs */
70 -
71 - /* AFU command room retry limit */
72 - #define MC_ROOM_RETRY_CNT	10
73 -
74 - /* FC CRC clear periodic timer */
75 - #define MC_CRC_THRESH 100	/* threshold in 5 mins */
76 -
77 - #define FC_PORT_STATUS_RETRY_CNT 100	/* 100 100ms retries = 10 seconds */
78 - #define FC_PORT_STATUS_RETRY_INTERVAL_US 100000	/* microseconds */
79 -
80 - /* VPD defines */
81 - #define CXLFLASH_VPD_LEN	256
82 - #define WWPN_LEN	16
83 - #define WWPN_BUF_LEN	(WWPN_LEN + 1)
84 -
85 - enum undo_level {
86 - 	UNDO_NOOP = 0,
87 - 	FREE_IRQ,
88 - 	UNMAP_ONE,
89 - 	UNMAP_TWO,
90 - 	UNMAP_THREE
91 - };
92 -
93 - struct dev_dependent_vals {
94 - 	u64 max_sectors;
95 - 	u64 flags;
96 - #define CXLFLASH_NOTIFY_SHUTDOWN	0x0000000000000001ULL
97 - #define CXLFLASH_WWPN_VPD_REQUIRED	0x0000000000000002ULL
98 - #define CXLFLASH_OCXL_DEV		0x0000000000000004ULL
99 - };
100 -
101 - static inline const struct cxlflash_backend_ops *
102 - cxlflash_assign_ops(struct dev_dependent_vals *ddv)
103 - {
104 - 	const struct cxlflash_backend_ops *ops = NULL;
105 -
106 - #ifdef CONFIG_OCXL_BASE
107 - 	if (ddv->flags & CXLFLASH_OCXL_DEV)
108 - 		ops = &cxlflash_ocxl_ops;
109 - #endif
110 -
111 - #ifdef CONFIG_CXL_BASE
112 - 	if (!(ddv->flags & CXLFLASH_OCXL_DEV))
113 - 		ops = &cxlflash_cxl_ops;
114 - #endif
115 -
116 - 	return ops;
117 - }
118 -
119 - struct asyc_intr_info {
120 - 	u64 status;
121 - 	char *desc;
122 - 	u8 port;
123 - 	u8 action;
124 - #define CLR_FC_ERROR	0x01
125 - #define LINK_RESET	0x02
126 - #define SCAN_HOST	0x04
127 - };
128 -
129 - #endif /* _CXLFLASH_MAIN_H */
-1399
drivers/scsi/cxlflash/ocxl_hw.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 - * Uma Krishnan <ukrishn@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2018 IBM Corporation 9 - */ 10 - 11 - #include <linux/file.h> 12 - #include <linux/idr.h> 13 - #include <linux/module.h> 14 - #include <linux/mount.h> 15 - #include <linux/pseudo_fs.h> 16 - #include <linux/poll.h> 17 - #include <linux/sched/signal.h> 18 - #include <linux/interrupt.h> 19 - #include <linux/irqdomain.h> 20 - #include <asm/xive.h> 21 - #include <misc/ocxl.h> 22 - 23 - #include <uapi/misc/cxl.h> 24 - 25 - #include "backend.h" 26 - #include "ocxl_hw.h" 27 - 28 - /* 29 - * Pseudo-filesystem to allocate inodes. 30 - */ 31 - 32 - #define OCXLFLASH_FS_MAGIC 0x1697698f 33 - 34 - static int ocxlflash_fs_cnt; 35 - static struct vfsmount *ocxlflash_vfs_mount; 36 - 37 - static int ocxlflash_fs_init_fs_context(struct fs_context *fc) 38 - { 39 - return init_pseudo(fc, OCXLFLASH_FS_MAGIC) ? 0 : -ENOMEM; 40 - } 41 - 42 - static struct file_system_type ocxlflash_fs_type = { 43 - .name = "ocxlflash", 44 - .owner = THIS_MODULE, 45 - .init_fs_context = ocxlflash_fs_init_fs_context, 46 - .kill_sb = kill_anon_super, 47 - }; 48 - 49 - /* 50 - * ocxlflash_release_mapping() - release the memory mapping 51 - * @ctx: Context whose mapping is to be released. 52 - */ 53 - static void ocxlflash_release_mapping(struct ocxlflash_context *ctx) 54 - { 55 - if (ctx->mapping) 56 - simple_release_fs(&ocxlflash_vfs_mount, &ocxlflash_fs_cnt); 57 - ctx->mapping = NULL; 58 - } 59 - 60 - /* 61 - * ocxlflash_getfile() - allocate pseudo filesystem, inode, and the file 62 - * @dev: Generic device of the host. 63 - * @name: Name of the pseudo filesystem. 64 - * @fops: File operations. 65 - * @priv: Private data. 66 - * @flags: Flags for the file. 
67 - * 68 - * Return: pointer to the file on success, ERR_PTR on failure 69 - */ 70 - static struct file *ocxlflash_getfile(struct device *dev, const char *name, 71 - const struct file_operations *fops, 72 - void *priv, int flags) 73 - { 74 - struct file *file; 75 - struct inode *inode; 76 - int rc; 77 - 78 - if (fops->owner && !try_module_get(fops->owner)) { 79 - dev_err(dev, "%s: Owner does not exist\n", __func__); 80 - rc = -ENOENT; 81 - goto err1; 82 - } 83 - 84 - rc = simple_pin_fs(&ocxlflash_fs_type, &ocxlflash_vfs_mount, 85 - &ocxlflash_fs_cnt); 86 - if (unlikely(rc < 0)) { 87 - dev_err(dev, "%s: Cannot mount ocxlflash pseudofs rc=%d\n", 88 - __func__, rc); 89 - goto err2; 90 - } 91 - 92 - inode = alloc_anon_inode(ocxlflash_vfs_mount->mnt_sb); 93 - if (IS_ERR(inode)) { 94 - rc = PTR_ERR(inode); 95 - dev_err(dev, "%s: alloc_anon_inode failed rc=%d\n", 96 - __func__, rc); 97 - goto err3; 98 - } 99 - 100 - file = alloc_file_pseudo(inode, ocxlflash_vfs_mount, name, 101 - flags & (O_ACCMODE | O_NONBLOCK), fops); 102 - if (IS_ERR(file)) { 103 - rc = PTR_ERR(file); 104 - dev_err(dev, "%s: alloc_file failed rc=%d\n", 105 - __func__, rc); 106 - goto err4; 107 - } 108 - 109 - file->private_data = priv; 110 - out: 111 - return file; 112 - err4: 113 - iput(inode); 114 - err3: 115 - simple_release_fs(&ocxlflash_vfs_mount, &ocxlflash_fs_cnt); 116 - err2: 117 - module_put(fops->owner); 118 - err1: 119 - file = ERR_PTR(rc); 120 - goto out; 121 - } 122 - 123 - /** 124 - * ocxlflash_psa_map() - map the process specific MMIO space 125 - * @ctx_cookie: Adapter context for which the mapping needs to be done. 
126 - * 127 - * Return: MMIO pointer of the mapped region 128 - */ 129 - static void __iomem *ocxlflash_psa_map(void *ctx_cookie) 130 - { 131 - struct ocxlflash_context *ctx = ctx_cookie; 132 - struct device *dev = ctx->hw_afu->dev; 133 - 134 - mutex_lock(&ctx->state_mutex); 135 - if (ctx->state != STARTED) { 136 - dev_err(dev, "%s: Context not started, state=%d\n", __func__, 137 - ctx->state); 138 - mutex_unlock(&ctx->state_mutex); 139 - return NULL; 140 - } 141 - mutex_unlock(&ctx->state_mutex); 142 - 143 - return ioremap(ctx->psn_phys, ctx->psn_size); 144 - } 145 - 146 - /** 147 - * ocxlflash_psa_unmap() - unmap the process specific MMIO space 148 - * @addr: MMIO pointer to unmap. 149 - */ 150 - static void ocxlflash_psa_unmap(void __iomem *addr) 151 - { 152 - iounmap(addr); 153 - } 154 - 155 - /** 156 - * ocxlflash_process_element() - get process element of the adapter context 157 - * @ctx_cookie: Adapter context associated with the process element. 158 - * 159 - * Return: process element of the adapter context 160 - */ 161 - static int ocxlflash_process_element(void *ctx_cookie) 162 - { 163 - struct ocxlflash_context *ctx = ctx_cookie; 164 - 165 - return ctx->pe; 166 - } 167 - 168 - /** 169 - * afu_map_irq() - map the interrupt of the adapter context 170 - * @flags: Flags. 171 - * @ctx: Adapter context. 172 - * @num: Per-context AFU interrupt number. 173 - * @handler: Interrupt handler to register. 174 - * @cookie: Interrupt handler private data. 175 - * @name: Name of the interrupt. 
176 - * 177 - * Return: 0 on success, -errno on failure 178 - */ 179 - static int afu_map_irq(u64 flags, struct ocxlflash_context *ctx, int num, 180 - irq_handler_t handler, void *cookie, char *name) 181 - { 182 - struct ocxl_hw_afu *afu = ctx->hw_afu; 183 - struct device *dev = afu->dev; 184 - struct ocxlflash_irqs *irq; 185 - struct xive_irq_data *xd; 186 - u32 virq; 187 - int rc = 0; 188 - 189 - if (num < 0 || num >= ctx->num_irqs) { 190 - dev_err(dev, "%s: Interrupt %d not allocated\n", __func__, num); 191 - rc = -ENOENT; 192 - goto out; 193 - } 194 - 195 - irq = &ctx->irqs[num]; 196 - virq = irq_create_mapping(NULL, irq->hwirq); 197 - if (unlikely(!virq)) { 198 - dev_err(dev, "%s: irq_create_mapping failed\n", __func__); 199 - rc = -ENOMEM; 200 - goto out; 201 - } 202 - 203 - rc = request_irq(virq, handler, 0, name, cookie); 204 - if (unlikely(rc)) { 205 - dev_err(dev, "%s: request_irq failed rc=%d\n", __func__, rc); 206 - goto err1; 207 - } 208 - 209 - xd = irq_get_handler_data(virq); 210 - if (unlikely(!xd)) { 211 - dev_err(dev, "%s: Can't get interrupt data\n", __func__); 212 - rc = -ENXIO; 213 - goto err2; 214 - } 215 - 216 - irq->virq = virq; 217 - irq->vtrig = xd->trig_mmio; 218 - out: 219 - return rc; 220 - err2: 221 - free_irq(virq, cookie); 222 - err1: 223 - irq_dispose_mapping(virq); 224 - goto out; 225 - } 226 - 227 - /** 228 - * ocxlflash_map_afu_irq() - map the interrupt of the adapter context 229 - * @ctx_cookie: Adapter context. 230 - * @num: Per-context AFU interrupt number. 231 - * @handler: Interrupt handler to register. 232 - * @cookie: Interrupt handler private data. 233 - * @name: Name of the interrupt. 
234 - * 235 - * Return: 0 on success, -errno on failure 236 - */ 237 - static int ocxlflash_map_afu_irq(void *ctx_cookie, int num, 238 - irq_handler_t handler, void *cookie, 239 - char *name) 240 - { 241 - return afu_map_irq(0, ctx_cookie, num, handler, cookie, name); 242 - } 243 - 244 - /** 245 - * afu_unmap_irq() - unmap the interrupt 246 - * @flags: Flags. 247 - * @ctx: Adapter context. 248 - * @num: Per-context AFU interrupt number. 249 - * @cookie: Interrupt handler private data. 250 - */ 251 - static void afu_unmap_irq(u64 flags, struct ocxlflash_context *ctx, int num, 252 - void *cookie) 253 - { 254 - struct ocxl_hw_afu *afu = ctx->hw_afu; 255 - struct device *dev = afu->dev; 256 - struct ocxlflash_irqs *irq; 257 - 258 - if (num < 0 || num >= ctx->num_irqs) { 259 - dev_err(dev, "%s: Interrupt %d not allocated\n", __func__, num); 260 - return; 261 - } 262 - 263 - irq = &ctx->irqs[num]; 264 - 265 - if (irq_find_mapping(NULL, irq->hwirq)) { 266 - free_irq(irq->virq, cookie); 267 - irq_dispose_mapping(irq->virq); 268 - } 269 - 270 - memset(irq, 0, sizeof(*irq)); 271 - } 272 - 273 - /** 274 - * ocxlflash_unmap_afu_irq() - unmap the interrupt 275 - * @ctx_cookie: Adapter context. 276 - * @num: Per-context AFU interrupt number. 277 - * @cookie: Interrupt handler private data. 278 - */ 279 - static void ocxlflash_unmap_afu_irq(void *ctx_cookie, int num, void *cookie) 280 - { 281 - return afu_unmap_irq(0, ctx_cookie, num, cookie); 282 - } 283 - 284 - /** 285 - * ocxlflash_get_irq_objhndl() - get the object handle for an interrupt 286 - * @ctx_cookie: Context associated with the interrupt. 287 - * @irq: Interrupt number. 
288 - * 289 - * Return: effective address of the mapped region 290 - */ 291 - static u64 ocxlflash_get_irq_objhndl(void *ctx_cookie, int irq) 292 - { 293 - struct ocxlflash_context *ctx = ctx_cookie; 294 - 295 - if (irq < 0 || irq >= ctx->num_irqs) 296 - return 0; 297 - 298 - return (__force u64)ctx->irqs[irq].vtrig; 299 - } 300 - 301 - /** 302 - * ocxlflash_xsl_fault() - callback when translation error is triggered 303 - * @data: Private data provided at callback registration, the context. 304 - * @addr: Address that triggered the error. 305 - * @dsisr: Value of dsisr register. 306 - */ 307 - static void ocxlflash_xsl_fault(void *data, u64 addr, u64 dsisr) 308 - { 309 - struct ocxlflash_context *ctx = data; 310 - 311 - spin_lock(&ctx->slock); 312 - ctx->fault_addr = addr; 313 - ctx->fault_dsisr = dsisr; 314 - ctx->pending_fault = true; 315 - spin_unlock(&ctx->slock); 316 - 317 - wake_up_all(&ctx->wq); 318 - } 319 - 320 - /** 321 - * start_context() - local routine to start a context 322 - * @ctx: Adapter context to be started. 323 - * 324 - * Assign the context specific MMIO space, add and enable the PE. 
325 - * 326 - * Return: 0 on success, -errno on failure 327 - */ 328 - static int start_context(struct ocxlflash_context *ctx) 329 - { 330 - struct ocxl_hw_afu *afu = ctx->hw_afu; 331 - struct ocxl_afu_config *acfg = &afu->acfg; 332 - void *link_token = afu->link_token; 333 - struct pci_dev *pdev = afu->pdev; 334 - struct device *dev = afu->dev; 335 - bool master = ctx->master; 336 - struct mm_struct *mm; 337 - int rc = 0; 338 - u32 pid; 339 - 340 - mutex_lock(&ctx->state_mutex); 341 - if (ctx->state != OPENED) { 342 - dev_err(dev, "%s: Context state invalid, state=%d\n", 343 - __func__, ctx->state); 344 - rc = -EINVAL; 345 - goto out; 346 - } 347 - 348 - if (master) { 349 - ctx->psn_size = acfg->global_mmio_size; 350 - ctx->psn_phys = afu->gmmio_phys; 351 - } else { 352 - ctx->psn_size = acfg->pp_mmio_stride; 353 - ctx->psn_phys = afu->ppmmio_phys + (ctx->pe * ctx->psn_size); 354 - } 355 - 356 - /* pid and mm not set for master contexts */ 357 - if (master) { 358 - pid = 0; 359 - mm = NULL; 360 - } else { 361 - pid = current->mm->context.id; 362 - mm = current->mm; 363 - } 364 - 365 - rc = ocxl_link_add_pe(link_token, ctx->pe, pid, 0, 0, 366 - pci_dev_id(pdev), mm, ocxlflash_xsl_fault, 367 - ctx); 368 - if (unlikely(rc)) { 369 - dev_err(dev, "%s: ocxl_link_add_pe failed rc=%d\n", 370 - __func__, rc); 371 - goto out; 372 - } 373 - 374 - ctx->state = STARTED; 375 - out: 376 - mutex_unlock(&ctx->state_mutex); 377 - return rc; 378 - } 379 - 380 - /** 381 - * ocxlflash_start_context() - start a kernel context 382 - * @ctx_cookie: Adapter context to be started. 383 - * 384 - * Return: 0 on success, -errno on failure 385 - */ 386 - static int ocxlflash_start_context(void *ctx_cookie) 387 - { 388 - struct ocxlflash_context *ctx = ctx_cookie; 389 - 390 - return start_context(ctx); 391 - } 392 - 393 - /** 394 - * ocxlflash_stop_context() - stop a context 395 - * @ctx_cookie: Adapter context to be stopped. 
396 - * 397 - * Return: 0 on success, -errno on failure 398 - */ 399 - static int ocxlflash_stop_context(void *ctx_cookie) 400 - { 401 - struct ocxlflash_context *ctx = ctx_cookie; 402 - struct ocxl_hw_afu *afu = ctx->hw_afu; 403 - struct ocxl_afu_config *acfg = &afu->acfg; 404 - struct pci_dev *pdev = afu->pdev; 405 - struct device *dev = afu->dev; 406 - enum ocxlflash_ctx_state state; 407 - int rc = 0; 408 - 409 - mutex_lock(&ctx->state_mutex); 410 - state = ctx->state; 411 - ctx->state = CLOSED; 412 - mutex_unlock(&ctx->state_mutex); 413 - if (state != STARTED) 414 - goto out; 415 - 416 - rc = ocxl_config_terminate_pasid(pdev, acfg->dvsec_afu_control_pos, 417 - ctx->pe); 418 - if (unlikely(rc)) { 419 - dev_err(dev, "%s: ocxl_config_terminate_pasid failed rc=%d\n", 420 - __func__, rc); 421 - /* If EBUSY, PE could be referenced in future by the AFU */ 422 - if (rc == -EBUSY) 423 - goto out; 424 - } 425 - 426 - rc = ocxl_link_remove_pe(afu->link_token, ctx->pe); 427 - if (unlikely(rc)) { 428 - dev_err(dev, "%s: ocxl_link_remove_pe failed rc=%d\n", 429 - __func__, rc); 430 - goto out; 431 - } 432 - out: 433 - return rc; 434 - } 435 - 436 - /** 437 - * ocxlflash_afu_reset() - reset the AFU 438 - * @ctx_cookie: Adapter context. 439 - */ 440 - static int ocxlflash_afu_reset(void *ctx_cookie) 441 - { 442 - struct ocxlflash_context *ctx = ctx_cookie; 443 - struct device *dev = ctx->hw_afu->dev; 444 - 445 - /* Pending implementation from OCXL transport services */ 446 - dev_err_once(dev, "%s: afu_reset() fop not supported\n", __func__); 447 - 448 - /* Silently return success until it is implemented */ 449 - return 0; 450 - } 451 - 452 - /** 453 - * ocxlflash_set_master() - sets the context as master 454 - * @ctx_cookie: Adapter context to set as master. 
455 - */ 456 - static void ocxlflash_set_master(void *ctx_cookie) 457 - { 458 - struct ocxlflash_context *ctx = ctx_cookie; 459 - 460 - ctx->master = true; 461 - } 462 - 463 - /** 464 - * ocxlflash_get_context() - obtains the context associated with the host 465 - * @pdev: PCI device associated with the host. 466 - * @afu_cookie: Hardware AFU associated with the host. 467 - * 468 - * Return: returns the pointer to host adapter context 469 - */ 470 - static void *ocxlflash_get_context(struct pci_dev *pdev, void *afu_cookie) 471 - { 472 - struct ocxl_hw_afu *afu = afu_cookie; 473 - 474 - return afu->ocxl_ctx; 475 - } 476 - 477 - /** 478 - * ocxlflash_dev_context_init() - allocate and initialize an adapter context 479 - * @pdev: PCI device associated with the host. 480 - * @afu_cookie: Hardware AFU associated with the host. 481 - * 482 - * Return: returns the adapter context on success, ERR_PTR on failure 483 - */ 484 - static void *ocxlflash_dev_context_init(struct pci_dev *pdev, void *afu_cookie) 485 - { 486 - struct ocxl_hw_afu *afu = afu_cookie; 487 - struct device *dev = afu->dev; 488 - struct ocxlflash_context *ctx; 489 - int rc; 490 - 491 - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); 492 - if (unlikely(!ctx)) { 493 - dev_err(dev, "%s: Context allocation failed\n", __func__); 494 - rc = -ENOMEM; 495 - goto err1; 496 - } 497 - 498 - idr_preload(GFP_KERNEL); 499 - rc = idr_alloc(&afu->idr, ctx, 0, afu->max_pasid, GFP_NOWAIT); 500 - idr_preload_end(); 501 - if (unlikely(rc < 0)) { 502 - dev_err(dev, "%s: idr_alloc failed rc=%d\n", __func__, rc); 503 - goto err2; 504 - } 505 - 506 - spin_lock_init(&ctx->slock); 507 - init_waitqueue_head(&ctx->wq); 508 - mutex_init(&ctx->state_mutex); 509 - 510 - ctx->state = OPENED; 511 - ctx->pe = rc; 512 - ctx->master = false; 513 - ctx->mapping = NULL; 514 - ctx->hw_afu = afu; 515 - ctx->irq_bitmap = 0; 516 - ctx->pending_irq = false; 517 - ctx->pending_fault = false; 518 - out: 519 - return ctx; 520 - err2: 521 - kfree(ctx); 522 - 
err1: 523 - ctx = ERR_PTR(rc); 524 - goto out; 525 - } 526 - 527 - /** 528 - * ocxlflash_release_context() - releases an adapter context 529 - * @ctx_cookie: Adapter context to be released. 530 - * 531 - * Return: 0 on success, -errno on failure 532 - */ 533 - static int ocxlflash_release_context(void *ctx_cookie) 534 - { 535 - struct ocxlflash_context *ctx = ctx_cookie; 536 - struct device *dev; 537 - int rc = 0; 538 - 539 - if (!ctx) 540 - goto out; 541 - 542 - dev = ctx->hw_afu->dev; 543 - mutex_lock(&ctx->state_mutex); 544 - if (ctx->state >= STARTED) { 545 - dev_err(dev, "%s: Context in use, state=%d\n", __func__, 546 - ctx->state); 547 - mutex_unlock(&ctx->state_mutex); 548 - rc = -EBUSY; 549 - goto out; 550 - } 551 - mutex_unlock(&ctx->state_mutex); 552 - 553 - idr_remove(&ctx->hw_afu->idr, ctx->pe); 554 - ocxlflash_release_mapping(ctx); 555 - kfree(ctx); 556 - out: 557 - return rc; 558 - } 559 - 560 - /** 561 - * ocxlflash_perst_reloads_same_image() - sets the image reload policy 562 - * @afu_cookie: Hardware AFU associated with the host. 563 - * @image: Whether to load the same image on PERST. 564 - */ 565 - static void ocxlflash_perst_reloads_same_image(void *afu_cookie, bool image) 566 - { 567 - struct ocxl_hw_afu *afu = afu_cookie; 568 - 569 - afu->perst_same_image = image; 570 - } 571 - 572 - /** 573 - * ocxlflash_read_adapter_vpd() - reads the adapter VPD 574 - * @pdev: PCI device associated with the host. 575 - * @buf: Buffer to get the VPD data. 576 - * @count: Size of buffer (maximum bytes that can be read). 577 - * 578 - * Return: size of VPD on success, -errno on failure 579 - */ 580 - static ssize_t ocxlflash_read_adapter_vpd(struct pci_dev *pdev, void *buf, 581 - size_t count) 582 - { 583 - return pci_read_vpd(pdev, 0, count, buf); 584 - } 585 - 586 - /** 587 - * free_afu_irqs() - internal service to free interrupts 588 - * @ctx: Adapter context. 
589 - */ 590 - static void free_afu_irqs(struct ocxlflash_context *ctx) 591 - { 592 - struct ocxl_hw_afu *afu = ctx->hw_afu; 593 - struct device *dev = afu->dev; 594 - int i; 595 - 596 - if (!ctx->irqs) { 597 - dev_err(dev, "%s: Interrupts not allocated\n", __func__); 598 - return; 599 - } 600 - 601 - for (i = ctx->num_irqs; i >= 0; i--) 602 - ocxl_link_free_irq(afu->link_token, ctx->irqs[i].hwirq); 603 - 604 - kfree(ctx->irqs); 605 - ctx->irqs = NULL; 606 - } 607 - 608 - /** 609 - * alloc_afu_irqs() - internal service to allocate interrupts 610 - * @ctx: Context associated with the request. 611 - * @num: Number of interrupts requested. 612 - * 613 - * Return: 0 on success, -errno on failure 614 - */ 615 - static int alloc_afu_irqs(struct ocxlflash_context *ctx, int num) 616 - { 617 - struct ocxl_hw_afu *afu = ctx->hw_afu; 618 - struct device *dev = afu->dev; 619 - struct ocxlflash_irqs *irqs; 620 - int rc = 0; 621 - int hwirq; 622 - int i; 623 - 624 - if (ctx->irqs) { 625 - dev_err(dev, "%s: Interrupts already allocated\n", __func__); 626 - rc = -EEXIST; 627 - goto out; 628 - } 629 - 630 - if (num > OCXL_MAX_IRQS) { 631 - dev_err(dev, "%s: Too many interrupts num=%d\n", __func__, num); 632 - rc = -EINVAL; 633 - goto out; 634 - } 635 - 636 - irqs = kcalloc(num, sizeof(*irqs), GFP_KERNEL); 637 - if (unlikely(!irqs)) { 638 - dev_err(dev, "%s: Context irqs allocation failed\n", __func__); 639 - rc = -ENOMEM; 640 - goto out; 641 - } 642 - 643 - for (i = 0; i < num; i++) { 644 - rc = ocxl_link_irq_alloc(afu->link_token, &hwirq); 645 - if (unlikely(rc)) { 646 - dev_err(dev, "%s: ocxl_link_irq_alloc failed rc=%d\n", 647 - __func__, rc); 648 - goto err; 649 - } 650 - 651 - irqs[i].hwirq = hwirq; 652 - } 653 - 654 - ctx->irqs = irqs; 655 - ctx->num_irqs = num; 656 - out: 657 - return rc; 658 - err: 659 - for (i = i-1; i >= 0; i--) 660 - ocxl_link_free_irq(afu->link_token, irqs[i].hwirq); 661 - kfree(irqs); 662 - goto out; 663 - } 664 - 665 - /** 666 - * 
ocxlflash_allocate_afu_irqs() - allocates the requested number of interrupts 667 - * @ctx_cookie: Context associated with the request. 668 - * @num: Number of interrupts requested. 669 - * 670 - * Return: 0 on success, -errno on failure 671 - */ 672 - static int ocxlflash_allocate_afu_irqs(void *ctx_cookie, int num) 673 - { 674 - return alloc_afu_irqs(ctx_cookie, num); 675 - } 676 - 677 - /** 678 - * ocxlflash_free_afu_irqs() - frees the interrupts of an adapter context 679 - * @ctx_cookie: Adapter context. 680 - */ 681 - static void ocxlflash_free_afu_irqs(void *ctx_cookie) 682 - { 683 - free_afu_irqs(ctx_cookie); 684 - } 685 - 686 - /** 687 - * ocxlflash_unconfig_afu() - unconfigure the AFU 688 - * @afu: AFU associated with the host. 689 - */ 690 - static void ocxlflash_unconfig_afu(struct ocxl_hw_afu *afu) 691 - { 692 - if (afu->gmmio_virt) { 693 - iounmap(afu->gmmio_virt); 694 - afu->gmmio_virt = NULL; 695 - } 696 - } 697 - 698 - /** 699 - * ocxlflash_destroy_afu() - destroy the AFU structure 700 - * @afu_cookie: AFU to be freed. 701 - */ 702 - static void ocxlflash_destroy_afu(void *afu_cookie) 703 - { 704 - struct ocxl_hw_afu *afu = afu_cookie; 705 - int pos; 706 - 707 - if (!afu) 708 - return; 709 - 710 - ocxlflash_release_context(afu->ocxl_ctx); 711 - idr_destroy(&afu->idr); 712 - 713 - /* Disable the AFU */ 714 - pos = afu->acfg.dvsec_afu_control_pos; 715 - ocxl_config_set_afu_state(afu->pdev, pos, 0); 716 - 717 - ocxlflash_unconfig_afu(afu); 718 - kfree(afu); 719 - } 720 - 721 - /** 722 - * ocxlflash_config_fn() - configure the host function 723 - * @pdev: PCI device associated with the host. 724 - * @afu: AFU associated with the host. 
725 - * 726 - * Return: 0 on success, -errno on failure 727 - */ 728 - static int ocxlflash_config_fn(struct pci_dev *pdev, struct ocxl_hw_afu *afu) 729 - { 730 - struct ocxl_fn_config *fcfg = &afu->fcfg; 731 - struct device *dev = &pdev->dev; 732 - u16 base, enabled, supported; 733 - int rc = 0; 734 - 735 - /* Read DVSEC config of the function */ 736 - rc = ocxl_config_read_function(pdev, fcfg); 737 - if (unlikely(rc)) { 738 - dev_err(dev, "%s: ocxl_config_read_function failed rc=%d\n", 739 - __func__, rc); 740 - goto out; 741 - } 742 - 743 - /* Check if function has AFUs defined, only 1 per function supported */ 744 - if (fcfg->max_afu_index >= 0) { 745 - afu->is_present = true; 746 - if (fcfg->max_afu_index != 0) 747 - dev_warn(dev, "%s: Unexpected AFU index value %d\n", 748 - __func__, fcfg->max_afu_index); 749 - } 750 - 751 - rc = ocxl_config_get_actag_info(pdev, &base, &enabled, &supported); 752 - if (unlikely(rc)) { 753 - dev_err(dev, "%s: ocxl_config_get_actag_info failed rc=%d\n", 754 - __func__, rc); 755 - goto out; 756 - } 757 - 758 - afu->fn_actag_base = base; 759 - afu->fn_actag_enabled = enabled; 760 - 761 - ocxl_config_set_actag(pdev, fcfg->dvsec_function_pos, base, enabled); 762 - dev_dbg(dev, "%s: Function acTag range base=%u enabled=%u\n", 763 - __func__, base, enabled); 764 - 765 - rc = ocxl_link_setup(pdev, 0, &afu->link_token); 766 - if (unlikely(rc)) { 767 - dev_err(dev, "%s: ocxl_link_setup failed rc=%d\n", 768 - __func__, rc); 769 - goto out; 770 - } 771 - 772 - rc = ocxl_config_set_TL(pdev, fcfg->dvsec_tl_pos); 773 - if (unlikely(rc)) { 774 - dev_err(dev, "%s: ocxl_config_set_TL failed rc=%d\n", 775 - __func__, rc); 776 - goto err; 777 - } 778 - out: 779 - return rc; 780 - err: 781 - ocxl_link_release(pdev, afu->link_token); 782 - goto out; 783 - } 784 - 785 - /** 786 - * ocxlflash_unconfig_fn() - unconfigure the host function 787 - * @pdev: PCI device associated with the host. 788 - * @afu: AFU associated with the host. 
789 - */ 790 - static void ocxlflash_unconfig_fn(struct pci_dev *pdev, struct ocxl_hw_afu *afu) 791 - { 792 - ocxl_link_release(pdev, afu->link_token); 793 - } 794 - 795 - /** 796 - * ocxlflash_map_mmio() - map the AFU MMIO space 797 - * @afu: AFU associated with the host. 798 - * 799 - * Return: 0 on success, -errno on failure 800 - */ 801 - static int ocxlflash_map_mmio(struct ocxl_hw_afu *afu) 802 - { 803 - struct ocxl_afu_config *acfg = &afu->acfg; 804 - struct pci_dev *pdev = afu->pdev; 805 - struct device *dev = afu->dev; 806 - phys_addr_t gmmio, ppmmio; 807 - int rc = 0; 808 - 809 - rc = pci_request_region(pdev, acfg->global_mmio_bar, "ocxlflash"); 810 - if (unlikely(rc)) { 811 - dev_err(dev, "%s: pci_request_region for global failed rc=%d\n", 812 - __func__, rc); 813 - goto out; 814 - } 815 - gmmio = pci_resource_start(pdev, acfg->global_mmio_bar); 816 - gmmio += acfg->global_mmio_offset; 817 - 818 - rc = pci_request_region(pdev, acfg->pp_mmio_bar, "ocxlflash"); 819 - if (unlikely(rc)) { 820 - dev_err(dev, "%s: pci_request_region for pp bar failed rc=%d\n", 821 - __func__, rc); 822 - goto err1; 823 - } 824 - ppmmio = pci_resource_start(pdev, acfg->pp_mmio_bar); 825 - ppmmio += acfg->pp_mmio_offset; 826 - 827 - afu->gmmio_virt = ioremap(gmmio, acfg->global_mmio_size); 828 - if (unlikely(!afu->gmmio_virt)) { 829 - dev_err(dev, "%s: MMIO mapping failed\n", __func__); 830 - rc = -ENOMEM; 831 - goto err2; 832 - } 833 - 834 - afu->gmmio_phys = gmmio; 835 - afu->ppmmio_phys = ppmmio; 836 - out: 837 - return rc; 838 - err2: 839 - pci_release_region(pdev, acfg->pp_mmio_bar); 840 - err1: 841 - pci_release_region(pdev, acfg->global_mmio_bar); 842 - goto out; 843 - } 844 - 845 - /** 846 - * ocxlflash_config_afu() - configure the host AFU 847 - * @pdev: PCI device associated with the host. 848 - * @afu: AFU associated with the host. 849 - * 850 - * Must be called _after_ host function configuration. 
851 - * 852 - * Return: 0 on success, -errno on failure 853 - */ 854 - static int ocxlflash_config_afu(struct pci_dev *pdev, struct ocxl_hw_afu *afu) 855 - { 856 - struct ocxl_afu_config *acfg = &afu->acfg; 857 - struct ocxl_fn_config *fcfg = &afu->fcfg; 858 - struct device *dev = &pdev->dev; 859 - int count; 860 - int base; 861 - int pos; 862 - int rc = 0; 863 - 864 - /* This HW AFU function does not have any AFUs defined */ 865 - if (!afu->is_present) 866 - goto out; 867 - 868 - /* Read AFU config at index 0 */ 869 - rc = ocxl_config_read_afu(pdev, fcfg, acfg, 0); 870 - if (unlikely(rc)) { 871 - dev_err(dev, "%s: ocxl_config_read_afu failed rc=%d\n", 872 - __func__, rc); 873 - goto out; 874 - } 875 - 876 - /* Only one AFU per function is supported, so actag_base is same */ 877 - base = afu->fn_actag_base; 878 - count = min_t(int, acfg->actag_supported, afu->fn_actag_enabled); 879 - pos = acfg->dvsec_afu_control_pos; 880 - 881 - ocxl_config_set_afu_actag(pdev, pos, base, count); 882 - dev_dbg(dev, "%s: acTag base=%d enabled=%d\n", __func__, base, count); 883 - afu->afu_actag_base = base; 884 - afu->afu_actag_enabled = count; 885 - afu->max_pasid = 1 << acfg->pasid_supported_log; 886 - 887 - ocxl_config_set_afu_pasid(pdev, pos, 0, acfg->pasid_supported_log); 888 - 889 - rc = ocxlflash_map_mmio(afu); 890 - if (unlikely(rc)) { 891 - dev_err(dev, "%s: ocxlflash_map_mmio failed rc=%d\n", 892 - __func__, rc); 893 - goto out; 894 - } 895 - 896 - /* Enable the AFU */ 897 - ocxl_config_set_afu_state(pdev, acfg->dvsec_afu_control_pos, 1); 898 - out: 899 - return rc; 900 - } 901 - 902 - /** 903 - * ocxlflash_create_afu() - create the AFU for OCXL 904 - * @pdev: PCI device associated with the host. 
905 - * 906 - * Return: AFU on success, NULL on failure 907 - */ 908 - static void *ocxlflash_create_afu(struct pci_dev *pdev) 909 - { 910 - struct device *dev = &pdev->dev; 911 - struct ocxlflash_context *ctx; 912 - struct ocxl_hw_afu *afu; 913 - int rc; 914 - 915 - afu = kzalloc(sizeof(*afu), GFP_KERNEL); 916 - if (unlikely(!afu)) { 917 - dev_err(dev, "%s: HW AFU allocation failed\n", __func__); 918 - goto out; 919 - } 920 - 921 - afu->pdev = pdev; 922 - afu->dev = dev; 923 - idr_init(&afu->idr); 924 - 925 - rc = ocxlflash_config_fn(pdev, afu); 926 - if (unlikely(rc)) { 927 - dev_err(dev, "%s: Function configuration failed rc=%d\n", 928 - __func__, rc); 929 - goto err1; 930 - } 931 - 932 - rc = ocxlflash_config_afu(pdev, afu); 933 - if (unlikely(rc)) { 934 - dev_err(dev, "%s: AFU configuration failed rc=%d\n", 935 - __func__, rc); 936 - goto err2; 937 - } 938 - 939 - ctx = ocxlflash_dev_context_init(pdev, afu); 940 - if (IS_ERR(ctx)) { 941 - rc = PTR_ERR(ctx); 942 - dev_err(dev, "%s: ocxlflash_dev_context_init failed rc=%d\n", 943 - __func__, rc); 944 - goto err3; 945 - } 946 - 947 - afu->ocxl_ctx = ctx; 948 - out: 949 - return afu; 950 - err3: 951 - ocxlflash_unconfig_afu(afu); 952 - err2: 953 - ocxlflash_unconfig_fn(pdev, afu); 954 - err1: 955 - idr_destroy(&afu->idr); 956 - kfree(afu); 957 - afu = NULL; 958 - goto out; 959 - } 960 - 961 - /** 962 - * ctx_event_pending() - check for any event pending on the context 963 - * @ctx: Context to be checked. 964 - * 965 - * Return: true if there is an event pending, false if none pending 966 - */ 967 - static inline bool ctx_event_pending(struct ocxlflash_context *ctx) 968 - { 969 - if (ctx->pending_irq || ctx->pending_fault) 970 - return true; 971 - 972 - return false; 973 - } 974 - 975 - /** 976 - * afu_poll() - poll the AFU for events on the context 977 - * @file: File associated with the adapter context. 978 - * @poll: Poll structure from the user. 
979 - * 980 - * Return: poll mask 981 - */ 982 - static unsigned int afu_poll(struct file *file, struct poll_table_struct *poll) 983 - { 984 - struct ocxlflash_context *ctx = file->private_data; 985 - struct device *dev = ctx->hw_afu->dev; 986 - ulong lock_flags; 987 - int mask = 0; 988 - 989 - poll_wait(file, &ctx->wq, poll); 990 - 991 - spin_lock_irqsave(&ctx->slock, lock_flags); 992 - if (ctx_event_pending(ctx)) 993 - mask |= POLLIN | POLLRDNORM; 994 - else if (ctx->state == CLOSED) 995 - mask |= POLLERR; 996 - spin_unlock_irqrestore(&ctx->slock, lock_flags); 997 - 998 - dev_dbg(dev, "%s: Poll wait completed for pe %i mask %i\n", 999 - __func__, ctx->pe, mask); 1000 - 1001 - return mask; 1002 - } 1003 - 1004 - /** 1005 - * afu_read() - perform a read on the context for any event 1006 - * @file: File associated with the adapter context. 1007 - * @buf: Buffer to receive the data. 1008 - * @count: Size of buffer (maximum bytes that can be read). 1009 - * @off: Offset. 1010 - * 1011 - * Return: size of the data read on success, -errno on failure 1012 - */ 1013 - static ssize_t afu_read(struct file *file, char __user *buf, size_t count, 1014 - loff_t *off) 1015 - { 1016 - struct ocxlflash_context *ctx = file->private_data; 1017 - struct device *dev = ctx->hw_afu->dev; 1018 - struct cxl_event event; 1019 - ulong lock_flags; 1020 - ssize_t esize; 1021 - ssize_t rc; 1022 - int bit; 1023 - DEFINE_WAIT(event_wait); 1024 - 1025 - if (*off != 0) { 1026 - dev_err(dev, "%s: Non-zero offset not supported, off=%lld\n", 1027 - __func__, *off); 1028 - rc = -EINVAL; 1029 - goto out; 1030 - } 1031 - 1032 - spin_lock_irqsave(&ctx->slock, lock_flags); 1033 - 1034 - for (;;) { 1035 - prepare_to_wait(&ctx->wq, &event_wait, TASK_INTERRUPTIBLE); 1036 - 1037 - if (ctx_event_pending(ctx) || (ctx->state == CLOSED)) 1038 - break; 1039 - 1040 - if (file->f_flags & O_NONBLOCK) { 1041 - dev_err(dev, "%s: File cannot be blocked on I/O\n", 1042 - __func__); 1043 - rc = -EAGAIN; 1044 - goto err; 
1045 - } 1046 - 1047 - if (signal_pending(current)) { 1048 - dev_err(dev, "%s: Signal pending on the process\n", 1049 - __func__); 1050 - rc = -ERESTARTSYS; 1051 - goto err; 1052 - } 1053 - 1054 - spin_unlock_irqrestore(&ctx->slock, lock_flags); 1055 - schedule(); 1056 - spin_lock_irqsave(&ctx->slock, lock_flags); 1057 - } 1058 - 1059 - finish_wait(&ctx->wq, &event_wait); 1060 - 1061 - memset(&event, 0, sizeof(event)); 1062 - event.header.process_element = ctx->pe; 1063 - event.header.size = sizeof(struct cxl_event_header); 1064 - if (ctx->pending_irq) { 1065 - esize = sizeof(struct cxl_event_afu_interrupt); 1066 - event.header.size += esize; 1067 - event.header.type = CXL_EVENT_AFU_INTERRUPT; 1068 - 1069 - bit = find_first_bit(&ctx->irq_bitmap, ctx->num_irqs); 1070 - clear_bit(bit, &ctx->irq_bitmap); 1071 - event.irq.irq = bit + 1; 1072 - if (bitmap_empty(&ctx->irq_bitmap, ctx->num_irqs)) 1073 - ctx->pending_irq = false; 1074 - } else if (ctx->pending_fault) { 1075 - event.header.size += sizeof(struct cxl_event_data_storage); 1076 - event.header.type = CXL_EVENT_DATA_STORAGE; 1077 - event.fault.addr = ctx->fault_addr; 1078 - event.fault.dsisr = ctx->fault_dsisr; 1079 - ctx->pending_fault = false; 1080 - } 1081 - 1082 - spin_unlock_irqrestore(&ctx->slock, lock_flags); 1083 - 1084 - if (copy_to_user(buf, &event, event.header.size)) { 1085 - dev_err(dev, "%s: copy_to_user failed\n", __func__); 1086 - rc = -EFAULT; 1087 - goto out; 1088 - } 1089 - 1090 - rc = event.header.size; 1091 - out: 1092 - return rc; 1093 - err: 1094 - finish_wait(&ctx->wq, &event_wait); 1095 - spin_unlock_irqrestore(&ctx->slock, lock_flags); 1096 - goto out; 1097 - } 1098 - 1099 - /** 1100 - * afu_release() - release and free the context 1101 - * @inode: File inode pointer. 1102 - * @file: File associated with the context. 
1103 - * 1104 - * Return: 0 on success, -errno on failure 1105 - */ 1106 - static int afu_release(struct inode *inode, struct file *file) 1107 - { 1108 - struct ocxlflash_context *ctx = file->private_data; 1109 - int i; 1110 - 1111 - /* Unmap and free the interrupts associated with the context */ 1112 - for (i = ctx->num_irqs; i >= 0; i--) 1113 - afu_unmap_irq(0, ctx, i, ctx); 1114 - free_afu_irqs(ctx); 1115 - 1116 - return ocxlflash_release_context(ctx); 1117 - } 1118 - 1119 - /** 1120 - * ocxlflash_mmap_fault() - mmap fault handler 1121 - * @vmf: VM fault associated with current fault. 1122 - * 1123 - * Return: 0 on success, -errno on failure 1124 - */ 1125 - static vm_fault_t ocxlflash_mmap_fault(struct vm_fault *vmf) 1126 - { 1127 - struct vm_area_struct *vma = vmf->vma; 1128 - struct ocxlflash_context *ctx = vma->vm_file->private_data; 1129 - struct device *dev = ctx->hw_afu->dev; 1130 - u64 mmio_area, offset; 1131 - 1132 - offset = vmf->pgoff << PAGE_SHIFT; 1133 - if (offset >= ctx->psn_size) 1134 - return VM_FAULT_SIGBUS; 1135 - 1136 - mutex_lock(&ctx->state_mutex); 1137 - if (ctx->state != STARTED) { 1138 - dev_err(dev, "%s: Context not started, state=%d\n", 1139 - __func__, ctx->state); 1140 - mutex_unlock(&ctx->state_mutex); 1141 - return VM_FAULT_SIGBUS; 1142 - } 1143 - mutex_unlock(&ctx->state_mutex); 1144 - 1145 - mmio_area = ctx->psn_phys; 1146 - mmio_area += offset; 1147 - 1148 - return vmf_insert_pfn(vma, vmf->address, mmio_area >> PAGE_SHIFT); 1149 - } 1150 - 1151 - static const struct vm_operations_struct ocxlflash_vmops = { 1152 - .fault = ocxlflash_mmap_fault, 1153 - }; 1154 - 1155 - /** 1156 - * afu_mmap() - map the fault handler operations 1157 - * @file: File associated with the context. 1158 - * @vma: VM area associated with mapping. 
1159 - * 1160 - * Return: 0 on success, -errno on failure 1161 - */ 1162 - static int afu_mmap(struct file *file, struct vm_area_struct *vma) 1163 - { 1164 - struct ocxlflash_context *ctx = file->private_data; 1165 - 1166 - if ((vma_pages(vma) + vma->vm_pgoff) > 1167 - (ctx->psn_size >> PAGE_SHIFT)) 1168 - return -EINVAL; 1169 - 1170 - vm_flags_set(vma, VM_IO | VM_PFNMAP); 1171 - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1172 - vma->vm_ops = &ocxlflash_vmops; 1173 - return 0; 1174 - } 1175 - 1176 - static const struct file_operations ocxl_afu_fops = { 1177 - .owner = THIS_MODULE, 1178 - .poll = afu_poll, 1179 - .read = afu_read, 1180 - .release = afu_release, 1181 - .mmap = afu_mmap, 1182 - }; 1183 - 1184 - #define PATCH_FOPS(NAME) \ 1185 - do { if (!fops->NAME) fops->NAME = ocxl_afu_fops.NAME; } while (0) 1186 - 1187 - /** 1188 - * ocxlflash_get_fd() - get file descriptor for an adapter context 1189 - * @ctx_cookie: Adapter context. 1190 - * @fops: File operations to be associated. 1191 - * @fd: File descriptor to be returned back. 
1192 - * 1193 - * Return: pointer to the file on success, ERR_PTR on failure 1194 - */ 1195 - static struct file *ocxlflash_get_fd(void *ctx_cookie, 1196 - struct file_operations *fops, int *fd) 1197 - { 1198 - struct ocxlflash_context *ctx = ctx_cookie; 1199 - struct device *dev = ctx->hw_afu->dev; 1200 - struct file *file; 1201 - int flags, fdtmp; 1202 - int rc = 0; 1203 - char *name = NULL; 1204 - 1205 - /* Only allow one fd per context */ 1206 - if (ctx->mapping) { 1207 - dev_err(dev, "%s: Context is already mapped to an fd\n", 1208 - __func__); 1209 - rc = -EEXIST; 1210 - goto err1; 1211 - } 1212 - 1213 - flags = O_RDWR | O_CLOEXEC; 1214 - 1215 - /* This code is similar to anon_inode_getfd() */ 1216 - rc = get_unused_fd_flags(flags); 1217 - if (unlikely(rc < 0)) { 1218 - dev_err(dev, "%s: get_unused_fd_flags failed rc=%d\n", 1219 - __func__, rc); 1220 - goto err1; 1221 - } 1222 - fdtmp = rc; 1223 - 1224 - /* Patch the file ops that are not defined */ 1225 - if (fops) { 1226 - PATCH_FOPS(poll); 1227 - PATCH_FOPS(read); 1228 - PATCH_FOPS(release); 1229 - PATCH_FOPS(mmap); 1230 - } else /* Use default ops */ 1231 - fops = (struct file_operations *)&ocxl_afu_fops; 1232 - 1233 - name = kasprintf(GFP_KERNEL, "ocxlflash:%d", ctx->pe); 1234 - file = ocxlflash_getfile(dev, name, fops, ctx, flags); 1235 - kfree(name); 1236 - if (IS_ERR(file)) { 1237 - rc = PTR_ERR(file); 1238 - dev_err(dev, "%s: ocxlflash_getfile failed rc=%d\n", 1239 - __func__, rc); 1240 - goto err2; 1241 - } 1242 - 1243 - ctx->mapping = file->f_mapping; 1244 - *fd = fdtmp; 1245 - out: 1246 - return file; 1247 - err2: 1248 - put_unused_fd(fdtmp); 1249 - err1: 1250 - file = ERR_PTR(rc); 1251 - goto out; 1252 - } 1253 - 1254 - /** 1255 - * ocxlflash_fops_get_context() - get the context associated with the file 1256 - * @file: File associated with the adapter context. 
1257 - * 1258 - * Return: pointer to the context 1259 - */ 1260 - static void *ocxlflash_fops_get_context(struct file *file) 1261 - { 1262 - return file->private_data; 1263 - } 1264 - 1265 - /** 1266 - * ocxlflash_afu_irq() - interrupt handler for user contexts 1267 - * @irq: Interrupt number. 1268 - * @data: Private data provided at interrupt registration, the context. 1269 - * 1270 - * Return: Always return IRQ_HANDLED. 1271 - */ 1272 - static irqreturn_t ocxlflash_afu_irq(int irq, void *data) 1273 - { 1274 - struct ocxlflash_context *ctx = data; 1275 - struct device *dev = ctx->hw_afu->dev; 1276 - int i; 1277 - 1278 - dev_dbg(dev, "%s: Interrupt raised for pe %i virq %i\n", 1279 - __func__, ctx->pe, irq); 1280 - 1281 - for (i = 0; i < ctx->num_irqs; i++) { 1282 - if (ctx->irqs[i].virq == irq) 1283 - break; 1284 - } 1285 - if (unlikely(i >= ctx->num_irqs)) { 1286 - dev_err(dev, "%s: Received AFU IRQ out of range\n", __func__); 1287 - goto out; 1288 - } 1289 - 1290 - spin_lock(&ctx->slock); 1291 - set_bit(i - 1, &ctx->irq_bitmap); 1292 - ctx->pending_irq = true; 1293 - spin_unlock(&ctx->slock); 1294 - 1295 - wake_up_all(&ctx->wq); 1296 - out: 1297 - return IRQ_HANDLED; 1298 - } 1299 - 1300 - /** 1301 - * ocxlflash_start_work() - start a user context 1302 - * @ctx_cookie: Context to be started. 1303 - * @num_irqs: Number of interrupts requested. 
1304 - * 1305 - * Return: 0 on success, -errno on failure 1306 - */ 1307 - static int ocxlflash_start_work(void *ctx_cookie, u64 num_irqs) 1308 - { 1309 - struct ocxlflash_context *ctx = ctx_cookie; 1310 - struct ocxl_hw_afu *afu = ctx->hw_afu; 1311 - struct device *dev = afu->dev; 1312 - char *name; 1313 - int rc = 0; 1314 - int i; 1315 - 1316 - rc = alloc_afu_irqs(ctx, num_irqs); 1317 - if (unlikely(rc < 0)) { 1318 - dev_err(dev, "%s: alloc_afu_irqs failed rc=%d\n", __func__, rc); 1319 - goto out; 1320 - } 1321 - 1322 - for (i = 0; i < num_irqs; i++) { 1323 - name = kasprintf(GFP_KERNEL, "ocxlflash-%s-pe%i-%i", 1324 - dev_name(dev), ctx->pe, i); 1325 - rc = afu_map_irq(0, ctx, i, ocxlflash_afu_irq, ctx, name); 1326 - kfree(name); 1327 - if (unlikely(rc < 0)) { 1328 - dev_err(dev, "%s: afu_map_irq failed rc=%d\n", 1329 - __func__, rc); 1330 - goto err; 1331 - } 1332 - } 1333 - 1334 - rc = start_context(ctx); 1335 - if (unlikely(rc)) { 1336 - dev_err(dev, "%s: start_context failed rc=%d\n", __func__, rc); 1337 - goto err; 1338 - } 1339 - out: 1340 - return rc; 1341 - err: 1342 - for (i = i-1; i >= 0; i--) 1343 - afu_unmap_irq(0, ctx, i, ctx); 1344 - free_afu_irqs(ctx); 1345 - goto out; 1346 - }; 1347 - 1348 - /** 1349 - * ocxlflash_fd_mmap() - mmap handler for adapter file descriptor 1350 - * @file: File installed with adapter file descriptor. 1351 - * @vma: VM area associated with mapping. 1352 - * 1353 - * Return: 0 on success, -errno on failure 1354 - */ 1355 - static int ocxlflash_fd_mmap(struct file *file, struct vm_area_struct *vma) 1356 - { 1357 - return afu_mmap(file, vma); 1358 - } 1359 - 1360 - /** 1361 - * ocxlflash_fd_release() - release the context associated with the file 1362 - * @inode: File inode pointer. 1363 - * @file: File associated with the adapter context. 
1364 - * 1365 - * Return: 0 on success, -errno on failure 1366 - */ 1367 - static int ocxlflash_fd_release(struct inode *inode, struct file *file) 1368 - { 1369 - return afu_release(inode, file); 1370 - } 1371 - 1372 - /* Backend ops to ocxlflash services */ 1373 - const struct cxlflash_backend_ops cxlflash_ocxl_ops = { 1374 - .module = THIS_MODULE, 1375 - .psa_map = ocxlflash_psa_map, 1376 - .psa_unmap = ocxlflash_psa_unmap, 1377 - .process_element = ocxlflash_process_element, 1378 - .map_afu_irq = ocxlflash_map_afu_irq, 1379 - .unmap_afu_irq = ocxlflash_unmap_afu_irq, 1380 - .get_irq_objhndl = ocxlflash_get_irq_objhndl, 1381 - .start_context = ocxlflash_start_context, 1382 - .stop_context = ocxlflash_stop_context, 1383 - .afu_reset = ocxlflash_afu_reset, 1384 - .set_master = ocxlflash_set_master, 1385 - .get_context = ocxlflash_get_context, 1386 - .dev_context_init = ocxlflash_dev_context_init, 1387 - .release_context = ocxlflash_release_context, 1388 - .perst_reloads_same_image = ocxlflash_perst_reloads_same_image, 1389 - .read_adapter_vpd = ocxlflash_read_adapter_vpd, 1390 - .allocate_afu_irqs = ocxlflash_allocate_afu_irqs, 1391 - .free_afu_irqs = ocxlflash_free_afu_irqs, 1392 - .create_afu = ocxlflash_create_afu, 1393 - .destroy_afu = ocxlflash_destroy_afu, 1394 - .get_fd = ocxlflash_get_fd, 1395 - .fops_get_context = ocxlflash_fops_get_context, 1396 - .start_work = ocxlflash_start_work, 1397 - .fd_mmap = ocxlflash_fd_mmap, 1398 - .fd_release = ocxlflash_fd_release, 1399 - };
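The event plumbing removed above pairs `ocxlflash_afu_irq()` (which sets a bit in `ctx->irq_bitmap` and wakes the wait queue) with `afu_read()` (which delivers the lowest pending bit first, reports a 1-based IRQ number in `event.irq.irq`, and drops `pending_irq` once the bitmap empties). A minimal userspace sketch of that read-side bookkeeping, with hypothetical helper names and a plain loop standing in for the kernel's `find_first_bit()`/`clear_bit()`/`bitmap_empty()`:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace illustration (not the kernel code) of the pending-IRQ
 * bookkeeping in the removed afu_read(): lowest pending bit is
 * delivered first, reported numbers are 1-based, and pending_irq
 * drops once the bitmap is empty.
 */
struct irq_state {
	unsigned long irq_bitmap;	/* bit n set => IRQ n pending */
	bool pending_irq;
	int num_irqs;
};

/* Mark IRQ index 'i' pending, as the interrupt handler would. */
static void pend_irq(struct irq_state *s, int i)
{
	s->irq_bitmap |= 1UL << i;
	s->pending_irq = true;
}

/* Drain one event; returns the 1-based IRQ number, or 0 if none pending. */
static int take_irq_event(struct irq_state *s)
{
	int bit;

	if (!s->pending_irq)
		return 0;
	for (bit = 0; bit < s->num_irqs; bit++)
		if (s->irq_bitmap & (1UL << bit))
			break;
	s->irq_bitmap &= ~(1UL << bit);
	if (s->irq_bitmap == 0)
		s->pending_irq = false;
	return bit + 1;	/* afu_read() sets event.irq.irq = bit + 1 */
}
```

Draining two pending IRQs (indices 0 and 2) yields events 1 then 3, after which the context reports nothing pending, matching the `ctx_event_pending()` check used by `afu_poll()`.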
-72
drivers/scsi/cxlflash/ocxl_hw.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 - * Uma Krishnan <ukrishn@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2018 IBM Corporation 9 - */ 10 - 11 - #define OCXL_MAX_IRQS 4 /* Max interrupts per process */ 12 - 13 - struct ocxlflash_irqs { 14 - int hwirq; 15 - u32 virq; 16 - void __iomem *vtrig; 17 - }; 18 - 19 - /* OCXL hardware AFU associated with the host */ 20 - struct ocxl_hw_afu { 21 - struct ocxlflash_context *ocxl_ctx; /* Host context */ 22 - struct pci_dev *pdev; /* PCI device */ 23 - struct device *dev; /* Generic device */ 24 - bool perst_same_image; /* Same image loaded on perst */ 25 - 26 - struct ocxl_fn_config fcfg; /* DVSEC config of the function */ 27 - struct ocxl_afu_config acfg; /* AFU configuration data */ 28 - 29 - int fn_actag_base; /* Function acTag base */ 30 - int fn_actag_enabled; /* Function acTag number enabled */ 31 - int afu_actag_base; /* AFU acTag base */ 32 - int afu_actag_enabled; /* AFU acTag number enabled */ 33 - 34 - phys_addr_t ppmmio_phys; /* Per process MMIO space */ 35 - phys_addr_t gmmio_phys; /* Global AFU MMIO space */ 36 - void __iomem *gmmio_virt; /* Global MMIO map */ 37 - 38 - void *link_token; /* Link token for the SPA */ 39 - struct idr idr; /* IDR to manage contexts */ 40 - int max_pasid; /* Maximum number of contexts */ 41 - bool is_present; /* Function has AFUs defined */ 42 - }; 43 - 44 - enum ocxlflash_ctx_state { 45 - CLOSED, 46 - OPENED, 47 - STARTED 48 - }; 49 - 50 - struct ocxlflash_context { 51 - struct ocxl_hw_afu *hw_afu; /* HW AFU back pointer */ 52 - struct address_space *mapping; /* Mapping for pseudo filesystem */ 53 - bool master; /* Whether this is a master context */ 54 - int pe; /* Process element */ 55 - 56 - phys_addr_t psn_phys; /* Process mapping */ 57 - u64 psn_size; /* Process mapping size */ 58 - 59 - spinlock_t slock; /* Protects 
irq/fault/event updates */ 60 - wait_queue_head_t wq; /* Wait queue for poll and interrupts */ 61 - struct mutex state_mutex; /* Mutex to update context state */ 62 - enum ocxlflash_ctx_state state; /* Context state */ 63 - 64 - struct ocxlflash_irqs *irqs; /* Pointer to array of structures */ 65 - int num_irqs; /* Number of interrupts */ 66 - bool pending_irq; /* Pending interrupt on the context */ 67 - ulong irq_bitmap; /* Bits indicating pending irq num */ 68 - 69 - u64 fault_addr; /* Address that triggered the fault */ 70 - u64 fault_dsisr; /* Value of dsisr register at fault */ 71 - bool pending_fault; /* Pending translation fault */ 72 - };
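The `ocxlflash_ctx_state` enum above (CLOSED/OPENED/STARTED) gates MMIO access: the removed `ocxlflash_mmap_fault()` only inserts a PFN when the fault offset lies inside the per-process MMIO window (`psn_size`) and the context is STARTED, answering anything else with `VM_FAULT_SIGBUS`. A sketch of those two guards as a hypothetical predicate (the helper name is invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Context states from the removed ocxl_hw.h. */
enum ctx_state { CLOSED, OPENED, STARTED };

/*
 * Hypothetical userspace check mirroring the guards in the removed
 * ocxlflash_mmap_fault(): the offset must fall within the per-process
 * MMIO window and the context must have been started; a failure of
 * either check corresponds to returning VM_FAULT_SIGBUS.
 */
static bool mmio_fault_ok(enum ctx_state state, uint64_t offset,
			  uint64_t psn_size)
{
	if (offset >= psn_size)
		return false;	/* beyond the mapped window */
	if (state != STARTED)
		return false;	/* context not (yet) started */
	return true;
}
```

Note the same window bound is enforced up front by `afu_mmap()`, which rejects a `vma` whose pages plus `vm_pgoff` exceed `psn_size >> PAGE_SHIFT` before the fault handler is ever installed.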
-560
drivers/scsi/cxlflash/sislite.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 6 - * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2015 IBM Corporation 9 - */ 10 - 11 - #ifndef _SISLITE_H 12 - #define _SISLITE_H 13 - 14 - #include <linux/types.h> 15 - 16 - typedef u16 ctx_hndl_t; 17 - typedef u32 res_hndl_t; 18 - 19 - #define SIZE_4K 4096 20 - #define SIZE_64K 65536 21 - 22 - /* 23 - * IOARCB: 64 bytes, min 16 byte alignment required, host native endianness 24 - * except for SCSI CDB which remains big endian per SCSI standards. 25 - */ 26 - struct sisl_ioarcb { 27 - u16 ctx_id; /* ctx_hndl_t */ 28 - u16 req_flags; 29 - #define SISL_REQ_FLAGS_RES_HNDL 0x8000U /* bit 0 (MSB) */ 30 - #define SISL_REQ_FLAGS_PORT_LUN_ID 0x0000U 31 - 32 - #define SISL_REQ_FLAGS_SUP_UNDERRUN 0x4000U /* bit 1 */ 33 - 34 - #define SISL_REQ_FLAGS_TIMEOUT_SECS 0x0000U /* bits 8,9 */ 35 - #define SISL_REQ_FLAGS_TIMEOUT_MSECS 0x0040U 36 - #define SISL_REQ_FLAGS_TIMEOUT_USECS 0x0080U 37 - #define SISL_REQ_FLAGS_TIMEOUT_CYCLES 0x00C0U 38 - 39 - #define SISL_REQ_FLAGS_TMF_CMD 0x0004u /* bit 13 */ 40 - 41 - #define SISL_REQ_FLAGS_AFU_CMD 0x0002U /* bit 14 */ 42 - 43 - #define SISL_REQ_FLAGS_HOST_WRITE 0x0001U /* bit 15 (LSB) */ 44 - #define SISL_REQ_FLAGS_HOST_READ 0x0000U 45 - 46 - union { 47 - u32 res_hndl; /* res_hndl_t */ 48 - u32 port_sel; /* this is a selection mask: 49 - * 0x1 -> port#0 can be selected, 50 - * 0x2 -> port#1 can be selected. 51 - * Can be bitwise ORed. 
52 - */ 53 - }; 54 - u64 lun_id; 55 - u32 data_len; /* 4K for read/write */ 56 - u32 ioadl_len; 57 - union { 58 - u64 data_ea; /* min 16 byte aligned */ 59 - u64 ioadl_ea; 60 - }; 61 - u8 msi; /* LISN to send on RRQ write */ 62 - #define SISL_MSI_CXL_PFAULT 0 /* reserved for CXL page faults */ 63 - #define SISL_MSI_SYNC_ERROR 1 /* recommended for AFU sync error */ 64 - #define SISL_MSI_RRQ_UPDATED 2 /* recommended for IO completion */ 65 - #define SISL_MSI_ASYNC_ERROR 3 /* master only - for AFU async error */ 66 - 67 - u8 rrq; /* 0 for a single RRQ */ 68 - u16 timeout; /* in units specified by req_flags */ 69 - u32 rsvd1; 70 - u8 cdb[16]; /* must be in big endian */ 71 - #define SISL_AFU_CMD_SYNC 0xC0 /* AFU sync command */ 72 - #define SISL_AFU_CMD_LUN_PROVISION 0xD0 /* AFU LUN provision command */ 73 - #define SISL_AFU_CMD_DEBUG 0xE0 /* AFU debug command */ 74 - 75 - #define SISL_AFU_LUN_PROVISION_CREATE 0x00 /* LUN provision create type */ 76 - #define SISL_AFU_LUN_PROVISION_DELETE 0x01 /* LUN provision delete type */ 77 - 78 - union { 79 - u64 reserved; /* Reserved for IOARRIN mode */ 80 - struct sisl_ioasa *ioasa; /* IOASA EA for SQ Mode */ 81 - }; 82 - } __packed; 83 - 84 - struct sisl_rc { 85 - u8 flags; 86 - #define SISL_RC_FLAGS_SENSE_VALID 0x80U 87 - #define SISL_RC_FLAGS_FCP_RSP_CODE_VALID 0x40U 88 - #define SISL_RC_FLAGS_OVERRUN 0x20U 89 - #define SISL_RC_FLAGS_UNDERRUN 0x10U 90 - 91 - u8 afu_rc; 92 - #define SISL_AFU_RC_RHT_INVALID 0x01U /* user error */ 93 - #define SISL_AFU_RC_RHT_UNALIGNED 0x02U /* should never happen */ 94 - #define SISL_AFU_RC_RHT_OUT_OF_BOUNDS 0x03u /* user error */ 95 - #define SISL_AFU_RC_RHT_DMA_ERR 0x04u /* see afu_extra 96 - * may retry if afu_retry is off 97 - * possible on master exit 98 - */ 99 - #define SISL_AFU_RC_RHT_RW_PERM 0x05u /* no RW perms, user error */ 100 - #define SISL_AFU_RC_LXT_UNALIGNED 0x12U /* should never happen */ 101 - #define SISL_AFU_RC_LXT_OUT_OF_BOUNDS 0x13u /* user error */ 102 - #define 
SISL_AFU_RC_LXT_DMA_ERR 0x14u /* see afu_extra 103 - * may retry if afu_retry is off 104 - * possible on master exit 105 - */ 106 - #define SISL_AFU_RC_LXT_RW_PERM 0x15u /* no RW perms, user error */ 107 - 108 - #define SISL_AFU_RC_NOT_XLATE_HOST 0x1au /* possible if master exited */ 109 - 110 - /* NO_CHANNELS means the FC ports selected by dest_port in 111 - * IOARCB or in the LXT entry are down when the AFU tried to select 112 - * a FC port. If the port went down on an active IO, it will set 113 - * fc_rc to =0x54(NOLOGI) or 0x57(LINKDOWN) instead. 114 - */ 115 - #define SISL_AFU_RC_NO_CHANNELS 0x20U /* see afu_extra, may retry */ 116 - #define SISL_AFU_RC_CAP_VIOLATION 0x21U /* either user error or 117 - * afu reset/master restart 118 - */ 119 - #define SISL_AFU_RC_OUT_OF_DATA_BUFS 0x30U /* always retry */ 120 - #define SISL_AFU_RC_DATA_DMA_ERR 0x31U /* see afu_extra 121 - * may retry if afu_retry is off 122 - */ 123 - 124 - u8 scsi_rc; /* SCSI status byte, retry as appropriate */ 125 - #define SISL_SCSI_RC_CHECK 0x02U 126 - #define SISL_SCSI_RC_BUSY 0x08u 127 - 128 - u8 fc_rc; /* retry */ 129 - /* 130 - * We should only see fc_rc=0x57 (LINKDOWN) or 0x54(NOLOGI) for 131 - * commands that are in flight when a link goes down or is logged out. 132 - * If the link is down or logged out before AFU selects the port, either 133 - * it will choose the other port or we will get afu_rc=0x20 (no_channel) 134 - * if there is no valid port to use. 135 - * 136 - * ABORTPEND/ABORTOK/ABORTFAIL/TGTABORT can be retried, typically these 137 - * would happen if a frame is dropped and something times out. 138 - * NOLOGI or LINKDOWN can be retried if the other port is up. 139 - * RESIDERR can be retried as well. 140 - * 141 - * ABORTFAIL might indicate that lots of frames are getting CRC errors. 142 - * So it maybe retried once and reset the link if it happens again. 143 - * The link can also be reset on the CRC error threshold interrupt. 
144 - */ 145 - #define SISL_FC_RC_ABORTPEND 0x52 /* exchange timeout or abort request */ 146 - #define SISL_FC_RC_WRABORTPEND 0x53 /* due to write XFER_RDY invalid */ 147 - #define SISL_FC_RC_NOLOGI 0x54 /* port not logged in, in-flight cmds */ 148 - #define SISL_FC_RC_NOEXP 0x55 /* FC protocol error or HW bug */ 149 - #define SISL_FC_RC_INUSE 0x56 /* tag already in use, HW bug */ 150 - #define SISL_FC_RC_LINKDOWN 0x57 /* link down, in-flight cmds */ 151 - #define SISL_FC_RC_ABORTOK 0x58 /* pending abort completed w/success */ 152 - #define SISL_FC_RC_ABORTFAIL 0x59 /* pending abort completed w/fail */ 153 - #define SISL_FC_RC_RESID 0x5A /* ioasa underrun/overrun flags set */ 154 - #define SISL_FC_RC_RESIDERR 0x5B /* actual data len does not match SCSI 155 - * reported len, possibly due to dropped 156 - * frames 157 - */ 158 - #define SISL_FC_RC_TGTABORT 0x5C /* command aborted by target */ 159 - }; 160 - 161 - #define SISL_SENSE_DATA_LEN 20 /* Sense data length */ 162 - #define SISL_WWID_DATA_LEN 16 /* WWID data length */ 163 - 164 - /* 165 - * IOASA: 64 bytes & must follow IOARCB, min 16 byte alignment required, 166 - * host native endianness 167 - */ 168 - struct sisl_ioasa { 169 - union { 170 - struct sisl_rc rc; 171 - u32 ioasc; 172 - #define SISL_IOASC_GOOD_COMPLETION 0x00000000U 173 - }; 174 - 175 - union { 176 - u32 resid; 177 - u32 lunid_hi; 178 - }; 179 - 180 - u8 port; 181 - u8 afu_extra; 182 - /* when afu_rc=0x04, 0x14, 0x31 (_xxx_DMA_ERR): 183 - * afu_exta contains PSL response code. Useful codes are: 184 - */ 185 - #define SISL_AFU_DMA_ERR_PAGE_IN 0x0A /* AFU_retry_on_pagein Action 186 - * Enabled N/A 187 - * Disabled retry 188 - */ 189 - #define SISL_AFU_DMA_ERR_INVALID_EA 0x0B /* this is a hard error 190 - * afu_rc Implies 191 - * 0x04, 0x14 master exit. 192 - * 0x31 user error. 193 - */ 194 - /* when afu rc=0x20 (no channels): 195 - * afu_extra bits [4:5]: available portmask, [6:7]: requested portmask. 
196 - */ 197 - #define SISL_AFU_NO_CLANNELS_AMASK(afu_extra) (((afu_extra) & 0x0C) >> 2) 198 - #define SISL_AFU_NO_CLANNELS_RMASK(afu_extra) ((afu_extra) & 0x03) 199 - 200 - u8 scsi_extra; 201 - u8 fc_extra; 202 - 203 - union { 204 - u8 sense_data[SISL_SENSE_DATA_LEN]; 205 - struct { 206 - u32 lunid_lo; 207 - u8 wwid[SISL_WWID_DATA_LEN]; 208 - }; 209 - }; 210 - 211 - /* These fields are defined by the SISlite architecture for the 212 - * host to use as they see fit for their implementation. 213 - */ 214 - union { 215 - u64 host_use[4]; 216 - u8 host_use_b[32]; 217 - }; 218 - } __packed; 219 - 220 - #define SISL_RESP_HANDLE_T_BIT 0x1ULL /* Toggle bit */ 221 - 222 - /* MMIO space is required to support only 64-bit access */ 223 - 224 - /* 225 - * This AFU has two mechanisms to deal with endian-ness. 226 - * One is a global configuration (in the afu_config) register 227 - * below that specifies the endian-ness of the host. 228 - * The other is a per context (i.e. application) specification 229 - * controlled by the endian_ctrl field here. Since the master 230 - * context is one such application the master context's 231 - * endian-ness is set to be the same as the host. 232 - * 233 - * As per the SISlite spec, the MMIO registers are always 234 - * big endian. 235 - */ 236 - #define SISL_ENDIAN_CTRL_BE 0x8000000000000080ULL 237 - #define SISL_ENDIAN_CTRL_LE 0x0000000000000000ULL 238 - 239 - #ifdef __BIG_ENDIAN 240 - #define SISL_ENDIAN_CTRL SISL_ENDIAN_CTRL_BE 241 - #else 242 - #define SISL_ENDIAN_CTRL SISL_ENDIAN_CTRL_LE 243 - #endif 244 - 245 - /* per context host transport MMIO */ 246 - struct sisl_host_map { 247 - __be64 endian_ctrl; /* Per context Endian Control. The AFU will 248 - * operate on whatever the context is of the 249 - * host application. 250 - */ 251 - 252 - __be64 intr_status; /* this sends LISN# programmed in ctx_ctrl. 253 - * Only recovery in a PERM_ERR is a context 254 - * exit since there is no way to tell which 255 - * command caused the error. 
256 - */ 257 - #define SISL_ISTATUS_PERM_ERR_LISN_3_EA 0x0400ULL /* b53, user error */ 258 - #define SISL_ISTATUS_PERM_ERR_LISN_2_EA 0x0200ULL /* b54, user error */ 259 - #define SISL_ISTATUS_PERM_ERR_LISN_1_EA 0x0100ULL /* b55, user error */ 260 - #define SISL_ISTATUS_PERM_ERR_LISN_3_PASID 0x0080ULL /* b56, user error */ 261 - #define SISL_ISTATUS_PERM_ERR_LISN_2_PASID 0x0040ULL /* b57, user error */ 262 - #define SISL_ISTATUS_PERM_ERR_LISN_1_PASID 0x0020ULL /* b58, user error */ 263 - #define SISL_ISTATUS_PERM_ERR_CMDROOM 0x0010ULL /* b59, user error */ 264 - #define SISL_ISTATUS_PERM_ERR_RCB_READ 0x0008ULL /* b60, user error */ 265 - #define SISL_ISTATUS_PERM_ERR_SA_WRITE 0x0004ULL /* b61, user error */ 266 - #define SISL_ISTATUS_PERM_ERR_RRQ_WRITE 0x0002ULL /* b62, user error */ 267 - /* Page in wait accessing RCB/IOASA/RRQ is reported in b63. 268 - * Same error in data/LXT/RHT access is reported via IOASA. 269 - */ 270 - #define SISL_ISTATUS_TEMP_ERR_PAGEIN 0x0001ULL /* b63, can only be 271 - * generated when AFU 272 - * auto retry is 273 - * disabled. If user 274 - * can determine the 275 - * command that caused 276 - * the error, it can 277 - * be retried. 
278 - */ 279 - #define SISL_ISTATUS_UNMASK (0x07FFULL) /* 1 means unmasked */ 280 - #define SISL_ISTATUS_MASK ~(SISL_ISTATUS_UNMASK) /* 1 means masked */ 281 - 282 - __be64 intr_clear; 283 - __be64 intr_mask; 284 - __be64 ioarrin; /* only write what cmd_room permits */ 285 - __be64 rrq_start; /* start & end are both inclusive */ 286 - __be64 rrq_end; /* write sequence: start followed by end */ 287 - __be64 cmd_room; 288 - __be64 ctx_ctrl; /* least significant byte or b56:63 is LISN# */ 289 - #define SISL_CTX_CTRL_UNMAP_SECTOR 0x8000000000000000ULL /* b0 */ 290 - #define SISL_CTX_CTRL_LISN_MASK (0xFFULL) 291 - __be64 mbox_w; /* restricted use */ 292 - __be64 sq_start; /* Submission Queue (R/W): write sequence and */ 293 - __be64 sq_end; /* inclusion semantics are the same as RRQ */ 294 - __be64 sq_head; /* Submission Queue Head (R): for debugging */ 295 - __be64 sq_tail; /* Submission Queue TAIL (R/W): next IOARCB */ 296 - __be64 sq_ctx_reset; /* Submission Queue Context Reset (R/W) */ 297 - }; 298 - 299 - /* per context provisioning & control MMIO */ 300 - struct sisl_ctrl_map { 301 - __be64 rht_start; 302 - __be64 rht_cnt_id; 303 - /* both cnt & ctx_id args must be ULL */ 304 - #define SISL_RHT_CNT_ID(cnt, ctx_id) (((cnt) << 48) | ((ctx_id) << 32)) 305 - 306 - __be64 ctx_cap; /* afu_rc below is when the capability is violated */ 307 - #define SISL_CTX_CAP_PROXY_ISSUE 0x8000000000000000ULL /* afu_rc 0x21 */ 308 - #define SISL_CTX_CAP_REAL_MODE 0x4000000000000000ULL /* afu_rc 0x21 */ 309 - #define SISL_CTX_CAP_HOST_XLATE 0x2000000000000000ULL /* afu_rc 0x1a */ 310 - #define SISL_CTX_CAP_PROXY_TARGET 0x1000000000000000ULL /* afu_rc 0x21 */ 311 - #define SISL_CTX_CAP_AFU_CMD 0x0000000000000008ULL /* afu_rc 0x21 */ 312 - #define SISL_CTX_CAP_GSCSI_CMD 0x0000000000000004ULL /* afu_rc 0x21 */ 313 - #define SISL_CTX_CAP_WRITE_CMD 0x0000000000000002ULL /* afu_rc 0x21 */ 314 - #define SISL_CTX_CAP_READ_CMD 0x0000000000000001ULL /* afu_rc 0x21 */ 315 - __be64 mbox_r; 316 - 
__be64 lisn_pasid[2];
317 -     /* pasid _a arg must be ULL */
318 - #define SISL_LISN_PASID(_a, _b)  (((_a) << 32) | (_b))
319 -     __be64 lisn_ea[3];
320 - };
321 - 
322 - /* single copy global regs */
323 - struct sisl_global_regs {
324 -     __be64 aintr_status;
325 -     /*
326 -      * In cxlflash, FC port/link are arranged in port pairs, each
327 -      * gets a byte of status:
328 -      *
329 -      *     *_OTHER:   other err, FC_ERRCAP[31:20]
330 -      *     *_LOGO:    target sent FLOGI/PLOGI/LOGO while logged in
331 -      *     *_CRC_T:   CRC threshold exceeded
332 -      *     *_LOGI_R:  login state machine timed out and retrying
333 -      *     *_LOGI_F:  login failed, FC_ERROR[19:0]
334 -      *     *_LOGI_S:  login succeeded
335 -      *     *_LINK_DN: link online to offline
336 -      *     *_LINK_UP: link offline to online
337 -      */
338 - #define SISL_ASTATUS_FC2_OTHER    0x80000000ULL /* b32 */
339 - #define SISL_ASTATUS_FC2_LOGO     0x40000000ULL /* b33 */
340 - #define SISL_ASTATUS_FC2_CRC_T    0x20000000ULL /* b34 */
341 - #define SISL_ASTATUS_FC2_LOGI_R   0x10000000ULL /* b35 */
342 - #define SISL_ASTATUS_FC2_LOGI_F   0x08000000ULL /* b36 */
343 - #define SISL_ASTATUS_FC2_LOGI_S   0x04000000ULL /* b37 */
344 - #define SISL_ASTATUS_FC2_LINK_DN  0x02000000ULL /* b38 */
345 - #define SISL_ASTATUS_FC2_LINK_UP  0x01000000ULL /* b39 */
346 - 
347 - #define SISL_ASTATUS_FC3_OTHER    0x00800000ULL /* b40 */
348 - #define SISL_ASTATUS_FC3_LOGO     0x00400000ULL /* b41 */
349 - #define SISL_ASTATUS_FC3_CRC_T    0x00200000ULL /* b42 */
350 - #define SISL_ASTATUS_FC3_LOGI_R   0x00100000ULL /* b43 */
351 - #define SISL_ASTATUS_FC3_LOGI_F   0x00080000ULL /* b44 */
352 - #define SISL_ASTATUS_FC3_LOGI_S   0x00040000ULL /* b45 */
353 - #define SISL_ASTATUS_FC3_LINK_DN  0x00020000ULL /* b46 */
354 - #define SISL_ASTATUS_FC3_LINK_UP  0x00010000ULL /* b47 */
355 - 
356 - #define SISL_ASTATUS_FC0_OTHER    0x00008000ULL /* b48 */
357 - #define SISL_ASTATUS_FC0_LOGO     0x00004000ULL /* b49 */
358 - #define SISL_ASTATUS_FC0_CRC_T    0x00002000ULL /* b50 */
359 - #define SISL_ASTATUS_FC0_LOGI_R   0x00001000ULL /* b51 */
360 - #define SISL_ASTATUS_FC0_LOGI_F   0x00000800ULL /* b52 */
361 - #define SISL_ASTATUS_FC0_LOGI_S   0x00000400ULL /* b53 */
362 - #define SISL_ASTATUS_FC0_LINK_DN  0x00000200ULL /* b54 */
363 - #define SISL_ASTATUS_FC0_LINK_UP  0x00000100ULL /* b55 */
364 - 
365 - #define SISL_ASTATUS_FC1_OTHER    0x00000080ULL /* b56 */
366 - #define SISL_ASTATUS_FC1_LOGO     0x00000040ULL /* b57 */
367 - #define SISL_ASTATUS_FC1_CRC_T    0x00000020ULL /* b58 */
368 - #define SISL_ASTATUS_FC1_LOGI_R   0x00000010ULL /* b59 */
369 - #define SISL_ASTATUS_FC1_LOGI_F   0x00000008ULL /* b60 */
370 - #define SISL_ASTATUS_FC1_LOGI_S   0x00000004ULL /* b61 */
371 - #define SISL_ASTATUS_FC1_LINK_DN  0x00000002ULL /* b62 */
372 - #define SISL_ASTATUS_FC1_LINK_UP  0x00000001ULL /* b63 */
373 - 
374 - #define SISL_FC_INTERNAL_UNMASK   0x0000000300000000ULL /* 1 means unmasked */
375 - #define SISL_FC_INTERNAL_MASK     ~(SISL_FC_INTERNAL_UNMASK)
376 - #define SISL_FC_INTERNAL_SHIFT    32
377 - 
378 - #define SISL_FC_SHUTDOWN_NORMAL       0x0000000000000010ULL
379 - #define SISL_FC_SHUTDOWN_ABRUPT       0x0000000000000020ULL
380 - 
381 - #define SISL_STATUS_SHUTDOWN_ACTIVE   0x0000000000000010ULL
382 - #define SISL_STATUS_SHUTDOWN_COMPLETE 0x0000000000000020ULL
383 - 
384 - #define SISL_ASTATUS_UNMASK  0xFFFFFFFFULL           /* 1 means unmasked */
385 - #define SISL_ASTATUS_MASK    ~(SISL_ASTATUS_UNMASK)  /* 1 means masked */
386 - 
387 -     __be64 aintr_clear;
388 -     __be64 aintr_mask;
389 -     __be64 afu_ctrl;
390 -     __be64 afu_hb;
391 -     __be64 afu_scratch_pad;
392 -     __be64 afu_port_sel;
393 - #define SISL_AFUCONF_AR_IOARCB  0x4000ULL
394 - #define SISL_AFUCONF_AR_LXT     0x2000ULL
395 - #define SISL_AFUCONF_AR_RHT     0x1000ULL
396 - #define SISL_AFUCONF_AR_DATA    0x0800ULL
397 - #define SISL_AFUCONF_AR_RSRC    0x0400ULL
398 - #define SISL_AFUCONF_AR_IOASA   0x0200ULL
399 - #define SISL_AFUCONF_AR_RRQ     0x0100ULL
400 - /* Aggregate all Auto Retry Bits */
401 - #define SISL_AFUCONF_AR_ALL     (SISL_AFUCONF_AR_IOARCB|SISL_AFUCONF_AR_LXT| \
402 -                                  SISL_AFUCONF_AR_RHT|SISL_AFUCONF_AR_DATA| \
403 -                                  SISL_AFUCONF_AR_RSRC|SISL_AFUCONF_AR_IOASA| \
404 -                                  SISL_AFUCONF_AR_RRQ)
405 - #ifdef __BIG_ENDIAN
406 - #define SISL_AFUCONF_ENDIAN     0x0000ULL
407 - #else
408 - #define SISL_AFUCONF_ENDIAN     0x0020ULL
409 - #endif
410 - #define SISL_AFUCONF_MBOX_CLR_READ  0x0010ULL
411 -     __be64 afu_config;
412 -     __be64 rsvd[0xf8];
413 -     __le64 afu_version;
414 -     __be64 interface_version;
415 - #define SISL_INTVER_CAP_SHIFT   16
416 - #define SISL_INTVER_MAJ_SHIFT   8
417 - #define SISL_INTVER_CAP_MASK    0xFFFFFFFF00000000ULL
418 - #define SISL_INTVER_MAJ_MASK    0x00000000FFFF0000ULL
419 - #define SISL_INTVER_MIN_MASK    0x000000000000FFFFULL
420 - #define SISL_INTVER_CAP_IOARRIN_CMD_MODE     0x800000000000ULL
421 - #define SISL_INTVER_CAP_SQ_CMD_MODE          0x400000000000ULL
422 - #define SISL_INTVER_CAP_RESERVED_CMD_MODE_A  0x200000000000ULL
423 - #define SISL_INTVER_CAP_RESERVED_CMD_MODE_B  0x100000000000ULL
424 - #define SISL_INTVER_CAP_LUN_PROVISION        0x080000000000ULL
425 - #define SISL_INTVER_CAP_AFU_DEBUG            0x040000000000ULL
426 - #define SISL_INTVER_CAP_OCXL_LISN            0x020000000000ULL
427 - };
428 - 
429 - #define CXLFLASH_NUM_FC_PORTS_PER_BANK  2    /* fixed # of ports per bank */
430 - #define CXLFLASH_MAX_FC_BANKS           2    /* max # of banks supported */
431 - #define CXLFLASH_MAX_FC_PORTS   (CXLFLASH_NUM_FC_PORTS_PER_BANK * \
432 -                                  CXLFLASH_MAX_FC_BANKS)
433 - #define CXLFLASH_MAX_CONTEXT    512  /* number of contexts per AFU */
434 - #define CXLFLASH_NUM_VLUNS      512  /* number of vluns per AFU/port */
435 - #define CXLFLASH_NUM_REGS       512  /* number of registers per port */
436 - 
437 - struct fc_port_bank {
438 -     __be64 fc_port_regs[CXLFLASH_NUM_FC_PORTS_PER_BANK][CXLFLASH_NUM_REGS];
439 -     __be64 fc_port_luns[CXLFLASH_NUM_FC_PORTS_PER_BANK][CXLFLASH_NUM_VLUNS];
440 - };
441 - 
442 - struct sisl_global_map {
443 -     union {
444 -         struct sisl_global_regs regs;
445 -         char page0[SIZE_4K];    /* page 0 */
446 -     };
447 - 
448 -     char page1[SIZE_4K];        /* page 1 */
449 - 
450 -     struct fc_port_bank bank[CXLFLASH_MAX_FC_BANKS]; /* pages 2 - 9 */
451 - 
452 -     /* pages 10 - 15 are reserved */
453 - 
454 - };
455 - 
456 - /*
457 -  * CXL Flash Memory Map
458 -  *
459 -  *     +-------------------------------+
460 -  *     | 512 * 64 KB User MMIO         |
461 -  *     | (per context)                 |
462 -  *     | User Accessible               |
463 -  *     +-------------------------------+
464 -  *     | 512 * 128 B per context       |
465 -  *     | Provisioning and Control      |
466 -  *     | Trusted Process accessible    |
467 -  *     +-------------------------------+
468 -  *     | 64 KB Global                  |
469 -  *     | Trusted Process accessible    |
470 -  *     +-------------------------------+
471 -  */
472 - struct cxlflash_afu_map {
473 -     union {
474 -         struct sisl_host_map host;
475 -         char harea[SIZE_64K];   /* 64KB each */
476 -     } hosts[CXLFLASH_MAX_CONTEXT];
477 - 
478 -     union {
479 -         struct sisl_ctrl_map ctrl;
480 -         char carea[cache_line_size()];  /* 128B each */
481 -     } ctrls[CXLFLASH_MAX_CONTEXT];
482 - 
483 -     union {
484 -         struct sisl_global_map global;
485 -         char garea[SIZE_64K];   /* 64KB single block */
486 -     };
487 - };
488 - 
489 - /*
490 -  * LXT - LBA Translation Table
491 -  * LXT control blocks
492 -  */
493 - struct sisl_lxt_entry {
494 -     u64 rlba_base;  /* bits 0:47 is base
495 -                      * b48:55 is lun index
496 -                      * b58:59 is write & read perms
497 -                      * (if no perm, afu_rc=0x15)
498 -                      * b60:63 is port_sel mask
499 -                      */
500 - };
501 - 
502 - /*
503 -  * RHT - Resource Handle Table
504 -  * Per the SISlite spec, RHT entries are to be 16-byte aligned
505 -  */
506 - struct sisl_rht_entry {
507 -     struct sisl_lxt_entry *lxt_start;
508 -     u32 lxt_cnt;
509 -     u16 rsvd;
510 -     u8 fp;          /* format & perm nibbles.
511 -                      * (if no perm, afu_rc=0x05)
512 -                      */
513 -     u8 nmask;
514 - } __packed __aligned(16);
515 - 
516 - struct sisl_rht_entry_f1 {
517 -     u64 lun_id;
518 -     union {
519 -         struct {
520 -             u8 valid;
521 -             u8 rsvd[5];
522 -             u8 fp;
523 -             u8 port_sel;
524 -         };
525 - 
526 -         u64 dw;
527 -     };
528 - } __packed __aligned(16);
529 - 
530 - /* make the fp byte */
531 - #define SISL_RHT_FP(fmt, perm) (((fmt) << 4) | (perm))
532 - 
533 - /* make the fp byte for a clone from a source fp and clone flags
534 -  * flags must be only 2 LSB bits.
535 -  */
536 - #define SISL_RHT_FP_CLONE(src_fp, cln_flags) ((src_fp) & (0xFC | (cln_flags)))
537 - 
538 - #define RHT_PERM_READ   0x01U
539 - #define RHT_PERM_WRITE  0x02U
540 - #define RHT_PERM_RW     (RHT_PERM_READ | RHT_PERM_WRITE)
541 - 
542 - /* extract the perm bits from a fp */
543 - #define SISL_RHT_PERM(fp) ((fp) & RHT_PERM_RW)
544 - 
545 - #define PORT0  0x01U
546 - #define PORT1  0x02U
547 - #define PORT2  0x04U
548 - #define PORT3  0x08U
549 - #define PORT_MASK(_n)  ((1 << (_n)) - 1)
550 - 
551 - /* AFU Sync Mode byte */
552 - #define AFU_LW_SYNC  0x0U
553 - #define AFU_HW_SYNC  0x1U
554 - #define AFU_GSYNC    0x2U
555 - 
556 - /* Special Task Management Function CDB */
557 - #define TMF_LUN_RESET  0x1U
558 - #define TMF_CLEAR_ACA  0x2U
559 - 
560 - #endif /* _SISLITE_H */
-2218
drivers/scsi/cxlflash/superpipe.c
···
1 - // SPDX-License-Identifier: GPL-2.0-or-later
2 - /*
3 -  * CXL Flash Device Driver
4 -  *
5 -  * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
6 -  *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
7 -  *
8 -  * Copyright (C) 2015 IBM Corporation
9 -  */
10 - 
11 - #include <linux/delay.h>
12 - #include <linux/file.h>
13 - #include <linux/interrupt.h>
14 - #include <linux/pci.h>
15 - #include <linux/syscalls.h>
16 - #include <linux/unaligned.h>
17 - 
18 - #include <scsi/scsi.h>
19 - #include <scsi/scsi_host.h>
20 - #include <scsi/scsi_cmnd.h>
21 - #include <scsi/scsi_eh.h>
22 - #include <uapi/scsi/cxlflash_ioctl.h>
23 - 
24 - #include "sislite.h"
25 - #include "common.h"
26 - #include "vlun.h"
27 - #include "superpipe.h"
28 - 
29 - struct cxlflash_global global;
30 - 
31 - /**
32 -  * marshal_rele_to_resize() - translate release to resize structure
33 -  * @release:  Source structure from which to translate/copy.
34 -  * @resize:   Destination structure for the translate/copy.
35 -  */
36 - static void marshal_rele_to_resize(struct dk_cxlflash_release *release,
37 -                                    struct dk_cxlflash_resize *resize)
38 - {
39 -     resize->hdr = release->hdr;
40 -     resize->context_id = release->context_id;
41 -     resize->rsrc_handle = release->rsrc_handle;
42 - }
43 - 
44 - /**
45 -  * marshal_det_to_rele() - translate detach to release structure
46 -  * @detach:   Destination structure for the translate/copy.
47 -  * @release:  Source structure from which to translate/copy.
48 -  */
49 - static void marshal_det_to_rele(struct dk_cxlflash_detach *detach,
50 -                                 struct dk_cxlflash_release *release)
51 - {
52 -     release->hdr = detach->hdr;
53 -     release->context_id = detach->context_id;
54 - }
55 - 
56 - /**
57 -  * marshal_udir_to_rele() - translate udirect to release structure
58 -  * @udirect:  Source structure from which to translate/copy.
59 -  * @release:  Destination structure for the translate/copy.
60 -  */
61 - static void marshal_udir_to_rele(struct dk_cxlflash_udirect *udirect,
62 -                                  struct dk_cxlflash_release *release)
63 - {
64 -     release->hdr = udirect->hdr;
65 -     release->context_id = udirect->context_id;
66 -     release->rsrc_handle = udirect->rsrc_handle;
67 - }
68 - 
69 - /**
70 -  * cxlflash_free_errpage() - frees resources associated with global error page
71 -  */
72 - void cxlflash_free_errpage(void)
73 - {
74 - 
75 -     mutex_lock(&global.mutex);
76 -     if (global.err_page) {
77 -         __free_page(global.err_page);
78 -         global.err_page = NULL;
79 -     }
80 -     mutex_unlock(&global.mutex);
81 - }
82 - 
83 - /**
84 -  * cxlflash_stop_term_user_contexts() - stops/terminates known user contexts
85 -  * @cfg:  Internal structure associated with the host.
86 -  *
87 -  * When the host needs to go down, all users must be quiesced and their
88 -  * memory freed. This is accomplished by putting the contexts in error
89 -  * state which will notify the user and let them 'drive' the tear down.
90 -  * Meanwhile, this routine camps until all user contexts have been removed.
91 -  *
92 -  * Note that the main loop in this routine will always execute at least once
93 -  * to flush the reset_waitq.
94 -  */
95 - void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *cfg)
96 - {
97 -     struct device *dev = &cfg->dev->dev;
98 -     int i, found = true;
99 - 
100 -     cxlflash_mark_contexts_error(cfg);
101 - 
102 -     while (true) {
103 -         for (i = 0; i < MAX_CONTEXT; i++)
104 -             if (cfg->ctx_tbl[i]) {
105 -                 found = true;
106 -                 break;
107 -             }
108 - 
109 -         if (!found && list_empty(&cfg->ctx_err_recovery))
110 -             return;
111 - 
112 -         dev_dbg(dev, "%s: Wait for user contexts to quiesce...\n",
113 -                 __func__);
114 -         wake_up_all(&cfg->reset_waitq);
115 -         ssleep(1);
116 -         found = false;
117 -     }
118 - }
119 - 
120 - /**
121 -  * find_error_context() - locates a context by cookie on the error recovery list
122 -  * @cfg:     Internal structure associated with the host.
123 -  * @rctxid:  Desired context by id.
124 -  * @file:    Desired context by file.
125 -  *
126 -  * Return: Found context on success, NULL on failure
127 -  */
128 - static struct ctx_info *find_error_context(struct cxlflash_cfg *cfg, u64 rctxid,
129 -                                            struct file *file)
130 - {
131 -     struct ctx_info *ctxi;
132 - 
133 -     list_for_each_entry(ctxi, &cfg->ctx_err_recovery, list)
134 -         if ((ctxi->ctxid == rctxid) || (ctxi->file == file))
135 -             return ctxi;
136 - 
137 -     return NULL;
138 - }
139 - 
140 - /**
141 -  * get_context() - obtains a validated and locked context reference
142 -  * @cfg:       Internal structure associated with the host.
143 -  * @rctxid:    Desired context (raw, un-decoded format).
144 -  * @arg:       LUN information or file associated with request.
145 -  * @ctx_ctrl:  Control information to 'steer' desired lookup.
146 -  *
147 -  * NOTE: despite the name pid, in linux, current->pid actually refers
148 -  * to the lightweight process id (tid) and can change if the process is
149 -  * multi threaded. The tgid remains constant for the process and only changes
150 -  * when the process of fork. For all intents and purposes, think of tgid
151 -  * as a pid in the traditional sense.
152 -  *
153 -  * Return: Validated context on success, NULL on failure
154 -  */
155 - struct ctx_info *get_context(struct cxlflash_cfg *cfg, u64 rctxid,
156 -                              void *arg, enum ctx_ctrl ctx_ctrl)
157 - {
158 -     struct device *dev = &cfg->dev->dev;
159 -     struct ctx_info *ctxi = NULL;
160 -     struct lun_access *lun_access = NULL;
161 -     struct file *file = NULL;
162 -     struct llun_info *lli = arg;
163 -     u64 ctxid = DECODE_CTXID(rctxid);
164 -     int rc;
165 -     pid_t pid = task_tgid_nr(current), ctxpid = 0;
166 - 
167 -     if (ctx_ctrl & CTX_CTRL_FILE) {
168 -         lli = NULL;
169 -         file = (struct file *)arg;
170 -     }
171 - 
172 -     if (ctx_ctrl & CTX_CTRL_CLONE)
173 -         pid = task_ppid_nr(current);
174 - 
175 -     if (likely(ctxid < MAX_CONTEXT)) {
176 -         while (true) {
177 -             mutex_lock(&cfg->ctx_tbl_list_mutex);
178 -             ctxi = cfg->ctx_tbl[ctxid];
179 -             if (ctxi)
180 -                 if ((file && (ctxi->file != file)) ||
181 -                     (!file && (ctxi->ctxid != rctxid)))
182 -                     ctxi = NULL;
183 - 
184 -             if ((ctx_ctrl & CTX_CTRL_ERR) ||
185 -                 (!ctxi && (ctx_ctrl & CTX_CTRL_ERR_FALLBACK)))
186 -                 ctxi = find_error_context(cfg, rctxid, file);
187 -             if (!ctxi) {
188 -                 mutex_unlock(&cfg->ctx_tbl_list_mutex);
189 -                 goto out;
190 -             }
191 - 
192 -             /*
193 -              * Need to acquire ownership of the context while still
194 -              * under the table/list lock to serialize with a remove
195 -              * thread. Use the 'try' to avoid stalling the
196 -              * table/list lock for a single context.
197 -              *
198 -              * Note that the lock order is:
199 -              *
200 -              *     cfg->ctx_tbl_list_mutex -> ctxi->mutex
201 -              *
202 -              * Therefore release ctx_tbl_list_mutex before retrying.
203 -              */
204 -             rc = mutex_trylock(&ctxi->mutex);
205 -             mutex_unlock(&cfg->ctx_tbl_list_mutex);
206 -             if (rc)
207 -                 break; /* got the context's lock! */
208 -         }
209 - 
210 -         if (ctxi->unavail)
211 -             goto denied;
212 - 
213 -         ctxpid = ctxi->pid;
214 -         if (likely(!(ctx_ctrl & CTX_CTRL_NOPID)))
215 -             if (pid != ctxpid)
216 -                 goto denied;
217 - 
218 -         if (lli) {
219 -             list_for_each_entry(lun_access, &ctxi->luns, list)
220 -                 if (lun_access->lli == lli)
221 -                     goto out;
222 -             goto denied;
223 -         }
224 -     }
225 - 
226 - out:
227 -     dev_dbg(dev, "%s: rctxid=%016llx ctxinfo=%p ctxpid=%u pid=%u "
228 -             "ctx_ctrl=%u\n", __func__, rctxid, ctxi, ctxpid, pid,
229 -             ctx_ctrl);
230 - 
231 -     return ctxi;
232 - 
233 - denied:
234 -     mutex_unlock(&ctxi->mutex);
235 -     ctxi = NULL;
236 -     goto out;
237 - }
238 - 
239 - /**
240 -  * put_context() - release a context that was retrieved from get_context()
241 -  * @ctxi:  Context to release.
242 -  *
243 -  * For now, releasing the context equates to unlocking it's mutex.
244 -  */
245 - void put_context(struct ctx_info *ctxi)
246 - {
247 -     mutex_unlock(&ctxi->mutex);
248 - }
249 - 
250 - /**
251 -  * afu_attach() - attach a context to the AFU
252 -  * @cfg:   Internal structure associated with the host.
253 -  * @ctxi:  Context to attach.
254 -  *
255 -  * Upon setting the context capabilities, they must be confirmed with
256 -  * a read back operation as the context might have been closed since
257 -  * the mailbox was unlocked. When this occurs, registration is failed.
258 -  *
259 -  * Return: 0 on success, -errno on failure
260 -  */
261 - static int afu_attach(struct cxlflash_cfg *cfg, struct ctx_info *ctxi)
262 - {
263 -     struct device *dev = &cfg->dev->dev;
264 -     struct afu *afu = cfg->afu;
265 -     struct sisl_ctrl_map __iomem *ctrl_map = ctxi->ctrl_map;
266 -     int rc = 0;
267 -     struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ);
268 -     u64 val;
269 -     int i;
270 - 
271 -     /* Unlock cap and restrict user to read/write cmds in translated mode */
272 -     readq_be(&ctrl_map->mbox_r);
273 -     val = (SISL_CTX_CAP_READ_CMD | SISL_CTX_CAP_WRITE_CMD);
274 -     writeq_be(val, &ctrl_map->ctx_cap);
275 -     val = readq_be(&ctrl_map->ctx_cap);
276 -     if (val != (SISL_CTX_CAP_READ_CMD | SISL_CTX_CAP_WRITE_CMD)) {
277 -         dev_err(dev, "%s: ctx may be closed val=%016llx\n",
278 -                 __func__, val);
279 -         rc = -EAGAIN;
280 -         goto out;
281 -     }
282 - 
283 -     if (afu_is_ocxl_lisn(afu)) {
284 -         /* Set up the LISN effective address for each interrupt */
285 -         for (i = 0; i < ctxi->irqs; i++) {
286 -             val = cfg->ops->get_irq_objhndl(ctxi->ctx, i);
287 -             writeq_be(val, &ctrl_map->lisn_ea[i]);
288 -         }
289 - 
290 -         /* Use primary HWQ PASID as identifier for all interrupts */
291 -         val = hwq->ctx_hndl;
292 -         writeq_be(SISL_LISN_PASID(val, val), &ctrl_map->lisn_pasid[0]);
293 -         writeq_be(SISL_LISN_PASID(0UL, val), &ctrl_map->lisn_pasid[1]);
294 -     }
295 - 
296 -     /* Set up MMIO registers pointing to the RHT */
297 -     writeq_be((u64)ctxi->rht_start, &ctrl_map->rht_start);
298 -     val = SISL_RHT_CNT_ID((u64)MAX_RHT_PER_CONTEXT, (u64)(hwq->ctx_hndl));
299 -     writeq_be(val, &ctrl_map->rht_cnt_id);
300 - out:
301 -     dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
302 -     return rc;
303 - }
304 - 
305 - /**
306 -  * read_cap16() - issues a SCSI READ_CAP16 command
307 -  * @sdev:  SCSI device associated with LUN.
308 -  * @lli:   LUN destined for capacity request.
309 -  *
310 -  * The READ_CAP16 can take quite a while to complete. Should an EEH occur while
311 -  * in scsi_execute_cmd(), the EEH handler will attempt to recover. As part of
312 -  * the recovery, the handler drains all currently running ioctls, waiting until
313 -  * they have completed before proceeding with a reset. As this routine is used
314 -  * on the ioctl path, this can create a condition where the EEH handler becomes
315 -  * stuck, infinitely waiting for this ioctl thread. To avoid this behavior,
316 -  * temporarily unmark this thread as an ioctl thread by releasing the ioctl
317 -  * read semaphore. This will allow the EEH handler to proceed with a recovery
318 -  * while this thread is still running. Once the scsi_execute_cmd() returns,
319 -  * reacquire the ioctl read semaphore and check the adapter state in case it
320 -  * changed while inside of scsi_execute_cmd(). The state check will wait if the
321 -  * adapter is still being recovered or return a failure if the recovery failed.
322 -  * In the event that the adapter reset failed, simply return the failure as the
323 -  * ioctl would be unable to continue.
324 -  *
325 -  * Note that the above puts a requirement on this routine to only be called on
326 -  * an ioctl thread.
327 -  *
328 -  * Return: 0 on success, -errno on failure
329 -  */
330 - static int read_cap16(struct scsi_device *sdev, struct llun_info *lli)
331 - {
332 -     struct cxlflash_cfg *cfg = shost_priv(sdev->host);
333 -     struct device *dev = &cfg->dev->dev;
334 -     struct glun_info *gli = lli->parent;
335 -     struct scsi_sense_hdr sshdr;
336 -     const struct scsi_exec_args exec_args = {
337 -         .sshdr = &sshdr,
338 -     };
339 -     u8 *cmd_buf = NULL;
340 -     u8 *scsi_cmd = NULL;
341 -     int rc = 0;
342 -     int result = 0;
343 -     int retry_cnt = 0;
344 -     u32 to = CMD_TIMEOUT * HZ;
345 - 
346 - retry:
347 -     cmd_buf = kzalloc(CMD_BUFSIZE, GFP_KERNEL);
348 -     scsi_cmd = kzalloc(MAX_COMMAND_SIZE, GFP_KERNEL);
349 -     if (unlikely(!cmd_buf || !scsi_cmd)) {
350 -         rc = -ENOMEM;
351 -         goto out;
352 -     }
353 - 
354 -     scsi_cmd[0] = SERVICE_ACTION_IN_16;  /* read cap(16) */
355 -     scsi_cmd[1] = SAI_READ_CAPACITY_16;  /* service action */
356 -     put_unaligned_be32(CMD_BUFSIZE, &scsi_cmd[10]);
357 - 
358 -     dev_dbg(dev, "%s: %ssending cmd(%02x)\n", __func__,
359 -             retry_cnt ? "re" : "", scsi_cmd[0]);
360 - 
361 -     /* Drop the ioctl read semaphore across lengthy call */
362 -     up_read(&cfg->ioctl_rwsem);
363 -     result = scsi_execute_cmd(sdev, scsi_cmd, REQ_OP_DRV_IN, cmd_buf,
364 -                               CMD_BUFSIZE, to, CMD_RETRIES, &exec_args);
365 -     down_read(&cfg->ioctl_rwsem);
366 -     rc = check_state(cfg);
367 -     if (rc) {
368 -         dev_err(dev, "%s: Failed state result=%08x\n",
369 -                 __func__, result);
370 -         rc = -ENODEV;
371 -         goto out;
372 -     }
373 - 
374 -     if (result > 0 && scsi_sense_valid(&sshdr)) {
375 -         if (result & SAM_STAT_CHECK_CONDITION) {
376 -             switch (sshdr.sense_key) {
377 -             case NO_SENSE:
378 -             case RECOVERED_ERROR:
379 -             case NOT_READY:
380 -                 result &= ~SAM_STAT_CHECK_CONDITION;
381 -                 break;
382 -             case UNIT_ATTENTION:
383 -                 switch (sshdr.asc) {
384 -                 case 0x29: /* Power on Reset or Device Reset */
385 -                     fallthrough;
386 -                 case 0x2A: /* Device capacity changed */
387 -                 case 0x3F: /* Report LUNs changed */
388 -                     /* Retry the command once more */
389 -                     if (retry_cnt++ < 1) {
390 -                         kfree(cmd_buf);
391 -                         kfree(scsi_cmd);
392 -                         goto retry;
393 -                     }
394 -                 }
395 -                 break;
396 -             default:
397 -                 break;
398 -             }
399 -         }
400 -     }
401 - 
402 -     if (result) {
403 -         dev_err(dev, "%s: command failed, result=%08x\n",
404 -                 __func__, result);
405 -         rc = -EIO;
406 -         goto out;
407 -     }
408 - 
409 -     /*
410 -      * Read cap was successful, grab values from the buffer;
411 -      * note that we don't need to worry about unaligned access
412 -      * as the buffer is allocated on an aligned boundary.
413 -      */
414 -     mutex_lock(&gli->mutex);
415 -     gli->max_lba = be64_to_cpu(*((__be64 *)&cmd_buf[0]));
416 -     gli->blk_len = be32_to_cpu(*((__be32 *)&cmd_buf[8]));
417 -     mutex_unlock(&gli->mutex);
418 - 
419 - out:
420 -     kfree(cmd_buf);
421 -     kfree(scsi_cmd);
422 - 
423 -     dev_dbg(dev, "%s: maxlba=%lld blklen=%d rc=%d\n",
424 -             __func__, gli->max_lba, gli->blk_len, rc);
425 -     return rc;
426 - }
427 - 
428 - /**
429 -  * get_rhte() - obtains validated resource handle table entry reference
430 -  * @ctxi:   Context owning the resource handle.
431 -  * @rhndl:  Resource handle associated with entry.
432 -  * @lli:    LUN associated with request.
433 -  *
434 -  * Return: Validated RHTE on success, NULL on failure
435 -  */
436 - struct sisl_rht_entry *get_rhte(struct ctx_info *ctxi, res_hndl_t rhndl,
437 -                                 struct llun_info *lli)
438 - {
439 -     struct cxlflash_cfg *cfg = ctxi->cfg;
440 -     struct device *dev = &cfg->dev->dev;
441 -     struct sisl_rht_entry *rhte = NULL;
442 - 
443 -     if (unlikely(!ctxi->rht_start)) {
444 -         dev_dbg(dev, "%s: Context does not have allocated RHT\n",
445 -                  __func__);
446 -         goto out;
447 -     }
448 - 
449 -     if (unlikely(rhndl >= MAX_RHT_PER_CONTEXT)) {
450 -         dev_dbg(dev, "%s: Bad resource handle rhndl=%d\n",
451 -                 __func__, rhndl);
452 -         goto out;
453 -     }
454 - 
455 -     if (unlikely(ctxi->rht_lun[rhndl] != lli)) {
456 -         dev_dbg(dev, "%s: Bad resource handle LUN rhndl=%d\n",
457 -                 __func__, rhndl);
458 -         goto out;
459 -     }
460 - 
461 -     rhte = &ctxi->rht_start[rhndl];
462 -     if (unlikely(rhte->nmask == 0)) {
463 -         dev_dbg(dev, "%s: Unopened resource handle rhndl=%d\n",
464 -                 __func__, rhndl);
465 -         rhte = NULL;
466 -         goto out;
467 -     }
468 - 
469 - out:
470 -     return rhte;
471 - }
472 - 
473 - /**
474 -  * rhte_checkout() - obtains free/empty resource handle table entry
475 -  * @ctxi:  Context owning the resource handle.
476 -  * @lli:   LUN associated with request.
477 -  *
478 -  * Return: Free RHTE on success, NULL on failure
479 -  */
480 - struct sisl_rht_entry *rhte_checkout(struct ctx_info *ctxi,
481 -                                      struct llun_info *lli)
482 - {
483 -     struct cxlflash_cfg *cfg = ctxi->cfg;
484 -     struct device *dev = &cfg->dev->dev;
485 -     struct sisl_rht_entry *rhte = NULL;
486 -     int i;
487 - 
488 -     /* Find a free RHT entry */
489 -     for (i = 0; i < MAX_RHT_PER_CONTEXT; i++)
490 -         if (ctxi->rht_start[i].nmask == 0) {
491 -             rhte = &ctxi->rht_start[i];
492 -             ctxi->rht_out++;
493 -             break;
494 -         }
495 - 
496 -     if (likely(rhte))
497 -         ctxi->rht_lun[i] = lli;
498 - 
499 -     dev_dbg(dev, "%s: returning rhte=%p index=%d\n", __func__, rhte, i);
500 -     return rhte;
501 - }
502 - 
503 - /**
504 -  * rhte_checkin() - releases a resource handle table entry
505 -  * @ctxi:  Context owning the resource handle.
506 -  * @rhte:  RHTE to release.
507 -  */
508 - void rhte_checkin(struct ctx_info *ctxi,
509 -                   struct sisl_rht_entry *rhte)
510 - {
511 -     u32 rsrc_handle = rhte - ctxi->rht_start;
512 - 
513 -     rhte->nmask = 0;
514 -     rhte->fp = 0;
515 -     ctxi->rht_out--;
516 -     ctxi->rht_lun[rsrc_handle] = NULL;
517 -     ctxi->rht_needs_ws[rsrc_handle] = false;
518 - }
519 - 
520 - /**
521 -  * rht_format1() - populates a RHTE for format 1
522 -  * @rhte:      RHTE to populate.
523 -  * @lun_id:    LUN ID of LUN associated with RHTE.
524 -  * @perm:      Desired permissions for RHTE.
525 -  * @port_sel:  Port selection mask
526 -  */
527 - static void rht_format1(struct sisl_rht_entry *rhte, u64 lun_id, u32 perm,
528 -                         u32 port_sel)
529 - {
530 -     /*
531 -      * Populate the Format 1 RHT entry for direct access (physical
532 -      * LUN) using the synchronization sequence defined in the
533 -      * SISLite specification.
534 -      */
535 -     struct sisl_rht_entry_f1 dummy = { 0 };
536 -     struct sisl_rht_entry_f1 *rhte_f1 = (struct sisl_rht_entry_f1 *)rhte;
537 - 
538 -     memset(rhte_f1, 0, sizeof(*rhte_f1));
539 -     rhte_f1->fp = SISL_RHT_FP(1U, 0);
540 -     dma_wmb(); /* Make setting of format bit visible */
541 - 
542 -     rhte_f1->lun_id = lun_id;
543 -     dma_wmb(); /* Make setting of LUN id visible */
544 - 
545 -     /*
546 -      * Use a dummy RHT Format 1 entry to build the second dword
547 -      * of the entry that must be populated in a single write when
548 -      * enabled (valid bit set to TRUE).
549 -      */
550 -     dummy.valid = 0x80;
551 -     dummy.fp = SISL_RHT_FP(1U, perm);
552 -     dummy.port_sel = port_sel;
553 -     rhte_f1->dw = dummy.dw;
554 - 
555 -     dma_wmb(); /* Make remaining RHT entry fields visible */
556 - }
557 - 
558 - /**
559 -  * cxlflash_lun_attach() - attaches a user to a LUN and manages the LUN's mode
560 -  * @gli:     LUN to attach.
561 -  * @mode:    Desired mode of the LUN.
562 -  * @locked:  Mutex status on current thread.
563 -  *
564 -  * Return: 0 on success, -errno on failure
565 -  */
566 - int cxlflash_lun_attach(struct glun_info *gli, enum lun_mode mode, bool locked)
567 - {
568 -     int rc = 0;
569 - 
570 -     if (!locked)
571 -         mutex_lock(&gli->mutex);
572 - 
573 -     if (gli->mode == MODE_NONE)
574 -         gli->mode = mode;
575 -     else if (gli->mode != mode) {
576 -         pr_debug("%s: gli_mode=%d requested_mode=%d\n",
577 -                  __func__, gli->mode, mode);
578 -         rc = -EINVAL;
579 -         goto out;
580 -     }
581 - 
582 -     gli->users++;
583 -     WARN_ON(gli->users <= 0);
584 - out:
585 -     pr_debug("%s: Returning rc=%d gli->mode=%u gli->users=%u\n",
586 -              __func__, rc, gli->mode, gli->users);
587 -     if (!locked)
588 -         mutex_unlock(&gli->mutex);
589 -     return rc;
590 - }
591 - 
592 - /**
593 -  * cxlflash_lun_detach() - detaches a user from a LUN and resets the LUN's mode
594 -  * @gli:  LUN to detach.
595 -  *
596 -  * When resetting the mode, terminate block allocation resources as they
597 -  * are no longer required (service is safe to call even when block allocation
598 -  * resources were not present - such as when transitioning from physical mode).
599 -  * These resources will be reallocated when needed (subsequent transition to
600 -  * virtual mode).
601 -  */
602 - void cxlflash_lun_detach(struct glun_info *gli)
603 - {
604 -     mutex_lock(&gli->mutex);
605 -     WARN_ON(gli->mode == MODE_NONE);
606 -     if (--gli->users == 0) {
607 -         gli->mode = MODE_NONE;
608 -         cxlflash_ba_terminate(&gli->blka.ba_lun);
609 -     }
610 -     pr_debug("%s: gli->users=%u\n", __func__, gli->users);
611 -     WARN_ON(gli->users < 0);
612 -     mutex_unlock(&gli->mutex);
613 - }
614 - 
615 - /**
616 -  * _cxlflash_disk_release() - releases the specified resource entry
617 -  * @sdev:     SCSI device associated with LUN.
618 -  * @ctxi:     Context owning resources.
619 -  * @release:  Release ioctl data structure.
620 -  *
621 -  * For LUNs in virtual mode, the virtual LUN associated with the specified
622 -  * resource handle is resized to 0 prior to releasing the RHTE. Note that the
623 -  * AFU sync should _not_ be performed when the context is sitting on the error
624 -  * recovery list. A context on the error recovery list is not known to the AFU
625 -  * due to reset. When the context is recovered, it will be reattached and made
626 -  * known again to the AFU.
627 -  *
628 -  * Return: 0 on success, -errno on failure
629 -  */
630 - int _cxlflash_disk_release(struct scsi_device *sdev,
631 -                            struct ctx_info *ctxi,
632 -                            struct dk_cxlflash_release *release)
633 - {
634 -     struct cxlflash_cfg *cfg = shost_priv(sdev->host);
635 -     struct device *dev = &cfg->dev->dev;
636 -     struct llun_info *lli = sdev->hostdata;
637 -     struct glun_info *gli = lli->parent;
638 -     struct afu *afu = cfg->afu;
639 -     bool put_ctx = false;
640 - 
641 -     struct dk_cxlflash_resize size;
642 -     res_hndl_t rhndl = release->rsrc_handle;
643 - 
644 -     int rc = 0;
645 -     int rcr = 0;
646 -     u64 ctxid = DECODE_CTXID(release->context_id),
647 -         rctxid = release->context_id;
648 - 
649 -     struct sisl_rht_entry *rhte;
650 -     struct sisl_rht_entry_f1 *rhte_f1;
651 - 
652 -     dev_dbg(dev, "%s: ctxid=%llu rhndl=%llu gli->mode=%u gli->users=%u\n",
653 -             __func__, ctxid, release->rsrc_handle, gli->mode, gli->users);
654 - 
655 -     if (!ctxi) {
656 -         ctxi = get_context(cfg, rctxid, lli, CTX_CTRL_ERR_FALLBACK);
657 -         if (unlikely(!ctxi)) {
658 -             dev_dbg(dev, "%s: Bad context ctxid=%llu\n",
659 -                     __func__, ctxid);
660 -             rc = -EINVAL;
661 -             goto out;
662 -         }
663 - 
664 -         put_ctx = true;
665 -     }
666 - 
667 -     rhte = get_rhte(ctxi, rhndl, lli);
668 -     if (unlikely(!rhte)) {
669 -         dev_dbg(dev, "%s: Bad resource handle rhndl=%d\n",
670 -                 __func__, rhndl);
671 -         rc = -EINVAL;
672 -         goto out;
673 -     }
674 - 
675 -     /*
676 -      * Resize to 0 for virtual LUNS by setting the size
677 -      * to 0. This will clear LXT_START and LXT_CNT fields
678 -      * in the RHT entry and properly sync with the AFU.
679 -      *
680 -      * Afterwards we clear the remaining fields.
681 -      */
682 -     switch (gli->mode) {
683 -     case MODE_VIRTUAL:
684 -         marshal_rele_to_resize(release, &size);
685 -         size.req_size = 0;
686 -         rc = _cxlflash_vlun_resize(sdev, ctxi, &size);
687 -         if (rc) {
688 -             dev_dbg(dev, "%s: resize failed rc %d\n", __func__, rc);
689 -             goto out;
690 -         }
691 - 
692 -         break;
693 -     case MODE_PHYSICAL:
694 -         /*
695 -          * Clear the Format 1 RHT entry for direct access
696 -          * (physical LUN) using the synchronization sequence
697 -          * defined in the SISLite specification.
698 -          */
699 -         rhte_f1 = (struct sisl_rht_entry_f1 *)rhte;
700 - 
701 -         rhte_f1->valid = 0;
702 -         dma_wmb(); /* Make revocation of RHT entry visible */
703 - 
704 -         rhte_f1->lun_id = 0;
705 -         dma_wmb(); /* Make clearing of LUN id visible */
706 - 
707 -         rhte_f1->dw = 0;
708 -         dma_wmb(); /* Make RHT entry bottom-half clearing visible */
709 - 
710 -         if (!ctxi->err_recovery_active) {
711 -             rcr = cxlflash_afu_sync(afu, ctxid, rhndl, AFU_HW_SYNC);
712 -             if (unlikely(rcr))
713 -                 dev_dbg(dev, "%s: AFU sync failed rc=%d\n",
714 -                         __func__, rcr);
715 -         }
716 -         break;
717 -     default:
718 -         WARN(1, "Unsupported LUN mode!");
719 -         goto out;
720 -     }
721 - 
722 -     rhte_checkin(ctxi, rhte);
723 -     cxlflash_lun_detach(gli);
724 - 
725 - out:
726 -     if (put_ctx)
727 -         put_context(ctxi);
728 -     dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc);
729 -     return rc;
730 - }
731 - 
732 - int cxlflash_disk_release(struct scsi_device *sdev, void *release)
733 - {
734 -     return _cxlflash_disk_release(sdev, NULL, release);
735 - }
736 - 
737 - /**
738 -  * destroy_context() - releases a context
739 -  * @cfg:   Internal structure associated with the host.
740 -  * @ctxi:  Context to release.
741 -  *
742 -  * This routine is safe to be called with a a non-initialized context.
743 -  * Also note that the routine conditionally checks for the existence
744 -  * of the context control map before clearing the RHT registers and
745 -  * context capabilities because it is possible to destroy a context
746 -  * while the context is in the error state (previous mapping was
747 -  * removed [so there is no need to worry about clearing] and context
748 -  * is waiting for a new mapping).
749 -  */
750 - static void destroy_context(struct cxlflash_cfg *cfg,
751 -                             struct ctx_info *ctxi)
752 - {
753 -     struct afu *afu = cfg->afu;
754 - 
755 -     if (ctxi->initialized) {
756 -         WARN_ON(!list_empty(&ctxi->luns));
757 - 
758 -         /* Clear RHT registers and drop all capabilities for context */
759 -         if (afu->afu_map && ctxi->ctrl_map) {
760 -             writeq_be(0, &ctxi->ctrl_map->rht_start);
761 -             writeq_be(0, &ctxi->ctrl_map->rht_cnt_id);
762 -             writeq_be(0, &ctxi->ctrl_map->ctx_cap);
763 -         }
764 -     }
765 - 
766 -     /* Free memory associated with context */
767 -     free_page((ulong)ctxi->rht_start);
768 -     kfree(ctxi->rht_needs_ws);
769 -     kfree(ctxi->rht_lun);
770 -     kfree(ctxi);
771 - }
772 - 
773 - /**
774 -  * create_context() - allocates and initializes a context
775 -  * @cfg:  Internal structure associated with the host.
776 -  *
777 -  * Return: Allocated context on success, NULL on failure
778 -  */
779 - static struct ctx_info *create_context(struct cxlflash_cfg *cfg)
780 - {
781 -     struct device *dev = &cfg->dev->dev;
782 -     struct ctx_info *ctxi = NULL;
783 -     struct llun_info **lli = NULL;
784 -     u8 *ws = NULL;
785 -     struct sisl_rht_entry *rhte;
786 - 
787 -     ctxi = kzalloc(sizeof(*ctxi), GFP_KERNEL);
788 -     lli = kzalloc((MAX_RHT_PER_CONTEXT * sizeof(*lli)), GFP_KERNEL);
789 -     ws = kzalloc((MAX_RHT_PER_CONTEXT * sizeof(*ws)), GFP_KERNEL);
790 -     if (unlikely(!ctxi || !lli || !ws)) {
791 -         dev_err(dev, "%s: Unable to allocate context\n", __func__);
792 -         goto err;
793 -     }
794 - 
795 -     rhte = (struct sisl_rht_entry *)get_zeroed_page(GFP_KERNEL);
796 -     if (unlikely(!rhte)) {
797 -         dev_err(dev, "%s: Unable to allocate RHT\n", __func__);
798 -         goto err;
799 -     }
800 - 
801 -     ctxi->rht_lun = lli;
802 -     ctxi->rht_needs_ws = ws;
803 -     ctxi->rht_start = rhte;
804 - out:
805 -     return ctxi;
806 - 
807 - err:
808 -     kfree(ws);
809 -     kfree(lli);
810 -     kfree(ctxi);
811 -     ctxi = NULL;
812 -     goto out;
813 - }
814 - 
815 - /**
816 -  * init_context() - initializes a previously allocated context
817 -  * @ctxi:   Previously allocated context
818 -  * @cfg:    Internal structure associated with the host.
819 -  * @ctx:    Previously obtained context cookie.
820 -  * @ctxid:  Previously obtained process element associated with CXL context.
821 -  * @file:   Previously obtained file associated with CXL context.
822 -  * @perms:  User-specified permissions.
823 -  * @irqs:   User-specified number of interrupts.
824 -  */
825 - static void init_context(struct ctx_info *ctxi, struct cxlflash_cfg *cfg,
826 -                          void *ctx, int ctxid, struct file *file, u32 perms,
827 -                          u64 irqs)
828 - {
829 -     struct afu *afu = cfg->afu;
830 - 
831 -     ctxi->rht_perms = perms;
832 -     ctxi->ctrl_map = &afu->afu_map->ctrls[ctxid].ctrl;
833 -     ctxi->ctxid = ENCODE_CTXID(ctxi, ctxid);
834 -     ctxi->irqs = irqs;
835 -     ctxi->pid = task_tgid_nr(current); /* tgid = pid */
836 -     ctxi->ctx = ctx;
837 -     ctxi->cfg = cfg;
838 -     ctxi->file = file;
839 -     ctxi->initialized = true;
840 -     mutex_init(&ctxi->mutex);
841 -     kref_init(&ctxi->kref);
842 -     INIT_LIST_HEAD(&ctxi->luns);
843 -     INIT_LIST_HEAD(&ctxi->list); /* initialize for list_empty() */
844 - }
845 - 
846 - /**
847 -  * remove_context() - context kref release handler
848 -  * @kref:  Kernel reference associated with context to be removed.
849 -  *
850 -  * When a context no longer has any references it can safely be removed
851 -  * from global access and destroyed. Note that it is assumed the thread
852 -  * relinquishing access to the context holds its mutex.
853 -  */
854 - static void remove_context(struct kref *kref)
855 - {
856 -     struct ctx_info *ctxi = container_of(kref, struct ctx_info, kref);
857 -     struct cxlflash_cfg *cfg = ctxi->cfg;
858 -     u64 ctxid = DECODE_CTXID(ctxi->ctxid);
859 - 
860 -     /* Remove context from table/error list */
861 -     WARN_ON(!mutex_is_locked(&ctxi->mutex));
862 -     ctxi->unavail = true;
863 -     mutex_unlock(&ctxi->mutex);
864 -     mutex_lock(&cfg->ctx_tbl_list_mutex);
865 -     mutex_lock(&ctxi->mutex);
866 - 
867 -     if (!list_empty(&ctxi->list))
868 -         list_del(&ctxi->list);
869 -     cfg->ctx_tbl[ctxid] = NULL;
870 -     mutex_unlock(&cfg->ctx_tbl_list_mutex);
871 -     mutex_unlock(&ctxi->mutex);
872 - 
873 -     /* Context now completely uncoupled/unreachable */
874 -     destroy_context(cfg, ctxi);
875 - }
876 - 
877 - /**
878 -  * _cxlflash_disk_detach() - detaches a LUN from a context
879 -  * @sdev:  SCSI device associated with LUN.
880 - * @ctxi: Context owning resources. 881 - * @detach: Detach ioctl data structure. 882 - * 883 - * As part of the detach, all per-context resources associated with the LUN 884 - * are cleaned up. When detaching the last LUN for a context, the context 885 - * itself is cleaned up and released. 886 - * 887 - * Return: 0 on success, -errno on failure 888 - */ 889 - static int _cxlflash_disk_detach(struct scsi_device *sdev, 890 - struct ctx_info *ctxi, 891 - struct dk_cxlflash_detach *detach) 892 - { 893 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 894 - struct device *dev = &cfg->dev->dev; 895 - struct llun_info *lli = sdev->hostdata; 896 - struct lun_access *lun_access, *t; 897 - struct dk_cxlflash_release rel; 898 - bool put_ctx = false; 899 - 900 - int i; 901 - int rc = 0; 902 - u64 ctxid = DECODE_CTXID(detach->context_id), 903 - rctxid = detach->context_id; 904 - 905 - dev_dbg(dev, "%s: ctxid=%llu\n", __func__, ctxid); 906 - 907 - if (!ctxi) { 908 - ctxi = get_context(cfg, rctxid, lli, CTX_CTRL_ERR_FALLBACK); 909 - if (unlikely(!ctxi)) { 910 - dev_dbg(dev, "%s: Bad context ctxid=%llu\n", 911 - __func__, ctxid); 912 - rc = -EINVAL; 913 - goto out; 914 - } 915 - 916 - put_ctx = true; 917 - } 918 - 919 - /* Cleanup outstanding resources tied to this LUN */ 920 - if (ctxi->rht_out) { 921 - marshal_det_to_rele(detach, &rel); 922 - for (i = 0; i < MAX_RHT_PER_CONTEXT; i++) { 923 - if (ctxi->rht_lun[i] == lli) { 924 - rel.rsrc_handle = i; 925 - _cxlflash_disk_release(sdev, ctxi, &rel); 926 - } 927 - 928 - /* No need to loop further if we're done */ 929 - if (ctxi->rht_out == 0) 930 - break; 931 - } 932 - } 933 - 934 - /* Take our LUN out of context, free the node */ 935 - list_for_each_entry_safe(lun_access, t, &ctxi->luns, list) 936 - if (lun_access->lli == lli) { 937 - list_del(&lun_access->list); 938 - kfree(lun_access); 939 - lun_access = NULL; 940 - break; 941 - } 942 - 943 - /* 944 - * Release the context reference and the sdev reference that 945 - * 
bound this LUN to the context. 946 - */ 947 - if (kref_put(&ctxi->kref, remove_context)) 948 - put_ctx = false; 949 - scsi_device_put(sdev); 950 - out: 951 - if (put_ctx) 952 - put_context(ctxi); 953 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 954 - return rc; 955 - } 956 - 957 - static int cxlflash_disk_detach(struct scsi_device *sdev, void *detach) 958 - { 959 - return _cxlflash_disk_detach(sdev, NULL, detach); 960 - } 961 - 962 - /** 963 - * cxlflash_cxl_release() - release handler for adapter file descriptor 964 - * @inode: File-system inode associated with fd. 965 - * @file: File installed with adapter file descriptor. 966 - * 967 - * This routine is the release handler for the fops registered with 968 - * the CXL services on an initial attach for a context. It is called 969 - * when a close (explicitly by the user or as part of a process tear 970 - * down) is performed on the adapter file descriptor returned to the 971 - * user. The user should be aware that explicitly performing a close 972 - * is considered catastrophic and subsequent usage of the superpipe API 973 - * with previously saved off tokens will fail. 974 - * 975 - * This routine derives the context reference and calls detach for 976 - * each LUN associated with the context. The final detach operation 977 - * causes the context itself to be freed. With the exception of when the 978 - * CXL process element (context id) lookup fails (a case that should 979 - * theoretically never occur), every call into this routine results 980 - * in a complete freeing of a context. 981 - * 982 - * Detaching the LUN is typically an ioctl() operation and the underlying 983 - * code assumes that ioctl_rwsem has been acquired as a reader. To support 984 - * that design point, the semaphore is acquired and released around detach.
985 - * 986 - * Return: 0 on success 987 - */ 988 - static int cxlflash_cxl_release(struct inode *inode, struct file *file) 989 - { 990 - struct cxlflash_cfg *cfg = container_of(file->f_op, struct cxlflash_cfg, 991 - cxl_fops); 992 - void *ctx = cfg->ops->fops_get_context(file); 993 - struct device *dev = &cfg->dev->dev; 994 - struct ctx_info *ctxi = NULL; 995 - struct dk_cxlflash_detach detach = { { 0 }, 0 }; 996 - struct lun_access *lun_access, *t; 997 - enum ctx_ctrl ctrl = CTX_CTRL_ERR_FALLBACK | CTX_CTRL_FILE; 998 - int ctxid; 999 - 1000 - ctxid = cfg->ops->process_element(ctx); 1001 - if (unlikely(ctxid < 0)) { 1002 - dev_err(dev, "%s: Context %p was closed ctxid=%d\n", 1003 - __func__, ctx, ctxid); 1004 - goto out; 1005 - } 1006 - 1007 - ctxi = get_context(cfg, ctxid, file, ctrl); 1008 - if (unlikely(!ctxi)) { 1009 - ctxi = get_context(cfg, ctxid, file, ctrl | CTX_CTRL_CLONE); 1010 - if (!ctxi) { 1011 - dev_dbg(dev, "%s: ctxid=%d already free\n", 1012 - __func__, ctxid); 1013 - goto out_release; 1014 - } 1015 - 1016 - dev_dbg(dev, "%s: Another process owns ctxid=%d\n", 1017 - __func__, ctxid); 1018 - put_context(ctxi); 1019 - goto out; 1020 - } 1021 - 1022 - dev_dbg(dev, "%s: close for ctxid=%d\n", __func__, ctxid); 1023 - 1024 - down_read(&cfg->ioctl_rwsem); 1025 - detach.context_id = ctxi->ctxid; 1026 - list_for_each_entry_safe(lun_access, t, &ctxi->luns, list) 1027 - _cxlflash_disk_detach(lun_access->sdev, ctxi, &detach); 1028 - up_read(&cfg->ioctl_rwsem); 1029 - out_release: 1030 - cfg->ops->fd_release(inode, file); 1031 - out: 1032 - dev_dbg(dev, "%s: returning\n", __func__); 1033 - return 0; 1034 - } 1035 - 1036 - /** 1037 - * unmap_context() - clears a previously established mapping 1038 - * @ctxi: Context owning the mapping. 1039 - * 1040 - * This routine is used to switch between the error notification page 1041 - * (dummy page of all 1's) and the real mapping (established by the CXL 1042 - * fault handler). 
1043 - */ 1044 - static void unmap_context(struct ctx_info *ctxi) 1045 - { 1046 - unmap_mapping_range(ctxi->file->f_mapping, 0, 0, 1); 1047 - } 1048 - 1049 - /** 1050 - * get_err_page() - obtains and allocates the error notification page 1051 - * @cfg: Internal structure associated with the host. 1052 - * 1053 - * Return: error notification page on success, NULL on failure 1054 - */ 1055 - static struct page *get_err_page(struct cxlflash_cfg *cfg) 1056 - { 1057 - struct page *err_page = global.err_page; 1058 - struct device *dev = &cfg->dev->dev; 1059 - 1060 - if (unlikely(!err_page)) { 1061 - err_page = alloc_page(GFP_KERNEL); 1062 - if (unlikely(!err_page)) { 1063 - dev_err(dev, "%s: Unable to allocate err_page\n", 1064 - __func__); 1065 - goto out; 1066 - } 1067 - 1068 - memset(page_address(err_page), -1, PAGE_SIZE); 1069 - 1070 - /* Serialize update w/ other threads to avoid a leak */ 1071 - mutex_lock(&global.mutex); 1072 - if (likely(!global.err_page)) 1073 - global.err_page = err_page; 1074 - else { 1075 - __free_page(err_page); 1076 - err_page = global.err_page; 1077 - } 1078 - mutex_unlock(&global.mutex); 1079 - } 1080 - 1081 - out: 1082 - dev_dbg(dev, "%s: returning err_page=%p\n", __func__, err_page); 1083 - return err_page; 1084 - } 1085 - 1086 - /** 1087 - * cxlflash_mmap_fault() - mmap fault handler for adapter file descriptor 1088 - * @vmf: VM fault associated with current fault. 1089 - * 1090 - * To support error notification via MMIO, faults are 'caught' by this routine 1091 - * that was inserted before passing back the adapter file descriptor on attach. 1092 - * When a fault occurs, this routine evaluates if error recovery is active and 1093 - * if so, installs the error page to 'notify' the user about the error state. 1094 - * During normal operation, the fault is simply handled by the original fault 1095 - * handler that was installed by CXL services as part of initializing the 1096 - * adapter file descriptor. 
The VMA's page protection bits are toggled to 1097 - * indicate cached/not-cached depending on the memory backing the fault. 1098 - * 1099 - * Return: 0 on success, VM_FAULT_SIGBUS on failure 1100 - */ 1101 - static vm_fault_t cxlflash_mmap_fault(struct vm_fault *vmf) 1102 - { 1103 - struct vm_area_struct *vma = vmf->vma; 1104 - struct file *file = vma->vm_file; 1105 - struct cxlflash_cfg *cfg = container_of(file->f_op, struct cxlflash_cfg, 1106 - cxl_fops); 1107 - void *ctx = cfg->ops->fops_get_context(file); 1108 - struct device *dev = &cfg->dev->dev; 1109 - struct ctx_info *ctxi = NULL; 1110 - struct page *err_page = NULL; 1111 - enum ctx_ctrl ctrl = CTX_CTRL_ERR_FALLBACK | CTX_CTRL_FILE; 1112 - vm_fault_t rc = 0; 1113 - int ctxid; 1114 - 1115 - ctxid = cfg->ops->process_element(ctx); 1116 - if (unlikely(ctxid < 0)) { 1117 - dev_err(dev, "%s: Context %p was closed ctxid=%d\n", 1118 - __func__, ctx, ctxid); 1119 - goto err; 1120 - } 1121 - 1122 - ctxi = get_context(cfg, ctxid, file, ctrl); 1123 - if (unlikely(!ctxi)) { 1124 - dev_dbg(dev, "%s: Bad context ctxid=%d\n", __func__, ctxid); 1125 - goto err; 1126 - } 1127 - 1128 - dev_dbg(dev, "%s: fault for context %d\n", __func__, ctxid); 1129 - 1130 - if (likely(!ctxi->err_recovery_active)) { 1131 - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1132 - rc = ctxi->cxl_mmap_vmops->fault(vmf); 1133 - } else { 1134 - dev_dbg(dev, "%s: err recovery active, use err_page\n", 1135 - __func__); 1136 - 1137 - err_page = get_err_page(cfg); 1138 - if (unlikely(!err_page)) { 1139 - dev_err(dev, "%s: Could not get err_page\n", __func__); 1140 - rc = VM_FAULT_RETRY; 1141 - goto out; 1142 - } 1143 - 1144 - get_page(err_page); 1145 - vmf->page = err_page; 1146 - vma->vm_page_prot = pgprot_cached(vma->vm_page_prot); 1147 - } 1148 - 1149 - out: 1150 - if (likely(ctxi)) 1151 - put_context(ctxi); 1152 - dev_dbg(dev, "%s: returning rc=%x\n", __func__, rc); 1153 - return rc; 1154 - 1155 - err: 1156 - rc = VM_FAULT_SIGBUS; 1157 - 
goto out; 1158 - } 1159 - 1160 - /* 1161 - * Local MMAP vmops to 'catch' faults 1162 - */ 1163 - static const struct vm_operations_struct cxlflash_mmap_vmops = { 1164 - .fault = cxlflash_mmap_fault, 1165 - }; 1166 - 1167 - /** 1168 - * cxlflash_cxl_mmap() - mmap handler for adapter file descriptor 1169 - * @file: File installed with adapter file descriptor. 1170 - * @vma: VM area associated with mapping. 1171 - * 1172 - * Installs local mmap vmops to 'catch' faults for error notification support. 1173 - * 1174 - * Return: 0 on success, -errno on failure 1175 - */ 1176 - static int cxlflash_cxl_mmap(struct file *file, struct vm_area_struct *vma) 1177 - { 1178 - struct cxlflash_cfg *cfg = container_of(file->f_op, struct cxlflash_cfg, 1179 - cxl_fops); 1180 - void *ctx = cfg->ops->fops_get_context(file); 1181 - struct device *dev = &cfg->dev->dev; 1182 - struct ctx_info *ctxi = NULL; 1183 - enum ctx_ctrl ctrl = CTX_CTRL_ERR_FALLBACK | CTX_CTRL_FILE; 1184 - int ctxid; 1185 - int rc = 0; 1186 - 1187 - ctxid = cfg->ops->process_element(ctx); 1188 - if (unlikely(ctxid < 0)) { 1189 - dev_err(dev, "%s: Context %p was closed ctxid=%d\n", 1190 - __func__, ctx, ctxid); 1191 - rc = -EIO; 1192 - goto out; 1193 - } 1194 - 1195 - ctxi = get_context(cfg, ctxid, file, ctrl); 1196 - if (unlikely(!ctxi)) { 1197 - dev_dbg(dev, "%s: Bad context ctxid=%d\n", __func__, ctxid); 1198 - rc = -EIO; 1199 - goto out; 1200 - } 1201 - 1202 - dev_dbg(dev, "%s: mmap for context %d\n", __func__, ctxid); 1203 - 1204 - rc = cfg->ops->fd_mmap(file, vma); 1205 - if (likely(!rc)) { 1206 - /* Insert ourself in the mmap fault handler path */ 1207 - ctxi->cxl_mmap_vmops = vma->vm_ops; 1208 - vma->vm_ops = &cxlflash_mmap_vmops; 1209 - } 1210 - 1211 - out: 1212 - if (likely(ctxi)) 1213 - put_context(ctxi); 1214 - return rc; 1215 - } 1216 - 1217 - const struct file_operations cxlflash_cxl_fops = { 1218 - .owner = THIS_MODULE, 1219 - .mmap = cxlflash_cxl_mmap, 1220 - .release = cxlflash_cxl_release, 1221 - }; 
1222 - 1223 - /** 1224 - * cxlflash_mark_contexts_error() - move contexts to error state and list 1225 - * @cfg: Internal structure associated with the host. 1226 - * 1227 - * A context is only moved over to the error list when there are no outstanding 1228 - * references to it. This ensures that a running operation has completed. 1229 - * 1230 - * Return: 0 on success, -errno on failure 1231 - */ 1232 - int cxlflash_mark_contexts_error(struct cxlflash_cfg *cfg) 1233 - { 1234 - int i, rc = 0; 1235 - struct ctx_info *ctxi = NULL; 1236 - 1237 - mutex_lock(&cfg->ctx_tbl_list_mutex); 1238 - 1239 - for (i = 0; i < MAX_CONTEXT; i++) { 1240 - ctxi = cfg->ctx_tbl[i]; 1241 - if (ctxi) { 1242 - mutex_lock(&ctxi->mutex); 1243 - cfg->ctx_tbl[i] = NULL; 1244 - list_add(&ctxi->list, &cfg->ctx_err_recovery); 1245 - ctxi->err_recovery_active = true; 1246 - ctxi->ctrl_map = NULL; 1247 - unmap_context(ctxi); 1248 - mutex_unlock(&ctxi->mutex); 1249 - } 1250 - } 1251 - 1252 - mutex_unlock(&cfg->ctx_tbl_list_mutex); 1253 - return rc; 1254 - } 1255 - 1256 - /* 1257 - * Dummy NULL fops 1258 - */ 1259 - static const struct file_operations null_fops = { 1260 - .owner = THIS_MODULE, 1261 - }; 1262 - 1263 - /** 1264 - * check_state() - checks and responds to the current adapter state 1265 - * @cfg: Internal structure associated with the host. 1266 - * 1267 - * This routine can block and should only be used on process context. 1268 - * It assumes that the caller is an ioctl thread and holding the ioctl 1269 - * read semaphore. This is temporarily let up across the wait to allow 1270 - * for draining actively running ioctls. Also note that when waking up 1271 - * from waiting in reset, the state is unknown and must be checked again 1272 - * before proceeding. 
1273 - * 1274 - * Return: 0 on success, -errno on failure 1275 - */ 1276 - int check_state(struct cxlflash_cfg *cfg) 1277 - { 1278 - struct device *dev = &cfg->dev->dev; 1279 - int rc = 0; 1280 - 1281 - retry: 1282 - switch (cfg->state) { 1283 - case STATE_RESET: 1284 - dev_dbg(dev, "%s: Reset state, going to wait...\n", __func__); 1285 - up_read(&cfg->ioctl_rwsem); 1286 - rc = wait_event_interruptible(cfg->reset_waitq, 1287 - cfg->state != STATE_RESET); 1288 - down_read(&cfg->ioctl_rwsem); 1289 - if (unlikely(rc)) 1290 - break; 1291 - goto retry; 1292 - case STATE_FAILTERM: 1293 - dev_dbg(dev, "%s: Failed/Terminating\n", __func__); 1294 - rc = -ENODEV; 1295 - break; 1296 - default: 1297 - break; 1298 - } 1299 - 1300 - return rc; 1301 - } 1302 - 1303 - /** 1304 - * cxlflash_disk_attach() - attach a LUN to a context 1305 - * @sdev: SCSI device associated with LUN. 1306 - * @arg: Attach ioctl data structure. 1307 - * 1308 - * Creates a context and attaches LUN to it. A LUN can only be attached 1309 - * one time to a context (subsequent attaches for the same context/LUN pair 1310 - * are not supported). Additional LUNs can be attached to a context by 1311 - * specifying the 'reuse' flag defined in the cxlflash_ioctl.h header. 
1312 - * 1313 - * Return: 0 on success, -errno on failure 1314 - */ 1315 - static int cxlflash_disk_attach(struct scsi_device *sdev, void *arg) 1316 - { 1317 - struct dk_cxlflash_attach *attach = arg; 1318 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1319 - struct device *dev = &cfg->dev->dev; 1320 - struct afu *afu = cfg->afu; 1321 - struct llun_info *lli = sdev->hostdata; 1322 - struct glun_info *gli = lli->parent; 1323 - struct ctx_info *ctxi = NULL; 1324 - struct lun_access *lun_access = NULL; 1325 - int rc = 0; 1326 - u32 perms; 1327 - int ctxid = -1; 1328 - u64 irqs = attach->num_interrupts; 1329 - u64 flags = 0UL; 1330 - u64 rctxid = 0UL; 1331 - struct file *file = NULL; 1332 - 1333 - void *ctx = NULL; 1334 - 1335 - int fd = -1; 1336 - 1337 - if (irqs > 4) { 1338 - dev_dbg(dev, "%s: Cannot support this many interrupts %llu\n", 1339 - __func__, irqs); 1340 - rc = -EINVAL; 1341 - goto out; 1342 - } 1343 - 1344 - if (gli->max_lba == 0) { 1345 - dev_dbg(dev, "%s: No capacity info for LUN=%016llx\n", 1346 - __func__, lli->lun_id[sdev->channel]); 1347 - rc = read_cap16(sdev, lli); 1348 - if (rc) { 1349 - dev_err(dev, "%s: Invalid device rc=%d\n", 1350 - __func__, rc); 1351 - rc = -ENODEV; 1352 - goto out; 1353 - } 1354 - dev_dbg(dev, "%s: LBA = %016llx\n", __func__, gli->max_lba); 1355 - dev_dbg(dev, "%s: BLK_LEN = %08x\n", __func__, gli->blk_len); 1356 - } 1357 - 1358 - if (attach->hdr.flags & DK_CXLFLASH_ATTACH_REUSE_CONTEXT) { 1359 - rctxid = attach->context_id; 1360 - ctxi = get_context(cfg, rctxid, NULL, 0); 1361 - if (!ctxi) { 1362 - dev_dbg(dev, "%s: Bad context rctxid=%016llx\n", 1363 - __func__, rctxid); 1364 - rc = -EINVAL; 1365 - goto out; 1366 - } 1367 - 1368 - list_for_each_entry(lun_access, &ctxi->luns, list) 1369 - if (lun_access->lli == lli) { 1370 - dev_dbg(dev, "%s: Already attached\n", 1371 - __func__); 1372 - rc = -EINVAL; 1373 - goto out; 1374 - } 1375 - } 1376 - 1377 - rc = scsi_device_get(sdev); 1378 - if (unlikely(rc)) { 1379 - 
dev_err(dev, "%s: Unable to get sdev reference\n", __func__); 1380 - goto out; 1381 - } 1382 - 1383 - lun_access = kzalloc(sizeof(*lun_access), GFP_KERNEL); 1384 - if (unlikely(!lun_access)) { 1385 - dev_err(dev, "%s: Unable to allocate lun_access\n", __func__); 1386 - rc = -ENOMEM; 1387 - goto err; 1388 - } 1389 - 1390 - lun_access->lli = lli; 1391 - lun_access->sdev = sdev; 1392 - 1393 - /* Non-NULL context indicates reuse (another context reference) */ 1394 - if (ctxi) { 1395 - dev_dbg(dev, "%s: Reusing context for LUN rctxid=%016llx\n", 1396 - __func__, rctxid); 1397 - kref_get(&ctxi->kref); 1398 - list_add(&lun_access->list, &ctxi->luns); 1399 - goto out_attach; 1400 - } 1401 - 1402 - ctxi = create_context(cfg); 1403 - if (unlikely(!ctxi)) { 1404 - dev_err(dev, "%s: Failed to create context ctxid=%d\n", 1405 - __func__, ctxid); 1406 - rc = -ENOMEM; 1407 - goto err; 1408 - } 1409 - 1410 - ctx = cfg->ops->dev_context_init(cfg->dev, cfg->afu_cookie); 1411 - if (IS_ERR_OR_NULL(ctx)) { 1412 - dev_err(dev, "%s: Could not initialize context %p\n", 1413 - __func__, ctx); 1414 - rc = -ENODEV; 1415 - goto err; 1416 - } 1417 - 1418 - rc = cfg->ops->start_work(ctx, irqs); 1419 - if (unlikely(rc)) { 1420 - dev_dbg(dev, "%s: Could not start context rc=%d\n", 1421 - __func__, rc); 1422 - goto err; 1423 - } 1424 - 1425 - ctxid = cfg->ops->process_element(ctx); 1426 - if (unlikely((ctxid >= MAX_CONTEXT) || (ctxid < 0))) { 1427 - dev_err(dev, "%s: ctxid=%d invalid\n", __func__, ctxid); 1428 - rc = -EPERM; 1429 - goto err; 1430 - } 1431 - 1432 - file = cfg->ops->get_fd(ctx, &cfg->cxl_fops, &fd); 1433 - if (unlikely(fd < 0)) { 1434 - rc = -ENODEV; 1435 - dev_err(dev, "%s: Could not get file descriptor\n", __func__); 1436 - goto err; 1437 - } 1438 - 1439 - /* Translate read/write O_* flags from fcntl.h to AFU permission bits */ 1440 - perms = SISL_RHT_PERM(attach->hdr.flags + 1); 1441 - 1442 - /* Context mutex is locked upon return */ 1443 - init_context(ctxi, cfg, ctx, ctxid, 
file, perms, irqs); 1444 - 1445 - rc = afu_attach(cfg, ctxi); 1446 - if (unlikely(rc)) { 1447 - dev_err(dev, "%s: Could not attach AFU rc %d\n", __func__, rc); 1448 - goto err; 1449 - } 1450 - 1451 - /* 1452 - * No error paths after this point. Once the fd is installed it's 1453 - * visible to user space and can't be undone safely on this thread. 1454 - * There is no need to worry about a deadlock here because no one 1455 - * knows about us yet; we can be the only one holding our mutex. 1456 - */ 1457 - list_add(&lun_access->list, &ctxi->luns); 1458 - mutex_lock(&cfg->ctx_tbl_list_mutex); 1459 - mutex_lock(&ctxi->mutex); 1460 - cfg->ctx_tbl[ctxid] = ctxi; 1461 - mutex_unlock(&cfg->ctx_tbl_list_mutex); 1462 - fd_install(fd, file); 1463 - 1464 - out_attach: 1465 - if (fd != -1) 1466 - flags |= DK_CXLFLASH_APP_CLOSE_ADAP_FD; 1467 - if (afu_is_sq_cmd_mode(afu)) 1468 - flags |= DK_CXLFLASH_CONTEXT_SQ_CMD_MODE; 1469 - 1470 - attach->hdr.return_flags = flags; 1471 - attach->context_id = ctxi->ctxid; 1472 - attach->block_size = gli->blk_len; 1473 - attach->mmio_size = sizeof(afu->afu_map->hosts[0].harea); 1474 - attach->last_lba = gli->max_lba; 1475 - attach->max_xfer = sdev->host->max_sectors * MAX_SECTOR_UNIT; 1476 - attach->max_xfer /= gli->blk_len; 1477 - 1478 - out: 1479 - attach->adap_fd = fd; 1480 - 1481 - if (ctxi) 1482 - put_context(ctxi); 1483 - 1484 - dev_dbg(dev, "%s: returning ctxid=%d fd=%d bs=%lld rc=%d llba=%lld\n", 1485 - __func__, ctxid, fd, attach->block_size, rc, attach->last_lba); 1486 - return rc; 1487 - 1488 - err: 1489 - /* Cleanup CXL context; okay to 'stop' even if it was not started */ 1490 - if (!IS_ERR_OR_NULL(ctx)) { 1491 - cfg->ops->stop_context(ctx); 1492 - cfg->ops->release_context(ctx); 1493 - ctx = NULL; 1494 - } 1495 - 1496 - /* 1497 - * Here, we're overriding the fops with a dummy all-NULL fops because 1498 - * fput() calls the release fop, which will cause us to mistakenly 1499 - * call into the CXL code. 
Rather than try to add yet more complexity 1500 - * to that routine (cxlflash_cxl_release) we should try to fix the 1501 - * issue here. 1502 - */ 1503 - if (fd > 0) { 1504 - file->f_op = &null_fops; 1505 - fput(file); 1506 - put_unused_fd(fd); 1507 - fd = -1; 1508 - file = NULL; 1509 - } 1510 - 1511 - /* Cleanup our context */ 1512 - if (ctxi) { 1513 - destroy_context(cfg, ctxi); 1514 - ctxi = NULL; 1515 - } 1516 - 1517 - kfree(lun_access); 1518 - scsi_device_put(sdev); 1519 - goto out; 1520 - } 1521 - 1522 - /** 1523 - * recover_context() - recovers a context in error 1524 - * @cfg: Internal structure associated with the host. 1525 - * @ctxi: Context to release. 1526 - * @adap_fd: Adapter file descriptor associated with new/recovered context. 1527 - * 1528 - * Re-establishes the state for a context-in-error. 1529 - * 1530 - * Return: 0 on success, -errno on failure 1531 - */ 1532 - static int recover_context(struct cxlflash_cfg *cfg, 1533 - struct ctx_info *ctxi, 1534 - int *adap_fd) 1535 - { 1536 - struct device *dev = &cfg->dev->dev; 1537 - int rc = 0; 1538 - int fd = -1; 1539 - int ctxid = -1; 1540 - struct file *file; 1541 - void *ctx; 1542 - struct afu *afu = cfg->afu; 1543 - 1544 - ctx = cfg->ops->dev_context_init(cfg->dev, cfg->afu_cookie); 1545 - if (IS_ERR_OR_NULL(ctx)) { 1546 - dev_err(dev, "%s: Could not initialize context %p\n", 1547 - __func__, ctx); 1548 - rc = -ENODEV; 1549 - goto out; 1550 - } 1551 - 1552 - rc = cfg->ops->start_work(ctx, ctxi->irqs); 1553 - if (unlikely(rc)) { 1554 - dev_dbg(dev, "%s: Could not start context rc=%d\n", 1555 - __func__, rc); 1556 - goto err1; 1557 - } 1558 - 1559 - ctxid = cfg->ops->process_element(ctx); 1560 - if (unlikely((ctxid >= MAX_CONTEXT) || (ctxid < 0))) { 1561 - dev_err(dev, "%s: ctxid=%d invalid\n", __func__, ctxid); 1562 - rc = -EPERM; 1563 - goto err2; 1564 - } 1565 - 1566 - file = cfg->ops->get_fd(ctx, &cfg->cxl_fops, &fd); 1567 - if (unlikely(fd < 0)) { 1568 - rc = -ENODEV; 1569 - dev_err(dev, "%s: 
Could not get file descriptor\n", __func__); 1570 - goto err2; 1571 - } 1572 - 1573 - /* Update with new MMIO area based on updated context id */ 1574 - ctxi->ctrl_map = &afu->afu_map->ctrls[ctxid].ctrl; 1575 - 1576 - rc = afu_attach(cfg, ctxi); 1577 - if (rc) { 1578 - dev_err(dev, "%s: Could not attach AFU rc %d\n", __func__, rc); 1579 - goto err3; 1580 - } 1581 - 1582 - /* 1583 - * No error paths after this point. Once the fd is installed it's 1584 - * visible to user space and can't be undone safely on this thread. 1585 - */ 1586 - ctxi->ctxid = ENCODE_CTXID(ctxi, ctxid); 1587 - ctxi->ctx = ctx; 1588 - ctxi->file = file; 1589 - 1590 - /* 1591 - * Put context back in table (note the reinit of the context list); 1592 - * we must first drop the context's mutex and then acquire it in 1593 - * order with the table/list mutex to avoid a deadlock - safe to do 1594 - * here because no one can find us at this moment in time. 1595 - */ 1596 - mutex_unlock(&ctxi->mutex); 1597 - mutex_lock(&cfg->ctx_tbl_list_mutex); 1598 - mutex_lock(&ctxi->mutex); 1599 - list_del_init(&ctxi->list); 1600 - cfg->ctx_tbl[ctxid] = ctxi; 1601 - mutex_unlock(&cfg->ctx_tbl_list_mutex); 1602 - fd_install(fd, file); 1603 - *adap_fd = fd; 1604 - out: 1605 - dev_dbg(dev, "%s: returning ctxid=%d fd=%d rc=%d\n", 1606 - __func__, ctxid, fd, rc); 1607 - return rc; 1608 - 1609 - err3: 1610 - fput(file); 1611 - put_unused_fd(fd); 1612 - err2: 1613 - cfg->ops->stop_context(ctx); 1614 - err1: 1615 - cfg->ops->release_context(ctx); 1616 - goto out; 1617 - } 1618 - 1619 - /** 1620 - * cxlflash_afu_recover() - initiates AFU recovery 1621 - * @sdev: SCSI device associated with LUN. 1622 - * @arg: Recover ioctl data structure. 1623 - * 1624 - * Only a single recovery is allowed at a time to avoid exhausting CXL 1625 - * resources (leading to recovery failure) in the event that we're up 1626 - * against the maximum number of contexts limit. 
For similar reasons, 1627 - * a context recovery is retried if there are multiple recoveries taking 1628 - * place at the same time and the failure was due to CXL services being 1629 - * unable to keep up. 1630 - * 1631 - * As this routine is called on ioctl context, it holds the ioctl r/w 1632 - * semaphore that is used to drain ioctls in recovery scenarios. The 1633 - * implementation to achieve the pacing described above (a local mutex) 1634 - * requires that the ioctl r/w semaphore be dropped and reacquired to 1635 - * avoid a 3-way deadlock when multiple process recoveries operate in 1636 - * parallel. 1637 - * 1638 - * Because a user can detect an error condition before the kernel, it is 1639 - * quite possible for this routine to act as the kernel's EEH detection 1640 - * source (MMIO read of mbox_r). Because of this, there is a window of 1641 - * time where an EEH might have been detected but not yet 'serviced' 1642 - * (callback invoked, causing the device to enter reset state). To avoid 1643 - * looping in this routine during that window, a 1 second sleep is in place 1644 - * between the time the MMIO failure is detected and the time a wait on the 1645 - * reset wait queue is attempted via check_state(). 
1646 - * 1647 - * Return: 0 on success, -errno on failure 1648 - */ 1649 - static int cxlflash_afu_recover(struct scsi_device *sdev, void *arg) 1650 - { 1651 - struct dk_cxlflash_recover_afu *recover = arg; 1652 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1653 - struct device *dev = &cfg->dev->dev; 1654 - struct llun_info *lli = sdev->hostdata; 1655 - struct afu *afu = cfg->afu; 1656 - struct ctx_info *ctxi = NULL; 1657 - struct mutex *mutex = &cfg->ctx_recovery_mutex; 1658 - struct hwq *hwq = get_hwq(afu, PRIMARY_HWQ); 1659 - u64 flags; 1660 - u64 ctxid = DECODE_CTXID(recover->context_id), 1661 - rctxid = recover->context_id; 1662 - long reg; 1663 - bool locked = true; 1664 - int lretry = 20; /* up to 2 seconds */ 1665 - int new_adap_fd = -1; 1666 - int rc = 0; 1667 - 1668 - atomic_inc(&cfg->recovery_threads); 1669 - up_read(&cfg->ioctl_rwsem); 1670 - rc = mutex_lock_interruptible(mutex); 1671 - down_read(&cfg->ioctl_rwsem); 1672 - if (rc) { 1673 - locked = false; 1674 - goto out; 1675 - } 1676 - 1677 - rc = check_state(cfg); 1678 - if (rc) { 1679 - dev_err(dev, "%s: Failed state rc=%d\n", __func__, rc); 1680 - rc = -ENODEV; 1681 - goto out; 1682 - } 1683 - 1684 - dev_dbg(dev, "%s: reason=%016llx rctxid=%016llx\n", 1685 - __func__, recover->reason, rctxid); 1686 - 1687 - retry: 1688 - /* Ensure that this process is attached to the context */ 1689 - ctxi = get_context(cfg, rctxid, lli, CTX_CTRL_ERR_FALLBACK); 1690 - if (unlikely(!ctxi)) { 1691 - dev_dbg(dev, "%s: Bad context ctxid=%llu\n", __func__, ctxid); 1692 - rc = -EINVAL; 1693 - goto out; 1694 - } 1695 - 1696 - if (ctxi->err_recovery_active) { 1697 - retry_recover: 1698 - rc = recover_context(cfg, ctxi, &new_adap_fd); 1699 - if (unlikely(rc)) { 1700 - dev_err(dev, "%s: Recovery failed ctxid=%llu rc=%d\n", 1701 - __func__, ctxid, rc); 1702 - if ((rc == -ENODEV) && 1703 - ((atomic_read(&cfg->recovery_threads) > 1) || 1704 - (lretry--))) { 1705 - dev_dbg(dev, "%s: Going to try again\n", 1706 - 
__func__); 1707 - mutex_unlock(mutex); 1708 - msleep(100); 1709 - rc = mutex_lock_interruptible(mutex); 1710 - if (rc) { 1711 - locked = false; 1712 - goto out; 1713 - } 1714 - goto retry_recover; 1715 - } 1716 - 1717 - goto out; 1718 - } 1719 - 1720 - ctxi->err_recovery_active = false; 1721 - 1722 - flags = DK_CXLFLASH_APP_CLOSE_ADAP_FD | 1723 - DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET; 1724 - if (afu_is_sq_cmd_mode(afu)) 1725 - flags |= DK_CXLFLASH_CONTEXT_SQ_CMD_MODE; 1726 - 1727 - recover->hdr.return_flags = flags; 1728 - recover->context_id = ctxi->ctxid; 1729 - recover->adap_fd = new_adap_fd; 1730 - recover->mmio_size = sizeof(afu->afu_map->hosts[0].harea); 1731 - goto out; 1732 - } 1733 - 1734 - /* Test if in error state */ 1735 - reg = readq_be(&hwq->ctrl_map->mbox_r); 1736 - if (reg == -1) { 1737 - dev_dbg(dev, "%s: MMIO fail, wait for recovery.\n", __func__); 1738 - 1739 - /* 1740 - * Before checking the state, put back the context obtained with 1741 - * get_context() as it is no longer needed and sleep for a short 1742 - * period of time (see prolog notes). 1743 - */ 1744 - put_context(ctxi); 1745 - ctxi = NULL; 1746 - ssleep(1); 1747 - rc = check_state(cfg); 1748 - if (unlikely(rc)) 1749 - goto out; 1750 - goto retry; 1751 - } 1752 - 1753 - dev_dbg(dev, "%s: MMIO working, no recovery required\n", __func__); 1754 - out: 1755 - if (likely(ctxi)) 1756 - put_context(ctxi); 1757 - if (locked) 1758 - mutex_unlock(mutex); 1759 - atomic_dec_if_positive(&cfg->recovery_threads); 1760 - return rc; 1761 - } 1762 - 1763 - /** 1764 - * process_sense() - evaluates and processes sense data 1765 - * @sdev: SCSI device associated with LUN. 1766 - * @verify: Verify ioctl data structure. 
1767 - * 1768 - * Return: 0 on success, -errno on failure 1769 - */ 1770 - static int process_sense(struct scsi_device *sdev, 1771 - struct dk_cxlflash_verify *verify) 1772 - { 1773 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1774 - struct device *dev = &cfg->dev->dev; 1775 - struct llun_info *lli = sdev->hostdata; 1776 - struct glun_info *gli = lli->parent; 1777 - u64 prev_lba = gli->max_lba; 1778 - struct scsi_sense_hdr sshdr = { 0 }; 1779 - int rc = 0; 1780 - 1781 - rc = scsi_normalize_sense((const u8 *)&verify->sense_data, 1782 - DK_CXLFLASH_VERIFY_SENSE_LEN, &sshdr); 1783 - if (!rc) { 1784 - dev_err(dev, "%s: Failed to normalize sense data\n", __func__); 1785 - rc = -EINVAL; 1786 - goto out; 1787 - } 1788 - 1789 - switch (sshdr.sense_key) { 1790 - case NO_SENSE: 1791 - case RECOVERED_ERROR: 1792 - case NOT_READY: 1793 - break; 1794 - case UNIT_ATTENTION: 1795 - switch (sshdr.asc) { 1796 - case 0x29: /* Power on Reset or Device Reset */ 1797 - fallthrough; 1798 - case 0x2A: /* Device settings/capacity changed */ 1799 - rc = read_cap16(sdev, lli); 1800 - if (rc) { 1801 - rc = -ENODEV; 1802 - break; 1803 - } 1804 - if (prev_lba != gli->max_lba) 1805 - dev_dbg(dev, "%s: Capacity changed old=%lld " 1806 - "new=%lld\n", __func__, prev_lba, 1807 - gli->max_lba); 1808 - break; 1809 - case 0x3F: /* Report LUNs changed, Rescan. */ 1810 - scsi_scan_host(cfg->host); 1811 - break; 1812 - default: 1813 - rc = -EIO; 1814 - break; 1815 - } 1816 - break; 1817 - default: 1818 - rc = -EIO; 1819 - break; 1820 - } 1821 - out: 1822 - dev_dbg(dev, "%s: sense_key %x asc %x ascq %x rc %d\n", __func__, 1823 - sshdr.sense_key, sshdr.asc, sshdr.ascq, rc); 1824 - return rc; 1825 - } 1826 - 1827 - /** 1828 - * cxlflash_disk_verify() - verifies a LUN is the same and handle size changes 1829 - * @sdev: SCSI device associated with LUN. 1830 - * @arg: Verify ioctl data structure. 
1831 - * 1832 - * Return: 0 on success, -errno on failure 1833 - */ 1834 - static int cxlflash_disk_verify(struct scsi_device *sdev, void *arg) 1835 - { 1836 - struct dk_cxlflash_verify *verify = arg; 1837 - int rc = 0; 1838 - struct ctx_info *ctxi = NULL; 1839 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1840 - struct device *dev = &cfg->dev->dev; 1841 - struct llun_info *lli = sdev->hostdata; 1842 - struct glun_info *gli = lli->parent; 1843 - struct sisl_rht_entry *rhte = NULL; 1844 - res_hndl_t rhndl = verify->rsrc_handle; 1845 - u64 ctxid = DECODE_CTXID(verify->context_id), 1846 - rctxid = verify->context_id; 1847 - u64 last_lba = 0; 1848 - 1849 - dev_dbg(dev, "%s: ctxid=%llu rhndl=%016llx, hint=%016llx, " 1850 - "flags=%016llx\n", __func__, ctxid, verify->rsrc_handle, 1851 - verify->hint, verify->hdr.flags); 1852 - 1853 - ctxi = get_context(cfg, rctxid, lli, 0); 1854 - if (unlikely(!ctxi)) { 1855 - dev_dbg(dev, "%s: Bad context ctxid=%llu\n", __func__, ctxid); 1856 - rc = -EINVAL; 1857 - goto out; 1858 - } 1859 - 1860 - rhte = get_rhte(ctxi, rhndl, lli); 1861 - if (unlikely(!rhte)) { 1862 - dev_dbg(dev, "%s: Bad resource handle rhndl=%d\n", 1863 - __func__, rhndl); 1864 - rc = -EINVAL; 1865 - goto out; 1866 - } 1867 - 1868 - /* 1869 - * Look at the hint/sense to see if it requires us to redrive 1870 - * inquiry (i.e. the Unit attention is due to the WWN changing). 1871 - */ 1872 - if (verify->hint & DK_CXLFLASH_VERIFY_HINT_SENSE) { 1873 - /* Can't hold mutex across process_sense/read_cap16, 1874 - * since we could have an intervening EEH event. 
1875 - */ 1876 - ctxi->unavail = true; 1877 - mutex_unlock(&ctxi->mutex); 1878 - rc = process_sense(sdev, verify); 1879 - if (unlikely(rc)) { 1880 - dev_err(dev, "%s: Failed to validate sense data (%d)\n", 1881 - __func__, rc); 1882 - mutex_lock(&ctxi->mutex); 1883 - ctxi->unavail = false; 1884 - goto out; 1885 - } 1886 - mutex_lock(&ctxi->mutex); 1887 - ctxi->unavail = false; 1888 - } 1889 - 1890 - switch (gli->mode) { 1891 - case MODE_PHYSICAL: 1892 - last_lba = gli->max_lba; 1893 - break; 1894 - case MODE_VIRTUAL: 1895 - /* Cast lxt_cnt to u64 for multiply to be treated as 64bit op */ 1896 - last_lba = ((u64)rhte->lxt_cnt * MC_CHUNK_SIZE * gli->blk_len); 1897 - last_lba /= CXLFLASH_BLOCK_SIZE; 1898 - last_lba--; 1899 - break; 1900 - default: 1901 - WARN(1, "Unsupported LUN mode!"); 1902 - } 1903 - 1904 - verify->last_lba = last_lba; 1905 - 1906 - out: 1907 - if (likely(ctxi)) 1908 - put_context(ctxi); 1909 - dev_dbg(dev, "%s: returning rc=%d llba=%llx\n", 1910 - __func__, rc, verify->last_lba); 1911 - return rc; 1912 - } 1913 - 1914 - /** 1915 - * decode_ioctl() - translates an encoded ioctl to an easily identifiable string 1916 - * @cmd: The ioctl command to decode. 1917 - * 1918 - * Return: A string identifying the decoded ioctl. 
1919 - */ 1920 - static char *decode_ioctl(unsigned int cmd) 1921 - { 1922 - switch (cmd) { 1923 - case DK_CXLFLASH_ATTACH: 1924 - return __stringify_1(DK_CXLFLASH_ATTACH); 1925 - case DK_CXLFLASH_USER_DIRECT: 1926 - return __stringify_1(DK_CXLFLASH_USER_DIRECT); 1927 - case DK_CXLFLASH_USER_VIRTUAL: 1928 - return __stringify_1(DK_CXLFLASH_USER_VIRTUAL); 1929 - case DK_CXLFLASH_VLUN_RESIZE: 1930 - return __stringify_1(DK_CXLFLASH_VLUN_RESIZE); 1931 - case DK_CXLFLASH_RELEASE: 1932 - return __stringify_1(DK_CXLFLASH_RELEASE); 1933 - case DK_CXLFLASH_DETACH: 1934 - return __stringify_1(DK_CXLFLASH_DETACH); 1935 - case DK_CXLFLASH_VERIFY: 1936 - return __stringify_1(DK_CXLFLASH_VERIFY); 1937 - case DK_CXLFLASH_VLUN_CLONE: 1938 - return __stringify_1(DK_CXLFLASH_VLUN_CLONE); 1939 - case DK_CXLFLASH_RECOVER_AFU: 1940 - return __stringify_1(DK_CXLFLASH_RECOVER_AFU); 1941 - case DK_CXLFLASH_MANAGE_LUN: 1942 - return __stringify_1(DK_CXLFLASH_MANAGE_LUN); 1943 - } 1944 - 1945 - return "UNKNOWN"; 1946 - } 1947 - 1948 - /** 1949 - * cxlflash_disk_direct_open() - opens a direct (physical) disk 1950 - * @sdev: SCSI device associated with LUN. 1951 - * @arg: UDirect ioctl data structure. 1952 - * 1953 - * On successful return, the user is informed of the resource handle 1954 - * to be used to identify the direct lun and the size (in blocks) of 1955 - * the direct lun in last LBA format. 
1956 - * 1957 - * Return: 0 on success, -errno on failure 1958 - */ 1959 - static int cxlflash_disk_direct_open(struct scsi_device *sdev, void *arg) 1960 - { 1961 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1962 - struct device *dev = &cfg->dev->dev; 1963 - struct afu *afu = cfg->afu; 1964 - struct llun_info *lli = sdev->hostdata; 1965 - struct glun_info *gli = lli->parent; 1966 - struct dk_cxlflash_release rel = { { 0 }, 0 }; 1967 - 1968 - struct dk_cxlflash_udirect *pphys = (struct dk_cxlflash_udirect *)arg; 1969 - 1970 - u64 ctxid = DECODE_CTXID(pphys->context_id), 1971 - rctxid = pphys->context_id; 1972 - u64 lun_size = 0; 1973 - u64 last_lba = 0; 1974 - u64 rsrc_handle = -1; 1975 - u32 port = CHAN2PORTMASK(sdev->channel); 1976 - 1977 - int rc = 0; 1978 - 1979 - struct ctx_info *ctxi = NULL; 1980 - struct sisl_rht_entry *rhte = NULL; 1981 - 1982 - dev_dbg(dev, "%s: ctxid=%llu ls=%llu\n", __func__, ctxid, lun_size); 1983 - 1984 - rc = cxlflash_lun_attach(gli, MODE_PHYSICAL, false); 1985 - if (unlikely(rc)) { 1986 - dev_dbg(dev, "%s: Failed attach to LUN (PHYSICAL)\n", __func__); 1987 - goto out; 1988 - } 1989 - 1990 - ctxi = get_context(cfg, rctxid, lli, 0); 1991 - if (unlikely(!ctxi)) { 1992 - dev_dbg(dev, "%s: Bad context ctxid=%llu\n", __func__, ctxid); 1993 - rc = -EINVAL; 1994 - goto err1; 1995 - } 1996 - 1997 - rhte = rhte_checkout(ctxi, lli); 1998 - if (unlikely(!rhte)) { 1999 - dev_dbg(dev, "%s: Too many opens ctxid=%lld\n", 2000 - __func__, ctxid); 2001 - rc = -EMFILE; /* too many opens */ 2002 - goto err1; 2003 - } 2004 - 2005 - rsrc_handle = (rhte - ctxi->rht_start); 2006 - 2007 - rht_format1(rhte, lli->lun_id[sdev->channel], ctxi->rht_perms, port); 2008 - 2009 - last_lba = gli->max_lba; 2010 - pphys->hdr.return_flags = 0; 2011 - pphys->last_lba = last_lba; 2012 - pphys->rsrc_handle = rsrc_handle; 2013 - 2014 - rc = cxlflash_afu_sync(afu, ctxid, rsrc_handle, AFU_LW_SYNC); 2015 - if (unlikely(rc)) { 2016 - dev_dbg(dev, "%s: AFU sync failed 
rc=%d\n", __func__, rc); 2017 - goto err2; 2018 - } 2019 - 2020 - out: 2021 - if (likely(ctxi)) 2022 - put_context(ctxi); 2023 - dev_dbg(dev, "%s: returning handle=%llu rc=%d llba=%llu\n", 2024 - __func__, rsrc_handle, rc, last_lba); 2025 - return rc; 2026 - 2027 - err2: 2028 - marshal_udir_to_rele(pphys, &rel); 2029 - _cxlflash_disk_release(sdev, ctxi, &rel); 2030 - goto out; 2031 - err1: 2032 - cxlflash_lun_detach(gli); 2033 - goto out; 2034 - } 2035 - 2036 - /** 2037 - * ioctl_common() - common IOCTL handler for driver 2038 - * @sdev: SCSI device associated with LUN. 2039 - * @cmd: IOCTL command. 2040 - * 2041 - * Handles common fencing operations that are valid for multiple ioctls. Always 2042 - * allow through ioctls that are cleanup oriented in nature, even when operating 2043 - * in a failed/terminating state. 2044 - * 2045 - * Return: 0 on success, -errno on failure 2046 - */ 2047 - static int ioctl_common(struct scsi_device *sdev, unsigned int cmd) 2048 - { 2049 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 2050 - struct device *dev = &cfg->dev->dev; 2051 - struct llun_info *lli = sdev->hostdata; 2052 - int rc = 0; 2053 - 2054 - if (unlikely(!lli)) { 2055 - dev_dbg(dev, "%s: Unknown LUN\n", __func__); 2056 - rc = -EINVAL; 2057 - goto out; 2058 - } 2059 - 2060 - rc = check_state(cfg); 2061 - if (unlikely(rc) && (cfg->state == STATE_FAILTERM)) { 2062 - switch (cmd) { 2063 - case DK_CXLFLASH_VLUN_RESIZE: 2064 - case DK_CXLFLASH_RELEASE: 2065 - case DK_CXLFLASH_DETACH: 2066 - dev_dbg(dev, "%s: Command override rc=%d\n", 2067 - __func__, rc); 2068 - rc = 0; 2069 - break; 2070 - } 2071 - } 2072 - out: 2073 - return rc; 2074 - } 2075 - 2076 - /** 2077 - * cxlflash_ioctl() - IOCTL handler for driver 2078 - * @sdev: SCSI device associated with LUN. 2079 - * @cmd: IOCTL command. 2080 - * @arg: Userspace ioctl data structure. 2081 - * 2082 - * A read/write semaphore is used to implement a 'drain' of currently 2083 - * running ioctls. 
The read semaphore is taken at the beginning of each 2084 - * ioctl thread and released upon concluding execution. Additionally the 2085 - * semaphore should be released and then reacquired in any ioctl execution 2086 - * path which will wait for an event to occur that is outside the scope of 2087 - * the ioctl (i.e. an adapter reset). To drain the ioctls currently running, 2088 - * a thread simply needs to acquire the write semaphore. 2089 - * 2090 - * Return: 0 on success, -errno on failure 2091 - */ 2092 - int cxlflash_ioctl(struct scsi_device *sdev, unsigned int cmd, void __user *arg) 2093 - { 2094 - typedef int (*sioctl) (struct scsi_device *, void *); 2095 - 2096 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 2097 - struct device *dev = &cfg->dev->dev; 2098 - struct afu *afu = cfg->afu; 2099 - struct dk_cxlflash_hdr *hdr; 2100 - char buf[sizeof(union cxlflash_ioctls)]; 2101 - size_t size = 0; 2102 - bool known_ioctl = false; 2103 - int idx; 2104 - int rc = 0; 2105 - struct Scsi_Host *shost = sdev->host; 2106 - sioctl do_ioctl = NULL; 2107 - 2108 - static const struct { 2109 - size_t size; 2110 - sioctl ioctl; 2111 - } ioctl_tbl[] = { /* NOTE: order matters here */ 2112 - {sizeof(struct dk_cxlflash_attach), cxlflash_disk_attach}, 2113 - {sizeof(struct dk_cxlflash_udirect), cxlflash_disk_direct_open}, 2114 - {sizeof(struct dk_cxlflash_release), cxlflash_disk_release}, 2115 - {sizeof(struct dk_cxlflash_detach), cxlflash_disk_detach}, 2116 - {sizeof(struct dk_cxlflash_verify), cxlflash_disk_verify}, 2117 - {sizeof(struct dk_cxlflash_recover_afu), cxlflash_afu_recover}, 2118 - {sizeof(struct dk_cxlflash_manage_lun), cxlflash_manage_lun}, 2119 - {sizeof(struct dk_cxlflash_uvirtual), cxlflash_disk_virtual_open}, 2120 - {sizeof(struct dk_cxlflash_resize), cxlflash_vlun_resize}, 2121 - {sizeof(struct dk_cxlflash_clone), cxlflash_disk_clone}, 2122 - }; 2123 - 2124 - /* Hold read semaphore so we can drain if needed */ 2125 - down_read(&cfg->ioctl_rwsem); 2126 - 
2127 - /* Restrict command set to physical support only for internal LUN */ 2128 - if (afu->internal_lun) 2129 - switch (cmd) { 2130 - case DK_CXLFLASH_RELEASE: 2131 - case DK_CXLFLASH_USER_VIRTUAL: 2132 - case DK_CXLFLASH_VLUN_RESIZE: 2133 - case DK_CXLFLASH_VLUN_CLONE: 2134 - dev_dbg(dev, "%s: %s not supported for lun_mode=%d\n", 2135 - __func__, decode_ioctl(cmd), afu->internal_lun); 2136 - rc = -EINVAL; 2137 - goto cxlflash_ioctl_exit; 2138 - } 2139 - 2140 - switch (cmd) { 2141 - case DK_CXLFLASH_ATTACH: 2142 - case DK_CXLFLASH_USER_DIRECT: 2143 - case DK_CXLFLASH_RELEASE: 2144 - case DK_CXLFLASH_DETACH: 2145 - case DK_CXLFLASH_VERIFY: 2146 - case DK_CXLFLASH_RECOVER_AFU: 2147 - case DK_CXLFLASH_USER_VIRTUAL: 2148 - case DK_CXLFLASH_VLUN_RESIZE: 2149 - case DK_CXLFLASH_VLUN_CLONE: 2150 - dev_dbg(dev, "%s: %s (%08X) on dev(%d/%d/%d/%llu)\n", 2151 - __func__, decode_ioctl(cmd), cmd, shost->host_no, 2152 - sdev->channel, sdev->id, sdev->lun); 2153 - rc = ioctl_common(sdev, cmd); 2154 - if (unlikely(rc)) 2155 - goto cxlflash_ioctl_exit; 2156 - 2157 - fallthrough; 2158 - 2159 - case DK_CXLFLASH_MANAGE_LUN: 2160 - known_ioctl = true; 2161 - idx = _IOC_NR(cmd) - _IOC_NR(DK_CXLFLASH_ATTACH); 2162 - size = ioctl_tbl[idx].size; 2163 - do_ioctl = ioctl_tbl[idx].ioctl; 2164 - 2165 - if (likely(do_ioctl)) 2166 - break; 2167 - 2168 - fallthrough; 2169 - default: 2170 - rc = -EINVAL; 2171 - goto cxlflash_ioctl_exit; 2172 - } 2173 - 2174 - if (unlikely(copy_from_user(&buf, arg, size))) { 2175 - dev_err(dev, "%s: copy_from_user() fail size=%lu cmd=%u (%s) arg=%p\n", 2176 - __func__, size, cmd, decode_ioctl(cmd), arg); 2177 - rc = -EFAULT; 2178 - goto cxlflash_ioctl_exit; 2179 - } 2180 - 2181 - hdr = (struct dk_cxlflash_hdr *)&buf; 2182 - if (hdr->version != DK_CXLFLASH_VERSION_0) { 2183 - dev_dbg(dev, "%s: Version %u not supported for %s\n", 2184 - __func__, hdr->version, decode_ioctl(cmd)); 2185 - rc = -EINVAL; 2186 - goto cxlflash_ioctl_exit; 2187 - } 2188 - 2189 - if 
(hdr->rsvd[0] || hdr->rsvd[1] || hdr->rsvd[2] || hdr->return_flags) { 2190 - dev_dbg(dev, "%s: Reserved/rflags populated\n", __func__); 2191 - rc = -EINVAL; 2192 - goto cxlflash_ioctl_exit; 2193 - } 2194 - 2195 - rc = do_ioctl(sdev, (void *)&buf); 2196 - if (likely(!rc)) 2197 - if (unlikely(copy_to_user(arg, &buf, size))) { 2198 - dev_err(dev, "%s: copy_to_user() fail size=%lu cmd=%u (%s) arg=%p\n", 2199 - __func__, size, cmd, decode_ioctl(cmd), arg); 2200 - rc = -EFAULT; 2201 - } 2202 - 2203 - /* fall through to exit */ 2204 - 2205 - cxlflash_ioctl_exit: 2206 - up_read(&cfg->ioctl_rwsem); 2207 - if (unlikely(rc && known_ioctl)) 2208 - dev_err(dev, "%s: ioctl %s (%08X) on dev(%d/%d/%d/%llu) " 2209 - "returned rc %d\n", __func__, 2210 - decode_ioctl(cmd), cmd, shost->host_no, 2211 - sdev->channel, sdev->id, sdev->lun, rc); 2212 - else 2213 - dev_dbg(dev, "%s: ioctl %s (%08X) on dev(%d/%d/%d/%llu) " 2214 - "returned rc %d\n", __func__, decode_ioctl(cmd), 2215 - cmd, shost->host_no, sdev->channel, sdev->id, 2216 - sdev->lun, rc); 2217 - return rc; 2218 - }
-150
drivers/scsi/cxlflash/superpipe.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 6 - * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2015 IBM Corporation 9 - */ 10 - 11 - #ifndef _CXLFLASH_SUPERPIPE_H 12 - #define _CXLFLASH_SUPERPIPE_H 13 - 14 - extern struct cxlflash_global global; 15 - 16 - /* 17 - * Terminology: use afu (and not adapter) to refer to the HW. 18 - * Adapter is the entire slot and includes PSL out of which 19 - * only the AFU is visible to user space. 20 - */ 21 - 22 - /* Chunk size parms: note sislite minimum chunk size is 23 - * 0x10000 LBAs corresponding to a NMASK or 16. 24 - */ 25 - #define MC_CHUNK_SIZE (1 << MC_RHT_NMASK) /* in LBAs */ 26 - 27 - #define CMD_TIMEOUT 30 /* 30 secs */ 28 - #define CMD_RETRIES 5 /* 5 retries for scsi_execute */ 29 - 30 - #define MAX_SECTOR_UNIT 512 /* max_sector is in 512 byte multiples */ 31 - 32 - enum lun_mode { 33 - MODE_NONE = 0, 34 - MODE_VIRTUAL, 35 - MODE_PHYSICAL 36 - }; 37 - 38 - /* Global (entire driver, spans adapters) lun_info structure */ 39 - struct glun_info { 40 - u64 max_lba; /* from read cap(16) */ 41 - u32 blk_len; /* from read cap(16) */ 42 - enum lun_mode mode; /* NONE, VIRTUAL, PHYSICAL */ 43 - int users; /* Number of users w/ references to LUN */ 44 - 45 - u8 wwid[16]; 46 - 47 - struct mutex mutex; 48 - 49 - struct blka blka; 50 - struct list_head list; 51 - }; 52 - 53 - /* Local (per-adapter) lun_info structure */ 54 - struct llun_info { 55 - u64 lun_id[MAX_FC_PORTS]; /* from REPORT_LUNS */ 56 - u32 lun_index; /* Index in the LUN table */ 57 - u32 host_no; /* host_no from Scsi_host */ 58 - u32 port_sel; /* What port to use for this LUN */ 59 - bool in_table; /* Whether a LUN table entry was created */ 60 - 61 - u8 wwid[16]; /* Keep a duplicate copy here? 
*/ 62 - 63 - struct glun_info *parent; /* Pointer to entry in global LUN structure */ 64 - struct scsi_device *sdev; 65 - struct list_head list; 66 - }; 67 - 68 - struct lun_access { 69 - struct llun_info *lli; 70 - struct scsi_device *sdev; 71 - struct list_head list; 72 - }; 73 - 74 - enum ctx_ctrl { 75 - CTX_CTRL_CLONE = (1 << 1), 76 - CTX_CTRL_ERR = (1 << 2), 77 - CTX_CTRL_ERR_FALLBACK = (1 << 3), 78 - CTX_CTRL_NOPID = (1 << 4), 79 - CTX_CTRL_FILE = (1 << 5) 80 - }; 81 - 82 - #define ENCODE_CTXID(_ctx, _id) (((((u64)_ctx) & 0xFFFFFFFF0ULL) << 28) | _id) 83 - #define DECODE_CTXID(_val) (_val & 0xFFFFFFFF) 84 - 85 - struct ctx_info { 86 - struct sisl_ctrl_map __iomem *ctrl_map; /* initialized at startup */ 87 - struct sisl_rht_entry *rht_start; /* 1 page (req'd for alignment), 88 - * alloc/free on attach/detach 89 - */ 90 - u32 rht_out; /* Number of checked out RHT entries */ 91 - u32 rht_perms; /* User-defined permissions for RHT entries */ 92 - struct llun_info **rht_lun; /* Mapping of RHT entries to LUNs */ 93 - u8 *rht_needs_ws; /* User-desired write-same function per RHTE */ 94 - 95 - u64 ctxid; 96 - u64 irqs; /* Number of interrupts requested for context */ 97 - pid_t pid; 98 - bool initialized; 99 - bool unavail; 100 - bool err_recovery_active; 101 - struct mutex mutex; /* Context protection */ 102 - struct kref kref; 103 - void *ctx; 104 - struct cxlflash_cfg *cfg; 105 - struct list_head luns; /* LUNs attached to this context */ 106 - const struct vm_operations_struct *cxl_mmap_vmops; 107 - struct file *file; 108 - struct list_head list; /* Link contexts in error recovery */ 109 - }; 110 - 111 - struct cxlflash_global { 112 - struct mutex mutex; 113 - struct list_head gluns;/* list of glun_info structs */ 114 - struct page *err_page; /* One page of all 0xF for error notification */ 115 - }; 116 - 117 - int cxlflash_vlun_resize(struct scsi_device *sdev, void *resize); 118 - int _cxlflash_vlun_resize(struct scsi_device *sdev, struct ctx_info *ctxi, 119 - 
struct dk_cxlflash_resize *resize); 120 - 121 - int cxlflash_disk_release(struct scsi_device *sdev, 122 - void *release); 123 - int _cxlflash_disk_release(struct scsi_device *sdev, struct ctx_info *ctxi, 124 - struct dk_cxlflash_release *release); 125 - 126 - int cxlflash_disk_clone(struct scsi_device *sdev, void *arg); 127 - 128 - int cxlflash_disk_virtual_open(struct scsi_device *sdev, void *arg); 129 - 130 - int cxlflash_lun_attach(struct glun_info *gli, enum lun_mode mode, bool locked); 131 - void cxlflash_lun_detach(struct glun_info *gli); 132 - 133 - struct ctx_info *get_context(struct cxlflash_cfg *cfg, u64 rctxit, void *arg, 134 - enum ctx_ctrl ctrl); 135 - void put_context(struct ctx_info *ctxi); 136 - 137 - struct sisl_rht_entry *get_rhte(struct ctx_info *ctxi, res_hndl_t rhndl, 138 - struct llun_info *lli); 139 - 140 - struct sisl_rht_entry *rhte_checkout(struct ctx_info *ctxi, 141 - struct llun_info *lli); 142 - void rhte_checkin(struct ctx_info *ctxi, struct sisl_rht_entry *rhte); 143 - 144 - void cxlflash_ba_terminate(struct ba_lun *ba_lun); 145 - 146 - int cxlflash_manage_lun(struct scsi_device *sdev, void *manage); 147 - 148 - int check_state(struct cxlflash_cfg *cfg); 149 - 150 - #endif /* ifndef _CXLFLASH_SUPERPIPE_H */
-1336
drivers/scsi/cxlflash/vlun.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 6 - * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2015 IBM Corporation 9 - */ 10 - 11 - #include <linux/interrupt.h> 12 - #include <linux/pci.h> 13 - #include <linux/syscalls.h> 14 - #include <linux/unaligned.h> 15 - #include <asm/bitsperlong.h> 16 - 17 - #include <scsi/scsi_cmnd.h> 18 - #include <scsi/scsi_host.h> 19 - #include <uapi/scsi/cxlflash_ioctl.h> 20 - 21 - #include "sislite.h" 22 - #include "common.h" 23 - #include "vlun.h" 24 - #include "superpipe.h" 25 - 26 - /** 27 - * marshal_virt_to_resize() - translate uvirtual to resize structure 28 - * @virt: Source structure from which to translate/copy. 29 - * @resize: Destination structure for the translate/copy. 30 - */ 31 - static void marshal_virt_to_resize(struct dk_cxlflash_uvirtual *virt, 32 - struct dk_cxlflash_resize *resize) 33 - { 34 - resize->hdr = virt->hdr; 35 - resize->context_id = virt->context_id; 36 - resize->rsrc_handle = virt->rsrc_handle; 37 - resize->req_size = virt->lun_size; 38 - resize->last_lba = virt->last_lba; 39 - } 40 - 41 - /** 42 - * marshal_clone_to_rele() - translate clone to release structure 43 - * @clone: Source structure from which to translate/copy. 44 - * @release: Destination structure for the translate/copy. 45 - */ 46 - static void marshal_clone_to_rele(struct dk_cxlflash_clone *clone, 47 - struct dk_cxlflash_release *release) 48 - { 49 - release->hdr = clone->hdr; 50 - release->context_id = clone->context_id_dst; 51 - } 52 - 53 - /** 54 - * ba_init() - initializes a block allocator 55 - * @ba_lun: Block allocator to initialize. 
56 - * 57 - * Return: 0 on success, -errno on failure 58 - */ 59 - static int ba_init(struct ba_lun *ba_lun) 60 - { 61 - struct ba_lun_info *bali = NULL; 62 - int lun_size_au = 0, i = 0; 63 - int last_word_underflow = 0; 64 - u64 *lam; 65 - 66 - pr_debug("%s: Initializing LUN: lun_id=%016llx " 67 - "ba_lun->lsize=%lx ba_lun->au_size=%lX\n", 68 - __func__, ba_lun->lun_id, ba_lun->lsize, ba_lun->au_size); 69 - 70 - /* Calculate bit map size */ 71 - lun_size_au = ba_lun->lsize / ba_lun->au_size; 72 - if (lun_size_au == 0) { 73 - pr_debug("%s: Requested LUN size of 0!\n", __func__); 74 - return -EINVAL; 75 - } 76 - 77 - /* Allocate lun information container */ 78 - bali = kzalloc(sizeof(struct ba_lun_info), GFP_KERNEL); 79 - if (unlikely(!bali)) { 80 - pr_err("%s: Failed to allocate lun_info lun_id=%016llx\n", 81 - __func__, ba_lun->lun_id); 82 - return -ENOMEM; 83 - } 84 - 85 - bali->total_aus = lun_size_au; 86 - bali->lun_bmap_size = lun_size_au / BITS_PER_LONG; 87 - 88 - if (lun_size_au % BITS_PER_LONG) 89 - bali->lun_bmap_size++; 90 - 91 - /* Allocate bitmap space */ 92 - bali->lun_alloc_map = kzalloc((bali->lun_bmap_size * sizeof(u64)), 93 - GFP_KERNEL); 94 - if (unlikely(!bali->lun_alloc_map)) { 95 - pr_err("%s: Failed to allocate lun allocation map: " 96 - "lun_id=%016llx\n", __func__, ba_lun->lun_id); 97 - kfree(bali); 98 - return -ENOMEM; 99 - } 100 - 101 - /* Initialize the bit map size and set all bits to '1' */ 102 - bali->free_aun_cnt = lun_size_au; 103 - 104 - for (i = 0; i < bali->lun_bmap_size; i++) 105 - bali->lun_alloc_map[i] = 0xFFFFFFFFFFFFFFFFULL; 106 - 107 - /* If the last word not fully utilized, mark extra bits as allocated */ 108 - last_word_underflow = (bali->lun_bmap_size * BITS_PER_LONG); 109 - last_word_underflow -= bali->free_aun_cnt; 110 - if (last_word_underflow > 0) { 111 - lam = &bali->lun_alloc_map[bali->lun_bmap_size - 1]; 112 - for (i = (HIBIT - last_word_underflow + 1); 113 - i < BITS_PER_LONG; 114 - i++) 115 - clear_bit(i, (ulong 
*)lam); 116 - } 117 - 118 - /* Initialize high elevator index, low/curr already at 0 from kzalloc */ 119 - bali->free_high_idx = bali->lun_bmap_size; 120 - 121 - /* Allocate clone map */ 122 - bali->aun_clone_map = kzalloc((bali->total_aus * sizeof(u8)), 123 - GFP_KERNEL); 124 - if (unlikely(!bali->aun_clone_map)) { 125 - pr_err("%s: Failed to allocate clone map: lun_id=%016llx\n", 126 - __func__, ba_lun->lun_id); 127 - kfree(bali->lun_alloc_map); 128 - kfree(bali); 129 - return -ENOMEM; 130 - } 131 - 132 - /* Pass the allocated LUN info as a handle to the user */ 133 - ba_lun->ba_lun_handle = bali; 134 - 135 - pr_debug("%s: Successfully initialized the LUN: " 136 - "lun_id=%016llx bitmap size=%x, free_aun_cnt=%llx\n", 137 - __func__, ba_lun->lun_id, bali->lun_bmap_size, 138 - bali->free_aun_cnt); 139 - return 0; 140 - } 141 - 142 - /** 143 - * find_free_range() - locates a free bit within the block allocator 144 - * @low: First word in block allocator to start search. 145 - * @high: Last word in block allocator to search. 146 - * @bali: LUN information structure owning the block allocator to search. 147 - * @bit_word: Passes back the word in the block allocator owning the free bit. 
148 - * 149 - * Return: The bit position within the passed back word, -1 on failure 150 - */ 151 - static int find_free_range(u32 low, 152 - u32 high, 153 - struct ba_lun_info *bali, int *bit_word) 154 - { 155 - int i; 156 - u64 bit_pos = -1; 157 - ulong *lam, num_bits; 158 - 159 - for (i = low; i < high; i++) 160 - if (bali->lun_alloc_map[i] != 0) { 161 - lam = (ulong *)&bali->lun_alloc_map[i]; 162 - num_bits = (sizeof(*lam) * BITS_PER_BYTE); 163 - bit_pos = find_first_bit(lam, num_bits); 164 - 165 - pr_devel("%s: Found free bit %llu in LUN " 166 - "map entry %016llx at bitmap index = %d\n", 167 - __func__, bit_pos, bali->lun_alloc_map[i], i); 168 - 169 - *bit_word = i; 170 - bali->free_aun_cnt--; 171 - clear_bit(bit_pos, lam); 172 - break; 173 - } 174 - 175 - return bit_pos; 176 - } 177 - 178 - /** 179 - * ba_alloc() - allocates a block from the block allocator 180 - * @ba_lun: Block allocator from which to allocate a block. 181 - * 182 - * Return: The allocated block, -1 on failure 183 - */ 184 - static u64 ba_alloc(struct ba_lun *ba_lun) 185 - { 186 - u64 bit_pos = -1; 187 - int bit_word = 0; 188 - struct ba_lun_info *bali = NULL; 189 - 190 - bali = ba_lun->ba_lun_handle; 191 - 192 - pr_debug("%s: Received block allocation request: " 193 - "lun_id=%016llx free_aun_cnt=%llx\n", 194 - __func__, ba_lun->lun_id, bali->free_aun_cnt); 195 - 196 - if (bali->free_aun_cnt == 0) { 197 - pr_debug("%s: No space left on LUN: lun_id=%016llx\n", 198 - __func__, ba_lun->lun_id); 199 - return -1ULL; 200 - } 201 - 202 - /* Search to find a free entry, curr->high then low->curr */ 203 - bit_pos = find_free_range(bali->free_curr_idx, 204 - bali->free_high_idx, bali, &bit_word); 205 - if (bit_pos == -1) { 206 - bit_pos = find_free_range(bali->free_low_idx, 207 - bali->free_curr_idx, 208 - bali, &bit_word); 209 - if (bit_pos == -1) { 210 - pr_debug("%s: Could not find an allocation unit on LUN:" 211 - " lun_id=%016llx\n", __func__, ba_lun->lun_id); 212 - return -1ULL; 213 - } 214 - 
} 215 - 216 - /* Update the free_curr_idx */ 217 - if (bit_pos == HIBIT) 218 - bali->free_curr_idx = bit_word + 1; 219 - else 220 - bali->free_curr_idx = bit_word; 221 - 222 - pr_debug("%s: Allocating AU number=%llx lun_id=%016llx " 223 - "free_aun_cnt=%llx\n", __func__, 224 - ((bit_word * BITS_PER_LONG) + bit_pos), ba_lun->lun_id, 225 - bali->free_aun_cnt); 226 - 227 - return (u64) ((bit_word * BITS_PER_LONG) + bit_pos); 228 - } 229 - 230 - /** 231 - * validate_alloc() - validates the specified block has been allocated 232 - * @bali: LUN info owning the block allocator. 233 - * @aun: Block to validate. 234 - * 235 - * Return: 0 on success, -1 on failure 236 - */ 237 - static int validate_alloc(struct ba_lun_info *bali, u64 aun) 238 - { 239 - int idx = 0, bit_pos = 0; 240 - 241 - idx = aun / BITS_PER_LONG; 242 - bit_pos = aun % BITS_PER_LONG; 243 - 244 - if (test_bit(bit_pos, (ulong *)&bali->lun_alloc_map[idx])) 245 - return -1; 246 - 247 - return 0; 248 - } 249 - 250 - /** 251 - * ba_free() - frees a block from the block allocator 252 - * @ba_lun: Block allocator from which to allocate a block. 253 - * @to_free: Block to free. 254 - * 255 - * Return: 0 on success, -1 on failure 256 - */ 257 - static int ba_free(struct ba_lun *ba_lun, u64 to_free) 258 - { 259 - int idx = 0, bit_pos = 0; 260 - struct ba_lun_info *bali = NULL; 261 - 262 - bali = ba_lun->ba_lun_handle; 263 - 264 - if (validate_alloc(bali, to_free)) { 265 - pr_debug("%s: AUN %llx is not allocated on lun_id=%016llx\n", 266 - __func__, to_free, ba_lun->lun_id); 267 - return -1; 268 - } 269 - 270 - pr_debug("%s: Received a request to free AU=%llx lun_id=%016llx " 271 - "free_aun_cnt=%llx\n", __func__, to_free, ba_lun->lun_id, 272 - bali->free_aun_cnt); 273 - 274 - if (bali->aun_clone_map[to_free] > 0) { 275 - pr_debug("%s: AUN %llx lun_id=%016llx cloned. 
Clone count=%x\n", 276 - __func__, to_free, ba_lun->lun_id, 277 - bali->aun_clone_map[to_free]); 278 - bali->aun_clone_map[to_free]--; 279 - return 0; 280 - } 281 - 282 - idx = to_free / BITS_PER_LONG; 283 - bit_pos = to_free % BITS_PER_LONG; 284 - 285 - set_bit(bit_pos, (ulong *)&bali->lun_alloc_map[idx]); 286 - bali->free_aun_cnt++; 287 - 288 - if (idx < bali->free_low_idx) 289 - bali->free_low_idx = idx; 290 - else if (idx > bali->free_high_idx) 291 - bali->free_high_idx = idx; 292 - 293 - pr_debug("%s: Successfully freed AU bit_pos=%x bit map index=%x " 294 - "lun_id=%016llx free_aun_cnt=%llx\n", __func__, bit_pos, idx, 295 - ba_lun->lun_id, bali->free_aun_cnt); 296 - 297 - return 0; 298 - } 299 - 300 - /** 301 - * ba_clone() - Clone a chunk of the block allocation table 302 - * @ba_lun: Block allocator from which to allocate a block. 303 - * @to_clone: Block to clone. 304 - * 305 - * Return: 0 on success, -1 on failure 306 - */ 307 - static int ba_clone(struct ba_lun *ba_lun, u64 to_clone) 308 - { 309 - struct ba_lun_info *bali = ba_lun->ba_lun_handle; 310 - 311 - if (validate_alloc(bali, to_clone)) { 312 - pr_debug("%s: AUN=%llx not allocated on lun_id=%016llx\n", 313 - __func__, to_clone, ba_lun->lun_id); 314 - return -1; 315 - } 316 - 317 - pr_debug("%s: Received a request to clone AUN %llx on lun_id=%016llx\n", 318 - __func__, to_clone, ba_lun->lun_id); 319 - 320 - if (bali->aun_clone_map[to_clone] == MAX_AUN_CLONE_CNT) { 321 - pr_debug("%s: AUN %llx on lun_id=%016llx hit max clones already\n", 322 - __func__, to_clone, ba_lun->lun_id); 323 - return -1; 324 - } 325 - 326 - bali->aun_clone_map[to_clone]++; 327 - 328 - return 0; 329 - } 330 - 331 - /** 332 - * ba_space() - returns the amount of free space left in the block allocator 333 - * @ba_lun: Block allocator. 
334 - * 335 - * Return: Amount of free space in block allocator 336 - */ 337 - static u64 ba_space(struct ba_lun *ba_lun) 338 - { 339 - struct ba_lun_info *bali = ba_lun->ba_lun_handle; 340 - 341 - return bali->free_aun_cnt; 342 - } 343 - 344 - /** 345 - * cxlflash_ba_terminate() - frees resources associated with the block allocator 346 - * @ba_lun: Block allocator. 347 - * 348 - * Safe to call in a partially allocated state. 349 - */ 350 - void cxlflash_ba_terminate(struct ba_lun *ba_lun) 351 - { 352 - struct ba_lun_info *bali = ba_lun->ba_lun_handle; 353 - 354 - if (bali) { 355 - kfree(bali->aun_clone_map); 356 - kfree(bali->lun_alloc_map); 357 - kfree(bali); 358 - ba_lun->ba_lun_handle = NULL; 359 - } 360 - } 361 - 362 - /** 363 - * init_vlun() - initializes a LUN for virtual use 364 - * @lli: LUN information structure that owns the block allocator. 365 - * 366 - * Return: 0 on success, -errno on failure 367 - */ 368 - static int init_vlun(struct llun_info *lli) 369 - { 370 - int rc = 0; 371 - struct glun_info *gli = lli->parent; 372 - struct blka *blka = &gli->blka; 373 - 374 - memset(blka, 0, sizeof(*blka)); 375 - mutex_init(&blka->mutex); 376 - 377 - /* LUN IDs are unique per port, save the index instead */ 378 - blka->ba_lun.lun_id = lli->lun_index; 379 - blka->ba_lun.lsize = gli->max_lba + 1; 380 - blka->ba_lun.lba_size = gli->blk_len; 381 - 382 - blka->ba_lun.au_size = MC_CHUNK_SIZE; 383 - blka->nchunk = blka->ba_lun.lsize / MC_CHUNK_SIZE; 384 - 385 - rc = ba_init(&blka->ba_lun); 386 - if (unlikely(rc)) 387 - pr_debug("%s: cannot init block_alloc, rc=%d\n", __func__, rc); 388 - 389 - pr_debug("%s: returning rc=%d lli=%p\n", __func__, rc, lli); 390 - return rc; 391 - } 392 - 393 - /** 394 - * write_same16() - sends a SCSI WRITE_SAME16 (0) command to specified LUN 395 - * @sdev: SCSI device associated with LUN. 396 - * @lba: Logical block address to start write same. 397 - * @nblks: Number of logical blocks to write same. 
398 - * 399 - * The SCSI WRITE_SAME16 can take quite a while to complete. Should an EEH occur 400 - * while in scsi_execute_cmd(), the EEH handler will attempt to recover. As 401 - * part of the recovery, the handler drains all currently running ioctls, 402 - * waiting until they have completed before proceeding with a reset. As this 403 - * routine is used on the ioctl path, this can create a condition where the 404 - * EEH handler becomes stuck, infinitely waiting for this ioctl thread. To 405 - * avoid this behavior, temporarily unmark this thread as an ioctl thread by 406 - * releasing the ioctl read semaphore. This will allow the EEH handler to 407 - * proceed with a recovery while this thread is still running. Once the 408 - * scsi_execute_cmd() returns, reacquire the ioctl read semaphore and check the 409 - * adapter state in case it changed while inside of scsi_execute_cmd(). The 410 - * state check will wait if the adapter is still being recovered or return a 411 - * failure if the recovery failed. In the event that the adapter reset failed, 412 - * simply return the failure as the ioctl would be unable to continue. 413 - * 414 - * Note that the above puts a requirement on this routine to only be called on 415 - * an ioctl thread. 
416 - * 417 - * Return: 0 on success, -errno on failure 418 - */ 419 - static int write_same16(struct scsi_device *sdev, 420 - u64 lba, 421 - u32 nblks) 422 - { 423 - u8 *cmd_buf = NULL; 424 - u8 *scsi_cmd = NULL; 425 - int rc = 0; 426 - int result = 0; 427 - u64 offset = lba; 428 - int left = nblks; 429 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 430 - struct device *dev = &cfg->dev->dev; 431 - const u32 s = ilog2(sdev->sector_size) - 9; 432 - const u32 to = sdev->request_queue->rq_timeout; 433 - const u32 ws_limit = 434 - sdev->request_queue->limits.max_write_zeroes_sectors >> s; 435 - 436 - cmd_buf = kzalloc(CMD_BUFSIZE, GFP_KERNEL); 437 - scsi_cmd = kzalloc(MAX_COMMAND_SIZE, GFP_KERNEL); 438 - if (unlikely(!cmd_buf || !scsi_cmd)) { 439 - rc = -ENOMEM; 440 - goto out; 441 - } 442 - 443 - while (left > 0) { 444 - 445 - scsi_cmd[0] = WRITE_SAME_16; 446 - scsi_cmd[1] = cfg->ws_unmap ? 0x8 : 0; 447 - put_unaligned_be64(offset, &scsi_cmd[2]); 448 - put_unaligned_be32(ws_limit < left ? 
ws_limit : left, 449 - &scsi_cmd[10]); 450 - 451 - /* Drop the ioctl read semaphore across lengthy call */ 452 - up_read(&cfg->ioctl_rwsem); 453 - result = scsi_execute_cmd(sdev, scsi_cmd, REQ_OP_DRV_OUT, 454 - cmd_buf, CMD_BUFSIZE, to, 455 - CMD_RETRIES, NULL); 456 - down_read(&cfg->ioctl_rwsem); 457 - rc = check_state(cfg); 458 - if (rc) { 459 - dev_err(dev, "%s: Failed state result=%08x\n", 460 - __func__, result); 461 - rc = -ENODEV; 462 - goto out; 463 - } 464 - 465 - if (result) { 466 - dev_err_ratelimited(dev, "%s: command failed for " 467 - "offset=%lld result=%08x\n", 468 - __func__, offset, result); 469 - rc = -EIO; 470 - goto out; 471 - } 472 - left -= ws_limit; 473 - offset += ws_limit; 474 - } 475 - 476 - out: 477 - kfree(cmd_buf); 478 - kfree(scsi_cmd); 479 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 480 - return rc; 481 - } 482 - 483 - /** 484 - * grow_lxt() - expands the translation table associated with the specified RHTE 485 - * @afu: AFU associated with the host. 486 - * @sdev: SCSI device associated with LUN. 487 - * @ctxid: Context ID of context owning the RHTE. 488 - * @rhndl: Resource handle associated with the RHTE. 489 - * @rhte: Resource handle entry (RHTE). 490 - * @new_size: Number of translation entries associated with RHTE. 491 - * 492 - * By design, this routine employs a 'best attempt' allocation and will 493 - * truncate the requested size down if there is not sufficient space in 494 - * the block allocator to satisfy the request but there does exist some 495 - * amount of space. The user is made aware of this by returning the size 496 - * allocated. 
497 - * 498 - * Return: 0 on success, -errno on failure 499 - */ 500 - static int grow_lxt(struct afu *afu, 501 - struct scsi_device *sdev, 502 - ctx_hndl_t ctxid, 503 - res_hndl_t rhndl, 504 - struct sisl_rht_entry *rhte, 505 - u64 *new_size) 506 - { 507 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 508 - struct device *dev = &cfg->dev->dev; 509 - struct sisl_lxt_entry *lxt = NULL, *lxt_old = NULL; 510 - struct llun_info *lli = sdev->hostdata; 511 - struct glun_info *gli = lli->parent; 512 - struct blka *blka = &gli->blka; 513 - u32 av_size; 514 - u32 ngrps, ngrps_old; 515 - u64 aun; /* chunk# allocated by block allocator */ 516 - u64 delta = *new_size - rhte->lxt_cnt; 517 - u64 my_new_size; 518 - int i, rc = 0; 519 - 520 - /* 521 - * Check what is available in the block allocator before re-allocating 522 - * LXT array. This is done up front under the mutex which must not be 523 - * released until after allocation is complete. 524 - */ 525 - mutex_lock(&blka->mutex); 526 - av_size = ba_space(&blka->ba_lun); 527 - if (unlikely(av_size <= 0)) { 528 - dev_dbg(dev, "%s: ba_space error av_size=%d\n", 529 - __func__, av_size); 530 - mutex_unlock(&blka->mutex); 531 - rc = -ENOSPC; 532 - goto out; 533 - } 534 - 535 - if (av_size < delta) 536 - delta = av_size; 537 - 538 - lxt_old = rhte->lxt_start; 539 - ngrps_old = LXT_NUM_GROUPS(rhte->lxt_cnt); 540 - ngrps = LXT_NUM_GROUPS(rhte->lxt_cnt + delta); 541 - 542 - if (ngrps != ngrps_old) { 543 - /* reallocate to fit new size */ 544 - lxt = kzalloc((sizeof(*lxt) * LXT_GROUP_SIZE * ngrps), 545 - GFP_KERNEL); 546 - if (unlikely(!lxt)) { 547 - mutex_unlock(&blka->mutex); 548 - rc = -ENOMEM; 549 - goto out; 550 - } 551 - 552 - /* copy over all old entries */ 553 - memcpy(lxt, lxt_old, (sizeof(*lxt) * rhte->lxt_cnt)); 554 - } else 555 - lxt = lxt_old; 556 - 557 - /* nothing can fail from now on */ 558 - my_new_size = rhte->lxt_cnt + delta; 559 - 560 - /* add new entries to the end */ 561 - for (i = rhte->lxt_cnt; i < 
my_new_size; i++) { 562 - /* 563 - * Due to the earlier check of available space, ba_alloc 564 - * cannot fail here. If it did due to internal error, 565 - * leave a rlba_base of -1u which will likely be a 566 - * invalid LUN (too large). 567 - */ 568 - aun = ba_alloc(&blka->ba_lun); 569 - if ((aun == -1ULL) || (aun >= blka->nchunk)) 570 - dev_dbg(dev, "%s: ba_alloc error allocated chunk=%llu " 571 - "max=%llu\n", __func__, aun, blka->nchunk - 1); 572 - 573 - /* select both ports, use r/w perms from RHT */ 574 - lxt[i].rlba_base = ((aun << MC_CHUNK_SHIFT) | 575 - (lli->lun_index << LXT_LUNIDX_SHIFT) | 576 - (RHT_PERM_RW << LXT_PERM_SHIFT | 577 - lli->port_sel)); 578 - } 579 - 580 - mutex_unlock(&blka->mutex); 581 - 582 - /* 583 - * The following sequence is prescribed in the SISlite spec 584 - * for syncing up with the AFU when adding LXT entries. 585 - */ 586 - dma_wmb(); /* Make LXT updates are visible */ 587 - 588 - rhte->lxt_start = lxt; 589 - dma_wmb(); /* Make RHT entry's LXT table update visible */ 590 - 591 - rhte->lxt_cnt = my_new_size; 592 - dma_wmb(); /* Make RHT entry's LXT table size update visible */ 593 - 594 - rc = cxlflash_afu_sync(afu, ctxid, rhndl, AFU_LW_SYNC); 595 - if (unlikely(rc)) 596 - rc = -EAGAIN; 597 - 598 - /* free old lxt if reallocated */ 599 - if (lxt != lxt_old) 600 - kfree(lxt_old); 601 - *new_size = my_new_size; 602 - out: 603 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 604 - return rc; 605 - } 606 - 607 - /** 608 - * shrink_lxt() - reduces translation table associated with the specified RHTE 609 - * @afu: AFU associated with the host. 610 - * @sdev: SCSI device associated with LUN. 611 - * @rhndl: Resource handle associated with the RHTE. 612 - * @rhte: Resource handle entry (RHTE). 613 - * @ctxi: Context owning resources. 614 - * @new_size: Number of translation entries associated with RHTE. 
615 - * 616 - * Return: 0 on success, -errno on failure 617 - */ 618 - static int shrink_lxt(struct afu *afu, 619 - struct scsi_device *sdev, 620 - res_hndl_t rhndl, 621 - struct sisl_rht_entry *rhte, 622 - struct ctx_info *ctxi, 623 - u64 *new_size) 624 - { 625 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 626 - struct device *dev = &cfg->dev->dev; 627 - struct sisl_lxt_entry *lxt, *lxt_old; 628 - struct llun_info *lli = sdev->hostdata; 629 - struct glun_info *gli = lli->parent; 630 - struct blka *blka = &gli->blka; 631 - ctx_hndl_t ctxid = DECODE_CTXID(ctxi->ctxid); 632 - bool needs_ws = ctxi->rht_needs_ws[rhndl]; 633 - bool needs_sync = !ctxi->err_recovery_active; 634 - u32 ngrps, ngrps_old; 635 - u64 aun; /* chunk# allocated by block allocator */ 636 - u64 delta = rhte->lxt_cnt - *new_size; 637 - u64 my_new_size; 638 - int i, rc = 0; 639 - 640 - lxt_old = rhte->lxt_start; 641 - ngrps_old = LXT_NUM_GROUPS(rhte->lxt_cnt); 642 - ngrps = LXT_NUM_GROUPS(rhte->lxt_cnt - delta); 643 - 644 - if (ngrps != ngrps_old) { 645 - /* Reallocate to fit new size unless new size is 0 */ 646 - if (ngrps) { 647 - lxt = kzalloc((sizeof(*lxt) * LXT_GROUP_SIZE * ngrps), 648 - GFP_KERNEL); 649 - if (unlikely(!lxt)) { 650 - rc = -ENOMEM; 651 - goto out; 652 - } 653 - 654 - /* Copy over old entries that will remain */ 655 - memcpy(lxt, lxt_old, 656 - (sizeof(*lxt) * (rhte->lxt_cnt - delta))); 657 - } else 658 - lxt = NULL; 659 - } else 660 - lxt = lxt_old; 661 - 662 - /* Nothing can fail from now on */ 663 - my_new_size = rhte->lxt_cnt - delta; 664 - 665 - /* 666 - * The following sequence is prescribed in the SISlite spec 667 - * for syncing up with the AFU when removing LXT entries. 
668 - */ 669 - rhte->lxt_cnt = my_new_size; 670 - dma_wmb(); /* Make RHT entry's LXT table size update visible */ 671 - 672 - rhte->lxt_start = lxt; 673 - dma_wmb(); /* Make RHT entry's LXT table update visible */ 674 - 675 - if (needs_sync) { 676 - rc = cxlflash_afu_sync(afu, ctxid, rhndl, AFU_HW_SYNC); 677 - if (unlikely(rc)) 678 - rc = -EAGAIN; 679 - } 680 - 681 - if (needs_ws) { 682 - /* 683 - * Mark the context as unavailable, so that we can release 684 - * the mutex safely. 685 - */ 686 - ctxi->unavail = true; 687 - mutex_unlock(&ctxi->mutex); 688 - } 689 - 690 - /* Free LBAs allocated to freed chunks */ 691 - mutex_lock(&blka->mutex); 692 - for (i = delta - 1; i >= 0; i--) { 693 - aun = lxt_old[my_new_size + i].rlba_base >> MC_CHUNK_SHIFT; 694 - if (needs_ws) 695 - write_same16(sdev, aun, MC_CHUNK_SIZE); 696 - ba_free(&blka->ba_lun, aun); 697 - } 698 - mutex_unlock(&blka->mutex); 699 - 700 - if (needs_ws) { 701 - /* Make the context visible again */ 702 - mutex_lock(&ctxi->mutex); 703 - ctxi->unavail = false; 704 - } 705 - 706 - /* Free old lxt if reallocated */ 707 - if (lxt != lxt_old) 708 - kfree(lxt_old); 709 - *new_size = my_new_size; 710 - out: 711 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 712 - return rc; 713 - } 714 - 715 - /** 716 - * _cxlflash_vlun_resize() - changes the size of a virtual LUN 717 - * @sdev: SCSI device associated with LUN owning virtual LUN. 718 - * @ctxi: Context owning resources. 719 - * @resize: Resize ioctl data structure. 720 - * 721 - * On successful return, the user is informed of the new size (in blocks) 722 - * of the virtual LUN in last LBA format. When the size of the virtual 723 - * LUN is zero, the last LBA is reflected as -1. See comment in the 724 - * prologue for _cxlflash_disk_release() regarding AFU syncs and contexts 725 - * on the error recovery list. 
726 - * 727 - * Return: 0 on success, -errno on failure 728 - */ 729 - int _cxlflash_vlun_resize(struct scsi_device *sdev, 730 - struct ctx_info *ctxi, 731 - struct dk_cxlflash_resize *resize) 732 - { 733 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 734 - struct device *dev = &cfg->dev->dev; 735 - struct llun_info *lli = sdev->hostdata; 736 - struct glun_info *gli = lli->parent; 737 - struct afu *afu = cfg->afu; 738 - bool put_ctx = false; 739 - 740 - res_hndl_t rhndl = resize->rsrc_handle; 741 - u64 new_size; 742 - u64 nsectors; 743 - u64 ctxid = DECODE_CTXID(resize->context_id), 744 - rctxid = resize->context_id; 745 - 746 - struct sisl_rht_entry *rhte; 747 - 748 - int rc = 0; 749 - 750 - /* 751 - * The requested size (req_size) is always assumed to be in 4k blocks, 752 - * so we have to convert it here from 4k to chunk size. 753 - */ 754 - nsectors = (resize->req_size * CXLFLASH_BLOCK_SIZE) / gli->blk_len; 755 - new_size = DIV_ROUND_UP(nsectors, MC_CHUNK_SIZE); 756 - 757 - dev_dbg(dev, "%s: ctxid=%llu rhndl=%llu req_size=%llu new_size=%llu\n", 758 - __func__, ctxid, resize->rsrc_handle, resize->req_size, 759 - new_size); 760 - 761 - if (unlikely(gli->mode != MODE_VIRTUAL)) { 762 - dev_dbg(dev, "%s: LUN mode does not support resize mode=%d\n", 763 - __func__, gli->mode); 764 - rc = -EINVAL; 765 - goto out; 766 - 767 - } 768 - 769 - if (!ctxi) { 770 - ctxi = get_context(cfg, rctxid, lli, CTX_CTRL_ERR_FALLBACK); 771 - if (unlikely(!ctxi)) { 772 - dev_dbg(dev, "%s: Bad context ctxid=%llu\n", 773 - __func__, ctxid); 774 - rc = -EINVAL; 775 - goto out; 776 - } 777 - 778 - put_ctx = true; 779 - } 780 - 781 - rhte = get_rhte(ctxi, rhndl, lli); 782 - if (unlikely(!rhte)) { 783 - dev_dbg(dev, "%s: Bad resource handle rhndl=%u\n", 784 - __func__, rhndl); 785 - rc = -EINVAL; 786 - goto out; 787 - } 788 - 789 - if (new_size > rhte->lxt_cnt) 790 - rc = grow_lxt(afu, sdev, ctxid, rhndl, rhte, &new_size); 791 - else if (new_size < rhte->lxt_cnt) 792 - rc = 
shrink_lxt(afu, sdev, rhndl, rhte, ctxi, &new_size); 793 - else { 794 - /* 795 - * Rare case where there is already sufficient space, just 796 - * need to perform a translation sync with the AFU. This 797 - * scenario likely follows a previous sync failure during 798 - * a resize operation. Accordingly, perform the heavyweight 799 - * form of translation sync as it is unknown which type of 800 - * resize failed previously. 801 - */ 802 - rc = cxlflash_afu_sync(afu, ctxid, rhndl, AFU_HW_SYNC); 803 - if (unlikely(rc)) { 804 - rc = -EAGAIN; 805 - goto out; 806 - } 807 - } 808 - 809 - resize->hdr.return_flags = 0; 810 - resize->last_lba = (new_size * MC_CHUNK_SIZE * gli->blk_len); 811 - resize->last_lba /= CXLFLASH_BLOCK_SIZE; 812 - resize->last_lba--; 813 - 814 - out: 815 - if (put_ctx) 816 - put_context(ctxi); 817 - dev_dbg(dev, "%s: resized to %llu returning rc=%d\n", 818 - __func__, resize->last_lba, rc); 819 - return rc; 820 - } 821 - 822 - int cxlflash_vlun_resize(struct scsi_device *sdev, void *resize) 823 - { 824 - return _cxlflash_vlun_resize(sdev, NULL, resize); 825 - } 826 - 827 - /** 828 - * cxlflash_restore_luntable() - Restore LUN table to prior state 829 - * @cfg: Internal structure associated with the host. 
830 - */ 831 - void cxlflash_restore_luntable(struct cxlflash_cfg *cfg) 832 - { 833 - struct llun_info *lli, *temp; 834 - u32 lind; 835 - int k; 836 - struct device *dev = &cfg->dev->dev; 837 - __be64 __iomem *fc_port_luns; 838 - 839 - mutex_lock(&global.mutex); 840 - 841 - list_for_each_entry_safe(lli, temp, &cfg->lluns, list) { 842 - if (!lli->in_table) 843 - continue; 844 - 845 - lind = lli->lun_index; 846 - dev_dbg(dev, "%s: Virtual LUNs on slot %d:\n", __func__, lind); 847 - 848 - for (k = 0; k < cfg->num_fc_ports; k++) 849 - if (lli->port_sel & (1 << k)) { 850 - fc_port_luns = get_fc_port_luns(cfg, k); 851 - writeq_be(lli->lun_id[k], &fc_port_luns[lind]); 852 - dev_dbg(dev, "\t%d=%llx\n", k, lli->lun_id[k]); 853 - } 854 - } 855 - 856 - mutex_unlock(&global.mutex); 857 - } 858 - 859 - /** 860 - * get_num_ports() - compute number of ports from port selection mask 861 - * @psm: Port selection mask. 862 - * 863 - * Return: Population count of port selection mask 864 - */ 865 - static inline u8 get_num_ports(u32 psm) 866 - { 867 - static const u8 bits[16] = { 0, 1, 1, 2, 1, 2, 2, 3, 868 - 1, 2, 2, 3, 2, 3, 3, 4 }; 869 - 870 - return bits[psm & 0xf]; 871 - } 872 - 873 - /** 874 - * init_luntable() - write an entry in the LUN table 875 - * @cfg: Internal structure associated with the host. 876 - * @lli: Per adapter LUN information structure. 877 - * 878 - * On successful return, a LUN table entry is created: 879 - * - at the top for LUNs visible on multiple ports. 880 - * - at the bottom for LUNs visible only on one port. 
881 - * 882 - * Return: 0 on success, -errno on failure 883 - */ 884 - static int init_luntable(struct cxlflash_cfg *cfg, struct llun_info *lli) 885 - { 886 - u32 chan; 887 - u32 lind; 888 - u32 nports; 889 - int rc = 0; 890 - int k; 891 - struct device *dev = &cfg->dev->dev; 892 - __be64 __iomem *fc_port_luns; 893 - 894 - mutex_lock(&global.mutex); 895 - 896 - if (lli->in_table) 897 - goto out; 898 - 899 - nports = get_num_ports(lli->port_sel); 900 - if (nports == 0 || nports > cfg->num_fc_ports) { 901 - WARN(1, "Unsupported port configuration nports=%u", nports); 902 - rc = -EIO; 903 - goto out; 904 - } 905 - 906 - if (nports > 1) { 907 - /* 908 - * When LUN is visible from multiple ports, we will put 909 - * it in the top half of the LUN table. 910 - */ 911 - for (k = 0; k < cfg->num_fc_ports; k++) { 912 - if (!(lli->port_sel & (1 << k))) 913 - continue; 914 - 915 - if (cfg->promote_lun_index == cfg->last_lun_index[k]) { 916 - rc = -ENOSPC; 917 - goto out; 918 - } 919 - } 920 - 921 - lind = lli->lun_index = cfg->promote_lun_index; 922 - dev_dbg(dev, "%s: Virtual LUNs on slot %d:\n", __func__, lind); 923 - 924 - for (k = 0; k < cfg->num_fc_ports; k++) { 925 - if (!(lli->port_sel & (1 << k))) 926 - continue; 927 - 928 - fc_port_luns = get_fc_port_luns(cfg, k); 929 - writeq_be(lli->lun_id[k], &fc_port_luns[lind]); 930 - dev_dbg(dev, "\t%d=%llx\n", k, lli->lun_id[k]); 931 - } 932 - 933 - cfg->promote_lun_index++; 934 - } else { 935 - /* 936 - * When LUN is visible only from one port, we will put 937 - * it in the bottom half of the LUN table. 
938 - */ 939 - chan = PORTMASK2CHAN(lli->port_sel); 940 - if (cfg->promote_lun_index == cfg->last_lun_index[chan]) { 941 - rc = -ENOSPC; 942 - goto out; 943 - } 944 - 945 - lind = lli->lun_index = cfg->last_lun_index[chan]; 946 - fc_port_luns = get_fc_port_luns(cfg, chan); 947 - writeq_be(lli->lun_id[chan], &fc_port_luns[lind]); 948 - cfg->last_lun_index[chan]--; 949 - dev_dbg(dev, "%s: Virtual LUNs on slot %d:\n\t%d=%llx\n", 950 - __func__, lind, chan, lli->lun_id[chan]); 951 - } 952 - 953 - lli->in_table = true; 954 - out: 955 - mutex_unlock(&global.mutex); 956 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 957 - return rc; 958 - } 959 - 960 - /** 961 - * cxlflash_disk_virtual_open() - open a virtual disk of specified size 962 - * @sdev: SCSI device associated with LUN owning virtual LUN. 963 - * @arg: UVirtual ioctl data structure. 964 - * 965 - * On successful return, the user is informed of the resource handle 966 - * to be used to identify the virtual LUN and the size (in blocks) of 967 - * the virtual LUN in last LBA format. When the size of the virtual LUN 968 - * is zero, the last LBA is reflected as -1. 
969 - * 970 - * Return: 0 on success, -errno on failure 971 - */ 972 - int cxlflash_disk_virtual_open(struct scsi_device *sdev, void *arg) 973 - { 974 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 975 - struct device *dev = &cfg->dev->dev; 976 - struct llun_info *lli = sdev->hostdata; 977 - struct glun_info *gli = lli->parent; 978 - 979 - struct dk_cxlflash_uvirtual *virt = (struct dk_cxlflash_uvirtual *)arg; 980 - struct dk_cxlflash_resize resize; 981 - 982 - u64 ctxid = DECODE_CTXID(virt->context_id), 983 - rctxid = virt->context_id; 984 - u64 lun_size = virt->lun_size; 985 - u64 last_lba = 0; 986 - u64 rsrc_handle = -1; 987 - 988 - int rc = 0; 989 - 990 - struct ctx_info *ctxi = NULL; 991 - struct sisl_rht_entry *rhte = NULL; 992 - 993 - dev_dbg(dev, "%s: ctxid=%llu ls=%llu\n", __func__, ctxid, lun_size); 994 - 995 - /* Setup the LUNs block allocator on first call */ 996 - mutex_lock(&gli->mutex); 997 - if (gli->mode == MODE_NONE) { 998 - rc = init_vlun(lli); 999 - if (rc) { 1000 - dev_err(dev, "%s: init_vlun failed rc=%d\n", 1001 - __func__, rc); 1002 - rc = -ENOMEM; 1003 - goto err0; 1004 - } 1005 - } 1006 - 1007 - rc = cxlflash_lun_attach(gli, MODE_VIRTUAL, true); 1008 - if (unlikely(rc)) { 1009 - dev_err(dev, "%s: Failed attach to LUN (VIRTUAL)\n", __func__); 1010 - goto err0; 1011 - } 1012 - mutex_unlock(&gli->mutex); 1013 - 1014 - rc = init_luntable(cfg, lli); 1015 - if (rc) { 1016 - dev_err(dev, "%s: init_luntable failed rc=%d\n", __func__, rc); 1017 - goto err1; 1018 - } 1019 - 1020 - ctxi = get_context(cfg, rctxid, lli, 0); 1021 - if (unlikely(!ctxi)) { 1022 - dev_err(dev, "%s: Bad context ctxid=%llu\n", __func__, ctxid); 1023 - rc = -EINVAL; 1024 - goto err1; 1025 - } 1026 - 1027 - rhte = rhte_checkout(ctxi, lli); 1028 - if (unlikely(!rhte)) { 1029 - dev_err(dev, "%s: too many opens ctxid=%llu\n", 1030 - __func__, ctxid); 1031 - rc = -EMFILE; /* too many opens */ 1032 - goto err1; 1033 - } 1034 - 1035 - rsrc_handle = (rhte - ctxi->rht_start); 
1036 - 1037 - /* Populate RHT format 0 */ 1038 - rhte->nmask = MC_RHT_NMASK; 1039 - rhte->fp = SISL_RHT_FP(0U, ctxi->rht_perms); 1040 - 1041 - /* Resize even if requested size is 0 */ 1042 - marshal_virt_to_resize(virt, &resize); 1043 - resize.rsrc_handle = rsrc_handle; 1044 - rc = _cxlflash_vlun_resize(sdev, ctxi, &resize); 1045 - if (rc) { 1046 - dev_err(dev, "%s: resize failed rc=%d\n", __func__, rc); 1047 - goto err2; 1048 - } 1049 - last_lba = resize.last_lba; 1050 - 1051 - if (virt->hdr.flags & DK_CXLFLASH_UVIRTUAL_NEED_WRITE_SAME) 1052 - ctxi->rht_needs_ws[rsrc_handle] = true; 1053 - 1054 - virt->hdr.return_flags = 0; 1055 - virt->last_lba = last_lba; 1056 - virt->rsrc_handle = rsrc_handle; 1057 - 1058 - if (get_num_ports(lli->port_sel) > 1) 1059 - virt->hdr.return_flags |= DK_CXLFLASH_ALL_PORTS_ACTIVE; 1060 - out: 1061 - if (likely(ctxi)) 1062 - put_context(ctxi); 1063 - dev_dbg(dev, "%s: returning handle=%llu rc=%d llba=%llu\n", 1064 - __func__, rsrc_handle, rc, last_lba); 1065 - return rc; 1066 - 1067 - err2: 1068 - rhte_checkin(ctxi, rhte); 1069 - err1: 1070 - cxlflash_lun_detach(gli); 1071 - goto out; 1072 - err0: 1073 - /* Special common cleanup prior to successful LUN attach */ 1074 - cxlflash_ba_terminate(&gli->blka.ba_lun); 1075 - mutex_unlock(&gli->mutex); 1076 - goto out; 1077 - } 1078 - 1079 - /** 1080 - * clone_lxt() - copies translation tables from source to destination RHTE 1081 - * @afu: AFU associated with the host. 1082 - * @blka: Block allocator associated with LUN. 1083 - * @ctxid: Context ID of context owning the RHTE. 1084 - * @rhndl: Resource handle associated with the RHTE. 1085 - * @rhte: Destination resource handle entry (RHTE). 1086 - * @rhte_src: Source resource handle entry (RHTE). 
1087 - * 1088 - * Return: 0 on success, -errno on failure 1089 - */ 1090 - static int clone_lxt(struct afu *afu, 1091 - struct blka *blka, 1092 - ctx_hndl_t ctxid, 1093 - res_hndl_t rhndl, 1094 - struct sisl_rht_entry *rhte, 1095 - struct sisl_rht_entry *rhte_src) 1096 - { 1097 - struct cxlflash_cfg *cfg = afu->parent; 1098 - struct device *dev = &cfg->dev->dev; 1099 - struct sisl_lxt_entry *lxt = NULL; 1100 - bool locked = false; 1101 - u32 ngrps; 1102 - u64 aun; /* chunk# allocated by block allocator */ 1103 - int j; 1104 - int i = 0; 1105 - int rc = 0; 1106 - 1107 - ngrps = LXT_NUM_GROUPS(rhte_src->lxt_cnt); 1108 - 1109 - if (ngrps) { 1110 - /* allocate new LXTs for clone */ 1111 - lxt = kzalloc((sizeof(*lxt) * LXT_GROUP_SIZE * ngrps), 1112 - GFP_KERNEL); 1113 - if (unlikely(!lxt)) { 1114 - rc = -ENOMEM; 1115 - goto out; 1116 - } 1117 - 1118 - /* copy over */ 1119 - memcpy(lxt, rhte_src->lxt_start, 1120 - (sizeof(*lxt) * rhte_src->lxt_cnt)); 1121 - 1122 - /* clone the LBAs in block allocator via ref_cnt, note that the 1123 - * block allocator mutex must be held until it is established 1124 - * that this routine will complete without the need for a 1125 - * cleanup. 1126 - */ 1127 - mutex_lock(&blka->mutex); 1128 - locked = true; 1129 - for (i = 0; i < rhte_src->lxt_cnt; i++) { 1130 - aun = (lxt[i].rlba_base >> MC_CHUNK_SHIFT); 1131 - if (ba_clone(&blka->ba_lun, aun) == -1ULL) { 1132 - rc = -EIO; 1133 - goto err; 1134 - } 1135 - } 1136 - } 1137 - 1138 - /* 1139 - * The following sequence is prescribed in the SISlite spec 1140 - * for syncing up with the AFU when adding LXT entries. 
1141 - */ 1142 - dma_wmb(); /* Make LXT updates are visible */ 1143 - 1144 - rhte->lxt_start = lxt; 1145 - dma_wmb(); /* Make RHT entry's LXT table update visible */ 1146 - 1147 - rhte->lxt_cnt = rhte_src->lxt_cnt; 1148 - dma_wmb(); /* Make RHT entry's LXT table size update visible */ 1149 - 1150 - rc = cxlflash_afu_sync(afu, ctxid, rhndl, AFU_LW_SYNC); 1151 - if (unlikely(rc)) { 1152 - rc = -EAGAIN; 1153 - goto err2; 1154 - } 1155 - 1156 - out: 1157 - if (locked) 1158 - mutex_unlock(&blka->mutex); 1159 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 1160 - return rc; 1161 - err2: 1162 - /* Reset the RHTE */ 1163 - rhte->lxt_cnt = 0; 1164 - dma_wmb(); 1165 - rhte->lxt_start = NULL; 1166 - dma_wmb(); 1167 - err: 1168 - /* free the clones already made */ 1169 - for (j = 0; j < i; j++) { 1170 - aun = (lxt[j].rlba_base >> MC_CHUNK_SHIFT); 1171 - ba_free(&blka->ba_lun, aun); 1172 - } 1173 - kfree(lxt); 1174 - goto out; 1175 - } 1176 - 1177 - /** 1178 - * cxlflash_disk_clone() - clone a context by making snapshot of another 1179 - * @sdev: SCSI device associated with LUN owning virtual LUN. 1180 - * @arg: Clone ioctl data structure. 1181 - * 1182 - * This routine effectively performs cxlflash_disk_open operation for each 1183 - * in-use virtual resource in the source context. Note that the destination 1184 - * context must be in pristine state and cannot have any resource handles 1185 - * open at the time of the clone. 
1186 - * 1187 - * Return: 0 on success, -errno on failure 1188 - */ 1189 - int cxlflash_disk_clone(struct scsi_device *sdev, void *arg) 1190 - { 1191 - struct dk_cxlflash_clone *clone = arg; 1192 - struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1193 - struct device *dev = &cfg->dev->dev; 1194 - struct llun_info *lli = sdev->hostdata; 1195 - struct glun_info *gli = lli->parent; 1196 - struct blka *blka = &gli->blka; 1197 - struct afu *afu = cfg->afu; 1198 - struct dk_cxlflash_release release = { { 0 }, 0 }; 1199 - 1200 - struct ctx_info *ctxi_src = NULL, 1201 - *ctxi_dst = NULL; 1202 - struct lun_access *lun_access_src, *lun_access_dst; 1203 - u32 perms; 1204 - u64 ctxid_src = DECODE_CTXID(clone->context_id_src), 1205 - ctxid_dst = DECODE_CTXID(clone->context_id_dst), 1206 - rctxid_src = clone->context_id_src, 1207 - rctxid_dst = clone->context_id_dst; 1208 - int i, j; 1209 - int rc = 0; 1210 - bool found; 1211 - LIST_HEAD(sidecar); 1212 - 1213 - dev_dbg(dev, "%s: ctxid_src=%llu ctxid_dst=%llu\n", 1214 - __func__, ctxid_src, ctxid_dst); 1215 - 1216 - /* Do not clone yourself */ 1217 - if (unlikely(rctxid_src == rctxid_dst)) { 1218 - rc = -EINVAL; 1219 - goto out; 1220 - } 1221 - 1222 - if (unlikely(gli->mode != MODE_VIRTUAL)) { 1223 - rc = -EINVAL; 1224 - dev_dbg(dev, "%s: Only supported on virtual LUNs mode=%u\n", 1225 - __func__, gli->mode); 1226 - goto out; 1227 - } 1228 - 1229 - ctxi_src = get_context(cfg, rctxid_src, lli, CTX_CTRL_CLONE); 1230 - ctxi_dst = get_context(cfg, rctxid_dst, lli, 0); 1231 - if (unlikely(!ctxi_src || !ctxi_dst)) { 1232 - dev_dbg(dev, "%s: Bad context ctxid_src=%llu ctxid_dst=%llu\n", 1233 - __func__, ctxid_src, ctxid_dst); 1234 - rc = -EINVAL; 1235 - goto out; 1236 - } 1237 - 1238 - /* Verify there is no open resource handle in the destination context */ 1239 - for (i = 0; i < MAX_RHT_PER_CONTEXT; i++) 1240 - if (ctxi_dst->rht_start[i].nmask != 0) { 1241 - rc = -EINVAL; 1242 - goto out; 1243 - } 1244 - 1245 - /* Clone LUN access 
list */ 1246 - list_for_each_entry(lun_access_src, &ctxi_src->luns, list) { 1247 - found = false; 1248 - list_for_each_entry(lun_access_dst, &ctxi_dst->luns, list) 1249 - if (lun_access_dst->sdev == lun_access_src->sdev) { 1250 - found = true; 1251 - break; 1252 - } 1253 - 1254 - if (!found) { 1255 - lun_access_dst = kzalloc(sizeof(*lun_access_dst), 1256 - GFP_KERNEL); 1257 - if (unlikely(!lun_access_dst)) { 1258 - dev_err(dev, "%s: lun_access allocation fail\n", 1259 - __func__); 1260 - rc = -ENOMEM; 1261 - goto out; 1262 - } 1263 - 1264 - *lun_access_dst = *lun_access_src; 1265 - list_add(&lun_access_dst->list, &sidecar); 1266 - } 1267 - } 1268 - 1269 - if (unlikely(!ctxi_src->rht_out)) { 1270 - dev_dbg(dev, "%s: Nothing to clone\n", __func__); 1271 - goto out_success; 1272 - } 1273 - 1274 - /* User specified permission on attach */ 1275 - perms = ctxi_dst->rht_perms; 1276 - 1277 - /* 1278 - * Copy over checked-out RHT (and their associated LXT) entries by 1279 - * hand, stopping after we've copied all outstanding entries and 1280 - * cleaning up if the clone fails. 1281 - * 1282 - * Note: This loop is equivalent to performing cxlflash_disk_open and 1283 - * cxlflash_vlun_resize. As such, LUN accounting needs to be taken into 1284 - * account by attaching after each successful RHT entry clone. In the 1285 - * event that a clone failure is experienced, the LUN detach is handled 1286 - * via the cleanup performed by _cxlflash_disk_release. 
1287 - */ 1288 - for (i = 0; i < MAX_RHT_PER_CONTEXT; i++) { 1289 - if (ctxi_src->rht_out == ctxi_dst->rht_out) 1290 - break; 1291 - if (ctxi_src->rht_start[i].nmask == 0) 1292 - continue; 1293 - 1294 - /* Consume a destination RHT entry */ 1295 - ctxi_dst->rht_out++; 1296 - ctxi_dst->rht_start[i].nmask = ctxi_src->rht_start[i].nmask; 1297 - ctxi_dst->rht_start[i].fp = 1298 - SISL_RHT_FP_CLONE(ctxi_src->rht_start[i].fp, perms); 1299 - ctxi_dst->rht_lun[i] = ctxi_src->rht_lun[i]; 1300 - 1301 - rc = clone_lxt(afu, blka, ctxid_dst, i, 1302 - &ctxi_dst->rht_start[i], 1303 - &ctxi_src->rht_start[i]); 1304 - if (rc) { 1305 - marshal_clone_to_rele(clone, &release); 1306 - for (j = 0; j < i; j++) { 1307 - release.rsrc_handle = j; 1308 - _cxlflash_disk_release(sdev, ctxi_dst, 1309 - &release); 1310 - } 1311 - 1312 - /* Put back the one we failed on */ 1313 - rhte_checkin(ctxi_dst, &ctxi_dst->rht_start[i]); 1314 - goto err; 1315 - } 1316 - 1317 - cxlflash_lun_attach(gli, gli->mode, false); 1318 - } 1319 - 1320 - out_success: 1321 - list_splice(&sidecar, &ctxi_dst->luns); 1322 - 1323 - /* fall through */ 1324 - out: 1325 - if (ctxi_src) 1326 - put_context(ctxi_src); 1327 - if (ctxi_dst) 1328 - put_context(ctxi_dst); 1329 - dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 1330 - return rc; 1331 - 1332 - err: 1333 - list_for_each_entry_safe(lun_access_src, lun_access_dst, &sidecar, list) 1334 - kfree(lun_access_src); 1335 - goto out; 1336 - }
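The removed `_cxlflash_vlun_resize` converts a request expressed in 4K blocks into a chunk count and reports the result back as a last LBA (with -1 meaning a zero-sized virtual LUN). A standalone sketch of that arithmetic, using illustrative constant values (the driver derives `blk_len` from the device; `MC_CHUNK_SIZE` here assumes the 16-bit chunk shift from vlun.h):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative constants; the driver reads blk_len from the LUN and
 * derives the chunk size from MC_CHUNK_SHIFT (16 bits -> 65536 LBAs). */
#define CXLFLASH_BLOCK_SIZE 4096u        /* ioctl sizes are in 4K blocks */
#define MC_CHUNK_SIZE (1ull << 16)       /* LBAs per chunk */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Convert a requested size in 4K blocks into device sectors, then round
 * up to whole chunks, as the resize path does. */
static uint64_t resize_chunks(uint64_t req_size, uint32_t blk_len)
{
	uint64_t nsectors = req_size * CXLFLASH_BLOCK_SIZE / blk_len;

	return DIV_ROUND_UP(nsectors, MC_CHUNK_SIZE);
}

/* Report the last LBA in 4K-block units; a zero-chunk LUN reports -1. */
static int64_t last_lba(uint64_t nchunks, uint32_t blk_len)
{
	return (int64_t)(nchunks * MC_CHUNK_SIZE * blk_len /
			 CXLFLASH_BLOCK_SIZE) - 1;
}
```

With 512-byte sectors, a one-block request rounds up to a full chunk, and the reported last LBA covers that whole chunk rather than just the requested size.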
-82
drivers/scsi/cxlflash/vlun.h
···
1 - /* SPDX-License-Identifier: GPL-2.0-or-later */
2 - /*
3 -  * CXL Flash Device Driver
4 -  *
5 -  * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
6 -  *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
7 -  *
8 -  * Copyright (C) 2015 IBM Corporation
9 -  */
10 -
11 - #ifndef _CXLFLASH_VLUN_H
12 - #define _CXLFLASH_VLUN_H
13 -
14 - /* RHT - Resource Handle Table */
15 - #define MC_RHT_NMASK      16      /* in bits */
16 - #define MC_CHUNK_SHIFT    MC_RHT_NMASK  /* shift to go from LBA to chunk# */
17 -
18 - #define HIBIT             (BITS_PER_LONG - 1)
19 -
20 - #define MAX_AUN_CLONE_CNT 0xFF
21 -
22 - /*
23 -  * LXT - LBA Translation Table
24 -  *
25 -  * +-------+-------+-------+-------+-------+-------+-------+---+---+
26 -  * |                   RLBA_BASE                   |LUN_IDX| P |SEL|
27 -  * +-------+-------+-------+-------+-------+-------+-------+---+---+
28 -  *
29 -  * The LXT Entry contains the physical LBA where the chunk starts (RLBA_BASE).
30 -  * AFU ORes the low order bits from the virtual LBA (offset into the chunk)
31 -  * with RLBA_BASE. The result is the physical LBA to be sent to storage.
32 -  * The LXT Entry also contains an index to a LUN TBL and a bitmask of which
33 -  * outgoing (FC) ports can be selected. The port select bit-mask is ANDed
34 -  * with a global port select bit-mask maintained by the driver.
35 -  * In addition, it has permission bits that are ANDed with the
36 -  * RHT permissions to arrive at the final permissions for the chunk.
37 -  *
38 -  * LXT tables are allocated dynamically in groups. This is done to avoid
39 -  * a malloc/free overhead each time the LXT has to grow or shrink.
40 -  *
41 -  * Based on the current lxt_cnt (used), it is always possible to know
42 -  * how many are allocated (used+free). The number of allocated entries is
43 -  * not stored anywhere.
44 -  *
45 -  * The LXT table is re-allocated whenever it needs to cross into another group.
46 -  */
47 - #define LXT_GROUP_SIZE    8
48 - #define LXT_NUM_GROUPS(lxt_cnt) (((lxt_cnt) + 7)/8)  /* alloc'ed groups */
49 - #define LXT_LUNIDX_SHIFT  8     /* LXT entry, shift for LUN index */
50 - #define LXT_PERM_SHIFT    4     /* LXT entry, shift for permission bits */
51 -
52 - struct ba_lun_info {
53 -     u64 *lun_alloc_map;
54 -     u32 lun_bmap_size;
55 -     u32 total_aus;
56 -     u64 free_aun_cnt;
57 -
58 -     /* indices to be used for elevator lookup of free map */
59 -     u32 free_low_idx;
60 -     u32 free_curr_idx;
61 -     u32 free_high_idx;
62 -
63 -     u8 *aun_clone_map;
64 - };
65 -
66 - struct ba_lun {
67 -     u64 lun_id;
68 -     u64 wwpn;
69 -     size_t lsize;       /* LUN size in number of LBAs */
70 -     size_t lba_size;    /* LBA size in number of bytes */
71 -     size_t au_size;     /* Allocation Unit size in number of LBAs */
72 -     struct ba_lun_info *ba_lun_handle;
73 - };
74 -
75 - /* Block Allocator */
76 - struct blka {
77 -     struct ba_lun ba_lun;
78 -     u64 nchunk;         /* number of chunks */
79 -     struct mutex mutex;
80 - };
81 -
82 - #endif /* ifndef _CXLFLASH_SUPERPIPE_H */
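The LXT entry layout described in the removed vlun.h header can be sketched as a bit-packing exercise, using the shifts the header defines (the permission value here is a hypothetical encoding, not taken from the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Shifts from the removed vlun.h; PERM_RW is assumed for illustration. */
#define MC_CHUNK_SHIFT   16  /* LBA -> chunk# */
#define LXT_LUNIDX_SHIFT  8  /* shift for LUN-table index */
#define LXT_PERM_SHIFT    4  /* shift for permission bits */
#define PERM_RW           3  /* hypothetical read|write encoding */

/* Pack an LXT entry: chunk base ORed with the LUN index, permission
 * bits, and the FC port-select mask in the low nibble, mirroring the
 * RLBA_BASE | LUN_IDX | P | SEL layout in the header comment. */
static uint64_t lxt_pack(uint64_t aun, uint32_t lun_idx, uint32_t perm,
			 uint32_t port_sel)
{
	return (aun << MC_CHUNK_SHIFT) |
	       ((uint64_t)lun_idx << LXT_LUNIDX_SHIFT) |
	       ((uint64_t)perm << LXT_PERM_SHIFT) | port_sel;
}

/* Recover the chunk number from rlba_base, as shrink_lxt and clone_lxt
 * do when handing chunks back to the block allocator. */
static uint64_t lxt_chunk(uint64_t rlba_base)
{
	return rlba_base >> MC_CHUNK_SHIFT;
}
```

The low-order LBA bits ORed in by the AFU at runtime are the offset within the chunk, which is why the chunk base must be aligned to the chunk shift.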
+1 -1
drivers/scsi/elx/efct/efct_driver.c
···
735 735
736 736  MODULE_DEVICE_TABLE(pci, efct_pci_table);
737 737
738     - static struct pci_error_handlers efct_pci_err_handler = {
    738 + static const struct pci_error_handlers efct_pci_err_handler = {
739 739      .error_detected = efct_pci_io_error_detected,
740 740      .slot_reset = efct_pci_io_slot_reset,
741 741      .resume = efct_pci_io_resume,
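The efct change constifies a function-pointer table. The pattern in miniature, with a hypothetical ops struct (not the kernel's `pci_error_handlers`): declaring the table `const` lets the compiler place it in read-only data and rejects accidental writes at compile time, while calls through the pointers work unchanged.

```c
#include <assert.h>

/* Hypothetical error-handler ops table, for illustration only. */
struct err_handlers {
	int (*error_detected)(int err);
	int (*slot_reset)(void);
};

static int detected(int err)
{
	return err ? -1 : 0;   /* report failure for nonzero error codes */
}

static int reset(void)
{
	return 1;              /* pretend the slot reset succeeded */
}

/* const: the table itself is immutable after initialization. */
static const struct err_handlers handlers = {
	.error_detected = detected,
	.slot_reset = reset,
};
```

Attempting `handlers.slot_reset = NULL;` after this would be a compile error, which is the point of the one-word diff above.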
+20 -37
drivers/scsi/fnic/fdls_disc.c
···
308 308  struct fnic *fnic = iport->fnic;
309 309  struct reclaim_entry_s *reclaim_entry;
310 310  unsigned long delay_j = msecs_to_jiffies(OXID_RECLAIM_TOV(iport));
    311 + unsigned long flags;
311 312  int idx;
312     -
313     - spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags);
314 313
315 314  for_each_set_bit(idx, oxid_pool->pending_schedule_free, FNIC_OXID_POOL_SZ) {
316 315
317 316      FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
318 317          "Schedule oxid free. oxid idx: %d\n", idx);
319 318
320     -     spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags);
321     -     reclaim_entry = (struct reclaim_entry_s *)
322     -         kzalloc(sizeof(struct reclaim_entry_s), GFP_KERNEL);
323     -     spin_lock_irqsave(&fnic->fnic_lock, fnic->lock_flags);
324     -
    319 +     reclaim_entry = kzalloc(sizeof(*reclaim_entry), GFP_KERNEL);
325 320      if (!reclaim_entry) {
326     -         FNIC_FCS_DBG(KERN_WARNING, fnic->host, fnic->fnic_num,
327     -             "Failed to allocate memory for reclaim struct for oxid idx: 0x%x\n",
328     -             idx);
329     -
330 321          schedule_delayed_work(&oxid_pool->schedule_oxid_free_retry,
331 322              msecs_to_jiffies(SCHEDULE_OXID_FREE_RETRY_TIME));
332     -         spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags);
333 323          return;
334 324      }
335 325
336     -     if (test_and_clear_bit(idx, oxid_pool->pending_schedule_free)) {
337     -         reclaim_entry->oxid_idx = idx;
338     -         reclaim_entry->expires = round_jiffies(jiffies + delay_j);
339     -         list_add_tail(&reclaim_entry->links, &oxid_pool->oxid_reclaim_list);
340     -         schedule_delayed_work(&oxid_pool->oxid_reclaim_work, delay_j);
341     -     } else {
342     -         /* unlikely scenario, free the allocated memory and continue */
343     -         kfree(reclaim_entry);
344     -     }
345     - }
346     -
347     - spin_unlock_irqrestore(&fnic->fnic_lock, fnic->lock_flags);
    326 +     clear_bit(idx, oxid_pool->pending_schedule_free);
    327 +     reclaim_entry->oxid_idx = idx;
    328 +     reclaim_entry->expires = round_jiffies(jiffies + delay_j);
    329 +     spin_lock_irqsave(&fnic->fnic_lock, flags);
    330 +     list_add_tail(&reclaim_entry->links, &oxid_pool->oxid_reclaim_list);
    331 +     spin_unlock_irqrestore(&fnic->fnic_lock, flags);
    332 +     schedule_delayed_work(&oxid_pool->oxid_reclaim_work, delay_j);
    333 + }
348 334  }
349 335
350 336  static bool fdls_is_oxid_fabric_req(uint16_t oxid)
···
1553 1567
1554 1568  iport->fabric.flags &= ~FNIC_FDLS_FABRIC_ABORT_ISSUED;
1555 1569
1556      - FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
1557      -     "0x%x: FDLS send fabric LOGO with oxid: 0x%x",
1558      -     iport->fcid, oxid);
     1570 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
     1571 +     "0x%x: FDLS send fabric LOGO with oxid: 0x%x",
     1572 +     iport->fcid, oxid);
1559 1573
1560 1574  fnic_send_fcoe_frame(iport, frame, frame_size);
1561 1575
···
1884 1898  if (fnic->subsys_desc_len >= FNIC_FDMI_MODEL_LEN)
1885 1899      fnic->subsys_desc_len = FNIC_FDMI_MODEL_LEN - 1;
1886 1900  strscpy_pad(data, fnic->subsys_desc, FNIC_FDMI_MODEL_LEN);
1887      - data[FNIC_FDMI_MODEL_LEN - 1] = 0;
1888 1901  fnic_fdmi_attr_set(fdmi_attr, FNIC_FDMI_TYPE_MODEL, FNIC_FDMI_MODEL_LEN,
1889 1902      data, &attr_off_bytes);
1890 1903
···
2046 2061  snprintf(tmp_data, FNIC_FDMI_OS_NAME_LEN - 1, "host%d",
2047 2062      fnic->host->host_no);
2048 2063  strscpy_pad(data, tmp_data, FNIC_FDMI_OS_NAME_LEN);
2049      - data[FNIC_FDMI_OS_NAME_LEN - 1] = 0;
2050 2064  fnic_fdmi_attr_set(fdmi_attr, FNIC_FDMI_TYPE_OS_NAME,
2051 2065      FNIC_FDMI_OS_NAME_LEN, data, &attr_off_bytes);
2052 2066
···
2055 2071  sprintf(fc_host_system_hostname(fnic->host), "%s", utsname()->nodename);
2056 2072  strscpy_pad(data, fc_host_system_hostname(fnic->host),
2057 2073      FNIC_FDMI_HN_LEN);
2058      - data[FNIC_FDMI_HN_LEN - 1] = 0;
2059 2074  fnic_fdmi_attr_set(fdmi_attr, FNIC_FDMI_TYPE_HOST_NAME,
2060 2075      FNIC_FDMI_HN_LEN, data, &attr_off_bytes);
2061 2076
···
4642 4659  d_id = ntoh24(fchdr->fh_d_id);
4643 4660
4644 4661  /* some common validation */
4645      - if (fdls_get_state(fabric) > FDLS_STATE_FABRIC_FLOGI) {
4646      -     if ((iport->fcid != d_id) || (!FNIC_FC_FRAME_CS_CTL(fchdr))) {
4647      -         FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
4648      -             "invalid frame received. Dropping frame");
4649      -         return -1;
4650      -     }
     4662 + if (fdls_get_state(fabric) > FDLS_STATE_FABRIC_FLOGI) {
     4663 +     if (iport->fcid != d_id || (!FNIC_FC_FRAME_CS_CTL(fchdr))) {
     4664 +         FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
     4665 +             "invalid frame received. Dropping frame");
     4666 +         return -1;
4651 4667      }
     4668 + }
4652 4669
4653 4670  /* BLS ABTS response */
4654 4671  if ((fchdr->fh_r_ctl == FC_RCTL_BA_ACC)
···
4665 4682      "Received unexpected ABTS RSP(oxid:0x%x) from 0x%x. Dropping frame",
4666 4683      oxid, s_id);
4667 4684  return -1;
4668      - }
     4685 + }
4669 4686  return FNIC_FABRIC_BLS_ABTS_RSP;
4670 4687  } else if (fdls_is_oxid_fdmi_req(oxid)) {
4671 4688  return FNIC_FDMI_BLS_ABTS_RSP;
+2 -3
drivers/scsi/fnic/fnic_main.c
··· 1365 1365 if (pc_rscn_handling_feature_flag == PC_RSCN_HANDLING_FEATURE_ON) 1366 1366 destroy_workqueue(reset_fnic_work_queue); 1367 1367 1368 - if (fnic_fip_queue) { 1369 - flush_workqueue(fnic_fip_queue); 1368 + if (fnic_fip_queue) 1370 1369 destroy_workqueue(fnic_fip_queue); 1371 - } 1370 + 1372 1371 kmem_cache_destroy(fnic_sgl_cache[FNIC_SGL_CACHE_MAX]); 1373 1372 kmem_cache_destroy(fnic_sgl_cache[FNIC_SGL_CACHE_DFLT]); 1374 1373 kmem_cache_destroy(fnic_io_req_cache);
+1 -2
drivers/scsi/hisi_sas/hisi_sas.h
··· 633 633 extern void hisi_sas_stop_phys(struct hisi_hba *hisi_hba); 634 634 extern int hisi_sas_alloc(struct hisi_hba *hisi_hba); 635 635 extern void hisi_sas_free(struct hisi_hba *hisi_hba); 636 - extern u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis, 637 - int direction); 636 + extern u8 hisi_sas_get_ata_protocol(struct sas_task *task); 638 637 extern struct hisi_sas_port *to_hisi_sas_port(struct asd_sas_port *sas_port); 639 638 extern void hisi_sas_sata_done(struct sas_task *task, 640 639 struct hisi_sas_slot *slot);
+26 -2
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 21 21 bool rst_ha_timeout; /* reset the HA for timeout */ 22 22 }; 23 23 24 - u8 hisi_sas_get_ata_protocol(struct host_to_dev_fis *fis, int direction) 24 + static u8 hisi_sas_get_ata_protocol_from_tf(struct ata_queued_cmd *qc) 25 25 { 26 + if (!qc) 27 + return HISI_SAS_SATA_PROTOCOL_PIO; 28 + 29 + switch (qc->tf.protocol) { 30 + case ATA_PROT_NODATA: 31 + return HISI_SAS_SATA_PROTOCOL_NONDATA; 32 + case ATA_PROT_PIO: 33 + return HISI_SAS_SATA_PROTOCOL_PIO; 34 + case ATA_PROT_DMA: 35 + return HISI_SAS_SATA_PROTOCOL_DMA; 36 + case ATA_PROT_NCQ_NODATA: 37 + case ATA_PROT_NCQ: 38 + return HISI_SAS_SATA_PROTOCOL_FPDMA; 39 + default: 40 + return HISI_SAS_SATA_PROTOCOL_PIO; 41 + } 42 + } 43 + 44 + u8 hisi_sas_get_ata_protocol(struct sas_task *task) 45 + { 46 + struct host_to_dev_fis *fis = &task->ata_task.fis; 47 + struct ata_queued_cmd *qc = task->uldd_task; 48 + int direction = task->data_dir; 49 + 26 50 switch (fis->command) { 27 51 case ATA_CMD_FPDMA_WRITE: 28 52 case ATA_CMD_FPDMA_READ: ··· 117 93 { 118 94 if (direction == DMA_NONE) 119 95 return HISI_SAS_SATA_PROTOCOL_NONDATA; 120 - return HISI_SAS_SATA_PROTOCOL_PIO; 96 + return hisi_sas_get_ata_protocol_from_tf(qc); 121 97 } 122 98 } 123 99 }
+1 -1
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
··· 1806 1806 .driver = { 1807 1807 .name = DRV_NAME, 1808 1808 .of_match_table = sas_v1_of_match, 1809 - .acpi_match_table = ACPI_PTR(sas_v1_acpi_match), 1809 + .acpi_match_table = sas_v1_acpi_match, 1810 1810 }, 1811 1811 }; 1812 1812
+2 -4
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
··· 2538 2538 (task->ata_task.fis.control & ATA_SRST)) 2539 2539 dw1 |= 1 << CMD_HDR_RESET_OFF; 2540 2540 2541 - dw1 |= (hisi_sas_get_ata_protocol( 2542 - &task->ata_task.fis, task->data_dir)) 2543 - << CMD_HDR_FRAME_TYPE_OFF; 2541 + dw1 |= (hisi_sas_get_ata_protocol(task)) << CMD_HDR_FRAME_TYPE_OFF; 2544 2542 dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF; 2545 2543 hdr->dw1 = cpu_to_le32(dw1); 2546 2544 ··· 3651 3653 .driver = { 3652 3654 .name = DRV_NAME, 3653 3655 .of_match_table = sas_v2_of_match, 3654 - .acpi_match_table = ACPI_PTR(sas_v2_acpi_match), 3656 + .acpi_match_table = sas_v2_acpi_match, 3655 3657 }, 3656 3658 }; 3657 3659
+1 -3
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 1456 1456 (task->ata_task.fis.control & ATA_SRST)) 1457 1457 dw1 |= 1 << CMD_HDR_RESET_OFF; 1458 1458 1459 - dw1 |= (hisi_sas_get_ata_protocol( 1460 - &task->ata_task.fis, task->data_dir)) 1461 - << CMD_HDR_FRAME_TYPE_OFF; 1459 + dw1 |= (hisi_sas_get_ata_protocol(task)) << CMD_HDR_FRAME_TYPE_OFF; 1462 1460 dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF; 1463 1461 1464 1462 if (FIS_CMD_IS_UNCONSTRAINED(task->ata_task.fis))
+5 -14
drivers/scsi/hpsa.c
··· 453 453 struct device_attribute *attr, 454 454 const char *buf, size_t count) 455 455 { 456 - int status, len; 456 + int status; 457 457 struct ctlr_info *h; 458 458 struct Scsi_Host *shost = class_to_shost(dev); 459 - char tmpbuf[10]; 460 459 461 460 if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO)) 462 461 return -EACCES; 463 - len = count > sizeof(tmpbuf) - 1 ? sizeof(tmpbuf) - 1 : count; 464 - strncpy(tmpbuf, buf, len); 465 - tmpbuf[len] = '\0'; 466 - if (sscanf(tmpbuf, "%d", &status) != 1) 462 + if (kstrtoint(buf, 10, &status)) 467 463 return -EINVAL; 468 464 h = shost_to_hba(shost); 469 465 h->acciopath_status = !!status; ··· 473 477 struct device_attribute *attr, 474 478 const char *buf, size_t count) 475 479 { 476 - int debug_level, len; 480 + int debug_level; 477 481 struct ctlr_info *h; 478 482 struct Scsi_Host *shost = class_to_shost(dev); 479 - char tmpbuf[10]; 480 483 481 484 if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO)) 482 485 return -EACCES; 483 - len = count > sizeof(tmpbuf) - 1 ? sizeof(tmpbuf) - 1 : count; 484 - strncpy(tmpbuf, buf, len); 485 - tmpbuf[len] = '\0'; 486 - if (sscanf(tmpbuf, "%d", &debug_level) != 1) 486 + if (kstrtoint(buf, 10, &debug_level)) 487 487 return -EINVAL; 488 488 if (debug_level < 0) 489 489 debug_level = 0; ··· 7230 7238 7231 7239 static void init_driver_version(char *driver_version, int len) 7232 7240 { 7233 - memset(driver_version, 0, len); 7234 - strncpy(driver_version, HPSA " " HPSA_DRIVER_VERSION, len - 1); 7241 + strscpy_pad(driver_version, HPSA " " HPSA_DRIVER_VERSION, len); 7235 7242 } 7236 7243 7237 7244 static int write_driver_ver_to_cfgtable(struct CfgTable __iomem *cfgtable)
+4 -4
drivers/scsi/ips.c
··· 3631 3631 3632 3632 break; 3633 3633 3634 - case RESERVE: 3635 - case RELEASE: 3634 + case RESERVE_6: 3635 + case RELEASE_6: 3636 3636 scb->scsi_cmd->result = DID_OK << 16; 3637 3637 break; 3638 3638 ··· 3899 3899 case WRITE_6: 3900 3900 case READ_10: 3901 3901 case WRITE_10: 3902 - case RESERVE: 3903 - case RELEASE: 3902 + case RESERVE_6: 3903 + case RELEASE_6: 3904 3904 break; 3905 3905 3906 3906 case MODE_SENSE:
+7 -7
drivers/scsi/isci/init.c
··· 91 91 92 92 /* linux isci specific settings */ 93 93 94 - unsigned char no_outbound_task_to = 2; 94 + static unsigned char no_outbound_task_to = 2; 95 95 module_param(no_outbound_task_to, byte, 0); 96 96 MODULE_PARM_DESC(no_outbound_task_to, "No Outbound Task Timeout (1us incr)"); 97 97 98 - u16 ssp_max_occ_to = 20; 98 + static u16 ssp_max_occ_to = 20; 99 99 module_param(ssp_max_occ_to, ushort, 0); 100 100 MODULE_PARM_DESC(ssp_max_occ_to, "SSP Max occupancy timeout (100us incr)"); 101 101 102 - u16 stp_max_occ_to = 5; 102 + static u16 stp_max_occ_to = 5; 103 103 module_param(stp_max_occ_to, ushort, 0); 104 104 MODULE_PARM_DESC(stp_max_occ_to, "STP Max occupancy timeout (100us incr)"); 105 105 106 - u16 ssp_inactive_to = 5; 106 + static u16 ssp_inactive_to = 5; 107 107 module_param(ssp_inactive_to, ushort, 0); 108 108 MODULE_PARM_DESC(ssp_inactive_to, "SSP inactivity timeout (100us incr)"); 109 109 110 - u16 stp_inactive_to = 5; 110 + static u16 stp_inactive_to = 5; 111 111 module_param(stp_inactive_to, ushort, 0); 112 112 MODULE_PARM_DESC(stp_inactive_to, "STP inactivity timeout (100us incr)"); 113 113 114 - unsigned char phy_gen = SCIC_SDS_PARM_GEN2_SPEED; 114 + static unsigned char phy_gen = SCIC_SDS_PARM_GEN2_SPEED; 115 115 module_param(phy_gen, byte, 0); 116 116 MODULE_PARM_DESC(phy_gen, "PHY generation (1: 1.5Gbps 2: 3.0Gbps 3: 6.0Gbps)"); 117 117 118 - unsigned char max_concurr_spinup; 118 + static unsigned char max_concurr_spinup; 119 119 module_param(max_concurr_spinup, byte, 0); 120 120 MODULE_PARM_DESC(max_concurr_spinup, "Max concurrent device spinup"); 121 121
-7
drivers/scsi/isci/isci.h
··· 473 473 dest[word_cnt] = swab32(src[word_cnt]); 474 474 } 475 475 476 - extern unsigned char no_outbound_task_to; 477 - extern u16 ssp_max_occ_to; 478 - extern u16 stp_max_occ_to; 479 - extern u16 ssp_inactive_to; 480 - extern u16 stp_inactive_to; 481 - extern unsigned char phy_gen; 482 - extern unsigned char max_concurr_spinup; 483 476 extern uint cable_selection_override; 484 477 485 478 irqreturn_t isci_msix_isr(int vec, void *data);
+1 -1
drivers/scsi/isci/remote_device.h
··· 198 198 * device. When there are no active IO for the device it is is in this 199 199 * state. 200 200 * 201 - * @SCI_STP_DEV_CMD: This is the command state for for the STP remote 201 + * @SCI_STP_DEV_CMD: This is the command state for the STP remote 202 202 * device. This state is entered when the device is processing a 203 203 * non-NCQ command. The device object will fail any new start IO 204 204 * requests until this command is complete.
+8 -52
drivers/scsi/iscsi_tcp.c
··· 17 17 * Zhenyu Wang 18 18 */ 19 19 20 - #include <crypto/hash.h> 21 20 #include <linux/types.h> 22 21 #include <linux/inet.h> 23 22 #include <linux/slab.h> ··· 467 468 * sufficient room. 468 469 */ 469 470 if (conn->hdrdgst_en) { 470 - iscsi_tcp_dgst_header(tcp_sw_conn->tx_hash, hdr, hdrlen, 471 - hdr + hdrlen); 471 + iscsi_tcp_dgst_header(hdr, hdrlen, hdr + hdrlen); 472 472 hdrlen += ISCSI_DIGEST_SIZE; 473 473 } 474 474 ··· 492 494 { 493 495 struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 494 496 struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data; 495 - struct ahash_request *tx_hash = NULL; 497 + u32 *tx_crcp = NULL; 496 498 unsigned int hdr_spec_len; 497 499 498 500 ISCSI_SW_TCP_DBG(conn, "offset=%d, datalen=%d %s\n", offset, len, ··· 505 507 WARN_ON(iscsi_padded(len) != iscsi_padded(hdr_spec_len)); 506 508 507 509 if (conn->datadgst_en) 508 - tx_hash = tcp_sw_conn->tx_hash; 510 + tx_crcp = &tcp_sw_conn->tx_crc; 509 511 510 512 return iscsi_segment_seek_sg(&tcp_sw_conn->out.data_segment, 511 - sg, count, offset, len, 512 - NULL, tx_hash); 513 + sg, count, offset, len, NULL, tx_crcp); 513 514 } 514 515 515 516 static void ··· 517 520 { 518 521 struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 519 522 struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data; 520 - struct ahash_request *tx_hash = NULL; 523 + u32 *tx_crcp = NULL; 521 524 unsigned int hdr_spec_len; 522 525 523 526 ISCSI_SW_TCP_DBG(conn, "datalen=%zd %s\n", len, conn->datadgst_en ? 
··· 529 532 WARN_ON(iscsi_padded(len) != iscsi_padded(hdr_spec_len)); 530 533 531 534 if (conn->datadgst_en) 532 - tx_hash = tcp_sw_conn->tx_hash; 535 + tx_crcp = &tcp_sw_conn->tx_crc; 533 536 534 537 iscsi_segment_init_linear(&tcp_sw_conn->out.data_segment, 535 - data, len, NULL, tx_hash); 538 + data, len, NULL, tx_crcp); 536 539 } 537 540 538 541 static int iscsi_sw_tcp_pdu_init(struct iscsi_task *task, ··· 580 583 struct iscsi_cls_conn *cls_conn; 581 584 struct iscsi_tcp_conn *tcp_conn; 582 585 struct iscsi_sw_tcp_conn *tcp_sw_conn; 583 - struct crypto_ahash *tfm; 584 586 585 587 cls_conn = iscsi_tcp_conn_setup(cls_session, sizeof(*tcp_sw_conn), 586 588 conn_idx); ··· 592 596 tcp_sw_conn->queue_recv = iscsi_recv_from_iscsi_q; 593 597 594 598 mutex_init(&tcp_sw_conn->sock_lock); 595 - 596 - tfm = crypto_alloc_ahash("crc32c", 0, CRYPTO_ALG_ASYNC); 597 - if (IS_ERR(tfm)) 598 - goto free_conn; 599 - 600 - tcp_sw_conn->tx_hash = ahash_request_alloc(tfm, GFP_KERNEL); 601 - if (!tcp_sw_conn->tx_hash) 602 - goto free_tfm; 603 - ahash_request_set_callback(tcp_sw_conn->tx_hash, 0, NULL, NULL); 604 - 605 - tcp_sw_conn->rx_hash = ahash_request_alloc(tfm, GFP_KERNEL); 606 - if (!tcp_sw_conn->rx_hash) 607 - goto free_tx_hash; 608 - ahash_request_set_callback(tcp_sw_conn->rx_hash, 0, NULL, NULL); 609 - 610 - tcp_conn->rx_hash = tcp_sw_conn->rx_hash; 599 + tcp_conn->rx_crcp = &tcp_sw_conn->rx_crc; 611 600 612 601 return cls_conn; 613 - 614 - free_tx_hash: 615 - ahash_request_free(tcp_sw_conn->tx_hash); 616 - free_tfm: 617 - crypto_free_ahash(tfm); 618 - free_conn: 619 - iscsi_conn_printk(KERN_ERR, conn, 620 - "Could not create connection due to crc32c " 621 - "loading error. 
Make sure the crc32c " 622 - "module is built as a module or into the " 623 - "kernel\n"); 624 - iscsi_tcp_conn_teardown(cls_conn); 625 - return NULL; 626 602 } 627 603 628 604 static void iscsi_sw_tcp_release_conn(struct iscsi_conn *conn) ··· 632 664 static void iscsi_sw_tcp_conn_destroy(struct iscsi_cls_conn *cls_conn) 633 665 { 634 666 struct iscsi_conn *conn = cls_conn->dd_data; 635 - struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 636 - struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data; 637 667 638 668 iscsi_sw_tcp_release_conn(conn); 639 - 640 - ahash_request_free(tcp_sw_conn->rx_hash); 641 - if (tcp_sw_conn->tx_hash) { 642 - struct crypto_ahash *tfm; 643 - 644 - tfm = crypto_ahash_reqtfm(tcp_sw_conn->tx_hash); 645 - ahash_request_free(tcp_sw_conn->tx_hash); 646 - crypto_free_ahash(tfm); 647 - } 648 - 649 669 iscsi_tcp_conn_teardown(cls_conn); 650 670 } 651 671
+2 -2
drivers/scsi/iscsi_tcp.h
··· 41 41 void (*old_write_space)(struct sock *); 42 42 43 43 /* data and header digests */ 44 - struct ahash_request *tx_hash; /* CRC32C (Tx) */ 45 - struct ahash_request *rx_hash; /* CRC32C (Rx) */ 44 + u32 tx_crc; /* CRC32C (Tx) */ 45 + u32 rx_crc; /* CRC32C (Rx) */ 46 46 47 47 /* MIB custom statistics */ 48 48 uint32_t sendpage_failures_cnt;
+39 -50
drivers/scsi/libiscsi_tcp.c
··· 15 15 * Zhenyu Wang 16 16 */ 17 17 18 - #include <crypto/hash.h> 18 + #include <linux/crc32c.h> 19 19 #include <linux/types.h> 20 20 #include <linux/list.h> 21 21 #include <linux/inet.h> ··· 168 168 segment->size = ISCSI_DIGEST_SIZE; 169 169 segment->copied = 0; 170 170 segment->sg = NULL; 171 - segment->hash = NULL; 171 + segment->crcp = NULL; 172 172 } 173 173 174 174 /** ··· 191 191 struct iscsi_segment *segment, int recv, 192 192 unsigned copied) 193 193 { 194 - struct scatterlist sg; 195 194 unsigned int pad; 196 195 197 196 ISCSI_DBG_TCP(tcp_conn->iscsi_conn, "copied %u %u size %u %s\n", 198 197 segment->copied, copied, segment->size, 199 198 recv ? "recv" : "xmit"); 200 - if (segment->hash && copied) { 201 - /* 202 - * If a segment is kmapd we must unmap it before sending 203 - * to the crypto layer since that will try to kmap it again. 204 - */ 205 - iscsi_tcp_segment_unmap(segment); 199 + if (segment->crcp && copied) { 200 + if (segment->data) { 201 + *segment->crcp = crc32c(*segment->crcp, 202 + segment->data + segment->copied, 203 + copied); 204 + } else { 205 + const void *data; 206 206 207 - if (!segment->data) { 208 - sg_init_table(&sg, 1); 209 - sg_set_page(&sg, sg_page(segment->sg), copied, 210 - segment->copied + segment->sg_offset + 211 - segment->sg->offset); 212 - } else 213 - sg_init_one(&sg, segment->data + segment->copied, 214 - copied); 215 - ahash_request_set_crypt(segment->hash, &sg, NULL, copied); 216 - crypto_ahash_update(segment->hash); 207 + data = kmap_local_page(sg_page(segment->sg)); 208 + *segment->crcp = crc32c(*segment->crcp, 209 + data + segment->copied + 210 + segment->sg_offset + 211 + segment->sg->offset, 212 + copied); 213 + kunmap_local(data); 214 + } 217 215 } 218 216 219 217 segment->copied += copied; ··· 256 258 * Set us up for transferring the data digest. hdr digest 257 259 * is completely handled in hdr done function. 
258 260 */ 259 - if (segment->hash) { 260 - ahash_request_set_crypt(segment->hash, NULL, 261 - segment->digest, 0); 262 - crypto_ahash_final(segment->hash); 261 + if (segment->crcp) { 262 + put_unaligned_le32(~*segment->crcp, segment->digest); 263 263 iscsi_tcp_segment_splice_digest(segment, 264 264 recv ? segment->recv_digest : segment->digest); 265 265 return 0; ··· 278 282 * given buffer, and returns the number of bytes 279 283 * consumed, which can actually be less than @len. 280 284 * 281 - * If hash digest is enabled, the function will update the 282 - * hash while copying. 285 + * If CRC is enabled, the function will update the CRC while copying. 283 286 * Combining these two operations doesn't buy us a lot (yet), 284 287 * but in the future we could implement combined copy+crc, 285 288 * just way we do for network layer checksums. ··· 306 311 } 307 312 308 313 inline void 309 - iscsi_tcp_dgst_header(struct ahash_request *hash, const void *hdr, 310 - size_t hdrlen, unsigned char digest[ISCSI_DIGEST_SIZE]) 314 + iscsi_tcp_dgst_header(const void *hdr, size_t hdrlen, 315 + unsigned char digest[ISCSI_DIGEST_SIZE]) 311 316 { 312 - struct scatterlist sg; 313 - 314 - sg_init_one(&sg, hdr, hdrlen); 315 - ahash_request_set_crypt(hash, &sg, digest, hdrlen); 316 - crypto_ahash_digest(hash); 317 + put_unaligned_le32(~crc32c(~0, hdr, hdrlen), digest); 317 318 } 318 319 EXPORT_SYMBOL_GPL(iscsi_tcp_dgst_header); 319 320 ··· 334 343 */ 335 344 static inline void 336 345 __iscsi_segment_init(struct iscsi_segment *segment, size_t size, 337 - iscsi_segment_done_fn_t *done, struct ahash_request *hash) 346 + iscsi_segment_done_fn_t *done, u32 *crcp) 338 347 { 339 348 memset(segment, 0, sizeof(*segment)); 340 349 segment->total_size = size; 341 350 segment->done = done; 342 351 343 - if (hash) { 344 - segment->hash = hash; 345 - crypto_ahash_init(hash); 352 + if (crcp) { 353 + segment->crcp = crcp; 354 + *crcp = ~0; 346 355 } 347 356 } 348 357 349 358 inline void 350 359 
iscsi_segment_init_linear(struct iscsi_segment *segment, void *data, 351 - size_t size, iscsi_segment_done_fn_t *done, 352 - struct ahash_request *hash) 360 + size_t size, iscsi_segment_done_fn_t *done, u32 *crcp) 353 361 { 354 - __iscsi_segment_init(segment, size, done, hash); 362 + __iscsi_segment_init(segment, size, done, crcp); 355 363 segment->data = data; 356 364 segment->size = size; 357 365 } ··· 360 370 iscsi_segment_seek_sg(struct iscsi_segment *segment, 361 371 struct scatterlist *sg_list, unsigned int sg_count, 362 372 unsigned int offset, size_t size, 363 - iscsi_segment_done_fn_t *done, 364 - struct ahash_request *hash) 373 + iscsi_segment_done_fn_t *done, u32 *crcp) 365 374 { 366 375 struct scatterlist *sg; 367 376 unsigned int i; 368 377 369 - __iscsi_segment_init(segment, size, done, hash); 378 + __iscsi_segment_init(segment, size, done, crcp); 370 379 for_each_sg(sg_list, sg, sg_count, i) { 371 380 if (offset < sg->length) { 372 381 iscsi_tcp_segment_init_sg(segment, sg, offset); ··· 382 393 * iscsi_tcp_hdr_recv_prep - prep segment for hdr reception 383 394 * @tcp_conn: iscsi connection to prep for 384 395 * 385 - * This function always passes NULL for the hash argument, because when this 396 + * This function always passes NULL for the crcp argument, because when this 386 397 * function is called we do not yet know the final size of the header and want 387 398 * to delay the digest processing until we know that. 
388 399 */ ··· 423 434 iscsi_tcp_data_recv_prep(struct iscsi_tcp_conn *tcp_conn) 424 435 { 425 436 struct iscsi_conn *conn = tcp_conn->iscsi_conn; 426 - struct ahash_request *rx_hash = NULL; 437 + u32 *rx_crcp = NULL; 427 438 428 439 if (conn->datadgst_en && 429 440 !(conn->session->tt->caps & CAP_DIGEST_OFFLOAD)) 430 - rx_hash = tcp_conn->rx_hash; 441 + rx_crcp = tcp_conn->rx_crcp; 431 442 432 443 iscsi_segment_init_linear(&tcp_conn->in.segment, 433 444 conn->data, tcp_conn->in.datalen, 434 - iscsi_tcp_data_recv_done, rx_hash); 445 + iscsi_tcp_data_recv_done, rx_crcp); 435 446 } 436 447 437 448 /** ··· 719 730 720 731 if (tcp_conn->in.datalen) { 721 732 struct iscsi_tcp_task *tcp_task = task->dd_data; 722 - struct ahash_request *rx_hash = NULL; 733 + u32 *rx_crcp = NULL; 723 734 struct scsi_data_buffer *sdb = &task->sc->sdb; 724 735 725 736 /* ··· 732 743 */ 733 744 if (conn->datadgst_en && 734 745 !(conn->session->tt->caps & CAP_DIGEST_OFFLOAD)) 735 - rx_hash = tcp_conn->rx_hash; 746 + rx_crcp = tcp_conn->rx_crcp; 736 747 737 748 ISCSI_DBG_TCP(conn, "iscsi_tcp_begin_data_in( " 738 749 "offset=%d, datalen=%d)\n", ··· 745 756 tcp_task->data_offset, 746 757 tcp_conn->in.datalen, 747 758 iscsi_tcp_process_data_in, 748 - rx_hash); 759 + rx_crcp); 749 760 spin_unlock(&conn->session->back_lock); 750 761 return rc; 751 762 } ··· 867 878 return 0; 868 879 } 869 880 870 - iscsi_tcp_dgst_header(tcp_conn->rx_hash, hdr, 881 + iscsi_tcp_dgst_header(hdr, 871 882 segment->total_copied - ISCSI_DIGEST_SIZE, 872 883 segment->digest); 873 884
+1 -2
drivers/scsi/lpfc/lpfc.h
··· 74 74 * queue depths when there are driver resource error or Firmware 75 75 * resource error. 76 76 */ 77 - /* 1 Second */ 78 - #define QUEUE_RAMP_DOWN_INTERVAL (msecs_to_jiffies(1000 * 1)) 77 + #define QUEUE_RAMP_DOWN_INTERVAL (secs_to_jiffies(1)) 79 78 80 79 /* Number of exchanges reserved for discovery to complete */ 81 80 #define LPFC_DISC_IOCB_BUFF_COUNT 20
+13 -16
drivers/scsi/lpfc/lpfc_els.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 8045 8045 if (test_bit(FC_DISC_TMO, &vport->fc_flag)) { 8046 8046 tmo = ((phba->fc_ratov * 3) + 3); 8047 8047 mod_timer(&vport->fc_disctmo, 8048 - jiffies + 8049 - msecs_to_jiffies(1000 * tmo)); 8048 + jiffies + secs_to_jiffies(tmo)); 8050 8049 } 8051 8050 return 0; 8052 8051 } ··· 8080 8081 if (test_bit(FC_DISC_TMO, &vport->fc_flag)) { 8081 8082 tmo = ((phba->fc_ratov * 3) + 3); 8082 8083 mod_timer(&vport->fc_disctmo, 8083 - jiffies + msecs_to_jiffies(1000 * tmo)); 8084 + jiffies + secs_to_jiffies(tmo)); 8084 8085 } 8085 8086 if ((rscn_cnt < FC_MAX_HOLD_RSCN) && 8086 8087 !test_bit(FC_RSCN_DISCOVERY, &vport->fc_flag)) { ··· 9510 9511 if (!list_empty(&pring->txcmplq)) 9511 9512 if (!test_bit(FC_UNLOADING, &phba->pport->load_flag)) 9512 9513 mod_timer(&vport->els_tmofunc, 9513 - jiffies + msecs_to_jiffies(1000 * timeout)); 9514 + jiffies + secs_to_jiffies(timeout)); 9514 9515 } 9515 9516 9516 9517 /** ··· 9568 9569 mbx_tmo_err = test_bit(MBX_TMO_ERR, &phba->bit_flags); 9569 9570 /* First we need to issue aborts to outstanding cmds on txcmpl */ 9570 9571 list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) { 9571 - lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 9572 - "2243 iotag = 0x%x cmd_flag = 0x%x " 9573 - "ulp_command = 0x%x this_vport %x " 9574 - "sli_flag = 0x%x\n", 9575 - piocb->iotag, piocb->cmd_flag, 9576 - get_job_cmnd(phba, piocb), 9577 - (piocb->vport == vport), 9578 - phba->sli.sli_flag); 9579 - 9580 9572 if (piocb->vport != vport) 9581 9573 
continue; 9574 + 9575 + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 9576 + "2243 iotag = 0x%x cmd_flag = 0x%x " 9577 + "ulp_command = 0x%x sli_flag = 0x%x\n", 9578 + piocb->iotag, piocb->cmd_flag, 9579 + get_job_cmnd(phba, piocb), 9580 + phba->sli.sli_flag); 9582 9581 9583 9582 if ((phba->sli.sli_flag & LPFC_SLI_ACTIVE) && !mbx_tmo_err) { 9584 9583 if (piocb->cmd_flag & LPFC_IO_LIBDFC) ··· 10896 10899 "3334 Delay fc port discovery for %d secs\n", 10897 10900 phba->fc_ratov); 10898 10901 mod_timer(&vport->delayed_disc_tmo, 10899 - jiffies + msecs_to_jiffies(1000 * phba->fc_ratov)); 10902 + jiffies + secs_to_jiffies(phba->fc_ratov)); 10900 10903 return; 10901 10904 } 10902 10905 ··· 11153 11156 if (!ndlp) 11154 11157 return; 11155 11158 11156 - mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000)); 11159 + mod_timer(&ndlp->nlp_delayfunc, jiffies + secs_to_jiffies(1)); 11157 11160 set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); 11158 11161 ndlp->nlp_last_elscmd = ELS_CMD_FLOGI; 11159 11162 phba->pport->port_state = LPFC_FLOGI;
+24 -11
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 228 228 if (ndlp->nlp_state == NLP_STE_MAPPED_NODE) 229 229 return; 230 230 231 - /* check for recovered fabric node */ 232 - if (ndlp->nlp_state == NLP_STE_UNMAPPED_NODE && 233 - ndlp->nlp_DID == Fabric_DID) 231 + /* Ignore callback for a mismatched (stale) rport */ 232 + if (ndlp->rport != rport) { 233 + lpfc_vlog_msg(vport, KERN_WARNING, LOG_NODE, 234 + "6788 fc rport mismatch: d_id x%06x ndlp x%px " 235 + "fc rport x%px node rport x%px state x%x " 236 + "refcnt %u\n", 237 + ndlp->nlp_DID, ndlp, rport, ndlp->rport, 238 + ndlp->nlp_state, kref_read(&ndlp->kref)); 234 239 return; 240 + } 235 241 236 242 if (rport->port_name != wwn_to_u64(ndlp->nlp_portname.u.wwn)) 237 243 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, ··· 3524 3518 if (phba->fc_topology && 3525 3519 phba->fc_topology != bf_get(lpfc_mbx_read_top_topology, la)) { 3526 3520 lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 3527 - "3314 Toplogy changed was 0x%x is 0x%x\n", 3521 + "3314 Topology changed was 0x%x is 0x%x\n", 3528 3522 phba->fc_topology, 3529 3523 bf_get(lpfc_mbx_read_top_topology, la)); 3530 3524 phba->fc_topology_changed = 1; ··· 4979 4973 tmo, vport->port_state, vport->fc_flag); 4980 4974 } 4981 4975 4982 - mod_timer(&vport->fc_disctmo, jiffies + msecs_to_jiffies(1000 * tmo)); 4976 + mod_timer(&vport->fc_disctmo, jiffies + secs_to_jiffies(tmo)); 4983 4977 set_bit(FC_DISC_TMO, &vport->fc_flag); 4984 4978 4985 4979 /* Start Discovery Timer state <hba_state> */ ··· 5570 5564 
__lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did) 5571 5565 { 5572 5566 struct lpfc_nodelist *ndlp; 5567 + struct lpfc_nodelist *np = NULL; 5573 5568 uint32_t data1; 5574 5569 5575 5570 list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { ··· 5585 5578 ndlp, ndlp->nlp_DID, 5586 5579 ndlp->nlp_flag, data1, ndlp->nlp_rpi, 5587 5580 ndlp->active_rrqs_xri_bitmap); 5588 - return ndlp; 5581 + 5582 + /* Check for new or potentially stale node */ 5583 + if (ndlp->nlp_state != NLP_STE_UNUSED_NODE) 5584 + return ndlp; 5585 + np = ndlp; 5589 5586 } 5590 5587 } 5591 5588 5592 - /* FIND node did <did> NOT FOUND */ 5593 - lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, 5594 - "0932 FIND node did x%x NOT FOUND.\n", did); 5595 - return NULL; 5589 + if (!np) 5590 + /* FIND node did <did> NOT FOUND */ 5591 + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, 5592 + "0932 FIND node did x%x NOT FOUND.\n", did); 5593 + 5594 + return np; 5596 5595 } 5597 5596 5598 5597 struct lpfc_nodelist *
+8 -6
drivers/scsi/lpfc/lpfc_init.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 595 595 /* Set up ring-0 (ELS) timer */ 596 596 timeout = phba->fc_ratov * 2; 597 597 mod_timer(&vport->els_tmofunc, 598 - jiffies + msecs_to_jiffies(1000 * timeout)); 598 + jiffies + secs_to_jiffies(timeout)); 599 599 /* Set up heart beat (HB) timer */ 600 600 mod_timer(&phba->hb_tmofunc, 601 601 jiffies + secs_to_jiffies(LPFC_HB_MBOX_INTERVAL)); ··· 604 604 phba->last_completion_time = jiffies; 605 605 /* Set up error attention (ERATT) polling timer */ 606 606 mod_timer(&phba->eratt_poll, 607 - jiffies + msecs_to_jiffies(1000 * phba->eratt_poll_interval)); 607 + jiffies + secs_to_jiffies(phba->eratt_poll_interval)); 608 608 609 609 if (test_bit(LINK_DISABLED, &phba->hba_flag)) { 610 610 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, ··· 3361 3361 /* Determine how long we might wait for the active mailbox 3362 3362 * command to be gracefully completed by firmware. 3363 3363 */ 3364 - timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, 3365 - phba->sli.mbox_active) * 1000) + jiffies; 3364 + timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, 3365 + phba->sli.mbox_active)) + jiffies; 3366 3366 } 3367 3367 spin_unlock_irqrestore(&phba->hbalock, iflag); 3368 3368 ··· 6909 6909 * re-instantiate the Vlink using FDISC. 
6910 6910 */ 6911 6911 mod_timer(&ndlp->nlp_delayfunc, 6912 - jiffies + msecs_to_jiffies(1000)); 6912 + jiffies + secs_to_jiffies(1)); 6913 6913 set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); 6914 6914 ndlp->nlp_last_elscmd = ELS_CMD_FDISC; 6915 6915 vport->port_state = LPFC_FDISC; ··· 13169 13169 eqhdl = lpfc_get_eq_hdl(0); 13170 13170 rc = pci_irq_vector(phba->pcidev, 0); 13171 13171 if (rc < 0) { 13172 + free_irq(phba->pcidev->irq, phba); 13172 13173 pci_free_irq_vectors(phba->pcidev); 13173 13174 lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, 13174 13175 "0496 MSI pci_irq_vec failed (%d)\n", rc); ··· 13250 13249 eqhdl = lpfc_get_eq_hdl(0); 13251 13250 retval = pci_irq_vector(phba->pcidev, 0); 13252 13251 if (retval < 0) { 13252 + free_irq(phba->pcidev->irq, phba); 13253 13253 lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, 13254 13254 "0502 INTR pci_irq_vec failed (%d)\n", 13255 13255 retval);
+5 -7
drivers/scsi/lpfc/lpfc_scsi.c
··· 5645 5645 * cmd_flag is set to LPFC_DRIVER_ABORTED before we wait 5646 5646 * for abort to complete. 5647 5647 */ 5648 - wait_event_timeout(waitq, 5649 - (lpfc_cmd->pCmd != cmnd), 5650 - msecs_to_jiffies(2*vport->cfg_devloss_tmo*1000)); 5648 + wait_event_timeout(waitq, (lpfc_cmd->pCmd != cmnd), 5649 + secs_to_jiffies(2*vport->cfg_devloss_tmo)); 5651 5650 5652 5651 spin_lock(&lpfc_cmd->buf_lock); 5653 5652 ··· 5910 5911 * If target is not in a MAPPED state, delay until 5911 5912 * target is rediscovered or devloss timeout expires. 5912 5913 */ 5913 - later = msecs_to_jiffies(2 * vport->cfg_devloss_tmo * 1000) + jiffies; 5914 + later = secs_to_jiffies(2 * vport->cfg_devloss_tmo) + jiffies; 5914 5915 while (time_after(later, jiffies)) { 5915 5916 if (!pnode) 5916 5917 return FAILED; ··· 5956 5957 lpfc_sli_abort_taskmgmt(vport, 5957 5958 &phba->sli.sli3_ring[LPFC_FCP_RING], 5958 5959 tgt_id, lun_id, context); 5959 - later = msecs_to_jiffies(2 * vport->cfg_devloss_tmo * 1000) + jiffies; 5960 + later = secs_to_jiffies(2 * vport->cfg_devloss_tmo) + jiffies; 5960 5961 while (time_after(later, jiffies) && cnt) { 5961 5962 schedule_timeout_uninterruptible(msecs_to_jiffies(20)); 5962 5963 cnt = lpfc_sli_sum_iocb(vport, tgt_id, lun_id, context); ··· 6136 6137 wait_event_timeout(waitq, 6137 6138 !test_bit(NLP_WAIT_FOR_LOGO, 6138 6139 &pnode->save_flags), 6139 - msecs_to_jiffies(dev_loss_tmo * 6140 - 1000)); 6140 + secs_to_jiffies(dev_loss_tmo)); 6141 6141 6142 6142 if (test_and_clear_bit(NLP_WAIT_FOR_LOGO, 6143 6143 &pnode->save_flags))
+18 -25
drivers/scsi/lpfc/lpfc_sli.c
··· 1025 1025 LIST_HEAD(send_rrq); 1026 1026 1027 1027 clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag); 1028 - next_time = jiffies + msecs_to_jiffies(1000 * (phba->fc_ratov + 1)); 1028 + next_time = jiffies + secs_to_jiffies(phba->fc_ratov + 1); 1029 1029 spin_lock_irqsave(&phba->rrq_list_lock, iflags); 1030 1030 list_for_each_entry_safe(rrq, nextrrq, 1031 1031 &phba->active_rrq_list, list) { ··· 1208 1208 else 1209 1209 rrq->send_rrq = 0; 1210 1210 rrq->xritag = xritag; 1211 - rrq->rrq_stop_time = jiffies + 1212 - msecs_to_jiffies(1000 * (phba->fc_ratov + 1)); 1211 + rrq->rrq_stop_time = jiffies + secs_to_jiffies(phba->fc_ratov + 1); 1213 1212 rrq->nlp_DID = ndlp->nlp_DID; 1214 1213 rrq->vport = ndlp->vport; 1215 1214 rrq->rxid = rxid; ··· 1735 1736 BUG_ON(!piocb->vport); 1736 1737 if (!test_bit(FC_UNLOADING, &piocb->vport->load_flag)) 1737 1738 mod_timer(&piocb->vport->els_tmofunc, 1738 - jiffies + 1739 - msecs_to_jiffies(1000 * (phba->fc_ratov << 1))); 1739 + jiffies + secs_to_jiffies(phba->fc_ratov << 1)); 1740 1740 } 1741 1741 1742 1742 return 0; ··· 2921 2923 clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); 2922 2924 ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; 2923 2925 lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); 2926 + } else { 2927 + clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); 2924 2928 } 2925 2929 2926 2930 /* The unreg_login mailbox is complete and had a ··· 3956 3956 else 3957 3957 /* Restart the timer for next eratt poll */ 3958 3958 mod_timer(&phba->eratt_poll, 3959 - jiffies + 3960 - msecs_to_jiffies(1000 * phba->eratt_poll_interval)); 3959 + jiffies + secs_to_jiffies(phba->eratt_poll_interval)); 3961 3960 return; 3962 3961 } 3963 3962 ··· 9007 9008 9008 9009 /* Start the ELS watchdog timer */ 9009 9010 mod_timer(&vport->els_tmofunc, 9010 - jiffies + msecs_to_jiffies(1000 * (phba->fc_ratov * 2))); 9011 + jiffies + secs_to_jiffies(phba->fc_ratov * 2)); 9011 9012 9012 9013 /* Start heart beat timer */ 9013 9014 mod_timer(&phba->hb_tmofunc, ··· 9026 9027 9027 
9028 /* Start error attention (ERATT) polling timer */ 9028 9029 mod_timer(&phba->eratt_poll, 9029 - jiffies + msecs_to_jiffies(1000 * phba->eratt_poll_interval)); 9030 + jiffies + secs_to_jiffies(phba->eratt_poll_interval)); 9030 9031 9031 9032 /* 9032 9033 * The port is ready, set the host's link state to LINK_DOWN ··· 9503 9504 goto out_not_finished; 9504 9505 } 9505 9506 /* timeout active mbox command */ 9506 - timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox) * 9507 - 1000); 9507 + timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox)); 9508 9508 mod_timer(&psli->mbox_tmo, jiffies + timeout); 9509 9509 } 9510 9510 ··· 9627 9629 drvr_flag); 9628 9630 goto out_not_finished; 9629 9631 } 9630 - timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox) * 9631 - 1000) + jiffies; 9632 + timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, pmbox)) + jiffies; 9632 9633 i = 0; 9633 9634 /* Wait for command to complete */ 9634 9635 while (((word0 & OWN_CHIP) == OWN_CHIP) || ··· 9753 9756 * command to be gracefully completed by firmware. 
9754 9757 */ 9755 9758 if (phba->sli.mbox_active) 9756 - timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, 9757 - phba->sli.mbox_active) * 9758 - 1000) + jiffies; 9759 + timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, 9760 + phba->sli.mbox_active)) + jiffies; 9759 9761 spin_unlock_irq(&phba->hbalock); 9760 9762 9761 9763 /* Make sure the mailbox is really active */ ··· 9877 9881 } 9878 9882 } 9879 9883 9880 - timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, mboxq) 9881 - * 1000) + jiffies; 9884 + timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, mboxq)) + jiffies; 9882 9885 9883 9886 do { 9884 9887 bmbx_reg.word0 = readl(phba->sli4_hba.BMBXregaddr); ··· 10225 10230 10226 10231 /* Start timer for the mbox_tmo and log some mailbox post messages */ 10227 10232 mod_timer(&psli->mbox_tmo, (jiffies + 10228 - msecs_to_jiffies(1000 * lpfc_mbox_tmo_val(phba, mboxq)))); 10233 + secs_to_jiffies(lpfc_mbox_tmo_val(phba, mboxq)))); 10229 10234 10230 10235 lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI, 10231 10236 "(%d):0355 Mailbox cmd x%x (x%x/x%x) issue Data: " ··· 13154 13159 retval = lpfc_sli_issue_iocb(phba, ring_number, piocb, 13155 13160 SLI_IOCB_RET_IOCB); 13156 13161 if (retval == IOCB_SUCCESS) { 13157 - timeout_req = msecs_to_jiffies(timeout * 1000); 13162 + timeout_req = secs_to_jiffies(timeout); 13158 13163 timeleft = wait_event_timeout(done_q, 13159 13164 lpfc_chk_iocb_flg(phba, piocb, LPFC_IO_WAKE), 13160 13165 timeout_req); ··· 13270 13275 /* now issue the command */ 13271 13276 retval = lpfc_sli_issue_mbox(phba, pmboxq, MBX_NOWAIT); 13272 13277 if (retval == MBX_BUSY || retval == MBX_SUCCESS) { 13273 - wait_for_completion_timeout(&mbox_done, 13274 - msecs_to_jiffies(timeout * 1000)); 13278 + wait_for_completion_timeout(&mbox_done, secs_to_jiffies(timeout)); 13275 13279 13276 13280 spin_lock_irqsave(&phba->hbalock, flag); 13277 13281 pmboxq->ctx_u.mbox_wait = NULL; ··· 13330 13336 * command to be gracefully completed by firmware. 
13331 13337 */ 13332 13338 if (phba->sli.mbox_active) 13333 - timeout = msecs_to_jiffies(lpfc_mbox_tmo_val(phba, 13334 - phba->sli.mbox_active) * 13335 - 1000) + jiffies; 13339 + timeout = secs_to_jiffies(lpfc_mbox_tmo_val(phba, 13340 + phba->sli.mbox_active)) + jiffies; 13336 13341 spin_unlock_irq(&phba->hbalock); 13337 13342 13338 13343 /* Enable softirqs again, done with phba->hbalock */
+3 -3
drivers/scsi/lpfc/lpfc_version.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "14.4.0.7" 23 + #define LPFC_DRIVER_VERSION "14.4.0.8" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */ ··· 32 32 33 33 #define LPFC_MODULE_DESC "Emulex LightPulse Fibre Channel SCSI driver " \ 34 34 LPFC_DRIVER_VERSION 35 - #define LPFC_COPYRIGHT "Copyright (C) 2017-2024 Broadcom. All Rights " \ 35 + #define LPFC_COPYRIGHT "Copyright (C) 2017-2025 Broadcom. All Rights " \ 36 36 "Reserved. The term \"Broadcom\" refers to Broadcom Inc. " \ 37 37 "and/or its subsidiaries."
+1 -1
drivers/scsi/lpfc/lpfc_vport.c
··· 246 246 * fabric RA_TOV value and dev_loss tmo. The driver's 247 247 * devloss_tmo is 10 giving this loop a 3x multiplier minimally. 248 248 */ 249 - wait_time_max = msecs_to_jiffies(((phba->fc_ratov * 3) + 3) * 1000); 249 + wait_time_max = secs_to_jiffies((phba->fc_ratov * 3) + 3); 250 250 wait_time_max += jiffies; 251 251 start_time = jiffies; 252 252 while (time_before(jiffies, wait_time_max)) {
+5 -5
drivers/scsi/megaraid.c
··· 855 855 return scb; 856 856 857 857 #if MEGA_HAVE_CLUSTERING 858 - case RESERVE: 859 - case RELEASE: 858 + case RESERVE_6: 859 + case RELEASE_6: 860 860 861 861 /* 862 862 * Do we support clustering and is the support enabled ··· 875 875 } 876 876 877 877 scb->raw_mbox[0] = MEGA_CLUSTER_CMD; 878 - scb->raw_mbox[2] = ( *cmd->cmnd == RESERVE ) ? 878 + scb->raw_mbox[2] = *cmd->cmnd == RESERVE_6 ? 879 879 MEGA_RESERVE_LD : MEGA_RELEASE_LD; 880 880 881 881 scb->raw_mbox[3] = ldrv_num; ··· 1618 1618 * failed or the input parameter is invalid 1619 1619 */ 1620 1620 if( status == 1 && 1621 - (cmd->cmnd[0] == RESERVE || 1622 - cmd->cmnd[0] == RELEASE) ) { 1621 + (cmd->cmnd[0] == RESERVE_6 || 1622 + cmd->cmnd[0] == RELEASE_6) ) { 1623 1623 1624 1624 cmd->result |= (DID_ERROR << 16) | 1625 1625 SAM_STAT_RESERVATION_CONFLICT;
+5 -5
drivers/scsi/megaraid/megaraid_mbox.c
··· 1725 1725 1726 1726 return scb; 1727 1727 1728 - case RESERVE: 1729 - case RELEASE: 1728 + case RESERVE_6: 1729 + case RELEASE_6: 1730 1730 /* 1731 1731 * Do we support clustering and is the support enabled 1732 1732 */ ··· 1748 1748 scb->dev_channel = 0xFF; 1749 1749 scb->dev_target = target; 1750 1750 ccb->raw_mbox[0] = CLUSTER_CMD; 1751 - ccb->raw_mbox[2] = (scp->cmnd[0] == RESERVE) ? 1751 + ccb->raw_mbox[2] = scp->cmnd[0] == RESERVE_6 ? 1752 1752 RESERVE_LD : RELEASE_LD; 1753 1753 1754 1754 ccb->raw_mbox[3] = target; ··· 2334 2334 * Error code returned is 1 if Reserve or Release 2335 2335 * failed or the input parameter is invalid 2336 2336 */ 2337 - if (status == 1 && (scp->cmnd[0] == RESERVE || 2338 - scp->cmnd[0] == RELEASE)) { 2337 + if (status == 1 && (scp->cmnd[0] == RESERVE_6 || 2338 + scp->cmnd[0] == RELEASE_6)) { 2339 2339 2340 2340 scp->result = DID_ERROR << 16 | 2341 2341 SAM_STAT_RESERVATION_CONFLICT;
+5 -5
drivers/scsi/megaraid/megaraid_sas_base.c
··· 93 93 module_param(scmd_timeout, int, 0444); 94 94 MODULE_PARM_DESC(scmd_timeout, "scsi command timeout (10-90s), default 90s. See megasas_reset_timer."); 95 95 96 - int perf_mode = -1; 96 + static int perf_mode = -1; 97 97 module_param(perf_mode, int, 0444); 98 98 MODULE_PARM_DESC(perf_mode, "Performance mode (only for Aero adapters), options:\n\t\t" 99 99 "0 - balanced: High iops and low latency queues are allocated &\n\t\t" ··· 105 105 "default mode is 'balanced'" 106 106 ); 107 107 108 - int event_log_level = MFI_EVT_CLASS_CRITICAL; 108 + static int event_log_level = MFI_EVT_CLASS_CRITICAL; 109 109 module_param(event_log_level, int, 0644); 110 110 MODULE_PARM_DESC(event_log_level, "Asynchronous event logging level- range is: -2(CLASS_DEBUG) to 4(CLASS_DEAD), Default: 2(CLASS_CRITICAL)"); 111 111 112 - unsigned int enable_sdev_max_qd; 112 + static unsigned int enable_sdev_max_qd; 113 113 module_param(enable_sdev_max_qd, int, 0444); 114 114 MODULE_PARM_DESC(enable_sdev_max_qd, "Enable sdev max qd as can_queue. Default: 0"); 115 115 116 - int poll_queues; 116 + static int poll_queues; 117 117 module_param(poll_queues, int, 0444); 118 118 MODULE_PARM_DESC(poll_queues, "Number of queues to be use for io_uring poll mode.\n\t\t" 119 119 "This parameter is effective only if host_tagset_enable=1 &\n\t\t" ··· 122 122 "High iops queues are not allocated &\n\t\t" 123 123 ); 124 124 125 - int host_tagset_enable = 1; 125 + static int host_tagset_enable = 1; 126 126 module_param(host_tagset_enable, int, 0444); 127 127 MODULE_PARM_DESC(host_tagset_enable, "Shared host tagset enable/disable Default: enable(1)"); 128 128
+4
drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h
··· 19 19 #define MPI3_CONFIG_PAGETYPE_PCIE_SWITCH (0x31) 20 20 #define MPI3_CONFIG_PAGETYPE_PCIE_LINK (0x33) 21 21 #define MPI3_CONFIG_PAGEATTR_MASK (0xf0) 22 + #define MPI3_CONFIG_PAGEATTR_SHIFT (4) 22 23 #define MPI3_CONFIG_PAGEATTR_READ_ONLY (0x00) 23 24 #define MPI3_CONFIG_PAGEATTR_CHANGEABLE (0x10) 24 25 #define MPI3_CONFIG_PAGEATTR_PERSISTENT (0x20) ··· 30 29 #define MPI3_CONFIG_ACTION_READ_PERSISTENT (0x04) 31 30 #define MPI3_CONFIG_ACTION_WRITE_PERSISTENT (0x05) 32 31 #define MPI3_DEVICE_PGAD_FORM_MASK (0xf0000000) 32 + #define MPI3_DEVICE_PGAD_FORM_SHIFT (28) 33 33 #define MPI3_DEVICE_PGAD_FORM_GET_NEXT_HANDLE (0x00000000) 34 34 #define MPI3_DEVICE_PGAD_FORM_HANDLE (0x20000000) 35 35 #define MPI3_DEVICE_PGAD_HANDLE_MASK (0x0000ffff) 36 + #define MPI3_DEVICE_PGAD_HANDLE_SHIFT (0) 36 37 #define MPI3_SAS_EXPAND_PGAD_FORM_MASK (0xf0000000) 38 + #define MPI3_SAS_EXPAND_PGAD_FORM_SHIFT (28) 37 39 #define MPI3_SAS_EXPAND_PGAD_FORM_GET_NEXT_HANDLE (0x00000000) 38 40 #define MPI3_SAS_EXPAND_PGAD_FORM_HANDLE_PHY_NUM (0x10000000) 39 41 #define MPI3_SAS_EXPAND_PGAD_FORM_HANDLE (0x20000000)
+8
drivers/scsi/mpi3mr/mpi/mpi30_image.h
··· 66 66 #define MPI3_IMAGE_HEADER_SIGNATURE1_SMM (0x204d4d53) 67 67 #define MPI3_IMAGE_HEADER_SIGNATURE1_PSW (0x20575350) 68 68 #define MPI3_IMAGE_HEADER_SIGNATURE2_VALUE (0x50584546) 69 + #define MPI3_IMAGE_HEADER_FLAGS_SIGNED_UEFI_MASK (0x00000300) 70 + #define MPI3_IMAGE_HEADER_FLAGS_SIGNED_UEFI_SHIFT (8) 71 + #define MPI3_IMAGE_HEADER_FLAGS_CERT_CHAIN_FORMAT_MASK (0x000000c0) 72 + #define MPI3_IMAGE_HEADER_FLAGS_CERT_CHAIN_FORMAT_SHIFT (6) 69 73 #define MPI3_IMAGE_HEADER_FLAGS_DEVICE_KEY_BASIS_MASK (0x00000030) 74 + #define MPI3_IMAGE_HEADER_FLAGS_DEVICE_KEY_BASIS_SHIFT (4) 70 75 #define MPI3_IMAGE_HEADER_FLAGS_DEVICE_KEY_BASIS_CDI (0x00000000) 71 76 #define MPI3_IMAGE_HEADER_FLAGS_DEVICE_KEY_BASIS_DI (0x00000010) 72 77 #define MPI3_IMAGE_HEADER_FLAGS_SIGNED_NVDATA (0x00000008) ··· 219 214 #define MPI3_HASH_IMAGE_TYPE_KEY_WITH_HASH_1_OF_2 (0x04) 220 215 #define MPI3_HASH_IMAGE_TYPE_KEY_WITH_HASH_2_OF_2 (0x05) 221 216 #define MPI3_HASH_ALGORITHM_VERSION_MASK (0xe0) 217 + #define MPI3_HASH_ALGORITHM_VERSION_SHIFT (5) 222 218 #define MPI3_HASH_ALGORITHM_VERSION_NONE (0x00) 223 219 #define MPI3_HASH_ALGORITHM_VERSION_SHA1 (0x20) 224 220 #define MPI3_HASH_ALGORITHM_VERSION_SHA2 (0x40) 225 221 #define MPI3_HASH_ALGORITHM_VERSION_SHA3 (0x60) 226 222 #define MPI3_HASH_ALGORITHM_SIZE_MASK (0x1f) 223 + #define MPI3_HASH_ALGORITHM_SIZE_SHIFT (0) 227 224 #define MPI3_HASH_ALGORITHM_SIZE_UNUSED (0x00) 228 225 #define MPI3_HASH_ALGORITHM_SIZE_SHA256 (0x01) 229 226 #define MPI3_HASH_ALGORITHM_SIZE_SHA512 (0x02) ··· 243 236 #define MPI3_ENCRYPTION_ALGORITHM_ML_DSA_65 (0x0c) 244 237 #define MPI3_ENCRYPTION_ALGORITHM_ML_DSA_44 (0x0d) 245 238 #define MPI3_ENCRYPTED_HASH_ENTRY_FLAGS_PAIRED_KEY_MASK (0x0f) 239 + #define MPI3_ENCRYPTED_HASH_ENTRY_FLAGS_PAIRED_KEY_SHIFT (0) 246 240 247 241 #ifndef MPI3_ENCRYPTED_HASH_ENTRY_MAX 248 242 #define MPI3_ENCRYPTED_HASH_ENTRY_MAX (1)
+10 -1
drivers/scsi/mpi3mr/mpi/mpi30_init.h
··· 38 38 #define MPI3_SCSIIO_MSGFLAGS_METASGL_VALID (0x80) 39 39 #define MPI3_SCSIIO_MSGFLAGS_DIVERT_TO_FIRMWARE (0x40) 40 40 #define MPI3_SCSIIO_FLAGS_LARGE_CDB (0x60000000) 41 + #define MPI3_SCSIIO_FLAGS_LARGE_CDB_MASK (0x60000000) 42 + #define MPI3_SCSIIO_FLAGS_LARGE_CDB_SHIFT (29) 43 + #define MPI3_SCSIIO_FLAGS_IOC_USE_ONLY_27_MASK (0x18000000) 44 + #define MPI3_SCSIIO_FLAGS_IOC_USE_ONLY_27_SHIFT (27) 41 45 #define MPI3_SCSIIO_FLAGS_CDB_16_OR_LESS (0x00000000) 42 46 #define MPI3_SCSIIO_FLAGS_CDB_GREATER_THAN_16 (0x20000000) 43 47 #define MPI3_SCSIIO_FLAGS_CDB_IN_SEPARATE_BUFFER (0x40000000) 44 48 #define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_MASK (0x07000000) 49 + #define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_SHIFT (24) 50 + #define MPI3_SCSIIO_FLAGS_DATADIRECTION_MASK (0x000c0000) 51 + #define MPI3_SCSIIO_FLAGS_DATADIRECTION_SHIFT (18) 45 52 #define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_SIMPLEQ (0x00000000) 46 53 #define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_HEADOFQ (0x01000000) 47 54 #define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_ORDEREDQ (0x02000000) 48 55 #define MPI3_SCSIIO_FLAGS_TASKATTRIBUTE_ACAQ (0x04000000) 49 56 #define MPI3_SCSIIO_FLAGS_CMDPRI_MASK (0x00f00000) 50 57 #define MPI3_SCSIIO_FLAGS_CMDPRI_SHIFT (20) 51 - #define MPI3_SCSIIO_FLAGS_DATADIRECTION_MASK (0x000c0000) 52 58 #define MPI3_SCSIIO_FLAGS_DATADIRECTION_NO_DATA_TRANSFER (0x00000000) 53 59 #define MPI3_SCSIIO_FLAGS_DATADIRECTION_WRITE (0x00040000) 54 60 #define MPI3_SCSIIO_FLAGS_DATADIRECTION_READ (0x00080000) 55 61 #define MPI3_SCSIIO_FLAGS_DMAOPERATION_MASK (0x00030000) 62 + #define MPI3_SCSIIO_FLAGS_DMAOPERATION_SHIFT (16) 56 63 #define MPI3_SCSIIO_FLAGS_DMAOPERATION_HOST_PI (0x00010000) 57 64 #define MPI3_SCSIIO_FLAGS_DIVERT_REASON_MASK (0x000000f0) 65 + #define MPI3_SCSIIO_FLAGS_DIVERT_REASON_SHIFT (4) 58 66 #define MPI3_SCSIIO_FLAGS_DIVERT_REASON_IO_THROTTLING (0x00000010) 59 67 #define MPI3_SCSIIO_FLAGS_DIVERT_REASON_WRITE_SAME_TOO_LARGE (0x00000020) 60 68 #define MPI3_SCSIIO_FLAGS_DIVERT_REASON_PROD_SPECIFIC 
(0x00000080) ··· 107 99 #define MPI3_SCSI_STATUS_ACA_ACTIVE (0x30) 108 100 #define MPI3_SCSI_STATUS_TASK_ABORTED (0x40) 109 101 #define MPI3_SCSI_STATE_SENSE_MASK (0x03) 102 + #define MPI3_SCSI_STATE_SENSE_SHIFT (0) 110 103 #define MPI3_SCSI_STATE_SENSE_VALID (0x00) 111 104 #define MPI3_SCSI_STATE_SENSE_FAILED (0x01) 112 105 #define MPI3_SCSI_STATE_SENSE_BUFF_Q_EMPTY (0x02)
+21
drivers/scsi/mpi3mr/mpi/mpi30_ioc.h
··· 30 30 #define MPI3_IOCINIT_MSGFLAGS_WRITESAMEDIVERT_SUPPORTED (0x08) 31 31 #define MPI3_IOCINIT_MSGFLAGS_SCSIIOSTATUSREPLY_SUPPORTED (0x04) 32 32 #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_MASK (0x03) 33 + #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_SHIFT (0) 33 34 #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_NOT_USED (0x00) 34 35 #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_SEPARATED (0x01) 35 36 #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_INLINE (0x02) ··· 41 40 #define MPI3_WHOINIT_MANUFACTURER (0x04) 42 41 43 42 #define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_MASK (0x00000003) 43 + #define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_SHIFT (0) 44 44 #define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_NO_GUIDANCE (0x00000000) 45 45 #define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_NO_SPECIAL (0x00000001) 46 46 #define MPI3_IOCINIT_DRIVERCAP_OSEXPOSURE_REPORT_AS_HDD (0x00000002) ··· 113 111 __le32 diag_tty_size; 114 112 }; 115 113 #define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_MASK (0x80000000) 114 + #define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_SHIFT (31) 116 115 #define MPI3_IOCFACTS_CAPABILITY_SUPERVISOR_IOC (0x00000000) 117 116 #define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_IOC (0x80000000) 118 117 #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_MASK (0x00000600) 118 + #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_SHIFT (9) 119 119 #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_FIXED_THRESHOLD (0x00000000) 120 120 #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_OUTSTANDING_IO (0x00000200) 121 121 #define MPI3_IOCFACTS_CAPABILITY_COMPLETE_RESET_SUPPORTED (0x00000100) ··· 138 134 #define MPI3_IOCFACTS_EXCEPT_SAS_DISABLED (0x1000) 139 135 #define MPI3_IOCFACTS_EXCEPT_SAFE_MODE (0x0800) 140 136 #define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_MASK (0x0700) 137 + #define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_SHIFT (8) 141 138 #define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_NONE (0x0000) 142 139 #define MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_LOCAL_VIA_MGMT (0x0100) 143 140 #define 
MPI3_IOCFACTS_EXCEPT_SECURITY_KEY_EXT_VIA_MGMT (0x0200) ··· 154 149 #define MPI3_IOCFACTS_EXCEPT_BLOCKING_BOOT_EVENT (0x0004) 155 150 #define MPI3_IOCFACTS_EXCEPT_SECURITY_SELFTEST_FAILURE (0x0002) 156 151 #define MPI3_IOCFACTS_EXCEPT_BOOTSTAT_MASK (0x0001) 152 + #define MPI3_IOCFACTS_EXCEPT_BOOTSTAT_SHIFT (0) 157 153 #define MPI3_IOCFACTS_EXCEPT_BOOTSTAT_PRIMARY (0x0000) 158 154 #define MPI3_IOCFACTS_EXCEPT_BOOTSTAT_SECONDARY (0x0001) 159 155 #define MPI3_IOCFACTS_PROTOCOL_SAS (0x0010) ··· 167 161 #define MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK (0x0000ff00) 168 162 #define MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT (8) 169 163 #define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_MASK (0x00000030) 164 + #define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_SHIFT (4) 170 165 #define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_NOT_STARTED (0x00000000) 171 166 #define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_IN_PROGRESS (0x00000010) 172 167 #define MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_COMPLETE (0x00000020) 173 168 #define MPI3_IOCFACTS_FLAGS_PERSONALITY_MASK (0x0000000f) 169 + #define MPI3_IOCFACTS_FLAGS_PERSONALITY_SHIFT (0) 174 170 #define MPI3_IOCFACTS_FLAGS_PERSONALITY_EHBA (0x00000000) 175 171 #define MPI3_IOCFACTS_FLAGS_PERSONALITY_RAID_DDR (0x00000002) 176 172 #define MPI3_IOCFACTS_IO_THROTTLE_DATA_LENGTH_NOT_REQUIRED (0x0000) ··· 212 204 }; 213 205 214 206 #define MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_MASK (0x80) 207 + #define MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_SHIFT (7) 215 208 #define MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_SEGMENTED (0x80) 216 209 #define MPI3_CREATE_REQUEST_QUEUE_FLAGS_SEGMENTED_CONTIGUOUS (0x00) 217 210 #define MPI3_CREATE_REQUEST_QUEUE_SIZE_MINIMUM (2) ··· 246 237 }; 247 238 248 239 #define MPI3_CREATE_REPLY_QUEUE_FLAGS_SEGMENTED_MASK (0x80) 240 + #define MPI3_CREATE_REPLY_QUEUE_FLAGS_SEGMENTED_SHIFT (7) 249 241 #define MPI3_CREATE_REPLY_QUEUE_FLAGS_SEGMENTED_SEGMENTED (0x80) 250 242 #define 
MPI3_CREATE_REPLY_QUEUE_FLAGS_SEGMENTED_CONTIGUOUS (0x00) 251 243 #define MPI3_CREATE_REPLY_QUEUE_FLAGS_COALESCE_DISABLE (0x02) 252 244 #define MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_MASK (0x01) 245 + #define MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_SHIFT (0) 253 246 #define MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_DISABLE (0x00) 254 247 #define MPI3_CREATE_REPLY_QUEUE_FLAGS_INT_ENABLE_ENABLE (0x01) 255 248 #define MPI3_CREATE_REPLY_QUEUE_SIZE_MINIMUM (2) ··· 337 326 }; 338 327 339 328 #define MPI3_EVENT_NOTIFY_MSGFLAGS_ACK_MASK (0x01) 329 + #define MPI3_EVENT_NOTIFY_MSGFLAGS_ACK_SHIFT (0) 340 330 #define MPI3_EVENT_NOTIFY_MSGFLAGS_ACK_REQUIRED (0x01) 341 331 #define MPI3_EVENT_NOTIFY_MSGFLAGS_ACK_NOT_REQUIRED (0x00) 342 332 #define MPI3_EVENT_NOTIFY_MSGFLAGS_EVENT_ORIGINALITY_MASK (0x02) 333 + #define MPI3_EVENT_NOTIFY_MSGFLAGS_EVENT_ORIGINALITY_SHIFT (1) 343 334 #define MPI3_EVENT_NOTIFY_MSGFLAGS_EVENT_ORIGINALITY_ORIGINAL (0x00) 344 335 #define MPI3_EVENT_NOTIFY_MSGFLAGS_EVENT_ORIGINALITY_REPLAY (0x02) 345 336 struct mpi3_event_data_gpio_interrupt { ··· 500 487 #define MPI3_EVENT_SAS_TOPO_PHY_STATUS_NO_EXIST (0x40) 501 488 #define MPI3_EVENT_SAS_TOPO_PHY_STATUS_VACANT (0x80) 502 489 #define MPI3_EVENT_SAS_TOPO_PHY_RC_MASK (0x0f) 490 + #define MPI3_EVENT_SAS_TOPO_PHY_RC_SHIFT (0) 503 491 #define MPI3_EVENT_SAS_TOPO_PHY_RC_TARG_NOT_RESPONDING (0x02) 504 492 #define MPI3_EVENT_SAS_TOPO_PHY_RC_PHY_CHANGED (0x03) 505 493 #define MPI3_EVENT_SAS_TOPO_PHY_RC_NO_CHANGE (0x04) ··· 580 566 #define MPI3_EVENT_PCIE_TOPO_PS_DELAY_NOT_RESPONDING (0x05) 581 567 #define MPI3_EVENT_PCIE_TOPO_PS_RESPONDING (0x06) 582 568 #define MPI3_EVENT_PCIE_TOPO_PI_LANES_MASK (0xf0) 569 + #define MPI3_EVENT_PCIE_TOPO_PI_LANES_SHIFT (4) 583 570 #define MPI3_EVENT_PCIE_TOPO_PI_LANES_UNKNOWN (0x00) 584 571 #define MPI3_EVENT_PCIE_TOPO_PI_LANES_1 (0x10) 585 572 #define MPI3_EVENT_PCIE_TOPO_PI_LANES_2 (0x20) ··· 588 573 #define MPI3_EVENT_PCIE_TOPO_PI_LANES_8 (0x40) 589 574 #define 
MPI3_EVENT_PCIE_TOPO_PI_LANES_16 (0x50) 590 575 #define MPI3_EVENT_PCIE_TOPO_PI_RATE_MASK (0x0f) 576 + #define MPI3_EVENT_PCIE_TOPO_PI_RATE_SHIFT (0) 591 577 #define MPI3_EVENT_PCIE_TOPO_PI_RATE_UNKNOWN (0x00) 592 578 #define MPI3_EVENT_PCIE_TOPO_PI_RATE_DISABLED (0x01) 593 579 #define MPI3_EVENT_PCIE_TOPO_PI_RATE_2_5 (0x02) ··· 897 881 }; 898 882 899 883 #define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_MASK (0x03) 884 + #define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_SHIFT (0) 900 885 #define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_NO_GUIDANCE (0x00) 901 886 #define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_CONTINUE_OP (0x01) 902 887 #define MPI3_PELACKNOWLEDGE_MSGFLAGS_SAFE_MODE_EXIT_TRANSITION_TO_FAULT (0x02) ··· 941 924 #define MPI3_CI_DOWNLOAD_MSGFLAGS_FORCE_FMC_ENABLE (0x40) 942 925 #define MPI3_CI_DOWNLOAD_MSGFLAGS_SIGNED_NVDATA (0x20) 943 926 #define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_MASK (0x03) 927 + #define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_SHIFT (0) 944 928 #define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_FAST (0x00) 945 929 #define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_MEDIUM (0x01) 946 930 #define MPI3_CI_DOWNLOAD_MSGFLAGS_WRITE_CACHE_FLUSH_SLOW (0x02) ··· 971 953 #define MPI3_CI_DOWNLOAD_FLAGS_OFFLINE_ACTIVATION_REQUIRED (0x20) 972 954 #define MPI3_CI_DOWNLOAD_FLAGS_KEY_UPDATE_PENDING (0x10) 973 955 #define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_MASK (0x0e) 956 + #define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_SHIFT (1) 974 957 #define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_NOT_NEEDED (0x00) 975 958 #define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_AWAITING (0x02) 976 959 #define MPI3_CI_DOWNLOAD_FLAGS_ACTIVATION_STATUS_ONLINE_PENDING (0x04) ··· 995 976 }; 996 977 997 978 #define MPI3_CI_UPLOAD_MSGFLAGS_LOCATION_MASK (0x01) 979 + #define MPI3_CI_UPLOAD_MSGFLAGS_LOCATION_SHIFT (0) 998 980 #define MPI3_CI_UPLOAD_MSGFLAGS_LOCATION_PRIMARY (0x00) 999 981 #define MPI3_CI_UPLOAD_MSGFLAGS_LOCATION_SECONDARY 
(0x01) 1000 982 #define MPI3_CI_UPLOAD_MSGFLAGS_FORMAT_MASK (0x02) 983 + #define MPI3_CI_UPLOAD_MSGFLAGS_FORMAT_SHIFT (1) 1001 984 #define MPI3_CI_UPLOAD_MSGFLAGS_FORMAT_FLASH (0x00) 1002 985 #define MPI3_CI_UPLOAD_MSGFLAGS_FORMAT_EXECUTABLE (0x02) 1003 986 #define MPI3_CTRL_OP_FORCE_FULL_DISCOVERY (0x01)
+1
drivers/scsi/mpi3mr/mpi/mpi30_tool.h
··· 9 9 #define MPI3_DIAG_BUFFER_TYPE_FW (0x02) 10 10 #define MPI3_DIAG_BUFFER_ACTION_RELEASE (0x01) 11 11 12 + #define MPI3_DIAG_BUFFER_POST_MSGFLAGS_SEGMENTED (0x01) 12 13 struct mpi3_diag_buffer_post_request { 13 14 __le16 host_tag; 14 15 u8 ioc_use_only02;
+19 -1
drivers/scsi/mpi3mr/mpi/mpi30_transport.h
··· 18 18 19 19 #define MPI3_VERSION_MAJOR (3) 20 20 #define MPI3_VERSION_MINOR (0) 21 - #define MPI3_VERSION_UNIT (34) 21 + #define MPI3_VERSION_UNIT (35) 22 22 #define MPI3_VERSION_DEV (0) 23 23 #define MPI3_DEVHANDLE_INVALID (0xffff) 24 24 struct mpi3_sysif_oper_queue_indexes { ··· 80 80 #define MPI3_SYSIF_IOC_CONFIG_OPER_RPY_ENT_SZ_SHIFT (20) 81 81 #define MPI3_SYSIF_IOC_CONFIG_OPER_REQ_ENT_SZ (0x000f0000) 82 82 #define MPI3_SYSIF_IOC_CONFIG_OPER_REQ_ENT_SZ_SHIFT (16) 83 + #define MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_SHIFT (14) 83 84 #define MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_MASK (0x0000c000) 84 85 #define MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_NO (0x00000000) 85 86 #define MPI3_SYSIF_IOC_CONFIG_SHUTDOWN_NORMAL (0x00004000) ··· 98 97 #define MPI3_SYSIF_IOC_STATUS_READY (0x00000001) 99 98 #define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_OFFSET (0x00000024) 100 99 #define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REQ_MASK (0x0fff) 100 + #define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REQ_SHIFT (0) 101 101 #define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REPLY_OFFSET (0x00000026) 102 102 #define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REPLY_MASK (0x0fff0000) 103 103 #define MPI3_SYSIF_ADMIN_Q_NUM_ENTRIES_REPLY_SHIFT (16) ··· 108 106 #define MPI3_SYSIF_ADMIN_REPLY_Q_ADDR_HIGH_OFFSET (0x00000034) 109 107 #define MPI3_SYSIF_COALESCE_CONTROL_OFFSET (0x00000040) 110 108 #define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_MASK (0xc0000000) 109 + #define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_SHIFT (30) 111 110 #define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_NO_CHANGE (0x00000000) 112 111 #define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_DISABLE (0x40000000) 113 112 #define MPI3_SYSIF_COALESCE_CONTROL_ENABLE_ENABLE (0xc0000000) ··· 127 124 #define MPI3_SYSIF_OPER_REPLY_Q_N_CI_OFFSET(N) (MPI3_SYSIF_OPER_REPLY_Q_CI_OFFSET + (((N) - 1) * 8)) 128 125 #define MPI3_SYSIF_WRITE_SEQUENCE_OFFSET (0x00001c04) 129 126 #define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_MASK (0x0000000f) 127 + #define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_SHIFT (0) 130 128 #define 
MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_FLUSH (0x0) 131 129 #define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_1ST (0xf) 132 130 #define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_2ND (0x4) ··· 137 133 #define MPI3_SYSIF_WRITE_SEQUENCE_KEY_VALUE_6TH (0xd) 138 134 #define MPI3_SYSIF_HOST_DIAG_OFFSET (0x00001c08) 139 135 #define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_MASK (0x00000700) 136 + #define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SHIFT (8) 140 137 #define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_NO_RESET (0x00000000) 141 138 #define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SOFT_RESET (0x00000100) 142 139 #define MPI3_SYSIF_HOST_DIAG_RESET_ACTION_HOST_CONTROL_BOOT_RESET (0x00000200) ··· 156 151 #define MPI3_SYSIF_FAULT_FUNC_AREA_SHIFT (24) 157 152 #define MPI3_SYSIF_FAULT_FUNC_AREA_MPI_DEFINED (0x00000000) 158 153 #define MPI3_SYSIF_FAULT_CODE_MASK (0x0000ffff) 154 + #define MPI3_SYSIF_FAULT_CODE_SHIFT (0) 159 155 #define MPI3_SYSIF_FAULT_CODE_DIAG_FAULT_RESET (0x0000f000) 160 156 #define MPI3_SYSIF_FAULT_CODE_CI_ACTIVATION_RESET (0x0000f001) 161 157 #define MPI3_SYSIF_FAULT_CODE_SOFT_RESET_IN_PROGRESS (0x0000f002) ··· 182 176 #define MPI3_SYSIF_DIAG_RW_ADDRESS_HIGH_OFFSET (0x00001c5c) 183 177 #define MPI3_SYSIF_DIAG_RW_CONTROL_OFFSET (0x00001c60) 184 178 #define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_MASK (0x00000030) 179 + #define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_SHIFT (4) 185 180 #define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_1BYTE (0x00000000) 186 181 #define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_2BYTES (0x00000010) 187 182 #define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_4BYTES (0x00000020) 188 183 #define MPI3_SYSIF_DIAG_RW_CONTROL_LEN_8BYTES (0x00000030) 189 184 #define MPI3_SYSIF_DIAG_RW_CONTROL_RESET (0x00000004) 190 185 #define MPI3_SYSIF_DIAG_RW_CONTROL_DIR_MASK (0x00000002) 186 + #define MPI3_SYSIF_DIAG_RW_CONTROL_DIR_SHIFT (1) 191 187 #define MPI3_SYSIF_DIAG_RW_CONTROL_DIR_READ (0x00000000) 192 188 #define MPI3_SYSIF_DIAG_RW_CONTROL_DIR_WRITE (0x00000002) 193 189 #define MPI3_SYSIF_DIAG_RW_CONTROL_START (0x00000001) 194 190 
#define MPI3_SYSIF_DIAG_RW_STATUS_OFFSET (0x00001c62) 195 191 #define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_MASK (0x0000000e) 192 + #define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_SHIFT (1) 196 193 #define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_SUCCESS (0x00000000) 197 194 #define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_INV_ADDR (0x00000002) 198 195 #define MPI3_SYSIF_DIAG_RW_STATUS_STATUS_ACC_ERR (0x00000004) ··· 216 207 }; 217 208 218 209 #define MPI3_REPLY_DESCRIPT_FLAGS_PHASE_MASK (0x0001) 210 + #define MPI3_REPLY_DESCRIPT_FLAGS_PHASE_SHIFT (0) 219 211 #define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_MASK (0xf000) 212 + #define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_SHIFT (12) 220 213 #define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_ADDRESS_REPLY (0x0000) 221 214 #define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_SUCCESS (0x1000) 222 215 #define MPI3_REPLY_DESCRIPT_FLAGS_TYPE_TARGET_COMMAND_BUFFER (0x2000) ··· 312 301 }; 313 302 314 303 #define MPI3_SGE_FLAGS_ELEMENT_TYPE_MASK (0xf0) 304 + #define MPI3_SGE_FLAGS_ELEMENT_TYPE_SHIFT (4) 315 305 #define MPI3_SGE_FLAGS_ELEMENT_TYPE_SIMPLE (0x00) 316 306 #define MPI3_SGE_FLAGS_ELEMENT_TYPE_BIT_BUCKET (0x10) 317 307 #define MPI3_SGE_FLAGS_ELEMENT_TYPE_CHAIN (0x20) ··· 321 309 #define MPI3_SGE_FLAGS_END_OF_LIST (0x08) 322 310 #define MPI3_SGE_FLAGS_END_OF_BUFFER (0x04) 323 311 #define MPI3_SGE_FLAGS_DLAS_MASK (0x03) 312 + #define MPI3_SGE_FLAGS_DLAS_SHIFT (0) 324 313 #define MPI3_SGE_FLAGS_DLAS_SYSTEM (0x00) 325 314 #define MPI3_SGE_FLAGS_DLAS_IOC_UDP (0x01) 326 315 #define MPI3_SGE_FLAGS_DLAS_IOC_CTL (0x02) ··· 335 322 #define MPI3_EEDPFLAGS_CHK_APP_TAG (0x0200) 336 323 #define MPI3_EEDPFLAGS_CHK_GUARD (0x0100) 337 324 #define MPI3_EEDPFLAGS_ESC_MODE_MASK (0x00c0) 325 + #define MPI3_EEDPFLAGS_ESC_MODE_SHIFT (6) 338 326 #define MPI3_EEDPFLAGS_ESC_MODE_DO_NOT_DISABLE (0x0040) 339 327 #define MPI3_EEDPFLAGS_ESC_MODE_APPTAG_DISABLE (0x0080) 340 328 #define MPI3_EEDPFLAGS_ESC_MODE_APPTAG_REFTAG_DISABLE (0x00c0) 341 329 #define MPI3_EEDPFLAGS_HOST_GUARD_MASK (0x0030) 330 + #define 
MPI3_EEDPFLAGS_HOST_GUARD_SHIFT (4) 342 331 #define MPI3_EEDPFLAGS_HOST_GUARD_T10_CRC (0x0000) 343 332 #define MPI3_EEDPFLAGS_HOST_GUARD_IP_CHKSUM (0x0010) 344 333 #define MPI3_EEDPFLAGS_HOST_GUARD_OEM_SPECIFIC (0x0020) 345 334 #define MPI3_EEDPFLAGS_PT_REF_TAG (0x0008) 346 335 #define MPI3_EEDPFLAGS_EEDP_OP_MASK (0x0007) 336 + #define MPI3_EEDPFLAGS_EEDP_OP_SHIFT (0) 347 337 #define MPI3_EEDPFLAGS_EEDP_OP_CHECK (0x0001) 348 338 #define MPI3_EEDPFLAGS_EEDP_OP_STRIP (0x0002) 349 339 #define MPI3_EEDPFLAGS_EEDP_OP_CHECK_REMOVE (0x0003) ··· 419 403 #define MPI3_IOCSTATUS_LOG_INFO_AVAIL_MASK (0x8000) 420 404 #define MPI3_IOCSTATUS_LOG_INFO_AVAILABLE (0x8000) 421 405 #define MPI3_IOCSTATUS_STATUS_MASK (0x7fff) 406 + #define MPI3_IOCSTATUS_STATUS_SHIFT (0) 422 407 #define MPI3_IOCSTATUS_SUCCESS (0x0000) 423 408 #define MPI3_IOCSTATUS_INVALID_FUNCTION (0x0001) 424 409 #define MPI3_IOCSTATUS_BUSY (0x0002) ··· 486 469 #define MPI3_IOCLOGINFO_TYPE_NONE (0x0) 487 470 #define MPI3_IOCLOGINFO_TYPE_SAS (0x3) 488 471 #define MPI3_IOCLOGINFO_LOG_DATA_MASK (0x0fffffff) 472 + #define MPI3_IOCLOGINFO_LOG_DATA_SHIFT (0) 489 473 #endif
+31 -3
drivers/scsi/mpi3mr/mpi3mr.h
··· 56 56 extern int prot_mask; 57 57 extern atomic64_t event_counter; 58 58 59 - #define MPI3MR_DRIVER_VERSION "8.12.0.3.50" 60 - #define MPI3MR_DRIVER_RELDATE "11-November-2024" 59 + #define MPI3MR_DRIVER_VERSION "8.13.0.5.50" 60 + #define MPI3MR_DRIVER_RELDATE "20-February-2025" 61 61 62 62 #define MPI3MR_DRIVER_NAME "mpi3mr" 63 63 #define MPI3MR_DRIVER_LICENSE "GPL" ··· 80 80 81 81 /* Admin queue management definitions */ 82 82 #define MPI3MR_ADMIN_REQ_Q_SIZE (2 * MPI3MR_PAGE_SIZE_4K) 83 - #define MPI3MR_ADMIN_REPLY_Q_SIZE (4 * MPI3MR_PAGE_SIZE_4K) 83 + #define MPI3MR_ADMIN_REPLY_Q_SIZE (8 * MPI3MR_PAGE_SIZE_4K) 84 84 #define MPI3MR_ADMIN_REQ_FRAME_SZ 128 85 85 #define MPI3MR_ADMIN_REPLY_FRAME_SZ 16 86 86 87 87 /* Operational queue management definitions */ 88 88 #define MPI3MR_OP_REQ_Q_QD 512 89 89 #define MPI3MR_OP_REP_Q_QD 1024 90 + #define MPI3MR_OP_REP_Q_QD2K 2048 90 91 #define MPI3MR_OP_REP_Q_QD4K 4096 91 92 #define MPI3MR_OP_REQ_Q_SEG_SIZE 4096 92 93 #define MPI3MR_OP_REP_Q_SEG_SIZE 4096 ··· 329 328 #define MPI3MR_RESET_REASON_OSTYPE_SHIFT 28 330 329 #define MPI3MR_RESET_REASON_IOCNUM_SHIFT 20 331 330 331 + 332 332 /* Queue type definitions */ 333 333 enum queue_type { 334 334 MPI3MR_DEFAULT_QUEUE = 0, ··· 389 387 u16 max_msix_vectors; 390 388 u8 personality; 391 389 u8 dma_mask; 390 + bool max_req_limit; 392 391 u8 protocol_flags; 393 392 u8 sge_mod_mask; 394 393 u8 sge_mod_value; ··· 459 456 * @enable_irq_poll: Flag to indicate polling is enabled 460 457 * @in_use: Queue is handled by poll/ISR 461 458 * @qtype: Type of queue (types defined in enum queue_type) 459 + * @qfull_watermark: Watermark defined in reply queue to avoid 460 + * reply queue full 462 461 */ 463 462 struct op_reply_qinfo { 464 463 u16 ci; ··· 476 471 bool enable_irq_poll; 477 472 atomic_t in_use; 478 473 enum queue_type qtype; 474 + u16 qfull_watermark; 479 475 }; 480 476 481 477 /** ··· 934 928 * @size: Buffer size 935 929 * @addr: Virtual address 936 930 * @dma_addr: Buffer DMA 
address 931 + * @is_segmented: The buffer is segmented or not 932 + * @disabled_after_reset: The buffer is disabled after reset 937 933 */ 938 934 struct diag_buffer_desc { 939 935 u8 type; ··· 945 937 u32 size; 946 938 void *addr; 947 939 dma_addr_t dma_addr; 940 + bool is_segmented; 941 + bool disabled_after_reset; 948 942 }; 949 943 950 944 /** ··· 1032 1022 * @admin_reply_base: Admin reply queue base virtual address 1033 1023 * @admin_reply_dma: Admin reply queue base dma address 1034 1024 * @admin_reply_q_in_use: Queue is handled by poll/ISR 1025 + * @admin_pend_isr: Count of unprocessed admin ISR/poll calls 1026 + * due to another thread processing replies 1035 1027 * @ready_timeout: Controller ready timeout 1036 1028 * @intr_info: Interrupt cookie pointer 1037 1029 * @intr_info_count: Number of interrupt cookies ··· 1102 1090 * @ts_update_interval: Timestamp update interval 1103 1091 * @reset_in_progress: Reset in progress flag 1104 1092 * @unrecoverable: Controller unrecoverable flag 1093 + * @io_admin_reset_sync: Manage state of I/O ops during an admin reset process 1105 1094 * @prev_reset_result: Result of previous reset 1106 1095 * @reset_mutex: Controller reset mutex 1107 1096 * @reset_waitq: Controller reset wait queue ··· 1166 1153 * @snapdump_trigger_active: Snapdump trigger active flag 1167 1154 * @pci_err_recovery: PCI error recovery in progress 1168 1155 * @block_on_pci_err: Block IO during PCI error recovery 1156 + * @reply_qfull_count: Occurences of reply queue full avoidance kicking-in 1157 + * @prevent_reply_qfull: Enable reply queue prevention 1158 + * @seg_tb_support: Segmented trace buffer support 1159 + * @num_tb_segs: Number of Segments in Trace buffer 1160 + * @trace_buf_pool: DMA pool for Segmented trace buffer segments 1161 + * @trace_buf: Trace buffer segments memory descriptor 1169 1162 */ 1170 1163 struct mpi3mr_ioc { 1171 1164 struct list_head list; ··· 1208 1189 void *admin_reply_base; 1209 1190 dma_addr_t admin_reply_dma; 1210 
1191 atomic_t admin_reply_q_in_use; 1192 + atomic_t admin_pend_isr; 1211 1193 1212 1194 u32 ready_timeout; 1213 1195 ··· 1296 1276 u16 ts_update_interval; 1297 1277 u8 reset_in_progress; 1298 1278 u8 unrecoverable; 1279 + u8 io_admin_reset_sync; 1299 1280 int prev_reset_result; 1300 1281 struct mutex reset_mutex; 1301 1282 wait_queue_head_t reset_waitq; ··· 1372 1351 bool fw_release_trigger_active; 1373 1352 bool pci_err_recovery; 1374 1353 bool block_on_pci_err; 1354 + atomic_t reply_qfull_count; 1355 + bool prevent_reply_qfull; 1356 + bool seg_tb_support; 1357 + u32 num_tb_segs; 1358 + struct dma_pool *trace_buf_pool; 1359 + struct segments *trace_buf; 1360 + 1375 1361 }; 1376 1362 1377 1363 /**
+120 -9
drivers/scsi/mpi3mr/mpi3mr_app.c
··· 12 12 #include <uapi/scsi/scsi_bsg_mpi3mr.h> 13 13 14 14 /** 15 - * mpi3mr_alloc_trace_buffer: Allocate trace buffer 15 + * mpi3mr_alloc_trace_buffer: Allocate segmented trace buffer 16 16 * @mrioc: Adapter instance reference 17 17 * @trace_size: Trace buffer size 18 18 * 19 - * Allocate trace buffer 19 + * Allocate either segmented memory pools or contiguous buffer 20 + * based on the controller capability for the host trace 21 + * buffer. 22 + * 20 23 * Return: 0 on success, non-zero on failure. 21 24 */ 22 25 static int mpi3mr_alloc_trace_buffer(struct mpi3mr_ioc *mrioc, u32 trace_size) 23 26 { 24 27 struct diag_buffer_desc *diag_buffer = &mrioc->diag_buffers[0]; 28 + int i, sz; 29 + u64 *diag_buffer_list = NULL; 30 + dma_addr_t diag_buffer_list_dma; 31 + u32 seg_count; 25 32 26 - diag_buffer->addr = dma_alloc_coherent(&mrioc->pdev->dev, 27 - trace_size, &diag_buffer->dma_addr, GFP_KERNEL); 28 - if (diag_buffer->addr) { 29 - dprint_init(mrioc, "trace diag buffer is allocated successfully\n"); 33 + if (mrioc->seg_tb_support) { 34 + seg_count = (trace_size) / MPI3MR_PAGE_SIZE_4K; 35 + trace_size = seg_count * MPI3MR_PAGE_SIZE_4K; 36 + 37 + diag_buffer_list = dma_alloc_coherent(&mrioc->pdev->dev, 38 + sizeof(u64) * seg_count, 39 + &diag_buffer_list_dma, GFP_KERNEL); 40 + if (!diag_buffer_list) 41 + return -1; 42 + 43 + mrioc->num_tb_segs = seg_count; 44 + 45 + sz = sizeof(struct segments) * seg_count; 46 + mrioc->trace_buf = kzalloc(sz, GFP_KERNEL); 47 + if (!mrioc->trace_buf) 48 + goto trace_buf_failed; 49 + 50 + mrioc->trace_buf_pool = dma_pool_create("trace_buf pool", 51 + &mrioc->pdev->dev, MPI3MR_PAGE_SIZE_4K, MPI3MR_PAGE_SIZE_4K, 52 + 0); 53 + if (!mrioc->trace_buf_pool) { 54 + ioc_err(mrioc, "trace buf pool: dma_pool_create failed\n"); 55 + goto trace_buf_pool_failed; 56 + } 57 + 58 + for (i = 0; i < seg_count; i++) { 59 + mrioc->trace_buf[i].segment = 60 + dma_pool_zalloc(mrioc->trace_buf_pool, GFP_KERNEL, 61 + &mrioc->trace_buf[i].segment_dma); 62 + 
diag_buffer_list[i] = 63 + (u64) mrioc->trace_buf[i].segment_dma; 64 + if (!diag_buffer_list[i]) 65 + goto tb_seg_alloc_failed; 66 + } 67 + 68 + diag_buffer->addr = diag_buffer_list; 69 + diag_buffer->dma_addr = diag_buffer_list_dma; 70 + diag_buffer->is_segmented = true; 71 + 72 + dprint_init(mrioc, "segmented trace diag buffer\n" 73 + "is allocated successfully seg_count:%d\n", seg_count); 30 74 return 0; 75 + } else { 76 + diag_buffer->addr = dma_alloc_coherent(&mrioc->pdev->dev, 77 + trace_size, &diag_buffer->dma_addr, GFP_KERNEL); 78 + if (diag_buffer->addr) { 79 + dprint_init(mrioc, "trace diag buffer is allocated successfully\n"); 80 + return 0; 81 + } 82 + return -1; 31 83 } 84 + 85 + tb_seg_alloc_failed: 86 + if (mrioc->trace_buf_pool) { 87 + for (i = 0; i < mrioc->num_tb_segs; i++) { 88 + if (mrioc->trace_buf[i].segment) { 89 + dma_pool_free(mrioc->trace_buf_pool, 90 + mrioc->trace_buf[i].segment, 91 + mrioc->trace_buf[i].segment_dma); 92 + mrioc->trace_buf[i].segment = NULL; 93 + } 94 + mrioc->trace_buf[i].segment = NULL; 95 + } 96 + dma_pool_destroy(mrioc->trace_buf_pool); 97 + mrioc->trace_buf_pool = NULL; 98 + } 99 + trace_buf_pool_failed: 100 + kfree(mrioc->trace_buf); 101 + mrioc->trace_buf = NULL; 102 + trace_buf_failed: 103 + if (diag_buffer_list) 104 + dma_free_coherent(&mrioc->pdev->dev, 105 + sizeof(u64) * mrioc->num_tb_segs, 106 + diag_buffer_list, diag_buffer_list_dma); 32 107 return -1; 33 108 } 34 109 ··· 175 100 dprint_init(mrioc, 176 101 "trying to allocate trace diag buffer of size = %dKB\n", 177 102 trace_size / 1024); 178 - if (get_order(trace_size) > MAX_PAGE_ORDER || 103 + if ((!mrioc->seg_tb_support && (get_order(trace_size) > MAX_PAGE_ORDER)) || 179 104 mpi3mr_alloc_trace_buffer(mrioc, trace_size)) { 105 + 180 106 retry = true; 181 107 trace_size -= trace_dec_size; 182 108 dprint_init(mrioc, "trace diag buffer allocation failed\n" ··· 237 161 u8 prev_status; 238 162 int retval = 0; 239 163 164 + if 
(diag_buffer->disabled_after_reset) { 165 + dprint_bsg_err(mrioc, "%s: skipping diag buffer posting\n" 166 + "as it is disabled after reset\n", __func__); 167 + return -1; 168 + } 169 + 240 170 memset(&diag_buf_post_req, 0, sizeof(diag_buf_post_req)); 241 171 mutex_lock(&mrioc->init_cmds.mutex); 242 172 if (mrioc->init_cmds.state & MPI3MR_CMD_PENDING) { ··· 259 177 diag_buf_post_req.address = le64_to_cpu(diag_buffer->dma_addr); 260 178 diag_buf_post_req.length = le32_to_cpu(diag_buffer->size); 261 179 262 - dprint_bsg_info(mrioc, "%s: posting diag buffer type %d\n", __func__, 263 - diag_buffer->type); 180 + if (diag_buffer->is_segmented) 181 + diag_buf_post_req.msg_flags |= MPI3_DIAG_BUFFER_POST_MSGFLAGS_SEGMENTED; 182 + 183 + dprint_bsg_info(mrioc, "%s: posting diag buffer type %d segmented:%d\n", __func__, 184 + diag_buffer->type, diag_buffer->is_segmented); 185 + 264 186 prev_status = diag_buffer->status; 265 187 diag_buffer->status = MPI3MR_HDB_BUFSTATUS_POSTED_UNPAUSED; 266 188 init_completion(&mrioc->init_cmds.done); ··· 2425 2339 } 2426 2340 2427 2341 if (!mrioc->ioctl_sges_allocated) { 2342 + mutex_unlock(&mrioc->bsg_cmds.mutex); 2428 2343 dprint_bsg_err(mrioc, "%s: DMA memory was not allocated\n", 2429 2344 __func__); 2430 2345 return -ENOMEM; ··· 3148 3061 static DEVICE_ATTR_RO(reply_queue_count); 3149 3062 3150 3063 /** 3064 + * reply_qfull_count_show - Show reply qfull count 3065 + * @dev: class device 3066 + * @attr: Device attributes 3067 + * @buf: Buffer to copy 3068 + * 3069 + * Retrieves the current value of the reply_qfull_count from the mrioc structure and 3070 + * formats it as a string for display. 
3071 + * 3072 + * Return: sysfs_emit() return 3073 + */ 3074 + static ssize_t 3075 + reply_qfull_count_show(struct device *dev, struct device_attribute *attr, 3076 + char *buf) 3077 + { 3078 + struct Scsi_Host *shost = class_to_shost(dev); 3079 + struct mpi3mr_ioc *mrioc = shost_priv(shost); 3080 + 3081 + return sysfs_emit(buf, "%u\n", atomic_read(&mrioc->reply_qfull_count)); 3082 + } 3083 + 3084 + static DEVICE_ATTR_RO(reply_qfull_count); 3085 + 3086 + /** 3151 3087 * logging_level_show - Show controller debug level 3152 3088 * @dev: class device 3153 3089 * @attr: Device attributes ··· 3262 3152 &dev_attr_fw_queue_depth.attr, 3263 3153 &dev_attr_op_req_q_count.attr, 3264 3154 &dev_attr_reply_queue_count.attr, 3155 + &dev_attr_reply_qfull_count.attr, 3265 3156 &dev_attr_logging_level.attr, 3266 3157 &dev_attr_adp_state.attr, 3267 3158 NULL,
+146 -13
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 17 17 struct mpi3_ioc_facts_data *facts_data); 18 18 static void mpi3mr_pel_wait_complete(struct mpi3mr_ioc *mrioc, 19 19 struct mpi3mr_drv_cmd *drv_cmd); 20 - 20 + static int mpi3mr_check_op_admin_proc(struct mpi3mr_ioc *mrioc); 21 21 static int poll_queues; 22 22 module_param(poll_queues, int, 0444); 23 23 MODULE_PARM_DESC(poll_queues, "Number of queues for io_uring poll mode. (Range 1 - 126)"); ··· 446 446 u16 threshold_comps = 0; 447 447 struct mpi3_default_reply_descriptor *reply_desc; 448 448 449 - if (!atomic_add_unless(&mrioc->admin_reply_q_in_use, 1, 1)) 449 + if (!atomic_add_unless(&mrioc->admin_reply_q_in_use, 1, 1)) { 450 + atomic_inc(&mrioc->admin_pend_isr); 450 451 return 0; 452 + } 451 453 452 454 reply_desc = (struct mpi3_default_reply_descriptor *)mrioc->admin_reply_base + 453 455 admin_reply_ci; ··· 461 459 } 462 460 463 461 do { 464 - if (mrioc->unrecoverable) 462 + if (mrioc->unrecoverable || mrioc->io_admin_reset_sync) 465 463 break; 466 464 467 465 mrioc->admin_req_ci = le16_to_cpu(reply_desc->request_queue_ci); ··· 556 554 } 557 555 558 556 do { 559 - if (mrioc->unrecoverable) 557 + if (mrioc->unrecoverable || mrioc->io_admin_reset_sync) 560 558 break; 561 559 562 560 req_q_idx = le16_to_cpu(reply_desc->request_queue_id) - 1; ··· 1304 1302 (ioc_config & MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC))) 1305 1303 retval = 0; 1306 1304 1307 - ioc_info(mrioc, "Base IOC Sts/Config after %s MUR is (0x%x)/(0x%x)\n", 1305 + ioc_info(mrioc, "Base IOC Sts/Config after %s MUR is (0x%08x)/(0x%08x)\n", 1308 1306 (!retval) ? 
"successful" : "failed", ioc_status, ioc_config); 1309 1307 return retval; 1310 1308 } ··· 1356 1354 "critical error: multipath capability is enabled at the\n" 1357 1355 "\tcontroller while sas transport support is enabled at the\n" 1358 1356 "\tdriver, please reboot the system or reload the driver\n"); 1357 + 1358 + if (mrioc->seg_tb_support) { 1359 + if (!(mrioc->facts.ioc_capabilities & 1360 + MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_TRACE_SUPPORTED)) { 1361 + ioc_err(mrioc, 1362 + "critical error: previously enabled segmented trace\n" 1363 + " buffer capability is disabled after reset. Please\n" 1364 + " update the firmware or reboot the system or\n" 1365 + " reload the driver to enable trace diag buffer\n"); 1366 + mrioc->diag_buffers[0].disabled_after_reset = true; 1367 + } else 1368 + mrioc->diag_buffers[0].disabled_after_reset = false; 1369 + } 1359 1370 1360 1371 if (mrioc->facts.max_devhandle > mrioc->dev_handle_bitmap_bits) { 1361 1372 removepend_bitmap = bitmap_zalloc(mrioc->facts.max_devhandle, ··· 1732 1717 ioc_config = readl(&mrioc->sysif_regs->ioc_configuration); 1733 1718 ioc_status = readl(&mrioc->sysif_regs->ioc_status); 1734 1719 ioc_info(mrioc, 1735 - "ioc_status/ioc_onfig after %s reset is (0x%x)/(0x%x)\n", 1720 + "ioc_status/ioc_config after %s reset is (0x%08x)/(0x%08x)\n", 1736 1721 (!retval)?"successful":"failed", ioc_status, 1737 1722 ioc_config); 1738 1723 if (retval) ··· 2119 2104 } 2120 2105 2121 2106 reply_qid = qidx + 1; 2122 - op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD; 2123 - if ((mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) && 2124 - !mrioc->pdev->revision) 2125 - op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K; 2107 + 2108 + if (mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) { 2109 + if (mrioc->pdev->revision) 2110 + op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD; 2111 + else 2112 + op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K; 2113 + } else 2114 + op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD2K; 2115 + 2126 2116 
op_reply_q->ci = 0; 2127 2117 op_reply_q->ephase = 1; 2128 2118 atomic_set(&op_reply_q->pend_ios, 0); 2129 2119 atomic_set(&op_reply_q->in_use, 0); 2130 2120 op_reply_q->enable_irq_poll = false; 2121 + op_reply_q->qfull_watermark = 2122 + op_reply_q->num_replies - (MPI3MR_THRESHOLD_REPLY_COUNT * 2); 2131 2123 2132 2124 if (!op_reply_q->q_segments) { 2133 2125 retval = mpi3mr_alloc_op_reply_q_segments(mrioc, qidx); ··· 2438 2416 void *segment_base_addr; 2439 2417 u16 req_sz = mrioc->facts.op_req_sz; 2440 2418 struct segments *segments = op_req_q->q_segments; 2419 + struct op_reply_qinfo *op_reply_q = NULL; 2441 2420 2442 2421 reply_qidx = op_req_q->reply_qid - 1; 2422 + op_reply_q = mrioc->op_reply_qinfo + reply_qidx; 2443 2423 2444 2424 if (mrioc->unrecoverable) 2445 2425 return -EFAULT; ··· 2468 2444 } 2469 2445 if (mrioc->pci_err_recovery) { 2470 2446 ioc_err(mrioc, "operational request queue submission failed due to pci error recovery in progress\n"); 2447 + retval = -EAGAIN; 2448 + goto out; 2449 + } 2450 + 2451 + /* Reply queue is nearing to get full, push back IOs to SML */ 2452 + if ((mrioc->prevent_reply_qfull == true) && 2453 + (atomic_read(&op_reply_q->pend_ios) > 2454 + (op_reply_q->qfull_watermark))) { 2455 + atomic_inc(&mrioc->reply_qfull_count); 2471 2456 retval = -EAGAIN; 2472 2457 goto out; 2473 2458 } ··· 2759 2726 return; 2760 2727 } 2761 2728 2762 - if (mrioc->ts_update_counter++ >= mrioc->ts_update_interval) { 2729 + if (atomic_read(&mrioc->admin_pend_isr)) { 2730 + ioc_err(mrioc, "Unprocessed admin ISR instance found\n" 2731 + "flush admin replies\n"); 2732 + mpi3mr_process_admin_reply_q(mrioc); 2733 + } 2734 + 2735 + if (!(mrioc->facts.ioc_capabilities & 2736 + MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_IOC) && 2737 + (mrioc->ts_update_counter++ >= mrioc->ts_update_interval)) { 2738 + 2763 2739 mrioc->ts_update_counter = 0; 2764 2740 mpi3mr_sync_timestamp(mrioc); 2765 2741 } ··· 3130 3088 mrioc->facts.max_msix_vectors = 
le16_to_cpu(facts_data->max_msix_vectors); 3131 3089 mrioc->facts.personality = (facts_flags & 3132 3090 MPI3_IOCFACTS_FLAGS_PERSONALITY_MASK); 3091 + mrioc->facts.dma_mask = (facts_flags & 3092 + MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK) >> 3093 + MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT; 3133 3094 mrioc->facts.dma_mask = (facts_flags & 3134 3095 MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK) >> 3135 3096 MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT; ··· 4259 4214 mrioc->shost->transportt = mpi3mr_transport_template; 4260 4215 } 4261 4216 4217 + if (mrioc->facts.max_req_limit) 4218 + mrioc->prevent_reply_qfull = true; 4219 + 4220 + if (mrioc->facts.ioc_capabilities & 4221 + MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_TRACE_SUPPORTED) 4222 + mrioc->seg_tb_support = true; 4223 + 4262 4224 mrioc->reply_sz = mrioc->facts.reply_sz; 4263 4225 4264 4226 retval = mpi3mr_check_reset_dma_mask(mrioc); ··· 4422 4370 goto out_failed_noretry; 4423 4371 } 4424 4372 4373 + mrioc->io_admin_reset_sync = 0; 4425 4374 if (is_resume || mrioc->block_on_pci_err) { 4426 4375 dprint_reset(mrioc, "setting up single ISR\n"); 4427 4376 retval = mpi3mr_setup_isr(mrioc, 1); ··· 4724 4671 */ 4725 4672 void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc) 4726 4673 { 4727 - u16 i; 4674 + u16 i, j; 4728 4675 struct mpi3mr_intr_info *intr_info; 4729 4676 struct diag_buffer_desc *diag_buffer; 4730 4677 ··· 4859 4806 4860 4807 for (i = 0; i < MPI3MR_MAX_NUM_HDB; i++) { 4861 4808 diag_buffer = &mrioc->diag_buffers[i]; 4809 + if ((i == 0) && mrioc->seg_tb_support) { 4810 + if (mrioc->trace_buf_pool) { 4811 + for (j = 0; j < mrioc->num_tb_segs; j++) { 4812 + if (mrioc->trace_buf[j].segment) { 4813 + dma_pool_free(mrioc->trace_buf_pool, 4814 + mrioc->trace_buf[j].segment, 4815 + mrioc->trace_buf[j].segment_dma); 4816 + mrioc->trace_buf[j].segment = NULL; 4817 + } 4818 + 4819 + mrioc->trace_buf[j].segment = NULL; 4820 + } 4821 + dma_pool_destroy(mrioc->trace_buf_pool); 4822 + mrioc->trace_buf_pool = NULL; 4823 + } 4824 
+ 4825 + kfree(mrioc->trace_buf); 4826 + mrioc->trace_buf = NULL; 4827 + diag_buffer->size = sizeof(u64) * mrioc->num_tb_segs; 4828 + } 4862 4829 if (diag_buffer->addr) { 4863 4830 dma_free_coherent(&mrioc->pdev->dev, 4864 4831 diag_buffer->size, diag_buffer->addr, ··· 4956 4883 } 4957 4884 4958 4885 ioc_info(mrioc, 4959 - "Base IOC Sts/Config after %s shutdown is (0x%x)/(0x%x)\n", 4886 + "Base IOC Sts/Config after %s shutdown is (0x%08x)/(0x%08x)\n", 4960 4887 (!retval) ? "successful" : "failed", ioc_status, 4961 4888 ioc_config); 4962 4889 } ··· 5302 5229 } 5303 5230 5304 5231 /** 5232 + * mpi3mr_check_op_admin_proc - 5233 + * @mrioc: Adapter instance reference 5234 + * 5235 + * Check if any of the operation reply queues 5236 + * or the admin reply queue are currently in use. 5237 + * If any queue is in use, this function waits for 5238 + * a maximum of 10 seconds for them to become available. 5239 + * 5240 + * Return: 0 on success, non-zero on failure. 5241 + */ 5242 + static int mpi3mr_check_op_admin_proc(struct mpi3mr_ioc *mrioc) 5243 + { 5244 + 5245 + u16 timeout = 10 * 10; 5246 + u16 elapsed_time = 0; 5247 + bool op_admin_in_use = false; 5248 + 5249 + do { 5250 + op_admin_in_use = false; 5251 + 5252 + /* Check admin_reply queue first to exit early */ 5253 + if (atomic_read(&mrioc->admin_reply_q_in_use) == 1) 5254 + op_admin_in_use = true; 5255 + else { 5256 + /* Check op_reply queues */ 5257 + int i; 5258 + 5259 + for (i = 0; i < mrioc->num_queues; i++) { 5260 + if (atomic_read(&mrioc->op_reply_qinfo[i].in_use) == 1) { 5261 + op_admin_in_use = true; 5262 + break; 5263 + } 5264 + } 5265 + } 5266 + 5267 + if (!op_admin_in_use) 5268 + break; 5269 + 5270 + msleep(100); 5271 + 5272 + } while (++elapsed_time < timeout); 5273 + 5274 + if (op_admin_in_use) 5275 + return 1; 5276 + 5277 + return 0; 5278 + } 5279 + 5280 + /** 5305 5281 * mpi3mr_soft_reset_handler - Reset the controller 5306 5282 * @mrioc: Adapter instance reference 5307 5283 * @reset_reason: Reset 
reason code ··· 5430 5308 mpi3mr_wait_for_host_io(mrioc, MPI3MR_RESET_HOST_IOWAIT_TIMEOUT); 5431 5309 5432 5310 mpi3mr_ioc_disable_intr(mrioc); 5311 + mrioc->io_admin_reset_sync = 1; 5433 5312 5434 5313 if (snapdump) { 5435 5314 mpi3mr_set_diagsave(mrioc); ··· 5458 5335 ioc_err(mrioc, "Failed to issue soft reset to the ioc\n"); 5459 5336 goto out; 5460 5337 } 5338 + 5339 + retval = mpi3mr_check_op_admin_proc(mrioc); 5340 + if (retval) { 5341 + ioc_err(mrioc, "Soft reset failed due to an Admin or I/O queue polling\n" 5342 + "thread still processing replies even after a 10 second\n" 5343 + "timeout. Marking the controller as unrecoverable!\n"); 5344 + 5345 + goto out; 5346 + } 5347 + 5461 5348 if (mrioc->num_io_throttle_group != 5462 5349 mrioc->facts.max_io_throttle_group) { 5463 5350 ioc_err(mrioc,
+100 -1
drivers/scsi/mpi3mr/mpi3mr_os.c
··· 3839 3839 tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, handle); 3840 3840 3841 3841 if (scmd) { 3842 + if (tm_type == MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK) { 3843 + cmd_priv = scsi_cmd_priv(scmd); 3844 + if (!cmd_priv) 3845 + goto out_unlock; 3846 + 3847 + struct op_req_qinfo *op_req_q; 3848 + 3849 + op_req_q = &mrioc->req_qinfo[cmd_priv->req_q_idx]; 3850 + tm_req.task_host_tag = cpu_to_le16(cmd_priv->host_tag); 3851 + tm_req.task_request_queue_id = 3852 + cpu_to_le16(op_req_q->qid); 3853 + } 3842 3854 sdev = scmd->device; 3843 3855 sdev_priv_data = sdev->hostdata; 3844 3856 scsi_tgt_priv_data = ((sdev_priv_data) ? ··· 4395 4383 sdev_printk(KERN_INFO, scmd->device, 4396 4384 "%s: device(LUN) reset is %s for scmd(%p)\n", mrioc->name, 4397 4385 ((retval == SUCCESS) ? "SUCCESS" : "FAILED"), scmd); 4386 + 4387 + return retval; 4388 + } 4389 + 4390 + /** 4391 + * mpi3mr_eh_abort - Callback function for abort error handling 4392 + * @scmd: SCSI command reference 4393 + * 4394 + * Issues Abort Task Management if the command is in LLD scope 4395 + * and verifies if it is aborted successfully, and return status 4396 + * accordingly. 
4397 + * 4398 + * Return: SUCCESS if the abort was successful, otherwise FAILED 4399 + */ 4400 + static int mpi3mr_eh_abort(struct scsi_cmnd *scmd) 4401 + { 4402 + struct mpi3mr_ioc *mrioc = shost_priv(scmd->device->host); 4403 + struct mpi3mr_stgt_priv_data *stgt_priv_data; 4404 + struct mpi3mr_sdev_priv_data *sdev_priv_data; 4405 + struct scmd_priv *cmd_priv; 4406 + u16 dev_handle, timeout = MPI3MR_ABORTTM_TIMEOUT; 4407 + u8 resp_code = 0; 4408 + int retval = FAILED, ret = 0; 4409 + struct request *rq = scsi_cmd_to_rq(scmd); 4410 + unsigned long scmd_age_ms = jiffies_to_msecs(jiffies - scmd->jiffies_at_alloc); 4411 + unsigned long scmd_age_sec = scmd_age_ms / HZ; 4412 + 4413 + sdev_printk(KERN_INFO, scmd->device, 4414 + "%s: attempting abort task for scmd(%p)\n", mrioc->name, scmd); 4415 + 4416 + sdev_printk(KERN_INFO, scmd->device, 4417 + "%s: scmd(0x%p) is outstanding for %lus %lums, timeout %us, retries %d, allowed %d\n", 4418 + mrioc->name, scmd, scmd_age_sec, scmd_age_ms % HZ, rq->timeout / HZ, 4419 + scmd->retries, scmd->allowed); 4420 + 4421 + scsi_print_command(scmd); 4422 + 4423 + sdev_priv_data = scmd->device->hostdata; 4424 + if (!sdev_priv_data || !sdev_priv_data->tgt_priv_data) { 4425 + sdev_printk(KERN_INFO, scmd->device, 4426 + "%s: Device not available, Skip issuing abort task\n", 4427 + mrioc->name); 4428 + retval = SUCCESS; 4429 + goto out; 4430 + } 4431 + 4432 + stgt_priv_data = sdev_priv_data->tgt_priv_data; 4433 + dev_handle = stgt_priv_data->dev_handle; 4434 + 4435 + cmd_priv = scsi_cmd_priv(scmd); 4436 + if (!cmd_priv->in_lld_scope || 4437 + cmd_priv->host_tag == MPI3MR_HOSTTAG_INVALID) { 4438 + sdev_printk(KERN_INFO, scmd->device, 4439 + "%s: scmd (0x%p) not in LLD scope, Skip issuing Abort Task\n", 4440 + mrioc->name, scmd); 4441 + retval = SUCCESS; 4442 + goto out; 4443 + } 4444 + 4445 + if (stgt_priv_data->dev_removed) { 4446 + sdev_printk(KERN_INFO, scmd->device, 4447 + "%s: Device (handle = 0x%04x) removed, Skip issuing Abort Task\n", 
4448 + mrioc->name, dev_handle); 4449 + retval = FAILED; 4450 + goto out; 4451 + } 4452 + 4453 + ret = mpi3mr_issue_tm(mrioc, MPI3_SCSITASKMGMT_TASKTYPE_ABORT_TASK, 4454 + dev_handle, sdev_priv_data->lun_id, MPI3MR_HOSTTAG_BLK_TMS, 4455 + timeout, &mrioc->host_tm_cmds, &resp_code, scmd); 4456 + 4457 + if (ret) 4458 + goto out; 4459 + 4460 + if (cmd_priv->in_lld_scope) { 4461 + sdev_printk(KERN_INFO, scmd->device, 4462 + "%s: Abort task failed. scmd (0x%p) was not terminated\n", 4463 + mrioc->name, scmd); 4464 + goto out; 4465 + } 4466 + 4467 + retval = SUCCESS; 4468 + out: 4469 + sdev_printk(KERN_INFO, scmd->device, 4470 + "%s: Abort Task %s for scmd (0x%p)\n", mrioc->name, 4471 + ((retval == SUCCESS) ? "SUCCEEDED" : "FAILED"), scmd); 4398 4472 4399 4473 return retval; 4400 4474 } ··· 5167 5069 .scan_finished = mpi3mr_scan_finished, 5168 5070 .scan_start = mpi3mr_scan_start, 5169 5071 .change_queue_depth = mpi3mr_change_queue_depth, 5072 + .eh_abort_handler = mpi3mr_eh_abort, 5170 5073 .eh_device_reset_handler = mpi3mr_eh_dev_reset, 5171 5074 .eh_target_reset_handler = mpi3mr_eh_target_reset, 5172 5075 .eh_bus_reset_handler = mpi3mr_eh_bus_reset, ··· 5902 5803 }; 5903 5804 MODULE_DEVICE_TABLE(pci, mpi3mr_pci_id_table); 5904 5805 5905 - static struct pci_error_handlers mpi3mr_err_handler = { 5806 + static const struct pci_error_handlers mpi3mr_err_handler = { 5906 5807 .error_detected = mpi3mr_pcierr_error_detected, 5907 5808 .mmio_enabled = mpi3mr_pcierr_mmio_enabled, 5908 5809 .slot_reset = mpi3mr_pcierr_slot_reset,
+8 -1
drivers/scsi/mpt3sas/mpi/mpi2.h
··· 125 125 * 06-24-19 02.00.55 Bumped MPI2_HEADER_VERSION_UNIT 126 126 * 08-01-19 02.00.56 Bumped MPI2_HEADER_VERSION_UNIT 127 127 * 10-02-19 02.00.57 Bumped MPI2_HEADER_VERSION_UNIT 128 + * 07-20-20 02.00.58 Bumped MPI2_HEADER_VERSION_UNIT 129 + * 03-30-21 02.00.59 Bumped MPI2_HEADER_VERSION_UNIT 130 + * 06-03-22 02.00.60 Bumped MPI2_HEADER_VERSION_UNIT 131 + * 09-20-23 02.00.61 Bumped MPI2_HEADER_VERSION_UNIT 132 + * 09-13-24 02.00.62 Bumped MPI2_HEADER_VERSION_UNIT 133 + * Added MPI2_FUNCTION_MCTP_PASSTHROUGH 128 134 * -------------------------------------------------------------------------- 129 135 */ 130 136 ··· 171 165 172 166 173 167 /* Unit and Dev versioning for this MPI header set */ 174 - #define MPI2_HEADER_VERSION_UNIT (0x39) 168 + #define MPI2_HEADER_VERSION_UNIT (0x3E) 175 169 #define MPI2_HEADER_VERSION_DEV (0x00) 176 170 #define MPI2_HEADER_VERSION_UNIT_MASK (0xFF00) 177 171 #define MPI2_HEADER_VERSION_UNIT_SHIFT (8) ··· 675 669 #define MPI2_FUNCTION_PWR_MGMT_CONTROL (0x30) 676 670 #define MPI2_FUNCTION_SEND_HOST_MESSAGE (0x31) 677 671 #define MPI2_FUNCTION_NVME_ENCAPSULATED (0x33) 672 + #define MPI2_FUNCTION_MCTP_PASSTHROUGH (0x34) 678 673 #define MPI2_FUNCTION_MIN_PRODUCT_SPECIFIC (0xF0) 679 674 #define MPI2_FUNCTION_MAX_PRODUCT_SPECIFIC (0xFF) 680 675
+5
drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h
··· 251 251 * 12-17-18 02.00.47 Swap locations of Slotx2 and Slotx4 in ManPage 7. 252 252 * 08-01-19 02.00.49 Add MPI26_MANPAGE7_FLAG_X2_X4_SLOT_INFO_VALID 253 253 * Add MPI26_IOUNITPAGE1_NVME_WRCACHE_SHIFT 254 + * 09-13-24 02.00.50 Added PCIe 32 GT/s link rate 254 255 */ 255 256 256 257 #ifndef MPI2_CNFG_H ··· 1122 1121 #define MPI2_IOUNITPAGE7_PCIE_SPEED_5_0_GBPS (0x01) 1123 1122 #define MPI2_IOUNITPAGE7_PCIE_SPEED_8_0_GBPS (0x02) 1124 1123 #define MPI2_IOUNITPAGE7_PCIE_SPEED_16_0_GBPS (0x03) 1124 + #define MPI2_IOUNITPAGE7_PCIE_SPEED_32_0_GBPS (0x04) 1125 1125 1126 1126 /*defines for IO Unit Page 7 ProcessorState field */ 1127 1127 #define MPI2_IOUNITPAGE7_PSTATE_MASK_SECOND (0x0000000F) ··· 2303 2301 #define MPI2_SASIOUNIT1_CONTROL_CLEAR_AFFILIATION (0x0001) 2304 2302 2305 2303 /*values for SAS IO Unit Page 1 AdditionalControlFlags */ 2304 + #define MPI2_SASIOUNIT1_ACONTROL_PROD_SPECIFIC_1 (0x8000) 2306 2305 #define MPI2_SASIOUNIT1_ACONTROL_DA_PERSIST_CONNECT (0x0100) 2307 2306 #define MPI2_SASIOUNIT1_ACONTROL_MULTI_PORT_DOMAIN_ILLEGAL (0x0080) 2308 2307 #define MPI2_SASIOUNIT1_ACONTROL_SATA_ASYNCHROUNOUS_NOTIFICATION (0x0040) ··· 3594 3591 #define MPI26_PCIE_NEG_LINK_RATE_5_0 (0x03) 3595 3592 #define MPI26_PCIE_NEG_LINK_RATE_8_0 (0x04) 3596 3593 #define MPI26_PCIE_NEG_LINK_RATE_16_0 (0x05) 3594 + #define MPI26_PCIE_NEG_LINK_RATE_32_0 (0x06) 3597 3595 3598 3596 3599 3597 /**************************************************************************** ··· 3704 3700 #define MPI26_PCIEIOUNIT1_MAX_RATE_5_0 (0x30) 3705 3701 #define MPI26_PCIEIOUNIT1_MAX_RATE_8_0 (0x40) 3706 3702 #define MPI26_PCIEIOUNIT1_MAX_RATE_16_0 (0x50) 3703 + #define MPI26_PCIEIOUNIT1_MAX_RATE_32_0 (0x60) 3707 3704 3708 3705 /*values for PCIe IO Unit Page 1 DMDReportPCIe */ 3709 3706 #define MPI26_PCIEIOUNIT1_DMDRPT_UNIT_MASK (0x80)
+54
drivers/scsi/mpt3sas/mpi/mpi2_ioc.h
··· 179 179 * Added MPI26_IOCFACTS_CAPABILITY_COREDUMP_ENABLED 180 180 * Added MPI2_FW_DOWNLOAD_ITYPE_COREDUMP 181 181 * Added MPI2_FW_UPLOAD_ITYPE_COREDUMP 182 + * 9-13-24 02.00.39 Added MPI26_MCTP_PASSTHROUGH messages 182 183 * -------------------------------------------------------------------------- 183 184 */ 184 185 ··· 383 382 /*ProductID field uses MPI2_FW_HEADER_PID_ */ 384 383 385 384 /*IOCCapabilities */ 385 + #define MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU (0x00800000) 386 386 #define MPI26_IOCFACTS_CAPABILITY_COREDUMP_ENABLED (0x00200000) 387 387 #define MPI26_IOCFACTS_CAPABILITY_PCIE_SRIOV (0x00100000) 388 388 #define MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ (0x00080000) ··· 1800 1798 Mpi26IoUnitControlReply_t, 1801 1799 *pMpi26IoUnitControlReply_t; 1802 1800 1801 + /**************************************************************************** 1802 + * MCTP Passthrough messages (MPI v2.6 and later only.) 1803 + ****************************************************************************/ 1804 + 1805 + /* MCTP Passthrough Request Message */ 1806 + typedef struct _MPI26_MCTP_PASSTHROUGH_REQUEST { 1807 + U8 MsgContext; /* 0x00 */ 1808 + U8 Reserved1[2]; /* 0x01 */ 1809 + U8 Function; /* 0x03 */ 1810 + U8 Reserved2[3]; /* 0x04 */ 1811 + U8 MsgFlags; /* 0x07 */ 1812 + U8 VP_ID; /* 0x08 */ 1813 + U8 VF_ID; /* 0x09 */ 1814 + U16 Reserved3; /* 0x0A */ 1815 + U32 Reserved4; /* 0x0C */ 1816 + U8 Flags; /* 0x10 */ 1817 + U8 Reserved5[3]; /* 0x11 */ 1818 + U32 Reserved6; /* 0x14 */ 1819 + U32 H2DLength; /* 0x18 */ 1820 + U32 D2HLength; /* 0x1C */ 1821 + MPI25_SGE_IO_UNION H2DSGL; /* 0x20 */ 1822 + MPI25_SGE_IO_UNION D2HSGL; /* 0x30 */ 1823 + } MPI26_MCTP_PASSTHROUGH_REQUEST, 1824 + *PTR_MPI26_MCTP_PASSTHROUGH_REQUEST, 1825 + Mpi26MctpPassthroughRequest_t, 1826 + *pMpi26MctpPassthroughRequest_t; 1827 + 1828 + /* values for the MsgContext field */ 1829 + #define MPI26_MCTP_MSG_CONEXT_UNUSED (0x00) 1830 + 1831 + /* values for the Flags field */ 1832 + #define 
MPI26_MCTP_FLAGS_MSG_FORMAT_MPT (0x01) 1833 + 1834 + /* MCTP Passthrough Reply Message */ 1835 + typedef struct _MPI26_MCTP_PASSTHROUGH_REPLY { 1836 + U8 MsgContext; /* 0x00 */ 1837 + U8 Reserved1; /* 0x01 */ 1838 + U8 MsgLength; /* 0x02 */ 1839 + U8 Function; /* 0x03 */ 1840 + U8 Reserved2[3]; /* 0x04 */ 1841 + U8 MsgFlags; /* 0x07 */ 1842 + U8 VP_ID; /* 0x08 */ 1843 + U8 VF_ID; /* 0x09 */ 1844 + U16 Reserved3; /* 0x0A */ 1845 + U16 Reserved4; /* 0x0C */ 1846 + U16 IOCStatus; /* 0x0E */ 1847 + U32 IOCLogInfo; /* 0x10 */ 1848 + U32 ResponseDataLength; /* 0x14 */ 1849 + } MPI26_MCTP_PASSTHROUGH_REPLY, 1850 + *PTR_MPI26_MCTP_PASSTHROUGH_REPLY, 1851 + Mpi26MctpPassthroughReply_t, 1852 + *pMpi26MctpPassthroughReply_t; 1803 1853 1804 1854 #endif
+18 -5
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 1202 1202 ioc->sge_size; 1203 1203 func_str = "nvme_encapsulated"; 1204 1204 break; 1205 + case MPI2_FUNCTION_MCTP_PASSTHROUGH: 1206 + frame_sz = sizeof(Mpi26MctpPassthroughRequest_t) + 1207 + ioc->sge_size; 1208 + func_str = "mctp_passthru"; 1209 + break; 1205 1210 default: 1206 1211 frame_sz = 32; 1207 1212 func_str = "unknown"; ··· 4879 4874 i++; 4880 4875 } 4881 4876 4877 + if (ioc->facts.IOCCapabilities & 4878 + MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) { 4879 + pr_cont("%sMCTP Passthru", i ? "," : ""); 4880 + i++; 4881 + } 4882 + 4882 4883 iounit_pg1_flags = le32_to_cpu(ioc->iounit_pg1.Flags); 4883 4884 if (!(iounit_pg1_flags & MPI2_IOUNITPAGE1_NATIVE_COMMAND_Q_DISABLE)) { 4884 4885 pr_cont("%sNCQ", i ? "," : ""); ··· 8029 8018 8030 8019 mutex_lock(&ioc->hostdiag_unlock_mutex); 8031 8020 if (mpt3sas_base_unlock_and_get_host_diagnostic(ioc, &host_diagnostic)) 8032 - goto out; 8021 + goto unlock; 8033 8022 8034 8023 hcb_size = ioc->base_readl(&ioc->chip->HCBSize); 8035 8024 drsprintk(ioc, ioc_info(ioc, "diag reset: issued\n")); ··· 8049 8038 ioc_info(ioc, 8050 8039 "Invalid host diagnostic register value\n"); 8051 8040 _base_dump_reg_set(ioc); 8052 - goto out; 8041 + goto unlock; 8053 8042 } 8054 8043 if (!(host_diagnostic & MPI2_DIAG_RESET_ADAPTER)) 8055 8044 break; ··· 8085 8074 ioc_err(ioc, "%s: failed going to ready state (ioc_state=0x%x)\n", 8086 8075 __func__, ioc_state); 8087 8076 _base_dump_reg_set(ioc); 8088 - goto out; 8077 + goto fail; 8089 8078 } 8090 8079 8091 8080 pci_cfg_access_unlock(ioc->pdev); 8092 8081 ioc_info(ioc, "diag reset: SUCCESS\n"); 8093 8082 return 0; 8094 8083 8095 - out: 8084 + unlock: 8085 + mutex_unlock(&ioc->hostdiag_unlock_mutex); 8086 + 8087 + fail: 8096 8088 pci_cfg_access_unlock(ioc->pdev); 8097 8089 ioc_err(ioc, "diag reset: FAILED\n"); 8098 - mutex_unlock(&ioc->hostdiag_unlock_mutex); 8099 8090 return -EFAULT; 8100 8091 } 8101 8092
+2 -8
drivers/scsi/mpt3sas/mpt3sas_base.h
··· 77 77 #define MPT3SAS_DRIVER_NAME "mpt3sas" 78 78 #define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>" 79 79 #define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver" 80 - #define MPT3SAS_DRIVER_VERSION "51.100.00.00" 81 - #define MPT3SAS_MAJOR_VERSION 51 80 + #define MPT3SAS_DRIVER_VERSION "52.100.00.00" 81 + #define MPT3SAS_MAJOR_VERSION 52 82 82 #define MPT3SAS_MINOR_VERSION 100 83 83 #define MPT3SAS_BUILD_VERSION 00 84 84 #define MPT3SAS_RELEASE_VERSION 00 ··· 1858 1858 int mpt3sas_config_get_manufacturing_pg1(struct MPT3SAS_ADAPTER *ioc, 1859 1859 Mpi2ConfigReply_t *mpi_reply, Mpi2ManufacturingPage1_t *config_page); 1860 1860 1861 - int mpt3sas_config_get_manufacturing_pg7(struct MPT3SAS_ADAPTER *ioc, 1862 - Mpi2ConfigReply_t *mpi_reply, Mpi2ManufacturingPage7_t *config_page, 1863 - u16 sz); 1864 1861 int mpt3sas_config_get_manufacturing_pg10(struct MPT3SAS_ADAPTER *ioc, 1865 1862 Mpi2ConfigReply_t *mpi_reply, 1866 1863 struct Mpi2ManufacturingPage10_t *config_page); ··· 1883 1886 *mpi_reply, Mpi2IOUnitPage0_t *config_page); 1884 1887 int mpt3sas_config_get_sas_device_pg0(struct MPT3SAS_ADAPTER *ioc, 1885 1888 Mpi2ConfigReply_t *mpi_reply, Mpi2SasDevicePage0_t *config_page, 1886 - u32 form, u32 handle); 1887 - int mpt3sas_config_get_sas_device_pg1(struct MPT3SAS_ADAPTER *ioc, 1888 - Mpi2ConfigReply_t *mpi_reply, Mpi2SasDevicePage1_t *config_page, 1889 1889 u32 form, u32 handle); 1890 1890 int mpt3sas_config_get_pcie_device_pg0(struct MPT3SAS_ADAPTER *ioc, 1891 1891 Mpi2ConfigReply_t *mpi_reply, Mpi26PCIeDevicePage0_t *config_page,
-79
drivers/scsi/mpt3sas/mpt3sas_config.c
··· 577 577 } 578 578 579 579 /** 580 - * mpt3sas_config_get_manufacturing_pg7 - obtain manufacturing page 7 581 - * @ioc: per adapter object 582 - * @mpi_reply: reply mf payload returned from firmware 583 - * @config_page: contents of the config page 584 - * @sz: size of buffer passed in config_page 585 - * Context: sleep. 586 - * 587 - * Return: 0 for success, non-zero for failure. 588 - */ 589 - int 590 - mpt3sas_config_get_manufacturing_pg7(struct MPT3SAS_ADAPTER *ioc, 591 - Mpi2ConfigReply_t *mpi_reply, Mpi2ManufacturingPage7_t *config_page, 592 - u16 sz) 593 - { 594 - Mpi2ConfigRequest_t mpi_request; 595 - int r; 596 - 597 - memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 598 - mpi_request.Function = MPI2_FUNCTION_CONFIG; 599 - mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER; 600 - mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_MANUFACTURING; 601 - mpi_request.Header.PageNumber = 7; 602 - mpi_request.Header.PageVersion = MPI2_MANUFACTURING7_PAGEVERSION; 603 - ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE); 604 - r = _config_request(ioc, &mpi_request, mpi_reply, 605 - MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0); 606 - if (r) 607 - goto out; 608 - 609 - mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; 610 - r = _config_request(ioc, &mpi_request, mpi_reply, 611 - MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page, 612 - sz); 613 - out: 614 - return r; 615 - } 616 - 617 - /** 618 580 * mpt3sas_config_get_manufacturing_pg10 - obtain manufacturing page 10 619 581 * @ioc: per adapter object 620 582 * @mpi_reply: reply mf payload returned from firmware ··· 1160 1198 mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE; 1161 1199 mpi_request.Header.PageVersion = MPI2_SASDEVICE0_PAGEVERSION; 1162 1200 mpi_request.Header.PageNumber = 0; 1163 - ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE); 1164 - r = _config_request(ioc, &mpi_request, mpi_reply, 1165 - MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0); 1166 - if (r) 1167 - 
goto out; 1168 - 1169 - mpi_request.PageAddress = cpu_to_le32(form | handle); 1170 - mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT; 1171 - r = _config_request(ioc, &mpi_request, mpi_reply, 1172 - MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page, 1173 - sizeof(*config_page)); 1174 - out: 1175 - return r; 1176 - } 1177 - 1178 - /** 1179 - * mpt3sas_config_get_sas_device_pg1 - obtain sas device page 1 1180 - * @ioc: per adapter object 1181 - * @mpi_reply: reply mf payload returned from firmware 1182 - * @config_page: contents of the config page 1183 - * @form: GET_NEXT_HANDLE or HANDLE 1184 - * @handle: device handle 1185 - * Context: sleep. 1186 - * 1187 - * Return: 0 for success, non-zero for failure. 1188 - */ 1189 - int 1190 - mpt3sas_config_get_sas_device_pg1(struct MPT3SAS_ADAPTER *ioc, 1191 - Mpi2ConfigReply_t *mpi_reply, Mpi2SasDevicePage1_t *config_page, 1192 - u32 form, u32 handle) 1193 - { 1194 - Mpi2ConfigRequest_t mpi_request; 1195 - int r; 1196 - 1197 - memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t)); 1198 - mpi_request.Function = MPI2_FUNCTION_CONFIG; 1199 - mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER; 1200 - mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED; 1201 - mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE; 1202 - mpi_request.Header.PageVersion = MPI2_SASDEVICE1_PAGEVERSION; 1203 - mpi_request.Header.PageNumber = 1; 1204 1201 ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE); 1205 1202 r = _config_request(ioc, &mpi_request, mpi_reply, 1206 1203 MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
+277 -2
drivers/scsi/mpt3sas/mpt3sas_ctl.c
··· 186 186 case MPI2_FUNCTION_NVME_ENCAPSULATED: 187 187 desc = "nvme_encapsulated"; 188 188 break; 189 + case MPI2_FUNCTION_MCTP_PASSTHROUGH: 190 + desc = "mctp_passthrough"; 191 + break; 189 192 } 190 193 191 194 if (!desc) ··· 656 653 } 657 654 658 655 /** 656 + * _ctl_send_mctp_passthru_req - Send an MCTP passthru request 657 + * @ioc: per adapter object 658 + * @mctp_passthru_req: MPI mctp passhthru request from caller 659 + * @psge: pointer to the H2DSGL 660 + * @data_out_dma: DMA buffer for H2D SGL 661 + * @data_out_sz: H2D length 662 + * @data_in_dma: DMA buffer for D2H SGL 663 + * @data_in_sz: D2H length 664 + * @smid: SMID to submit the request 665 + * 666 + */ 667 + static void 668 + _ctl_send_mctp_passthru_req( 669 + struct MPT3SAS_ADAPTER *ioc, 670 + Mpi26MctpPassthroughRequest_t *mctp_passthru_req, void *psge, 671 + dma_addr_t data_out_dma, int data_out_sz, 672 + dma_addr_t data_in_dma, int data_in_sz, 673 + u16 smid) 674 + { 675 + mctp_passthru_req->H2DLength = data_out_sz; 676 + mctp_passthru_req->D2HLength = data_in_sz; 677 + 678 + /* Build the H2D SGL from the data out buffer */ 679 + ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, 0, 0); 680 + 681 + psge += ioc->sge_size_ieee; 682 + 683 + /* Build the D2H SGL for the data in buffer */ 684 + ioc->build_sg(ioc, psge, 0, 0, data_in_dma, data_in_sz); 685 + 686 + ioc->put_smid_default(ioc, smid); 687 + } 688 + 689 + /** 659 690 * _ctl_do_mpt_command - main handler for MPT3COMMAND opcode 660 691 * @ioc: per adapter object 661 692 * @karg: (struct mpt3_ioctl_command) ··· 716 679 size_t data_in_sz = 0; 717 680 long ret; 718 681 u16 device_handle = MPT3SAS_INVALID_DEVICE_HANDLE; 682 + int tm_ret; 719 683 720 684 issue_reset = 0; 721 685 ··· 830 792 831 793 init_completion(&ioc->ctl_cmds.done); 832 794 switch (mpi_request->Function) { 795 + case MPI2_FUNCTION_MCTP_PASSTHROUGH: 796 + { 797 + Mpi26MctpPassthroughRequest_t *mctp_passthru_req = 798 + (Mpi26MctpPassthroughRequest_t *)request; 799 + 800 + 
if (!(ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU)) { 801 + ioc_err(ioc, "%s: MCTP Passthrough request not supported\n", 802 + __func__); 803 + mpt3sas_base_free_smid(ioc, smid); 804 + ret = -EINVAL; 805 + goto out; 806 + } 807 + 808 + _ctl_send_mctp_passthru_req(ioc, mctp_passthru_req, psge, data_out_dma, 809 + data_out_sz, data_in_dma, data_in_sz, smid); 810 + break; 811 + } 833 812 case MPI2_FUNCTION_NVME_ENCAPSULATED: 834 813 { 835 814 nvme_encap_request = (Mpi26NVMeEncapsulatedRequest_t *)request; ··· 1175 1120 if (pcie_device && (!ioc->tm_custom_handling) && 1176 1121 (!(mpt3sas_scsih_is_pcie_scsi_device( 1177 1122 pcie_device->device_info)))) 1178 - mpt3sas_scsih_issue_locked_tm(ioc, 1123 + tm_ret = mpt3sas_scsih_issue_locked_tm(ioc, 1179 1124 le16_to_cpu(mpi_request->FunctionDependent1), 1180 1125 0, 0, 0, 1181 1126 MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 1182 1127 0, pcie_device->reset_timeout, 1183 1128 MPI26_SCSITASKMGMT_MSGFLAGS_PROTOCOL_LVL_RST_PCIE); 1184 1129 else 1185 - mpt3sas_scsih_issue_locked_tm(ioc, 1130 + tm_ret = mpt3sas_scsih_issue_locked_tm(ioc, 1186 1131 le16_to_cpu(mpi_request->FunctionDependent1), 1187 1132 0, 0, 0, 1188 1133 MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 1189 1134 0, 30, MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET); 1135 + 1136 + if (tm_ret != SUCCESS) { 1137 + ioc_info(ioc, 1138 + "target reset failed, issue hard reset: handle (0x%04x)\n", 1139 + le16_to_cpu(mpi_request->FunctionDependent1)); 1140 + mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER); 1141 + } 1190 1142 } else 1191 1143 mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER); 1192 1144 } ··· 1261 1199 break; 1262 1200 } 1263 1201 karg.bios_version = le32_to_cpu(ioc->bios_pg3.BiosVersion); 1202 + 1203 + karg.driver_capability |= MPT3_IOCTL_IOCINFO_DRIVER_CAP_MCTP_PASSTHRU; 1264 1204 1265 1205 if (copy_to_user(arg, &karg, sizeof(karg))) { 1266 1206 pr_err("failure at %s:%d/%s()!\n", ··· 2849 2785 
mutex_unlock(&ioc->pci_access_mutex); 2850 2786 return ret; 2851 2787 } 2788 +
2789 + /**
2790 + * _ctl_get_mpt_mctp_passthru_adapter - Traverse the IOC list and return the IOC at
2791 + * dev_index position that supports MCTP passthru
2792 + * @dev_index: position in the mpt3sas_ioc_list to search for
2793 + * Return pointer to the IOC on success
2794 + * NULL if device not found
2795 + */
2796 + static struct MPT3SAS_ADAPTER *
2797 + _ctl_get_mpt_mctp_passthru_adapter(int dev_index)
2798 + {
2799 + struct MPT3SAS_ADAPTER *ioc = NULL;
2800 + int count = 0;
2801 +
2802 + spin_lock(&gioc_lock);
2803 + /* Traverse ioc list and return the dev_index'th IOC that supports MCTP passthru */
2804 + list_for_each_entry(ioc, &mpt3sas_ioc_list, list) {
2805 + if (ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) {
2806 + if (count == dev_index) {
2807 + spin_unlock(&gioc_lock);
2808 + return ioc;
2809 + }
 + count++;
2810 + }
2811 + }
2812 + spin_unlock(&gioc_lock);
2813 +
2814 + return NULL;
2815 + }
2816 +
2817 + /**
2818 + * mpt3sas_get_device_count - Retrieve the count of MCTP passthrough
2819 + * capable devices managed by the driver.
2820 + *
2821 + * Returns number of devices that support MCTP passthrough.
2822 + */ 2823 + int 2824 + mpt3sas_get_device_count(void) 2825 + { 2826 + int count = 0; 2827 + struct MPT3SAS_ADAPTER *ioc = NULL; 2828 + 2829 + spin_lock(&gioc_lock); 2830 + /* Traverse ioc list and return number of IOC that support MCTP passthru */ 2831 + list_for_each_entry(ioc, &mpt3sas_ioc_list, list) 2832 + if (ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) 2833 + count++; 2834 + 2835 + spin_unlock(&gioc_lock); 2836 + 2837 + return count; 2838 + } 2839 + EXPORT_SYMBOL(mpt3sas_get_device_count); 2840 + 2841 + /** 2842 + * mpt3sas_send_passthru_cmd - Send an MPI MCTP passthrough command to 2843 + * firmware 2844 + * @command: The MPI MCTP passthrough command to send to firmware 2845 + * 2846 + * Returns 0 on success, anything else is error. 2847 + */ 2848 + int mpt3sas_send_mctp_passthru_req(struct mpt3_passthru_command *command) 2849 + { 2850 + struct MPT3SAS_ADAPTER *ioc; 2851 + MPI2RequestHeader_t *mpi_request = NULL, *request; 2852 + MPI2DefaultReply_t *mpi_reply; 2853 + Mpi26MctpPassthroughRequest_t *mctp_passthru_req; 2854 + u16 smid; 2855 + unsigned long timeout; 2856 + u8 issue_reset = 0; 2857 + u32 sz; 2858 + void *psge; 2859 + void *data_out = NULL; 2860 + dma_addr_t data_out_dma = 0; 2861 + size_t data_out_sz = 0; 2862 + void *data_in = NULL; 2863 + dma_addr_t data_in_dma = 0; 2864 + size_t data_in_sz = 0; 2865 + long ret; 2866 + 2867 + /* Retrieve ioc from dev_index */ 2868 + ioc = _ctl_get_mpt_mctp_passthru_adapter(command->dev_index); 2869 + if (!ioc) 2870 + return -ENODEV; 2871 + 2872 + mutex_lock(&ioc->pci_access_mutex); 2873 + if (ioc->shost_recovery || 2874 + ioc->pci_error_recovery || ioc->is_driver_loading || 2875 + ioc->remove_host) { 2876 + ret = -EAGAIN; 2877 + goto unlock_pci_access; 2878 + } 2879 + 2880 + /* Lock the ctl_cmds mutex to ensure a single ctl cmd is pending */ 2881 + if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex)) { 2882 + ret = -ERESTARTSYS; 2883 + goto unlock_pci_access; 2884 + } 2885 + 2886 + 
if (ioc->ctl_cmds.status != MPT3_CMD_NOT_USED) { 2887 + ioc_err(ioc, "%s: ctl_cmd in use\n", __func__); 2888 + ret = -EAGAIN; 2889 + goto unlock_ctl_cmds; 2890 + } 2891 + 2892 + ret = mpt3sas_wait_for_ioc(ioc, IOC_OPERATIONAL_WAIT_COUNT); 2893 + if (ret) 2894 + goto unlock_ctl_cmds; 2895 + 2896 + mpi_request = (MPI2RequestHeader_t *)command->mpi_request; 2897 + if (mpi_request->Function != MPI2_FUNCTION_MCTP_PASSTHROUGH) { 2898 + ioc_err(ioc, "%s: Invalid request received, Function 0x%x\n", 2899 + __func__, mpi_request->Function); 2900 + ret = -EINVAL; 2901 + goto unlock_ctl_cmds; 2902 + } 2903 + 2904 + /* Use first reserved smid for passthrough commands */ 2905 + smid = ioc->scsiio_depth - INTERNAL_SCSIIO_CMDS_COUNT + 1; 2906 + ret = 0; 2907 + ioc->ctl_cmds.status = MPT3_CMD_PENDING; 2908 + memset(ioc->ctl_cmds.reply, 0, ioc->reply_sz); 2909 + request = mpt3sas_base_get_msg_frame(ioc, smid); 2910 + memset(request, 0, ioc->request_sz); 2911 + memcpy(request, command->mpi_request, sizeof(Mpi26MctpPassthroughRequest_t)); 2912 + ioc->ctl_cmds.smid = smid; 2913 + data_out_sz = command->data_out_size; 2914 + data_in_sz = command->data_in_size; 2915 + 2916 + /* obtain dma-able memory for data transfer */ 2917 + if (data_out_sz) /* WRITE */ { 2918 + data_out = dma_alloc_coherent(&ioc->pdev->dev, data_out_sz, 2919 + &data_out_dma, GFP_ATOMIC); 2920 + if (!data_out) { 2921 + ret = -ENOMEM; 2922 + mpt3sas_base_free_smid(ioc, smid); 2923 + goto out; 2924 + } 2925 + memcpy(data_out, command->data_out_buf_ptr, data_out_sz); 2926 + 2927 + } 2928 + 2929 + if (data_in_sz) /* READ */ { 2930 + data_in = dma_alloc_coherent(&ioc->pdev->dev, data_in_sz, 2931 + &data_in_dma, GFP_ATOMIC); 2932 + if (!data_in) { 2933 + ret = -ENOMEM; 2934 + mpt3sas_base_free_smid(ioc, smid); 2935 + goto out; 2936 + } 2937 + } 2938 + 2939 + psge = &((Mpi26MctpPassthroughRequest_t *)request)->H2DSGL; 2940 + 2941 + init_completion(&ioc->ctl_cmds.done); 2942 + 2943 + mctp_passthru_req = 
(Mpi26MctpPassthroughRequest_t *)request; 2944 + 2945 + _ctl_send_mctp_passthru_req(ioc, mctp_passthru_req, psge, data_out_dma, 2946 + data_out_sz, data_in_dma, data_in_sz, smid); 2947 + 2948 + timeout = command->timeout; 2949 + if (timeout < MPT3_IOCTL_DEFAULT_TIMEOUT) 2950 + timeout = MPT3_IOCTL_DEFAULT_TIMEOUT; 2951 + 2952 + wait_for_completion_timeout(&ioc->ctl_cmds.done, timeout*HZ); 2953 + if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) { 2954 + mpt3sas_check_cmd_timeout(ioc, 2955 + ioc->ctl_cmds.status, mpi_request, 2956 + sizeof(Mpi26MctpPassthroughRequest_t) / 4, issue_reset); 2957 + goto issue_host_reset; 2958 + } 2959 + 2960 + mpi_reply = ioc->ctl_cmds.reply; 2961 + 2962 + /* copy out xdata to user */ 2963 + if (data_in_sz) 2964 + memcpy(command->data_in_buf_ptr, data_in, data_in_sz); 2965 + 2966 + /* copy out reply message frame to user */ 2967 + if (command->max_reply_bytes) { 2968 + sz = min_t(u32, command->max_reply_bytes, ioc->reply_sz); 2969 + memcpy(command->reply_frame_buf_ptr, ioc->ctl_cmds.reply, sz); 2970 + } 2971 + 2972 + issue_host_reset: 2973 + if (issue_reset) { 2974 + ret = -ENODATA; 2975 + mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER); 2976 + } 2977 + 2978 + out: 2979 + /* free memory associated with sg buffers */ 2980 + if (data_in) 2981 + dma_free_coherent(&ioc->pdev->dev, data_in_sz, data_in, 2982 + data_in_dma); 2983 + 2984 + if (data_out) 2985 + dma_free_coherent(&ioc->pdev->dev, data_out_sz, data_out, 2986 + data_out_dma); 2987 + 2988 + ioc->ctl_cmds.status = MPT3_CMD_NOT_USED; 2989 + 2990 + unlock_ctl_cmds: 2991 + mutex_unlock(&ioc->ctl_cmds.mutex); 2992 + 2993 + unlock_pci_access: 2994 + mutex_unlock(&ioc->pci_access_mutex); 2995 + return ret; 2996 + 2997 + } 2998 + EXPORT_SYMBOL(mpt3sas_send_mctp_passthru_req); 2852 2999 2853 3000 /** 2854 3001 * _ctl_ioctl - mpt3ctl main ioctl entry point (unlocked)
+48 -1
drivers/scsi/mpt3sas/mpt3sas_ctl.h
··· 160 160 #define MPT3_IOCTL_INTERFACE_SAS35 (0x07) 161 161 #define MPT2_IOCTL_VERSION_LENGTH (32) 162 162 163 + /* Bits set for mpt3_ioctl_iocinfo.driver_cap */ 164 + #define MPT3_IOCTL_IOCINFO_DRIVER_CAP_MCTP_PASSTHRU 0x1 165 + 163 166 /** 164 167 * struct mpt3_ioctl_iocinfo - generic controller info 165 168 * @hdr - generic header ··· 178 175 * @driver_version - driver version - 32 ASCII characters 179 176 * @rsvd1 - reserved 180 177 * @scsi_id - scsi id of adapter 0 178 + * @driver_capability - driver capabilities 181 179 * @rsvd2 - reserved 182 180 * @pci_information - pci info (2nd revision) 183 181 */ ··· 196 192 uint8_t driver_version[MPT2_IOCTL_VERSION_LENGTH]; 197 193 uint8_t rsvd1; 198 194 uint8_t scsi_id; 199 - uint16_t rsvd2; 195 + uint8_t driver_capability; 196 + uint8_t rsvd2; 200 197 struct mpt3_ioctl_pci_info pci_information; 201 198 }; 202 199 ··· 462 457 struct mpt3_enable_diag_sbr_reload { 463 458 struct mpt3_ioctl_header hdr; 464 459 }; 460 + 461 + /** 462 + * struct mpt3_passthru_command - generic mpt firmware passthru command 463 + * @dev_index - device index 464 + * @timeout - command timeout in seconds. (if zero then use driver default 465 + * value). 466 + * @reply_frame_buf_ptr - MPI reply location 467 + * @data_in_buf_ptr - destination for read 468 + * @data_out_buf_ptr - data source for write 469 + * @max_reply_bytes - maximum number of reply bytes to be sent to app. 
470 + * @data_in_size - number of bytes for data transfer in (read)
471 + * @data_out_size - number of bytes for data transfer out (write)
472 + * @mpi_request - request frame
473 + */
474 + struct mpt3_passthru_command {
475 + u8 dev_index;
476 + uint32_t timeout;
477 + void *reply_frame_buf_ptr;
478 + void *data_in_buf_ptr;
479 + void *data_out_buf_ptr;
480 + uint32_t max_reply_bytes;
481 + uint32_t data_in_size;
482 + uint32_t data_out_size;
483 + Mpi26MctpPassthroughRequest_t *mpi_request;
484 + };
485 +
486 + /*
487 + * mpt3sas_get_device_count - Retrieve the count of MCTP passthrough
488 + * capable devices managed by the driver.
489 + *
490 + * Returns number of devices that support MCTP passthrough.
491 + */
492 + int mpt3sas_get_device_count(void);
493 +
494 + /*
495 + * mpt3sas_send_mctp_passthru_req - Send an MPI MCTP passthrough command to
496 + * firmware
497 + * @command: The MPI MCTP passthrough command to send to firmware
498 +
499 + * Returns 0 on success, anything else is error.
500 + */
501 + int mpt3sas_send_mctp_passthru_req(struct mpt3_passthru_command *command);
465 502
466 503 #endif /* MPT3SAS_CTL_H_INCLUDED */
+2 -2
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 2703 2703 ssp_target = 1; 2704 2704 if (sas_device->device_info & 2705 2705 MPI2_SAS_DEVICE_INFO_SEP) { 2706 - sdev_printk(KERN_WARNING, sdev, 2706 + sdev_printk(KERN_INFO, sdev, 2707 2707 "set ignore_delay_remove for handle(0x%04x)\n", 2708 2708 sas_device_priv_data->sas_target->handle); 2709 2709 sas_device_priv_data->ignore_delay_remove = 1; ··· 12710 12710 }; 12711 12711 MODULE_DEVICE_TABLE(pci, mpt3sas_pci_table); 12712 12712 12713 - static struct pci_error_handlers _mpt3sas_err_handler = { 12713 + static const struct pci_error_handlers _mpt3sas_err_handler = { 12714 12714 .error_detected = scsih_pci_error_detected, 12715 12715 .mmio_enabled = scsih_pci_mmio_enabled, 12716 12716 .slot_reset = scsih_pci_slot_reset,
-10
drivers/scsi/mvsas/mv_sas.c
··· 151 151 return MVS_CHIP_DISP->assign_reg_set(mvi, &dev->taskfileset); 152 152 } 153 153 154 - void mvs_phys_reset(struct mvs_info *mvi, u32 phy_mask, int hard) 155 - { 156 - u32 no; 157 - for_each_phy(phy_mask, phy_mask, no) { 158 - if (!(phy_mask & 1)) 159 - continue; 160 - MVS_CHIP_DISP->phy_reset(mvi, no, hard); 161 - } 162 - } 163 - 164 154 int mvs_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func, 165 155 void *funcdata) 166 156 {
-1
drivers/scsi/mvsas/mv_sas.h
··· 425 425 void mvs_get_sas_addr(void *buf, u32 buflen); 426 426 void mvs_iounmap(void __iomem *regs); 427 427 int mvs_ioremap(struct mvs_info *mvi, int bar, int bar_ex); 428 - void mvs_phys_reset(struct mvs_info *mvi, u32 phy_mask, int hard); 429 428 int mvs_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func, 430 429 void *funcdata); 431 430 void mvs_set_sas_addr(struct mvs_info *mvi, int port_id, u32 off_lo,
+1 -1
drivers/scsi/qedi/qedi_main.c
··· 2876 2876 2877 2877 static enum cpuhp_state qedi_cpuhp_state; 2878 2878 2879 - static struct pci_error_handlers qedi_err_handler = { 2879 + static const struct pci_error_handlers qedi_err_handler = { 2880 2880 .error_detected = qedi_io_error_detected, 2881 2881 }; 2882 2882
+2 -2
drivers/scsi/qla2xxx/qla_sup.c
··· 2136 2136 * @flash_id: Flash ID 2137 2137 * 2138 2138 * This function polls the device until bit 7 of what is read matches data 2139 - * bit 7 or until data bit 5 becomes a 1. If that hapens, the flash ROM timed 2140 - * out (a fatal error). The flash book recommeds reading bit 7 again after 2139 + * bit 7 or until data bit 5 becomes a 1. If that happens, the flash ROM timed 2140 + * out (a fatal error). The flash book recommends reading bit 7 again after 2141 2141 * reading bit 5 as a 1. 2142 2142 * 2143 2143 * Returns 0 on success, else non-zero.
+20 -8
drivers/scsi/scsi.c
··· 510 510 return; 511 511 512 512 for (i = 4; i < vpd_buf->len; i++) { 513 - if (vpd_buf->data[i] == 0x0) 513 + switch (vpd_buf->data[i]) { 514 + case 0x0: 514 515 scsi_update_vpd_page(sdev, 0x0, &sdev->vpd_pg0); 515 - if (vpd_buf->data[i] == 0x80) 516 + break; 517 + case 0x80: 516 518 scsi_update_vpd_page(sdev, 0x80, &sdev->vpd_pg80); 517 - if (vpd_buf->data[i] == 0x83) 519 + break; 520 + case 0x83: 518 521 scsi_update_vpd_page(sdev, 0x83, &sdev->vpd_pg83); 519 - if (vpd_buf->data[i] == 0x89) 522 + break; 523 + case 0x89: 520 524 scsi_update_vpd_page(sdev, 0x89, &sdev->vpd_pg89); 521 - if (vpd_buf->data[i] == 0xb0) 525 + break; 526 + case 0xb0: 522 527 scsi_update_vpd_page(sdev, 0xb0, &sdev->vpd_pgb0); 523 - if (vpd_buf->data[i] == 0xb1) 528 + break; 529 + case 0xb1: 524 530 scsi_update_vpd_page(sdev, 0xb1, &sdev->vpd_pgb1); 525 - if (vpd_buf->data[i] == 0xb2) 531 + break; 532 + case 0xb2: 526 533 scsi_update_vpd_page(sdev, 0xb2, &sdev->vpd_pgb2); 527 - if (vpd_buf->data[i] == 0xb7) 534 + break; 535 + case 0xb7: 528 536 scsi_update_vpd_page(sdev, 0xb7, &sdev->vpd_pgb7); 537 + break; 538 + default: 539 + break; 540 + } 529 541 } 530 542 kfree(vpd_buf); 531 543 }
+791 -137
drivers/scsi/scsi_debug.c
··· 71 71 #define NO_ADDITIONAL_SENSE 0x0 72 72 #define OVERLAP_ATOMIC_COMMAND_ASC 0x0 73 73 #define OVERLAP_ATOMIC_COMMAND_ASCQ 0x23 74 + #define FILEMARK_DETECTED_ASCQ 0x1 75 + #define EOP_EOM_DETECTED_ASCQ 0x2 76 + #define BEGINNING_OF_P_M_DETECTED_ASCQ 0x4 77 + #define EOD_DETECTED_ASCQ 0x5 74 78 #define LOGICAL_UNIT_NOT_READY 0x4 75 79 #define LOGICAL_UNIT_COMMUNICATION_FAILURE 0x8 76 80 #define UNRECOVERED_READ_ERR 0x11 ··· 84 80 #define INVALID_FIELD_IN_CDB 0x24 85 81 #define INVALID_FIELD_IN_PARAM_LIST 0x26 86 82 #define WRITE_PROTECTED 0x27 83 + #define UA_READY_ASC 0x28 87 84 #define UA_RESET_ASC 0x29 88 85 #define UA_CHANGED_ASC 0x2a 86 + #define TOO_MANY_IN_PARTITION_ASC 0x3b 89 87 #define TARGET_CHANGED_ASC 0x3f 90 88 #define LUNS_CHANGED_ASCQ 0x0e 91 89 #define INSUFF_RES_ASC 0x55 ··· 179 173 #define DEF_ZBC_MAX_OPEN_ZONES 8 180 174 #define DEF_ZBC_NR_CONV_ZONES 1 181 175 176 + /* Default parameters for tape drives */ 177 + #define TAPE_DEF_DENSITY 0x0 178 + #define TAPE_BAD_DENSITY 0x65 179 + #define TAPE_DEF_BLKSIZE 0 180 + #define TAPE_MIN_BLKSIZE 512 181 + #define TAPE_MAX_BLKSIZE 1048576 182 + #define TAPE_EW 20 183 + #define TAPE_MAX_PARTITIONS 2 184 + #define TAPE_UNITS 10000 185 + #define TAPE_PARTITION_1_UNITS 1000 186 + 187 + /* The tape block data definitions */ 188 + #define TAPE_BLOCK_FM_FLAG ((u32)0x1 << 30) 189 + #define TAPE_BLOCK_EOD_FLAG ((u32)0x2 << 30) 190 + #define TAPE_BLOCK_MARK_MASK ((u32)0x3 << 30) 191 + #define TAPE_BLOCK_SIZE_MASK (~TAPE_BLOCK_MARK_MASK) 192 + #define TAPE_BLOCK_MARK(a) (a & TAPE_BLOCK_MARK_MASK) 193 + #define TAPE_BLOCK_SIZE(a) (a & TAPE_BLOCK_SIZE_MASK) 194 + #define IS_TAPE_BLOCK_FM(a) ((a & TAPE_BLOCK_FM_FLAG) != 0) 195 + #define IS_TAPE_BLOCK_EOD(a) ((a & TAPE_BLOCK_EOD_FLAG) != 0) 196 + 197 + struct tape_block { 198 + u32 fl_size; 199 + unsigned char data[4]; 200 + }; 201 + 202 + /* Flags for sense data */ 203 + #define SENSE_FLAG_FILEMARK 0x80 204 + #define SENSE_FLAG_EOM 0x40 205 + #define 
SENSE_FLAG_ILI 0x20 206 + 182 207 #define SDEBUG_LUN_0_VAL 0 183 208 184 209 /* bit mask values for sdebug_opts */ ··· 253 216 #define SDEBUG_UA_LUNS_CHANGED 5 254 217 #define SDEBUG_UA_MICROCODE_CHANGED 6 /* simulate firmware change */ 255 218 #define SDEBUG_UA_MICROCODE_CHANGED_WO_RESET 7 256 - #define SDEBUG_NUM_UAS 8 219 + #define SDEBUG_UA_NOT_READY_TO_READY 8 220 + #define SDEBUG_NUM_UAS 9 257 221 258 222 /* when 1==SDEBUG_OPT_MEDIUM_ERR, a medium error is simulated at this 259 223 * sector on read commands: */ ··· 299 261 #define SDEBUG_MAX_CMD_LEN 32 300 262 301 263 #define SDEB_XA_NOT_IN_USE XA_MARK_1 302 - 303 - static struct kmem_cache *queued_cmd_cache; 304 - 305 - #define TO_QUEUED_CMD(scmd) ((void *)(scmd)->host_scribble) 306 - #define ASSIGN_QUEUED_CMD(scmnd, qc) { (scmnd)->host_scribble = (void *) qc; } 307 264 308 265 /* Zone types (zbcr05 table 25) */ 309 266 enum sdebug_z_type { ··· 396 363 ktime_t create_ts; /* time since bootup that this device was created */ 397 364 struct sdeb_zone_state *zstate; 398 365 366 + /* For tapes */ 367 + unsigned int tape_blksize; 368 + unsigned int tape_density; 369 + unsigned char tape_partition; 370 + unsigned char tape_nbr_partitions; 371 + unsigned char tape_pending_nbr_partitions; 372 + unsigned int tape_pending_part_0_size; 373 + unsigned int tape_pending_part_1_size; 374 + unsigned char tape_dce; 375 + unsigned int tape_location[TAPE_MAX_PARTITIONS]; 376 + unsigned int tape_eop[TAPE_MAX_PARTITIONS]; 377 + struct tape_block *tape_blocks[TAPE_MAX_PARTITIONS]; 378 + 399 379 struct dentry *debugfs_entry; 400 380 struct spinlock list_lock; 401 381 struct list_head inject_err_list; ··· 455 409 enum sdeb_defer_type defer_t; 456 410 }; 457 411 458 - struct sdebug_device_access_info { 459 - bool atomic_write; 460 - u64 lba; 461 - u32 num; 462 - struct scsi_cmnd *self; 463 - }; 464 - 465 - struct sdebug_queued_cmd { 466 - /* corresponding bit set in in_use_bm[] in owning struct sdebug_queue 467 - * instance indicates 
this slot is in use. 468 - */ 469 - struct sdebug_defer sd_dp; 470 - struct scsi_cmnd *scmd; 471 - struct sdebug_device_access_info *i; 472 - }; 473 - 474 412 struct sdebug_scsi_cmd { 475 413 spinlock_t lock; 414 + struct sdebug_defer sd_dp; 476 415 }; 477 416 478 417 static atomic_t sdebug_cmnd_count; /* number of incoming commands */ ··· 514 483 SDEB_I_ZONE_OUT = 30, /* 0x94+SA; includes no data xfer */ 515 484 SDEB_I_ZONE_IN = 31, /* 0x95+SA; all have data-in */ 516 485 SDEB_I_ATOMIC_WRITE_16 = 32, 517 - SDEB_I_LAST_ELEM_P1 = 33, /* keep this last (previous + 1) */ 486 + SDEB_I_READ_BLOCK_LIMITS = 33, 487 + SDEB_I_LOCATE = 34, 488 + SDEB_I_WRITE_FILEMARKS = 35, 489 + SDEB_I_SPACE = 36, 490 + SDEB_I_FORMAT_MEDIUM = 37, 491 + SDEB_I_LAST_ELEM_P1 = 38, /* keep this last (previous + 1) */ 518 492 }; 519 493 520 494 521 495 static const unsigned char opcode_ind_arr[256] = { 522 496 /* 0x0; 0x0->0x1f: 6 byte cdbs */ 523 497 SDEB_I_TEST_UNIT_READY, SDEB_I_REZERO_UNIT, 0, SDEB_I_REQUEST_SENSE, 524 - 0, 0, 0, 0, 498 + SDEB_I_FORMAT_MEDIUM, SDEB_I_READ_BLOCK_LIMITS, 0, 0, 525 499 SDEB_I_READ, 0, SDEB_I_WRITE, 0, 0, 0, 0, 0, 526 - 0, 0, SDEB_I_INQUIRY, 0, 0, SDEB_I_MODE_SELECT, SDEB_I_RESERVE, 527 - SDEB_I_RELEASE, 500 + SDEB_I_WRITE_FILEMARKS, SDEB_I_SPACE, SDEB_I_INQUIRY, 0, 0, 501 + SDEB_I_MODE_SELECT, SDEB_I_RESERVE, SDEB_I_RELEASE, 528 502 0, 0, SDEB_I_MODE_SENSE, SDEB_I_START_STOP, 0, SDEB_I_SEND_DIAG, 529 503 SDEB_I_ALLOW_REMOVAL, 0, 530 504 /* 0x20; 0x20->0x3f: 10 byte cdbs */ 531 505 0, 0, 0, 0, 0, SDEB_I_READ_CAPACITY, 0, 0, 532 - SDEB_I_READ, 0, SDEB_I_WRITE, 0, 0, 0, 0, SDEB_I_VERIFY, 506 + SDEB_I_READ, 0, SDEB_I_WRITE, SDEB_I_LOCATE, 0, 0, 0, SDEB_I_VERIFY, 533 507 0, 0, 0, 0, SDEB_I_PRE_FETCH, SDEB_I_SYNC_CACHE, 0, 0, 534 508 0, 0, 0, SDEB_I_WRITE_BUFFER, 0, 0, 0, 0, 535 509 /* 0x40; 0x40->0x5f: 10 byte cdbs */ ··· 609 573 static int resp_close_zone(struct scsi_cmnd *, struct sdebug_dev_info *); 610 574 static int resp_finish_zone(struct scsi_cmnd *, struct 
sdebug_dev_info *); 611 575 static int resp_rwp_zone(struct scsi_cmnd *, struct sdebug_dev_info *); 576 + static int resp_read_blklimits(struct scsi_cmnd *, struct sdebug_dev_info *); 577 + static int resp_locate(struct scsi_cmnd *, struct sdebug_dev_info *); 578 + static int resp_write_filemarks(struct scsi_cmnd *, struct sdebug_dev_info *); 579 + static int resp_space(struct scsi_cmnd *, struct sdebug_dev_info *); 580 + static int resp_rewind(struct scsi_cmnd *, struct sdebug_dev_info *); 581 + static int resp_format_medium(struct scsi_cmnd *, struct sdebug_dev_info *); 612 582 613 583 static int sdebug_do_add_host(bool mk_new_store); 614 584 static int sdebug_add_host_helper(int per_host_idx); ··· 622 580 static int sdebug_add_store(void); 623 581 static void sdebug_erase_store(int idx, struct sdeb_store_info *sip); 624 582 static void sdebug_erase_all_stores(bool apart_from_first); 625 - 626 - static void sdebug_free_queued_cmd(struct sdebug_queued_cmd *sqcp); 627 583 628 584 /* 629 585 * The following are overflow arrays for cdbs that "hit" the same index in ··· 813 773 /* 20 */ 814 774 {0, 0x1e, 0, 0, NULL, NULL, /* ALLOW REMOVAL */ 815 775 {6, 0, 0, 0, 0x3, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 816 - {0, 0x1, 0, 0, resp_start_stop, NULL, /* REWIND ?? 
*/ 776 + {0, 0x1, 0, 0, resp_rewind, NULL, 817 777 {6, 0x1, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 818 778 {0, 0, 0, F_INV_OP | FF_RESPOND, NULL, NULL, /* ATA_PT */ 819 779 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, ··· 840 800 resp_pre_fetch, pre_fetch_iarr, 841 801 {10, 0x2, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 0, 842 802 0, 0, 0, 0} }, /* PRE-FETCH (10) */ 803 + /* READ POSITION (10) */ 843 804 844 805 /* 30 */ 845 806 {ARRAY_SIZE(zone_out_iarr), 0x94, 0x3, F_SA_LOW | F_M_ACCESS, ··· 851 810 resp_report_zones, zone_in_iarr, /* ZONE_IN(16), REPORT ZONES) */ 852 811 {16, 0x0 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 853 812 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xc7} }, 854 - /* 31 */ 813 + /* 32 */ 855 814 {0, 0x0, 0x0, F_D_OUT | FF_MEDIA_IO, 856 815 resp_atomic_write, NULL, /* ATOMIC WRITE 16 */ 857 816 {16, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 858 817 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff} }, 818 + {0, 0x05, 0, F_D_IN, resp_read_blklimits, NULL, /* READ BLOCK LIMITS (6) */ 819 + {6, 0, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 820 + {0, 0x2b, 0, F_D_UNKN, resp_locate, NULL, /* LOCATE (10) */ 821 + {10, 0x07, 0, 0xff, 0xff, 0xff, 0xff, 0, 0xff, 0xc7, 0, 0, 822 + 0, 0, 0, 0} }, 823 + {0, 0x10, 0, F_D_IN, resp_write_filemarks, NULL, /* WRITE FILEMARKS (6) */ 824 + {6, 0x01, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 825 + {0, 0x11, 0, F_D_IN, resp_space, NULL, /* SPACE (6) */ 826 + {6, 0x07, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 827 + {0, 0x4, 0, 0, resp_format_medium, NULL, /* FORMAT MEDIUM (6) */ 828 + {6, 0x3, 0x7, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 829 + /* 38 */ 859 830 /* sentinel */ 860 831 {0xff, 0, 0, 0, NULL, NULL, /* terminating element */ 861 832 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, ··· 1384 1331 my_name, key, asc, asq); 1385 1332 } 1386 1333 1334 + /* Sense data that has information fields for tapes */ 1335 + static void 
mk_sense_info_tape(struct scsi_cmnd *scp, int key, int asc, int asq, 1336 + unsigned int information, unsigned char tape_flags) 1337 + { 1338 + if (!scp->sense_buffer) { 1339 + sdev_printk(KERN_ERR, scp->device, 1340 + "%s: sense_buffer is NULL\n", __func__); 1341 + return; 1342 + } 1343 + memset(scp->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE); 1344 + 1345 + scsi_build_sense(scp, /* sdebug_dsense */ 0, key, asc, asq); 1346 + /* only fixed format so far */ 1347 + 1348 + scp->sense_buffer[0] |= 0x80; /* valid */ 1349 + scp->sense_buffer[2] |= tape_flags; 1350 + put_unaligned_be32(information, &scp->sense_buffer[3]); 1351 + 1352 + if (sdebug_verbose) 1353 + sdev_printk(KERN_INFO, scp->device, 1354 + "%s: [sense_key,asc,ascq]: [0x%x,0x%x,0x%x]\n", 1355 + my_name, key, asc, asq); 1356 + } 1357 + 1387 1358 static void mk_sense_invalid_opcode(struct scsi_cmnd *scp) 1388 1359 { 1389 1360 mk_sense_buffer(scp, ILLEGAL_REQUEST, INVALID_OPCODE, 0); ··· 1569 1492 LUNS_CHANGED_ASCQ); 1570 1493 if (sdebug_verbose) 1571 1494 cp = "reported luns data has changed"; 1495 + break; 1496 + case SDEBUG_UA_NOT_READY_TO_READY: 1497 + mk_sense_buffer(scp, UNIT_ATTENTION, UA_READY_ASC, 1498 + 0); 1499 + if (sdebug_verbose) 1500 + cp = "not ready to ready transition/media change"; 1572 1501 break; 1573 1502 default: 1574 1503 pr_warn("unexpected unit attention code=%d\n", k); ··· 2279 2196 changing = (stopped_state != want_stop); 2280 2197 if (changing) 2281 2198 atomic_xchg(&devip->stopped, want_stop); 2199 + if (sdebug_ptype == TYPE_TAPE && !want_stop) { 2200 + int i; 2201 + 2202 + set_bit(SDEBUG_UA_NOT_READY_TO_READY, devip->uas_bm); /* not legal! 
*/ 2203 + for (i = 0; i < TAPE_MAX_PARTITIONS; i++) 2204 + devip->tape_location[i] = 0; 2205 + devip->tape_partition = 0; 2206 + } 2282 2207 if (!changing || (cmd[1] & 0x1)) /* state unchanged or IMMED bit set in cdb */ 2283 2208 return SDEG_RES_IMMED_MASK; 2284 2209 else ··· 2819 2728 return sizeof(sas_sha_m_pg); 2820 2729 } 2821 2730 2731 + static unsigned char partition_pg[] = {0x11, 12, 1, 0, 0x24, 3, 9, 0, 2732 + 0xff, 0xff, 0x00, 0x00}; 2733 + 2734 + static int resp_partition_m_pg(unsigned char *p, int pcontrol, int target) 2735 + { /* Partition page for mode_sense (tape) */ 2736 + memcpy(p, partition_pg, sizeof(partition_pg)); 2737 + if (pcontrol == 1) 2738 + memset(p + 2, 0, sizeof(partition_pg) - 2); 2739 + return sizeof(partition_pg); 2740 + } 2741 + 2742 + static int process_medium_part_m_pg(struct sdebug_dev_info *devip, 2743 + unsigned char *new, int pg_len) 2744 + { 2745 + int new_nbr, p0_size, p1_size; 2746 + 2747 + if ((new[4] & 0x80) != 0) { /* FDP */ 2748 + partition_pg[4] |= 0x80; 2749 + devip->tape_pending_nbr_partitions = TAPE_MAX_PARTITIONS; 2750 + devip->tape_pending_part_0_size = TAPE_UNITS - TAPE_PARTITION_1_UNITS; 2751 + devip->tape_pending_part_1_size = TAPE_PARTITION_1_UNITS; 2752 + } else { 2753 + new_nbr = new[3] + 1; 2754 + if (new_nbr > TAPE_MAX_PARTITIONS) 2755 + return 3; 2756 + if ((new[4] & 0x40) != 0) { /* SDP */ 2757 + p1_size = TAPE_PARTITION_1_UNITS; 2758 + p0_size = TAPE_UNITS - p1_size; 2759 + if (p0_size < 100) 2760 + return 4; 2761 + } else if ((new[4] & 0x20) != 0) { 2762 + if (new_nbr > 1) { 2763 + p0_size = get_unaligned_be16(new + 8); 2764 + p1_size = get_unaligned_be16(new + 10); 2765 + if (p1_size == 0xFFFF) 2766 + p1_size = TAPE_UNITS - p0_size; 2767 + else if (p0_size == 0xFFFF) 2768 + p0_size = TAPE_UNITS - p1_size; 2769 + if (p0_size < 100 || p1_size < 100) 2770 + return 8; 2771 + } else { 2772 + p0_size = TAPE_UNITS; 2773 + p1_size = 0; 2774 + } 2775 + } else 2776 + return 6; 2777 + 
devip->tape_pending_nbr_partitions = new_nbr; 2778 + devip->tape_pending_part_0_size = p0_size; 2779 + devip->tape_pending_part_1_size = p1_size; 2780 + partition_pg[3] = new_nbr; 2781 + devip->tape_pending_nbr_partitions = new_nbr; 2782 + } 2783 + 2784 + return 0; 2785 + } 2786 + 2787 + static int resp_compression_m_pg(unsigned char *p, int pcontrol, int target, 2788 + unsigned char dce) 2789 + { /* Compression page for mode_sense (tape) */ 2790 + unsigned char compression_pg[] = {0x0f, 14, 0x40, 0, 0, 0, 0, 0, 2791 + 0, 0, 0, 0, 00, 00}; 2792 + 2793 + memcpy(p, compression_pg, sizeof(compression_pg)); 2794 + if (dce) 2795 + p[2] |= 0x80; 2796 + if (pcontrol == 1) 2797 + memset(p + 2, 0, sizeof(compression_pg) - 2); 2798 + return sizeof(compression_pg); 2799 + } 2800 + 2822 2801 /* PAGE_SIZE is more than necessary but provides room for future expansion. */ 2823 2802 #define SDEBUG_MAX_MSENSE_SZ PAGE_SIZE 2824 2803 ··· 2903 2742 unsigned char *ap; 2904 2743 unsigned char *arr __free(kfree); 2905 2744 unsigned char *cmd = scp->cmnd; 2906 - bool dbd, llbaa, msense_6, is_disk, is_zbc; 2745 + bool dbd, llbaa, msense_6, is_disk, is_zbc, is_tape; 2907 2746 2908 2747 arr = kzalloc(SDEBUG_MAX_MSENSE_SZ, GFP_ATOMIC); 2909 2748 if (!arr) ··· 2916 2755 llbaa = msense_6 ? false : !!(cmd[1] & 0x10); 2917 2756 is_disk = (sdebug_ptype == TYPE_DISK); 2918 2757 is_zbc = devip->zoned; 2919 - if ((is_disk || is_zbc) && !dbd) 2758 + is_tape = (sdebug_ptype == TYPE_TAPE); 2759 + if ((is_disk || is_zbc || is_tape) && !dbd) 2920 2760 bd_len = llbaa ? 
16 : 8; 2921 2761 else 2922 2762 bd_len = 0; ··· 2955 2793 put_unaligned_be32(0xffffffff, ap + 0); 2956 2794 else 2957 2795 put_unaligned_be32(sdebug_capacity, ap + 0); 2958 - put_unaligned_be16(sdebug_sector_size, ap + 6); 2796 + if (is_tape) { 2797 + ap[0] = devip->tape_density; 2798 + put_unaligned_be16(devip->tape_blksize, ap + 6); 2799 + } else 2800 + put_unaligned_be16(sdebug_sector_size, ap + 6); 2959 2801 offset += bd_len; 2960 2802 ap = arr + offset; 2961 2803 } else if (16 == bd_len) { 2804 + if (is_tape) { 2805 + mk_sense_invalid_fld(scp, SDEB_IN_DATA, 1, 4); 2806 + return check_condition_result; 2807 + } 2962 2808 put_unaligned_be64((u64)sdebug_capacity, ap + 0); 2963 2809 put_unaligned_be32(sdebug_sector_size, ap + 12); 2964 2810 offset += bd_len; 2965 2811 ap = arr + offset; 2966 2812 } 2813 + if (cmd[2] == 0) 2814 + goto only_bd; /* Only block descriptor requested */ 2967 2815 2968 2816 /* 2969 2817 * N.B. If len>0 before resp_*_pg() call, then form of that call should be: ··· 3029 2857 } 3030 2858 offset += len; 3031 2859 break; 2860 + case 0xf: /* Compression Mode Page (tape) */ 2861 + if (!is_tape) 2862 + goto bad_pcode; 2863 + len = resp_compression_m_pg(ap, pcontrol, target, devip->tape_dce); 2864 + offset += len; 2865 + break; 2866 + case 0x11: /* Partition Mode Page (tape) */ 2867 + if (!is_tape) 2868 + goto bad_pcode; 2869 + len = resp_partition_m_pg(ap, pcontrol, target); 2870 + offset += len; 2871 + break; 3032 2872 case 0x19: /* if spc==1 then sas phy, control+discover */ 3033 2873 if (subpcode > 0x2 && subpcode < 0xff) 3034 2874 goto bad_subpcode; ··· 3086 2902 default: 3087 2903 goto bad_pcode; 3088 2904 } 2905 + only_bd: 3089 2906 if (msense_6) 3090 2907 arr[0] = offset - 1; 3091 2908 else ··· 3130 2945 __func__, param_len, res); 3131 2946 md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2); 3132 2947 bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6); 3133 - off = bd_len + (mselect6 ? 
4 : 8); 3134 - if (md_len > 2 || off >= res) { 2948 + off = (mselect6 ? 4 : 8); 2949 + if (sdebug_ptype == TYPE_TAPE) { 2950 + int blksize; 2951 + 2952 + if (bd_len != 8) { 2953 + mk_sense_invalid_fld(scp, SDEB_IN_DATA, 2954 + mselect6 ? 3 : 6, -1); 2955 + return check_condition_result; 2956 + } 2957 + if (arr[off] == TAPE_BAD_DENSITY) { 2958 + mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1); 2959 + return check_condition_result; 2960 + } 2961 + blksize = get_unaligned_be16(arr + off + 6); 2962 + if (blksize != 0 && 2963 + (blksize < TAPE_MIN_BLKSIZE || 2964 + blksize > TAPE_MAX_BLKSIZE || 2965 + (blksize % 4) != 0)) { 2966 + mk_sense_invalid_fld(scp, SDEB_IN_DATA, 1, -1); 2967 + return check_condition_result; 2968 + } 2969 + devip->tape_density = arr[off]; 2970 + devip->tape_blksize = blksize; 2971 + } 2972 + off += bd_len; 2973 + if (off >= res) 2974 + return 0; /* No page written, just descriptors */ 2975 + if (md_len > 2) { 3135 2976 mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1); 3136 2977 return check_condition_result; 3137 2978 } ··· 3195 2984 goto set_mode_changed_ua; 3196 2985 } 3197 2986 break; 2987 + case 0xf: /* Compression mode page */ 2988 + if (sdebug_ptype != TYPE_TAPE) 2989 + goto bad_pcode; 2990 + if ((arr[off + 2] & 0x40) != 0) { 2991 + devip->tape_dce = (arr[off + 2] & 0x80) != 0; 2992 + return 0; 2993 + } 2994 + break; 2995 + case 0x11: /* Medium Partition Mode Page (tape) */ 2996 + if (sdebug_ptype == TYPE_TAPE) { 2997 + int fld; 2998 + 2999 + fld = process_medium_part_m_pg(devip, &arr[off], pg_len); 3000 + if (fld == 0) 3001 + return 0; 3002 + mk_sense_invalid_fld(scp, SDEB_IN_DATA, fld, -1); 3003 + return check_condition_result; 3004 + } 3005 + break; 3198 3006 case 0x1c: /* Informational Exceptions Mode page */ 3199 3007 if (iec_m_pg[1] == arr[off + 1]) { 3200 3008 memcpy(iec_m_pg + 2, arr + off + 2, ··· 3229 2999 set_mode_changed_ua: 3230 3000 set_bit(SDEBUG_UA_MODE_CHANGED, devip->uas_bm); 3231 3001 return 0; 3002 + 3003 + bad_pcode: 3004 
+ mk_sense_invalid_fld(scp, SDEB_IN_CDB, 2, 5); 3005 + return check_condition_result; 3232 3006 } 3233 3007 3234 3008 static int resp_temp_l_pg(unsigned char *arr) ··· 3370 3136 len = min_t(u32, get_unaligned_be16(arr + 2) + 4, alloc_len); 3371 3137 return fill_from_dev_buffer(scp, arr, 3372 3138 min_t(u32, len, SDEBUG_MAX_INQ_ARR_SZ)); 3139 + } 3140 + 3141 + enum {SDEBUG_READ_BLOCK_LIMITS_ARR_SZ = 6}; 3142 + static int resp_read_blklimits(struct scsi_cmnd *scp, 3143 + struct sdebug_dev_info *devip) 3144 + { 3145 + unsigned char arr[SDEBUG_READ_BLOCK_LIMITS_ARR_SZ]; 3146 + 3147 + arr[0] = 4; 3148 + put_unaligned_be24(TAPE_MAX_BLKSIZE, arr + 1); 3149 + put_unaligned_be16(TAPE_MIN_BLKSIZE, arr + 4); 3150 + return fill_from_dev_buffer(scp, arr, SDEBUG_READ_BLOCK_LIMITS_ARR_SZ); 3151 + } 3152 + 3153 + static int resp_locate(struct scsi_cmnd *scp, 3154 + struct sdebug_dev_info *devip) 3155 + { 3156 + unsigned char *cmd = scp->cmnd; 3157 + unsigned int i, pos; 3158 + struct tape_block *blp; 3159 + int partition; 3160 + 3161 + if ((cmd[1] & 0x02) != 0) { 3162 + if (cmd[8] >= devip->tape_nbr_partitions) { 3163 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 8, -1); 3164 + return check_condition_result; 3165 + } 3166 + devip->tape_partition = cmd[8]; 3167 + } 3168 + pos = get_unaligned_be32(cmd + 3); 3169 + partition = devip->tape_partition; 3170 + 3171 + for (i = 0, blp = devip->tape_blocks[partition]; 3172 + i < pos && i < devip->tape_eop[partition]; i++, blp++) 3173 + if (IS_TAPE_BLOCK_EOD(blp->fl_size)) 3174 + break; 3175 + if (i < pos) { 3176 + devip->tape_location[partition] = i; 3177 + mk_sense_buffer(scp, BLANK_CHECK, 0x05, 0); 3178 + return check_condition_result; 3179 + } 3180 + devip->tape_location[partition] = pos; 3181 + 3182 + return 0; 3183 + } 3184 + 3185 + static int resp_write_filemarks(struct scsi_cmnd *scp, 3186 + struct sdebug_dev_info *devip) 3187 + { 3188 + unsigned char *cmd = scp->cmnd; 3189 + unsigned int i, count, pos; 3190 + u32 data; 3191 + int 
partition = devip->tape_partition; 3192 + 3193 + if ((cmd[1] & 0xfe) != 0) { /* probably write setmarks, not in >= SCSI-3 */ 3194 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 1, 1); 3195 + return check_condition_result; 3196 + } 3197 + count = get_unaligned_be24(cmd + 2); 3198 + data = TAPE_BLOCK_FM_FLAG; 3199 + for (i = 0, pos = devip->tape_location[partition]; i < count; i++, pos++) { 3200 + if (pos >= devip->tape_eop[partition] - 1) { /* don't overwrite EOD */ 3201 + devip->tape_location[partition] = devip->tape_eop[partition] - 1; 3202 + mk_sense_info_tape(scp, VOLUME_OVERFLOW, NO_ADDITIONAL_SENSE, 3203 + EOP_EOM_DETECTED_ASCQ, count, SENSE_FLAG_EOM); 3204 + return check_condition_result; 3205 + } 3206 + (devip->tape_blocks[partition] + pos)->fl_size = data; 3207 + } 3208 + (devip->tape_blocks[partition] + pos)->fl_size = 3209 + TAPE_BLOCK_EOD_FLAG; 3210 + devip->tape_location[partition] = pos; 3211 + 3212 + return 0; 3213 + } 3214 + 3215 + static int resp_space(struct scsi_cmnd *scp, 3216 + struct sdebug_dev_info *devip) 3217 + { 3218 + unsigned char *cmd = scp->cmnd, code; 3219 + int i = 0, pos, count; 3220 + struct tape_block *blp; 3221 + int partition = devip->tape_partition; 3222 + 3223 + count = get_unaligned_be24(cmd + 2); 3224 + if ((count & 0x800000) != 0) /* extend negative to 32-bit count */ 3225 + count |= 0xff000000; 3226 + code = cmd[1] & 0x0f; 3227 + 3228 + pos = devip->tape_location[partition]; 3229 + if (code == 0) { /* blocks */ 3230 + if (count < 0) { 3231 + count = (-count); 3232 + pos -= 1; 3233 + for (i = 0, blp = devip->tape_blocks[partition] + pos; i < count; 3234 + i++) { 3235 + if (pos < 0) 3236 + goto is_bop; 3237 + else if (IS_TAPE_BLOCK_FM(blp->fl_size)) 3238 + goto is_fm; 3239 + if (i > 0) { 3240 + pos--; 3241 + blp--; 3242 + } 3243 + } 3244 + } else if (count > 0) { 3245 + for (i = 0, blp = devip->tape_blocks[partition] + pos; i < count; 3246 + i++, pos++, blp++) { 3247 + if (IS_TAPE_BLOCK_EOD(blp->fl_size)) 3248 + goto is_eod; 3249 + 
if (IS_TAPE_BLOCK_FM(blp->fl_size)) { 3250 + pos += 1; 3251 + goto is_fm; 3252 + } 3253 + if (pos >= devip->tape_eop[partition]) 3254 + goto is_eop; 3255 + } 3256 + } 3257 + } else if (code == 1) { /* filemarks */ 3258 + if (count < 0) { 3259 + count = (-count); 3260 + if (pos == 0) 3261 + goto is_bop; 3262 + else { 3263 + for (i = 0, blp = devip->tape_blocks[partition] + pos; 3264 + i < count && pos >= 0; i++, pos--, blp--) { 3265 + for (pos--, blp-- ; !IS_TAPE_BLOCK_FM(blp->fl_size) && 3266 + pos >= 0; pos--, blp--) 3267 + ; /* empty */ 3268 + if (pos < 0) 3269 + goto is_bop; 3270 + } 3271 + } 3272 + pos += 1; 3273 + } else if (count > 0) { 3274 + for (i = 0, blp = devip->tape_blocks[partition] + pos; 3275 + i < count; i++, pos++, blp++) { 3276 + for ( ; !IS_TAPE_BLOCK_FM(blp->fl_size) && 3277 + !IS_TAPE_BLOCK_EOD(blp->fl_size) && 3278 + pos < devip->tape_eop[partition]; 3279 + pos++, blp++) 3280 + ; /* empty */ 3281 + if (IS_TAPE_BLOCK_EOD(blp->fl_size)) 3282 + goto is_eod; 3283 + if (pos >= devip->tape_eop[partition]) 3284 + goto is_eop; 3285 + } 3286 + } 3287 + } else if (code == 3) { /* EOD */ 3288 + for (blp = devip->tape_blocks[partition] + pos; 3289 + !IS_TAPE_BLOCK_EOD(blp->fl_size) && pos < devip->tape_eop[partition]; 3290 + pos++, blp++) 3291 + ; /* empty */ 3292 + if (pos >= devip->tape_eop[partition]) 3293 + goto is_eop; 3294 + } else { 3295 + /* sequential filemarks not supported */ 3296 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 8, -1); 3297 + return check_condition_result; 3298 + } 3299 + devip->tape_location[partition] = pos; 3300 + return 0; 3301 + 3302 + is_fm: 3303 + devip->tape_location[partition] = pos; 3304 + mk_sense_info_tape(scp, NO_SENSE, NO_ADDITIONAL_SENSE, 3305 + FILEMARK_DETECTED_ASCQ, count - i, 3306 + SENSE_FLAG_FILEMARK); 3307 + return check_condition_result; 3308 + 3309 + is_eod: 3310 + devip->tape_location[partition] = pos; 3311 + mk_sense_info_tape(scp, BLANK_CHECK, NO_ADDITIONAL_SENSE, 3312 + EOD_DETECTED_ASCQ, count - i, 3313 + 
0); 3314 + return check_condition_result; 3315 + 3316 + is_bop: 3317 + devip->tape_location[partition] = 0; 3318 + mk_sense_info_tape(scp, NO_SENSE, NO_ADDITIONAL_SENSE, 3319 + BEGINNING_OF_P_M_DETECTED_ASCQ, count - i, 3320 + SENSE_FLAG_EOM); 3321 + devip->tape_location[partition] = 0; 3322 + return check_condition_result; 3323 + 3324 + is_eop: 3325 + devip->tape_location[partition] = devip->tape_eop[partition] - 1; 3326 + mk_sense_info_tape(scp, MEDIUM_ERROR, NO_ADDITIONAL_SENSE, 3327 + EOP_EOM_DETECTED_ASCQ, (unsigned int)i, 3328 + SENSE_FLAG_EOM); 3329 + return check_condition_result; 3330 + } 3331 + 3332 + static int resp_rewind(struct scsi_cmnd *scp, 3333 + struct sdebug_dev_info *devip) 3334 + { 3335 + devip->tape_location[devip->tape_partition] = 0; 3336 + 3337 + return 0; 3338 + } 3339 + 3340 + static int partition_tape(struct sdebug_dev_info *devip, int nbr_partitions, 3341 + int part_0_size, int part_1_size) 3342 + { 3343 + int i; 3344 + 3345 + if (part_0_size + part_1_size > TAPE_UNITS) 3346 + return -1; 3347 + devip->tape_eop[0] = part_0_size; 3348 + devip->tape_blocks[0]->fl_size = TAPE_BLOCK_EOD_FLAG; 3349 + devip->tape_eop[1] = part_1_size; 3350 + devip->tape_blocks[1] = devip->tape_blocks[0] + 3351 + devip->tape_eop[0]; 3352 + devip->tape_blocks[1]->fl_size = TAPE_BLOCK_EOD_FLAG; 3353 + 3354 + for (i = 0 ; i < TAPE_MAX_PARTITIONS; i++) 3355 + devip->tape_location[i] = 0; 3356 + 3357 + devip->tape_nbr_partitions = nbr_partitions; 3358 + devip->tape_partition = 0; 3359 + 3360 + partition_pg[3] = nbr_partitions - 1; 3361 + put_unaligned_be16(devip->tape_eop[0], partition_pg + 8); 3362 + put_unaligned_be16(devip->tape_eop[1], partition_pg + 10); 3363 + 3364 + return nbr_partitions; 3365 + } 3366 + 3367 + static int resp_format_medium(struct scsi_cmnd *scp, 3368 + struct sdebug_dev_info *devip) 3369 + { 3370 + int res = 0; 3371 + unsigned char *cmd = scp->cmnd; 3372 + 3373 + if (sdebug_ptype != TYPE_TAPE) { 3374 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 
0, -1); 3375 + return check_condition_result; 3376 + } 3377 + if (cmd[2] > 2) { 3378 + mk_sense_invalid_fld(scp, SDEB_IN_DATA, 2, -1); 3379 + return check_condition_result; 3380 + } 3381 + if (cmd[2] != 0) { 3382 + if (devip->tape_pending_nbr_partitions > 0) { 3383 + res = partition_tape(devip, 3384 + devip->tape_pending_nbr_partitions, 3385 + devip->tape_pending_part_0_size, 3386 + devip->tape_pending_part_1_size); 3387 + } else 3388 + res = partition_tape(devip, devip->tape_nbr_partitions, 3389 + devip->tape_eop[0], devip->tape_eop[1]); 3390 + } else 3391 + res = partition_tape(devip, 1, TAPE_UNITS, 0); 3392 + if (res < 0) 3393 + return -EINVAL; 3394 + 3395 + devip->tape_pending_nbr_partitions = -1; 3396 + 3397 + return 0; 3373 3398 } 3374 3399 3375 3400 static inline bool sdebug_dev_is_zoned(struct sdebug_dev_info *devip) ··· 4364 3871 return ret; 4365 3872 } 4366 3873 3874 + static int resp_read_tape(struct scsi_cmnd *scp, struct sdebug_dev_info *devip) 3875 + { 3876 + u32 i, num, transfer, size; 3877 + u8 *cmd = scp->cmnd; 3878 + struct scsi_data_buffer *sdb = &scp->sdb; 3879 + int partition = devip->tape_partition; 3880 + u32 pos = devip->tape_location[partition]; 3881 + struct tape_block *blp; 3882 + bool fixed, sili; 3883 + 3884 + if (cmd[0] != READ_6) { /* Only Read(6) supported */ 3885 + mk_sense_invalid_opcode(scp); 3886 + return illegal_condition_result; 3887 + } 3888 + fixed = (cmd[1] & 0x1) != 0; 3889 + sili = (cmd[1] & 0x2) != 0; 3890 + if (fixed && sili) { 3891 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 1, 1); 3892 + return check_condition_result; 3893 + } 3894 + 3895 + transfer = get_unaligned_be24(cmd + 2); 3896 + if (fixed) { 3897 + num = transfer; 3898 + size = devip->tape_blksize; 3899 + } else { 3900 + if (transfer < TAPE_MIN_BLKSIZE || 3901 + transfer > TAPE_MAX_BLKSIZE) { 3902 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 2, -1); 3903 + return check_condition_result; 3904 + } 3905 + num = 1; 3906 + size = transfer; 3907 + } 3908 + 3909 + for (i = 
0, blp = devip->tape_blocks[partition] + pos; 3910 + i < num && pos < devip->tape_eop[partition]; 3911 + i++, pos++, blp++) { 3912 + devip->tape_location[partition] = pos + 1; 3913 + if (IS_TAPE_BLOCK_FM(blp->fl_size)) { 3914 + mk_sense_info_tape(scp, NO_SENSE, NO_ADDITIONAL_SENSE, 3915 + FILEMARK_DETECTED_ASCQ, fixed ? num - i : size, 3916 + SENSE_FLAG_FILEMARK); 3917 + scsi_set_resid(scp, (num - i) * size); 3918 + return check_condition_result; 3919 + } 3920 + /* Assume no REW */ 3921 + if (IS_TAPE_BLOCK_EOD(blp->fl_size)) { 3922 + mk_sense_info_tape(scp, BLANK_CHECK, NO_ADDITIONAL_SENSE, 3923 + EOD_DETECTED_ASCQ, fixed ? num - i : size, 3924 + 0); 3925 + devip->tape_location[partition] = pos; 3926 + scsi_set_resid(scp, (num - i) * size); 3927 + return check_condition_result; 3928 + } 3929 + sg_zero_buffer(sdb->table.sgl, sdb->table.nents, 3930 + size, i * size); 3931 + sg_copy_buffer(sdb->table.sgl, sdb->table.nents, 3932 + &(blp->data), 4, i * size, false); 3933 + if (fixed) { 3934 + if (blp->fl_size != devip->tape_blksize) { 3935 + scsi_set_resid(scp, (num - i) * size); 3936 + mk_sense_info_tape(scp, NO_SENSE, NO_ADDITIONAL_SENSE, 3937 + 0, num - i, 3938 + SENSE_FLAG_ILI); 3939 + return check_condition_result; 3940 + } 3941 + } else { 3942 + if (blp->fl_size != size) { 3943 + if (blp->fl_size < size) 3944 + scsi_set_resid(scp, size - blp->fl_size); 3945 + if (!sili) { 3946 + mk_sense_info_tape(scp, NO_SENSE, NO_ADDITIONAL_SENSE, 3947 + 0, size - blp->fl_size, 3948 + SENSE_FLAG_ILI); 3949 + return check_condition_result; 3950 + } 3951 + } 3952 + } 3953 + } 3954 + if (pos >= devip->tape_eop[partition]) { 3955 + mk_sense_info_tape(scp, NO_SENSE, NO_ADDITIONAL_SENSE, 3956 + EOP_EOM_DETECTED_ASCQ, fixed ? 
num - i : size, 3957 + SENSE_FLAG_EOM); 3958 + devip->tape_location[partition] = pos - 1; 3959 + return check_condition_result; 3960 + } 3961 + devip->tape_location[partition] = pos; 3962 + 3963 + return 0; 3964 + } 3965 + 4367 3966 static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip) 4368 3967 { 4369 3968 bool check_prot; ··· 4466 3881 struct sdeb_store_info *sip = devip2sip(devip, true); 4467 3882 u8 *cmd = scp->cmnd; 4468 3883 bool meta_data_locked = false; 3884 + 3885 + if (sdebug_ptype == TYPE_TAPE) 3886 + return resp_read_tape(scp, devip); 4469 3887 4470 3888 switch (cmd[0]) { 4471 3889 case READ_16: ··· 4766 4178 } 4767 4179 } 4768 4180 4181 + static int resp_write_tape(struct scsi_cmnd *scp, struct sdebug_dev_info *devip) 4182 + { 4183 + u32 i, num, transfer, size, written = 0; 4184 + u8 *cmd = scp->cmnd; 4185 + struct scsi_data_buffer *sdb = &scp->sdb; 4186 + int partition = devip->tape_partition; 4187 + int pos = devip->tape_location[partition]; 4188 + struct tape_block *blp; 4189 + bool fixed, ew; 4190 + 4191 + if (cmd[0] != WRITE_6) { /* Only Write(6) supported */ 4192 + mk_sense_invalid_opcode(scp); 4193 + return illegal_condition_result; 4194 + } 4195 + 4196 + fixed = (cmd[1] & 1) != 0; 4197 + transfer = get_unaligned_be24(cmd + 2); 4198 + if (fixed) { 4199 + num = transfer; 4200 + size = devip->tape_blksize; 4201 + } else { 4202 + if (transfer < TAPE_MIN_BLKSIZE || 4203 + transfer > TAPE_MAX_BLKSIZE) { 4204 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 2, -1); 4205 + return check_condition_result; 4206 + } 4207 + num = 1; 4208 + size = transfer; 4209 + } 4210 + 4211 + scsi_set_resid(scp, num * transfer); 4212 + for (i = 0, blp = devip->tape_blocks[partition] + pos, ew = false; 4213 + i < num && pos < devip->tape_eop[partition] - 1; i++, pos++, blp++) { 4214 + blp->fl_size = size; 4215 + sg_copy_buffer(sdb->table.sgl, sdb->table.nents, 4216 + &(blp->data), 4, i * size, true); 4217 + written += size; 4218 + scsi_set_resid(scp, num 
* transfer - written); 4219 + ew |= (pos == devip->tape_eop[partition] - TAPE_EW); 4220 + } 4221 + 4222 + devip->tape_location[partition] = pos; 4223 + blp->fl_size = TAPE_BLOCK_EOD_FLAG; 4224 + if (pos >= devip->tape_eop[partition] - 1) { 4225 + mk_sense_info_tape(scp, VOLUME_OVERFLOW, 4226 + NO_ADDITIONAL_SENSE, EOP_EOM_DETECTED_ASCQ, 4227 + fixed ? num - i : transfer, 4228 + SENSE_FLAG_EOM); 4229 + return check_condition_result; 4230 + } 4231 + if (ew) { /* early warning */ 4232 + mk_sense_info_tape(scp, NO_SENSE, 4233 + NO_ADDITIONAL_SENSE, EOP_EOM_DETECTED_ASCQ, 4234 + fixed ? num - i : transfer, 4235 + SENSE_FLAG_EOM); 4236 + return check_condition_result; 4237 + } 4238 + 4239 + return 0; 4240 + } 4241 + 4769 4242 static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip) 4770 4243 { 4771 4244 bool check_prot; ··· 4838 4189 struct sdeb_store_info *sip = devip2sip(devip, true); 4839 4190 u8 *cmd = scp->cmnd; 4840 4191 bool meta_data_locked = false; 4192 + 4193 + if (sdebug_ptype == TYPE_TAPE) 4194 + return resp_write_tape(scp, devip); 4841 4195 4842 4196 switch (cmd[0]) { 4843 4197 case WRITE_16: ··· 5570 4918 * a GOOD status otherwise. Model a disk with a big cache and yield 5571 4919 * CONDITION MET. Actually tries to bring range in main memory into the 5572 4920 * cache associated with the CPU(s). 4921 + * 4922 + * The pcode 0x34 is also used for READ POSITION by tape devices. 
5573 4923 */ 4924 + enum {SDEBUG_READ_POSITION_ARR_SZ = 20}; 5574 4925 static int resp_pre_fetch(struct scsi_cmnd *scp, 5575 4926 struct sdebug_dev_info *devip) 5576 4927 { ··· 5584 4929 u8 *cmd = scp->cmnd; 5585 4930 struct sdeb_store_info *sip = devip2sip(devip, true); 5586 4931 u8 *fsp = sip->storep; 4932 + 4933 + if (sdebug_ptype == TYPE_TAPE) { 4934 + if (cmd[0] == PRE_FETCH) { /* READ POSITION (10) */ 4935 + int all_length; 4936 + unsigned char arr[20]; 4937 + unsigned int pos; 4938 + 4939 + all_length = get_unaligned_be16(cmd + 7); 4940 + if ((cmd[1] & 0xfe) != 0 || 4941 + all_length != 0) { /* only short form */ 4942 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 4943 + all_length ? 7 : 1, 0); 4944 + return check_condition_result; 4945 + } 4946 + memset(arr, 0, SDEBUG_READ_POSITION_ARR_SZ); 4947 + arr[1] = devip->tape_partition; 4948 + pos = devip->tape_location[devip->tape_partition]; 4949 + put_unaligned_be32(pos, arr + 4); 4950 + put_unaligned_be32(pos, arr + 8); 4951 + return fill_from_dev_buffer(scp, arr, 4952 + SDEBUG_READ_POSITION_ARR_SZ); 4953 + } 4954 + mk_sense_invalid_opcode(scp); 4955 + return check_condition_result; 4956 + } 5587 4957 5588 4958 if (cmd[0] == PRE_FETCH) { /* 10 byte cdb */ 5589 4959 lba = get_unaligned_be32(cmd + 2); ··· 6318 5638 /* Queued (deferred) command completions converge here. 
*/ 6319 5639 static void sdebug_q_cmd_complete(struct sdebug_defer *sd_dp) 6320 5640 { 6321 - struct sdebug_queued_cmd *sqcp = container_of(sd_dp, struct sdebug_queued_cmd, sd_dp); 5641 + struct sdebug_scsi_cmd *sdsc = container_of(sd_dp, 5642 + typeof(*sdsc), sd_dp); 5643 + struct scsi_cmnd *scp = (struct scsi_cmnd *)sdsc - 1; 6322 5644 unsigned long flags; 6323 - struct scsi_cmnd *scp = sqcp->scmd; 6324 - struct sdebug_scsi_cmd *sdsc; 6325 5645 bool aborted; 6326 5646 6327 5647 if (sdebug_statistics) { ··· 6332 5652 6333 5653 if (!scp) { 6334 5654 pr_err("scmd=NULL\n"); 6335 - goto out; 5655 + return; 6336 5656 } 6337 5657 6338 - sdsc = scsi_cmd_priv(scp); 6339 5658 spin_lock_irqsave(&sdsc->lock, flags); 6340 5659 aborted = sd_dp->aborted; 6341 5660 if (unlikely(aborted)) 6342 5661 sd_dp->aborted = false; 6343 - ASSIGN_QUEUED_CMD(scp, NULL); 6344 5662 6345 5663 spin_unlock_irqrestore(&sdsc->lock, flags); 6346 5664 6347 5665 if (aborted) { 6348 5666 pr_info("bypassing scsi_done() due to aborted cmd, kicking-off EH\n"); 6349 5667 blk_abort_request(scsi_cmd_to_rq(scp)); 6350 - goto out; 5668 + return; 6351 5669 } 6352 5670 6353 5671 scsi_done(scp); /* callback to mid level */ 6354 - out: 6355 - sdebug_free_queued_cmd(sqcp); 6356 5672 } 6357 5673 6358 5674 /* When high resolution timer goes off this function is called. */ ··· 6511 5835 } else { 6512 5836 devip->zoned = false; 6513 5837 } 5838 + if (sdebug_ptype == TYPE_TAPE) { 5839 + devip->tape_density = TAPE_DEF_DENSITY; 5840 + devip->tape_blksize = TAPE_DEF_BLKSIZE; 5841 + } 6514 5842 devip->create_ts = ktime_get_boottime(); 6515 5843 atomic_set(&devip->stopped, (sdeb_tur_ms_to_ready > 0 ? 
2 : 0)); 6516 5844 spin_lock_init(&devip->list_lock); ··· 6585 5905 if (devip == NULL) 6586 5906 return 1; /* no resources, will be marked offline */ 6587 5907 } 5908 + if (sdebug_ptype == TYPE_TAPE) { 5909 + if (!devip->tape_blocks[0]) { 5910 + devip->tape_blocks[0] = 5911 + kcalloc(TAPE_UNITS, sizeof(struct tape_block), 5912 + GFP_KERNEL); 5913 + if (!devip->tape_blocks[0]) 5914 + return 1; 5915 + } 5916 + devip->tape_pending_nbr_partitions = -1; 5917 + if (partition_tape(devip, 1, TAPE_UNITS, 0) < 0) { 5918 + kfree(devip->tape_blocks[0]); 5919 + devip->tape_blocks[0] = NULL; 5920 + return 1; 5921 + } 5922 + } 6588 5923 sdp->hostdata = devip; 6589 5924 if (sdebug_no_uld) 6590 5925 sdp->no_uld_attach = 1; ··· 6645 5950 6646 5951 debugfs_remove(devip->debugfs_entry); 6647 5952 5953 + if (sdebug_ptype == TYPE_TAPE) { 5954 + kfree(devip->tape_blocks[0]); 5955 + devip->tape_blocks[0] = NULL; 5956 + } 5957 + 6648 5958 /* make this slot available for re-use */ 6649 5959 devip->used = false; 6650 5960 sdp->hostdata = NULL; 6651 5961 } 6652 5962 6653 - /* Returns true if we require the queued memory to be freed by the caller. */ 6654 - static bool stop_qc_helper(struct sdebug_defer *sd_dp, 6655 - enum sdeb_defer_type defer_t) 5963 + /* Returns true if cancelled or not running callback. 
*/ 5964 + static bool scsi_debug_stop_cmnd(struct scsi_cmnd *cmnd) 6656 5965 { 5966 + struct sdebug_scsi_cmd *sdsc = scsi_cmd_priv(cmnd); 5967 + struct sdebug_defer *sd_dp = &sdsc->sd_dp; 5968 + enum sdeb_defer_type defer_t = READ_ONCE(sd_dp->defer_t); 5969 + 5970 + lockdep_assert_held(&sdsc->lock); 5971 + 6657 5972 if (defer_t == SDEB_DEFER_HRT) { 6658 5973 int res = hrtimer_try_to_cancel(&sd_dp->hrt); 6659 5974 6660 5975 switch (res) { 6661 - case 0: /* Not active, it must have already run */ 6662 5976 case -1: /* -1 It's executing the CB */ 6663 5977 return false; 5978 + case 0: /* Not active, it must have already run */ 6664 5979 case 1: /* Was active, we've now cancelled */ 6665 5980 default: 6666 5981 return true; 6667 5982 } 6668 5983 } else if (defer_t == SDEB_DEFER_WQ) { 6669 5984 /* Cancel if pending */ 6670 - if (cancel_work_sync(&sd_dp->ew.work)) 5985 + if (cancel_work(&sd_dp->ew.work)) 6671 5986 return true; 6672 - /* Was not pending, so it must have run */ 5987 + /* callback may be running, so return false */ 6673 5988 return false; 6674 5989 } else if (defer_t == SDEB_DEFER_POLL) { 6675 5990 return true; 6676 5991 } 6677 5992 6678 5993 return false; 6679 - } 6680 - 6681 - 6682 - static bool scsi_debug_stop_cmnd(struct scsi_cmnd *cmnd) 6683 - { 6684 - enum sdeb_defer_type l_defer_t; 6685 - struct sdebug_defer *sd_dp; 6686 - struct sdebug_scsi_cmd *sdsc = scsi_cmd_priv(cmnd); 6687 - struct sdebug_queued_cmd *sqcp = TO_QUEUED_CMD(cmnd); 6688 - 6689 - lockdep_assert_held(&sdsc->lock); 6690 - 6691 - if (!sqcp) 6692 - return false; 6693 - sd_dp = &sqcp->sd_dp; 6694 - l_defer_t = READ_ONCE(sd_dp->defer_t); 6695 - ASSIGN_QUEUED_CMD(cmnd, NULL); 6696 - 6697 - if (stop_qc_helper(sd_dp, l_defer_t)) 6698 - sdebug_free_queued_cmd(sqcp); 6699 - 6700 - return true; 6701 5994 } 6702 5995 6703 5996 /* ··· 6759 6076 6760 6077 static int scsi_debug_abort(struct scsi_cmnd *SCpnt) 6761 6078 { 6762 - bool ok = scsi_debug_abort_cmnd(SCpnt); 6079 + bool aborted = 
scsi_debug_abort_cmnd(SCpnt); 6763 6080 u8 *cmd = SCpnt->cmnd; 6764 6081 u8 opcode = cmd[0]; 6765 6082 ··· 6768 6085 if (SDEBUG_OPT_ALL_NOISE & sdebug_opts) 6769 6086 sdev_printk(KERN_INFO, SCpnt->device, 6770 6087 "%s: command%s found\n", __func__, 6771 - ok ? "" : " not"); 6088 + aborted ? "" : " not"); 6089 + 6772 6090 6773 6091 if (sdebug_fail_abort(SCpnt)) { 6774 6092 scmd_printk(KERN_INFO, SCpnt, "fail abort command 0x%x\n", 6775 6093 opcode); 6776 6094 return FAILED; 6777 6095 } 6096 + 6097 + if (aborted == false) 6098 + return FAILED; 6778 6099 6779 6100 return SUCCESS; 6780 6101 } ··· 6831 6144 return 0; 6832 6145 } 6833 6146 6147 + static void scsi_tape_reset_clear(struct sdebug_dev_info *devip) 6148 + { 6149 + if (sdebug_ptype == TYPE_TAPE) { 6150 + int i; 6151 + 6152 + devip->tape_blksize = TAPE_DEF_BLKSIZE; 6153 + devip->tape_density = TAPE_DEF_DENSITY; 6154 + devip->tape_partition = 0; 6155 + devip->tape_dce = 0; 6156 + for (i = 0; i < TAPE_MAX_PARTITIONS; i++) 6157 + devip->tape_location[i] = 0; 6158 + devip->tape_pending_nbr_partitions = -1; 6159 + /* Don't reset partitioning? 
*/ 6160 + } 6161 + } 6162 + 6834 6163 static int scsi_debug_device_reset(struct scsi_cmnd *SCpnt) 6835 6164 { 6836 6165 struct scsi_device *sdp = SCpnt->device; ··· 6860 6157 sdev_printk(KERN_INFO, sdp, "%s\n", __func__); 6861 6158 6862 6159 scsi_debug_stop_all_queued(sdp); 6863 - if (devip) 6160 + if (devip) { 6864 6161 set_bit(SDEBUG_UA_POR, devip->uas_bm); 6162 + scsi_tape_reset_clear(devip); 6163 + } 6865 6164 6866 6165 if (sdebug_fail_lun_reset(SCpnt)) { 6867 6166 scmd_printk(KERN_INFO, SCpnt, "fail lun reset 0x%x\n", opcode); ··· 6901 6196 list_for_each_entry(devip, &sdbg_host->dev_info_list, dev_list) { 6902 6197 if (devip->target == sdp->id) { 6903 6198 set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm); 6199 + scsi_tape_reset_clear(devip); 6904 6200 ++k; 6905 6201 } 6906 6202 } ··· 6933 6227 6934 6228 list_for_each_entry(devip, &sdbg_host->dev_info_list, dev_list) { 6935 6229 set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm); 6230 + scsi_tape_reset_clear(devip); 6936 6231 ++k; 6937 6232 } 6938 6233 ··· 6957 6250 list_for_each_entry(devip, &sdbg_host->dev_info_list, 6958 6251 dev_list) { 6959 6252 set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm); 6253 + scsi_tape_reset_clear(devip); 6960 6254 ++k; 6961 6255 } 6962 6256 } ··· 7074 6366 7075 6367 #define INCLUSIVE_TIMING_MAX_NS 1000000 /* 1 millisecond */ 7076 6368 7077 - 7078 - void sdebug_free_queued_cmd(struct sdebug_queued_cmd *sqcp) 7079 - { 7080 - if (sqcp) 7081 - kmem_cache_free(queued_cmd_cache, sqcp); 7082 - } 7083 - 7084 - static struct sdebug_queued_cmd *sdebug_alloc_queued_cmd(struct scsi_cmnd *scmd) 7085 - { 7086 - struct sdebug_queued_cmd *sqcp; 7087 - struct sdebug_defer *sd_dp; 7088 - 7089 - sqcp = kmem_cache_zalloc(queued_cmd_cache, GFP_ATOMIC); 7090 - if (!sqcp) 7091 - return NULL; 7092 - 7093 - sd_dp = &sqcp->sd_dp; 7094 - 7095 - hrtimer_setup(&sd_dp->hrt, sdebug_q_cmd_hrt_complete, CLOCK_MONOTONIC, 7096 - HRTIMER_MODE_REL_PINNED); 7097 - INIT_WORK(&sd_dp->ew.work, sdebug_q_cmd_wq_complete); 7098 - 7099 - 
sqcp->scmd = scmd; 7100 - 7101 - return sqcp; 7102 - } 7103 - 7104 6369 /* Complete the processing of the thread that queued a SCSI command to this 7105 6370 * driver. It either completes the command by calling cmnd_done() or 7106 6371 * schedules a hr timer or work queue then returns 0. Returns ··· 7090 6409 struct sdebug_scsi_cmd *sdsc = scsi_cmd_priv(cmnd); 7091 6410 unsigned long flags; 7092 6411 u64 ns_from_boot = 0; 7093 - struct sdebug_queued_cmd *sqcp; 7094 6412 struct scsi_device *sdp; 7095 6413 struct sdebug_defer *sd_dp; 7096 6414 ··· 7121 6441 } 7122 6442 } 7123 6443 7124 - sqcp = sdebug_alloc_queued_cmd(cmnd); 7125 - if (!sqcp) { 7126 - pr_err("%s no alloc\n", __func__); 7127 - return SCSI_MLQUEUE_HOST_BUSY; 7128 - } 7129 - sd_dp = &sqcp->sd_dp; 6444 + sd_dp = &sdsc->sd_dp; 7130 6445 7131 6446 if (polled || (ndelay > 0 && ndelay < INCLUSIVE_TIMING_MAX_NS)) 7132 6447 ns_from_boot = ktime_get_boottime_ns(); ··· 7169 6494 7170 6495 if (kt <= d) { /* elapsed duration >= kt */ 7171 6496 /* call scsi_done() from this thread */ 7172 - sdebug_free_queued_cmd(sqcp); 7173 6497 scsi_done(cmnd); 7174 6498 return 0; 7175 6499 } ··· 7181 6507 if (polled) { 7182 6508 spin_lock_irqsave(&sdsc->lock, flags); 7183 6509 sd_dp->cmpl_ts = ktime_add(ns_to_ktime(ns_from_boot), kt); 7184 - ASSIGN_QUEUED_CMD(cmnd, sqcp); 7185 6510 WRITE_ONCE(sd_dp->defer_t, SDEB_DEFER_POLL); 7186 6511 spin_unlock_irqrestore(&sdsc->lock, flags); 7187 6512 } else { 7188 6513 /* schedule the invocation of scsi_done() for a later time */ 7189 6514 spin_lock_irqsave(&sdsc->lock, flags); 7190 - ASSIGN_QUEUED_CMD(cmnd, sqcp); 7191 6515 WRITE_ONCE(sd_dp->defer_t, SDEB_DEFER_HRT); 7192 6516 hrtimer_start(&sd_dp->hrt, kt, HRTIMER_MODE_REL_PINNED); 7193 6517 /* ··· 7209 6537 sd_dp->issuing_cpu = raw_smp_processor_id(); 7210 6538 if (polled) { 7211 6539 spin_lock_irqsave(&sdsc->lock, flags); 7212 - ASSIGN_QUEUED_CMD(cmnd, sqcp); 7213 6540 sd_dp->cmpl_ts = ns_to_ktime(ns_from_boot); 7214 6541 
WRITE_ONCE(sd_dp->defer_t, SDEB_DEFER_POLL); 7215 6542 spin_unlock_irqrestore(&sdsc->lock, flags); 7216 6543 } else { 7217 6544 spin_lock_irqsave(&sdsc->lock, flags); 7218 - ASSIGN_QUEUED_CMD(cmnd, sqcp); 7219 6545 WRITE_ONCE(sd_dp->defer_t, SDEB_DEFER_WQ); 7220 6546 schedule_work(&sd_dp->ew.work); 7221 6547 spin_unlock_irqrestore(&sdsc->lock, flags); ··· 7505 6835 blk_mq_tagset_busy_iter(&host->tag_set, sdebug_submit_queue_iter, 7506 6836 &data); 7507 6837 if (f >= 0) { 7508 - seq_printf(m, " in_use_bm BUSY: %s: %d,%d\n", 6838 + seq_printf(m, " BUSY: %s: %d,%d\n", 7509 6839 "first,last bits", f, l); 7510 6840 } 7511 6841 } ··· 8580 7910 hosts_to_add = sdebug_add_host; 8581 7911 sdebug_add_host = 0; 8582 7912 8583 - queued_cmd_cache = KMEM_CACHE(sdebug_queued_cmd, SLAB_HWCACHE_ALIGN); 8584 - if (!queued_cmd_cache) { 8585 - ret = -ENOMEM; 8586 - goto driver_unreg; 8587 - } 8588 - 8589 7913 sdebug_debugfs_root = debugfs_create_dir("scsi_debug", NULL); 8590 7914 if (IS_ERR_OR_NULL(sdebug_debugfs_root)) 8591 7915 pr_info("%s: failed to create initial debugfs directory\n", __func__); ··· 8606 7942 8607 7943 return 0; 8608 7944 8609 - driver_unreg: 8610 - driver_unregister(&sdebug_driverfs_driver); 8611 7945 bus_unreg: 8612 7946 bus_unregister(&pseudo_lld_bus); 8613 7947 dev_unreg: ··· 8621 7959 8622 7960 for (; k; k--) 8623 7961 sdebug_do_remove_host(true); 8624 - kmem_cache_destroy(queued_cmd_cache); 8625 7962 driver_unregister(&sdebug_driverfs_driver); 8626 7963 bus_unregister(&pseudo_lld_bus); 8627 7964 root_device_unregister(pseudo_primary); ··· 9004 8343 struct sdebug_defer *sd_dp; 9005 8344 u32 unique_tag = blk_mq_unique_tag(rq); 9006 8345 u16 hwq = blk_mq_unique_tag_to_hwq(unique_tag); 9007 - struct sdebug_queued_cmd *sqcp; 9008 8346 unsigned long flags; 9009 8347 int queue_num = data->queue_num; 9010 8348 ktime_t time; ··· 9019 8359 time = ktime_get_boottime(); 9020 8360 9021 8361 spin_lock_irqsave(&sdsc->lock, flags); 9022 - sqcp = TO_QUEUED_CMD(cmd); 9023 - if 
(!sqcp) { 9024 - spin_unlock_irqrestore(&sdsc->lock, flags); 9025 - return true; 9026 - } 9027 - 9028 - sd_dp = &sqcp->sd_dp; 8362 + sd_dp = &sdsc->sd_dp; 9029 8363 if (READ_ONCE(sd_dp->defer_t) != SDEB_DEFER_POLL) { 9030 8364 spin_unlock_irqrestore(&sdsc->lock, flags); 9031 8365 return true; ··· 9029 8375 spin_unlock_irqrestore(&sdsc->lock, flags); 9030 8376 return true; 9031 8377 } 9032 - 9033 - ASSIGN_QUEUED_CMD(cmd, NULL); 9034 8378 spin_unlock_irqrestore(&sdsc->lock, flags); 9035 8379 9036 8380 if (sdebug_statistics) { ··· 9036 8384 if (raw_smp_processor_id() != sd_dp->issuing_cpu) 9037 8385 atomic_inc(&sdebug_miss_cpus); 9038 8386 } 9039 - 9040 - sdebug_free_queued_cmd(sqcp); 9041 8387 9042 8388 scsi_done(cmd); /* callback to mid level */ 9043 8389 (*data->num_entries)++; ··· 9351 8701 static int sdebug_init_cmd_priv(struct Scsi_Host *shost, struct scsi_cmnd *cmd) 9352 8702 { 9353 8703 struct sdebug_scsi_cmd *sdsc = scsi_cmd_priv(cmd); 8704 + struct sdebug_defer *sd_dp = &sdsc->sd_dp; 9354 8705 9355 8706 spin_lock_init(&sdsc->lock); 8707 + hrtimer_setup(&sd_dp->hrt, sdebug_q_cmd_hrt_complete, CLOCK_MONOTONIC, 8708 + HRTIMER_MODE_REL_PINNED); 8709 + INIT_WORK(&sd_dp->ew.work, sdebug_q_cmd_wq_complete); 9356 8710 9357 8711 return 0; 9358 8712 }
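The scsi_debug changes above drop the per-command `sdebug_queued_cmd` kmem_cache and keep the deferred-completion state (`sdebug_defer`) in the command's private area retrieved via `scsi_cmd_priv()`, with the hrtimer/work set up once in `sdebug_init_cmd_priv()`. A minimal sketch of that pattern; the structures and names here are hypothetical stand-ins, not the midlayer's actual layout:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, simplified model of the pattern: the LLD tells the midlayer
 * how many private bytes it needs per command, so one allocation covers both
 * and no kmem_cache alloc/free is needed on the submission fast path. */
struct mid_cmd {
	long tag;	/* stands in for struct scsi_cmnd; driver data follows */
};

struct lld_priv {
	int defer_type;	/* stands in for struct sdebug_defer */
	long cmpl_ts;
};

static inline void *cmd_priv(struct mid_cmd *cmd)
{
	return cmd + 1;	/* private area lives directly after the command */
}

static struct mid_cmd *alloc_cmd_with_priv(size_t priv_size)
{
	return calloc(1, sizeof(struct mid_cmd) + priv_size);
}
```

Because the private area's lifetime now equals the command's, the queue-poll and abort paths no longer need the `TO_QUEUED_CMD()`/`ASSIGN_QUEUED_CMD()` bookkeeping removed above.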
+19
drivers/scsi/scsi_error.c
··· 547 547 548 548 scsi_report_sense(sdev, &sshdr); 549 549 550 + if (sshdr.sense_key == UNIT_ATTENTION) { 551 + /* 552 + * Increment the counters for Power on/Reset or New Media so 553 + * that all ULDs interested in these can see that those have 554 + * happened, even if someone else gets the sense data. 555 + */ 556 + if (sshdr.asc == 0x28) 557 + scmd->device->ua_new_media_ctr++; 558 + else if (sshdr.asc == 0x29) 559 + scmd->device->ua_por_ctr++; 560 + } 561 + 550 562 if (scsi_sense_is_deferred(&sshdr)) 551 563 return NEEDS_RETRY; 552 564 ··· 723 711 return SUCCESS; 724 712 725 713 case COMPLETED: 714 + /* 715 + * A command using command duration limits (CDL) with a 716 + * descriptor set with policy 0xD may be completed with success 717 + * and the sense data DATA CURRENTLY UNAVAILABLE, indicating 718 + * that the command was in fact aborted because it exceeded its 719 + * duration limit. Never retry these commands. 720 + */ 726 721 if (sshdr.asc == 0x55 && sshdr.ascq == 0x0a) { 727 722 set_scsi_ml_byte(scmd, SCSIML_STAT_DL_TIMEOUT); 728 723 req->cmd_flags |= REQ_FAILFAST_DEV;
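The scsi_error.c hunk adds running counters for Power on/Reset (ASC 0x29) and New Media (ASC 0x28) unit attentions, so every interested upper-level driver can detect the event by comparing a saved snapshot even when another consumer swallowed the sense data. A rough model of that count-and-snapshot scheme (names hypothetical):

```c
#include <assert.h>

/* Simplified model: the device keeps running UA event counters; each
 * upper-level driver snapshots them and later compares, mirroring how
 * st compares scsi_get_ua_por_ctr() against STp->por_ctr. */
struct dev_counters {
	unsigned int por_ctr;
	unsigned int new_media_ctr;
};

/* Sense handling side: ASC 0x28 => new media, ASC 0x29 => power on/reset */
static void account_ua(struct dev_counters *d, unsigned char asc)
{
	if (asc == 0x28)
		d->new_media_ctr++;
	else if (asc == 0x29)
		d->por_ctr++;
}

/* ULD side: nonzero if a reset happened since the last check; refreshes
 * the snapshot so the event is observed exactly once per consumer. */
static int por_seen_since(const struct dev_counters *d, unsigned int *snap)
{
	if (d->por_ctr == *snap)
		return 0;
	*snap = d->por_ctr;
	return 1;
}
```

Counters avoid the classic problem that a unit attention is reported only once: whichever command happens to receive the sense data consumes it, while other interested parties would otherwise never learn of the reset.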
+3 -2
drivers/scsi/scsi_scan.c
··· 151 151 struct async_scan_data *data; 152 152 153 153 do { 154 - if (list_empty(&scanning_hosts)) 155 - return 0; 154 + scoped_guard(spinlock, &async_scan_lock) 155 + if (list_empty(&scanning_hosts)) 156 + return 0; 156 157 /* If we can't get memory immediately, that's OK. Just 157 158 * sleep a little. Even if we never get memory, the async 158 159 * scans will finish eventually.
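The scsi_scan.c fix takes `async_scan_lock` around the `list_empty(&scanning_hosts)` check: reading a shared list head without the lock races with concurrent insertion and removal. A minimal model of reading shared state only under the writers' lock, using pthreads in place of the kernel's `scoped_guard(spinlock, ...)` (the flag below is a hypothetical stand-in for the list's emptiness):

```c
#include <assert.h>
#include <pthread.h>

/* Shared state must be read under the same lock that writers hold, which is
 * what the scoped_guard now ensures for the scanning_hosts list. */
static pthread_mutex_t scan_lock = PTHREAD_MUTEX_INITIALIZER;
static int scanning_hosts;	/* stands in for !list_empty(&scanning_hosts) */

static int no_hosts_scanning(void)
{
	int empty;

	pthread_mutex_lock(&scan_lock);		/* guard scope begins */
	empty = (scanning_hosts == 0);
	pthread_mutex_unlock(&scan_lock);	/* guard scope ends */
	return empty;
}
```

`scoped_guard` drops the lock automatically when the statement scope ends, which is why the guarded `if` can simply `return 0` without an explicit unlock.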
+3 -1
drivers/scsi/scsi_sysctl.c
··· 17 17 .data = &scsi_logging_level, 18 18 .maxlen = sizeof(scsi_logging_level), 19 19 .mode = 0644, 20 - .proc_handler = proc_dointvec }, 20 + .proc_handler = proc_dointvec_minmax, 21 + .extra1 = SYSCTL_ZERO, 22 + .extra2 = SYSCTL_INT_MAX }, 21 23 }; 22 24 23 25 static struct ctl_table_header *scsi_table_header;
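The sysctl change swaps `proc_dointvec` for `proc_dointvec_minmax` with `extra1 = SYSCTL_ZERO` and `extra2 = SYSCTL_INT_MAX`, so a write of a negative value to `scsi_logging_level` is rejected rather than stored. A sketch of the rejected-write semantics:

```c
#include <assert.h>
#include <errno.h>

/* What proc_dointvec_minmax adds over proc_dointvec: a write outside
 * [min, max] fails with -EINVAL and leaves the old value in place. */
static int store_in_range(int *dst, int val, int min, int max)
{
	if (val < min || val > max)
		return -EINVAL;
	*dst = val;
	return 0;
}
```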
+69 -11
drivers/scsi/st.c
··· 163 163 164 164 static int debugging = DEBUG; 165 165 166 + /* Setting these non-zero may risk recognizing resets */ 166 167 #define MAX_RETRIES 0 167 168 #define MAX_WRITE_RETRIES 0 168 169 #define MAX_READY_RETRIES 0 170 + 169 171 #define NO_TAPE NOT_READY 170 172 171 173 #define ST_TIMEOUT (900 * HZ) ··· 359 357 { 360 358 int result = SRpnt->result; 361 359 u8 scode; 360 + unsigned int ctr; 362 361 DEB(const char *stp;) 363 362 char *name = STp->name; 364 363 struct st_cmdstatus *cmdstatp; 364 + 365 + ctr = scsi_get_ua_por_ctr(STp->device); 366 + if (ctr != STp->por_ctr) { 367 + STp->por_ctr = ctr; 368 + STp->pos_unknown = 1; /* ASC => power on / reset */ 369 + st_printk(KERN_WARNING, STp, "Power on/reset recognized."); 370 + } 365 371 366 372 if (!result) 367 373 return 0; ··· 423 413 if (cmdstatp->have_sense && 424 414 cmdstatp->sense_hdr.asc == 0 && cmdstatp->sense_hdr.ascq == 0x17) 425 415 STp->cleaning_req = 1; /* ASC and ASCQ => cleaning requested */ 426 - if (cmdstatp->have_sense && scode == UNIT_ATTENTION && cmdstatp->sense_hdr.asc == 0x29) 416 + if (cmdstatp->have_sense && scode == UNIT_ATTENTION && 417 + cmdstatp->sense_hdr.asc == 0x29 && !STp->pos_unknown) { 427 418 STp->pos_unknown = 1; /* ASC => power on / reset */ 428 - 429 - STp->pos_unknown |= STp->device->was_reset; 419 + st_printk(KERN_WARNING, STp, "Power on/reset recognized."); 420 + } 430 421 431 422 if (cmdstatp->have_sense && 432 423 scode == RECOVERED_ERROR ··· 963 952 STp->partition = find_partition(STp); 964 953 if (STp->partition < 0) 965 954 STp->partition = 0; 966 - STp->new_partition = STp->partition; 967 955 } 968 956 } 969 957 ··· 979 969 { 980 970 int attentions, waits, max_wait, scode; 981 971 int retval = CHKRES_READY, new_session = 0; 972 + unsigned int ctr; 982 973 unsigned char cmd[MAX_COMMAND_SIZE]; 983 974 struct st_request *SRpnt = NULL; 984 975 struct st_cmdstatus *cmdstatp = &STp->buffer->cmdstat; ··· 1034 1023 break; 1035 1024 } 1036 1025 } 1026 + } 1027 + 1028 + 
ctr = scsi_get_ua_new_media_ctr(STp->device); 1029 + if (ctr != STp->new_media_ctr) { 1030 + STp->new_media_ctr = ctr; 1031 + new_session = 1; 1032 + DEBC_printk(STp, "New tape session."); 1037 1033 } 1038 1034 1039 1035 retval = (STp->buffer)->syscall_result; ··· 2915 2897 timeout = STp->long_timeout * 8; 2916 2898 2917 2899 DEBC_printk(STp, "Erasing tape.\n"); 2918 - fileno = blkno = at_sm = 0; 2919 2900 break; 2920 2901 case MTSETBLK: /* Set block length */ 2921 2902 case MTSETDENSITY: /* Set tape density */ ··· 2947 2930 if (cmd_in == MTSETDENSITY) { 2948 2931 (STp->buffer)->b_data[4] = arg; 2949 2932 STp->density_changed = 1; /* At least we tried ;-) */ 2933 + STp->changed_density = arg; 2950 2934 } else if (cmd_in == SET_DENS_AND_BLK) 2951 2935 (STp->buffer)->b_data[4] = arg >> 24; 2952 2936 else 2953 2937 (STp->buffer)->b_data[4] = STp->density; 2954 2938 if (cmd_in == MTSETBLK || cmd_in == SET_DENS_AND_BLK) { 2955 2939 ltmp = arg & MT_ST_BLKSIZE_MASK; 2956 - if (cmd_in == MTSETBLK) 2940 + if (cmd_in == MTSETBLK) { 2957 2941 STp->blksize_changed = 1; /* At least we tried ;-) */ 2942 + STp->changed_blksize = arg; 2943 + } 2958 2944 } else 2959 2945 ltmp = STp->block_size; 2960 2946 (STp->buffer)->b_data[9] = (ltmp >> 16); ··· 3104 3084 cmd_in == MTSETDRVBUFFER || 3105 3085 cmd_in == SET_DENS_AND_BLK) { 3106 3086 if (cmdstatp->sense_hdr.sense_key == ILLEGAL_REQUEST && 3107 - !(STp->use_pf & PF_TESTED)) { 3087 + cmdstatp->sense_hdr.asc == 0x24 && 3088 + (STp->device)->scsi_level <= SCSI_2 && 3089 + !(STp->use_pf & PF_TESTED)) { 3108 3090 /* Try the other possible state of Page Format if not 3109 3091 already tried */ 3110 3092 STp->use_pf = (STp->use_pf ^ USE_PF) | PF_TESTED; ··· 3658 3636 retval = (-EIO); 3659 3637 goto out; 3660 3638 } 3661 - reset_state(STp); 3662 - /* remove this when the midlevel properly clears was_reset */ 3663 - STp->device->was_reset = 0; 3639 + reset_state(STp); /* Clears pos_unknown */ 3640 + 3641 + /* Fix the device settings after 
reset, ignore errors */ 3642 + if (mtc.mt_op == MTREW || mtc.mt_op == MTSEEK || 3643 + mtc.mt_op == MTEOM) { 3644 + if (STp->can_partitions) { 3645 + /* STp->new_partition contains the 3646 + * latest partition set 3647 + */ 3648 + STp->partition = 0; 3649 + switch_partition(STp); 3650 + } 3651 + if (STp->density_changed) 3652 + st_int_ioctl(STp, MTSETDENSITY, STp->changed_density); 3653 + if (STp->blksize_changed) 3654 + st_int_ioctl(STp, MTSETBLK, STp->changed_blksize); 3655 + } 3664 3656 } 3665 3657 3666 3658 if (mtc.mt_op != MTNOP && mtc.mt_op != MTSETBLK && ··· 4158 4122 */ 4159 4123 static int __init st_setup(char *str) 4160 4124 { 4161 - int i, len, ints[5]; 4125 + int i, len, ints[ARRAY_SIZE(parms) + 1]; 4162 4126 char *stp; 4163 4127 4164 4128 stp = get_options(str, ARRAY_SIZE(ints), ints); ··· 4419 4383 "st: Can't allocate statistics.\n"); 4420 4384 goto out_idr_remove; 4421 4385 } 4386 + 4387 + tpnt->new_media_ctr = scsi_get_ua_new_media_ctr(SDp); 4388 + tpnt->por_ctr = scsi_get_ua_por_ctr(SDp); 4422 4389 4423 4390 dev_set_drvdata(dev, tpnt); 4424 4391 ··· 4704 4665 } 4705 4666 static DEVICE_ATTR_RO(options); 4706 4667 4668 + /** 4669 + * position_lost_in_reset_show - Value 1 indicates that reads, writes, etc. 4670 + * are blocked because a device reset has occurred and no operation positioning 4671 + * the tape has been issued. 
4672 + * @dev: struct device 4673 + * @attr: attribute structure 4674 + * @buf: buffer to return formatted data in 4675 + */ 4676 + static ssize_t position_lost_in_reset_show(struct device *dev, 4677 + struct device_attribute *attr, char *buf) 4678 + { 4679 + struct st_modedef *STm = dev_get_drvdata(dev); 4680 + struct scsi_tape *STp = STm->tape; 4681 + 4682 + return sprintf(buf, "%d", STp->pos_unknown); 4683 + } 4684 + static DEVICE_ATTR_RO(position_lost_in_reset); 4685 + 4707 4686 /* Support for tape stats */ 4708 4687 4709 4688 /** ··· 4906 4849 &dev_attr_default_density.attr, 4907 4850 &dev_attr_default_compression.attr, 4908 4851 &dev_attr_options.attr, 4852 + &dev_attr_position_lost_in_reset.attr, 4909 4853 NULL, 4910 4854 }; 4911 4855
+6
drivers/scsi/st.h
··· 165 165 unsigned char compression_changed; 166 166 unsigned char drv_buffer; 167 167 unsigned char density; 168 + unsigned char changed_density; 168 169 unsigned char door_locked; 169 170 unsigned char autorew_dev; /* auto-rewind device */ 170 171 unsigned char rew_at_close; /* rewind necessary at close */ ··· 173 172 unsigned char cleaning_req; /* cleaning requested? */ 174 173 unsigned char first_tur; /* first TEST UNIT READY */ 175 174 int block_size; 175 + int changed_blksize; 176 176 int min_block; 177 177 int max_block; 178 178 int recover_count; /* From tape opening */ 179 179 int recover_reg; /* From last status call */ 180 + 181 + /* The saved values of midlevel counters */ 182 + unsigned int new_media_ctr; 183 + unsigned int por_ctr; 180 184 181 185 #if DEBUG 182 186 unsigned char write_pending;
+2 -2
drivers/scsi/storvsc_drv.c
··· 776 776 777 777 if (vstor_packet->operation != VSTOR_OPERATION_COMPLETE_IO || 778 778 vstor_packet->status != 0) { 779 - dev_err(dev, "Failed to create sub-channel: op=%d, sts=%d\n", 779 + dev_err(dev, "Failed to create sub-channel: op=%d, host=0x%x\n", 780 780 vstor_packet->operation, vstor_packet->status); 781 781 return; 782 782 } ··· 1183 1183 STORVSC_LOGGING_WARN : STORVSC_LOGGING_ERROR; 1184 1184 1185 1185 storvsc_log_ratelimited(device, loglevel, 1186 - "tag#%d cmd 0x%x status: scsi 0x%x srb 0x%x hv 0x%x\n", 1186 + "tag#%d cmd 0x%x status: scsi 0x%x srb 0x%x host 0x%x\n", 1187 1187 scsi_cmd_to_rq(request->cmd)->tag, 1188 1188 stor_pkt->vm_srb.cdb[0], 1189 1189 vstor_packet->vm_srb.scsi_status,
+3 -3
drivers/target/iscsi/iscsi_target_nego.c
··· 212 212 213 213 if ((login_req->max_version != login->version_max) || 214 214 (login_req->min_version != login->version_min)) { 215 - pr_err("Login request changed Version Max/Nin" 215 + pr_err("Login request changed Version Max/Min" 216 216 " unexpectedly to 0x%02x/0x%02x, protocol error\n", 217 217 login_req->max_version, login_req->min_version); 218 218 iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_INITIATOR_ERR, ··· 557 557 * before initial PDU processing in iscsi_target_start_negotiation() 558 558 * has completed, go ahead and retry until it's cleared. 559 559 * 560 - * Otherwise if the TCP connection drops while this is occuring, 560 + * Otherwise if the TCP connection drops while this is occurring, 561 561 * iscsi_target_start_negotiation() will detect the failure, call 562 562 * cancel_delayed_work_sync(&conn->login_work), and cleanup the 563 563 * remaining iscsi connection resources from iscsi_np process context. ··· 1050 1050 /* 1051 1051 * Check to make sure the TCP connection has not 1052 1052 * dropped asynchronously while session reinstatement 1053 - * was occuring in this kthread context, before 1053 + * was occurring in this kthread context, before 1054 1054 * transitioning to full feature phase operation. 1055 1055 */ 1056 1056 if (iscsi_target_sk_check_close(conn))
+3 -2
drivers/target/loopback/tcm_loop.c
··· 176 176 177 177 memset(tl_cmd, 0, sizeof(*tl_cmd)); 178 178 tl_cmd->sc = sc; 179 - tl_cmd->sc_cmd_tag = scsi_cmd_to_rq(sc)->tag; 179 + tl_cmd->sc_cmd_tag = blk_mq_unique_tag(scsi_cmd_to_rq(sc)); 180 180 181 181 tcm_loop_target_queue_cmd(tl_cmd); 182 182 return 0; ··· 242 242 tl_hba = *(struct tcm_loop_hba **)shost_priv(sc->device->host); 243 243 tl_tpg = &tl_hba->tl_hba_tpgs[sc->device->id]; 244 244 ret = tcm_loop_issue_tmr(tl_tpg, sc->device->lun, 245 - scsi_cmd_to_rq(sc)->tag, TMR_ABORT_TASK); 245 + blk_mq_unique_tag(scsi_cmd_to_rq(sc)), 246 + TMR_ABORT_TASK); 246 247 return (ret == TMR_FUNCTION_COMPLETE) ? SUCCESS : FAILED; 247 248 } 248 249
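The tcm_loop fix replaces `scsi_cmd_to_rq(sc)->tag`, which is unique only within one hardware queue, with `blk_mq_unique_tag()`, which packs the hardware-queue index into the upper 16 bits so the abort path matches the intended command. The packing looks like this (constants mirror `BLK_MQ_UNIQUE_TAG_BITS`/`BLK_MQ_UNIQUE_TAG_MASK`):

```c
#include <assert.h>
#include <stdint.h>

/* Upper 16 bits carry the hardware-queue index, lower 16 bits the per-queue
 * tag, so tags from different hw queues can no longer collide. */
#define UNIQUE_TAG_BITS	16
#define UNIQUE_TAG_MASK	((1u << UNIQUE_TAG_BITS) - 1)

static inline uint32_t unique_tag(uint16_t hwq, uint16_t tag)
{
	return ((uint32_t)hwq << UNIQUE_TAG_BITS) | tag;
}

static inline uint16_t unique_tag_to_hwq(uint32_t ut)
{
	return ut >> UNIQUE_TAG_BITS;
}

static inline uint16_t unique_tag_to_tag(uint32_t ut)
{
	return ut & UNIQUE_TAG_MASK;
}
```

With plain `rq->tag`, two in-flight commands on different hardware queues could carry the same tag, so a TMR_ABORT_TASK could abort the wrong one.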
+3 -3
drivers/target/target_core_configfs.c
··· 123 123 goto unlock; 124 124 } 125 125 126 - read_bytes = snprintf(db_root_stage, DB_ROOT_LEN, "%s", page); 126 + read_bytes = scnprintf(db_root_stage, DB_ROOT_LEN, "%s", page); 127 127 if (!read_bytes) 128 128 goto unlock; 129 129 ··· 143 143 } 144 144 filp_close(fp, NULL); 145 145 146 - strncpy(db_root, db_root_stage, read_bytes); 146 + strscpy(db_root, db_root_stage); 147 147 pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root); 148 148 149 149 r = read_bytes; ··· 3664 3664 } 3665 3665 filp_close(fp, NULL); 3666 3666 3667 - strncpy(db_root, db_root_stage, DB_ROOT_LEN); 3667 + strscpy(db_root, db_root_stage); 3668 3668 pr_debug("Target_Core_ConfigFS: db_root set to %s\n", db_root); 3669 3669 } 3670 3670
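The configfs hunk moves from `strncpy()` (which neither guarantees NUL termination nor reports truncation) to `strscpy()`. A rough user-space model of the `strscpy()` semantics, for illustration only:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Model of strscpy(): the destination is always NUL-terminated and
 * truncation is reported as -E2BIG, unlike strncpy(), which can leave
 * the buffer unterminated when the source fills it exactly. */
static long strscpy_model(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size == 0)
		return -E2BIG;
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -E2BIG;
	}
	memcpy(dst, src, len + 1);
	return (long)len;
}
```

The two-argument `strscpy(db_root, db_root_stage)` form in the patch infers the destination size from the array type at compile time, removing the explicit `DB_ROOT_LEN`/`read_bytes` bound.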
+4 -4
drivers/target/target_core_device.c
··· 1078 1078 if (!dev->dev_attrib.emulate_pr && 1079 1079 ((cdb[0] == PERSISTENT_RESERVE_IN) || 1080 1080 (cdb[0] == PERSISTENT_RESERVE_OUT) || 1081 - (cdb[0] == RELEASE || cdb[0] == RELEASE_10) || 1082 - (cdb[0] == RESERVE || cdb[0] == RESERVE_10))) { 1081 + (cdb[0] == RELEASE_6 || cdb[0] == RELEASE_10) || 1082 + (cdb[0] == RESERVE_6 || cdb[0] == RESERVE_10))) { 1083 1083 return TCM_UNSUPPORTED_SCSI_OPCODE; 1084 1084 } 1085 1085 ··· 1101 1101 return target_cmd_size_check(cmd, size); 1102 1102 } 1103 1103 1104 - if (cdb[0] == RELEASE || cdb[0] == RELEASE_10) { 1104 + if (cdb[0] == RELEASE_6 || cdb[0] == RELEASE_10) { 1105 1105 cmd->execute_cmd = target_scsi2_reservation_release; 1106 1106 if (cdb[0] == RELEASE_10) 1107 1107 size = get_unaligned_be16(&cdb[7]); ··· 1109 1109 size = cmd->data_length; 1110 1110 return target_cmd_size_check(cmd, size); 1111 1111 } 1112 - if (cdb[0] == RESERVE || cdb[0] == RESERVE_10) { 1112 + if (cdb[0] == RESERVE_6 || cdb[0] == RESERVE_10) { 1113 1113 cmd->execute_cmd = target_scsi2_reservation_reserve; 1114 1114 if (cdb[0] == RESERVE_10) 1115 1115 size = get_unaligned_be16(&cdb[7]);
+3 -3
drivers/target/target_core_pr.c
··· 91 91 92 92 switch (cmd->t_task_cdb[0]) { 93 93 case INQUIRY: 94 - case RELEASE: 94 + case RELEASE_6: 95 95 case RELEASE_10: 96 96 return 0; 97 97 default: ··· 418 418 return -EINVAL; 419 419 } 420 420 break; 421 - case RELEASE: 421 + case RELEASE_6: 422 422 case RELEASE_10: 423 423 /* Handled by CRH=1 in target_scsi2_reservation_release() */ 424 424 ret = 0; 425 425 break; 426 - case RESERVE: 426 + case RESERVE_6: 427 427 case RESERVE_10: 428 428 /* Handled by CRH=1 in target_scsi2_reservation_reserve() */ 429 429 ret = 0;
+21 -15
drivers/target/target_core_spc.c
··· 1674 1674 return true; 1675 1675 1676 1676 switch (descr->opcode) { 1677 - case RESERVE: 1677 + case RESERVE_6: 1678 1678 case RESERVE_10: 1679 - case RELEASE: 1679 + case RELEASE_6: 1680 1680 case RELEASE_10: 1681 1681 /* 1682 1682 * The pr_ops which are used by the backend modules don't ··· 1828 1828 1829 1829 static struct target_opcode_descriptor tcm_opcode_release = { 1830 1830 .support = SCSI_SUPPORT_FULL, 1831 - .opcode = RELEASE, 1831 + .opcode = RELEASE_6, 1832 1832 .cdb_size = 6, 1833 - .usage_bits = {RELEASE, 0x00, 0x00, 0x00, 1833 + .usage_bits = {RELEASE_6, 0x00, 0x00, 0x00, 1834 1834 0x00, SCSI_CONTROL_MASK}, 1835 1835 .enabled = tcm_is_pr_enabled, 1836 1836 }; ··· 1847 1847 1848 1848 static struct target_opcode_descriptor tcm_opcode_reserve = { 1849 1849 .support = SCSI_SUPPORT_FULL, 1850 - .opcode = RESERVE, 1850 + .opcode = RESERVE_6, 1851 1851 .cdb_size = 6, 1852 - .usage_bits = {RESERVE, 0x00, 0x00, 0x00, 1852 + .usage_bits = {RESERVE_6, 0x00, 0x00, 0x00, 1853 1853 0x00, SCSI_CONTROL_MASK}, 1854 1854 .enabled = tcm_is_pr_enabled, 1855 1855 }; ··· 2151 2151 if (descr->serv_action_valid) 2152 2152 return TCM_INVALID_CDB_FIELD; 2153 2153 2154 - if (!descr->enabled || descr->enabled(descr, cmd)) 2154 + if (!descr->enabled || descr->enabled(descr, cmd)) { 2155 2155 *opcode = descr; 2156 + return TCM_NO_SENSE; 2157 + } 2156 2158 break; 2157 2159 case 0x2: 2158 2160 /* ··· 2168 2166 if (descr->serv_action_valid && 2169 2167 descr->service_action == requested_sa) { 2170 2168 if (!descr->enabled || descr->enabled(descr, 2171 - cmd)) 2169 + cmd)) { 2172 2170 *opcode = descr; 2171 + return TCM_NO_SENSE; 2172 + } 2173 2173 } else if (!descr->serv_action_valid) 2174 2174 return TCM_INVALID_CDB_FIELD; 2175 2175 break; ··· 2184 2180 */ 2185 2181 if (descr->service_action == requested_sa) 2186 2182 if (!descr->enabled || descr->enabled(descr, 2187 - cmd)) 2183 + cmd)) { 2188 2184 *opcode = descr; 2185 + return TCM_NO_SENSE; 2186 + } 2189 2187 break; 2190 
2188 } 2191 2189 } 2192 2190 2193 - return 0; 2191 + return TCM_NO_SENSE; 2194 2192 } 2195 2193 2196 2194 static sense_reason_t ··· 2249 2243 response_length += spc_rsoc_encode_command_descriptor( 2250 2244 &buf[response_length], rctd, descr); 2251 2245 } 2252 - put_unaligned_be32(response_length - 3, buf); 2246 + put_unaligned_be32(response_length - 4, buf); 2253 2247 } else { 2254 2248 response_length = spc_rsoc_encode_one_command_descriptor( 2255 2249 &buf[response_length], rctd, descr, ··· 2273 2267 unsigned char *cdb = cmd->t_task_cdb; 2274 2268 2275 2269 switch (cdb[0]) { 2276 - case RESERVE: 2270 + case RESERVE_6: 2277 2271 case RESERVE_10: 2278 - case RELEASE: 2272 + case RELEASE_6: 2279 2273 case RELEASE_10: 2280 2274 if (!dev->dev_attrib.emulate_pr) 2281 2275 return TCM_UNSUPPORTED_SCSI_OPCODE; ··· 2319 2313 *size = get_unaligned_be32(&cdb[5]); 2320 2314 cmd->execute_cmd = target_scsi3_emulate_pr_out; 2321 2315 break; 2322 - case RELEASE: 2316 + case RELEASE_6: 2323 2317 case RELEASE_10: 2324 2318 if (cdb[0] == RELEASE_10) 2325 2319 *size = get_unaligned_be16(&cdb[7]); ··· 2328 2322 2329 2323 cmd->execute_cmd = target_scsi2_reservation_release; 2330 2324 break; 2331 - case RESERVE: 2325 + case RESERVE_6: 2332 2326 case RESERVE_10: 2333 2327 /* 2334 2328 * The SPC-2 RESERVE does not contain a size in the SCSI CDB.
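Besides the RESERVE_6/RELEASE_6 renames, the spc.c hunk fixes the REPORT SUPPORTED OPERATION CODES header: the 4-byte COMMAND DATA LENGTH field counts only the bytes that follow it, so the stored value is the total response length minus 4, not minus 3. Illustrated with a small sketch:

```c
#include <assert.h>
#include <stdint.h>

/* The 4-byte big-endian COMMAND DATA LENGTH at the start of the all-commands
 * RSOC response excludes the length field itself, hence "- 4". */
static void put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24;
	p[1] = v >> 16;
	p[2] = v >> 8;
	p[3] = v;
}

static void set_rsoc_data_length(uint8_t *buf, uint32_t response_length)
{
	put_be32(buf, response_length - 4);	/* previously "- 3" */
}
```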
+10
drivers/ufs/core/ufs-sysfs.c
··· 458 458 return count; 459 459 } 460 460 461 + static ssize_t critical_health_show(struct device *dev, 462 + struct device_attribute *attr, char *buf) 463 + { 464 + struct ufs_hba *hba = dev_get_drvdata(dev); 465 + 466 + return sysfs_emit(buf, "%d\n", hba->critical_health_count); 467 + } 468 + 461 469 static DEVICE_ATTR_RW(rpm_lvl); 462 470 static DEVICE_ATTR_RO(rpm_target_dev_state); 463 471 static DEVICE_ATTR_RO(rpm_target_link_state); ··· 478 470 static DEVICE_ATTR_RW(wb_flush_threshold); 479 471 static DEVICE_ATTR_RW(rtc_update_ms); 480 472 static DEVICE_ATTR_RW(pm_qos_enable); 473 + static DEVICE_ATTR_RO(critical_health); 481 474 482 475 static struct attribute *ufs_sysfs_ufshcd_attrs[] = { 483 476 &dev_attr_rpm_lvl.attr, ··· 493 484 &dev_attr_wb_flush_threshold.attr, 494 485 &dev_attr_rtc_update_ms.attr, 495 486 &dev_attr_pm_qos_enable.attr, 487 + &dev_attr_critical_health.attr, 496 488 NULL 497 489 }; 498 490
+69 -66
drivers/ufs/core/ufs_trace.h
··· 83 83 84 84 TRACE_EVENT(ufshcd_clk_gating, 85 85 86 - TP_PROTO(const char *dev_name, int state), 86 + TP_PROTO(struct ufs_hba *hba, int state), 87 87 88 - TP_ARGS(dev_name, state), 88 + TP_ARGS(hba, state), 89 89 90 90 TP_STRUCT__entry( 91 - __string(dev_name, dev_name) 91 + __field(struct ufs_hba *, hba) 92 92 __field(int, state) 93 93 ), 94 94 95 95 TP_fast_assign( 96 - __assign_str(dev_name); 96 + __entry->hba = hba; 97 97 __entry->state = state; 98 98 ), 99 99 100 100 TP_printk("%s: gating state changed to %s", 101 - __get_str(dev_name), 101 + dev_name(__entry->hba->dev), 102 102 __print_symbolic(__entry->state, UFSCHD_CLK_GATING_STATES)) 103 103 ); 104 104 105 105 TRACE_EVENT(ufshcd_clk_scaling, 106 106 107 - TP_PROTO(const char *dev_name, const char *state, const char *clk, 107 + TP_PROTO(struct ufs_hba *hba, const char *state, const char *clk, 108 108 u32 prev_state, u32 curr_state), 109 109 110 - TP_ARGS(dev_name, state, clk, prev_state, curr_state), 110 + TP_ARGS(hba, state, clk, prev_state, curr_state), 111 111 112 112 TP_STRUCT__entry( 113 - __string(dev_name, dev_name) 113 + __field(struct ufs_hba *, hba) 114 114 __string(state, state) 115 115 __string(clk, clk) 116 116 __field(u32, prev_state) ··· 118 118 ), 119 119 120 120 TP_fast_assign( 121 - __assign_str(dev_name); 121 + __entry->hba = hba; 122 122 __assign_str(state); 123 123 __assign_str(clk); 124 124 __entry->prev_state = prev_state; ··· 126 126 ), 127 127 128 128 TP_printk("%s: %s %s from %u to %u Hz", 129 - __get_str(dev_name), __get_str(state), __get_str(clk), 129 + dev_name(__entry->hba->dev), __get_str(state), __get_str(clk), 130 130 __entry->prev_state, __entry->curr_state) 131 131 ); 132 132 133 133 TRACE_EVENT(ufshcd_auto_bkops_state, 134 134 135 - TP_PROTO(const char *dev_name, const char *state), 135 + TP_PROTO(struct ufs_hba *hba, const char *state), 136 136 137 - TP_ARGS(dev_name, state), 137 + TP_ARGS(hba, state), 138 138 139 139 TP_STRUCT__entry( 140 - __string(dev_name, 
dev_name) 140 + __field(struct ufs_hba *, hba) 141 141 __string(state, state) 142 142 ), 143 143 144 144 TP_fast_assign( 145 - __assign_str(dev_name); 145 + __entry->hba = hba; 146 146 __assign_str(state); 147 147 ), 148 148 149 149 TP_printk("%s: auto bkops - %s", 150 - __get_str(dev_name), __get_str(state)) 150 + dev_name(__entry->hba->dev), __get_str(state)) 151 151 ); 152 152 153 153 DECLARE_EVENT_CLASS(ufshcd_profiling_template, 154 - TP_PROTO(const char *dev_name, const char *profile_info, s64 time_us, 154 + TP_PROTO(struct ufs_hba *hba, const char *profile_info, s64 time_us, 155 155 int err), 156 156 157 - TP_ARGS(dev_name, profile_info, time_us, err), 157 + TP_ARGS(hba, profile_info, time_us, err), 158 158 159 159 TP_STRUCT__entry( 160 - __string(dev_name, dev_name) 160 + __field(struct ufs_hba *, hba) 161 161 __string(profile_info, profile_info) 162 162 __field(s64, time_us) 163 163 __field(int, err) 164 164 ), 165 165 166 166 TP_fast_assign( 167 - __assign_str(dev_name); 167 + __entry->hba = hba; 168 168 __assign_str(profile_info); 169 169 __entry->time_us = time_us; 170 170 __entry->err = err; 171 171 ), 172 172 173 173 TP_printk("%s: %s: took %lld usecs, err %d", 174 - __get_str(dev_name), __get_str(profile_info), 174 + dev_name(__entry->hba->dev), __get_str(profile_info), 175 175 __entry->time_us, __entry->err) 176 176 ); 177 177 178 178 DEFINE_EVENT(ufshcd_profiling_template, ufshcd_profile_hibern8, 179 - TP_PROTO(const char *dev_name, const char *profile_info, s64 time_us, 179 + TP_PROTO(struct ufs_hba *hba, const char *profile_info, s64 time_us, 180 180 int err), 181 - TP_ARGS(dev_name, profile_info, time_us, err)); 181 + TP_ARGS(hba, profile_info, time_us, err)); 182 182 183 183 DEFINE_EVENT(ufshcd_profiling_template, ufshcd_profile_clk_gating, 184 - TP_PROTO(const char *dev_name, const char *profile_info, s64 time_us, 184 + TP_PROTO(struct ufs_hba *hba, const char *profile_info, s64 time_us, 185 185 int err), 186 - TP_ARGS(dev_name, profile_info, 
time_us, err)); 186 + TP_ARGS(hba, profile_info, time_us, err)); 187 187 188 188 DEFINE_EVENT(ufshcd_profiling_template, ufshcd_profile_clk_scaling, 189 - TP_PROTO(const char *dev_name, const char *profile_info, s64 time_us, 189 + TP_PROTO(struct ufs_hba *hba, const char *profile_info, s64 time_us, 190 190 int err), 191 - TP_ARGS(dev_name, profile_info, time_us, err)); 191 + TP_ARGS(hba, profile_info, time_us, err)); 192 192 193 193 DECLARE_EVENT_CLASS(ufshcd_template, 194 - TP_PROTO(const char *dev_name, int err, s64 usecs, 194 + TP_PROTO(struct ufs_hba *hba, int err, s64 usecs, 195 195 int dev_state, int link_state), 196 196 197 - TP_ARGS(dev_name, err, usecs, dev_state, link_state), 197 + TP_ARGS(hba, err, usecs, dev_state, link_state), 198 198 199 199 TP_STRUCT__entry( 200 200 __field(s64, usecs) 201 201 __field(int, err) 202 - __string(dev_name, dev_name) 202 + __field(struct ufs_hba *, hba) 203 203 __field(int, dev_state) 204 204 __field(int, link_state) 205 205 ), ··· 207 207 TP_fast_assign( 208 208 __entry->usecs = usecs; 209 209 __entry->err = err; 210 - __assign_str(dev_name); 210 + __entry->hba = hba; 211 211 __entry->dev_state = dev_state; 212 212 __entry->link_state = link_state; 213 213 ), 214 214 215 215 TP_printk( 216 216 "%s: took %lld usecs, dev_state: %s, link_state: %s, err %d", 217 - __get_str(dev_name), 217 + dev_name(__entry->hba->dev), 218 218 __entry->usecs, 219 219 __print_symbolic(__entry->dev_state, UFS_PWR_MODES), 220 220 __print_symbolic(__entry->link_state, UFS_LINK_STATES), ··· 223 223 ); 224 224 225 225 DEFINE_EVENT(ufshcd_template, ufshcd_system_suspend, 226 - TP_PROTO(const char *dev_name, int err, s64 usecs, 226 + TP_PROTO(struct ufs_hba *hba, int err, s64 usecs, 227 227 int dev_state, int link_state), 228 - TP_ARGS(dev_name, err, usecs, dev_state, link_state)); 228 + TP_ARGS(hba, err, usecs, dev_state, link_state)); 229 229 230 230 DEFINE_EVENT(ufshcd_template, ufshcd_system_resume, 231 - TP_PROTO(const char *dev_name, int err, 
s64 usecs, 231 + TP_PROTO(struct ufs_hba *hba, int err, s64 usecs, 232 232 int dev_state, int link_state), 233 - TP_ARGS(dev_name, err, usecs, dev_state, link_state)); 233 + TP_ARGS(hba, err, usecs, dev_state, link_state)); 234 234 235 235 DEFINE_EVENT(ufshcd_template, ufshcd_runtime_suspend, 236 - TP_PROTO(const char *dev_name, int err, s64 usecs, 236 + TP_PROTO(struct ufs_hba *hba, int err, s64 usecs, 237 237 int dev_state, int link_state), 238 - TP_ARGS(dev_name, err, usecs, dev_state, link_state)); 238 + TP_ARGS(hba, err, usecs, dev_state, link_state)); 239 239 240 240 DEFINE_EVENT(ufshcd_template, ufshcd_runtime_resume, 241 - TP_PROTO(const char *dev_name, int err, s64 usecs, 241 + TP_PROTO(struct ufs_hba *hba, int err, s64 usecs, 242 242 int dev_state, int link_state), 243 - TP_ARGS(dev_name, err, usecs, dev_state, link_state)); 243 + TP_ARGS(hba, err, usecs, dev_state, link_state)); 244 244 245 245 DEFINE_EVENT(ufshcd_template, ufshcd_init, 246 - TP_PROTO(const char *dev_name, int err, s64 usecs, 246 + TP_PROTO(struct ufs_hba *hba, int err, s64 usecs, 247 247 int dev_state, int link_state), 248 - TP_ARGS(dev_name, err, usecs, dev_state, link_state)); 248 + TP_ARGS(hba, err, usecs, dev_state, link_state)); 249 249 250 250 DEFINE_EVENT(ufshcd_template, ufshcd_wl_suspend, 251 - TP_PROTO(const char *dev_name, int err, s64 usecs, 251 + TP_PROTO(struct ufs_hba *hba, int err, s64 usecs, 252 252 int dev_state, int link_state), 253 - TP_ARGS(dev_name, err, usecs, dev_state, link_state)); 253 + TP_ARGS(hba, err, usecs, dev_state, link_state)); 254 254 255 255 DEFINE_EVENT(ufshcd_template, ufshcd_wl_resume, 256 - TP_PROTO(const char *dev_name, int err, s64 usecs, 256 + TP_PROTO(struct ufs_hba *hba, int err, s64 usecs, 257 257 int dev_state, int link_state), 258 - TP_ARGS(dev_name, err, usecs, dev_state, link_state)); 258 + TP_ARGS(hba, err, usecs, dev_state, link_state)); 259 259 260 260 DEFINE_EVENT(ufshcd_template, ufshcd_wl_runtime_suspend, 261 - TP_PROTO(const char 
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 DEFINE_EVENT(ufshcd_template, ufshcd_wl_runtime_resume,
-	TP_PROTO(const char *dev_name, int err, s64 usecs,
+	TP_PROTO(struct ufs_hba *hba, int err, s64 usecs,
 		 int dev_state, int link_state),
-	TP_ARGS(dev_name, err, usecs, dev_state, link_state));
+	TP_ARGS(hba, err, usecs, dev_state, link_state));

 TRACE_EVENT(ufshcd_command,
-	TP_PROTO(struct scsi_device *sdev, enum ufs_trace_str_t str_t,
+	TP_PROTO(struct scsi_device *sdev, struct ufs_hba *hba,
+		 enum ufs_trace_str_t str_t,
 		 unsigned int tag, u32 doorbell, u32 hwq_id, int transfer_len,
 		 u32 intr, u64 lba, u8 opcode, u8 group_id),

-	TP_ARGS(sdev, str_t, tag, doorbell, hwq_id, transfer_len, intr, lba,
+	TP_ARGS(sdev, hba, str_t, tag, doorbell, hwq_id, transfer_len, intr, lba,
 		opcode, group_id),

 	TP_STRUCT__entry(
 		__field(struct scsi_device *, sdev)
+		__field(struct ufs_hba *, hba)
 		__field(enum ufs_trace_str_t, str_t)
 		__field(unsigned int, tag)
 		__field(u32, doorbell)
···
 	TP_fast_assign(
 		__entry->sdev = sdev;
+		__entry->hba = hba;
 		__entry->str_t = str_t;
 		__entry->tag = tag;
 		__entry->doorbell = doorbell;
···
 );

 TRACE_EVENT(ufshcd_uic_command,
-	TP_PROTO(const char *dev_name, enum ufs_trace_str_t str_t, u32 cmd,
+	TP_PROTO(struct ufs_hba *hba, enum ufs_trace_str_t str_t, u32 cmd,
 		 u32 arg1, u32 arg2, u32 arg3),

-	TP_ARGS(dev_name, str_t, cmd, arg1, arg2, arg3),
+	TP_ARGS(hba, str_t, cmd, arg1, arg2, arg3),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__field(enum ufs_trace_str_t, str_t)
 		__field(u32, cmd)
 		__field(u32, arg1)
···
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__entry->str_t = str_t;
 		__entry->cmd = cmd;
 		__entry->arg1 = arg1;
···
 	TP_printk(
 		"%s: %s: cmd: 0x%x, arg1: 0x%x, arg2: 0x%x, arg3: 0x%x",
-		show_ufs_cmd_trace_str(__entry->str_t), __get_str(dev_name),
+		show_ufs_cmd_trace_str(__entry->str_t), dev_name(__entry->hba->dev),
 		__entry->cmd, __entry->arg1, __entry->arg2, __entry->arg3
 	)
 );

 TRACE_EVENT(ufshcd_upiu,
-	TP_PROTO(const char *dev_name, enum ufs_trace_str_t str_t, void *hdr,
+	TP_PROTO(struct ufs_hba *hba, enum ufs_trace_str_t str_t, void *hdr,
 		 void *tsf, enum ufs_trace_tsf_t tsf_t),

-	TP_ARGS(dev_name, str_t, hdr, tsf, tsf_t),
+	TP_ARGS(hba, str_t, hdr, tsf, tsf_t),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__field(enum ufs_trace_str_t, str_t)
 		__array(unsigned char, hdr, 12)
 		__array(unsigned char, tsf, 16)
···
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__entry->str_t = str_t;
 		memcpy(__entry->hdr, hdr, sizeof(__entry->hdr));
 		memcpy(__entry->tsf, tsf, sizeof(__entry->tsf));
···
 	TP_printk(
 		"%s: %s: HDR:%s, %s:%s",
-		show_ufs_cmd_trace_str(__entry->str_t), __get_str(dev_name),
+		show_ufs_cmd_trace_str(__entry->str_t), dev_name(__entry->hba->dev),
 		__print_hex(__entry->hdr, sizeof(__entry->hdr)),
 		show_ufs_cmd_trace_tsf(__entry->tsf_t),
 		__print_hex(__entry->tsf, sizeof(__entry->tsf))
···
 TRACE_EVENT(ufshcd_exception_event,

-	TP_PROTO(const char *dev_name, u16 status),
+	TP_PROTO(struct ufs_hba *hba, u16 status),

-	TP_ARGS(dev_name, status),
+	TP_ARGS(hba, status),

 	TP_STRUCT__entry(
-		__string(dev_name, dev_name)
+		__field(struct ufs_hba *, hba)
 		__field(u16, status)
 	),

 	TP_fast_assign(
-		__assign_str(dev_name);
+		__entry->hba = hba;
 		__entry->status = status;
 	),

 	TP_printk("%s: status 0x%x",
-		__get_str(dev_name), __entry->status
+		dev_name(__entry->hba->dev), __entry->status
 	)
 );
+15 -6
drivers/ufs/core/ufshcd-priv.h
···
 	return ufshcd_readl(hba, REG_UFS_VERSION);
 }

-static inline int ufshcd_vops_clk_scale_notify(struct ufs_hba *hba,
-		bool up, enum ufs_notify_change_status status)
+static inline int ufshcd_vops_clk_scale_notify(struct ufs_hba *hba, bool up,
+					       unsigned long target_freq,
+					       enum ufs_notify_change_status status)
 {
 	if (hba->vops && hba->vops->clk_scale_notify)
-		return hba->vops->clk_scale_notify(hba, up, status);
+		return hba->vops->clk_scale_notify(hba, up, target_freq, status);
 	return 0;
 }
···
 }

 static inline int ufshcd_vops_pwr_change_notify(struct ufs_hba *hba,
-				enum ufs_notify_change_status status,
-				struct ufs_pa_layer_attr *dev_max_params,
-				struct ufs_pa_layer_attr *dev_req_params)
+			enum ufs_notify_change_status status,
+			const struct ufs_pa_layer_attr *dev_max_params,
+			struct ufs_pa_layer_attr *dev_req_params)
 {
 	if (hba->vops && hba->vops->pwr_change_notify)
 		return hba->vops->pwr_change_notify(hba, status,
···
 		return hba->vops->config_esi(hba);

 	return -EOPNOTSUPP;
 }
+
+static inline u32 ufshcd_vops_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)
+{
+	if (hba->vops && hba->vops->freq_to_gear_speed)
+		return hba->vops->freq_to_gear_speed(hba, freq);
+
+	return 0;
+}

 extern const struct ufs_pm_lvl_states ufs_pm_lvl_states[];
+96 -52
drivers/ufs/core/ufshcd.c
···
 	else
 		header = &hba->lrb[tag].ucd_rsp_ptr->header;

-	trace_ufshcd_upiu(dev_name(hba->dev), str_t, header, &rq->sc.cdb,
+	trace_ufshcd_upiu(hba, str_t, header, &rq->sc.cdb,
 			  UFS_TSF_CDB);
 }
···
 	if (!trace_ufshcd_upiu_enabled())
 		return;

-	trace_ufshcd_upiu(dev_name(hba->dev), str_t, &rq_rsp->header,
+	trace_ufshcd_upiu(hba, str_t, &rq_rsp->header,
 			  &rq_rsp->qr, UFS_TSF_OSF);
 }
···
 		return;

 	if (str_t == UFS_TM_SEND)
-		trace_ufshcd_upiu(dev_name(hba->dev), str_t,
+		trace_ufshcd_upiu(hba, str_t,
 				  &descp->upiu_req.req_header,
 				  &descp->upiu_req.input_param1,
 				  UFS_TSF_TM_INPUT);
 	else
-		trace_ufshcd_upiu(dev_name(hba->dev), str_t,
+		trace_ufshcd_upiu(hba, str_t,
 				  &descp->upiu_rsp.rsp_header,
 				  &descp->upiu_rsp.output_param1,
 				  UFS_TSF_TM_OUTPUT);
···
 	else
 		cmd = ufshcd_readl(hba, REG_UIC_COMMAND);

-	trace_ufshcd_uic_command(dev_name(hba->dev), str_t, cmd,
+	trace_ufshcd_uic_command(hba, str_t, cmd,
 				 ufshcd_readl(hba, REG_UIC_COMMAND_ARG_1),
 				 ufshcd_readl(hba, REG_UIC_COMMAND_ARG_2),
 				 ufshcd_readl(hba, REG_UIC_COMMAND_ARG_3));
···
 	} else {
 		doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
 	}
-	trace_ufshcd_command(cmd->device, str_t, tag, doorbell, hwq_id,
+	trace_ufshcd_command(cmd->device, hba, str_t, tag, doorbell, hwq_id,
 			     transfer_len, intr, lba, opcode, group_id);
 }
···
 				clki->max_freq, ret);
 			break;
 		}
-		trace_ufshcd_clk_scaling(dev_name(hba->dev),
+		trace_ufshcd_clk_scaling(hba,
 				"scaled up", clki->name,
 				clki->curr_freq,
 				clki->max_freq);
···
 				clki->min_freq, ret);
 			break;
 		}
-		trace_ufshcd_clk_scaling(dev_name(hba->dev),
+		trace_ufshcd_clk_scaling(hba,
 				"scaled down", clki->name,
 				clki->curr_freq,
 				clki->min_freq);
···
 		return ret;
 	}

-	trace_ufshcd_clk_scaling(dev_name(dev),
+	trace_ufshcd_clk_scaling(hba,
 			(scaling_down ? "scaled down" : "scaled up"),
 			clki->name, hba->clk_scaling.target_freq, freq);
 	}
···
 	int ret = 0;
 	ktime_t start = ktime_get();

-	ret = ufshcd_vops_clk_scale_notify(hba, scale_up, PRE_CHANGE);
+	ret = ufshcd_vops_clk_scale_notify(hba, scale_up, freq, PRE_CHANGE);
 	if (ret)
 		goto out;
···
 	if (ret)
 		goto out;

-	ret = ufshcd_vops_clk_scale_notify(hba, scale_up, POST_CHANGE);
+	ret = ufshcd_vops_clk_scale_notify(hba, scale_up, freq, POST_CHANGE);
 	if (ret) {
 		if (hba->use_pm_opp)
 			ufshcd_opp_set_rate(hba,
···
 	ufshcd_pm_qos_update(hba, scale_up);

 out:
-	trace_ufshcd_profile_clk_scaling(dev_name(hba->dev),
+	trace_ufshcd_profile_clk_scaling(hba,
 			(scale_up ? "up" : "down"),
 			ktime_to_us(ktime_sub(ktime_get(), start)), ret);
 	return ret;
···
 /**
  * ufshcd_scale_gear - scale up/down UFS gear
  * @hba: per adapter instance
+ * @target_gear: target gear to scale to
  * @scale_up: True for scaling up gear and false for scaling down
  *
  * Return: 0 for success; -EBUSY if scaling can't happen at this time;
  * non-zero for any other errors.
  */
-static int ufshcd_scale_gear(struct ufs_hba *hba, bool scale_up)
+static int ufshcd_scale_gear(struct ufs_hba *hba, u32 target_gear, bool scale_up)
 {
 	int ret = 0;
 	struct ufs_pa_layer_attr new_pwr_info;

+	if (target_gear) {
+		new_pwr_info = hba->pwr_info;
+		new_pwr_info.gear_tx = target_gear;
+		new_pwr_info.gear_rx = target_gear;
+
+		goto config_pwr_mode;
+	}
+
+	/* Legacy gear scaling, in case vops_freq_to_gear_speed() is not implemented */
 	if (scale_up) {
 		memcpy(&new_pwr_info, &hba->clk_scaling.saved_pwr_info,
 		       sizeof(struct ufs_pa_layer_attr));
···
 		}
 	}

+config_pwr_mode:
 	/* check if the power mode needs to be changed or not? */
 	ret = ufshcd_config_pwr_mode(hba, &new_pwr_info);
 	if (ret)
···
 	return ret;
 }

-static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err, bool scale_up)
+static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err)
 {
 	up_write(&hba->clk_scaling_lock);

-	/* Enable Write Booster if we have scaled up else disable it */
+	/* Enable Write Booster if current gear requires it else disable it */
 	if (ufshcd_enable_wb_if_scaling_up(hba) && !err)
-		ufshcd_wb_toggle(hba, scale_up);
+		ufshcd_wb_toggle(hba, hba->pwr_info.gear_rx >= hba->clk_scaling.wb_gear);

 	mutex_unlock(&hba->wb_mutex);
···
 static int ufshcd_devfreq_scale(struct ufs_hba *hba, unsigned long freq,
 				bool scale_up)
 {
+	u32 old_gear = hba->pwr_info.gear_rx;
+	u32 new_gear = 0;
 	int ret = 0;
+
+	new_gear = ufshcd_vops_freq_to_gear_speed(hba, freq);

 	ret = ufshcd_clock_scaling_prepare(hba, 1 * USEC_PER_SEC);
 	if (ret)
···
 	/* scale down the gear before scaling down clocks */
 	if (!scale_up) {
-		ret = ufshcd_scale_gear(hba, false);
+		ret = ufshcd_scale_gear(hba, new_gear, false);
 		if (ret)
 			goto out_unprepare;
 	}

 	ret = ufshcd_scale_clks(hba, freq, scale_up);
 	if (ret) {
 		if (!scale_up)
-			ufshcd_scale_gear(hba, true);
+			ufshcd_scale_gear(hba, old_gear, true);
 		goto out_unprepare;
 	}

 	/* scale up the gear after scaling up clocks */
 	if (scale_up) {
-		ret = ufshcd_scale_gear(hba, true);
+		ret = ufshcd_scale_gear(hba, new_gear, true);
 		if (ret) {
 			ufshcd_scale_clks(hba, hba->devfreq->previous_freq,
 					  false);
···
 	}

 out_unprepare:
-	ufshcd_clock_scaling_unprepare(hba, ret, scale_up);
+	ufshcd_clock_scaling_unprepare(hba, ret);
 	return ret;
 }
···
 	if (!ret)
 		hba->clk_scaling.target_freq = *freq;

-	trace_ufshcd_profile_clk_scaling(dev_name(hba->dev),
+	trace_ufshcd_profile_clk_scaling(hba,
 			(scale_up ? "up" : "down"),
 			ktime_to_us(ktime_sub(ktime_get(), start)), ret);
···
 		struct device_attribute *attr, const char *buf, size_t count)
 {
 	struct ufs_hba *hba = dev_get_drvdata(dev);
+	struct ufs_clk_info *clki;
+	unsigned long freq;
 	u32 value;
 	int err = 0;
···
 	if (value) {
 		ufshcd_resume_clkscaling(hba);
-	} else {
-		ufshcd_suspend_clkscaling(hba);
-		err = ufshcd_devfreq_scale(hba, ULONG_MAX, true);
-		if (err)
-			dev_err(hba->dev, "%s: failed to scale clocks up %d\n",
-				__func__, err);
+		goto out_rel;
 	}

+	clki = list_first_entry(&hba->clk_list_head, struct ufs_clk_info, list);
+	freq = clki->max_freq;
+
+	ufshcd_suspend_clkscaling(hba);
+
+	if (!ufshcd_is_devfreq_scaling_required(hba, freq, true))
+		goto out_rel;
+
+	err = ufshcd_devfreq_scale(hba, freq, true);
+	if (err)
+		dev_err(hba->dev, "%s: failed to scale clocks up %d\n",
+			__func__, err);
+	else
+		hba->clk_scaling.target_freq = freq;
+
+out_rel:
 	ufshcd_release(hba);
 	ufshcd_rpm_put_sync(hba);
 out:
···
 	if (!hba->clk_scaling.min_gear)
 		hba->clk_scaling.min_gear = UFS_HS_G1;
+
+	if (!hba->clk_scaling.wb_gear)
+		/* Use intermediate gear speed HS_G3 as the default wb_gear */
+		hba->clk_scaling.wb_gear = UFS_HS_G3;

 	INIT_WORK(&hba->clk_scaling.suspend_work,
 		  ufshcd_clk_scaling_suspend_work);
···
 	case REQ_CLKS_OFF:
 		if (cancel_delayed_work(&hba->clk_gating.gate_work)) {
 			hba->clk_gating.state = CLKS_ON;
-			trace_ufshcd_clk_gating(dev_name(hba->dev),
+			trace_ufshcd_clk_gating(hba,
 						hba->clk_gating.state);
 			break;
 		}
···
 		fallthrough;
 	case CLKS_OFF:
 		hba->clk_gating.state = REQ_CLKS_ON;
-		trace_ufshcd_clk_gating(dev_name(hba->dev),
+		trace_ufshcd_clk_gating(hba,
 					hba->clk_gating.state);
 		queue_work(hba->clk_gating.clk_gating_workq,
 			   &hba->clk_gating.ungate_work);
···
 	if (hba->clk_gating.is_suspended ||
 	    hba->clk_gating.state != REQ_CLKS_OFF) {
 		hba->clk_gating.state = CLKS_ON;
-		trace_ufshcd_clk_gating(dev_name(hba->dev),
+		trace_ufshcd_clk_gating(hba,
 					hba->clk_gating.state);
 		return;
 	}
···
 		hba->clk_gating.state = CLKS_ON;
 		dev_err(hba->dev, "%s: hibern8 enter failed %d\n",
 			__func__, ret);
-		trace_ufshcd_clk_gating(dev_name(hba->dev),
+		trace_ufshcd_clk_gating(hba,
 					hba->clk_gating.state);
 		return;
 	}
···
 	guard(spinlock_irqsave)(&hba->clk_gating.lock);
 	if (hba->clk_gating.state == REQ_CLKS_OFF) {
 		hba->clk_gating.state = CLKS_OFF;
-		trace_ufshcd_clk_gating(dev_name(hba->dev),
+		trace_ufshcd_clk_gating(hba,
 					hba->clk_gating.state);
 	}
···
 	}

 	hba->clk_gating.state = REQ_CLKS_OFF;
-	trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
+	trace_ufshcd_clk_gating(hba, hba->clk_gating.state);
 	queue_delayed_work(hba->clk_gating.clk_gating_workq,
 			   &hba->clk_gating.gate_work,
 			   msecs_to_jiffies(hba->clk_gating.delay_ms));
···
 *
 * Return: 0 on success, non-zero value on failure.
 */
-static int ufshcd_dme_reset(struct ufs_hba *hba)
+int ufshcd_dme_reset(struct ufs_hba *hba)
 {
 	struct uic_command uic_cmd = {
 		.command = UIC_CMD_DME_RESET,
···
 	return ret;
 }
+EXPORT_SYMBOL_GPL(ufshcd_dme_reset);

 int ufshcd_dme_configure_adapt(struct ufs_hba *hba,
 			       int agreed_gear,
···
 *
 * Return: 0 on success, non-zero value on failure.
 */
-static int ufshcd_dme_enable(struct ufs_hba *hba)
+int ufshcd_dme_enable(struct ufs_hba *hba)
 {
 	struct uic_command uic_cmd = {
 		.command = UIC_CMD_DME_ENABLE,
···
 	return ret;
 }
+EXPORT_SYMBOL_GPL(ufshcd_dme_enable);

 static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba)
 {
···
 	ufshcd_vops_hibern8_notify(hba, UIC_CMD_DME_HIBER_ENTER, PRE_CHANGE);

 	ret = ufshcd_uic_pwr_ctrl(hba, &uic_cmd);
-	trace_ufshcd_profile_hibern8(dev_name(hba->dev), "enter",
+	trace_ufshcd_profile_hibern8(hba, "enter",
 			     ktime_to_us(ktime_sub(ktime_get(), start)), ret);

 	if (ret)
···
 	ufshcd_vops_hibern8_notify(hba, UIC_CMD_DME_HIBER_EXIT, PRE_CHANGE);

 	ret = ufshcd_uic_pwr_ctrl(hba, &uic_cmd);
-	trace_ufshcd_profile_hibern8(dev_name(hba->dev), "exit",
+	trace_ufshcd_profile_hibern8(hba, "exit",
 			     ktime_to_us(ktime_sub(ktime_get(), start)), ret);

 	if (ret) {
···
 	}

 	hba->auto_bkops_enabled = true;
-	trace_ufshcd_auto_bkops_state(dev_name(hba->dev), "Enabled");
+	trace_ufshcd_auto_bkops_state(hba, "Enabled");

 	/* No need of URGENT_BKOPS exception from the device */
 	err = ufshcd_disable_ee(hba, MASK_EE_URGENT_BKOPS);
···
 	}

 	hba->auto_bkops_enabled = false;
-	trace_ufshcd_auto_bkops_state(dev_name(hba->dev), "Disabled");
+	trace_ufshcd_auto_bkops_state(hba, "Disabled");
 	hba->is_urgent_bkops_lvl_checked = false;
 out:
 	return err;
···
 		return;
 	}

-	trace_ufshcd_exception_event(dev_name(hba->dev), status);
+	trace_ufshcd_exception_event(hba, status);

 	if (status & hba->ee_drv_mask & MASK_EE_URGENT_BKOPS)
 		ufshcd_bkops_exception_event_handler(hba);

 	if (status & hba->ee_drv_mask & MASK_EE_URGENT_TEMP)
 		ufs_hwmon_notify_event(hba, status & MASK_EE_URGENT_TEMP);
+
+	if (status & hba->ee_drv_mask & MASK_EE_HEALTH_CRITICAL) {
+		hba->critical_health_count++;
+		sysfs_notify(&hba->dev->kobj, NULL, "critical_health");
+	}

 	ufs_debugfs_exception_event(hba, status);
 }
···
 	hba->ufshcd_state = UFSHCD_STATE_OPERATIONAL;
 	spin_unlock_irqrestore(hba->host->host_lock, flags);

-	trace_ufshcd_init(dev_name(hba->dev), ret,
+	trace_ufshcd_init(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), probe_start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
 }
···
 	ufshcd_wb_probe(hba, desc_buf);

 	ufshcd_temp_notif_probe(hba, desc_buf);
+
+	if (dev_info->wspecversion >= 0x410) {
+		hba->critical_health_count = 0;
+		ufshcd_enable_ee(hba, MASK_EE_HEALTH_CRITICAL);
+	}

 	ufs_init_rtc(hba, desc_buf);
···
 	} else if (!ret && on && hba->clk_gating.is_initialized) {
 		scoped_guard(spinlock_irqsave, &hba->clk_gating.lock)
 			hba->clk_gating.state = CLKS_ON;
-		trace_ufshcd_clk_gating(dev_name(hba->dev),
+		trace_ufshcd_clk_gating(hba,
 					hba->clk_gating.state);
 	}

 	if (clk_state_changed)
-		trace_ufshcd_profile_clk_gating(dev_name(hba->dev),
+		trace_ufshcd_profile_clk_gating(hba,
 			(on ? "on" : "off"),
 			ktime_to_us(ktime_sub(ktime_get(), start)), ret);
 	return ret;
···
 	if (ret)
 		dev_err(&sdev->sdev_gendev, "%s failed: %d\n", __func__, ret);

-	trace_ufshcd_wl_runtime_suspend(dev_name(dev), ret,
+	trace_ufshcd_wl_runtime_suspend(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
···
 	if (ret)
 		dev_err(&sdev->sdev_gendev, "%s failed: %d\n", __func__, ret);

-	trace_ufshcd_wl_runtime_resume(dev_name(dev), ret,
+	trace_ufshcd_wl_runtime_resume(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
···
 out:
 	if (!ret)
 		hba->is_sys_suspended = true;
-	trace_ufshcd_wl_suspend(dev_name(dev), ret,
+	trace_ufshcd_wl_suspend(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
···
 	if (ret)
 		dev_err(&sdev->sdev_gendev, "%s failed: %d\n", __func__, ret);
 out:
-	trace_ufshcd_wl_resume(dev_name(dev), ret,
+	trace_ufshcd_wl_resume(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
 	if (!ret)
···
 	}
 	if (ufshcd_is_clkgating_allowed(hba)) {
 		hba->clk_gating.state = CLKS_OFF;
-		trace_ufshcd_clk_gating(dev_name(hba->dev),
+		trace_ufshcd_clk_gating(hba,
 					hba->clk_gating.state);
 	}
···
 	ret = ufshcd_suspend(hba);
 out:
-	trace_ufshcd_system_suspend(dev_name(hba->dev), ret,
+	trace_ufshcd_system_suspend(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
 	return ret;
···
 	ret = ufshcd_resume(hba);

 out:
-	trace_ufshcd_system_resume(dev_name(hba->dev), ret,
+	trace_ufshcd_system_resume(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
···
 	ret = ufshcd_suspend(hba);

-	trace_ufshcd_runtime_suspend(dev_name(hba->dev), ret,
+	trace_ufshcd_runtime_suspend(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
 	return ret;
···
 	ret = ufshcd_resume(hba);

-	trace_ufshcd_runtime_resume(dev_name(hba->dev), ret,
+	trace_ufshcd_runtime_resume(hba, ret,
 		ktime_to_us(ktime_sub(ktime_get(), start)),
 		hba->curr_dev_pwr_mode, hba->uic_link_state);
 	return ret;
+12
drivers/ufs/host/Kconfig
···
 	  Select this if you have UFS controller on Unisoc chipset.
 	  If unsure, say N.
+
+config SCSI_UFS_ROCKCHIP
+	tristate "Rockchip UFS host controller driver"
+	depends on SCSI_UFSHCD_PLATFORM && (ARCH_ROCKCHIP || COMPILE_TEST)
+	help
+	  This selects the Rockchip specific additions to UFSHCD platform driver.
+	  UFS host on Rockchip needs some vendor specific configuration before
+	  accessing the hardware which includes PHY configuration and vendor
+	  specific registers.
+
+	  Select this if you have UFS controller on Rockchip chipset.
+	  If unsure, say N.
+1
drivers/ufs/host/Makefile
···
 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o
 obj-$(CONFIG_SCSI_UFS_MEDIATEK) += ufs-mediatek.o
 obj-$(CONFIG_SCSI_UFS_RENESAS) += ufs-renesas.o
+obj-$(CONFIG_SCSI_UFS_ROCKCHIP) += ufs-rockchip.o
 obj-$(CONFIG_SCSI_UFS_SPRD) += ufs-sprd.o
 obj-$(CONFIG_SCSI_UFS_TI_J721E) += ti-j721e-ufs.o
+5 -5
drivers/ufs/host/ufs-exynos.c
···
 }

 static int exynosauto_ufs_post_pwr_change(struct exynos_ufs *ufs,
-					  struct ufs_pa_layer_attr *pwr)
+					  const struct ufs_pa_layer_attr *pwr)
 {
 	struct ufs_hba *hba = ufs->hba;
 	u32 enabled_vh;
···
 }

 static int exynos7_ufs_post_pwr_change(struct exynos_ufs *ufs,
-				       struct ufs_pa_layer_attr *pwr)
+				       const struct ufs_pa_layer_attr *pwr)
 {
 	struct ufs_hba *hba = ufs->hba;
 	int lanes = max_t(u32, pwr->lane_rx, pwr->lane_tx);
···
 }

 static int exynos_ufs_pre_pwr_mode(struct ufs_hba *hba,
-				   struct ufs_pa_layer_attr *dev_max_params,
+				   const struct ufs_pa_layer_attr *dev_max_params,
 				   struct ufs_pa_layer_attr *dev_req_params)
 {
 	struct exynos_ufs *ufs = ufshcd_get_variant(hba);
···

 #define PWR_MODE_STR_LEN	64
 static int exynos_ufs_post_pwr_mode(struct ufs_hba *hba,
-				    struct ufs_pa_layer_attr *pwr_req)
+				    const struct ufs_pa_layer_attr *pwr_req)
 {
 	struct exynos_ufs *ufs = ufshcd_get_variant(hba);
 	struct phy *generic_phy = ufs->phy;
···
 static int exynos_ufs_pwr_change_notify(struct ufs_hba *hba,
 				enum ufs_notify_change_status status,
-				struct ufs_pa_layer_attr *dev_max_params,
+				const struct ufs_pa_layer_attr *dev_max_params,
 				struct ufs_pa_layer_attr *dev_req_params)
 {
 	int ret = 0;
+1 -1
drivers/ufs/host/ufs-exynos.h
···
 	int (*pre_pwr_change)(struct exynos_ufs *ufs,
 			      struct ufs_pa_layer_attr *pwr);
 	int (*post_pwr_change)(struct exynos_ufs *ufs,
-			       struct ufs_pa_layer_attr *pwr);
+			       const struct ufs_pa_layer_attr *pwr);
 	int (*pre_hce_enable)(struct exynos_ufs *ufs);
 	int (*post_hce_enable)(struct exynos_ufs *ufs);
 };
+3 -3
drivers/ufs/host/ufs-hisi.c
···
 }

 static int ufs_hisi_pwr_change_notify(struct ufs_hba *hba,
-				      enum ufs_notify_change_status status,
-				      struct ufs_pa_layer_attr *dev_max_params,
-				      struct ufs_pa_layer_attr *dev_req_params)
+				enum ufs_notify_change_status status,
+				const struct ufs_pa_layer_attr *dev_max_params,
+				struct ufs_pa_layer_attr *dev_req_params)
 {
 	struct ufs_host_params host_params;
 	int ret = 0;
+6 -5
drivers/ufs/host/ufs-mediatek.c
···
 }

 static int ufs_mtk_pre_pwr_change(struct ufs_hba *hba,
-				  struct ufs_pa_layer_attr *dev_max_params,
-				  struct ufs_pa_layer_attr *dev_req_params)
+				  const struct ufs_pa_layer_attr *dev_max_params,
+				  struct ufs_pa_layer_attr *dev_req_params)
 {
 	struct ufs_mtk_host *host = ufshcd_get_variant(hba);
 	struct ufs_host_params host_params;
···
 }

 static int ufs_mtk_pwr_change_notify(struct ufs_hba *hba,
-				     enum ufs_notify_change_status stage,
-				     struct ufs_pa_layer_attr *dev_max_params,
-				     struct ufs_pa_layer_attr *dev_req_params)
+				enum ufs_notify_change_status stage,
+				const struct ufs_pa_layer_attr *dev_max_params,
+				struct ufs_pa_layer_attr *dev_req_params)
 {
 	int ret = 0;
···
 }

 static int ufs_mtk_clk_scale_notify(struct ufs_hba *hba, bool scale_up,
+				    unsigned long target_freq,
 				    enum ufs_notify_change_status status)
 {
 	if (!ufshcd_is_clkscaling_supported(hba))
+90 -36
drivers/ufs/host/ufs-qcom.c
···
 #include <linux/platform_device.h>
 #include <linux/reset-controller.h>
 #include <linux/time.h>
+#include <linux/unaligned.h>
+#include <linux/units.h>

 #include <soc/qcom/ice.h>
···
 };

 static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host);
-static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up);
+static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, unsigned long freq);

 static struct ufs_qcom_host *rcdev_to_ufs_host(struct reset_controller_dev *rcd)
 {
···
 }

 #ifdef CONFIG_SCSI_UFS_CRYPTO
+/**
+ * ufs_qcom_config_ice_allocator() - ICE core allocator configuration
+ *
+ * @host: pointer to qcom specific variant structure.
+ */
+static void ufs_qcom_config_ice_allocator(struct ufs_qcom_host *host)
+{
+	struct ufs_hba *hba = host->hba;
+	static const uint8_t val[4] = { NUM_RX_R1W0, NUM_TX_R0W1, NUM_RX_R1W1, NUM_TX_R1W1 };
+	u32 config;
+
+	if (!(host->caps & UFS_QCOM_CAP_ICE_CONFIG) ||
+	    !(host->hba->caps & UFSHCD_CAP_CRYPTO))
+		return;
+
+	config = get_unaligned_le32(val);
+
+	ufshcd_writel(hba, ICE_ALLOCATOR_TYPE, REG_UFS_MEM_ICE_CONFIG);
+	ufshcd_writel(hba, config, REG_UFS_MEM_ICE_NUM_CORE);
+}

 static inline void ufs_qcom_ice_enable(struct ufs_qcom_host *host)
 {
···
 {
 	return 0;
 }
+
+static void ufs_qcom_config_ice_allocator(struct ufs_qcom_host *host)
+{
+}
+
 #endif

 static void ufs_qcom_disable_lane_clks(struct ufs_qcom_host *host)
···
 		err = ufs_qcom_check_hibern8(hba);
 		ufs_qcom_enable_hw_clk_gating(hba);
 		ufs_qcom_ice_enable(host);
+		ufs_qcom_config_ice_allocator(host);
 		break;
 	default:
 		dev_err(hba->dev, "%s: invalid status %d\n", __func__, status);
···
 * ufs_qcom_cfg_timers - Configure ufs qcom cfg timers
 *
 * @hba: host controller instance
- * @gear: Current operating gear
- * @hs: current power mode
- * @rate: current operating rate (A or B)
- * @update_link_startup_timer: indicate if link_start ongoing
 * @is_pre_scale_up: flag to check if pre scale up condition.
 * Return: zero for success and non-zero in case of a failure.
 */
-static int ufs_qcom_cfg_timers(struct ufs_hba *hba, u32 gear,
-			       u32 hs, u32 rate, bool update_link_startup_timer,
-			       bool is_pre_scale_up)
+static int ufs_qcom_cfg_timers(struct ufs_hba *hba, bool is_pre_scale_up)
 {
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
 	struct ufs_clk_info *clki;
···
 	 */
 	if (host->hw_ver.major < 4 && !ufshcd_is_intr_aggr_allowed(hba))
 		return 0;
-
-	if (gear == 0) {
-		dev_err(hba->dev, "%s: invalid gear = %d\n", __func__, gear);
-		return -EINVAL;
-	}

 	list_for_each_entry(clki, &hba->clk_list_head, list) {
 		if (!strcmp(clki->name, "core_clk")) {
···
 	switch (status) {
 	case PRE_CHANGE:
-		if (ufs_qcom_cfg_timers(hba, UFS_PWM_G1, SLOWAUTO_MODE,
-					0, true, false)) {
+		if (ufs_qcom_cfg_timers(hba, false)) {
 			dev_err(hba->dev, "%s: ufs_qcom_cfg_timers() failed\n",
 				__func__);
 			return -EINVAL;
 		}

-		err = ufs_qcom_set_core_clk_ctrl(hba, true);
+		err = ufs_qcom_set_core_clk_ctrl(hba, ULONG_MAX);
 		if (err)
 			dev_err(hba->dev, "cfg core clk ctrl failed\n");
 		/*
···
 static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba,
 				enum ufs_notify_change_status status,
-				struct ufs_pa_layer_attr *dev_max_params,
+				const struct ufs_pa_layer_attr *dev_max_params,
 				struct ufs_pa_layer_attr *dev_req_params)
 {
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
···
 		}
 		break;
 	case POST_CHANGE:
-		if (ufs_qcom_cfg_timers(hba, dev_req_params->gear_rx,
-					dev_req_params->pwr_rx,
-					dev_req_params->hs_rate, false, false)) {
+		if (ufs_qcom_cfg_timers(hba, false)) {
 			dev_err(hba->dev, "%s: ufs_qcom_cfg_timers() failed\n",
 				__func__);
 			/*
···
 	host_params->hs_tx_gear = host_params->hs_rx_gear = ufs_qcom_get_hs_gear(hba);
 }

+static void ufs_qcom_set_host_caps(struct ufs_hba *hba)
+{
+	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+
+	if (host->hw_ver.major >= 0x5)
+		host->caps |= UFS_QCOM_CAP_ICE_CONFIG;
+}
+
 static void ufs_qcom_set_caps(struct ufs_hba *hba)
 {
 	hba->caps |= UFSHCD_CAP_CLK_GATING | UFSHCD_CAP_HIBERN8_WITH_CLK_GATING;
···
 	hba->caps |= UFSHCD_CAP_WB_EN;
 	hba->caps |= UFSHCD_CAP_AGGR_POWER_COLLAPSE;
 	hba->caps |= UFSHCD_CAP_RPM_AUTOSUSPEND;
+
+	ufs_qcom_set_host_caps(hba);
 }

 /**
···
 	return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_VS_CORE_CLK_40NS_CYCLES), reg);
 }

-static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up)
+static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, unsigned long freq)
 {
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
 	struct list_head *head = &hba->clk_list_head;
···
 		    !strcmp(clki->name, "core_clk_unipro")) {
 			if (!clki->max_freq)
 				cycles_in_1us = 150; /* default for backwards compatibility */
-			else if (is_scale_up)
-				cycles_in_1us = ceil(clki->max_freq, (1000 * 1000));
+			else if (freq == ULONG_MAX)
+				cycles_in_1us = ceil(clki->max_freq, HZ_PER_MHZ);
 			else
-				cycles_in_1us = ceil(clk_get_rate(clki->clk), (1000 * 1000));
+				cycles_in_1us = ceil(freq, HZ_PER_MHZ);
+
 			break;
 		}
 	}
···
 	return ufs_qcom_set_clk_40ns_cycles(hba, cycles_in_1us);
 }

-static int ufs_qcom_clk_scale_up_pre_change(struct ufs_hba *hba)
+static int ufs_qcom_clk_scale_up_pre_change(struct ufs_hba *hba, unsigned long freq)
 {
-	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-	struct ufs_pa_layer_attr *attr = &host->dev_req_params;
 	int ret;

-	ret = ufs_qcom_cfg_timers(hba, attr->gear_rx, attr->pwr_rx,
-				  attr->hs_rate, false, true);
+	ret = ufs_qcom_cfg_timers(hba, true);
 	if (ret) {
 		dev_err(hba->dev, "%s ufs cfg timer failed\n", __func__);
 		return ret;
 	}
 	/* set unipro core clock attributes and clear clock divider */
-	return ufs_qcom_set_core_clk_ctrl(hba, true);
+	return ufs_qcom_set_core_clk_ctrl(hba, freq);
 }

 static int ufs_qcom_clk_scale_up_post_change(struct ufs_hba *hba)
···
 	return err;
 }

-static int ufs_qcom_clk_scale_down_post_change(struct ufs_hba *hba)
+static int ufs_qcom_clk_scale_down_post_change(struct ufs_hba *hba, unsigned long freq)
 {
 	/* set unipro core clock attributes and clear clock divider */
-	return ufs_qcom_set_core_clk_ctrl(hba, false);
+	return ufs_qcom_set_core_clk_ctrl(hba, freq);
 }

-static int ufs_qcom_clk_scale_notify(struct ufs_hba *hba,
-		bool scale_up, enum ufs_notify_change_status status)
+static int ufs_qcom_clk_scale_notify(struct ufs_hba *hba, bool scale_up,
+				     unsigned long target_freq,
+				     enum ufs_notify_change_status status)
 {
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
 	int err;
···
 		if (err)
 			return err;
 		if (scale_up)
-			err = ufs_qcom_clk_scale_up_pre_change(hba);
+			err = ufs_qcom_clk_scale_up_pre_change(hba, target_freq);
 		else
 			err = ufs_qcom_clk_scale_down_pre_change(hba);
···
 		if (scale_up)
 			err = ufs_qcom_clk_scale_up_post_change(hba);
 		else
-			err = ufs_qcom_clk_scale_down_post_change(hba);
+			err = ufs_qcom_clk_scale_down_post_change(hba, target_freq);

 		if (err) {
···
 	return ret;
 }

+static u32 ufs_qcom_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)
+{
+	u32 gear = 0;
+
+	switch (freq) {
+	case 403000000:
+		gear = UFS_HS_G5;
+		break;
+	case 300000000:
+		gear = UFS_HS_G4;
+		break;
+	case 201500000:
+		gear = UFS_HS_G3;
+		break;
+	case 150000000:
+	case 100000000:
+		gear = UFS_HS_G2;
+		break;
+	case 75000000:
+	case 37500000:
+		gear = UFS_HS_G1;
+		break;
+	default:
+		dev_err(hba->dev, "%s: Unsupported clock freq : %lu\n", __func__, freq);
+		break;
+	}
+
+	return gear;
+}
+
 /*
 * struct ufs_hba_qcom_vops - UFS QCOM specific variant operations
 *
···
 	.op_runtime_config	= ufs_qcom_op_runtime_config,
 	.get_outstanding_cqs	= ufs_qcom_get_outstanding_cqs,
 	.config_esi		= ufs_qcom_config_esi,
+	.freq_to_gear_speed	= ufs_qcom_freq_to_gear_speed,
 };

 /**
+38 -1
drivers/ufs/host/ufs-qcom.h
··· 50 50 */ 51 51 UFS_AH8_CFG = 0xFC, 52 52 53 + REG_UFS_MEM_ICE_CONFIG = 0x260C, 54 + REG_UFS_MEM_ICE_NUM_CORE = 0x2664, 55 + 53 56 REG_UFS_CFG3 = 0x271C, 54 57 55 58 REG_UFS_DEBUG_SPARE_CFG = 0x284C, ··· 113 110 /* bit definition for UFS_UFS_TEST_BUS_CTRL_n */ 114 111 #define TEST_BUS_SUB_SEL_MASK GENMASK(4, 0) /* All XXX_SEL fields are 5 bits wide */ 115 112 113 + /* bit definition for UFS Shared ICE config */ 114 + #define UFS_QCOM_CAP_ICE_CONFIG BIT(0) 115 + 116 116 #define REG_UFS_CFG2_CGC_EN_ALL (UAWM_HW_CGC_EN | UARM_HW_CGC_EN |\ 117 117 TXUC_HW_CGC_EN | RXUC_HW_CGC_EN |\ 118 118 DFC_HW_CGC_EN | TRLUT_HW_CGC_EN |\ ··· 140 134 #define UNIPRO_CORE_CLK_FREQ_300_MHZ 300 141 135 #define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202 142 136 #define UNIPRO_CORE_CLK_FREQ_403_MHZ 403 137 + 138 + /* ICE allocator type to share AES engines among TX stream and RX stream */ 139 + #define ICE_ALLOCATOR_TYPE 2 140 + 141 + /* 142 + * Number of cores allocated for RX stream when Read data block received and 143 + * Write data block is not in progress 144 + */ 145 + #define NUM_RX_R1W0 28 146 + 147 + /* 148 + * Number of cores allocated for TX stream when Device asked to send write 149 + * data block and Read data block is not in progress 150 + */ 151 + #define NUM_TX_R0W1 28 152 + 153 + /* 154 + * Number of cores allocated for RX stream when Read data block received and 155 + * Write data block is in progress 156 + * OR 157 + * Device asked to send write data block and Read data block is in progress 158 + */ 159 + #define NUM_RX_R1W1 15 160 + 161 + /* 162 + * Number of cores allocated for TX stream (UFS write) when Read data block 163 + * received and Write data block is in progress 164 + * OR 165 + * Device asked to send write data block and Read data block is in progress 166 + */ 167 + #define NUM_TX_R1W1 13 143 168 144 169 static inline void 145 170 ufs_qcom_get_controller_revision(struct ufs_hba *hba, ··· 233 196 #ifdef CONFIG_SCSI_UFS_CRYPTO 234 197 struct qcom_ice *ice; 235 198 
#endif 236 - 199 + u32 caps; 237 200 void __iomem *dev_ref_clk_ctrl_mmio; 238 201 bool is_dev_ref_clk_enabled; 239 202 struct ufs_hw_version hw_ver;
+435 -292
drivers/ufs/host/ufs-renesas.c
··· 9 9 #include <linux/delay.h> 10 10 #include <linux/dma-mapping.h> 11 11 #include <linux/err.h> 12 + #include <linux/firmware.h> 12 13 #include <linux/iopoll.h> 13 14 #include <linux/kernel.h> 14 15 #include <linux/module.h> 16 + #include <linux/nvmem-consumer.h> 15 17 #include <linux/of.h> 16 18 #include <linux/platform_device.h> 17 19 #include <linux/pm_runtime.h> 20 + #include <linux/sys_soc.h> 18 21 #include <ufs/ufshcd.h> 19 22 20 23 #include "ufshcd-pltfrm.h" 21 24 25 + #define EFUSE_CALIB_SIZE 8 26 + 22 27 struct ufs_renesas_priv { 28 + const struct firmware *fw; 29 + void (*pre_init)(struct ufs_hba *hba); 23 30 bool initialized; /* The hardware needs initialization once */ 31 + u8 calib[EFUSE_CALIB_SIZE]; 24 32 }; 25 33 26 - enum { 27 - SET_PHY_INDEX_LO = 0, 28 - SET_PHY_INDEX_HI, 29 - TIMER_INDEX, 30 - MAX_INDEX 31 - }; 32 - 33 - enum ufs_renesas_init_param_mode { 34 - MODE_RESTORE, 35 - MODE_SET, 36 - MODE_SAVE, 37 - MODE_POLL, 38 - MODE_WAIT, 39 - MODE_WRITE, 40 - }; 41 - 42 - #define PARAM_RESTORE(_reg, _index) \ 43 - { .mode = MODE_RESTORE, .reg = _reg, .index = _index } 44 - #define PARAM_SET(_index, _set) \ 45 - { .mode = MODE_SET, .index = _index, .u.set = _set } 46 - #define PARAM_SAVE(_reg, _mask, _index) \ 47 - { .mode = MODE_SAVE, .reg = _reg, .mask = (u32)(_mask), \ 48 - .index = _index } 49 - #define PARAM_POLL(_reg, _expected, _mask) \ 50 - { .mode = MODE_POLL, .reg = _reg, .u.expected = _expected, \ 51 - .mask = (u32)(_mask) } 52 - #define PARAM_WAIT(_delay_us) \ 53 - { .mode = MODE_WAIT, .u.delay_us = _delay_us } 54 - 55 - #define PARAM_WRITE(_reg, _val) \ 56 - { .mode = MODE_WRITE, .reg = _reg, .u.val = _val } 57 - 58 - #define PARAM_WRITE_D0_D4(_d0, _d4) \ 59 - PARAM_WRITE(0xd0, _d0), PARAM_WRITE(0xd4, _d4) 60 - 61 - #define PARAM_WRITE_800_80C_POLL(_addr, _data_800) \ 62 - PARAM_WRITE_D0_D4(0x0000080c, 0x00000100), \ 63 - PARAM_WRITE_D0_D4(0x00000800, ((_data_800) << 16) | BIT(8) | (_addr)), \ 64 - PARAM_WRITE(0xd0, 0x0000080c), \ 65 
- PARAM_POLL(0xd4, BIT(8), BIT(8)) 66 - 67 - #define PARAM_RESTORE_800_80C_POLL(_index) \ 68 - PARAM_WRITE_D0_D4(0x0000080c, 0x00000100), \ 69 - PARAM_WRITE(0xd0, 0x00000800), \ 70 - PARAM_RESTORE(0xd4, _index), \ 71 - PARAM_WRITE(0xd0, 0x0000080c), \ 72 - PARAM_POLL(0xd4, BIT(8), BIT(8)) 73 - 74 - #define PARAM_WRITE_804_80C_POLL(_addr, _data_804) \ 75 - PARAM_WRITE_D0_D4(0x0000080c, 0x00000100), \ 76 - PARAM_WRITE_D0_D4(0x00000804, ((_data_804) << 16) | BIT(8) | (_addr)), \ 77 - PARAM_WRITE(0xd0, 0x0000080c), \ 78 - PARAM_POLL(0xd4, BIT(8), BIT(8)) 79 - 80 - #define PARAM_WRITE_828_82C_POLL(_data_828) \ 81 - PARAM_WRITE_D0_D4(0x0000082c, 0x0f000000), \ 82 - PARAM_WRITE_D0_D4(0x00000828, _data_828), \ 83 - PARAM_WRITE(0xd0, 0x0000082c), \ 84 - PARAM_POLL(0xd4, _data_828, _data_828) 85 - 86 - #define PARAM_WRITE_PHY(_addr16, _data16) \ 87 - PARAM_WRITE(0xf0, 1), \ 88 - PARAM_WRITE_800_80C_POLL(0x16, (_addr16) & 0xff), \ 89 - PARAM_WRITE_800_80C_POLL(0x17, ((_addr16) >> 8) & 0xff), \ 90 - PARAM_WRITE_800_80C_POLL(0x18, (_data16) & 0xff), \ 91 - PARAM_WRITE_800_80C_POLL(0x19, ((_data16) >> 8) & 0xff), \ 92 - PARAM_WRITE_800_80C_POLL(0x1c, 0x01), \ 93 - PARAM_WRITE_828_82C_POLL(0x0f000000), \ 94 - PARAM_WRITE(0xf0, 0) 95 - 96 - #define PARAM_SET_PHY(_addr16, _data16) \ 97 - PARAM_WRITE(0xf0, 1), \ 98 - PARAM_WRITE_800_80C_POLL(0x16, (_addr16) & 0xff), \ 99 - PARAM_WRITE_800_80C_POLL(0x17, ((_addr16) >> 8) & 0xff), \ 100 - PARAM_WRITE_800_80C_POLL(0x1c, 0x01), \ 101 - PARAM_WRITE_828_82C_POLL(0x0f000000), \ 102 - PARAM_WRITE_804_80C_POLL(0x1a, 0), \ 103 - PARAM_WRITE(0xd0, 0x00000808), \ 104 - PARAM_SAVE(0xd4, 0xff, SET_PHY_INDEX_LO), \ 105 - PARAM_WRITE_804_80C_POLL(0x1b, 0), \ 106 - PARAM_WRITE(0xd0, 0x00000808), \ 107 - PARAM_SAVE(0xd4, 0xff, SET_PHY_INDEX_HI), \ 108 - PARAM_WRITE_828_82C_POLL(0x0f000000), \ 109 - PARAM_WRITE(0xf0, 0), \ 110 - PARAM_WRITE(0xf0, 1), \ 111 - PARAM_WRITE_800_80C_POLL(0x16, (_addr16) & 0xff), \ 112 - PARAM_WRITE_800_80C_POLL(0x17, 
((_addr16) >> 8) & 0xff), \ 113 - PARAM_SET(SET_PHY_INDEX_LO, ((_data16 & 0xff) << 16) | BIT(8) | 0x18), \ 114 - PARAM_RESTORE_800_80C_POLL(SET_PHY_INDEX_LO), \ 115 - PARAM_SET(SET_PHY_INDEX_HI, (((_data16 >> 8) & 0xff) << 16) | BIT(8) | 0x19), \ 116 - PARAM_RESTORE_800_80C_POLL(SET_PHY_INDEX_HI), \ 117 - PARAM_WRITE_800_80C_POLL(0x1c, 0x01), \ 118 - PARAM_WRITE_828_82C_POLL(0x0f000000), \ 119 - PARAM_WRITE(0xf0, 0) 120 - 121 - #define PARAM_INDIRECT_WRITE(_gpio, _addr, _data_800) \ 122 - PARAM_WRITE(0xf0, _gpio), \ 123 - PARAM_WRITE_800_80C_POLL(_addr, _data_800), \ 124 - PARAM_WRITE_828_82C_POLL(0x0f000000), \ 125 - PARAM_WRITE(0xf0, 0) 126 - 127 - #define PARAM_INDIRECT_POLL(_gpio, _addr, _expected, _mask) \ 128 - PARAM_WRITE(0xf0, _gpio), \ 129 - PARAM_WRITE_800_80C_POLL(_addr, 0), \ 130 - PARAM_WRITE(0xd0, 0x00000808), \ 131 - PARAM_POLL(0xd4, _expected, _mask), \ 132 - PARAM_WRITE(0xf0, 0) 133 - 134 - struct ufs_renesas_init_param { 135 - enum ufs_renesas_init_param_mode mode; 136 - u32 reg; 137 - union { 138 - u32 expected; 139 - u32 delay_us; 140 - u32 set; 141 - u32 val; 142 - } u; 143 - u32 mask; 144 - u32 index; 145 - }; 146 - 147 - /* This setting is for SERIES B */ 148 - static const struct ufs_renesas_init_param ufs_param[] = { 149 - PARAM_WRITE(0xc0, 0x49425308), 150 - PARAM_WRITE_D0_D4(0x00000104, 0x00000002), 151 - PARAM_WAIT(1), 152 - PARAM_WRITE_D0_D4(0x00000828, 0x00000200), 153 - PARAM_WAIT(1), 154 - PARAM_WRITE_D0_D4(0x00000828, 0x00000000), 155 - PARAM_WRITE_D0_D4(0x00000104, 0x00000001), 156 - PARAM_WRITE_D0_D4(0x00000940, 0x00000001), 157 - PARAM_WAIT(1), 158 - PARAM_WRITE_D0_D4(0x00000940, 0x00000000), 159 - 160 - PARAM_WRITE(0xc0, 0x49425308), 161 - PARAM_WRITE(0xc0, 0x41584901), 162 - 163 - PARAM_WRITE_D0_D4(0x0000080c, 0x00000100), 164 - PARAM_WRITE_D0_D4(0x00000804, 0x00000000), 165 - PARAM_WRITE(0xd0, 0x0000080c), 166 - PARAM_POLL(0xd4, BIT(8), BIT(8)), 167 - 168 - PARAM_WRITE(REG_CONTROLLER_ENABLE, 0x00000001), 169 - 170 - 
PARAM_WRITE(0xd0, 0x00000804), 171 - PARAM_POLL(0xd4, BIT(8) | BIT(6) | BIT(0), BIT(8) | BIT(6) | BIT(0)), 172 - 173 - PARAM_WRITE(0xd0, 0x00000d00), 174 - PARAM_SAVE(0xd4, 0x0000ffff, TIMER_INDEX), 175 - PARAM_WRITE(0xd4, 0x00000000), 176 - PARAM_WRITE_D0_D4(0x0000082c, 0x0f000000), 177 - PARAM_WRITE_D0_D4(0x00000828, 0x08000000), 178 - PARAM_WRITE(0xd0, 0x0000082c), 179 - PARAM_POLL(0xd4, BIT(27), BIT(27)), 180 - PARAM_WRITE(0xd0, 0x00000d2c), 181 - PARAM_POLL(0xd4, BIT(0), BIT(0)), 182 - 183 - /* phy setup */ 184 - PARAM_INDIRECT_WRITE(1, 0x01, 0x001f), 185 - PARAM_INDIRECT_WRITE(7, 0x5d, 0x0014), 186 - PARAM_INDIRECT_WRITE(7, 0x5e, 0x0014), 187 - PARAM_INDIRECT_WRITE(7, 0x0d, 0x0003), 188 - PARAM_INDIRECT_WRITE(7, 0x0e, 0x0007), 189 - PARAM_INDIRECT_WRITE(7, 0x5f, 0x0003), 190 - PARAM_INDIRECT_WRITE(7, 0x60, 0x0003), 191 - PARAM_INDIRECT_WRITE(7, 0x5b, 0x00a6), 192 - PARAM_INDIRECT_WRITE(7, 0x5c, 0x0003), 193 - 194 - PARAM_INDIRECT_POLL(7, 0x3c, 0, BIT(7)), 195 - PARAM_INDIRECT_POLL(7, 0x4c, 0, BIT(4)), 196 - 197 - PARAM_INDIRECT_WRITE(1, 0x32, 0x0080), 198 - PARAM_INDIRECT_WRITE(1, 0x1f, 0x0001), 199 - PARAM_INDIRECT_WRITE(0, 0x2c, 0x0001), 200 - PARAM_INDIRECT_WRITE(0, 0x32, 0x0087), 201 - 202 - PARAM_INDIRECT_WRITE(1, 0x4d, 0x0061), 203 - PARAM_INDIRECT_WRITE(4, 0x9b, 0x0009), 204 - PARAM_INDIRECT_WRITE(4, 0xa6, 0x0005), 205 - PARAM_INDIRECT_WRITE(4, 0xa5, 0x0058), 206 - PARAM_INDIRECT_WRITE(1, 0x39, 0x0027), 207 - PARAM_INDIRECT_WRITE(1, 0x47, 0x004c), 208 - 209 - PARAM_INDIRECT_WRITE(7, 0x0d, 0x0002), 210 - PARAM_INDIRECT_WRITE(7, 0x0e, 0x0007), 211 - 212 - PARAM_WRITE_PHY(0x0028, 0x0061), 213 - PARAM_WRITE_PHY(0x4014, 0x0061), 214 - PARAM_SET_PHY(0x401c, BIT(2)), 215 - PARAM_WRITE_PHY(0x4000, 0x0000), 216 - PARAM_WRITE_PHY(0x4001, 0x0000), 217 - 218 - PARAM_WRITE_PHY(0x10ae, 0x0001), 219 - PARAM_WRITE_PHY(0x10ad, 0x0000), 220 - PARAM_WRITE_PHY(0x10af, 0x0001), 221 - PARAM_WRITE_PHY(0x10b6, 0x0001), 222 - PARAM_WRITE_PHY(0x10ae, 0x0000), 223 - 224 - 
PARAM_WRITE_PHY(0x10ae, 0x0001), 225 - PARAM_WRITE_PHY(0x10ad, 0x0000), 226 - PARAM_WRITE_PHY(0x10af, 0x0002), 227 - PARAM_WRITE_PHY(0x10b6, 0x0001), 228 - PARAM_WRITE_PHY(0x10ae, 0x0000), 229 - 230 - PARAM_WRITE_PHY(0x10ae, 0x0001), 231 - PARAM_WRITE_PHY(0x10ad, 0x0080), 232 - PARAM_WRITE_PHY(0x10af, 0x0000), 233 - PARAM_WRITE_PHY(0x10b6, 0x0001), 234 - PARAM_WRITE_PHY(0x10ae, 0x0000), 235 - 236 - PARAM_WRITE_PHY(0x10ae, 0x0001), 237 - PARAM_WRITE_PHY(0x10ad, 0x0080), 238 - PARAM_WRITE_PHY(0x10af, 0x001a), 239 - PARAM_WRITE_PHY(0x10b6, 0x0001), 240 - PARAM_WRITE_PHY(0x10ae, 0x0000), 241 - 242 - PARAM_INDIRECT_WRITE(7, 0x70, 0x0016), 243 - PARAM_INDIRECT_WRITE(7, 0x71, 0x0016), 244 - PARAM_INDIRECT_WRITE(7, 0x72, 0x0014), 245 - PARAM_INDIRECT_WRITE(7, 0x73, 0x0014), 246 - PARAM_INDIRECT_WRITE(7, 0x74, 0x0000), 247 - PARAM_INDIRECT_WRITE(7, 0x75, 0x0000), 248 - PARAM_INDIRECT_WRITE(7, 0x76, 0x0010), 249 - PARAM_INDIRECT_WRITE(7, 0x77, 0x0010), 250 - PARAM_INDIRECT_WRITE(7, 0x78, 0x00ff), 251 - PARAM_INDIRECT_WRITE(7, 0x79, 0x0000), 252 - 253 - PARAM_INDIRECT_WRITE(7, 0x19, 0x0007), 254 - 255 - PARAM_INDIRECT_WRITE(7, 0x1a, 0x0007), 256 - 257 - PARAM_INDIRECT_WRITE(7, 0x24, 0x000c), 258 - 259 - PARAM_INDIRECT_WRITE(7, 0x25, 0x000c), 260 - 261 - PARAM_INDIRECT_WRITE(7, 0x62, 0x0000), 262 - PARAM_INDIRECT_WRITE(7, 0x63, 0x0000), 263 - PARAM_INDIRECT_WRITE(7, 0x5d, 0x0014), 264 - PARAM_INDIRECT_WRITE(7, 0x5e, 0x0017), 265 - PARAM_INDIRECT_WRITE(7, 0x5d, 0x0004), 266 - PARAM_INDIRECT_WRITE(7, 0x5e, 0x0017), 267 - PARAM_INDIRECT_POLL(7, 0x55, 0, BIT(6)), 268 - PARAM_INDIRECT_POLL(7, 0x41, 0, BIT(7)), 269 - /* end of phy setup */ 270 - 271 - PARAM_WRITE(0xf0, 0), 272 - PARAM_WRITE(0xd0, 0x00000d00), 273 - PARAM_RESTORE(0xd4, TIMER_INDEX), 274 - }; 34 + #define UFS_RENESAS_FIRMWARE_NAME "r8a779f0_ufs.bin" 35 + MODULE_FIRMWARE(UFS_RENESAS_FIRMWARE_NAME); 275 36 276 37 static void ufs_renesas_dbg_register_dump(struct ufs_hba *hba) 277 38 { 278 39 ufshcd_dump_regs(hba, 0xc0, 
0x40, "regs: 0xc0 + "); 279 40 } 280 41 281 - static void ufs_renesas_reg_control(struct ufs_hba *hba, 282 - const struct ufs_renesas_init_param *p) 42 + static void ufs_renesas_poll(struct ufs_hba *hba, u32 reg, u32 expected, u32 mask) 283 43 { 284 - static u32 save[MAX_INDEX]; 285 44 int ret; 286 45 u32 val; 287 46 288 - WARN_ON(p->index >= MAX_INDEX); 289 - 290 - switch (p->mode) { 291 - case MODE_RESTORE: 292 - ufshcd_writel(hba, save[p->index], p->reg); 293 - break; 294 - case MODE_SET: 295 - save[p->index] |= p->u.set; 296 - break; 297 - case MODE_SAVE: 298 - save[p->index] = ufshcd_readl(hba, p->reg) & p->mask; 299 - break; 300 - case MODE_POLL: 301 - ret = readl_poll_timeout_atomic(hba->mmio_base + p->reg, 302 - val, 303 - (val & p->mask) == p->u.expected, 304 - 10, 1000); 305 - if (ret) 306 - dev_err(hba->dev, "%s: poll failed %d (%08x, %08x, %08x)\n", 307 - __func__, ret, val, p->mask, p->u.expected); 308 - break; 309 - case MODE_WAIT: 310 - if (p->u.delay_us > 1000) 311 - mdelay(DIV_ROUND_UP(p->u.delay_us, 1000)); 312 - else 313 - udelay(p->u.delay_us); 314 - break; 315 - case MODE_WRITE: 316 - ufshcd_writel(hba, p->u.val, p->reg); 317 - break; 318 - default: 319 - break; 320 - } 47 + ret = readl_poll_timeout_atomic(hba->mmio_base + reg, 48 + val, (val & mask) == expected, 49 + 10, 1000); 50 + if (ret) 51 + dev_err(hba->dev, "%s: poll failed %d (%08x, %08x, %08x)\n", 52 + __func__, ret, val, mask, expected); 321 53 } 322 54 323 - static void ufs_renesas_pre_init(struct ufs_hba *hba) 55 + static u32 ufs_renesas_read(struct ufs_hba *hba, u32 reg) 324 56 { 325 - const struct ufs_renesas_init_param *p = ufs_param; 326 - unsigned int i; 57 + return ufshcd_readl(hba, reg); 58 + } 327 59 328 - for (i = 0; i < ARRAY_SIZE(ufs_param); i++) 329 - ufs_renesas_reg_control(hba, &p[i]); 60 + static void ufs_renesas_write(struct ufs_hba *hba, u32 reg, u32 value) 61 + { 62 + ufshcd_writel(hba, value, reg); 63 + } 64 + 65 + static void ufs_renesas_write_d0_d4(struct 
ufs_hba *hba, u32 data_d0, u32 data_d4) 66 + { 67 + ufs_renesas_write(hba, 0xd0, data_d0); 68 + ufs_renesas_write(hba, 0xd4, data_d4); 69 + } 70 + 71 + static void ufs_renesas_write_800_80c_poll(struct ufs_hba *hba, u32 addr, 72 + u32 data_800) 73 + { 74 + ufs_renesas_write_d0_d4(hba, 0x0000080c, 0x00000100); 75 + ufs_renesas_write_d0_d4(hba, 0x00000800, (data_800 << 16) | BIT(8) | addr); 76 + ufs_renesas_write(hba, 0xd0, 0x0000080c); 77 + ufs_renesas_poll(hba, 0xd4, BIT(8), BIT(8)); 78 + } 79 + 80 + static void ufs_renesas_write_804_80c_poll(struct ufs_hba *hba, u32 addr, u32 data_804) 81 + { 82 + ufs_renesas_write_d0_d4(hba, 0x0000080c, 0x00000100); 83 + ufs_renesas_write_d0_d4(hba, 0x00000804, (data_804 << 16) | BIT(8) | addr); 84 + ufs_renesas_write(hba, 0xd0, 0x0000080c); 85 + ufs_renesas_poll(hba, 0xd4, BIT(8), BIT(8)); 86 + } 87 + 88 + static void ufs_renesas_write_828_82c_poll(struct ufs_hba *hba, u32 data_828) 89 + { 90 + ufs_renesas_write_d0_d4(hba, 0x0000082c, 0x0f000000); 91 + ufs_renesas_write_d0_d4(hba, 0x00000828, data_828); 92 + ufs_renesas_write(hba, 0xd0, 0x0000082c); 93 + ufs_renesas_poll(hba, 0xd4, data_828, data_828); 94 + } 95 + 96 + static void ufs_renesas_write_phy(struct ufs_hba *hba, u32 addr16, u32 data16) 97 + { 98 + ufs_renesas_write(hba, 0xf0, 1); 99 + ufs_renesas_write_800_80c_poll(hba, 0x16, addr16 & 0xff); 100 + ufs_renesas_write_800_80c_poll(hba, 0x17, (addr16 >> 8) & 0xff); 101 + ufs_renesas_write_800_80c_poll(hba, 0x18, data16 & 0xff); 102 + ufs_renesas_write_800_80c_poll(hba, 0x19, (data16 >> 8) & 0xff); 103 + ufs_renesas_write_800_80c_poll(hba, 0x1c, 0x01); 104 + ufs_renesas_write_828_82c_poll(hba, 0x0f000000); 105 + ufs_renesas_write(hba, 0xf0, 0); 106 + } 107 + 108 + static void ufs_renesas_set_phy(struct ufs_hba *hba, u32 addr16, u32 data16) 109 + { 110 + u32 low, high; 111 + 112 + ufs_renesas_write(hba, 0xf0, 1); 113 + ufs_renesas_write_800_80c_poll(hba, 0x16, addr16 & 0xff); 114 + ufs_renesas_write_800_80c_poll(hba, 0x17, 
(addr16 >> 8) & 0xff); 115 + ufs_renesas_write_800_80c_poll(hba, 0x1c, 0x01); 116 + ufs_renesas_write_828_82c_poll(hba, 0x0f000000); 117 + ufs_renesas_write_804_80c_poll(hba, 0x1a, 0); 118 + ufs_renesas_write(hba, 0xd0, 0x00000808); 119 + low = ufs_renesas_read(hba, 0xd4) & 0xff; 120 + ufs_renesas_write_804_80c_poll(hba, 0x1b, 0); 121 + ufs_renesas_write(hba, 0xd0, 0x00000808); 122 + high = ufs_renesas_read(hba, 0xd4) & 0xff; 123 + ufs_renesas_write_828_82c_poll(hba, 0x0f000000); 124 + ufs_renesas_write(hba, 0xf0, 0); 125 + 126 + data16 |= (high << 8) | low; 127 + ufs_renesas_write_phy(hba, addr16, data16); 128 + } 129 + 130 + static void ufs_renesas_reset_indirect_write(struct ufs_hba *hba, int gpio, 131 + u32 addr, u32 data) 132 + { 133 + ufs_renesas_write(hba, 0xf0, gpio); 134 + ufs_renesas_write_800_80c_poll(hba, addr, data); 135 + } 136 + 137 + static void ufs_renesas_reset_indirect_update(struct ufs_hba *hba) 138 + { 139 + ufs_renesas_write_d0_d4(hba, 0x0000082c, 0x0f000000); 140 + ufs_renesas_write_d0_d4(hba, 0x00000828, 0x0f000000); 141 + ufs_renesas_write(hba, 0xd0, 0x0000082c); 142 + ufs_renesas_poll(hba, 0xd4, BIT(27) | BIT(26) | BIT(24), BIT(27) | BIT(26) | BIT(24)); 143 + ufs_renesas_write(hba, 0xf0, 0); 144 + } 145 + 146 + static void ufs_renesas_indirect_write(struct ufs_hba *hba, u32 gpio, u32 addr, 147 + u32 data_800) 148 + { 149 + ufs_renesas_write(hba, 0xf0, gpio); 150 + ufs_renesas_write_800_80c_poll(hba, addr, data_800); 151 + ufs_renesas_write_828_82c_poll(hba, 0x0f000000); 152 + ufs_renesas_write(hba, 0xf0, 0); 153 + } 154 + 155 + static void ufs_renesas_indirect_poll(struct ufs_hba *hba, u32 gpio, u32 addr, 156 + u32 expected, u32 mask) 157 + { 158 + ufs_renesas_write(hba, 0xf0, gpio); 159 + ufs_renesas_write_800_80c_poll(hba, addr, 0); 160 + ufs_renesas_write(hba, 0xd0, 0x00000808); 161 + ufs_renesas_poll(hba, 0xd4, expected, mask); 162 + ufs_renesas_write(hba, 0xf0, 0); 163 + } 164 + 165 + static void ufs_renesas_init_step1_to_3(struct 
ufs_hba *hba, bool init108) 166 + { 167 + ufs_renesas_write(hba, 0xc0, 0x49425308); 168 + ufs_renesas_write_d0_d4(hba, 0x00000104, 0x00000002); 169 + if (init108) 170 + ufs_renesas_write_d0_d4(hba, 0x00000108, 0x00000002); 171 + udelay(1); 172 + ufs_renesas_write_d0_d4(hba, 0x00000828, 0x00000200); 173 + udelay(1); 174 + ufs_renesas_write_d0_d4(hba, 0x00000828, 0x00000000); 175 + ufs_renesas_write_d0_d4(hba, 0x00000104, 0x00000001); 176 + if (init108) 177 + ufs_renesas_write_d0_d4(hba, 0x00000108, 0x00000001); 178 + ufs_renesas_write_d0_d4(hba, 0x00000940, 0x00000001); 179 + udelay(1); 180 + ufs_renesas_write_d0_d4(hba, 0x00000940, 0x00000000); 181 + 182 + ufs_renesas_write(hba, 0xc0, 0x49425308); 183 + ufs_renesas_write(hba, 0xc0, 0x41584901); 184 + } 185 + 186 + static void ufs_renesas_init_step4_to_6(struct ufs_hba *hba) 187 + { 188 + ufs_renesas_write_d0_d4(hba, 0x0000080c, 0x00000100); 189 + ufs_renesas_write_d0_d4(hba, 0x00000804, 0x00000000); 190 + ufs_renesas_write(hba, 0xd0, 0x0000080c); 191 + ufs_renesas_poll(hba, 0xd4, BIT(8), BIT(8)); 192 + 193 + ufs_renesas_write(hba, REG_CONTROLLER_ENABLE, 0x00000001); 194 + 195 + ufs_renesas_write(hba, 0xd0, 0x00000804); 196 + ufs_renesas_poll(hba, 0xd4, BIT(8) | BIT(6) | BIT(0), BIT(8) | BIT(6) | BIT(0)); 197 + } 198 + 199 + static u32 ufs_renesas_init_disable_timer(struct ufs_hba *hba) 200 + { 201 + u32 timer_val; 202 + 203 + ufs_renesas_write(hba, 0xd0, 0x00000d00); 204 + timer_val = ufs_renesas_read(hba, 0xd4) & 0x0000ffff; 205 + ufs_renesas_write(hba, 0xd4, 0x00000000); 206 + ufs_renesas_write_d0_d4(hba, 0x0000082c, 0x0f000000); 207 + ufs_renesas_write_d0_d4(hba, 0x00000828, 0x08000000); 208 + ufs_renesas_write(hba, 0xd0, 0x0000082c); 209 + ufs_renesas_poll(hba, 0xd4, BIT(27), BIT(27)); 210 + ufs_renesas_write(hba, 0xd0, 0x00000d2c); 211 + ufs_renesas_poll(hba, 0xd4, BIT(0), BIT(0)); 212 + 213 + return timer_val; 214 + } 215 + 216 + static void ufs_renesas_init_enable_timer(struct ufs_hba *hba, u32 timer_val) 
217 + { 218 + ufs_renesas_write(hba, 0xf0, 0); 219 + ufs_renesas_write(hba, 0xd0, 0x00000d00); 220 + ufs_renesas_write(hba, 0xd4, timer_val); 221 + } 222 + 223 + static void ufs_renesas_write_phy_10ad_10af(struct ufs_hba *hba, 224 + u32 data_10ad, u32 data_10af) 225 + { 226 + ufs_renesas_write_phy(hba, 0x10ae, 0x0001); 227 + ufs_renesas_write_phy(hba, 0x10ad, data_10ad); 228 + ufs_renesas_write_phy(hba, 0x10af, data_10af); 229 + ufs_renesas_write_phy(hba, 0x10b6, 0x0001); 230 + ufs_renesas_write_phy(hba, 0x10ae, 0x0000); 231 + } 232 + 233 + static void ufs_renesas_init_compensation_and_slicers(struct ufs_hba *hba) 234 + { 235 + ufs_renesas_write_phy_10ad_10af(hba, 0x0000, 0x0001); 236 + ufs_renesas_write_phy_10ad_10af(hba, 0x0000, 0x0002); 237 + ufs_renesas_write_phy_10ad_10af(hba, 0x0080, 0x0000); 238 + ufs_renesas_write_phy_10ad_10af(hba, 0x0080, 0x001a); 239 + } 240 + 241 + static void ufs_renesas_r8a779f0_es10_pre_init(struct ufs_hba *hba) 242 + { 243 + u32 timer_val; 244 + 245 + /* This setting is for SERIES B */ 246 + ufs_renesas_init_step1_to_3(hba, false); 247 + 248 + ufs_renesas_init_step4_to_6(hba); 249 + 250 + timer_val = ufs_renesas_init_disable_timer(hba); 251 + 252 + /* phy setup */ 253 + ufs_renesas_indirect_write(hba, 1, 0x01, 0x001f); 254 + ufs_renesas_indirect_write(hba, 7, 0x5d, 0x0014); 255 + ufs_renesas_indirect_write(hba, 7, 0x5e, 0x0014); 256 + ufs_renesas_indirect_write(hba, 7, 0x0d, 0x0003); 257 + ufs_renesas_indirect_write(hba, 7, 0x0e, 0x0007); 258 + ufs_renesas_indirect_write(hba, 7, 0x5f, 0x0003); 259 + ufs_renesas_indirect_write(hba, 7, 0x60, 0x0003); 260 + ufs_renesas_indirect_write(hba, 7, 0x5b, 0x00a6); 261 + ufs_renesas_indirect_write(hba, 7, 0x5c, 0x0003); 262 + 263 + ufs_renesas_indirect_poll(hba, 7, 0x3c, 0, BIT(7)); 264 + ufs_renesas_indirect_poll(hba, 7, 0x4c, 0, BIT(4)); 265 + 266 + ufs_renesas_indirect_write(hba, 1, 0x32, 0x0080); 267 + ufs_renesas_indirect_write(hba, 1, 0x1f, 0x0001); 268 + ufs_renesas_indirect_write(hba, 
0, 0x2c, 0x0001); 269 + ufs_renesas_indirect_write(hba, 0, 0x32, 0x0087); 270 + 271 + ufs_renesas_indirect_write(hba, 1, 0x4d, 0x0061); 272 + ufs_renesas_indirect_write(hba, 4, 0x9b, 0x0009); 273 + ufs_renesas_indirect_write(hba, 4, 0xa6, 0x0005); 274 + ufs_renesas_indirect_write(hba, 4, 0xa5, 0x0058); 275 + ufs_renesas_indirect_write(hba, 1, 0x39, 0x0027); 276 + ufs_renesas_indirect_write(hba, 1, 0x47, 0x004c); 277 + 278 + ufs_renesas_indirect_write(hba, 7, 0x0d, 0x0002); 279 + ufs_renesas_indirect_write(hba, 7, 0x0e, 0x0007); 280 + 281 + ufs_renesas_write_phy(hba, 0x0028, 0x0061); 282 + ufs_renesas_write_phy(hba, 0x4014, 0x0061); 283 + ufs_renesas_set_phy(hba, 0x401c, BIT(2)); 284 + ufs_renesas_write_phy(hba, 0x4000, 0x0000); 285 + ufs_renesas_write_phy(hba, 0x4001, 0x0000); 286 + 287 + ufs_renesas_init_compensation_and_slicers(hba); 288 + 289 + ufs_renesas_indirect_write(hba, 7, 0x70, 0x0016); 290 + ufs_renesas_indirect_write(hba, 7, 0x71, 0x0016); 291 + ufs_renesas_indirect_write(hba, 7, 0x72, 0x0014); 292 + ufs_renesas_indirect_write(hba, 7, 0x73, 0x0014); 293 + ufs_renesas_indirect_write(hba, 7, 0x74, 0x0000); 294 + ufs_renesas_indirect_write(hba, 7, 0x75, 0x0000); 295 + ufs_renesas_indirect_write(hba, 7, 0x76, 0x0010); 296 + ufs_renesas_indirect_write(hba, 7, 0x77, 0x0010); 297 + ufs_renesas_indirect_write(hba, 7, 0x78, 0x00ff); 298 + ufs_renesas_indirect_write(hba, 7, 0x79, 0x0000); 299 + 300 + ufs_renesas_indirect_write(hba, 7, 0x19, 0x0007); 301 + ufs_renesas_indirect_write(hba, 7, 0x1a, 0x0007); 302 + ufs_renesas_indirect_write(hba, 7, 0x24, 0x000c); 303 + ufs_renesas_indirect_write(hba, 7, 0x25, 0x000c); 304 + ufs_renesas_indirect_write(hba, 7, 0x62, 0x0000); 305 + ufs_renesas_indirect_write(hba, 7, 0x63, 0x0000); 306 + ufs_renesas_indirect_write(hba, 7, 0x5d, 0x0014); 307 + ufs_renesas_indirect_write(hba, 7, 0x5e, 0x0017); 308 + ufs_renesas_indirect_write(hba, 7, 0x5d, 0x0004); 309 + ufs_renesas_indirect_write(hba, 7, 0x5e, 0x0017); 310 + 
ufs_renesas_indirect_poll(hba, 7, 0x55, 0, BIT(6)); 311 + ufs_renesas_indirect_poll(hba, 7, 0x41, 0, BIT(7)); 312 + /* end of phy setup */ 313 + 314 + ufs_renesas_init_enable_timer(hba, timer_val); 315 + } 316 + 317 + static void ufs_renesas_r8a779f0_init_step3_add(struct ufs_hba *hba, bool assert) 318 + { 319 + u32 val_2x = 0, val_3x = 0, val_4x = 0; 320 + 321 + if (assert) { 322 + val_2x = 0x0001; 323 + val_3x = 0x0003; 324 + val_4x = 0x0001; 325 + } 326 + 327 + ufs_renesas_reset_indirect_write(hba, 7, 0x20, val_2x); 328 + ufs_renesas_reset_indirect_write(hba, 7, 0x4a, val_4x); 329 + ufs_renesas_reset_indirect_write(hba, 7, 0x35, val_3x); 330 + ufs_renesas_reset_indirect_update(hba); 331 + ufs_renesas_reset_indirect_write(hba, 7, 0x21, val_2x); 332 + ufs_renesas_reset_indirect_write(hba, 7, 0x4b, val_4x); 333 + ufs_renesas_reset_indirect_write(hba, 7, 0x36, val_3x); 334 + ufs_renesas_reset_indirect_update(hba); 335 + } 336 + 337 + static void ufs_renesas_r8a779f0_pre_init(struct ufs_hba *hba) 338 + { 339 + struct ufs_renesas_priv *priv = ufshcd_get_variant(hba); 340 + u32 timer_val; 341 + u32 data; 342 + int i; 343 + 344 + /* This setting is for SERIES B */ 345 + ufs_renesas_init_step1_to_3(hba, true); 346 + 347 + ufs_renesas_r8a779f0_init_step3_add(hba, true); 348 + ufs_renesas_reset_indirect_write(hba, 7, 0x5f, 0x0063); 349 + ufs_renesas_reset_indirect_update(hba); 350 + ufs_renesas_reset_indirect_write(hba, 7, 0x60, 0x0003); 351 + ufs_renesas_reset_indirect_update(hba); 352 + ufs_renesas_reset_indirect_write(hba, 7, 0x5b, 0x00a6); 353 + ufs_renesas_reset_indirect_update(hba); 354 + ufs_renesas_reset_indirect_write(hba, 7, 0x5c, 0x0003); 355 + ufs_renesas_reset_indirect_update(hba); 356 + ufs_renesas_r8a779f0_init_step3_add(hba, false); 357 + 358 + ufs_renesas_init_step4_to_6(hba); 359 + 360 + timer_val = ufs_renesas_init_disable_timer(hba); 361 + 362 + ufs_renesas_indirect_write(hba, 1, 0x01, 0x001f); 363 + ufs_renesas_indirect_write(hba, 7, 0x5d, 0x0014); 364 
+ ufs_renesas_indirect_write(hba, 7, 0x5e, 0x0014); 365 + ufs_renesas_indirect_write(hba, 7, 0x0d, 0x0007); 366 + ufs_renesas_indirect_write(hba, 7, 0x0e, 0x0007); 367 + 368 + ufs_renesas_indirect_poll(hba, 7, 0x3c, 0, BIT(7)); 369 + ufs_renesas_indirect_poll(hba, 7, 0x4c, 0, BIT(4)); 370 + 371 + ufs_renesas_indirect_write(hba, 1, 0x32, 0x0080); 372 + ufs_renesas_indirect_write(hba, 1, 0x1f, 0x0001); 373 + ufs_renesas_indirect_write(hba, 1, 0x2c, 0x0001); 374 + ufs_renesas_indirect_write(hba, 1, 0x32, 0x0087); 375 + 376 + ufs_renesas_indirect_write(hba, 1, 0x4d, priv->calib[2]); 377 + ufs_renesas_indirect_write(hba, 1, 0x4e, priv->calib[3]); 378 + ufs_renesas_indirect_write(hba, 1, 0x0d, 0x0006); 379 + ufs_renesas_indirect_write(hba, 1, 0x0e, 0x0007); 380 + ufs_renesas_write_phy(hba, 0x0028, priv->calib[3]); 381 + ufs_renesas_write_phy(hba, 0x4014, priv->calib[3]); 382 + 383 + ufs_renesas_set_phy(hba, 0x401c, BIT(2)); 384 + 385 + ufs_renesas_write_phy(hba, 0x4000, priv->calib[6]); 386 + ufs_renesas_write_phy(hba, 0x4001, priv->calib[7]); 387 + 388 + ufs_renesas_indirect_write(hba, 1, 0x14, 0x0001); 389 + 390 + ufs_renesas_init_compensation_and_slicers(hba); 391 + 392 + ufs_renesas_indirect_write(hba, 7, 0x79, 0x0000); 393 + ufs_renesas_indirect_write(hba, 7, 0x24, 0x000c); 394 + ufs_renesas_indirect_write(hba, 7, 0x25, 0x000c); 395 + ufs_renesas_indirect_write(hba, 7, 0x62, 0x00c0); 396 + ufs_renesas_indirect_write(hba, 7, 0x63, 0x0001); 397 + 398 + for (i = 0; i < priv->fw->size / 2; i++) { 399 + data = (priv->fw->data[i * 2 + 1] << 8) | priv->fw->data[i * 2]; 400 + ufs_renesas_write_phy(hba, 0xc000 + i, data); 401 + } 402 + 403 + ufs_renesas_indirect_write(hba, 7, 0x0d, 0x0002); 404 + ufs_renesas_indirect_write(hba, 7, 0x0e, 0x0007); 405 + 406 + ufs_renesas_indirect_write(hba, 7, 0x5d, 0x0014); 407 + ufs_renesas_indirect_write(hba, 7, 0x5e, 0x0017); 408 + ufs_renesas_indirect_write(hba, 7, 0x5d, 0x0004); 409 + ufs_renesas_indirect_write(hba, 7, 0x5e, 0x0017); 410 
+ ufs_renesas_indirect_poll(hba, 7, 0x55, 0, BIT(6)); 411 + ufs_renesas_indirect_poll(hba, 7, 0x41, 0, BIT(7)); 412 + 413 + ufs_renesas_init_enable_timer(hba, timer_val); 330 414 } 331 415 332 416 static int ufs_renesas_hce_enable_notify(struct ufs_hba *hba, ··· 422 338 return 0; 423 339 424 340 if (status == PRE_CHANGE) 425 - ufs_renesas_pre_init(hba); 341 + priv->pre_init(hba); 426 342 427 343 priv->initialized = true; 428 344 ··· 440 356 return 0; 441 357 } 442 358 359 + static const struct soc_device_attribute ufs_fallback[] = { 360 + { .soc_id = "r8a779f0", .revision = "ES1.[01]" }, 361 + { /* Sentinel */ } 362 + }; 363 + 443 364 static int ufs_renesas_init(struct ufs_hba *hba) 444 365 { 366 + const struct soc_device_attribute *attr; 367 + struct nvmem_cell *cell = NULL; 368 + struct device *dev = hba->dev; 445 369 struct ufs_renesas_priv *priv; 370 + u8 *data = NULL; 371 + size_t len; 372 + int ret; 446 373 447 - priv = devm_kzalloc(hba->dev, sizeof(*priv), GFP_KERNEL); 374 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 448 375 if (!priv) 449 376 return -ENOMEM; 450 377 ufshcd_set_variant(hba, priv); 451 378 452 379 hba->quirks |= UFSHCD_QUIRK_HIBERN_FASTAUTO; 453 380 381 + attr = soc_device_match(ufs_fallback); 382 + if (attr) 383 + goto fallback; 384 + 385 + ret = request_firmware(&priv->fw, UFS_RENESAS_FIRMWARE_NAME, dev); 386 + if (ret) { 387 + dev_warn(dev, "Failed to load firmware\n"); 388 + goto fallback; 389 + } 390 + 391 + cell = nvmem_cell_get(dev, "calibration"); 392 + if (IS_ERR(cell)) { 393 + dev_warn(dev, "No calibration data specified\n"); 394 + goto fallback; 395 + } 396 + 397 + data = nvmem_cell_read(cell, &len); 398 + if (IS_ERR(data)) { 399 + dev_warn(dev, "Failed to read calibration data: %pe\n", data); 400 + goto fallback; 401 + } 402 + 403 + if (len != EFUSE_CALIB_SIZE) { 404 + dev_warn(dev, "Invalid calibration data size %zu\n", len); 405 + goto fallback; 406 + } 407 + 408 + memcpy(priv->calib, data, EFUSE_CALIB_SIZE); 409 + 
priv->pre_init = ufs_renesas_r8a779f0_pre_init; 410 + goto out; 411 + 412 + fallback: 413 + dev_info(dev, "Using ES1.0 init code\n"); 414 + priv->pre_init = ufs_renesas_r8a779f0_es10_pre_init; 415 + 416 + out: 417 + kfree(data); 418 + if (!IS_ERR_OR_NULL(cell)) 419 + nvmem_cell_put(cell); 420 + 454 421 return 0; 422 + } 423 + 424 + static void ufs_renesas_exit(struct ufs_hba *hba) 425 + { 426 + struct ufs_renesas_priv *priv = ufshcd_get_variant(hba); 427 + 428 + release_firmware(priv->fw); 455 429 } 456 430 457 431 static int ufs_renesas_set_dma_mask(struct ufs_hba *hba) ··· 520 378 static const struct ufs_hba_variant_ops ufs_renesas_vops = { 521 379 .name = "renesas", 522 380 .init = ufs_renesas_init, 381 + .exit = ufs_renesas_exit, 523 382 .set_dma_mask = ufs_renesas_set_dma_mask, 524 383 .setup_clocks = ufs_renesas_setup_clocks, 525 384 .hce_enable_notify = ufs_renesas_hce_enable_notify,
+354
drivers/ufs/host/ufs-rockchip.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Rockchip UFS Host Controller driver 4 + * 5 + * Copyright (C) 2025 Rockchip Electronics Co., Ltd. 6 + */ 7 + 8 + #include <linux/clk.h> 9 + #include <linux/gpio.h> 10 + #include <linux/mfd/syscon.h> 11 + #include <linux/of.h> 12 + #include <linux/platform_device.h> 13 + #include <linux/pm_domain.h> 14 + #include <linux/pm_wakeup.h> 15 + #include <linux/regmap.h> 16 + #include <linux/reset.h> 17 + 18 + #include <ufs/ufshcd.h> 19 + #include <ufs/unipro.h> 20 + #include "ufshcd-pltfrm.h" 21 + #include "ufs-rockchip.h" 22 + 23 + static int ufs_rockchip_hce_enable_notify(struct ufs_hba *hba, 24 + enum ufs_notify_change_status status) 25 + { 26 + int err = 0; 27 + 28 + if (status == POST_CHANGE) { 29 + err = ufshcd_dme_reset(hba); 30 + if (err) 31 + return err; 32 + 33 + err = ufshcd_dme_enable(hba); 34 + if (err) 35 + return err; 36 + 37 + return ufshcd_vops_phy_initialization(hba); 38 + } 39 + 40 + return 0; 41 + } 42 + 43 + static void ufs_rockchip_set_pm_lvl(struct ufs_hba *hba) 44 + { 45 + hba->rpm_lvl = UFS_PM_LVL_5; 46 + hba->spm_lvl = UFS_PM_LVL_5; 47 + } 48 + 49 + static int ufs_rockchip_rk3576_phy_init(struct ufs_hba *hba) 50 + { 51 + struct ufs_rockchip_host *host = ufshcd_get_variant(hba); 52 + 53 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(PA_LOCAL_TX_LCC_ENABLE, 0x0), 0x0); 54 + /* enable the mphy DME_SET cfg */ 55 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(MPHY_CFG, 0x0), MPHY_CFG_ENABLE); 56 + for (int i = 0; i < 2; i++) { 57 + /* Configuration M - TX */ 58 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_CLK_PRD, SEL_TX_LANE0 + i), 0x06); 59 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_CLK_PRD_EN, SEL_TX_LANE0 + i), 0x02); 60 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_LINERESET_VALUE, SEL_TX_LANE0 + i), 0x44); 61 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_LINERESET_PVALUE1, SEL_TX_LANE0 + i), 0xe6); 62 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_LINERESET_PVALUE2, SEL_TX_LANE0 + i), 0x07); 63 + 
ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_TASE_VALUE, SEL_TX_LANE0 + i), 0x93); 64 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_BASE_NVALUE, SEL_TX_LANE0 + i), 0xc9); 65 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_POWER_SAVING_CTRL, SEL_TX_LANE0 + i), 0x00); 66 + /* Configuration M - RX */ 67 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_CLK_PRD, SEL_RX_LANE0 + i), 0x06); 68 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_CLK_PRD_EN, SEL_RX_LANE0 + i), 0x00); 69 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_LINERESET_VALUE, SEL_RX_LANE0 + i), 0x58); 70 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_LINERESET_PVALUE1, SEL_RX_LANE0 + i), 0x8c); 71 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_LINERESET_PVALUE2, SEL_RX_LANE0 + i), 0x02); 72 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_LINERESET_OPTION, SEL_RX_LANE0 + i), 0xf6); 73 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_POWER_SAVING_CTRL, SEL_RX_LANE0 + i), 0x69); 74 + } 75 + 76 + /* disable the mphy DME_SET cfg */ 77 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(MPHY_CFG, 0x0), MPHY_CFG_DISABLE); 78 + 79 + ufs_sys_writel(host->mphy_base, 0x80, CMN_REG23); 80 + ufs_sys_writel(host->mphy_base, 0xB5, TRSV0_REG14); 81 + ufs_sys_writel(host->mphy_base, 0xB5, TRSV1_REG14); 82 + 83 + ufs_sys_writel(host->mphy_base, 0x03, TRSV0_REG15); 84 + ufs_sys_writel(host->mphy_base, 0x03, TRSV1_REG15); 85 + 86 + ufs_sys_writel(host->mphy_base, 0x38, TRSV0_REG08); 87 + ufs_sys_writel(host->mphy_base, 0x38, TRSV1_REG08); 88 + 89 + ufs_sys_writel(host->mphy_base, 0x50, TRSV0_REG29); 90 + ufs_sys_writel(host->mphy_base, 0x50, TRSV1_REG29); 91 + 92 + ufs_sys_writel(host->mphy_base, 0x80, TRSV0_REG2E); 93 + ufs_sys_writel(host->mphy_base, 0x80, TRSV1_REG2E); 94 + 95 + ufs_sys_writel(host->mphy_base, 0x18, TRSV0_REG3C); 96 + ufs_sys_writel(host->mphy_base, 0x18, TRSV1_REG3C); 97 + 98 + ufs_sys_writel(host->mphy_base, 0x03, TRSV0_REG16); 99 + ufs_sys_writel(host->mphy_base, 0x03, TRSV1_REG16); 100 + 101 + ufs_sys_writel(host->mphy_base, 0x20, 
TRSV0_REG17); 102 + ufs_sys_writel(host->mphy_base, 0x20, TRSV1_REG17); 103 + 104 + ufs_sys_writel(host->mphy_base, 0xC0, TRSV0_REG18); 105 + ufs_sys_writel(host->mphy_base, 0xC0, TRSV1_REG18); 106 + 107 + ufs_sys_writel(host->mphy_base, 0x03, CMN_REG25); 108 + 109 + ufs_sys_writel(host->mphy_base, 0x03, TRSV0_REG3D); 110 + ufs_sys_writel(host->mphy_base, 0x03, TRSV1_REG3D); 111 + 112 + ufs_sys_writel(host->mphy_base, 0xC0, CMN_REG23); 113 + udelay(1); 114 + ufs_sys_writel(host->mphy_base, 0x00, CMN_REG23); 115 + 116 + usleep_range(200, 250); 117 + /* start link up */ 118 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(MIB_T_DBG_CPORT_TX_ENDIAN, 0), 0x0); 119 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(MIB_T_DBG_CPORT_RX_ENDIAN, 0), 0x0); 120 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(N_DEVICEID, 0), 0x0); 121 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(N_DEVICEID_VALID, 0), 0x1); 122 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(T_PEERDEVICEID, 0), 0x1); 123 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(T_CONNECTIONSTATE, 0), 0x1); 124 + 125 + return 0; 126 + } 127 + 128 + static int ufs_rockchip_common_init(struct ufs_hba *hba) 129 + { 130 + struct device *dev = hba->dev; 131 + struct platform_device *pdev = to_platform_device(dev); 132 + struct ufs_rockchip_host *host; 133 + int err; 134 + 135 + host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL); 136 + if (!host) 137 + return -ENOMEM; 138 + 139 + host->ufs_sys_ctrl = devm_platform_ioremap_resource_byname(pdev, "hci_grf"); 140 + if (IS_ERR(host->ufs_sys_ctrl)) 141 + return dev_err_probe(dev, PTR_ERR(host->ufs_sys_ctrl), 142 + "Failed to map HCI system control registers\n"); 143 + 144 + host->ufs_phy_ctrl = devm_platform_ioremap_resource_byname(pdev, "mphy_grf"); 145 + if (IS_ERR(host->ufs_phy_ctrl)) 146 + return dev_err_probe(dev, PTR_ERR(host->ufs_phy_ctrl), 147 + "Failed to map mphy system control registers\n"); 148 + 149 + host->mphy_base = devm_platform_ioremap_resource_byname(pdev, "mphy"); 150 + if (IS_ERR(host->mphy_base)) 151 + return 
dev_err_probe(dev, PTR_ERR(host->mphy_base), 152 + "Failed to map mphy base registers\n"); 153 + 154 + host->rst = devm_reset_control_array_get_exclusive(dev); 155 + if (IS_ERR(host->rst)) 156 + return dev_err_probe(dev, PTR_ERR(host->rst), 157 + "failed to get reset control\n"); 158 + 159 + reset_control_assert(host->rst); 160 + udelay(1); 161 + reset_control_deassert(host->rst); 162 + 163 + host->ref_out_clk = devm_clk_get_enabled(dev, "ref_out"); 164 + if (IS_ERR(host->ref_out_clk)) 165 + return dev_err_probe(dev, PTR_ERR(host->ref_out_clk), 166 + "ref_out clock unavailable\n"); 167 + 168 + host->rst_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); 169 + if (IS_ERR(host->rst_gpio)) 170 + return dev_err_probe(dev, PTR_ERR(host->rst_gpio), 171 + "failed to get reset gpio\n"); 172 + 173 + err = devm_clk_bulk_get_all_enabled(dev, &host->clks); 174 + if (err < 0) 175 + return dev_err_probe(dev, err, "failed to enable clocks\n"); 176 + 177 + host->hba = hba; 178 + 179 + ufshcd_set_variant(hba, host); 180 + 181 + return 0; 182 + } 183 + 184 + static int ufs_rockchip_rk3576_init(struct ufs_hba *hba) 185 + { 186 + struct device *dev = hba->dev; 187 + int ret; 188 + 189 + hba->quirks = UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING; 190 + 191 + /* Enable BKOPS when suspend */ 192 + hba->caps |= UFSHCD_CAP_AUTO_BKOPS_SUSPEND; 193 + /* Enable putting device into deep sleep */ 194 + hba->caps |= UFSHCD_CAP_DEEPSLEEP; 195 + /* Enable devfreq of UFS */ 196 + hba->caps |= UFSHCD_CAP_CLK_SCALING; 197 + /* Enable WriteBooster */ 198 + hba->caps |= UFSHCD_CAP_WB_EN; 199 + 200 + /* Set the default desired pm level in case no users set via sysfs */ 201 + ufs_rockchip_set_pm_lvl(hba); 202 + 203 + ret = ufs_rockchip_common_init(hba); 204 + if (ret) 205 + return dev_err_probe(dev, ret, "ufs common init fail\n"); 206 + 207 + return 0; 208 + } 209 + 210 + static int ufs_rockchip_device_reset(struct ufs_hba *hba) 211 + { 212 + struct ufs_rockchip_host *host = ufshcd_get_variant(hba); 213 
+ 214 + gpiod_set_value_cansleep(host->rst_gpio, 1); 215 + usleep_range(20, 25); 216 + 217 + gpiod_set_value_cansleep(host->rst_gpio, 0); 218 + usleep_range(20, 25); 219 + 220 + return 0; 221 + } 222 + 223 + static const struct ufs_hba_variant_ops ufs_hba_rk3576_vops = { 224 + .name = "rk3576", 225 + .init = ufs_rockchip_rk3576_init, 226 + .device_reset = ufs_rockchip_device_reset, 227 + .hce_enable_notify = ufs_rockchip_hce_enable_notify, 228 + .phy_initialization = ufs_rockchip_rk3576_phy_init, 229 + }; 230 + 231 + static const struct of_device_id ufs_rockchip_of_match[] = { 232 + { .compatible = "rockchip,rk3576-ufshc", .data = &ufs_hba_rk3576_vops }, 233 + { }, 234 + }; 235 + MODULE_DEVICE_TABLE(of, ufs_rockchip_of_match); 236 + 237 + static int ufs_rockchip_probe(struct platform_device *pdev) 238 + { 239 + struct device *dev = &pdev->dev; 240 + const struct ufs_hba_variant_ops *vops; 241 + int err; 242 + 243 + vops = device_get_match_data(dev); 244 + if (!vops) 245 + return dev_err_probe(dev, -ENODATA, "ufs_hba_variant_ops not defined.\n"); 246 + 247 + err = ufshcd_pltfrm_init(pdev, vops); 248 + if (err) 249 + return dev_err_probe(dev, err, "ufshcd_pltfrm_init failed\n"); 250 + 251 + return 0; 252 + } 253 + 254 + static void ufs_rockchip_remove(struct platform_device *pdev) 255 + { 256 + ufshcd_pltfrm_remove(pdev); 257 + } 258 + 259 + #ifdef CONFIG_PM 260 + static int ufs_rockchip_runtime_suspend(struct device *dev) 261 + { 262 + struct ufs_hba *hba = dev_get_drvdata(dev); 263 + struct ufs_rockchip_host *host = ufshcd_get_variant(hba); 264 + 265 + clk_disable_unprepare(host->ref_out_clk); 266 + 267 + /* Do not power down the genpd if rpm_lvl is less than level 5 */ 268 + dev_pm_genpd_rpm_always_on(dev, hba->rpm_lvl < UFS_PM_LVL_5); 269 + 270 + return ufshcd_runtime_suspend(dev); 271 + } 272 + 273 + static int ufs_rockchip_runtime_resume(struct device *dev) 274 + { 275 + struct ufs_hba *hba = dev_get_drvdata(dev); 276 + struct ufs_rockchip_host *host = 
ufshcd_get_variant(hba); 277 + int err; 278 + 279 + err = clk_prepare_enable(host->ref_out_clk); 280 + if (err) { 281 + dev_err(hba->dev, "failed to enable ref_out clock %d\n", err); 282 + return err; 283 + } 284 + 285 + reset_control_assert(host->rst); 286 + udelay(1); 287 + reset_control_deassert(host->rst); 288 + 289 + return ufshcd_runtime_resume(dev); 290 + } 291 + #endif 292 + 293 + #ifdef CONFIG_PM_SLEEP 294 + static int ufs_rockchip_system_suspend(struct device *dev) 295 + { 296 + struct ufs_hba *hba = dev_get_drvdata(dev); 297 + struct ufs_rockchip_host *host = ufshcd_get_variant(hba); 298 + int err; 299 + 300 + /* 301 + * If spm_lvl is less than level 5, it means we need to keep the host 302 + * controller in powered-on state. So device_set_awake_path() is 303 + * calling pm core to notify the genpd provider to meet this requirement 304 + */ 305 + if (hba->spm_lvl < UFS_PM_LVL_5) 306 + device_set_awake_path(dev); 307 + 308 + err = ufshcd_system_suspend(dev); 309 + if (err) { 310 + dev_err(hba->dev, "UFSHCD system suspend failed %d\n", err); 311 + return err; 312 + } 313 + 314 + clk_disable_unprepare(host->ref_out_clk); 315 + 316 + return 0; 317 + } 318 + 319 + static int ufs_rockchip_system_resume(struct device *dev) 320 + { 321 + struct ufs_hba *hba = dev_get_drvdata(dev); 322 + struct ufs_rockchip_host *host = ufshcd_get_variant(hba); 323 + int err; 324 + 325 + err = clk_prepare_enable(host->ref_out_clk); 326 + if (err) { 327 + dev_err(hba->dev, "failed to enable ref_out clock %d\n", err); 328 + return err; 329 + } 330 + 331 + return ufshcd_system_resume(dev); 332 + } 333 + #endif 334 + 335 + static const struct dev_pm_ops ufs_rockchip_pm_ops = { 336 + SET_SYSTEM_SLEEP_PM_OPS(ufs_rockchip_system_suspend, ufs_rockchip_system_resume) 337 + SET_RUNTIME_PM_OPS(ufs_rockchip_runtime_suspend, ufs_rockchip_runtime_resume, NULL) 338 + .prepare = ufshcd_suspend_prepare, 339 + .complete = ufshcd_resume_complete, 340 + }; 341 + 342 + static struct platform_driver 
ufs_rockchip_pltform = { 343 + .probe = ufs_rockchip_probe, 344 + .remove = ufs_rockchip_remove, 345 + .driver = { 346 + .name = "ufshcd-rockchip", 347 + .pm = &ufs_rockchip_pm_ops, 348 + .of_match_table = ufs_rockchip_of_match, 349 + }, 350 + }; 351 + module_platform_driver(ufs_rockchip_pltform); 352 + 353 + MODULE_LICENSE("GPL"); 354 + MODULE_DESCRIPTION("Rockchip UFS Host Driver");
+90
drivers/ufs/host/ufs-rockchip.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Rockchip UFS Host Controller driver 4 + * 5 + * Copyright (C) 2025 Rockchip Electronics Co., Ltd. 6 + */ 7 + 8 + #ifndef _UFS_ROCKCHIP_H_ 9 + #define _UFS_ROCKCHIP_H_ 10 + 11 + #define SEL_TX_LANE0 0x0 12 + #define SEL_TX_LANE1 0x1 13 + #define SEL_TX_LANE2 0x2 14 + #define SEL_TX_LANE3 0x3 15 + #define SEL_RX_LANE0 0x4 16 + #define SEL_RX_LANE1 0x5 17 + #define SEL_RX_LANE2 0x6 18 + #define SEL_RX_LANE3 0x7 19 + 20 + #define VND_TX_CLK_PRD 0xAA 21 + #define VND_TX_CLK_PRD_EN 0xA9 22 + #define VND_TX_LINERESET_PVALUE2 0xAB 23 + #define VND_TX_LINERESET_PVALUE1 0xAC 24 + #define VND_TX_LINERESET_VALUE 0xAD 25 + #define VND_TX_BASE_NVALUE 0x93 26 + #define VND_TX_TASE_VALUE 0x94 27 + #define VND_TX_POWER_SAVING_CTRL 0x7F 28 + #define VND_RX_CLK_PRD 0x12 29 + #define VND_RX_CLK_PRD_EN 0x11 30 + #define VND_RX_LINERESET_PVALUE2 0x1B 31 + #define VND_RX_LINERESET_PVALUE1 0x1C 32 + #define VND_RX_LINERESET_VALUE 0x1D 33 + #define VND_RX_LINERESET_OPTION 0x25 34 + #define VND_RX_POWER_SAVING_CTRL 0x2F 35 + #define VND_RX_SAVE_DET_CTRL 0x1E 36 + 37 + #define CMN_REG23 0x8C 38 + #define CMN_REG25 0x94 39 + #define TRSV0_REG08 0xE0 40 + #define TRSV1_REG08 0x220 41 + #define TRSV0_REG14 0x110 42 + #define TRSV1_REG14 0x250 43 + #define TRSV0_REG15 0x134 44 + #define TRSV1_REG15 0x274 45 + #define TRSV0_REG16 0x128 46 + #define TRSV1_REG16 0x268 47 + #define TRSV0_REG17 0x12C 48 + #define TRSV1_REG17 0x26c 49 + #define TRSV0_REG18 0x120 50 + #define TRSV1_REG18 0x260 51 + #define TRSV0_REG29 0x164 52 + #define TRSV1_REG29 0x2A4 53 + #define TRSV0_REG2E 0x178 54 + #define TRSV1_REG2E 0x2B8 55 + #define TRSV0_REG3C 0x1B0 56 + #define TRSV1_REG3C 0x2F0 57 + #define TRSV0_REG3D 0x1B4 58 + #define TRSV1_REG3D 0x2F4 59 + 60 + #define MPHY_CFG 0x200 61 + #define MPHY_CFG_ENABLE 0x40 62 + #define MPHY_CFG_DISABLE 0x0 63 + 64 + #define MIB_T_DBG_CPORT_TX_ENDIAN 0xc022 65 + #define MIB_T_DBG_CPORT_RX_ENDIAN 0xc023 66 + 67 
+ struct ufs_rockchip_host { 68 + struct ufs_hba *hba; 69 + void __iomem *ufs_phy_ctrl; 70 + void __iomem *ufs_sys_ctrl; 71 + void __iomem *mphy_base; 72 + struct gpio_desc *rst_gpio; 73 + struct reset_control *rst; 74 + struct clk *ref_out_clk; 75 + struct clk_bulk_data *clks; 76 + uint64_t caps; 77 + }; 78 + 79 + #define ufs_sys_writel(base, val, reg) \ 80 + writel((val), (base) + (reg)) 81 + #define ufs_sys_readl(base, reg) readl((base) + (reg)) 82 + #define ufs_sys_set_bits(base, mask, reg) \ 83 + ufs_sys_writel( \ 84 + (base), ((mask) | (ufs_sys_readl((base), (reg)))), (reg)) 85 + #define ufs_sys_ctrl_clr_bits(base, mask, reg) \ 86 + ufs_sys_writel((base), \ 87 + ((~(mask)) & (ufs_sys_readl((base), (reg)))), \ 88 + (reg)) 89 + 90 + #endif /* _UFS_ROCKCHIP_H_ */
+3 -3
drivers/ufs/host/ufs-sprd.c
··· 160 160 } 161 161 162 162 static int sprd_ufs_pwr_change_notify(struct ufs_hba *hba, 163 - enum ufs_notify_change_status status, 164 - struct ufs_pa_layer_attr *dev_max_params, 165 - struct ufs_pa_layer_attr *dev_req_params) 163 + enum ufs_notify_change_status status, 164 + const struct ufs_pa_layer_attr *dev_max_params, 165 + struct ufs_pa_layer_attr *dev_req_params) 166 166 { 167 167 struct ufs_sprd_host *host = ufshcd_get_variant(hba); 168 168
+1 -1
drivers/ufs/host/ufshcd-pci.c
··· 157 157 158 158 static int ufs_intel_lkf_pwr_change_notify(struct ufs_hba *hba, 159 159 enum ufs_notify_change_status status, 160 - struct ufs_pa_layer_attr *dev_max_params, 160 + const struct ufs_pa_layer_attr *dev_max_params, 161 161 struct ufs_pa_layer_attr *dev_req_params) 162 162 { 163 163 int err = 0;
+2 -2
drivers/usb/gadget/function/f_mass_storage.c
··· 2142 2142 * of Posix locks. 2143 2143 */ 2144 2144 case FORMAT_UNIT: 2145 - case RELEASE: 2146 - case RESERVE: 2145 + case RELEASE_6: 2146 + case RESERVE_6: 2147 2147 case SEND_DIAGNOSTIC: 2148 2148 2149 2149 default:
+2 -2
drivers/usb/storage/debug.c
··· 58 58 case INQUIRY: what = "INQUIRY"; break; 59 59 case RECOVER_BUFFERED_DATA: what = "RECOVER_BUFFERED_DATA"; break; 60 60 case MODE_SELECT: what = "MODE_SELECT"; break; 61 - case RESERVE: what = "RESERVE"; break; 62 - case RELEASE: what = "RELEASE"; break; 61 + case RESERVE_6: what = "RESERVE"; break; 62 + case RELEASE_6: what = "RELEASE"; break; 63 63 case COPY: what = "COPY"; break; 64 64 case ERASE: what = "ERASE"; break; 65 65 case MODE_SENSE: what = "MODE_SENSE"; break;
+6 -10
include/scsi/libiscsi_tcp.h
··· 15 15 struct iscsi_tcp_conn; 16 16 struct iscsi_segment; 17 17 struct sk_buff; 18 - struct ahash_request; 19 18 20 19 typedef int iscsi_segment_done_fn_t(struct iscsi_tcp_conn *, 21 20 struct iscsi_segment *); ··· 26 27 unsigned int total_size; 27 28 unsigned int total_copied; 28 29 29 - struct ahash_request *hash; 30 + u32 *crcp; 30 31 unsigned char padbuf[ISCSI_PAD_LEN]; 31 32 unsigned char recv_digest[ISCSI_DIGEST_SIZE]; 32 33 unsigned char digest[ISCSI_DIGEST_SIZE]; ··· 60 61 * stop to terminate */ 61 62 /* control data */ 62 63 struct iscsi_tcp_recv in; /* TCP receive context */ 63 - /* CRC32C (Rx) LLD should set this is they do not offload */ 64 - struct ahash_request *rx_hash; 64 + /* CRC32C (Rx) LLD should set this if they do not offload */ 65 + u32 *rx_crcp; 65 66 }; 66 67 67 68 struct iscsi_tcp_task { ··· 98 99 99 100 extern void iscsi_segment_init_linear(struct iscsi_segment *segment, 100 101 void *data, size_t size, 101 - iscsi_segment_done_fn_t *done, 102 - struct ahash_request *hash); 102 + iscsi_segment_done_fn_t *done, u32 *crcp); 103 103 extern int 104 104 iscsi_segment_seek_sg(struct iscsi_segment *segment, 105 105 struct scatterlist *sg_list, unsigned int sg_count, 106 106 unsigned int offset, size_t size, 107 - iscsi_segment_done_fn_t *done, 108 - struct ahash_request *hash); 107 + iscsi_segment_done_fn_t *done, u32 *crcp); 109 108 110 109 /* digest helpers */ 111 - extern void iscsi_tcp_dgst_header(struct ahash_request *hash, const void *hdr, 112 - size_t hdrlen, 110 + extern void iscsi_tcp_dgst_header(const void *hdr, size_t hdrlen, 113 111 unsigned char digest[ISCSI_DIGEST_SIZE]); 114 112 extern struct iscsi_cls_conn * 115 113 iscsi_tcp_conn_setup(struct iscsi_cls_session *cls_session, int dd_data_size,
+9
include/scsi/scsi_device.h
··· 247 247 unsigned int queue_stopped; /* request queue is quiesced */ 248 248 bool offline_already; /* Device offline message logged */ 249 249 250 + unsigned int ua_new_media_ctr; /* Counter for New Media UNIT ATTENTIONs */ 251 + unsigned int ua_por_ctr; /* Counter for Power On / Reset UAs */ 252 + 250 253 atomic_t disk_events_disable_depth; /* disable depth for disk events */ 251 254 252 255 DECLARE_BITMAP(supported_events, SDEV_EVT_MAXBITS); /* supported events */ ··· 686 683 { 687 684 return sbitmap_weight(&sdev->budget_map); 688 685 } 686 + 687 + /* Macros to access the UNIT ATTENTION counters */ 688 + #define scsi_get_ua_new_media_ctr(sdev) \ 689 + ((const unsigned int)(sdev->ua_new_media_ctr)) 690 + #define scsi_get_ua_por_ctr(sdev) \ 691 + ((const unsigned int)(sdev->ua_por_ctr)) 689 692 690 693 #define MODULE_ALIAS_SCSI_DEVICE(type) \ 691 694 MODULE_ALIAS("scsi:t-" __stringify(type) "*")
+2 -2
include/scsi/scsi_proto.h
··· 33 33 #define INQUIRY 0x12 34 34 #define RECOVER_BUFFERED_DATA 0x14 35 35 #define MODE_SELECT 0x15 36 - #define RESERVE 0x16 37 - #define RELEASE 0x17 36 + #define RESERVE_6 0x16 37 + #define RELEASE_6 0x17 38 38 #define COPY 0x18 39 39 #define ERASE 0x19 40 40 #define MODE_SENSE 0x1a
+2 -2
include/trace/events/scsi.h
··· 29 29 scsi_opcode_name(INQUIRY), \ 30 30 scsi_opcode_name(RECOVER_BUFFERED_DATA), \ 31 31 scsi_opcode_name(MODE_SELECT), \ 32 - scsi_opcode_name(RESERVE), \ 33 - scsi_opcode_name(RELEASE), \ 32 + scsi_opcode_name(RESERVE_6), \ 33 + scsi_opcode_name(RELEASE_6), \ 34 34 scsi_opcode_name(COPY), \ 35 35 scsi_opcode_name(ERASE), \ 36 36 scsi_opcode_name(MODE_SENSE), \
+2 -2
include/trace/events/target.h
··· 31 31 scsi_opcode_name(INQUIRY), \ 32 32 scsi_opcode_name(RECOVER_BUFFERED_DATA), \ 33 33 scsi_opcode_name(MODE_SELECT), \ 34 - scsi_opcode_name(RESERVE), \ 35 - scsi_opcode_name(RELEASE), \ 34 + scsi_opcode_name(RESERVE_6), \ 35 + scsi_opcode_name(RELEASE_6), \ 36 36 scsi_opcode_name(COPY), \ 37 37 scsi_opcode_name(ERASE), \ 38 38 scsi_opcode_name(MODE_SENSE), \
-276
include/uapi/scsi/cxlflash_ioctl.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ 2 - /* 3 - * CXL Flash Device Driver 4 - * 5 - * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 6 - * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 7 - * 8 - * Copyright (C) 2015 IBM Corporation 9 - * 10 - * This program is free software; you can redistribute it and/or 11 - * modify it under the terms of the GNU General Public License 12 - * as published by the Free Software Foundation; either version 13 - * 2 of the License, or (at your option) any later version. 14 - */ 15 - 16 - #ifndef _CXLFLASH_IOCTL_H 17 - #define _CXLFLASH_IOCTL_H 18 - 19 - #include <linux/types.h> 20 - 21 - /* 22 - * Structure and definitions for all CXL Flash ioctls 23 - */ 24 - #define CXLFLASH_WWID_LEN 16 25 - 26 - /* 27 - * Structure and flag definitions CXL Flash superpipe ioctls 28 - */ 29 - 30 - #define DK_CXLFLASH_VERSION_0 0 31 - 32 - struct dk_cxlflash_hdr { 33 - __u16 version; /* Version data */ 34 - __u16 rsvd[3]; /* Reserved for future use */ 35 - __u64 flags; /* Input flags */ 36 - __u64 return_flags; /* Returned flags */ 37 - }; 38 - 39 - /* 40 - * Return flag definitions available to all superpipe ioctls 41 - * 42 - * Similar to the input flags, these are grown from the bottom-up with the 43 - * intention that ioctl-specific return flag definitions would grow from the 44 - * top-down, allowing the two sets to co-exist. While not required/enforced 45 - * at this time, this provides future flexibility. 46 - */ 47 - #define DK_CXLFLASH_ALL_PORTS_ACTIVE 0x0000000000000001ULL 48 - #define DK_CXLFLASH_APP_CLOSE_ADAP_FD 0x0000000000000002ULL 49 - #define DK_CXLFLASH_CONTEXT_SQ_CMD_MODE 0x0000000000000004ULL 50 - 51 - /* 52 - * General Notes: 53 - * ------------- 54 - * The 'context_id' field of all ioctl structures contains the context 55 - * identifier for a context in the lower 32-bits (upper 32-bits are not 56 - * to be used when identifying a context to the AFU). 
That said, the value 57 - * in its entirety (all 64-bits) is to be treated as an opaque cookie and 58 - * should be presented as such when issuing ioctls. 59 - */ 60 - 61 - /* 62 - * DK_CXLFLASH_ATTACH Notes: 63 - * ------------------------ 64 - * Read/write access permissions are specified via the O_RDONLY, O_WRONLY, 65 - * and O_RDWR flags defined in the fcntl.h header file. 66 - * 67 - * A valid adapter file descriptor (fd >= 0) is only returned on the initial 68 - * attach (successful) of a context. When a context is shared(reused), the user 69 - * is expected to already 'know' the adapter file descriptor associated with the 70 - * context. 71 - */ 72 - #define DK_CXLFLASH_ATTACH_REUSE_CONTEXT 0x8000000000000000ULL 73 - 74 - struct dk_cxlflash_attach { 75 - struct dk_cxlflash_hdr hdr; /* Common fields */ 76 - __u64 num_interrupts; /* Requested number of interrupts */ 77 - __u64 context_id; /* Returned context */ 78 - __u64 mmio_size; /* Returned size of MMIO area */ 79 - __u64 block_size; /* Returned block size, in bytes */ 80 - __u64 adap_fd; /* Returned adapter file descriptor */ 81 - __u64 last_lba; /* Returned last LBA on the device */ 82 - __u64 max_xfer; /* Returned max transfer size, blocks */ 83 - __u64 reserved[8]; /* Reserved for future use */ 84 - }; 85 - 86 - struct dk_cxlflash_detach { 87 - struct dk_cxlflash_hdr hdr; /* Common fields */ 88 - __u64 context_id; /* Context to detach */ 89 - __u64 reserved[8]; /* Reserved for future use */ 90 - }; 91 - 92 - struct dk_cxlflash_udirect { 93 - struct dk_cxlflash_hdr hdr; /* Common fields */ 94 - __u64 context_id; /* Context to own physical resources */ 95 - __u64 rsrc_handle; /* Returned resource handle */ 96 - __u64 last_lba; /* Returned last LBA on the device */ 97 - __u64 reserved[8]; /* Reserved for future use */ 98 - }; 99 - 100 - #define DK_CXLFLASH_UVIRTUAL_NEED_WRITE_SAME 0x8000000000000000ULL 101 - 102 - struct dk_cxlflash_uvirtual { 103 - struct dk_cxlflash_hdr hdr; /* Common fields */ 104 - 
__u64 context_id; /* Context to own virtual resources */ 105 - __u64 lun_size; /* Requested size, in 4K blocks */ 106 - __u64 rsrc_handle; /* Returned resource handle */ 107 - __u64 last_lba; /* Returned last LBA of LUN */ 108 - __u64 reserved[8]; /* Reserved for future use */ 109 - }; 110 - 111 - struct dk_cxlflash_release { 112 - struct dk_cxlflash_hdr hdr; /* Common fields */ 113 - __u64 context_id; /* Context owning resources */ 114 - __u64 rsrc_handle; /* Resource handle to release */ 115 - __u64 reserved[8]; /* Reserved for future use */ 116 - }; 117 - 118 - struct dk_cxlflash_resize { 119 - struct dk_cxlflash_hdr hdr; /* Common fields */ 120 - __u64 context_id; /* Context owning resources */ 121 - __u64 rsrc_handle; /* Resource handle of LUN to resize */ 122 - __u64 req_size; /* New requested size, in 4K blocks */ 123 - __u64 last_lba; /* Returned last LBA of LUN */ 124 - __u64 reserved[8]; /* Reserved for future use */ 125 - }; 126 - 127 - struct dk_cxlflash_clone { 128 - struct dk_cxlflash_hdr hdr; /* Common fields */ 129 - __u64 context_id_src; /* Context to clone from */ 130 - __u64 context_id_dst; /* Context to clone to */ 131 - __u64 adap_fd_src; /* Source context adapter fd */ 132 - __u64 reserved[8]; /* Reserved for future use */ 133 - }; 134 - 135 - #define DK_CXLFLASH_VERIFY_SENSE_LEN 18 136 - #define DK_CXLFLASH_VERIFY_HINT_SENSE 0x8000000000000000ULL 137 - 138 - struct dk_cxlflash_verify { 139 - struct dk_cxlflash_hdr hdr; /* Common fields */ 140 - __u64 context_id; /* Context owning resources to verify */ 141 - __u64 rsrc_handle; /* Resource handle of LUN */ 142 - __u64 hint; /* Reasons for verify */ 143 - __u64 last_lba; /* Returned last LBA of device */ 144 - __u8 sense_data[DK_CXLFLASH_VERIFY_SENSE_LEN]; /* SCSI sense data */ 145 - __u8 pad[6]; /* Pad to next 8-byte boundary */ 146 - __u64 reserved[8]; /* Reserved for future use */ 147 - }; 148 - 149 - #define DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET 0x8000000000000000ULL 150 - 151 - struct 
dk_cxlflash_recover_afu { 152 - struct dk_cxlflash_hdr hdr; /* Common fields */ 153 - __u64 reason; /* Reason for recovery request */ 154 - __u64 context_id; /* Context to recover / updated ID */ 155 - __u64 mmio_size; /* Returned size of MMIO area */ 156 - __u64 adap_fd; /* Returned adapter file descriptor */ 157 - __u64 reserved[8]; /* Reserved for future use */ 158 - }; 159 - 160 - #define DK_CXLFLASH_MANAGE_LUN_WWID_LEN CXLFLASH_WWID_LEN 161 - #define DK_CXLFLASH_MANAGE_LUN_ENABLE_SUPERPIPE 0x8000000000000000ULL 162 - #define DK_CXLFLASH_MANAGE_LUN_DISABLE_SUPERPIPE 0x4000000000000000ULL 163 - #define DK_CXLFLASH_MANAGE_LUN_ALL_PORTS_ACCESSIBLE 0x2000000000000000ULL 164 - 165 - struct dk_cxlflash_manage_lun { 166 - struct dk_cxlflash_hdr hdr; /* Common fields */ 167 - __u8 wwid[DK_CXLFLASH_MANAGE_LUN_WWID_LEN]; /* Page83 WWID, NAA-6 */ 168 - __u64 reserved[8]; /* Rsvd, future use */ 169 - }; 170 - 171 - union cxlflash_ioctls { 172 - struct dk_cxlflash_attach attach; 173 - struct dk_cxlflash_detach detach; 174 - struct dk_cxlflash_udirect udirect; 175 - struct dk_cxlflash_uvirtual uvirtual; 176 - struct dk_cxlflash_release release; 177 - struct dk_cxlflash_resize resize; 178 - struct dk_cxlflash_clone clone; 179 - struct dk_cxlflash_verify verify; 180 - struct dk_cxlflash_recover_afu recover_afu; 181 - struct dk_cxlflash_manage_lun manage_lun; 182 - }; 183 - 184 - #define MAX_CXLFLASH_IOCTL_SZ (sizeof(union cxlflash_ioctls)) 185 - 186 - #define CXL_MAGIC 0xCA 187 - #define CXL_IOWR(_n, _s) _IOWR(CXL_MAGIC, _n, struct _s) 188 - 189 - /* 190 - * CXL Flash superpipe ioctls start at base of the reserved CXL_MAGIC 191 - * region (0x80) and grow upwards. 
192 - */ 193 - #define DK_CXLFLASH_ATTACH CXL_IOWR(0x80, dk_cxlflash_attach) 194 - #define DK_CXLFLASH_USER_DIRECT CXL_IOWR(0x81, dk_cxlflash_udirect) 195 - #define DK_CXLFLASH_RELEASE CXL_IOWR(0x82, dk_cxlflash_release) 196 - #define DK_CXLFLASH_DETACH CXL_IOWR(0x83, dk_cxlflash_detach) 197 - #define DK_CXLFLASH_VERIFY CXL_IOWR(0x84, dk_cxlflash_verify) 198 - #define DK_CXLFLASH_RECOVER_AFU CXL_IOWR(0x85, dk_cxlflash_recover_afu) 199 - #define DK_CXLFLASH_MANAGE_LUN CXL_IOWR(0x86, dk_cxlflash_manage_lun) 200 - #define DK_CXLFLASH_USER_VIRTUAL CXL_IOWR(0x87, dk_cxlflash_uvirtual) 201 - #define DK_CXLFLASH_VLUN_RESIZE CXL_IOWR(0x88, dk_cxlflash_resize) 202 - #define DK_CXLFLASH_VLUN_CLONE CXL_IOWR(0x89, dk_cxlflash_clone) 203 - 204 - /* 205 - * Structure and flag definitions CXL Flash host ioctls 206 - */ 207 - 208 - #define HT_CXLFLASH_VERSION_0 0 209 - 210 - struct ht_cxlflash_hdr { 211 - __u16 version; /* Version data */ 212 - __u16 subcmd; /* Sub-command */ 213 - __u16 rsvd[2]; /* Reserved for future use */ 214 - __u64 flags; /* Input flags */ 215 - __u64 return_flags; /* Returned flags */ 216 - }; 217 - 218 - /* 219 - * Input flag definitions available to all host ioctls 220 - * 221 - * These are grown from the bottom-up with the intention that ioctl-specific 222 - * input flag definitions would grow from the top-down, allowing the two sets 223 - * to co-exist. While not required/enforced at this time, this provides future 224 - * flexibility. 
225 - */ 226 - #define HT_CXLFLASH_HOST_READ 0x0000000000000000ULL 227 - #define HT_CXLFLASH_HOST_WRITE 0x0000000000000001ULL 228 - 229 - #define HT_CXLFLASH_LUN_PROVISION_SUBCMD_CREATE_LUN 0x0001 230 - #define HT_CXLFLASH_LUN_PROVISION_SUBCMD_DELETE_LUN 0x0002 231 - #define HT_CXLFLASH_LUN_PROVISION_SUBCMD_QUERY_PORT 0x0003 232 - 233 - struct ht_cxlflash_lun_provision { 234 - struct ht_cxlflash_hdr hdr; /* Common fields */ 235 - __u16 port; /* Target port for provision request */ 236 - __u16 reserved16[3]; /* Reserved for future use */ 237 - __u64 size; /* Size of LUN (4K blocks) */ 238 - __u64 lun_id; /* SCSI LUN ID */ 239 - __u8 wwid[CXLFLASH_WWID_LEN];/* Page83 WWID, NAA-6 */ 240 - __u64 max_num_luns; /* Maximum number of LUNs provisioned */ 241 - __u64 cur_num_luns; /* Current number of LUNs provisioned */ 242 - __u64 max_cap_port; /* Total capacity for port (4K blocks) */ 243 - __u64 cur_cap_port; /* Current capacity for port (4K blocks) */ 244 - __u64 reserved[8]; /* Reserved for future use */ 245 - }; 246 - 247 - #define HT_CXLFLASH_AFU_DEBUG_MAX_DATA_LEN 262144 /* 256K */ 248 - #define HT_CXLFLASH_AFU_DEBUG_SUBCMD_LEN 12 249 - struct ht_cxlflash_afu_debug { 250 - struct ht_cxlflash_hdr hdr; /* Common fields */ 251 - __u8 reserved8[4]; /* Reserved for future use */ 252 - __u8 afu_subcmd[HT_CXLFLASH_AFU_DEBUG_SUBCMD_LEN]; /* AFU subcommand, 253 - * (pass through) 254 - */ 255 - __u64 data_ea; /* Data buffer effective address */ 256 - __u32 data_len; /* Data buffer length */ 257 - __u32 reserved32; /* Reserved for future use */ 258 - __u64 reserved[8]; /* Reserved for future use */ 259 - }; 260 - 261 - union cxlflash_ht_ioctls { 262 - struct ht_cxlflash_lun_provision lun_provision; 263 - struct ht_cxlflash_afu_debug afu_debug; 264 - }; 265 - 266 - #define MAX_HT_CXLFLASH_IOCTL_SZ (sizeof(union cxlflash_ht_ioctls)) 267 - 268 - /* 269 - * CXL Flash host ioctls start at the top of the reserved CXL_MAGIC 270 - * region (0xBF) and grow downwards. 
271 - */ 272 - #define HT_CXLFLASH_LUN_PROVISION CXL_IOWR(0xBF, ht_cxlflash_lun_provision) 273 - #define HT_CXLFLASH_AFU_DEBUG CXL_IOWR(0xBE, ht_cxlflash_afu_debug) 274 - 275 - 276 - #endif /* ifndef _CXLFLASH_IOCTL_H */
+1
include/ufs/ufs.h
···
419 419 MASK_EE_TOO_LOW_TEMP = BIT(4),
420 420 MASK_EE_WRITEBOOSTER_EVENT = BIT(5),
421 421 MASK_EE_PERFORMANCE_THROTTLING = BIT(6),
422 + MASK_EE_HEALTH_CRITICAL = BIT(9),
422 423 };
423 424 #define MASK_EE_URGENT_TEMP (MASK_EE_TOO_HIGH_TEMP | MASK_EE_TOO_LOW_TEMP)
424 425
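The new MASK_EE_HEALTH_CRITICAL bit extends the exception-event bitmask the same way as the existing entries. A minimal sketch of how such a status word would be tested, with a counter that mirrors the behaviour the new critical_health sysfs attribute exposes — the helper name and the counter handling are illustrative, not quoted driver code:

```c
/* Sketch of exception-event mask testing; the enum values mirror the
 * ufs.h hunk above, but ee_health_critical() and the counter are
 * illustrative only. */
#define BIT(n)	(1U << (n))

enum {
	MASK_EE_TOO_LOW_TEMP		= BIT(4),
	MASK_EE_WRITEBOOSTER_EVENT	= BIT(5),
	MASK_EE_PERFORMANCE_THROTTLING	= BIT(6),
	MASK_EE_HEALTH_CRITICAL		= BIT(9),
};

static int critical_health_count;	/* stand-in for hba->critical_health_count */

/* Returns nonzero iff the status word carries the health-critical event,
 * bumping the counter that the sysfs attribute would then report. */
static int ee_health_critical(unsigned int ee_status)
{
	if (!(ee_status & MASK_EE_HEALTH_CRITICAL))
		return 0;
	critical_health_count++;
	return 1;
}
```

As the ABI text in this series notes, the counter only says that something happened; the specific cause is read from attributes such as bPreEOLInfo or bDeviceLifeTimeEstA/B.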
+16 -6
include/ufs/ufshcd.h
···
336 336 * @get_outstanding_cqs: called to get outstanding completion queues
337 337 * @config_esi: called to config Event Specific Interrupt
338 338 * @config_scsi_dev: called to configure SCSI device parameters
339 + * @freq_to_gear_speed: called to map clock frequency to the max supported gear speed
339 340 */
340 341 struct ufs_hba_variant_ops {
341 342 const char *name;
···
345 344 void (*exit)(struct ufs_hba *);
346 345 u32 (*get_ufs_hci_version)(struct ufs_hba *);
347 346 int (*set_dma_mask)(struct ufs_hba *);
348 - int (*clk_scale_notify)(struct ufs_hba *, bool,
349 - enum ufs_notify_change_status);
347 + int (*clk_scale_notify)(struct ufs_hba *, bool, unsigned long,
348 + enum ufs_notify_change_status);
350 349 int (*setup_clocks)(struct ufs_hba *, bool,
351 350 enum ufs_notify_change_status);
352 351 int (*hce_enable_notify)(struct ufs_hba *,
···
354 353 int (*link_startup_notify)(struct ufs_hba *,
355 354 enum ufs_notify_change_status);
356 355 int (*pwr_change_notify)(struct ufs_hba *,
357 - enum ufs_notify_change_status status,
358 - struct ufs_pa_layer_attr *desired_pwr_mode,
359 - struct ufs_pa_layer_attr *final_params);
356 + enum ufs_notify_change_status status,
357 + const struct ufs_pa_layer_attr *desired_pwr_mode,
358 + struct ufs_pa_layer_attr *final_params);
360 359 void (*setup_xfer_req)(struct ufs_hba *hba, int tag,
361 360 bool is_scsi_cmd);
362 361 void (*setup_task_mgmt)(struct ufs_hba *, int, u8);
···
385 384 unsigned long *ocqs);
386 385 int (*config_esi)(struct ufs_hba *hba);
387 386 void (*config_scsi_dev)(struct scsi_device *sdev);
387 + u32 (*freq_to_gear_speed)(struct ufs_hba *hba, unsigned long freq);
388 388 };
389 389
390 390 /* clock gating state */
···
450 448 * one keeps track of previous power mode.
451 449 * @target_freq: frequency requested by devfreq framework
452 450 * @min_gear: lowest HS gear to scale down to
451 + * @wb_gear: enable Write Booster when HS gear scales above or equal to it, else
452 + * disable Write Booster
453 453 * @is_enabled: tracks if scaling is currently enabled or not, controlled by
454 454 * clkscale_enable sysfs node
455 455 * @is_allowed: tracks if scaling is currently allowed or not, used to block
···
475 471 struct ufs_pa_layer_attr saved_pwr_info;
476 472 unsigned long target_freq;
477 473 u32 min_gear;
474 + u32 wb_gear;
478 475 bool is_enabled;
479 476 bool is_allowed;
480 477 bool is_initialized;
···
967 962 * @ufs_rtc_update_work: A work for UFS RTC periodic update
968 963 * @pm_qos_req: PM QoS request handle
969 964 * @pm_qos_enabled: flag to check if pm qos is enabled
965 + * @critical_health_count: count of critical health exceptions
970 966 */
971 967 struct ufs_hba {
972 968 void __iomem *mmio_base;
···
1136 1130 struct delayed_work ufs_rtc_update_work;
1137 1131 struct pm_qos_request pm_qos_req;
1138 1132 bool pm_qos_enabled;
1133 +
1134 + int critical_health_count;
1139 1135 };
1140 1136
1141 1137 /**
···
1376 1368 extern int ufshcd_system_restore(struct device *dev);
1377 1369 #endif
1378 1370
1371 + extern int ufshcd_dme_reset(struct ufs_hba *hba);
1372 + extern int ufshcd_dme_enable(struct ufs_hba *hba);
1379 1373 extern int ufshcd_dme_configure_adapt(struct ufs_hba *hba,
1380 1374 int agreed_gear,
1381 1375 int adapt_val);
···
1435 1425 return ufshcd_dme_get_attr(hba, attr_sel, mib_val, DME_PEER);
1436 1426 }
1437 1427
1438 - static inline bool ufshcd_is_hs_mode(struct ufs_pa_layer_attr *pwr_info)
1428 + static inline bool ufshcd_is_hs_mode(const struct ufs_pa_layer_attr *pwr_info)
1439 1429 {
1440 1430 return (pwr_info->pwr_rx == FAST_MODE ||
1441 1431 pwr_info->pwr_rx == FASTAUTO_MODE) &&
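The new ->freq_to_gear_speed() hook lets a variant driver report the highest gear a given controller clock frequency can sustain, which the clock-scaling code can then combine with thresholds such as the new wb_gear field. A hypothetical implementation might look like the sketch below; the frequency breakpoints and the demo struct are invented for illustration, since real platform drivers derive the mapping from their own clock tables:

```c
/* Hypothetical ->freq_to_gear_speed() implementation mapping a controller
 * clock frequency to the max supported HS gear. Breakpoints are made up;
 * demo_hba stands in for struct ufs_hba. */
struct demo_hba {
	unsigned int wb_gear;	/* mirror of clk_scaling.wb_gear */
};

static unsigned int demo_freq_to_gear_speed(struct demo_hba *hba,
					    unsigned long freq)
{
	if (freq >= 300000000UL)	/* >= 300 MHz */
		return 5;		/* HS-G5 */
	if (freq >= 150000000UL)	/* >= 150 MHz */
		return 4;		/* HS-G4 */
	return 3;			/* conservative floor: HS-G3 */
}

/* Per the wb_gear kerneldoc above: Write Booster stays enabled only
 * while the scaled gear is at or above the threshold. */
static int demo_wb_enabled(struct demo_hba *hba, unsigned long freq)
{
	return demo_freq_to_gear_speed(hba, freq) >= hba->wb_gear;
}
```

This also motivates the extra `unsigned long` parameter added to ->clk_scale_notify(): the variant now learns the target frequency, not just the scale-up/down direction.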
+6 -7
tools/testing/selftests/filesystems/statmount/statmount_test.c
···
26 26 "hfsplus", "hostfs", "hpfs", "hugetlbfs", "ibmasmfs", "iomem",
27 27 "ipathfs", "iso9660", "jffs2", "jfs", "minix", "mqueue", "msdos",
28 28 "nfs", "nfs4", "nfsd", "nilfs2", "nsfs", "ntfs", "ntfs3", "ocfs2",
29 - "ocfs2_dlmfs", "ocxlflash", "omfs", "openpromfs", "overlay", "pipefs",
30 - "proc", "pstore", "pvfs2", "qnx4", "qnx6", "ramfs",
31 - "resctrl", "romfs", "rootfs", "rpc_pipefs", "s390_hypfs", "secretmem",
32 - "securityfs", "selinuxfs", "smackfs", "smb3", "sockfs", "spufs",
33 - "squashfs", "sysfs", "sysv", "tmpfs", "tracefs", "ubifs", "udf",
34 - "ufs", "v7", "vboxsf", "vfat", "virtiofs", "vxfs", "xenfs", "xfs",
35 - "zonefs", NULL };
29 + "ocfs2_dlmfs", "omfs", "openpromfs", "overlay", "pipefs", "proc",
30 + "pstore", "pvfs2", "qnx4", "qnx6", "ramfs", "resctrl", "romfs",
31 + "rootfs", "rpc_pipefs", "s390_hypfs", "secretmem", "securityfs",
32 + "selinuxfs", "smackfs", "smb3", "sockfs", "spufs", "squashfs", "sysfs",
33 + "sysv", "tmpfs", "tracefs", "ubifs", "udf", "ufs", "v7", "vboxsf",
34 + "vfat", "virtiofs", "vxfs", "xenfs", "xfs", "zonefs", NULL };
36 35
37 36 static struct statmount *statmount_alloc(uint64_t mnt_id, uint64_t mask, unsigned int flags)
38 37 {
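With cxlflash gone, "ocxlflash" drops out of the selftest's NULL-terminated table of known filesystem names and the remaining entries are rewrapped. Membership in such a table is just a linear scan to the NULL sentinel; a generic sketch (not the selftest's own lookup code):

```c
/* Generic lookup in a NULL-terminated string table like known_fs[] */
#include <stddef.h>
#include <string.h>

static int fs_is_known(const char *const *table, const char *name)
{
	for (size_t i = 0; table[i] != NULL; i++)
		if (strcmp(table[i], name) == 0)
			return 1;
	return 0;
}
```

Keeping the list sorted, as the selftest does, makes removals like this one easy to review even though the lookup itself does not rely on ordering.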