
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull first round of SCSI updates from James Bottomley:
"This includes one new driver: cxlflash, plus the usual grab bag of
updates for the major drivers: qla2xxx, ipr, storvsc, pm80xx, hptiop,
plus a few assorted fixes.

There's another tranche coming, but I want to incubate it another few
days in the checkers, plus it includes an mpt2sas separated lifetime
fix, which Avago won't get done testing until Friday"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (85 commits)
aic94xx: set an error code on failure
storvsc: Set the error code correctly in failure conditions
storvsc: Allow write_same when host is windows 10
storvsc: use storage protocol version to determine storage capabilities
storvsc: use correct defaults for values determined by protocol negotiation
storvsc: Untangle the storage protocol negotiation from the vmbus protocol negotiation.
storvsc: Use a single value to track protocol versions
storvsc: Rather than look for sets of specific protocol versions, make decisions based on ranges.
cxlflash: Remove unused variable from queuecommand
cxlflash: shift wrapping bug in afu_link_reset()
cxlflash: off by one bug in cxlflash_show_port_status()
cxlflash: Virtual LUN support
cxlflash: Superpipe support
cxlflash: Base error recovery support
qla2xxx: Update driver version to 8.07.00.26-k
qla2xxx: Add pci device id 0x2261.
qla2xxx: Fix missing device login retries.
qla2xxx: do not clear slot in outstanding cmd array
qla2xxx: Remove decrement of sp reference count in abort handler.
qla2xxx: Add support to show MPI and PEP FW version for ISP27xx.
...

+9179 -1329
+1
Documentation/ioctl/ioctl-number.txt
 0xB3	00	linux/mmc/ioctl.h
 0xC0	00-0F	linux/usb/iowarrior.h
 0xCA	00-0F	uapi/misc/cxl.h
+0xCA	80-8F	uapi/scsi/cxlflash_ioctl.h
 0xCB	00-1F	CBM serial IEC bus	in development:
 				<mailto:michael.klein@puffin.lb.shuttle.de>
 0xCD	01	linux/reiserfs_fs.h
+318
Documentation/powerpc/cxlflash.txt
Introduction
============

The IBM Power architecture provides support for CAPI (Coherent
Accelerator Power Interface), which is available to certain PCIe slots
on Power 8 systems. CAPI can be thought of as a special tunneling
protocol through PCIe that allows PCIe adapters to look like special
purpose co-processors which can read or write an application's
memory and generate page faults. As a result, the host interface to
an adapter running in CAPI mode does not require the data buffers to
be mapped to the device's memory (IOMMU bypass) nor does it require
memory to be pinned.

On Linux, Coherent Accelerator (CXL) kernel services present CAPI
devices as a PCI device by implementing a virtual PCI host bridge.
This abstraction simplifies the infrastructure and programming
model, allowing for drivers to look similar to other native PCI
device drivers.

CXL provides a mechanism by which user space applications can
directly talk to a device (network or storage) bypassing the typical
kernel/device driver stack. The CXL Flash Adapter Driver enables a
user space application direct access to Flash storage.

The CXL Flash Adapter Driver is a kernel module that sits in the
SCSI stack as a low level device driver (below the SCSI disk and
protocol drivers) for the IBM CXL Flash Adapter. This driver is
responsible for the initialization of the adapter, setting up the
special path for user space access, and performing error recovery. It
communicates directly with the Flash Accelerator Functional Unit (AFU)
as described in Documentation/powerpc/cxl.txt.

The cxlflash driver supports two, mutually exclusive, modes of
operation at the device (LUN) level:

    - Any flash device (LUN) can be configured to be accessed as a
      regular disk device (i.e.: /dev/sdc). This is the default mode.

    - Any flash device (LUN) can be configured to be accessed from
      user space with a special block library. This mode further
      specifies the means of accessing the device and provides for
      either raw access to the entire LUN (referred to as direct
      or physical LUN access) or access to a kernel/AFU-mediated
      partition of the LUN (referred to as virtual LUN access). The
      segmentation of a disk device into virtual LUNs is assisted
      by special translation services provided by the Flash AFU.

Overview
========

The Coherent Accelerator Interface Architecture (CAIA) introduces the
concept of a master context. A master typically has special privileges
granted to it by the kernel or hypervisor, allowing it to perform AFU
wide management and control. The master may or may not be involved
directly in each user I/O, but at a minimum it is involved in the
initial setup before the user application is allowed to send requests
directly to the AFU.

The CXL Flash Adapter Driver establishes a master context with the
AFU. It uses memory mapped I/O (MMIO) for this control and setup. The
Adapter Problem Space Memory Map looks like this:

     +-------------------------------+
     | 512 * 64 KB User MMIO         |
     | (per context)                 |
     | User Accessible               |
     +-------------------------------+
     | 512 * 128 B per context       |
     | Provisioning and Control      |
     | Trusted Process accessible    |
     +-------------------------------+
     | 64 KB Global                  |
     | Trusted Process accessible    |
     +-------------------------------+

This driver configures itself into the SCSI software stack as an
adapter driver. The driver is the only entity that is considered a
Trusted Process to program the Provisioning and Control and Global
areas in the MMIO Space shown above. The master context driver
discovers all LUNs attached to the CXL Flash adapter and instantiates
SCSI block devices (/dev/sdb, /dev/sdc, etc.) for each unique LUN
seen from each path.

Once these SCSI block devices are instantiated, an application
written to a specification provided by the block library may get
access to the Flash from user space (without requiring a system call).

This master context driver also provides a series of ioctls for this
block library to enable this user space access. The driver supports
two modes for accessing the block device.

The first mode is called virtual mode. In this mode a single SCSI
block device (/dev/sdb) may be carved up into any number of distinct
virtual LUNs. The virtual LUNs may be resized as long as the sum of
the sizes of all the virtual LUNs, along with the meta-data associated
with them, does not exceed the physical capacity.

The second mode is called physical mode. In this mode a single
block device (/dev/sdb) may be opened directly by the block library
and the entire space for the LUN is available to the application.

Only the physical mode provides persistence of the data. That is, the
data written to the block device will survive application exit and
restart and also reboot. The virtual LUNs do not persist (i.e. they do
not survive after the application terminates or the system reboots).


Block library API
=================

Applications intending to get access to the CXL Flash from user
space should use the block library, as it abstracts the details of
interfacing directly with the cxlflash driver that are necessary for
performing administrative actions (i.e.: setup, tear down, resize).
The block library can be thought of as a 'user' of services,
implemented as ioctls, that are provided by the cxlflash driver
specifically for devices (LUNs) operating in user space access
mode. While it is not a requirement that applications understand
the interface between the block library and the cxlflash driver,
a high-level overview of each supported service (ioctl) is provided
below.

The block library can be found on GitHub:
http://www.github.com/mikehollinger/ibmcapikv


CXL Flash Driver IOCTLs
=======================

Users, such as the block library, that wish to interface with a flash
device (LUN) via user space access need to use the services provided
by the cxlflash driver. As these services are implemented as ioctls,
a file descriptor handle must first be obtained in order to establish
the communication channel between a user and the kernel. This file
descriptor is obtained by opening the device special file associated
with the SCSI disk device (/dev/sdb) that was created during LUN
discovery. Given the location of the cxlflash driver within the
SCSI protocol stack, this open is actually not seen by the cxlflash
driver. Upon successful open, the user receives a file descriptor
(herein referred to as fd1) that should be used for issuing the
subsequent ioctls listed below.

The structure definitions for these ioctls are available in:
uapi/scsi/cxlflash_ioctl.h

DK_CXLFLASH_ATTACH
------------------

This ioctl obtains, initializes, and starts a context using the CXL
kernel services. These services specify a context id (u16) by which
to uniquely identify the context and its allocated resources. The
services additionally provide a second file descriptor (herein
referred to as fd2) that is used by the block library to initiate
memory mapped I/O (via mmap()) to the CXL flash device and poll for
completion events. This file descriptor is intentionally installed by
this driver and not the CXL kernel services to allow for intermediary
notification and access in the event of a non-user-initiated close(),
such as a killed process. This design point is described in further
detail in the description for the DK_CXLFLASH_DETACH ioctl.

There are a few important aspects regarding the "tokens" (context id
and fd2) that are provided back to the user:

    - These tokens are only valid for the process under which they
      were created. The child of a forked process cannot continue
      to use the context id or file descriptor created by its parent
      (see DK_CXLFLASH_VLUN_CLONE for further details).

    - These tokens are only valid for the lifetime of the context and
      the process under which they were created. Once either is
      destroyed, the tokens are to be considered stale and subsequent
      usage will result in errors.

    - When a context is no longer needed, the user shall detach from
      the context via the DK_CXLFLASH_DETACH ioctl.

    - A close on fd2 will invalidate the tokens. This operation is not
      required by the user.

DK_CXLFLASH_USER_DIRECT
-----------------------
This ioctl is responsible for transitioning the LUN to direct
(physical) mode access and configuring the AFU for direct access from
user space on a per-context basis. Additionally, the block size and
last logical block address (LBA) are returned to the user.

As mentioned previously, when operating in user space access mode,
LUNs may be accessed in whole or in part. Only one mode is allowed
at a time and if one mode is active (outstanding references exist),
requests to use the LUN in a different mode are denied.

The AFU is configured for direct access from user space by adding an
entry to the AFU's resource handle table. The index of the entry is
treated as a resource handle that is returned to the user. The user
is then able to use the handle to reference the LUN during I/O.

DK_CXLFLASH_USER_VIRTUAL
------------------------
This ioctl is responsible for transitioning the LUN to virtual mode
of access and configuring the AFU for virtual access from user space
on a per-context basis. Additionally, the block size and last logical
block address (LBA) are returned to the user.

As mentioned previously, when operating in user space access mode,
LUNs may be accessed in whole or in part. Only one mode is allowed
at a time and if one mode is active (outstanding references exist),
requests to use the LUN in a different mode are denied.

The AFU is configured for virtual access from user space by adding
an entry to the AFU's resource handle table. The index of the entry
is treated as a resource handle that is returned to the user. The
user is then able to use the handle to reference the LUN during I/O.

By default, the virtual LUN is created with a size of 0. The user
would need to use the DK_CXLFLASH_VLUN_RESIZE ioctl to grow the
virtual LUN to a desired size. To avoid having to perform this
resize for the initial creation of the virtual LUN, the user has the
option of specifying a size as part of the DK_CXLFLASH_USER_VIRTUAL
ioctl, such that when success is returned to the user, the
resource handle that is provided is already referencing provisioned
storage. This is reflected by the last LBA being a non-zero value.

DK_CXLFLASH_VLUN_RESIZE
-----------------------
This ioctl is responsible for resizing a previously created virtual
LUN and will fail if invoked upon a LUN that is not in virtual
mode. Upon success, an updated last LBA is returned to the user,
indicating the new size of the virtual LUN associated with the
resource handle.

The partitioning of virtual LUNs is jointly mediated by the cxlflash
driver and the AFU. An allocation table is kept for each LUN that is
operating in the virtual mode and used to program a LUN translation
table that the AFU references when provided with a resource handle.

DK_CXLFLASH_RELEASE
-------------------
This ioctl is responsible for releasing a previously obtained
reference to either a physical or virtual LUN. This can be
thought of as the inverse of the DK_CXLFLASH_USER_DIRECT or
DK_CXLFLASH_USER_VIRTUAL ioctls. Upon success, the resource handle
is no longer valid and the entry in the resource handle table is
made available to be used again.

As part of the release process for virtual LUNs, the virtual LUN
is first resized to 0 to clear out and free the translation tables
associated with the virtual LUN reference.

DK_CXLFLASH_DETACH
------------------
This ioctl is responsible for unregistering a context with the
cxlflash driver and releasing outstanding resources that were
not explicitly released via the DK_CXLFLASH_RELEASE ioctl. Upon
success, all "tokens" which had been provided to the user from the
DK_CXLFLASH_ATTACH onward are no longer valid.

DK_CXLFLASH_VLUN_CLONE
----------------------
This ioctl is responsible for cloning a previously created
context to a more recently created context. It exists solely to
support maintaining user space access to storage after a process
forks. Upon success, the child process (which invoked the ioctl)
will have access to the same LUNs via the same resource handle(s)
and fd2 as the parent, but under a different context.

Context sharing across processes is not supported with CXL and
therefore each fork must be met with establishing a new context
for the child process. This ioctl simplifies the state management
and playback required by a user in such a scenario. When a process
forks, the child process can clone the parent's context by first
creating a context (via DK_CXLFLASH_ATTACH) and then using this
ioctl to perform the clone from the parent to the child.

The clone itself is fairly simple. The resource handle and LUN
translation tables are copied from the parent context to the child's
and then synced with the AFU.

DK_CXLFLASH_VERIFY
------------------
This ioctl is used to detect various changes such as the capacity of
the disk changing, the number of LUNs visible changing, etc. In cases
where the changes affect the application (such as a LUN resize), the
cxlflash driver will report the changed state to the application.

The user calls in when they want to validate that a LUN hasn't been
changed in response to a check condition. As the user is operating out
of band from the kernel, they will see these types of events without
the kernel's knowledge. When encountered, the user's architected
behavior is to call in to this ioctl, indicating what they want to
verify and passing along any appropriate information. For now, only
verifying a LUN change (i.e. size different) with sense data is
supported.

DK_CXLFLASH_RECOVER_AFU
-----------------------
This ioctl is used to drive recovery (if such an action is warranted)
of a specified user context. Any state associated with the user context
is re-established upon successful recovery.

User contexts are put into an error condition when the device needs to
be reset or is terminating. Users are notified of this error condition
by seeing all 0xF's on an MMIO read. Upon encountering this, the
architected behavior for a user is to call into this ioctl to recover
their context. A user may also call into this ioctl at any time to
check if the device is operating normally. If a failure is returned
from this ioctl, the user is expected to gracefully clean up their
context via release/detach ioctls. Until they do, the context they
hold is not relinquished. The user may also optionally exit the
process, at which time the context/resources they held will be freed
as part of the release fop.

DK_CXLFLASH_MANAGE_LUN
----------------------
This ioctl is used to switch a LUN from a mode where it is available
for file-system access (legacy) to a mode where it is set aside for
exclusive user space access (superpipe). In the case where a LUN is
visible across multiple ports and adapters, this ioctl is used to
uniquely identify each LUN by its World Wide Node Name (WWNN).
+1 -1
MAINTAINERS
 F:	drivers/scsi/pmcraid.*

 PMC SIERRA PM8001 DRIVER
-M:	xjtuwjp@gmail.com
+M:	Jack Wang <jinpu.wang@profitbricks.com>
 M:	lindar_liu@usish.com
 L:	pmchba@pmcs.com
 L:	linux-scsi@vger.kernel.org
+9
drivers/message/fusion/mptctl.c
 	}
 	spin_unlock_irqrestore(&ioc->taskmgmt_lock, flags);

+	/* Basic sanity checks to prevent underflows or integer overflows */
+	if (karg.maxReplyBytes < 0 ||
+	    karg.dataInSize < 0 ||
+	    karg.dataOutSize < 0 ||
+	    karg.dataSgeOffset < 0 ||
+	    karg.maxSenseBytes < 0 ||
+	    karg.dataSgeOffset > ioc->req_sz / 4)
+		return -EINVAL;
+
 	/* Verify that the final request frame will not be too large.
 	 */
 	sz = karg.dataSgeOffset * 4;
+1
drivers/scsi/Kconfig
 source "drivers/scsi/bnx2i/Kconfig"
 source "drivers/scsi/bnx2fc/Kconfig"
 source "drivers/scsi/be2iscsi/Kconfig"
+source "drivers/scsi/cxlflash/Kconfig"

 config SGIWD93_SCSI
 	tristate "SGI WD93C93 SCSI Driver"
+1
drivers/scsi/Makefile
 obj-$(CONFIG_SCSI_EATA)		+= eata.o
 obj-$(CONFIG_SCSI_DC395x)	+= dc395x.o
 obj-$(CONFIG_SCSI_AM53C974)	+= esp_scsi.o am53c974.o
+obj-$(CONFIG_CXLFLASH)		+= cxlflash/
 obj-$(CONFIG_MEGARAID_LEGACY)	+= megaraid.o
 obj-$(CONFIG_MEGARAID_NEWGEN)	+= megaraid/
 obj-$(CONFIG_MEGARAID_SAS)	+= megaraid/
+1
drivers/scsi/aic94xx/aic94xx_init.c
 		if (!io_handle->addr) {
 			asd_printk("couldn't map MBAR%d of %s\n", i==0?0:1,
 				   pci_name(asd_ha->pcidev));
+			err = -ENOMEM;
 			goto Err_unreq;
 		}
 	}
+2
drivers/scsi/bfa/bfad_im.c
 	if (bfad_im_scsi_vport_transport_template)
 		fc_release_transport(bfad_im_scsi_vport_transport_template);
+
+	idr_destroy(&bfad_im_port_index);
 }

 void
+11
drivers/scsi/cxlflash/Kconfig
#
# IBM CXL-attached Flash Accelerator SCSI Driver
#

config CXLFLASH
	tristate "Support for IBM CAPI Flash"
	depends on PCI && SCSI && CXL && EEH
	default m
	help
	  Allows CAPI Accelerated IO to Flash
	  If unsure, say N.
+2
drivers/scsi/cxlflash/Makefile
obj-$(CONFIG_CXLFLASH) += cxlflash.o
cxlflash-y += main.o superpipe.o lunmgt.o vlun.o
+208
drivers/scsi/cxlflash/common.h
/*
 * CXL Flash Device Driver
 *
 * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation
 *             Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation
 *
 * Copyright (C) 2015 IBM Corporation
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 */

#ifndef _CXLFLASH_COMMON_H
#define _CXLFLASH_COMMON_H

#include <linux/list.h>
#include <linux/types.h>
#include <scsi/scsi.h>
#include <scsi/scsi_device.h>


#define MAX_CONTEXT  CXLFLASH_MAX_CONTEXT	/* num contexts per afu */

#define CXLFLASH_BLOCK_SIZE	4096		/* 4K blocks */
#define CXLFLASH_MAX_XFER_SIZE	16777216	/* 16MB transfer */
#define CXLFLASH_MAX_SECTORS	(CXLFLASH_MAX_XFER_SIZE/512)	/* SCSI wants
								   max_sectors
								   in units of
								   512 byte
								   sectors
								*/

#define NUM_RRQ_ENTRY 16	/* for master issued cmds */
#define MAX_RHT_PER_CONTEXT (PAGE_SIZE / sizeof(struct sisl_rht_entry))

/* AFU command retry limit */
#define MC_RETRY_CNT	5	/* sufficient for SCSI check and
				   certain AFU errors */

/* Command management definitions */
#define CXLFLASH_NUM_CMDS	(2 * CXLFLASH_MAX_CMDS)	/* Must be a pow2 for
							   alignment and more
							   efficient array
							   index derivation
							*/

#define CXLFLASH_MAX_CMDS		16
#define CXLFLASH_MAX_CMDS_PER_LUN	CXLFLASH_MAX_CMDS


static inline void check_sizes(void)
{
	BUILD_BUG_ON_NOT_POWER_OF_2(CXLFLASH_NUM_CMDS);
}

/* AFU defines a fixed size of 4K for command buffers (borrow 4K page define) */
#define CMD_BUFSIZE	SIZE_4K

/* flags in IOA status area for host use */
#define B_DONE		0x01
#define B_ERROR		0x02	/* set with B_DONE */
#define B_TIMEOUT	0x04	/* set with B_DONE & B_ERROR */

enum cxlflash_lr_state {
	LINK_RESET_INVALID,
	LINK_RESET_REQUIRED,
	LINK_RESET_COMPLETE
};

enum cxlflash_init_state {
	INIT_STATE_NONE,
	INIT_STATE_PCI,
	INIT_STATE_AFU,
	INIT_STATE_SCSI
};

enum cxlflash_state {
	STATE_NORMAL,	/* Normal running state, everything good */
	STATE_LIMBO,	/* Limbo running state, trying to reset/recover */
	STATE_FAILTERM	/* Failed/terminating state, error out users/threads */
};

/*
 * Each context has its own set of resource handles that is visible
 * only from that context.
 */

struct cxlflash_cfg {
	struct afu *afu;
	struct cxl_context *mcctx;

	struct pci_dev *dev;
	struct pci_device_id *dev_id;
	struct Scsi_Host *host;

	ulong cxlflash_regs_pci;

	struct work_struct work_q;
	enum cxlflash_init_state init_state;
	enum cxlflash_lr_state lr_state;
	int lr_port;

	struct cxl_afu *cxl_afu;

	struct pci_pool *cxlflash_cmd_pool;
	struct pci_dev *parent_dev;

	atomic_t recovery_threads;
	struct mutex ctx_recovery_mutex;
	struct mutex ctx_tbl_list_mutex;
	struct ctx_info *ctx_tbl[MAX_CONTEXT];
	struct list_head ctx_err_recovery; /* contexts w/ recovery pending */
	struct file_operations cxl_fops;

	atomic_t num_user_contexts;

	/* Parameters that are LUN table related */
	int last_lun_index[CXLFLASH_NUM_FC_PORTS];
	int promote_lun_index;
	struct list_head lluns; /* list of llun_info structs */

	wait_queue_head_t tmf_waitq;
	bool tmf_active;
	wait_queue_head_t limbo_waitq;
	enum cxlflash_state state;
};

struct afu_cmd {
	struct sisl_ioarcb rcb;	/* IOARCB (cache line aligned) */
	struct sisl_ioasa sa;	/* IOASA must follow IOARCB */
	spinlock_t slock;
	struct completion cevent;
	char *buf;		/* per command buffer */
	struct afu *parent;
	int slot;
	atomic_t free;

	u8 cmd_tmf:1;

	/* As per the SISLITE spec the IOARCB EA has to be 16-byte aligned.
	 * However for performance reasons the IOARCB/IOASA should be
	 * cache line aligned.
	 */
} __aligned(cache_line_size());

struct afu {
	/* Stuff requiring alignment go first. */

	u64 rrq_entry[NUM_RRQ_ENTRY];	/* 128B RRQ */
	/*
	 * Command & data for AFU commands.
	 */
	struct afu_cmd cmd[CXLFLASH_NUM_CMDS];

	/* Beware of alignment till here. Preferably introduce new
	 * fields after this point
	 */

	/* AFU HW */
	struct cxl_ioctl_start_work work;
	struct cxlflash_afu_map *afu_map;	/* entire MMIO map */
	struct sisl_host_map *host_map;		/* MC host map */
	struct sisl_ctrl_map *ctrl_map;		/* MC control map */

	ctx_hndl_t ctx_hndl;	/* master's context handle */
	u64 *hrrq_start;
	u64 *hrrq_end;
	u64 *hrrq_curr;
	bool toggle;
	bool read_room;
	atomic64_t room;
	u64 hb;
	u32 cmd_couts;		/* Number of command checkouts */
	u32 internal_lun;	/* User-desired LUN mode for this AFU */

	char version[8];
	u64 interface_version;

	struct cxlflash_cfg *parent; /* Pointer back to parent cxlflash_cfg */

};

static inline u64 lun_to_lunid(u64 lun)
{
	u64 lun_id;

	int_to_scsilun(lun, (struct scsi_lun *)&lun_id);
	return swab64(lun_id);
}

int cxlflash_send_cmd(struct afu *, struct afu_cmd *);
void cxlflash_wait_resp(struct afu *, struct afu_cmd *);
int cxlflash_afu_reset(struct cxlflash_cfg *);
struct afu_cmd *cxlflash_cmd_checkout(struct afu *);
void cxlflash_cmd_checkin(struct afu_cmd *);
int cxlflash_afu_sync(struct afu *, ctx_hndl_t, res_hndl_t, u8);
void cxlflash_list_init(void);
void cxlflash_term_global_luns(void);
void cxlflash_free_errpage(void);
int cxlflash_ioctl(struct scsi_device *, int, void __user *);
void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *);
int cxlflash_mark_contexts_error(struct cxlflash_cfg *);
void cxlflash_term_local_luns(struct cxlflash_cfg *);
void cxlflash_restore_luntable(struct cxlflash_cfg *);

#endif /* ifndef _CXLFLASH_COMMON_H */
+266
drivers/scsi/cxlflash/lunmgt.c
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #include <misc/cxl.h> 16 + #include <asm/unaligned.h> 17 + 18 + #include <scsi/scsi_host.h> 19 + #include <uapi/scsi/cxlflash_ioctl.h> 20 + 21 + #include "sislite.h" 22 + #include "common.h" 23 + #include "vlun.h" 24 + #include "superpipe.h" 25 + 26 + /** 27 + * create_local() - allocate and initialize a local LUN information structure 28 + * @sdev: SCSI device associated with LUN. 29 + * @wwid: World Wide Node Name for LUN. 30 + * 31 + * Return: Allocated local llun_info structure on success, NULL on failure 32 + */ 33 + static struct llun_info *create_local(struct scsi_device *sdev, u8 *wwid) 34 + { 35 + struct llun_info *lli = NULL; 36 + 37 + lli = kzalloc(sizeof(*lli), GFP_KERNEL); 38 + if (unlikely(!lli)) { 39 + pr_err("%s: could not allocate lli\n", __func__); 40 + goto out; 41 + } 42 + 43 + lli->sdev = sdev; 44 + lli->newly_created = true; 45 + lli->host_no = sdev->host->host_no; 46 + lli->in_table = false; 47 + 48 + memcpy(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN); 49 + out: 50 + return lli; 51 + } 52 + 53 + /** 54 + * create_global() - allocate and initialize a global LUN information structure 55 + * @sdev: SCSI device associated with LUN. 56 + * @wwid: World Wide Node Name for LUN. 
57 + * 58 + * Return: Allocated global glun_info structure on success, NULL on failure 59 + */ 60 + static struct glun_info *create_global(struct scsi_device *sdev, u8 *wwid) 61 + { 62 + struct glun_info *gli = NULL; 63 + 64 + gli = kzalloc(sizeof(*gli), GFP_KERNEL); 65 + if (unlikely(!gli)) { 66 + pr_err("%s: could not allocate gli\n", __func__); 67 + goto out; 68 + } 69 + 70 + mutex_init(&gli->mutex); 71 + memcpy(gli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN); 72 + out: 73 + return gli; 74 + } 75 + 76 + /** 77 + * refresh_local() - find and update local LUN information structure by WWID 78 + * @cfg: Internal structure associated with the host. 79 + * @wwid: WWID associated with LUN. 80 + * 81 + * When the LUN is found, mark it by updating it's newly_created field. 82 + * 83 + * Return: Found local lun_info structure on success, NULL on failure 84 + * If a LUN with the WWID is found in the list, refresh it's state. 85 + */ 86 + static struct llun_info *refresh_local(struct cxlflash_cfg *cfg, u8 *wwid) 87 + { 88 + struct llun_info *lli, *temp; 89 + 90 + list_for_each_entry_safe(lli, temp, &cfg->lluns, list) 91 + if (!memcmp(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN)) { 92 + lli->newly_created = false; 93 + return lli; 94 + } 95 + 96 + return NULL; 97 + } 98 + 99 + /** 100 + * lookup_global() - find a global LUN information structure by WWID 101 + * @wwid: WWID associated with LUN. 102 + * 103 + * Return: Found global lun_info structure on success, NULL on failure 104 + */ 105 + static struct glun_info *lookup_global(u8 *wwid) 106 + { 107 + struct glun_info *gli, *temp; 108 + 109 + list_for_each_entry_safe(gli, temp, &global.gluns, list) 110 + if (!memcmp(gli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN)) 111 + return gli; 112 + 113 + return NULL; 114 + } 115 + 116 + /** 117 + * find_and_create_lun() - find or create a local LUN information structure 118 + * @sdev: SCSI device associated with LUN. 119 + * @wwid: WWID associated with LUN. 
120 + * 121 + * The LUN is kept both in a local list (per adapter) and in a global list 122 + * (across all adapters). Certain attributes of the LUN are local to the 123 + * adapter (such as index, port selection mask etc.). 124 + * The block allocation map is shared across all adapters (i.e. associated 125 + * wih the global list). Since different attributes are associated with 126 + * the per adapter and global entries, allocate two separate structures for each 127 + * LUN (one local, one global). 128 + * 129 + * Keep a pointer back from the local to the global entry. 130 + * 131 + * Return: Found/Allocated local lun_info structure on success, NULL on failure 132 + */ 133 + static struct llun_info *find_and_create_lun(struct scsi_device *sdev, u8 *wwid) 134 + { 135 + struct llun_info *lli = NULL; 136 + struct glun_info *gli = NULL; 137 + struct Scsi_Host *shost = sdev->host; 138 + struct cxlflash_cfg *cfg = shost_priv(shost); 139 + 140 + mutex_lock(&global.mutex); 141 + if (unlikely(!wwid)) 142 + goto out; 143 + 144 + lli = refresh_local(cfg, wwid); 145 + if (lli) 146 + goto out; 147 + 148 + lli = create_local(sdev, wwid); 149 + if (unlikely(!lli)) 150 + goto out; 151 + 152 + gli = lookup_global(wwid); 153 + if (gli) { 154 + lli->parent = gli; 155 + list_add(&lli->list, &cfg->lluns); 156 + goto out; 157 + } 158 + 159 + gli = create_global(sdev, wwid); 160 + if (unlikely(!gli)) { 161 + kfree(lli); 162 + lli = NULL; 163 + goto out; 164 + } 165 + 166 + lli->parent = gli; 167 + list_add(&lli->list, &cfg->lluns); 168 + 169 + list_add(&gli->list, &global.gluns); 170 + 171 + out: 172 + mutex_unlock(&global.mutex); 173 + pr_debug("%s: returning %p\n", __func__, lli); 174 + return lli; 175 + } 176 + 177 + /** 178 + * cxlflash_term_local_luns() - Delete all entries from local LUN list, free. 179 + * @cfg: Internal structure associated with the host. 
180 + */ 181 + void cxlflash_term_local_luns(struct cxlflash_cfg *cfg) 182 + { 183 + struct llun_info *lli, *temp; 184 + 185 + mutex_lock(&global.mutex); 186 + list_for_each_entry_safe(lli, temp, &cfg->lluns, list) { 187 + list_del(&lli->list); 188 + kfree(lli); 189 + } 190 + mutex_unlock(&global.mutex); 191 + } 192 + 193 + /** 194 + * cxlflash_list_init() - initializes the global LUN list 195 + */ 196 + void cxlflash_list_init(void) 197 + { 198 + INIT_LIST_HEAD(&global.gluns); 199 + mutex_init(&global.mutex); 200 + global.err_page = NULL; 201 + } 202 + 203 + /** 204 + * cxlflash_term_global_luns() - frees resources associated with global LUN list 205 + */ 206 + void cxlflash_term_global_luns(void) 207 + { 208 + struct glun_info *gli, *temp; 209 + 210 + mutex_lock(&global.mutex); 211 + list_for_each_entry_safe(gli, temp, &global.gluns, list) { 212 + list_del(&gli->list); 213 + cxlflash_ba_terminate(&gli->blka.ba_lun); 214 + kfree(gli); 215 + } 216 + mutex_unlock(&global.mutex); 217 + } 218 + 219 + /** 220 + * cxlflash_manage_lun() - handles LUN management activities 221 + * @sdev: SCSI device associated with LUN. 222 + * @manage: Manage ioctl data structure. 223 + * 224 + * This routine is used to notify the driver about a LUN's WWID and associate 225 + * SCSI devices (sdev) with a global LUN instance. Additionally it serves to 226 + * change a LUN's operating mode: legacy or superpipe. 
227 + * 228 + * Return: 0 on success, -errno on failure 229 + */ 230 + int cxlflash_manage_lun(struct scsi_device *sdev, 231 + struct dk_cxlflash_manage_lun *manage) 232 + { 233 + int rc = 0; 234 + struct llun_info *lli = NULL; 235 + u64 flags = manage->hdr.flags; 236 + u32 chan = sdev->channel; 237 + 238 + lli = find_and_create_lun(sdev, manage->wwid); 239 + pr_debug("%s: ENTER: WWID = %016llX%016llX, flags = %016llX li = %p\n", 240 + __func__, get_unaligned_le64(&manage->wwid[0]), 241 + get_unaligned_le64(&manage->wwid[8]), 242 + manage->hdr.flags, lli); 243 + if (unlikely(!lli)) { 244 + rc = -ENOMEM; 245 + goto out; 246 + } 247 + 248 + if (flags & DK_CXLFLASH_MANAGE_LUN_ENABLE_SUPERPIPE) { 249 + if (lli->newly_created) 250 + lli->port_sel = CHAN2PORT(chan); 251 + else 252 + lli->port_sel = BOTH_PORTS; 253 + /* Store off lun in unpacked, AFU-friendly format */ 254 + lli->lun_id[chan] = lun_to_lunid(sdev->lun); 255 + sdev->hostdata = lli; 256 + } else if (flags & DK_CXLFLASH_MANAGE_LUN_DISABLE_SUPERPIPE) { 257 + if (lli->parent->mode != MODE_NONE) 258 + rc = -EBUSY; 259 + else 260 + sdev->hostdata = NULL; 261 + } 262 + 263 + out: 264 + pr_debug("%s: returning rc=%d\n", __func__, rc); 265 + return rc; 266 + }
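[Editor's note] The two-level find-or-create flow above (refresh_local()/lookup_global() under a single global mutex, with a back-pointer from the local to the global entry) can be modeled outside the kernel. The sketch below is a simplified user-space analogue, not the driver's code: kernel list_head, mutex, and kzalloc() are replaced with plain singly linked lists, a pthread mutex, and calloc(), and names such as `struct cfg` and `WWID_LEN` are illustrative stand-ins.

```c
/* User-space model of cxlflash's per-adapter (local) and shared (global)
 * LUN lists. All list manipulation happens under one global mutex. */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define WWID_LEN 16	/* stand-in for DK_CXLFLASH_MANAGE_LUN_WWID_LEN */

struct glun_info { unsigned char wwid[WWID_LEN]; struct glun_info *next; };
struct llun_info {
	unsigned char wwid[WWID_LEN];
	bool newly_created;
	struct glun_info *parent;	/* back-pointer to the shared entry */
	struct llun_info *next;
};
struct cfg { struct llun_info *lluns; };	/* per-adapter list head */

static struct { pthread_mutex_t mutex; struct glun_info *gluns; } global =
	{ PTHREAD_MUTEX_INITIALIZER, NULL };

/* Find an existing local entry; clear its newly_created flag when found. */
static struct llun_info *refresh_local(struct cfg *cfg, const unsigned char *wwid)
{
	for (struct llun_info *lli = cfg->lluns; lli; lli = lli->next)
		if (!memcmp(lli->wwid, wwid, WWID_LEN)) {
			lli->newly_created = false;
			return lli;
		}
	return NULL;
}

static struct glun_info *lookup_global(const unsigned char *wwid)
{
	for (struct glun_info *gli = global.gluns; gli; gli = gli->next)
		if (!memcmp(gli->wwid, wwid, WWID_LEN))
			return gli;
	return NULL;
}

struct llun_info *find_and_create_lun(struct cfg *cfg, const unsigned char *wwid)
{
	struct llun_info *lli;
	struct glun_info *gli;

	pthread_mutex_lock(&global.mutex);
	lli = refresh_local(cfg, wwid);
	if (lli)
		goto out;

	lli = calloc(1, sizeof(*lli));
	if (!lli)
		goto out;
	memcpy(lli->wwid, wwid, WWID_LEN);
	lli->newly_created = true;

	gli = lookup_global(wwid);
	if (!gli) {			/* first adapter to see this WWID */
		gli = calloc(1, sizeof(*gli));
		if (!gli) {
			free(lli);
			lli = NULL;
			goto out;
		}
		memcpy(gli->wwid, wwid, WWID_LEN);
		gli->next = global.gluns;
		global.gluns = gli;
	}
	lli->parent = gli;
	lli->next = cfg->lluns;
	cfg->lluns = lli;
out:
	pthread_mutex_unlock(&global.mutex);
	return lli;
}
```

Calling find_and_create_lun() for the same WWID from two adapters yields two distinct local entries whose `parent` pointers alias the same global entry, which is what lets the block allocation map be shared across adapters.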
+2494
drivers/scsi/cxlflash/main.c
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #include <linux/delay.h> 16 + #include <linux/list.h> 17 + #include <linux/module.h> 18 + #include <linux/pci.h> 19 + 20 + #include <asm/unaligned.h> 21 + 22 + #include <misc/cxl.h> 23 + 24 + #include <scsi/scsi_cmnd.h> 25 + #include <scsi/scsi_host.h> 26 + #include <uapi/scsi/cxlflash_ioctl.h> 27 + 28 + #include "main.h" 29 + #include "sislite.h" 30 + #include "common.h" 31 + 32 + MODULE_DESCRIPTION(CXLFLASH_ADAPTER_NAME); 33 + MODULE_AUTHOR("Manoj N. Kumar <manoj@linux.vnet.ibm.com>"); 34 + MODULE_AUTHOR("Matthew R. Ochs <mrochs@linux.vnet.ibm.com>"); 35 + MODULE_LICENSE("GPL"); 36 + 37 + 38 + /** 39 + * cxlflash_cmd_checkout() - checks out an AFU command 40 + * @afu: AFU to checkout from. 41 + * 42 + * Commands are checked out in a round-robin fashion. Note that since 43 + * the command pool is larger than the hardware queue, the majority of 44 + * times we will only loop once or twice before getting a command. The 45 + * buffer and CDB within the command are initialized (zeroed) prior to 46 + * returning. 47 + * 48 + * Return: The checked out command or NULL when command pool is empty. 
49 + */ 50 + struct afu_cmd *cxlflash_cmd_checkout(struct afu *afu) 51 + { 52 + int k, dec = CXLFLASH_NUM_CMDS; 53 + struct afu_cmd *cmd; 54 + 55 + while (dec--) { 56 + k = (afu->cmd_couts++ & (CXLFLASH_NUM_CMDS - 1)); 57 + 58 + cmd = &afu->cmd[k]; 59 + 60 + if (!atomic_dec_if_positive(&cmd->free)) { 61 + pr_debug("%s: returning found index=%d\n", 62 + __func__, cmd->slot); 63 + memset(cmd->buf, 0, CMD_BUFSIZE); 64 + memset(cmd->rcb.cdb, 0, sizeof(cmd->rcb.cdb)); 65 + return cmd; 66 + } 67 + } 68 + 69 + return NULL; 70 + } 71 + 72 + /** 73 + * cxlflash_cmd_checkin() - checks in an AFU command 74 + * @cmd: AFU command to checkin. 75 + * 76 + * Safe to pass commands that have already been checked in. Several 77 + * internal tracking fields are reset as part of the checkin. Note 78 + * that these are intentionally reset prior to toggling the free bit 79 + * to avoid clobbering values in the event that the command is checked 80 + * out right away. 81 + */ 82 + void cxlflash_cmd_checkin(struct afu_cmd *cmd) 83 + { 84 + cmd->rcb.scp = NULL; 85 + cmd->rcb.timeout = 0; 86 + cmd->sa.ioasc = 0; 87 + cmd->cmd_tmf = false; 88 + cmd->sa.host_use[0] = 0; /* clears both completion and retry bytes */ 89 + 90 + if (unlikely(atomic_inc_return(&cmd->free) != 1)) { 91 + pr_err("%s: Freeing cmd (%d) that is not in use!\n", 92 + __func__, cmd->slot); 93 + return; 94 + } 95 + 96 + pr_debug("%s: released cmd %p index=%d\n", __func__, cmd, cmd->slot); 97 + } 98 + 99 + /** 100 + * process_cmd_err() - command error handler 101 + * @cmd: AFU command that experienced the error. 102 + * @scp: SCSI command associated with the AFU command in error. 103 + * 104 + * Translates error bits from AFU command to SCSI command results. 
105 + */ 106 + static void process_cmd_err(struct afu_cmd *cmd, struct scsi_cmnd *scp) 107 + { 108 + struct sisl_ioarcb *ioarcb; 109 + struct sisl_ioasa *ioasa; 110 + 111 + if (unlikely(!cmd)) 112 + return; 113 + 114 + ioarcb = &(cmd->rcb); 115 + ioasa = &(cmd->sa); 116 + 117 + if (ioasa->rc.flags & SISL_RC_FLAGS_UNDERRUN) { 118 + pr_debug("%s: cmd underrun cmd = %p scp = %p\n", 119 + __func__, cmd, scp); 120 + scp->result = (DID_ERROR << 16); 121 + } 122 + 123 + if (ioasa->rc.flags & SISL_RC_FLAGS_OVERRUN) { 124 + pr_debug("%s: cmd overrun cmd = %p scp = %p\n", 125 + __func__, cmd, scp); 126 + scp->result = (DID_ERROR << 16); 127 + } 128 + 129 + pr_debug("%s: cmd failed afu_rc=%d scsi_rc=%d fc_rc=%d " 130 + "afu_extra=0x%X, scsi_extra=0x%X, fc_extra=0x%X\n", 131 + __func__, ioasa->rc.afu_rc, ioasa->rc.scsi_rc, 132 + ioasa->rc.fc_rc, ioasa->afu_extra, ioasa->scsi_extra, 133 + ioasa->fc_extra); 134 + 135 + if (ioasa->rc.scsi_rc) { 136 + /* We have a SCSI status */ 137 + if (ioasa->rc.flags & SISL_RC_FLAGS_SENSE_VALID) { 138 + memcpy(scp->sense_buffer, ioasa->sense_data, 139 + SISL_SENSE_DATA_LEN); 140 + scp->result = ioasa->rc.scsi_rc; 141 + } else 142 + scp->result = ioasa->rc.scsi_rc | (DID_ERROR << 16); 143 + } 144 + 145 + /* 146 + * We encountered an error. Set scp->result based on nature 147 + * of error. 148 + */ 149 + if (ioasa->rc.fc_rc) { 150 + /* We have an FC status */ 151 + switch (ioasa->rc.fc_rc) { 152 + case SISL_FC_RC_LINKDOWN: 153 + scp->result = (DID_REQUEUE << 16); 154 + break; 155 + case SISL_FC_RC_RESID: 156 + /* This indicates an FCP resid underrun */ 157 + if (!(ioasa->rc.flags & SISL_RC_FLAGS_OVERRUN)) { 158 + /* If the SISL_RC_FLAGS_OVERRUN flag was set, 159 + * then we will handle this error elsewhere. 160 + * If not then we must handle it here. 161 + * This is probably an AFU bug. We will 162 + * attempt a retry to see if that resolves it. 
163 + */ 164 + scp->result = (DID_ERROR << 16); 165 + } 166 + break; 167 + case SISL_FC_RC_RESIDERR: 168 + /* Resid mismatch between adapter and device */ 169 + case SISL_FC_RC_TGTABORT: 170 + case SISL_FC_RC_ABORTOK: 171 + case SISL_FC_RC_ABORTFAIL: 172 + case SISL_FC_RC_NOLOGI: 173 + case SISL_FC_RC_ABORTPEND: 174 + case SISL_FC_RC_WRABORTPEND: 175 + case SISL_FC_RC_NOEXP: 176 + case SISL_FC_RC_INUSE: 177 + scp->result = (DID_ERROR << 16); 178 + break; 179 + } 180 + } 181 + 182 + if (ioasa->rc.afu_rc) { 183 + /* We have an AFU error */ 184 + switch (ioasa->rc.afu_rc) { 185 + case SISL_AFU_RC_NO_CHANNELS: 186 + scp->result = (DID_MEDIUM_ERROR << 16); 187 + break; 188 + case SISL_AFU_RC_DATA_DMA_ERR: 189 + switch (ioasa->afu_extra) { 190 + case SISL_AFU_DMA_ERR_PAGE_IN: 191 + /* Retry */ 192 + scp->result = (DID_IMM_RETRY << 16); 193 + break; 194 + case SISL_AFU_DMA_ERR_INVALID_EA: 195 + default: 196 + scp->result = (DID_ERROR << 16); 197 + } 198 + break; 199 + case SISL_AFU_RC_OUT_OF_DATA_BUFS: 200 + /* Retry */ 201 + scp->result = (DID_ALLOC_FAILURE << 16); 202 + break; 203 + default: 204 + scp->result = (DID_ERROR << 16); 205 + } 206 + } 207 + } 208 + 209 + /** 210 + * cmd_complete() - command completion handler 211 + * @cmd: AFU command that has completed. 212 + * 213 + * Prepares and submits command that has either completed or timed out to 214 + * the SCSI stack. Checks AFU command back into command pool for non-internal 215 + * (rcb.scp populated) commands. 
216 + */ 217 + static void cmd_complete(struct afu_cmd *cmd) 218 + { 219 + struct scsi_cmnd *scp; 220 + u32 resid; 221 + ulong lock_flags; 222 + struct afu *afu = cmd->parent; 223 + struct cxlflash_cfg *cfg = afu->parent; 224 + bool cmd_is_tmf; 225 + 226 + spin_lock_irqsave(&cmd->slock, lock_flags); 227 + cmd->sa.host_use_b[0] |= B_DONE; 228 + spin_unlock_irqrestore(&cmd->slock, lock_flags); 229 + 230 + if (cmd->rcb.scp) { 231 + scp = cmd->rcb.scp; 232 + if (unlikely(cmd->sa.rc.afu_rc || 233 + cmd->sa.rc.scsi_rc || 234 + cmd->sa.rc.fc_rc)) 235 + process_cmd_err(cmd, scp); 236 + else 237 + scp->result = (DID_OK << 16); 238 + 239 + resid = cmd->sa.resid; 240 + cmd_is_tmf = cmd->cmd_tmf; 241 + cxlflash_cmd_checkin(cmd); /* Don't use cmd after here */ 242 + 243 + pr_debug("%s: calling scsi_set_resid, scp=%p " 244 + "result=%X resid=%d\n", __func__, 245 + scp, scp->result, resid); 246 + 247 + scsi_set_resid(scp, resid); 248 + scsi_dma_unmap(scp); 249 + scp->scsi_done(scp); 250 + 251 + if (cmd_is_tmf) { 252 + spin_lock_irqsave(&cfg->tmf_waitq.lock, lock_flags); 253 + cfg->tmf_active = false; 254 + wake_up_all_locked(&cfg->tmf_waitq); 255 + spin_unlock_irqrestore(&cfg->tmf_waitq.lock, 256 + lock_flags); 257 + } 258 + } else 259 + complete(&cmd->cevent); 260 + } 261 + 262 + /** 263 + * send_tmf() - sends a Task Management Function (TMF) 264 + * @afu: AFU to checkout from. 265 + * @scp: SCSI command from stack. 266 + * @tmfcmd: TMF command to send. 
267 + * 268 + * Return: 269 + * 0 on success 270 + * SCSI_MLQUEUE_HOST_BUSY when host is busy 271 + */ 272 + static int send_tmf(struct afu *afu, struct scsi_cmnd *scp, u64 tmfcmd) 273 + { 274 + struct afu_cmd *cmd; 275 + 276 + u32 port_sel = scp->device->channel + 1; 277 + short lflag = 0; 278 + struct Scsi_Host *host = scp->device->host; 279 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)host->hostdata; 280 + ulong lock_flags; 281 + int rc = 0; 282 + 283 + cmd = cxlflash_cmd_checkout(afu); 284 + if (unlikely(!cmd)) { 285 + pr_err("%s: could not get a free command\n", __func__); 286 + rc = SCSI_MLQUEUE_HOST_BUSY; 287 + goto out; 288 + } 289 + 290 + /* If a Task Management Function is active, do not send one more. 291 + */ 292 + spin_lock_irqsave(&cfg->tmf_waitq.lock, lock_flags); 293 + if (cfg->tmf_active) 294 + wait_event_interruptible_locked_irq(cfg->tmf_waitq, 295 + !cfg->tmf_active); 296 + cfg->tmf_active = true; 297 + cmd->cmd_tmf = true; 298 + spin_unlock_irqrestore(&cfg->tmf_waitq.lock, lock_flags); 299 + 300 + cmd->rcb.ctx_id = afu->ctx_hndl; 301 + cmd->rcb.port_sel = port_sel; 302 + cmd->rcb.lun_id = lun_to_lunid(scp->device->lun); 303 + 304 + lflag = SISL_REQ_FLAGS_TMF_CMD; 305 + 306 + cmd->rcb.req_flags = (SISL_REQ_FLAGS_PORT_LUN_ID | 307 + SISL_REQ_FLAGS_SUP_UNDERRUN | lflag); 308 + 309 + /* Stash the scp in the reserved field, for reuse during interrupt */ 310 + cmd->rcb.scp = scp; 311 + 312 + /* Copy the CDB from the cmd passed in */ 313 + memcpy(cmd->rcb.cdb, &tmfcmd, sizeof(tmfcmd)); 314 + 315 + /* Send the command */ 316 + rc = cxlflash_send_cmd(afu, cmd); 317 + if (unlikely(rc)) { 318 + cxlflash_cmd_checkin(cmd); 319 + spin_lock_irqsave(&cfg->tmf_waitq.lock, lock_flags); 320 + cfg->tmf_active = false; 321 + spin_unlock_irqrestore(&cfg->tmf_waitq.lock, lock_flags); 322 + goto out; 323 + } 324 + 325 + spin_lock_irqsave(&cfg->tmf_waitq.lock, lock_flags); 326 + wait_event_interruptible_locked_irq(cfg->tmf_waitq, !cfg->tmf_active); 327 + 
spin_unlock_irqrestore(&cfg->tmf_waitq.lock, lock_flags); 328 + out: 329 + return rc; 330 + } 331 + 332 + /** 333 + * cxlflash_driver_info() - information handler for this host driver 334 + * @host: SCSI host associated with device. 335 + * 336 + * Return: A string describing the device. 337 + */ 338 + static const char *cxlflash_driver_info(struct Scsi_Host *host) 339 + { 340 + return CXLFLASH_ADAPTER_NAME; 341 + } 342 + 343 + /** 344 + * cxlflash_queuecommand() - sends a mid-layer request 345 + * @host: SCSI host associated with device. 346 + * @scp: SCSI command to send. 347 + * 348 + * Return: 349 + * 0 on success 350 + * SCSI_MLQUEUE_HOST_BUSY when host is busy 351 + */ 352 + static int cxlflash_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scp) 353 + { 354 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)host->hostdata; 355 + struct afu *afu = cfg->afu; 356 + struct pci_dev *pdev = cfg->dev; 357 + struct afu_cmd *cmd; 358 + u32 port_sel = scp->device->channel + 1; 359 + int nseg, i, ncount; 360 + struct scatterlist *sg; 361 + ulong lock_flags; 362 + short lflag = 0; 363 + int rc = 0; 364 + 365 + pr_debug("%s: (scp=%p) %d/%d/%d/%llu cdb=(%08X-%08X-%08X-%08X)\n", 366 + __func__, scp, host->host_no, scp->device->channel, 367 + scp->device->id, scp->device->lun, 368 + get_unaligned_be32(&((u32 *)scp->cmnd)[0]), 369 + get_unaligned_be32(&((u32 *)scp->cmnd)[1]), 370 + get_unaligned_be32(&((u32 *)scp->cmnd)[2]), 371 + get_unaligned_be32(&((u32 *)scp->cmnd)[3])); 372 + 373 + /* If a Task Management Function is active, wait for it to complete 374 + * before continuing with regular commands. 
375 + */ 376 + spin_lock_irqsave(&cfg->tmf_waitq.lock, lock_flags); 377 + if (cfg->tmf_active) { 378 + spin_unlock_irqrestore(&cfg->tmf_waitq.lock, lock_flags); 379 + rc = SCSI_MLQUEUE_HOST_BUSY; 380 + goto out; 381 + } 382 + spin_unlock_irqrestore(&cfg->tmf_waitq.lock, lock_flags); 383 + 384 + switch (cfg->state) { 385 + case STATE_LIMBO: 386 + dev_dbg_ratelimited(&cfg->dev->dev, "%s: device in limbo!\n", 387 + __func__); 388 + rc = SCSI_MLQUEUE_HOST_BUSY; 389 + goto out; 390 + case STATE_FAILTERM: 391 + dev_dbg_ratelimited(&cfg->dev->dev, "%s: device has failed!\n", 392 + __func__); 393 + scp->result = (DID_NO_CONNECT << 16); 394 + scp->scsi_done(scp); 395 + rc = 0; 396 + goto out; 397 + default: 398 + break; 399 + } 400 + 401 + cmd = cxlflash_cmd_checkout(afu); 402 + if (unlikely(!cmd)) { 403 + pr_err("%s: could not get a free command\n", __func__); 404 + rc = SCSI_MLQUEUE_HOST_BUSY; 405 + goto out; 406 + } 407 + 408 + cmd->rcb.ctx_id = afu->ctx_hndl; 409 + cmd->rcb.port_sel = port_sel; 410 + cmd->rcb.lun_id = lun_to_lunid(scp->device->lun); 411 + 412 + if (scp->sc_data_direction == DMA_TO_DEVICE) 413 + lflag = SISL_REQ_FLAGS_HOST_WRITE; 414 + else 415 + lflag = SISL_REQ_FLAGS_HOST_READ; 416 + 417 + cmd->rcb.req_flags = (SISL_REQ_FLAGS_PORT_LUN_ID | 418 + SISL_REQ_FLAGS_SUP_UNDERRUN | lflag); 419 + 420 + /* Stash the scp in the reserved field, for reuse during interrupt */ 421 + cmd->rcb.scp = scp; 422 + 423 + nseg = scsi_dma_map(scp); 424 + if (unlikely(nseg < 0)) { 425 + dev_err(&pdev->dev, "%s: Fail DMA map! 
nseg=%d\n", 426 + __func__, nseg); 427 + rc = SCSI_MLQUEUE_HOST_BUSY; 428 + goto out; 429 + } 430 + 431 + ncount = scsi_sg_count(scp); 432 + scsi_for_each_sg(scp, sg, ncount, i) { 433 + cmd->rcb.data_len = sg_dma_len(sg); 434 + cmd->rcb.data_ea = sg_dma_address(sg); 435 + } 436 + 437 + /* Copy the CDB from the scsi_cmnd passed in */ 438 + memcpy(cmd->rcb.cdb, scp->cmnd, sizeof(cmd->rcb.cdb)); 439 + 440 + /* Send the command */ 441 + rc = cxlflash_send_cmd(afu, cmd); 442 + if (unlikely(rc)) { 443 + cxlflash_cmd_checkin(cmd); 444 + scsi_dma_unmap(scp); 445 + } 446 + 447 + out: 448 + return rc; 449 + } 450 + 451 + /** 452 + * cxlflash_eh_device_reset_handler() - reset a single LUN 453 + * @scp: SCSI command to send. 454 + * 455 + * Return: 456 + * SUCCESS as defined in scsi/scsi.h 457 + * FAILED as defined in scsi/scsi.h 458 + */ 459 + static int cxlflash_eh_device_reset_handler(struct scsi_cmnd *scp) 460 + { 461 + int rc = SUCCESS; 462 + struct Scsi_Host *host = scp->device->host; 463 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)host->hostdata; 464 + struct afu *afu = cfg->afu; 465 + int rcr = 0; 466 + 467 + pr_debug("%s: (scp=%p) %d/%d/%d/%llu " 468 + "cdb=(%08X-%08X-%08X-%08X)\n", __func__, scp, 469 + host->host_no, scp->device->channel, 470 + scp->device->id, scp->device->lun, 471 + get_unaligned_be32(&((u32 *)scp->cmnd)[0]), 472 + get_unaligned_be32(&((u32 *)scp->cmnd)[1]), 473 + get_unaligned_be32(&((u32 *)scp->cmnd)[2]), 474 + get_unaligned_be32(&((u32 *)scp->cmnd)[3])); 475 + 476 + switch (cfg->state) { 477 + case STATE_NORMAL: 478 + rcr = send_tmf(afu, scp, TMF_LUN_RESET); 479 + if (unlikely(rcr)) 480 + rc = FAILED; 481 + break; 482 + case STATE_LIMBO: 483 + wait_event(cfg->limbo_waitq, cfg->state != STATE_LIMBO); 484 + if (cfg->state == STATE_NORMAL) 485 + break; 486 + /* fall through */ 487 + default: 488 + rc = FAILED; 489 + break; 490 + } 491 + 492 + pr_debug("%s: returning rc=%d\n", __func__, rc); 493 + return rc; 494 + } 495 + 496 + /** 497 + * 
cxlflash_eh_host_reset_handler() - reset the host adapter 498 + * @scp: SCSI command from stack identifying host. 499 + * 500 + * Return: 501 + * SUCCESS as defined in scsi/scsi.h 502 + * FAILED as defined in scsi/scsi.h 503 + */ 504 + static int cxlflash_eh_host_reset_handler(struct scsi_cmnd *scp) 505 + { 506 + int rc = SUCCESS; 507 + int rcr = 0; 508 + struct Scsi_Host *host = scp->device->host; 509 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)host->hostdata; 510 + 511 + pr_debug("%s: (scp=%p) %d/%d/%d/%llu " 512 + "cdb=(%08X-%08X-%08X-%08X)\n", __func__, scp, 513 + host->host_no, scp->device->channel, 514 + scp->device->id, scp->device->lun, 515 + get_unaligned_be32(&((u32 *)scp->cmnd)[0]), 516 + get_unaligned_be32(&((u32 *)scp->cmnd)[1]), 517 + get_unaligned_be32(&((u32 *)scp->cmnd)[2]), 518 + get_unaligned_be32(&((u32 *)scp->cmnd)[3])); 519 + 520 + switch (cfg->state) { 521 + case STATE_NORMAL: 522 + cfg->state = STATE_LIMBO; 523 + scsi_block_requests(cfg->host); 524 + cxlflash_mark_contexts_error(cfg); 525 + rcr = cxlflash_afu_reset(cfg); 526 + if (rcr) { 527 + rc = FAILED; 528 + cfg->state = STATE_FAILTERM; 529 + } else 530 + cfg->state = STATE_NORMAL; 531 + wake_up_all(&cfg->limbo_waitq); 532 + scsi_unblock_requests(cfg->host); 533 + break; 534 + case STATE_LIMBO: 535 + wait_event(cfg->limbo_waitq, cfg->state != STATE_LIMBO); 536 + if (cfg->state == STATE_NORMAL) 537 + break; 538 + /* fall through */ 539 + default: 540 + rc = FAILED; 541 + break; 542 + } 543 + 544 + pr_debug("%s: returning rc=%d\n", __func__, rc); 545 + return rc; 546 + } 547 + 548 + /** 549 + * cxlflash_change_queue_depth() - change the queue depth for the device 550 + * @sdev: SCSI device destined for queue depth change. 551 + * @qdepth: Requested queue depth value to set. 552 + * 553 + * The requested queue depth is capped to the maximum supported value. 554 + * 555 + * Return: The actual queue depth set. 
556 + */ 557 + static int cxlflash_change_queue_depth(struct scsi_device *sdev, int qdepth) 558 + { 559 + 560 + if (qdepth > CXLFLASH_MAX_CMDS_PER_LUN) 561 + qdepth = CXLFLASH_MAX_CMDS_PER_LUN; 562 + 563 + scsi_change_queue_depth(sdev, qdepth); 564 + return sdev->queue_depth; 565 + } 566 + 567 + /** 568 + * cxlflash_show_port_status() - queries and presents the current port status 569 + * @dev: Generic device associated with the host owning the port. 570 + * @attr: Device attribute representing the port. 571 + * @buf: Buffer of length PAGE_SIZE to report back port status in ASCII. 572 + * 573 + * Return: The size of the ASCII string returned in @buf. 574 + */ 575 + static ssize_t cxlflash_show_port_status(struct device *dev, 576 + struct device_attribute *attr, 577 + char *buf) 578 + { 579 + struct Scsi_Host *shost = class_to_shost(dev); 580 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)shost->hostdata; 581 + struct afu *afu = cfg->afu; 582 + 583 + char *disp_status; 584 + int rc; 585 + u32 port; 586 + u64 status; 587 + u64 *fc_regs; 588 + 589 + rc = kstrtouint((attr->attr.name + 4), 10, &port); 590 + if (rc || (port >= NUM_FC_PORTS)) 591 + return 0; 592 + 593 + fc_regs = &afu->afu_map->global.fc_regs[port][0]; 594 + status = 595 + (readq_be(&fc_regs[FC_MTIP_STATUS / 8]) & FC_MTIP_STATUS_MASK); 596 + 597 + if (status == FC_MTIP_STATUS_ONLINE) 598 + disp_status = "online"; 599 + else if (status == FC_MTIP_STATUS_OFFLINE) 600 + disp_status = "offline"; 601 + else 602 + disp_status = "unknown"; 603 + 604 + return snprintf(buf, PAGE_SIZE, "%s\n", disp_status); 605 + } 606 + 607 + /** 608 + * cxlflash_show_lun_mode() - presents the current LUN mode of the host 609 + * @dev: Generic device associated with the host. 610 + * @attr: Device attribute representing the lun mode. 611 + * @buf: Buffer of length PAGE_SIZE to report back the LUN mode in ASCII. 612 + * 613 + * Return: The size of the ASCII string returned in @buf. 
614 + */ 615 + static ssize_t cxlflash_show_lun_mode(struct device *dev, 616 + struct device_attribute *attr, char *buf) 617 + { 618 + struct Scsi_Host *shost = class_to_shost(dev); 619 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)shost->hostdata; 620 + struct afu *afu = cfg->afu; 621 + 622 + return snprintf(buf, PAGE_SIZE, "%u\n", afu->internal_lun); 623 + } 624 + 625 + /** 626 + * cxlflash_store_lun_mode() - sets the LUN mode of the host 627 + * @dev: Generic device associated with the host. 628 + * @attr: Device attribute representing the lun mode. 629 + * @buf: Buffer of length PAGE_SIZE containing the LUN mode in ASCII. 630 + * @count: Length of data residing in @buf. 631 + * 632 + * The CXL Flash AFU supports a dummy LUN mode where the external 633 + * links and storage are not required. Space on the FPGA is used 634 + * to create 1 or 2 small LUNs which are presented to the system 635 + * as if they were a normal storage device. This feature is useful 636 + * during development and also provides manufacturing with a way 637 + * to test the AFU without an actual device. 638 + * 639 + * 0 = external LUN[s] (default) 640 + * 1 = internal LUN (1 x 64K, 512B blocks, id 0) 641 + * 2 = internal LUN (1 x 64K, 4K blocks, id 0) 642 + * 3 = internal LUN (2 x 32K, 512B blocks, ids 0,1) 643 + * 4 = internal LUN (2 x 32K, 4K blocks, ids 0,1) 644 + * 645 + * Return: The count of bytes consumed from @buf on success. 
646 + */ 647 + static ssize_t cxlflash_store_lun_mode(struct device *dev, 648 + struct device_attribute *attr, 649 + const char *buf, size_t count) 650 + { 651 + struct Scsi_Host *shost = class_to_shost(dev); 652 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)shost->hostdata; 653 + struct afu *afu = cfg->afu; 654 + int rc; 655 + u32 lun_mode; 656 + 657 + rc = kstrtouint(buf, 10, &lun_mode); 658 + if (!rc && (lun_mode < 5) && (lun_mode != afu->internal_lun)) { 659 + afu->internal_lun = lun_mode; 660 + cxlflash_afu_reset(cfg); 661 + scsi_scan_host(cfg->host); 662 + } 663 + 664 + return count; 665 + } 666 + 667 + /** 668 + * cxlflash_show_ioctl_version() - presents the current ioctl version of the host 669 + * @dev: Generic device associated with the host. 670 + * @attr: Device attribute representing the ioctl version. 671 + * @buf: Buffer of length PAGE_SIZE to report back the ioctl version. 672 + * 673 + * Return: The size of the ASCII string returned in @buf. 674 + */ 675 + static ssize_t cxlflash_show_ioctl_version(struct device *dev, 676 + struct device_attribute *attr, 677 + char *buf) 678 + { 679 + return scnprintf(buf, PAGE_SIZE, "%u\n", DK_CXLFLASH_VERSION_0); 680 + } 681 + 682 + /** 683 + * cxlflash_show_dev_mode() - presents the current mode of the device 684 + * @dev: Generic device associated with the device. 685 + * @attr: Device attribute representing the device mode. 686 + * @buf: Buffer of length PAGE_SIZE to report back the dev mode in ASCII. 687 + * 688 + * Return: The size of the ASCII string returned in @buf. 689 + */ 690 + static ssize_t cxlflash_show_dev_mode(struct device *dev, 691 + struct device_attribute *attr, char *buf) 692 + { 693 + struct scsi_device *sdev = to_scsi_device(dev); 694 + 695 + return snprintf(buf, PAGE_SIZE, "%s\n", 696 + sdev->hostdata ? 
"superpipe" : "legacy"); 697 + } 698 + 699 + /** 700 + * cxlflash_wait_for_pci_err_recovery() - wait for error recovery during probe 701 + * @cfg: Internal structure associated with the host. 702 + */ 703 + static void cxlflash_wait_for_pci_err_recovery(struct cxlflash_cfg *cfg) 704 + { 705 + struct pci_dev *pdev = cfg->dev; 706 + 707 + if (pci_channel_offline(pdev)) 708 + wait_event_timeout(cfg->limbo_waitq, 709 + !pci_channel_offline(pdev), 710 + CXLFLASH_PCI_ERROR_RECOVERY_TIMEOUT); 711 + } 712 + 713 + /* 714 + * Host attributes 715 + */ 716 + static DEVICE_ATTR(port0, S_IRUGO, cxlflash_show_port_status, NULL); 717 + static DEVICE_ATTR(port1, S_IRUGO, cxlflash_show_port_status, NULL); 718 + static DEVICE_ATTR(lun_mode, S_IRUGO | S_IWUSR, cxlflash_show_lun_mode, 719 + cxlflash_store_lun_mode); 720 + static DEVICE_ATTR(ioctl_version, S_IRUGO, cxlflash_show_ioctl_version, NULL); 721 + 722 + static struct device_attribute *cxlflash_host_attrs[] = { 723 + &dev_attr_port0, 724 + &dev_attr_port1, 725 + &dev_attr_lun_mode, 726 + &dev_attr_ioctl_version, 727 + NULL 728 + }; 729 + 730 + /* 731 + * Device attributes 732 + */ 733 + static DEVICE_ATTR(mode, S_IRUGO, cxlflash_show_dev_mode, NULL); 734 + 735 + static struct device_attribute *cxlflash_dev_attrs[] = { 736 + &dev_attr_mode, 737 + NULL 738 + }; 739 + 740 + /* 741 + * Host template 742 + */ 743 + static struct scsi_host_template driver_template = { 744 + .module = THIS_MODULE, 745 + .name = CXLFLASH_ADAPTER_NAME, 746 + .info = cxlflash_driver_info, 747 + .ioctl = cxlflash_ioctl, 748 + .proc_name = CXLFLASH_NAME, 749 + .queuecommand = cxlflash_queuecommand, 750 + .eh_device_reset_handler = cxlflash_eh_device_reset_handler, 751 + .eh_host_reset_handler = cxlflash_eh_host_reset_handler, 752 + .change_queue_depth = cxlflash_change_queue_depth, 753 + .cmd_per_lun = 16, 754 + .can_queue = CXLFLASH_MAX_CMDS, 755 + .this_id = -1, 756 + .sg_tablesize = SG_NONE, /* No scatter gather support. 
*/ 757 + .max_sectors = CXLFLASH_MAX_SECTORS, 758 + .use_clustering = ENABLE_CLUSTERING, 759 + .shost_attrs = cxlflash_host_attrs, 760 + .sdev_attrs = cxlflash_dev_attrs, 761 + }; 762 + 763 + /* 764 + * Device dependent values 765 + */ 766 + static struct dev_dependent_vals dev_corsa_vals = { CXLFLASH_MAX_SECTORS }; 767 + 768 + /* 769 + * PCI device binding table 770 + */ 771 + static struct pci_device_id cxlflash_pci_table[] = { 772 + {PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CORSA, 773 + PCI_ANY_ID, PCI_ANY_ID, 0, 0, (kernel_ulong_t)&dev_corsa_vals}, 774 + {} 775 + }; 776 + 777 + MODULE_DEVICE_TABLE(pci, cxlflash_pci_table); 778 + 779 + /** 780 + * free_mem() - free memory associated with the AFU 781 + * @cfg: Internal structure associated with the host. 782 + */ 783 + static void free_mem(struct cxlflash_cfg *cfg) 784 + { 785 + int i; 786 + char *buf = NULL; 787 + struct afu *afu = cfg->afu; 788 + 789 + if (cfg->afu) { 790 + for (i = 0; i < CXLFLASH_NUM_CMDS; i++) { 791 + buf = afu->cmd[i].buf; 792 + if (!((u64)buf & (PAGE_SIZE - 1))) 793 + free_page((ulong)buf); 794 + } 795 + 796 + free_pages((ulong)afu, get_order(sizeof(struct afu))); 797 + cfg->afu = NULL; 798 + } 799 + } 800 + 801 + /** 802 + * stop_afu() - stops the AFU command timers and unmaps the MMIO space 803 + * @cfg: Internal structure associated with the host. 804 + * 805 + * Safe to call with AFU in a partially allocated/initialized state. 806 + */ 807 + static void stop_afu(struct cxlflash_cfg *cfg) 808 + { 809 + int i; 810 + struct afu *afu = cfg->afu; 811 + 812 + if (likely(afu)) { 813 + for (i = 0; i < CXLFLASH_NUM_CMDS; i++) 814 + complete(&afu->cmd[i].cevent); 815 + 816 + if (likely(afu->afu_map)) { 817 + cxl_psa_unmap((void *)afu->afu_map); 818 + afu->afu_map = NULL; 819 + } 820 + } 821 + } 822 + 823 + /** 824 + * term_mc() - terminates the master context 825 + * @cfg: Internal structure associated with the host. 
826 + * @level: Depth of allocation, where to begin waterfall tear down. 827 + * 828 + * Safe to call with AFU/MC in partially allocated/initialized state. 829 + */ 830 + static void term_mc(struct cxlflash_cfg *cfg, enum undo_level level) 831 + { 832 + int rc = 0; 833 + struct afu *afu = cfg->afu; 834 + 835 + if (!afu || !cfg->mcctx) { 836 + pr_err("%s: returning from term_mc with NULL afu or MC\n", 837 + __func__); 838 + return; 839 + } 840 + 841 + switch (level) { 842 + case UNDO_START: 843 + rc = cxl_stop_context(cfg->mcctx); 844 + BUG_ON(rc); 845 + case UNMAP_THREE: 846 + cxl_unmap_afu_irq(cfg->mcctx, 3, afu); 847 + case UNMAP_TWO: 848 + cxl_unmap_afu_irq(cfg->mcctx, 2, afu); 849 + case UNMAP_ONE: 850 + cxl_unmap_afu_irq(cfg->mcctx, 1, afu); 851 + case FREE_IRQ: 852 + cxl_free_afu_irqs(cfg->mcctx); 853 + case RELEASE_CONTEXT: 854 + cfg->mcctx = NULL; 855 + } 856 + } 857 + 858 + /** 859 + * term_afu() - terminates the AFU 860 + * @cfg: Internal structure associated with the host. 861 + * 862 + * Safe to call with AFU/MC in partially allocated/initialized state. 863 + */ 864 + static void term_afu(struct cxlflash_cfg *cfg) 865 + { 866 + term_mc(cfg, UNDO_START); 867 + 868 + if (cfg->afu) 869 + stop_afu(cfg); 870 + 871 + pr_debug("%s: returning\n", __func__); 872 + } 873 + 874 + /** 875 + * cxlflash_remove() - PCI entry point to tear down host 876 + * @pdev: PCI device associated with the host. 877 + * 878 + * Safe to use as a cleanup in partially allocated/initialized state. 879 + */ 880 + static void cxlflash_remove(struct pci_dev *pdev) 881 + { 882 + struct cxlflash_cfg *cfg = pci_get_drvdata(pdev); 883 + ulong lock_flags; 884 + 885 + /* If a Task Management Function is active, wait for it to complete 886 + * before continuing with remove. 
887 + */ 888 + spin_lock_irqsave(&cfg->tmf_waitq.lock, lock_flags); 889 + if (cfg->tmf_active) 890 + wait_event_interruptible_locked_irq(cfg->tmf_waitq, 891 + !cfg->tmf_active); 892 + spin_unlock_irqrestore(&cfg->tmf_waitq.lock, lock_flags); 893 + 894 + cfg->state = STATE_FAILTERM; 895 + cxlflash_stop_term_user_contexts(cfg); 896 + 897 + switch (cfg->init_state) { 898 + case INIT_STATE_SCSI: 899 + cxlflash_term_local_luns(cfg); 900 + scsi_remove_host(cfg->host); 901 + scsi_host_put(cfg->host); 902 + /* Fall through */ 903 + case INIT_STATE_AFU: 904 + term_afu(cfg); 905 + case INIT_STATE_PCI: 906 + pci_release_regions(cfg->dev); 907 + pci_disable_device(pdev); 908 + case INIT_STATE_NONE: 909 + flush_work(&cfg->work_q); 910 + free_mem(cfg); 911 + break; 912 + } 913 + 914 + pr_debug("%s: returning\n", __func__); 915 + } 916 + 917 + /** 918 + * alloc_mem() - allocates the AFU and its command pool 919 + * @cfg: Internal structure associated with the host. 920 + * 921 + * A partially allocated state remains on failure. 922 + * 923 + * Return: 924 + * 0 on success 925 + * -ENOMEM on failure to allocate memory 926 + */ 927 + static int alloc_mem(struct cxlflash_cfg *cfg) 928 + { 929 + int rc = 0; 930 + int i; 931 + char *buf = NULL; 932 + 933 + /* This allocation is about 12K, i.e. 
only 1 64k page 934 + * and up to 4 4k pages 935 + */ 936 + cfg->afu = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 937 + get_order(sizeof(struct afu))); 938 + if (unlikely(!cfg->afu)) { 939 + pr_err("%s: cannot get %d free pages\n", 940 + __func__, get_order(sizeof(struct afu))); 941 + rc = -ENOMEM; 942 + goto out; 943 + } 944 + cfg->afu->parent = cfg; 945 + cfg->afu->afu_map = NULL; 946 + 947 + for (i = 0; i < CXLFLASH_NUM_CMDS; buf += CMD_BUFSIZE, i++) { 948 + if (!((u64)buf & (PAGE_SIZE - 1))) { 949 + buf = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO); 950 + if (unlikely(!buf)) { 951 + pr_err("%s: Failed to allocate command buffers\n", 952 + __func__); 953 + rc = -ENOMEM; 954 + free_mem(cfg); 955 + goto out; 956 + } 957 + } 958 + 959 + cfg->afu->cmd[i].buf = buf; 960 + atomic_set(&cfg->afu->cmd[i].free, 1); 961 + cfg->afu->cmd[i].slot = i; 962 + } 963 + 964 + out: 965 + return rc; 966 + } 967 + 968 + /** 969 + * init_pci() - initializes the host as a PCI device 970 + * @cxlflash: Internal structure associated with the host. 
971 + * 972 + * Return: 973 + * 0 on success 974 + * -EIO on unable to communicate with device 975 + * A return code from the PCI sub-routines 976 + */ 977 + static int init_pci(struct cxlflash_cfg *cfg) 978 + { 979 + struct pci_dev *pdev = cfg->dev; 980 + int rc = 0; 981 + 982 + cfg->cxlflash_regs_pci = pci_resource_start(pdev, 0); 983 + rc = pci_request_regions(pdev, CXLFLASH_NAME); 984 + if (rc < 0) { 985 + dev_err(&pdev->dev, 986 + "%s: Couldn't register memory range of registers\n", 987 + __func__); 988 + goto out; 989 + } 990 + 991 + rc = pci_enable_device(pdev); 992 + if (rc || pci_channel_offline(pdev)) { 993 + if (pci_channel_offline(pdev)) { 994 + cxlflash_wait_for_pci_err_recovery(cfg); 995 + rc = pci_enable_device(pdev); 996 + } 997 + 998 + if (rc) { 999 + dev_err(&pdev->dev, "%s: Cannot enable adapter\n", 1000 + __func__); 1001 + cxlflash_wait_for_pci_err_recovery(cfg); 1002 + goto out_release_regions; 1003 + } 1004 + } 1005 + 1006 + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); 1007 + if (rc < 0) { 1008 + dev_dbg(&pdev->dev, "%s: Failed to set 64 bit PCI DMA mask\n", 1009 + __func__); 1010 + rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); 1011 + } 1012 + 1013 + if (rc < 0) { 1014 + dev_err(&pdev->dev, "%s: Failed to set PCI DMA mask\n", 1015 + __func__); 1016 + goto out_disable; 1017 + } 1018 + 1019 + pci_set_master(pdev); 1020 + 1021 + if (pci_channel_offline(pdev)) { 1022 + cxlflash_wait_for_pci_err_recovery(cfg); 1023 + if (pci_channel_offline(pdev)) { 1024 + rc = -EIO; 1025 + goto out_msi_disable; 1026 + } 1027 + } 1028 + 1029 + rc = pci_save_state(pdev); 1030 + 1031 + if (rc != PCIBIOS_SUCCESSFUL) { 1032 + dev_err(&pdev->dev, "%s: Failed to save PCI config space\n", 1033 + __func__); 1034 + rc = -EIO; 1035 + goto cleanup_nolog; 1036 + } 1037 + 1038 + out: 1039 + pr_debug("%s: returning rc=%d\n", __func__, rc); 1040 + return rc; 1041 + 1042 + cleanup_nolog: 1043 + out_msi_disable: 1044 + cxlflash_wait_for_pci_err_recovery(cfg); 1045 + out_disable: 
1046 + pci_disable_device(pdev); 1047 + out_release_regions: 1048 + pci_release_regions(pdev); 1049 + goto out; 1050 + 1051 + } 1052 + 1053 + /** 1054 + * init_scsi() - adds the host to the SCSI stack and kicks off host scan 1055 + * @cxlflash: Internal structure associated with the host. 1056 + * 1057 + * Return: 1058 + * 0 on success 1059 + * A return code from adding the host 1060 + */ 1061 + static int init_scsi(struct cxlflash_cfg *cfg) 1062 + { 1063 + struct pci_dev *pdev = cfg->dev; 1064 + int rc = 0; 1065 + 1066 + rc = scsi_add_host(cfg->host, &pdev->dev); 1067 + if (rc) { 1068 + dev_err(&pdev->dev, "%s: scsi_add_host failed (rc=%d)\n", 1069 + __func__, rc); 1070 + goto out; 1071 + } 1072 + 1073 + scsi_scan_host(cfg->host); 1074 + 1075 + out: 1076 + pr_debug("%s: returning rc=%d\n", __func__, rc); 1077 + return rc; 1078 + } 1079 + 1080 + /** 1081 + * set_port_online() - transitions the specified host FC port to online state 1082 + * @fc_regs: Top of MMIO region defined for specified port. 1083 + * 1084 + * The provided MMIO region must be mapped prior to call. Online state means 1085 + * that the FC link layer has synced, completed the handshaking process, and 1086 + * is ready for login to start. 1087 + */ 1088 + static void set_port_online(u64 *fc_regs) 1089 + { 1090 + u64 cmdcfg; 1091 + 1092 + cmdcfg = readq_be(&fc_regs[FC_MTIP_CMDCONFIG / 8]); 1093 + cmdcfg &= (~FC_MTIP_CMDCONFIG_OFFLINE); /* clear OFF_LINE */ 1094 + cmdcfg |= (FC_MTIP_CMDCONFIG_ONLINE); /* set ON_LINE */ 1095 + writeq_be(cmdcfg, &fc_regs[FC_MTIP_CMDCONFIG / 8]); 1096 + } 1097 + 1098 + /** 1099 + * set_port_offline() - transitions the specified host FC port to offline state 1100 + * @fc_regs: Top of MMIO region defined for specified port. 1101 + * 1102 + * The provided MMIO region must be mapped prior to call. 
1103 + */ 1104 + static void set_port_offline(u64 *fc_regs) 1105 + { 1106 + u64 cmdcfg; 1107 + 1108 + cmdcfg = readq_be(&fc_regs[FC_MTIP_CMDCONFIG / 8]); 1109 + cmdcfg &= (~FC_MTIP_CMDCONFIG_ONLINE); /* clear ON_LINE */ 1110 + cmdcfg |= (FC_MTIP_CMDCONFIG_OFFLINE); /* set OFF_LINE */ 1111 + writeq_be(cmdcfg, &fc_regs[FC_MTIP_CMDCONFIG / 8]); 1112 + } 1113 + 1114 + /** 1115 + * wait_port_online() - waits for the specified host FC port to come online 1116 + * @fc_regs: Top of MMIO region defined for specified port. 1117 + * @delay_us: Number of microseconds to delay between reading port status. 1118 + * @nretry: Number of cycles to retry reading port status. 1119 + * 1120 + * The provided MMIO region must be mapped prior to call. This will time out 1121 + * when the cable is not plugged in. 1122 + * 1123 + * Return: 1124 + * TRUE (1) when the specified port is online 1125 + * FALSE (0) when the specified port fails to come online after timeout 1126 + * -EINVAL when @delay_us is less than 1000 1127 + */ 1128 + static int wait_port_online(u64 *fc_regs, u32 delay_us, u32 nretry) 1129 + { 1130 + u64 status; 1131 + 1132 + if (delay_us < 1000) { 1133 + pr_err("%s: invalid delay specified %d\n", __func__, delay_us); 1134 + return -EINVAL; 1135 + } 1136 + 1137 + do { 1138 + msleep(delay_us / 1000); 1139 + status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]); 1140 + } while ((status & FC_MTIP_STATUS_MASK) != FC_MTIP_STATUS_ONLINE && 1141 + nretry--); 1142 + 1143 + return ((status & FC_MTIP_STATUS_MASK) == FC_MTIP_STATUS_ONLINE); 1144 + } 1145 + 1146 + /** 1147 + * wait_port_offline() - waits for the specified host FC port to go offline 1148 + * @fc_regs: Top of MMIO region defined for specified port. 1149 + * @delay_us: Number of microseconds to delay between reading port status. 1150 + * @nretry: Number of cycles to retry reading port status. 1151 + * 1152 + * The provided MMIO region must be mapped prior to call. 
1153 + * 1154 + * Return: 1155 + * TRUE (1) when the specified port is offline 1156 + * FALSE (0) when the specified port fails to go offline after timeout 1157 + * -EINVAL when @delay_us is less than 1000 1158 + */ 1159 + static int wait_port_offline(u64 *fc_regs, u32 delay_us, u32 nretry) 1160 + { 1161 + u64 status; 1162 + 1163 + if (delay_us < 1000) { 1164 + pr_err("%s: invalid delay specified %d\n", __func__, delay_us); 1165 + return -EINVAL; 1166 + } 1167 + 1168 + do { 1169 + msleep(delay_us / 1000); 1170 + status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]); 1171 + } while ((status & FC_MTIP_STATUS_MASK) != FC_MTIP_STATUS_OFFLINE && 1172 + nretry--); 1173 + 1174 + return ((status & FC_MTIP_STATUS_MASK) == FC_MTIP_STATUS_OFFLINE); 1175 + } 1176 + 1177 + /** 1178 + * afu_set_wwpn() - configures the WWPN for the specified host FC port 1179 + * @afu: AFU associated with the host that owns the specified FC port. 1180 + * @port: Port number being configured. 1181 + * @fc_regs: Top of MMIO region defined for specified port. 1182 + * @wwpn: The world-wide-port-number previously discovered for port. 1183 + * 1184 + * The provided MMIO region must be mapped prior to call. As part of the 1185 + * sequence to configure the WWPN, the port is toggled offline and then back 1186 + * online. This toggling action can cause this routine to delay up to a few 1187 + * seconds. When configured to use the internal LUN feature of the AFU, a 1188 + * failure to come online is overridden. 
1189 + * 1190 + * Return: 1191 + * 0 when the WWPN is successfully written and the port comes back online 1192 + * -1 when the port fails to go offline or come back up online 1193 + */ 1194 + static int afu_set_wwpn(struct afu *afu, int port, u64 *fc_regs, u64 wwpn) 1195 + { 1196 + int ret = 0; 1197 + 1198 + set_port_offline(fc_regs); 1199 + 1200 + if (!wait_port_offline(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US, 1201 + FC_PORT_STATUS_RETRY_CNT)) { 1202 + pr_debug("%s: wait on port %d to go offline timed out\n", 1203 + __func__, port); 1204 + ret = -1; /* but continue on to leave the port back online */ 1205 + } 1206 + 1207 + if (ret == 0) 1208 + writeq_be(wwpn, &fc_regs[FC_PNAME / 8]); 1209 + 1210 + set_port_online(fc_regs); 1211 + 1212 + if (!wait_port_online(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US, 1213 + FC_PORT_STATUS_RETRY_CNT)) { 1214 + pr_debug("%s: wait on port %d to go online timed out\n", 1215 + __func__, port); 1216 + ret = -1; 1217 + 1218 + /* 1219 + * Override for internal lun!!! 1220 + */ 1221 + if (afu->internal_lun) { 1222 + pr_debug("%s: Overriding port %d online timeout!!!\n", 1223 + __func__, port); 1224 + ret = 0; 1225 + } 1226 + } 1227 + 1228 + pr_debug("%s: returning rc=%d\n", __func__, ret); 1229 + 1230 + return ret; 1231 + } 1232 + 1233 + /** 1234 + * afu_link_reset() - resets the specified host FC port 1235 + * @afu: AFU associated with the host that owns the specified FC port. 1236 + * @port: Port number being configured. 1237 + * @fc_regs: Top of MMIO region defined for specified port. 1238 + * 1239 + * The provided MMIO region must be mapped prior to call. The sequence to 1240 + * reset the port involves toggling it offline and then back online. This 1241 + * action can cause this routine to delay up to a few seconds. An effort 1242 + * is made to maintain link with the device by switching the host to use 1243 + * the alternate port exclusively while the reset takes place. A 1244 + * failure to come back online is logged but not treated as fatal. 
1245 + */ 1246 + static void afu_link_reset(struct afu *afu, int port, u64 *fc_regs) 1247 + { 1248 + u64 port_sel; 1249 + 1250 + /* first switch the AFU to the other links, if any */ 1251 + port_sel = readq_be(&afu->afu_map->global.regs.afu_port_sel); 1252 + port_sel &= ~(1ULL << port); 1253 + writeq_be(port_sel, &afu->afu_map->global.regs.afu_port_sel); 1254 + cxlflash_afu_sync(afu, 0, 0, AFU_GSYNC); 1255 + 1256 + set_port_offline(fc_regs); 1257 + if (!wait_port_offline(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US, 1258 + FC_PORT_STATUS_RETRY_CNT)) 1259 + pr_err("%s: wait on port %d to go offline timed out\n", 1260 + __func__, port); 1261 + 1262 + set_port_online(fc_regs); 1263 + if (!wait_port_online(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US, 1264 + FC_PORT_STATUS_RETRY_CNT)) 1265 + pr_err("%s: wait on port %d to go online timed out\n", 1266 + __func__, port); 1267 + 1268 + /* switch back to include this port */ 1269 + port_sel |= (1ULL << port); 1270 + writeq_be(port_sel, &afu->afu_map->global.regs.afu_port_sel); 1271 + cxlflash_afu_sync(afu, 0, 0, AFU_GSYNC); 1272 + 1273 + pr_debug("%s: returning port_sel=%lld\n", __func__, port_sel); 1274 + } 1275 + 1276 + /* 1277 + * Asynchronous interrupt information table 1278 + */ 1279 + static const struct asyc_intr_info ainfo[] = { 1280 + {SISL_ASTATUS_FC0_OTHER, "other error", 0, CLR_FC_ERROR | LINK_RESET}, 1281 + {SISL_ASTATUS_FC0_LOGO, "target initiated LOGO", 0, 0}, 1282 + {SISL_ASTATUS_FC0_CRC_T, "CRC threshold exceeded", 0, LINK_RESET}, 1283 + {SISL_ASTATUS_FC0_LOGI_R, "login timed out, retrying", 0, 0}, 1284 + {SISL_ASTATUS_FC0_LOGI_F, "login failed", 0, CLR_FC_ERROR}, 1285 + {SISL_ASTATUS_FC0_LOGI_S, "login succeeded", 0, 0}, 1286 + {SISL_ASTATUS_FC0_LINK_DN, "link down", 0, 0}, 1287 + {SISL_ASTATUS_FC0_LINK_UP, "link up", 0, 0}, 1288 + {SISL_ASTATUS_FC1_OTHER, "other error", 1, CLR_FC_ERROR | LINK_RESET}, 1289 + {SISL_ASTATUS_FC1_LOGO, "target initiated LOGO", 1, 0}, 1290 + {SISL_ASTATUS_FC1_CRC_T, "CRC threshold 
exceeded", 1, LINK_RESET}, 1291 + {SISL_ASTATUS_FC1_LOGI_R, "login timed out, retrying", 1, 0}, 1292 + {SISL_ASTATUS_FC1_LOGI_F, "login failed", 1, CLR_FC_ERROR}, 1293 + {SISL_ASTATUS_FC1_LOGI_S, "login succeeded", 1, 0}, 1294 + {SISL_ASTATUS_FC1_LINK_DN, "link down", 1, 0}, 1295 + {SISL_ASTATUS_FC1_LINK_UP, "link up", 1, 0}, 1296 + {0x0, "", 0, 0} /* terminator */ 1297 + }; 1298 + 1299 + /** 1300 + * find_ainfo() - locates and returns asynchronous interrupt information 1301 + * @status: Status code set by AFU on error. 1302 + * 1303 + * Return: The located information or NULL when the status code is invalid. 1304 + */ 1305 + static const struct asyc_intr_info *find_ainfo(u64 status) 1306 + { 1307 + const struct asyc_intr_info *info; 1308 + 1309 + for (info = &ainfo[0]; info->status; info++) 1310 + if (info->status == status) 1311 + return info; 1312 + 1313 + return NULL; 1314 + } 1315 + 1316 + /** 1317 + * afu_err_intr_init() - clears and initializes the AFU for error interrupts 1318 + * @afu: AFU associated with the host. 1319 + */ 1320 + static void afu_err_intr_init(struct afu *afu) 1321 + { 1322 + int i; 1323 + u64 reg; 1324 + 1325 + /* global async interrupts: AFU clears afu_ctrl on context exit 1326 + * if async interrupts were sent to that context. This prevents 1327 + * the AFU from sending further async interrupts when 1328 + * there is 1329 + * nobody to receive them. 
1330 + */ 1331 + 1332 + /* mask all */ 1333 + writeq_be(-1ULL, &afu->afu_map->global.regs.aintr_mask); 1334 + /* set LISN# to send and point to master context */ 1335 + reg = ((u64) (((afu->ctx_hndl << 8) | SISL_MSI_ASYNC_ERROR)) << 40); 1336 + 1337 + if (afu->internal_lun) 1338 + reg |= 1; /* Bit 63 indicates local lun */ 1339 + writeq_be(reg, &afu->afu_map->global.regs.afu_ctrl); 1340 + /* clear all */ 1341 + writeq_be(-1ULL, &afu->afu_map->global.regs.aintr_clear); 1342 + /* unmask bits that are of interest */ 1343 + /* note: afu can send an interrupt after this step */ 1344 + writeq_be(SISL_ASTATUS_MASK, &afu->afu_map->global.regs.aintr_mask); 1345 + /* clear again in case a bit came on after previous clear but before */ 1346 + /* unmask */ 1347 + writeq_be(-1ULL, &afu->afu_map->global.regs.aintr_clear); 1348 + 1349 + /* Clear/Set internal lun bits */ 1350 + reg = readq_be(&afu->afu_map->global.fc_regs[0][FC_CONFIG2 / 8]); 1351 + reg &= SISL_FC_INTERNAL_MASK; 1352 + if (afu->internal_lun) 1353 + reg |= ((u64)(afu->internal_lun - 1) << SISL_FC_INTERNAL_SHIFT); 1354 + writeq_be(reg, &afu->afu_map->global.fc_regs[0][FC_CONFIG2 / 8]); 1355 + 1356 + /* now clear FC errors */ 1357 + for (i = 0; i < NUM_FC_PORTS; i++) { 1358 + writeq_be(0xFFFFFFFFU, 1359 + &afu->afu_map->global.fc_regs[i][FC_ERROR / 8]); 1360 + writeq_be(0, &afu->afu_map->global.fc_regs[i][FC_ERRCAP / 8]); 1361 + } 1362 + 1363 + /* sync interrupts for master's IOARRIN write */ 1364 + /* note that unlike asyncs, there can be no pending sync interrupts */ 1365 + /* at this time (this is a fresh context and master has not written */ 1366 + /* IOARRIN yet), so there is nothing to clear. 
*/ 1367 + 1368 + /* set LISN#, it is always sent to the context that wrote IOARRIN */ 1369 + writeq_be(SISL_MSI_SYNC_ERROR, &afu->host_map->ctx_ctrl); 1370 + writeq_be(SISL_ISTATUS_MASK, &afu->host_map->intr_mask); 1371 + } 1372 + 1373 + /** 1374 + * cxlflash_sync_err_irq() - interrupt handler for synchronous errors 1375 + * @irq: Interrupt number. 1376 + * @data: Private data provided at interrupt registration, the AFU. 1377 + * 1378 + * Return: Always return IRQ_HANDLED. 1379 + */ 1380 + static irqreturn_t cxlflash_sync_err_irq(int irq, void *data) 1381 + { 1382 + struct afu *afu = (struct afu *)data; 1383 + u64 reg; 1384 + u64 reg_unmasked; 1385 + 1386 + reg = readq_be(&afu->host_map->intr_status); 1387 + reg_unmasked = (reg & SISL_ISTATUS_UNMASK); 1388 + 1389 + if (reg_unmasked == 0UL) { 1390 + pr_err("%s: %llX: spurious interrupt, intr_status %016llX\n", 1391 + __func__, (u64)afu, reg); 1392 + goto cxlflash_sync_err_irq_exit; 1393 + } 1394 + 1395 + pr_err("%s: %llX: unexpected interrupt, intr_status %016llX\n", 1396 + __func__, (u64)afu, reg); 1397 + 1398 + writeq_be(reg_unmasked, &afu->host_map->intr_clear); 1399 + 1400 + cxlflash_sync_err_irq_exit: 1401 + pr_debug("%s: returning rc=%d\n", __func__, IRQ_HANDLED); 1402 + return IRQ_HANDLED; 1403 + } 1404 + 1405 + /** 1406 + * cxlflash_rrq_irq() - interrupt handler for read-response queue (normal path) 1407 + * @irq: Interrupt number. 1408 + * @data: Private data provided at interrupt registration, the AFU. 1409 + * 1410 + * Return: Always return IRQ_HANDLED. 
1411 + */ 1412 + static irqreturn_t cxlflash_rrq_irq(int irq, void *data) 1413 + { 1414 + struct afu *afu = (struct afu *)data; 1415 + struct afu_cmd *cmd; 1416 + bool toggle = afu->toggle; 1417 + u64 entry, 1418 + *hrrq_start = afu->hrrq_start, 1419 + *hrrq_end = afu->hrrq_end, 1420 + *hrrq_curr = afu->hrrq_curr; 1421 + 1422 + /* Process however many RRQ entries that are ready */ 1423 + while (true) { 1424 + entry = *hrrq_curr; 1425 + 1426 + if ((entry & SISL_RESP_HANDLE_T_BIT) != toggle) 1427 + break; 1428 + 1429 + cmd = (struct afu_cmd *)(entry & ~SISL_RESP_HANDLE_T_BIT); 1430 + cmd_complete(cmd); 1431 + 1432 + /* Advance to next entry or wrap and flip the toggle bit */ 1433 + if (hrrq_curr < hrrq_end) 1434 + hrrq_curr++; 1435 + else { 1436 + hrrq_curr = hrrq_start; 1437 + toggle ^= SISL_RESP_HANDLE_T_BIT; 1438 + } 1439 + } 1440 + 1441 + afu->hrrq_curr = hrrq_curr; 1442 + afu->toggle = toggle; 1443 + 1444 + return IRQ_HANDLED; 1445 + } 1446 + 1447 + /** 1448 + * cxlflash_async_err_irq() - interrupt handler for asynchronous errors 1449 + * @irq: Interrupt number. 1450 + * @data: Private data provided at interrupt registration, the AFU. 1451 + * 1452 + * Return: Always return IRQ_HANDLED. 
1453 + */ 1454 + static irqreturn_t cxlflash_async_err_irq(int irq, void *data) 1455 + { 1456 + struct afu *afu = (struct afu *)data; 1457 + struct cxlflash_cfg *cfg; 1458 + u64 reg_unmasked; 1459 + const struct asyc_intr_info *info; 1460 + struct sisl_global_map *global = &afu->afu_map->global; 1461 + u64 reg; 1462 + u8 port; 1463 + int i; 1464 + 1465 + cfg = afu->parent; 1466 + 1467 + reg = readq_be(&global->regs.aintr_status); 1468 + reg_unmasked = (reg & SISL_ASTATUS_UNMASK); 1469 + 1470 + if (reg_unmasked == 0) { 1471 + pr_err("%s: spurious interrupt, aintr_status 0x%016llX\n", 1472 + __func__, reg); 1473 + goto out; 1474 + } 1475 + 1476 + /* it is OK to clear AFU status before FC_ERROR */ 1477 + writeq_be(reg_unmasked, &global->regs.aintr_clear); 1478 + 1479 + /* check each bit that is on */ 1480 + for (i = 0; reg_unmasked; i++, reg_unmasked = (reg_unmasked >> 1)) { 1481 + info = find_ainfo(1ULL << i); 1482 + if (((reg_unmasked & 0x1) == 0) || !info) 1483 + continue; 1484 + 1485 + port = info->port; 1486 + 1487 + pr_err("%s: FC Port %d -> %s, fc_status 0x%08llX\n", 1488 + __func__, port, info->desc, 1489 + readq_be(&global->fc_regs[port][FC_STATUS / 8])); 1490 + 1491 + /* 1492 + * do link reset first, some OTHER errors will set FC_ERROR 1493 + * again if cleared before or w/o a reset 1494 + */ 1495 + if (info->action & LINK_RESET) { 1496 + pr_err("%s: FC Port %d: resetting link\n", 1497 + __func__, port); 1498 + cfg->lr_state = LINK_RESET_REQUIRED; 1499 + cfg->lr_port = port; 1500 + schedule_work(&cfg->work_q); 1501 + } 1502 + 1503 + if (info->action & CLR_FC_ERROR) { 1504 + reg = readq_be(&global->fc_regs[port][FC_ERROR / 8]); 1505 + 1506 + /* 1507 + * since all errors are unmasked, FC_ERROR and FC_ERRCAP 1508 + * should be the same and tracing one is sufficient. 
1509 + */ 1510 + 1511 + pr_err("%s: fc %d: clearing fc_error 0x%08llX\n", 1512 + __func__, port, reg); 1513 + 1514 + writeq_be(reg, &global->fc_regs[port][FC_ERROR / 8]); 1515 + writeq_be(0, &global->fc_regs[port][FC_ERRCAP / 8]); 1516 + } 1517 + } 1518 + 1519 + out: 1520 + pr_debug("%s: returning rc=%d, afu=%p\n", __func__, IRQ_HANDLED, afu); 1521 + return IRQ_HANDLED; 1522 + } 1523 + 1524 + /** 1525 + * start_context() - starts the master context 1526 + * @cxlflash: Internal structure associated with the host. 1527 + * 1528 + * Return: A success or failure value from CXL services. 1529 + */ 1530 + static int start_context(struct cxlflash_cfg *cfg) 1531 + { 1532 + int rc = 0; 1533 + 1534 + rc = cxl_start_context(cfg->mcctx, 1535 + cfg->afu->work.work_element_descriptor, 1536 + NULL); 1537 + 1538 + pr_debug("%s: returning rc=%d\n", __func__, rc); 1539 + return rc; 1540 + } 1541 + 1542 + /** 1543 + * read_vpd() - obtains the WWPNs from VPD 1544 + * @cxlflash: Internal structure associated with the host. 
1545 + * @wwpn: Array of size NUM_FC_PORTS to pass back WWPNs 1546 + * 1547 + * Return: 1548 + * 0 on success 1549 + * -ENODEV when VPD or WWPN keywords not found 1550 + */ 1551 + static int read_vpd(struct cxlflash_cfg *cfg, u64 wwpn[]) 1552 + { 1553 + struct pci_dev *dev = cfg->parent_dev; 1554 + int rc = 0; 1555 + int ro_start, ro_size, i, j, k; 1556 + ssize_t vpd_size; 1557 + char vpd_data[CXLFLASH_VPD_LEN]; 1558 + char tmp_buf[WWPN_BUF_LEN] = { 0 }; 1559 + char *wwpn_vpd_tags[NUM_FC_PORTS] = { "V5", "V6" }; 1560 + 1561 + /* Get the VPD data from the device */ 1562 + vpd_size = pci_read_vpd(dev, 0, sizeof(vpd_data), vpd_data); 1563 + if (unlikely(vpd_size <= 0)) { 1564 + pr_err("%s: Unable to read VPD (size = %ld)\n", 1565 + __func__, vpd_size); 1566 + rc = -ENODEV; 1567 + goto out; 1568 + } 1569 + 1570 + /* Get the read only section offset */ 1571 + ro_start = pci_vpd_find_tag(vpd_data, 0, vpd_size, 1572 + PCI_VPD_LRDT_RO_DATA); 1573 + if (unlikely(ro_start < 0)) { 1574 + pr_err("%s: VPD Read-only data not found\n", __func__); 1575 + rc = -ENODEV; 1576 + goto out; 1577 + } 1578 + 1579 + /* Get the read only section size, cap when extends beyond read VPD */ 1580 + ro_size = pci_vpd_lrdt_size(&vpd_data[ro_start]); 1581 + j = ro_size; 1582 + i = ro_start + PCI_VPD_LRDT_TAG_SIZE; 1583 + if (unlikely((i + j) > vpd_size)) { 1584 + pr_debug("%s: Might need to read more VPD (%d > %ld)\n", 1585 + __func__, (i + j), vpd_size); 1586 + ro_size = vpd_size - i; 1587 + } 1588 + 1589 + /* 1590 + * Find the offset of the WWPN tag within the read only 1591 + * VPD data and validate the found field (partials are 1592 + * no good to us). Convert the ASCII data to an integer 1593 + * value. Note that we must copy to a temporary buffer 1594 + * because the conversion service requires that the ASCII 1595 + * string be terminated. 
1596 + */ 1597 + for (k = 0; k < NUM_FC_PORTS; k++) { 1598 + j = ro_size; 1599 + i = ro_start + PCI_VPD_LRDT_TAG_SIZE; 1600 + 1601 + i = pci_vpd_find_info_keyword(vpd_data, i, j, wwpn_vpd_tags[k]); 1602 + if (unlikely(i < 0)) { 1603 + pr_err("%s: Port %d WWPN not found in VPD\n", 1604 + __func__, k); 1605 + rc = -ENODEV; 1606 + goto out; 1607 + } 1608 + 1609 + j = pci_vpd_info_field_size(&vpd_data[i]); 1610 + i += PCI_VPD_INFO_FLD_HDR_SIZE; 1611 + if (unlikely((i + j > vpd_size) || (j != WWPN_LEN))) { 1612 + pr_err("%s: Port %d WWPN incomplete or VPD corrupt\n", 1613 + __func__, k); 1614 + rc = -ENODEV; 1615 + goto out; 1616 + } 1617 + 1618 + memcpy(tmp_buf, &vpd_data[i], WWPN_LEN); 1619 + rc = kstrtoul(tmp_buf, WWPN_LEN, (ulong *)&wwpn[k]); 1620 + if (unlikely(rc)) { 1621 + pr_err("%s: Fail to convert port %d WWPN to integer\n", 1622 + __func__, k); 1623 + rc = -ENODEV; 1624 + goto out; 1625 + } 1626 + } 1627 + 1628 + out: 1629 + pr_debug("%s: returning rc=%d\n", __func__, rc); 1630 + return rc; 1631 + } 1632 + 1633 + /** 1634 + * cxlflash_context_reset() - timeout handler for AFU commands 1635 + * @cmd: AFU command that timed out. 1636 + * 1637 + * Sends a reset to the AFU. 1638 + */ 1639 + void cxlflash_context_reset(struct afu_cmd *cmd) 1640 + { 1641 + int nretry = 0; 1642 + u64 rrin = 0x1; 1643 + u64 room = 0; 1644 + struct afu *afu = cmd->parent; 1645 + ulong lock_flags; 1646 + 1647 + pr_debug("%s: cmd=%p\n", __func__, cmd); 1648 + 1649 + spin_lock_irqsave(&cmd->slock, lock_flags); 1650 + 1651 + /* Already completed? */ 1652 + if (cmd->sa.host_use_b[0] & B_DONE) { 1653 + spin_unlock_irqrestore(&cmd->slock, lock_flags); 1654 + return; 1655 + } 1656 + 1657 + cmd->sa.host_use_b[0] |= (B_DONE | B_ERROR | B_TIMEOUT); 1658 + spin_unlock_irqrestore(&cmd->slock, lock_flags); 1659 + 1660 + /* 1661 + * We really want to send this reset at all costs, so spread 1662 + * out wait time on successive retries for available room. 
1663 + */ 1664 + do { 1665 + room = readq_be(&afu->host_map->cmd_room); 1666 + atomic64_set(&afu->room, room); 1667 + if (room) 1668 + goto write_rrin; 1669 + udelay(nretry); 1670 + } while (nretry++ < MC_ROOM_RETRY_CNT); 1671 + 1672 + pr_err("%s: no cmd_room to send reset\n", __func__); 1673 + return; 1674 + 1675 + write_rrin: 1676 + nretry = 0; 1677 + writeq_be(rrin, &afu->host_map->ioarrin); 1678 + do { 1679 + rrin = readq_be(&afu->host_map->ioarrin); 1680 + if (rrin != 0x1) 1681 + break; 1682 + /* Double delay each time */ 1683 + udelay(1 << nretry); 1684 + } while (nretry++ < MC_ROOM_RETRY_CNT); 1685 + } 1686 + 1687 + /** 1688 + * init_pcr() - initialize the provisioning and control registers 1689 + * @cxlflash: Internal structure associated with the host. 1690 + * 1691 + * Also sets up fast access to the mapped registers and initializes AFU 1692 + * command fields that never change. 1693 + */ 1694 + void init_pcr(struct cxlflash_cfg *cfg) 1695 + { 1696 + struct afu *afu = cfg->afu; 1697 + struct sisl_ctrl_map *ctrl_map; 1698 + int i; 1699 + 1700 + for (i = 0; i < MAX_CONTEXT; i++) { 1701 + ctrl_map = &afu->afu_map->ctrls[i].ctrl; 1702 + /* disrupt any clients that could be running */ 1703 + /* e.g. 
clients that survived a master restart */ 1704 + writeq_be(0, &ctrl_map->rht_start); 1705 + writeq_be(0, &ctrl_map->rht_cnt_id); 1706 + writeq_be(0, &ctrl_map->ctx_cap); 1707 + } 1708 + 1709 + /* copy frequently used fields into afu */ 1710 + afu->ctx_hndl = (u16) cxl_process_element(cfg->mcctx); 1711 + /* ctx_hndl is 16 bits in CAIA */ 1712 + afu->host_map = &afu->afu_map->hosts[afu->ctx_hndl].host; 1713 + afu->ctrl_map = &afu->afu_map->ctrls[afu->ctx_hndl].ctrl; 1714 + 1715 + /* Program the Endian Control for the master context */ 1716 + writeq_be(SISL_ENDIAN_CTRL, &afu->host_map->endian_ctrl); 1717 + 1718 + /* initialize cmd fields that never change */ 1719 + for (i = 0; i < CXLFLASH_NUM_CMDS; i++) { 1720 + afu->cmd[i].rcb.ctx_id = afu->ctx_hndl; 1721 + afu->cmd[i].rcb.msi = SISL_MSI_RRQ_UPDATED; 1722 + afu->cmd[i].rcb.rrq = 0x0; 1723 + } 1724 + } 1725 + 1726 + /** 1727 + * init_global() - initialize AFU global registers 1728 + * @cxlflash: Internal structure associated with the host. 
1729 + */ 1730 + int init_global(struct cxlflash_cfg *cfg) 1731 + { 1732 + struct afu *afu = cfg->afu; 1733 + u64 wwpn[NUM_FC_PORTS]; /* wwpn of AFU ports */ 1734 + int i = 0, num_ports = 0; 1735 + int rc = 0; 1736 + u64 reg; 1737 + 1738 + rc = read_vpd(cfg, &wwpn[0]); 1739 + if (rc) { 1740 + pr_err("%s: could not read vpd rc=%d\n", __func__, rc); 1741 + goto out; 1742 + } 1743 + 1744 + pr_debug("%s: wwpn0=0x%llX wwpn1=0x%llX\n", __func__, wwpn[0], wwpn[1]); 1745 + 1746 + /* set up RRQ in AFU for master issued cmds */ 1747 + writeq_be((u64) afu->hrrq_start, &afu->host_map->rrq_start); 1748 + writeq_be((u64) afu->hrrq_end, &afu->host_map->rrq_end); 1749 + 1750 + /* AFU configuration */ 1751 + reg = readq_be(&afu->afu_map->global.regs.afu_config); 1752 + reg |= SISL_AFUCONF_AR_ALL|SISL_AFUCONF_ENDIAN; 1753 + /* enable all auto retry options and control endianness */ 1754 + /* leave others at default: */ 1755 + /* CTX_CAP write protected, mbox_r does not clear on read and */ 1756 + /* checker on if dual afu */ 1757 + writeq_be(reg, &afu->afu_map->global.regs.afu_config); 1758 + 1759 + /* global port select: select either port */ 1760 + if (afu->internal_lun) { 1761 + /* only use port 0 */ 1762 + writeq_be(PORT0, &afu->afu_map->global.regs.afu_port_sel); 1763 + num_ports = NUM_FC_PORTS - 1; 1764 + } else { 1765 + writeq_be(BOTH_PORTS, &afu->afu_map->global.regs.afu_port_sel); 1766 + num_ports = NUM_FC_PORTS; 1767 + } 1768 + 1769 + for (i = 0; i < num_ports; i++) { 1770 + /* unmask all errors (but they are still masked at AFU) */ 1771 + writeq_be(0, &afu->afu_map->global.fc_regs[i][FC_ERRMSK / 8]); 1772 + /* clear CRC error cnt & set a threshold */ 1773 + (void)readq_be(&afu->afu_map->global. 1774 + fc_regs[i][FC_CNT_CRCERR / 8]); 1775 + writeq_be(MC_CRC_THRESH, &afu->afu_map->global.fc_regs[i] 1776 + [FC_CRC_THRESH / 8]); 1777 + 1778 + /* set WWPNs. 
If already programmed, wwpn[i] is 0 */ 1779 + if (wwpn[i] != 0 && 1780 + afu_set_wwpn(afu, i, 1781 + &afu->afu_map->global.fc_regs[i][0], 1782 + wwpn[i])) { 1783 + pr_err("%s: failed to set WWPN on port %d\n", 1784 + __func__, i); 1785 + rc = -EIO; 1786 + goto out; 1787 + } 1788 + /* Programming WWPN back to back causes additional 1789 + * offline/online transitions and a PLOGI 1790 + */ 1791 + msleep(100); 1792 + 1793 + } 1794 + 1795 + /* set up master's own CTX_CAP to allow real mode, host translation */ 1796 + /* tbls, afu cmds and read/write GSCSI cmds. */ 1797 + /* First, unlock ctx_cap write by reading mbox */ 1798 + (void)readq_be(&afu->ctrl_map->mbox_r); /* unlock ctx_cap */ 1799 + writeq_be((SISL_CTX_CAP_REAL_MODE | SISL_CTX_CAP_HOST_XLATE | 1800 + SISL_CTX_CAP_READ_CMD | SISL_CTX_CAP_WRITE_CMD | 1801 + SISL_CTX_CAP_AFU_CMD | SISL_CTX_CAP_GSCSI_CMD), 1802 + &afu->ctrl_map->ctx_cap); 1803 + /* init heartbeat */ 1804 + afu->hb = readq_be(&afu->afu_map->global.regs.afu_hb); 1805 + 1806 + out: 1807 + return rc; 1808 + } 1809 + 1810 + /** 1811 + * start_afu() - initializes and starts the AFU 1812 + * @cxlflash: Internal structure associated with the host. 
1813 + */ 1814 + static int start_afu(struct cxlflash_cfg *cfg) 1815 + { 1816 + struct afu *afu = cfg->afu; 1817 + struct afu_cmd *cmd; 1818 + 1819 + int i = 0; 1820 + int rc = 0; 1821 + 1822 + for (i = 0; i < CXLFLASH_NUM_CMDS; i++) { 1823 + cmd = &afu->cmd[i]; 1824 + 1825 + init_completion(&cmd->cevent); 1826 + spin_lock_init(&cmd->slock); 1827 + cmd->parent = afu; 1828 + } 1829 + 1830 + init_pcr(cfg); 1831 + 1832 + /* initialize RRQ pointers */ 1833 + afu->hrrq_start = &afu->rrq_entry[0]; 1834 + afu->hrrq_end = &afu->rrq_entry[NUM_RRQ_ENTRY - 1]; 1835 + afu->hrrq_curr = afu->hrrq_start; 1836 + afu->toggle = 1; 1837 + 1838 + rc = init_global(cfg); 1839 + 1840 + pr_debug("%s: returning rc=%d\n", __func__, rc); 1841 + return rc; 1842 + } 1843 + 1844 + /** 1845 + * init_mc() - create and register as the master context 1846 + * @cxlflash: Internal structure associated with the host. 1847 + * 1848 + * Return: 1849 + * 0 on success 1850 + * -ENOMEM when unable to obtain a context from CXL services 1851 + * A failure value from CXL services. 
1852 + */ 1853 + static int init_mc(struct cxlflash_cfg *cfg) 1854 + { 1855 + struct cxl_context *ctx; 1856 + struct device *dev = &cfg->dev->dev; 1857 + struct afu *afu = cfg->afu; 1858 + int rc = 0; 1859 + enum undo_level level; 1860 + 1861 + ctx = cxl_get_context(cfg->dev); 1862 + if (unlikely(!ctx)) 1863 + return -ENOMEM; 1864 + cfg->mcctx = ctx; 1865 + 1866 + /* Set it up as a master with the CXL */ 1867 + cxl_set_master(ctx); 1868 + 1869 + /* During initialization reset the AFU to start from a clean slate */ 1870 + rc = cxl_afu_reset(cfg->mcctx); 1871 + if (unlikely(rc)) { 1872 + dev_err(dev, "%s: initial AFU reset failed rc=%d\n", 1873 + __func__, rc); 1874 + level = RELEASE_CONTEXT; 1875 + goto out; 1876 + } 1877 + 1878 + rc = cxl_allocate_afu_irqs(ctx, 3); 1879 + if (unlikely(rc)) { 1880 + dev_err(dev, "%s: call to allocate_afu_irqs failed rc=%d!\n", 1881 + __func__, rc); 1882 + level = RELEASE_CONTEXT; 1883 + goto out; 1884 + } 1885 + 1886 + rc = cxl_map_afu_irq(ctx, 1, cxlflash_sync_err_irq, afu, 1887 + "SISL_MSI_SYNC_ERROR"); 1888 + if (unlikely(rc <= 0)) { 1889 + dev_err(dev, "%s: IRQ 1 (SISL_MSI_SYNC_ERROR) map failed!\n", 1890 + __func__); 1891 + level = FREE_IRQ; 1892 + goto out; 1893 + } 1894 + 1895 + rc = cxl_map_afu_irq(ctx, 2, cxlflash_rrq_irq, afu, 1896 + "SISL_MSI_RRQ_UPDATED"); 1897 + if (unlikely(rc <= 0)) { 1898 + dev_err(dev, "%s: IRQ 2 (SISL_MSI_RRQ_UPDATED) map failed!\n", 1899 + __func__); 1900 + level = UNMAP_ONE; 1901 + goto out; 1902 + } 1903 + 1904 + rc = cxl_map_afu_irq(ctx, 3, cxlflash_async_err_irq, afu, 1905 + "SISL_MSI_ASYNC_ERROR"); 1906 + if (unlikely(rc <= 0)) { 1907 + dev_err(dev, "%s: IRQ 3 (SISL_MSI_ASYNC_ERROR) map failed!\n", 1908 + __func__); 1909 + level = UNMAP_TWO; 1910 + goto out; 1911 + } 1912 + 1913 + rc = 0; 1914 + 1915 + /* This performs the equivalent of the CXL_IOCTL_START_WORK. 
1916 + * The CXL_IOCTL_GET_PROCESS_ELEMENT is implicit in the process 1917 + * element (pe) that is embedded in the context (ctx) 1918 + */ 1919 + rc = start_context(cfg); 1920 + if (unlikely(rc)) { 1921 + dev_err(dev, "%s: start context failed rc=%d\n", __func__, rc); 1922 + level = UNMAP_THREE; 1923 + goto out; 1924 + } 1925 + ret: 1926 + pr_debug("%s: returning rc=%d\n", __func__, rc); 1927 + return rc; 1928 + out: 1929 + term_mc(cfg, level); 1930 + goto ret; 1931 + } 1932 + 1933 + /** 1934 + * init_afu() - setup as master context and start AFU 1935 + * @cxlflash: Internal structure associated with the host. 1936 + * 1937 + * This routine is a higher level of control for configuring the 1938 + * AFU on probe and reset paths. 1939 + * 1940 + * Return: 1941 + * 0 on success 1942 + * -ENOMEM when unable to map the AFU MMIO space 1943 + * A failure value from internal services. 1944 + */ 1945 + static int init_afu(struct cxlflash_cfg *cfg) 1946 + { 1947 + u64 reg; 1948 + int rc = 0; 1949 + struct afu *afu = cfg->afu; 1950 + struct device *dev = &cfg->dev->dev; 1951 + 1952 + cxl_perst_reloads_same_image(cfg->cxl_afu, true); 1953 + 1954 + rc = init_mc(cfg); 1955 + if (rc) { 1956 + dev_err(dev, "%s: call to init_mc failed, rc=%d!\n", 1957 + __func__, rc); 1958 + goto err1; 1959 + } 1960 + 1961 + /* Map the entire MMIO space of the AFU. 
1962 + */ 1963 + afu->afu_map = cxl_psa_map(cfg->mcctx); 1964 + if (!afu->afu_map) { 1965 + rc = -ENOMEM; 1966 + term_mc(cfg, UNDO_START); 1967 + dev_err(dev, "%s: call to cxl_psa_map failed!\n", __func__); 1968 + goto err1; 1969 + } 1970 + 1971 + /* don't byte reverse on reading afu_version, else the string form */ 1972 + /* will be backwards */ 1973 + reg = afu->afu_map->global.regs.afu_version; 1974 + memcpy(afu->version, &reg, 8); 1975 + afu->interface_version = 1976 + readq_be(&afu->afu_map->global.regs.interface_version); 1977 + pr_debug("%s: afu version %s, interface version 0x%llX\n", 1978 + __func__, afu->version, afu->interface_version); 1979 + 1980 + rc = start_afu(cfg); 1981 + if (rc) { 1982 + dev_err(dev, "%s: call to start_afu failed, rc=%d!\n", 1983 + __func__, rc); 1984 + term_mc(cfg, UNDO_START); 1985 + cxl_psa_unmap((void *)afu->afu_map); 1986 + afu->afu_map = NULL; 1987 + goto err1; 1988 + } 1989 + 1990 + afu_err_intr_init(cfg->afu); 1991 + atomic64_set(&afu->room, readq_be(&afu->host_map->cmd_room)); 1992 + 1993 + /* Restore the LUN mappings */ 1994 + cxlflash_restore_luntable(cfg); 1995 + err1: 1996 + pr_debug("%s: returning rc=%d\n", __func__, rc); 1997 + return rc; 1998 + } 1999 + 2000 + /** 2001 + * cxlflash_send_cmd() - sends an AFU command 2002 + * @afu: AFU associated with the host. 2003 + * @cmd: AFU command to send. 2004 + * 2005 + * Return: 2006 + * 0 on success 2007 + * -1 on failure 2008 + */ 2009 + int cxlflash_send_cmd(struct afu *afu, struct afu_cmd *cmd) 2010 + { 2011 + struct cxlflash_cfg *cfg = afu->parent; 2012 + int nretry = 0; 2013 + int rc = 0; 2014 + u64 room; 2015 + long newval; 2016 + 2017 + /* 2018 + * This routine is used by critical users such as AFU sync and to 2019 + * send a task management function (TMF). Thus we want to retry a 2020 + * bit before returning an error. To avoid the performance penalty 2021 + * of MMIO, we spread the update of 'room' over multiple commands. 
2022 + */ 2023 + retry: 2024 + newval = atomic64_dec_if_positive(&afu->room); 2025 + if (!newval) { 2026 + do { 2027 + room = readq_be(&afu->host_map->cmd_room); 2028 + atomic64_set(&afu->room, room); 2029 + if (room) 2030 + goto write_ioarrin; 2031 + udelay(nretry); 2032 + } while (nretry++ < MC_ROOM_RETRY_CNT); 2033 + 2034 + pr_err("%s: no cmd_room to send 0x%X\n", 2035 + __func__, cmd->rcb.cdb[0]); 2036 + 2037 + goto no_room; 2038 + } else if (unlikely(newval < 0)) { 2039 + /* This should be rare. i.e. Only if two threads race and 2040 + * decrement before the MMIO read is done. In this case 2041 + * just benefit from the other thread having updated 2042 + * afu->room. 2043 + */ 2044 + if (nretry++ < MC_ROOM_RETRY_CNT) { 2045 + udelay(nretry); 2046 + goto retry; 2047 + } 2048 + 2049 + goto no_room; 2050 + } 2051 + 2052 + write_ioarrin: 2053 + writeq_be((u64)&cmd->rcb, &afu->host_map->ioarrin); 2054 + out: 2055 + pr_debug("%s: cmd=%p len=%d ea=%p rc=%d\n", __func__, cmd, 2056 + cmd->rcb.data_len, (void *)cmd->rcb.data_ea, rc); 2057 + return rc; 2058 + 2059 + no_room: 2060 + afu->read_room = true; 2061 + schedule_work(&cfg->work_q); 2062 + rc = SCSI_MLQUEUE_HOST_BUSY; 2063 + goto out; 2064 + } 2065 + 2066 + /** 2067 + * cxlflash_wait_resp() - polls for a response or timeout to a sent AFU command 2068 + * @afu: AFU associated with the host. 2069 + * @cmd: AFU command that was sent. 
2070 + */ 2071 + void cxlflash_wait_resp(struct afu *afu, struct afu_cmd *cmd) 2072 + { 2073 + ulong timeout = jiffies + (cmd->rcb.timeout * 2 * HZ); 2074 + 2075 + timeout = wait_for_completion_timeout(&cmd->cevent, timeout); 2076 + if (!timeout) 2077 + cxlflash_context_reset(cmd); 2078 + 2079 + if (unlikely(cmd->sa.ioasc != 0)) 2080 + pr_err("%s: CMD 0x%X failed, IOASC: flags 0x%X, afu_rc 0x%X, " 2081 + "scsi_rc 0x%X, fc_rc 0x%X\n", __func__, cmd->rcb.cdb[0], 2082 + cmd->sa.rc.flags, cmd->sa.rc.afu_rc, cmd->sa.rc.scsi_rc, 2083 + cmd->sa.rc.fc_rc); 2084 + } 2085 + 2086 + /** 2087 + * cxlflash_afu_sync() - builds and sends an AFU sync command 2088 + * @afu: AFU associated with the host. 2089 + * @ctx_hndl_u: Identifies context requesting sync. 2090 + * @res_hndl_u: Identifies resource requesting sync. 2091 + * @mode: Type of sync to issue (lightweight, heavyweight, global). 2092 + * 2093 + * The AFU can only take 1 sync command at a time. This routine enforces this 2094 + * limitation by using a mutex to provide exclusive access to the AFU during 2095 + * the sync. This design point requires calling threads to not be on interrupt 2096 + * context due to the possibility of sleeping during concurrent sync operations. 2097 + * 2098 + * AFU sync operations are only necessary and allowed when the device is 2099 + * operating normally. When not operating normally, sync requests can occur as 2100 + * part of cleaning up resources associated with an adapter prior to removal. 2101 + * In this scenario, these requests are simply ignored (safe due to the AFU 2102 + * going away). 
2103 + * 2104 + * Return: 2105 + * 0 on success 2106 + * -1 on failure 2107 + */ 2108 + int cxlflash_afu_sync(struct afu *afu, ctx_hndl_t ctx_hndl_u, 2109 + res_hndl_t res_hndl_u, u8 mode) 2110 + { 2111 + struct cxlflash_cfg *cfg = afu->parent; 2112 + struct afu_cmd *cmd = NULL; 2113 + int rc = 0; 2114 + int retry_cnt = 0; 2115 + static DEFINE_MUTEX(sync_active); 2116 + 2117 + if (cfg->state != STATE_NORMAL) { 2118 + pr_debug("%s: Sync not required! (%u)\n", __func__, cfg->state); 2119 + return 0; 2120 + } 2121 + 2122 + mutex_lock(&sync_active); 2123 + retry: 2124 + cmd = cxlflash_cmd_checkout(afu); 2125 + if (unlikely(!cmd)) { 2126 + retry_cnt++; 2127 + udelay(1000 * retry_cnt); 2128 + if (retry_cnt < MC_RETRY_CNT) 2129 + goto retry; 2130 + pr_err("%s: could not get a free command\n", __func__); 2131 + rc = -1; 2132 + goto out; 2133 + } 2134 + 2135 + pr_debug("%s: afu=%p cmd=%p %d\n", __func__, afu, cmd, ctx_hndl_u); 2136 + 2137 + memset(cmd->rcb.cdb, 0, sizeof(cmd->rcb.cdb)); 2138 + 2139 + cmd->rcb.req_flags = SISL_REQ_FLAGS_AFU_CMD; 2140 + cmd->rcb.port_sel = 0x0; /* NA */ 2141 + cmd->rcb.lun_id = 0x0; /* NA */ 2142 + cmd->rcb.data_len = 0x0; 2143 + cmd->rcb.data_ea = 0x0; 2144 + cmd->rcb.timeout = MC_AFU_SYNC_TIMEOUT; 2145 + 2146 + cmd->rcb.cdb[0] = 0xC0; /* AFU Sync */ 2147 + cmd->rcb.cdb[1] = mode; 2148 + 2149 + /* The cdb is aligned, no unaligned accessors required */ 2150 + *((u16 *)&cmd->rcb.cdb[2]) = swab16(ctx_hndl_u); 2151 + *((u32 *)&cmd->rcb.cdb[4]) = swab32(res_hndl_u); 2152 + 2153 + rc = cxlflash_send_cmd(afu, cmd); 2154 + if (unlikely(rc)) 2155 + goto out; 2156 + 2157 + cxlflash_wait_resp(afu, cmd); 2158 + 2159 + /* set on timeout */ 2160 + if (unlikely((cmd->sa.ioasc != 0) || 2161 + (cmd->sa.host_use_b[0] & B_ERROR))) 2162 + rc = -1; 2163 + out: 2164 + mutex_unlock(&sync_active); 2165 + if (cmd) 2166 + cxlflash_cmd_checkin(cmd); 2167 + pr_debug("%s: returning rc=%d\n", __func__, rc); 2168 + return rc; 2169 + } 2170 + 2171 + /** 2172 + * 
cxlflash_afu_reset() - resets the AFU 2173 + * @cxlflash: Internal structure associated with the host. 2174 + * 2175 + * Return: 2176 + * 0 on success 2177 + * A failure value from internal services. 2178 + */ 2179 + int cxlflash_afu_reset(struct cxlflash_cfg *cfg) 2180 + { 2181 + int rc = 0; 2182 + /* Stop the context before the reset. Since the context is 2183 + * no longer available restart it after the reset is complete 2184 + */ 2185 + 2186 + term_afu(cfg); 2187 + 2188 + rc = init_afu(cfg); 2189 + 2190 + pr_debug("%s: returning rc=%d\n", __func__, rc); 2191 + return rc; 2192 + } 2193 + 2194 + /** 2195 + * cxlflash_worker_thread() - work thread handler for the AFU 2196 + * @work: Work structure contained within cxlflash associated with host. 2197 + * 2198 + * Handles the following events: 2199 + * - Link reset which cannot be performed on interrupt context due to 2200 + * blocking up to a few seconds 2201 + * - Read AFU command room 2202 + */ 2203 + static void cxlflash_worker_thread(struct work_struct *work) 2204 + { 2205 + struct cxlflash_cfg *cfg = container_of(work, struct cxlflash_cfg, 2206 + work_q); 2207 + struct afu *afu = cfg->afu; 2208 + int port; 2209 + ulong lock_flags; 2210 + 2211 + /* Avoid MMIO if the device has failed */ 2212 + 2213 + if (cfg->state != STATE_NORMAL) 2214 + return; 2215 + 2216 + spin_lock_irqsave(cfg->host->host_lock, lock_flags); 2217 + 2218 + if (cfg->lr_state == LINK_RESET_REQUIRED) { 2219 + port = cfg->lr_port; 2220 + if (port < 0) 2221 + pr_err("%s: invalid port index %d\n", __func__, port); 2222 + else { 2223 + spin_unlock_irqrestore(cfg->host->host_lock, 2224 + lock_flags); 2225 + 2226 + /* The reset can block... 
*/ 2227 + afu_link_reset(afu, port, 2228 + &afu->afu_map-> 2229 + global.fc_regs[port][0]); 2230 + spin_lock_irqsave(cfg->host->host_lock, lock_flags); 2231 + } 2232 + 2233 + cfg->lr_state = LINK_RESET_COMPLETE; 2234 + } 2235 + 2236 + if (afu->read_room) { 2237 + atomic64_set(&afu->room, readq_be(&afu->host_map->cmd_room)); 2238 + afu->read_room = false; 2239 + } 2240 + 2241 + spin_unlock_irqrestore(cfg->host->host_lock, lock_flags); 2242 + } 2243 + 2244 + /** 2245 + * cxlflash_probe() - PCI entry point to add host 2246 + * @pdev: PCI device associated with the host. 2247 + * @dev_id: PCI device id associated with device. 2248 + * 2249 + * Return: 0 on success / non-zero on failure 2250 + */ 2251 + static int cxlflash_probe(struct pci_dev *pdev, 2252 + const struct pci_device_id *dev_id) 2253 + { 2254 + struct Scsi_Host *host; 2255 + struct cxlflash_cfg *cfg = NULL; 2256 + struct device *phys_dev; 2257 + struct dev_dependent_vals *ddv; 2258 + int rc = 0; 2259 + 2260 + dev_dbg(&pdev->dev, "%s: Found CXLFLASH with IRQ: %d\n", 2261 + __func__, pdev->irq); 2262 + 2263 + ddv = (struct dev_dependent_vals *)dev_id->driver_data; 2264 + driver_template.max_sectors = ddv->max_sectors; 2265 + 2266 + host = scsi_host_alloc(&driver_template, sizeof(struct cxlflash_cfg)); 2267 + if (!host) { 2268 + dev_err(&pdev->dev, "%s: call to scsi_host_alloc failed!\n", 2269 + __func__); 2270 + rc = -ENOMEM; 2271 + goto out; 2272 + } 2273 + 2274 + host->max_id = CXLFLASH_MAX_NUM_TARGETS_PER_BUS; 2275 + host->max_lun = CXLFLASH_MAX_NUM_LUNS_PER_TARGET; 2276 + host->max_channel = NUM_FC_PORTS - 1; 2277 + host->unique_id = host->host_no; 2278 + host->max_cmd_len = CXLFLASH_MAX_CDB_LEN; 2279 + 2280 + cfg = (struct cxlflash_cfg *)host->hostdata; 2281 + cfg->host = host; 2282 + rc = alloc_mem(cfg); 2283 + if (rc) { 2284 + dev_err(&pdev->dev, "%s: call to alloc_mem failed!\n", 2285 + __func__); 2286 + rc = -ENOMEM; 2287 + goto out; 2288 + } 2289 + 2290 + cfg->init_state = INIT_STATE_NONE; 
2291 + cfg->dev = pdev; 2292 + 2293 + /* 2294 + * The promoted LUNs move to the top of the LUN table. The rest stay 2295 + * on the bottom half. The bottom half grows from the end 2296 + * (index = 255), whereas the top half grows from the beginning 2297 + * (index = 0). 2298 + */ 2299 + cfg->promote_lun_index = 0; 2300 + cfg->last_lun_index[0] = CXLFLASH_NUM_VLUNS/2 - 1; 2301 + cfg->last_lun_index[1] = CXLFLASH_NUM_VLUNS/2 - 1; 2302 + 2303 + cfg->dev_id = (struct pci_device_id *)dev_id; 2304 + cfg->mcctx = NULL; 2305 + 2306 + init_waitqueue_head(&cfg->tmf_waitq); 2307 + init_waitqueue_head(&cfg->limbo_waitq); 2308 + 2309 + INIT_WORK(&cfg->work_q, cxlflash_worker_thread); 2310 + cfg->lr_state = LINK_RESET_INVALID; 2311 + cfg->lr_port = -1; 2312 + mutex_init(&cfg->ctx_tbl_list_mutex); 2313 + mutex_init(&cfg->ctx_recovery_mutex); 2314 + INIT_LIST_HEAD(&cfg->ctx_err_recovery); 2315 + INIT_LIST_HEAD(&cfg->lluns); 2316 + 2317 + pci_set_drvdata(pdev, cfg); 2318 + 2319 + /* Use the special service provided to look up the physical 2320 + * PCI device, since we are called on the probe of the virtual 2321 + * PCI host bus (vphb) 2322 + */ 2323 + phys_dev = cxl_get_phys_dev(pdev); 2324 + if (!dev_is_pci(phys_dev)) { 2325 + pr_err("%s: not a pci dev\n", __func__); 2326 + rc = -ENODEV; 2327 + goto out_remove; 2328 + } 2329 + cfg->parent_dev = to_pci_dev(phys_dev); 2330 + 2331 + cfg->cxl_afu = cxl_pci_to_afu(pdev); 2332 + 2333 + rc = init_pci(cfg); 2334 + if (rc) { 2335 + dev_err(&pdev->dev, "%s: call to init_pci " 2336 + "failed rc=%d!\n", __func__, rc); 2337 + goto out_remove; 2338 + } 2339 + cfg->init_state = INIT_STATE_PCI; 2340 + 2341 + rc = init_afu(cfg); 2342 + if (rc) { 2343 + dev_err(&pdev->dev, "%s: call to init_afu " 2344 + "failed rc=%d!\n", __func__, rc); 2345 + goto out_remove; 2346 + } 2347 + cfg->init_state = INIT_STATE_AFU; 2348 + 2349 + 2350 + rc = init_scsi(cfg); 2351 + if (rc) { 2352 + dev_err(&pdev->dev, "%s: call to init_scsi " 2353 + "failed rc=%d!\n", 
__func__, rc); 2354 + goto out_remove; 2355 + } 2356 + cfg->init_state = INIT_STATE_SCSI; 2357 + 2358 + out: 2359 + pr_debug("%s: returning rc=%d\n", __func__, rc); 2360 + return rc; 2361 + 2362 + out_remove: 2363 + cxlflash_remove(pdev); 2364 + goto out; 2365 + } 2366 + 2367 + /** 2368 + * cxlflash_pci_error_detected() - called when a PCI error is detected 2369 + * @pdev: PCI device struct. 2370 + * @state: PCI channel state. 2371 + * 2372 + * Return: PCI_ERS_RESULT_NEED_RESET or PCI_ERS_RESULT_DISCONNECT 2373 + */ 2374 + static pci_ers_result_t cxlflash_pci_error_detected(struct pci_dev *pdev, 2375 + pci_channel_state_t state) 2376 + { 2377 + int rc = 0; 2378 + struct cxlflash_cfg *cfg = pci_get_drvdata(pdev); 2379 + struct device *dev = &cfg->dev->dev; 2380 + 2381 + dev_dbg(dev, "%s: pdev=%p state=%u\n", __func__, pdev, state); 2382 + 2383 + switch (state) { 2384 + case pci_channel_io_frozen: 2385 + cfg->state = STATE_LIMBO; 2386 + 2387 + /* Turn off legacy I/O */ 2388 + scsi_block_requests(cfg->host); 2389 + rc = cxlflash_mark_contexts_error(cfg); 2390 + if (unlikely(rc)) 2391 + dev_err(dev, "%s: Failed to mark user contexts!(%d)\n", 2392 + __func__, rc); 2393 + term_mc(cfg, UNDO_START); 2394 + stop_afu(cfg); 2395 + 2396 + return PCI_ERS_RESULT_NEED_RESET; 2397 + case pci_channel_io_perm_failure: 2398 + cfg->state = STATE_FAILTERM; 2399 + wake_up_all(&cfg->limbo_waitq); 2400 + scsi_unblock_requests(cfg->host); 2401 + return PCI_ERS_RESULT_DISCONNECT; 2402 + default: 2403 + break; 2404 + } 2405 + return PCI_ERS_RESULT_NEED_RESET; 2406 + } 2407 + 2408 + /** 2409 + * cxlflash_pci_slot_reset() - called when PCI slot has been reset 2410 + * @pdev: PCI device struct. 2411 + * 2412 + * This routine is called by the pci error recovery code after the PCI 2413 + * slot has been reset, just before we should resume normal operations. 
2414 + * 2415 + * Return: PCI_ERS_RESULT_RECOVERED or PCI_ERS_RESULT_DISCONNECT 2416 + */ 2417 + static pci_ers_result_t cxlflash_pci_slot_reset(struct pci_dev *pdev) 2418 + { 2419 + int rc = 0; 2420 + struct cxlflash_cfg *cfg = pci_get_drvdata(pdev); 2421 + struct device *dev = &cfg->dev->dev; 2422 + 2423 + dev_dbg(dev, "%s: pdev=%p\n", __func__, pdev); 2424 + 2425 + rc = init_afu(cfg); 2426 + if (unlikely(rc)) { 2427 + dev_err(dev, "%s: EEH recovery failed! (%d)\n", __func__, rc); 2428 + return PCI_ERS_RESULT_DISCONNECT; 2429 + } 2430 + 2431 + return PCI_ERS_RESULT_RECOVERED; 2432 + } 2433 + 2434 + /** 2435 + * cxlflash_pci_resume() - called when normal operation can resume 2436 + * @pdev: PCI device struct 2437 + */ 2438 + static void cxlflash_pci_resume(struct pci_dev *pdev) 2439 + { 2440 + struct cxlflash_cfg *cfg = pci_get_drvdata(pdev); 2441 + struct device *dev = &cfg->dev->dev; 2442 + 2443 + dev_dbg(dev, "%s: pdev=%p\n", __func__, pdev); 2444 + 2445 + cfg->state = STATE_NORMAL; 2446 + wake_up_all(&cfg->limbo_waitq); 2447 + scsi_unblock_requests(cfg->host); 2448 + } 2449 + 2450 + static const struct pci_error_handlers cxlflash_err_handler = { 2451 + .error_detected = cxlflash_pci_error_detected, 2452 + .slot_reset = cxlflash_pci_slot_reset, 2453 + .resume = cxlflash_pci_resume, 2454 + }; 2455 + 2456 + /* 2457 + * PCI device structure 2458 + */ 2459 + static struct pci_driver cxlflash_driver = { 2460 + .name = CXLFLASH_NAME, 2461 + .id_table = cxlflash_pci_table, 2462 + .probe = cxlflash_probe, 2463 + .remove = cxlflash_remove, 2464 + .err_handler = &cxlflash_err_handler, 2465 + }; 2466 + 2467 + /** 2468 + * init_cxlflash() - module entry point 2469 + * 2470 + * Return: 0 on success / non-zero on failure 2471 + */ 2472 + static int __init init_cxlflash(void) 2473 + { 2474 + pr_info("%s: IBM Power CXL Flash Adapter: %s\n", 2475 + __func__, CXLFLASH_DRIVER_DATE); 2476 + 2477 + cxlflash_list_init(); 2478 + 2479 + return pci_register_driver(&cxlflash_driver); 
2480 + } 2481 + 2482 + /** 2483 + * exit_cxlflash() - module exit point 2484 + */ 2485 + static void __exit exit_cxlflash(void) 2486 + { 2487 + cxlflash_term_global_luns(); 2488 + cxlflash_free_errpage(); 2489 + 2490 + pci_unregister_driver(&cxlflash_driver); 2491 + } 2492 + 2493 + module_init(init_cxlflash); 2494 + module_exit(exit_cxlflash);
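The command-room accounting in cxlflash_send_cmd() above keeps a cached credit counter (afu->room) and only touches the MMIO cmd_room register when the cache runs dry. The pattern can be modeled in plain C11 atomics. This is an illustrative sketch, not driver code: mmio_cmd_room, read_cmd_room() and ROOM_RETRY_CNT are stand-ins for the AFU register, readq_be() and MC_ROOM_RETRY_CNT.

```c
#include <stdatomic.h>
#include <stdint.h>

#define ROOM_RETRY_CNT 10	/* stand-in for MC_ROOM_RETRY_CNT */

/* Hypothetical stand-in for the AFU's MMIO cmd_room register. */
static uint64_t mmio_cmd_room = 2;

static uint64_t read_cmd_room(void)
{
	return mmio_cmd_room;	/* models readq_be(&afu->host_map->cmd_room) */
}

/* Cached credit counter, analogous to afu->room. */
static atomic_long cached_room;

/* Like the kernel's atomic64_dec_if_positive(): returns the decremented
 * value, or a negative number (without decrementing) if no credit is left. */
static long dec_if_positive(atomic_long *v)
{
	long old = atomic_load(v);

	while (old > 0)
		if (atomic_compare_exchange_weak(v, &old, old - 1))
			break;
	return old - 1;
}

static int send_cmd(void)
{
	int nretry = 0;
	long newval;

retry:
	newval = dec_if_positive(&cached_room);
	if (newval == 0) {
		/* Used the last cached credit: refresh the cache from "MMIO"
		 * so later senders do not pay the register read. */
		do {
			uint64_t room = read_cmd_room();

			atomic_store(&cached_room, (long)room);
			if (room)
				goto write_ioarrin;
		} while (nretry++ < ROOM_RETRY_CNT);
		return -1;		/* no command room */
	} else if (newval < 0) {
		/* Lost a race; another sender may refresh cached_room. */
		if (nretry++ < ROOM_RETRY_CNT)
			goto retry;
		return -1;
	}

write_ioarrin:
	mmio_cmd_room--;	/* models the IOARRIN write consuming a slot */
	return 0;
}
```

As in the driver, the `newval == 0` path re-reads the register only after the last cached credit is spent, so the MMIO cost is amortized over many sends.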
+108
drivers/scsi/cxlflash/main.h
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #ifndef _CXLFLASH_MAIN_H 16 + #define _CXLFLASH_MAIN_H 17 + 18 + #include <linux/list.h> 19 + #include <linux/types.h> 20 + #include <scsi/scsi.h> 21 + #include <scsi/scsi_device.h> 22 + 23 + #define CXLFLASH_NAME "cxlflash" 24 + #define CXLFLASH_ADAPTER_NAME "IBM POWER CXL Flash Adapter" 25 + #define CXLFLASH_DRIVER_DATE "(August 13, 2015)" 26 + 27 + #define PCI_DEVICE_ID_IBM_CORSA 0x04F0 28 + #define CXLFLASH_SUBS_DEV_ID 0x04F0 29 + 30 + /* Since there is only one target, make it 0 */ 31 + #define CXLFLASH_TARGET 0 32 + #define CXLFLASH_MAX_CDB_LEN 16 33 + 34 + /* Really only one target per bus since the Texan is directly attached */ 35 + #define CXLFLASH_MAX_NUM_TARGETS_PER_BUS 1 36 + #define CXLFLASH_MAX_NUM_LUNS_PER_TARGET 65536 37 + 38 + #define CXLFLASH_PCI_ERROR_RECOVERY_TIMEOUT (120 * HZ) 39 + 40 + #define NUM_FC_PORTS CXLFLASH_NUM_FC_PORTS /* ports per AFU */ 41 + 42 + /* FC defines */ 43 + #define FC_MTIP_CMDCONFIG 0x010 44 + #define FC_MTIP_STATUS 0x018 45 + 46 + #define FC_PNAME 0x300 47 + #define FC_CONFIG 0x320 48 + #define FC_CONFIG2 0x328 49 + #define FC_STATUS 0x330 50 + #define FC_ERROR 0x380 51 + #define FC_ERRCAP 0x388 52 + #define FC_ERRMSK 0x390 53 + #define FC_CNT_CRCERR 0x538 54 + #define FC_CRC_THRESH 0x580 55 + 56 + #define FC_MTIP_CMDCONFIG_ONLINE 0x20ULL 57 + #define FC_MTIP_CMDCONFIG_OFFLINE 0x40ULL 58 + 59 + #define FC_MTIP_STATUS_MASK 0x30ULL 60 + #define FC_MTIP_STATUS_ONLINE 0x20ULL 61 + #define 
FC_MTIP_STATUS_OFFLINE 0x10ULL 62 + 63 + /* TIMEOUT and RETRY definitions */ 64 + 65 + /* AFU command timeout values */ 66 + #define MC_AFU_SYNC_TIMEOUT 5 /* 5 secs */ 67 + 68 + /* AFU command room retry limit */ 69 + #define MC_ROOM_RETRY_CNT 10 70 + 71 + /* FC CRC clear periodic timer */ 72 + #define MC_CRC_THRESH 100 /* threshold in 5 mins */ 73 + 74 + #define FC_PORT_STATUS_RETRY_CNT 100 /* 100 100ms retries = 10 seconds */ 75 + #define FC_PORT_STATUS_RETRY_INTERVAL_US 100000 /* microseconds */ 76 + 77 + /* VPD defines */ 78 + #define CXLFLASH_VPD_LEN 256 79 + #define WWPN_LEN 16 80 + #define WWPN_BUF_LEN (WWPN_LEN + 1) 81 + 82 + enum undo_level { 83 + RELEASE_CONTEXT = 0, 84 + FREE_IRQ, 85 + UNMAP_ONE, 86 + UNMAP_TWO, 87 + UNMAP_THREE, 88 + UNDO_START 89 + }; 90 + 91 + struct dev_dependent_vals { 92 + u64 max_sectors; 93 + }; 94 + 95 + struct asyc_intr_info { 96 + u64 status; 97 + char *desc; 98 + u8 port; 99 + u8 action; 100 + #define CLR_FC_ERROR 0x01 101 + #define LINK_RESET 0x02 102 + }; 103 + 104 + #ifndef CONFIG_CXL_EEH 105 + #define cxl_perst_reloads_same_image(_a, _b) do { } while (0) 106 + #endif 107 + 108 + #endif /* _CXLFLASH_MAIN_H */
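The enum undo_level above drives the staged unwind in init_mc(): each failure point records the deepest setup step that must be undone, and term_mc() tears down from that level back to RELEASE_CONTEXT. term_mc() itself is not in this hunk, so the following is a hedged sketch of the idiom (a switch with deliberate fallthrough), with hypothetical step names; only the enum values come from main.h.

```c
#include <stdio.h>

/* Teardown ladder mirroring enum undo_level in main.h. */
enum undo_level {
	RELEASE_CONTEXT = 0,
	FREE_IRQ,
	UNMAP_ONE,
	UNMAP_TWO,
	UNMAP_THREE,
	UNDO_START
};

static int undo_count;	/* counts teardown steps performed */

static void undo_step(const char *what)
{
	printf("undo: %s\n", what);
	undo_count++;
}

/* Hypothetical term_mc()-style unwind: the level names the first step to
 * undo, and the switch falls through so every earlier setup step is also
 * undone, in reverse setup order. */
static void term_mc_sketch(enum undo_level level)
{
	switch (level) {
	case UNDO_START:
		undo_step("stop master context");
		/* fall through */
	case UNMAP_THREE:
		undo_step("unmap AFU IRQ 3");
		/* fall through */
	case UNMAP_TWO:
		undo_step("unmap AFU IRQ 2");
		/* fall through */
	case UNMAP_ONE:
		undo_step("unmap AFU IRQ 1");
		/* fall through */
	case FREE_IRQ:
		undo_step("free AFU IRQs");
		/* fall through */
	case RELEASE_CONTEXT:
		undo_step("release context");
		break;
	}
}
```

This matches how init_mc() uses the levels: a failure mapping IRQ 2 sets UNMAP_ONE, so IRQ 1 is unmapped, the IRQs freed, and the context released, while earlier failures start lower on the ladder.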
+472
drivers/scsi/cxlflash/sislite.h
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #ifndef _SISLITE_H 16 + #define _SISLITE_H 17 + 18 + #include <linux/types.h> 19 + 20 + typedef u16 ctx_hndl_t; 21 + typedef u32 res_hndl_t; 22 + 23 + #define SIZE_4K 4096 24 + #define SIZE_64K 65536 25 + 26 + /* 27 + * IOARCB: 64 bytes, min 16 byte alignment required, host native endianness 28 + * except for SCSI CDB which remains big endian per SCSI standards. 29 + */ 30 + struct sisl_ioarcb { 31 + u16 ctx_id; /* ctx_hndl_t */ 32 + u16 req_flags; 33 + #define SISL_REQ_FLAGS_RES_HNDL 0x8000U /* bit 0 (MSB) */ 34 + #define SISL_REQ_FLAGS_PORT_LUN_ID 0x0000U 35 + 36 + #define SISL_REQ_FLAGS_SUP_UNDERRUN 0x4000U /* bit 1 */ 37 + 38 + #define SISL_REQ_FLAGS_TIMEOUT_SECS 0x0000U /* bits 8,9 */ 39 + #define SISL_REQ_FLAGS_TIMEOUT_MSECS 0x0040U 40 + #define SISL_REQ_FLAGS_TIMEOUT_USECS 0x0080U 41 + #define SISL_REQ_FLAGS_TIMEOUT_CYCLES 0x00C0U 42 + 43 + #define SISL_REQ_FLAGS_TMF_CMD 0x0004u /* bit 13 */ 44 + 45 + #define SISL_REQ_FLAGS_AFU_CMD 0x0002U /* bit 14 */ 46 + 47 + #define SISL_REQ_FLAGS_HOST_WRITE 0x0001U /* bit 15 (LSB) */ 48 + #define SISL_REQ_FLAGS_HOST_READ 0x0000U 49 + 50 + union { 51 + u32 res_hndl; /* res_hndl_t */ 52 + u32 port_sel; /* this is a selection mask: 53 + * 0x1 -> port#0 can be selected, 54 + * 0x2 -> port#1 can be selected. 55 + * Can be bitwise ORed. 
56 + */ 57 + }; 58 + u64 lun_id; 59 + u32 data_len; /* 4K for read/write */ 60 + u32 ioadl_len; 61 + union { 62 + u64 data_ea; /* min 16 byte aligned */ 63 + u64 ioadl_ea; 64 + }; 65 + u8 msi; /* LISN to send on RRQ write */ 66 + #define SISL_MSI_CXL_PFAULT 0 /* reserved for CXL page faults */ 67 + #define SISL_MSI_SYNC_ERROR 1 /* recommended for AFU sync error */ 68 + #define SISL_MSI_RRQ_UPDATED 2 /* recommended for IO completion */ 69 + #define SISL_MSI_ASYNC_ERROR 3 /* master only - for AFU async error */ 70 + 71 + u8 rrq; /* 0 for a single RRQ */ 72 + u16 timeout; /* in units specified by req_flags */ 73 + u32 rsvd1; 74 + u8 cdb[16]; /* must be in big endian */ 75 + struct scsi_cmnd *scp; 76 + } __packed; 77 + 78 + struct sisl_rc { 79 + u8 flags; 80 + #define SISL_RC_FLAGS_SENSE_VALID 0x80U 81 + #define SISL_RC_FLAGS_FCP_RSP_CODE_VALID 0x40U 82 + #define SISL_RC_FLAGS_OVERRUN 0x20U 83 + #define SISL_RC_FLAGS_UNDERRUN 0x10U 84 + 85 + u8 afu_rc; 86 + #define SISL_AFU_RC_RHT_INVALID 0x01U /* user error */ 87 + #define SISL_AFU_RC_RHT_UNALIGNED 0x02U /* should never happen */ 88 + #define SISL_AFU_RC_RHT_OUT_OF_BOUNDS 0x03u /* user error */ 89 + #define SISL_AFU_RC_RHT_DMA_ERR 0x04u /* see afu_extra 90 + may retry if afu_retry is off 91 + possible on master exit 92 + */ 93 + #define SISL_AFU_RC_RHT_RW_PERM 0x05u /* no RW perms, user error */ 94 + #define SISL_AFU_RC_LXT_UNALIGNED 0x12U /* should never happen */ 95 + #define SISL_AFU_RC_LXT_OUT_OF_BOUNDS 0x13u /* user error */ 96 + #define SISL_AFU_RC_LXT_DMA_ERR 0x14u /* see afu_extra 97 + may retry if afu_retry is off 98 + possible on master exit 99 + */ 100 + #define SISL_AFU_RC_LXT_RW_PERM 0x15u /* no RW perms, user error */ 101 + 102 + #define SISL_AFU_RC_NOT_XLATE_HOST 0x1au /* possible if master exited */ 103 + 104 + /* NO_CHANNELS means the FC ports selected by dest_port in 105 + * IOARCB or in the LXT entry are down when the AFU tried to select 106 + * a FC port. 
If the port went down on an active IO, it will set 107 + * fc_rc to 0x54 (NOLOGI) or 0x57 (LINKDOWN) instead. 108 + */ 109 + #define SISL_AFU_RC_NO_CHANNELS 0x20U /* see afu_extra, may retry */ 110 + #define SISL_AFU_RC_CAP_VIOLATION 0x21U /* either user error or 111 + afu reset/master restart 112 + */ 113 + #define SISL_AFU_RC_OUT_OF_DATA_BUFS 0x30U /* always retry */ 114 + #define SISL_AFU_RC_DATA_DMA_ERR 0x31U /* see afu_extra 115 + may retry if afu_retry is off 116 + */ 117 + 118 + u8 scsi_rc; /* SCSI status byte, retry as appropriate */ 119 + #define SISL_SCSI_RC_CHECK 0x02U 120 + #define SISL_SCSI_RC_BUSY 0x08u 121 + 122 + u8 fc_rc; /* retry */ 123 + /* 124 + * We should only see fc_rc=0x57 (LINKDOWN) or 0x54 (NOLOGI) for 125 + * commands that are in flight when a link goes down or is logged out. 126 + * If the link is down or logged out before AFU selects the port, either 127 + * it will choose the other port or we will get afu_rc=0x20 (no_channel) 128 + * if there is no valid port to use. 129 + * 130 + * ABORTPEND/ABORTOK/ABORTFAIL/TGTABORT can be retried, typically these 131 + * would happen if a frame is dropped and something times out. 132 + * NOLOGI or LINKDOWN can be retried if the other port is up. 133 + * RESIDERR can be retried as well. 134 + * 135 + * ABORTFAIL might indicate that lots of frames are getting CRC errors. 136 + * So it may be retried once and the link reset if it happens again. 137 + * The link can also be reset on the CRC error threshold interrupt. 
138 + */ 139 + #define SISL_FC_RC_ABORTPEND 0x52 /* exchange timeout or abort request */ 140 + #define SISL_FC_RC_WRABORTPEND 0x53 /* due to write XFER_RDY invalid */ 141 + #define SISL_FC_RC_NOLOGI 0x54 /* port not logged in, in-flight cmds */ 142 + #define SISL_FC_RC_NOEXP 0x55 /* FC protocol error or HW bug */ 143 + #define SISL_FC_RC_INUSE 0x56 /* tag already in use, HW bug */ 144 + #define SISL_FC_RC_LINKDOWN 0x57 /* link down, in-flight cmds */ 145 + #define SISL_FC_RC_ABORTOK 0x58 /* pending abort completed w/success */ 146 + #define SISL_FC_RC_ABORTFAIL 0x59 /* pending abort completed w/fail */ 147 + #define SISL_FC_RC_RESID 0x5A /* ioasa underrun/overrun flags set */ 148 + #define SISL_FC_RC_RESIDERR 0x5B /* actual data len does not match SCSI 149 + reported len, possibly due to dropped 150 + frames */ 151 + #define SISL_FC_RC_TGTABORT 0x5C /* command aborted by target */ 152 + }; 153 + 154 + #define SISL_SENSE_DATA_LEN 20 /* Sense data length */ 155 + 156 + /* 157 + * IOASA: 64 bytes & must follow IOARCB, min 16 byte alignment required, 158 + * host native endianness 159 + */ 160 + struct sisl_ioasa { 161 + union { 162 + struct sisl_rc rc; 163 + u32 ioasc; 164 + #define SISL_IOASC_GOOD_COMPLETION 0x00000000U 165 + }; 166 + u32 resid; 167 + u8 port; 168 + u8 afu_extra; 169 + /* when afu_rc=0x04, 0x14, 0x31 (_xxx_DMA_ERR): 170 + * afu_extra contains PSL response code. Useful codes are: 171 + */ 172 + #define SISL_AFU_DMA_ERR_PAGE_IN 0x0A /* AFU_retry_on_pagein Action 173 + * Enabled N/A 174 + * Disabled retry 175 + */ 176 + #define SISL_AFU_DMA_ERR_INVALID_EA 0x0B /* this is a hard error 177 + * afu_rc Implies 178 + * 0x04, 0x14 master exit. 179 + * 0x31 user error. 180 + */ 181 + /* when afu rc=0x20 (no channels): 182 + * afu_extra bits [4:5]: available portmask, [6:7]: requested portmask. 
183 + */ 184 + #define SISL_AFU_NO_CLANNELS_AMASK(afu_extra) (((afu_extra) & 0x0C) >> 2) 185 + #define SISL_AFU_NO_CLANNELS_RMASK(afu_extra) ((afu_extra) & 0x03) 186 + 187 + u8 scsi_extra; 188 + u8 fc_extra; 189 + u8 sense_data[SISL_SENSE_DATA_LEN]; 190 + 191 + /* These fields are defined by the SISlite architecture for the 192 + * host to use as they see fit for their implementation. 193 + */ 194 + union { 195 + u64 host_use[4]; 196 + u8 host_use_b[32]; 197 + }; 198 + } __packed; 199 + 200 + #define SISL_RESP_HANDLE_T_BIT 0x1ULL /* Toggle bit */ 201 + 202 + /* MMIO space is required to support only 64-bit access */ 203 + 204 + /* 205 + * This AFU has two mechanisms to deal with endian-ness. 206 + * One is a global configuration (in the afu_config) register 207 + * below that specifies the endian-ness of the host. 208 + * The other is a per context (i.e. application) specification 209 + * controlled by the endian_ctrl field here. Since the master 210 + * context is one such application the master context's 211 + * endian-ness is set to be the same as the host. 212 + * 213 + * As per the SISlite spec, the MMIO registers are always 214 + * big endian. 215 + */ 216 + #define SISL_ENDIAN_CTRL_BE 0x8000000000000080ULL 217 + #define SISL_ENDIAN_CTRL_LE 0x0000000000000000ULL 218 + 219 + #ifdef __BIG_ENDIAN 220 + #define SISL_ENDIAN_CTRL SISL_ENDIAN_CTRL_BE 221 + #else 222 + #define SISL_ENDIAN_CTRL SISL_ENDIAN_CTRL_LE 223 + #endif 224 + 225 + /* per context host transport MMIO */ 226 + struct sisl_host_map { 227 + __be64 endian_ctrl; /* Per context Endian Control. The AFU will 228 + * operate on whatever the context is of the 229 + * host application. 230 + */ 231 + 232 + __be64 intr_status; /* this sends LISN# programmed in ctx_ctrl. 233 + * Only recovery in a PERM_ERR is a context 234 + * exit since there is no way to tell which 235 + * command caused the error. 
236 + */ 237 + #define SISL_ISTATUS_PERM_ERR_CMDROOM 0x0010ULL /* b59, user error */ 238 + #define SISL_ISTATUS_PERM_ERR_RCB_READ 0x0008ULL /* b60, user error */ 239 + #define SISL_ISTATUS_PERM_ERR_SA_WRITE 0x0004ULL /* b61, user error */ 240 + #define SISL_ISTATUS_PERM_ERR_RRQ_WRITE 0x0002ULL /* b62, user error */ 241 + /* Page in wait accessing RCB/IOASA/RRQ is reported in b63. 242 + * Same error in data/LXT/RHT access is reported via IOASA. 243 + */ 244 + #define SISL_ISTATUS_TEMP_ERR_PAGEIN 0x0001ULL /* b63, can be generated 245 + * only when AFU auto 246 + * retry is disabled. 247 + * If user can determine 248 + * the command that 249 + * caused the error, it 250 + * can be retried. 251 + */ 252 + #define SISL_ISTATUS_UNMASK (0x001FULL) /* 1 means unmasked */ 253 + #define SISL_ISTATUS_MASK ~(SISL_ISTATUS_UNMASK) /* 1 means masked */ 254 + 255 + __be64 intr_clear; 256 + __be64 intr_mask; 257 + __be64 ioarrin; /* only write what cmd_room permits */ 258 + __be64 rrq_start; /* start & end are both inclusive */ 259 + __be64 rrq_end; /* write sequence: start followed by end */ 260 + __be64 cmd_room; 261 + __be64 ctx_ctrl; /* least significant byte or b56:63 is LISN# */ 262 + __be64 mbox_w; /* restricted use */ 263 + }; 264 + 265 + /* per context provisioning & control MMIO */ 266 + struct sisl_ctrl_map { 267 + __be64 rht_start; 268 + __be64 rht_cnt_id; 269 + /* both cnt & ctx_id args must be ULL */ 270 + #define SISL_RHT_CNT_ID(cnt, ctx_id) (((cnt) << 48) | ((ctx_id) << 32)) 271 + 272 + __be64 ctx_cap; /* afu_rc below is when the capability is violated */ 273 + #define SISL_CTX_CAP_PROXY_ISSUE 0x8000000000000000ULL /* afu_rc 0x21 */ 274 + #define SISL_CTX_CAP_REAL_MODE 0x4000000000000000ULL /* afu_rc 0x21 */ 275 + #define SISL_CTX_CAP_HOST_XLATE 0x2000000000000000ULL /* afu_rc 0x1a */ 276 + #define SISL_CTX_CAP_PROXY_TARGET 0x1000000000000000ULL /* afu_rc 0x21 */ 277 + #define SISL_CTX_CAP_AFU_CMD 0x0000000000000008ULL /* afu_rc 0x21 */ 278 + #define 
SISL_CTX_CAP_GSCSI_CMD 0x0000000000000004ULL /* afu_rc 0x21 */ 279 + #define SISL_CTX_CAP_WRITE_CMD 0x0000000000000002ULL /* afu_rc 0x21 */ 280 + #define SISL_CTX_CAP_READ_CMD 0x0000000000000001ULL /* afu_rc 0x21 */ 281 + __be64 mbox_r; 282 + }; 283 + 284 + /* single copy global regs */ 285 + struct sisl_global_regs { 286 + __be64 aintr_status; 287 + /* In cxlflash, each FC port/link gets a byte of status */ 288 + #define SISL_ASTATUS_FC0_OTHER 0x8000ULL /* b48, other err, 289 + FC_ERRCAP[31:20] */ 290 + #define SISL_ASTATUS_FC0_LOGO 0x4000ULL /* b49, target sent FLOGI/PLOGI/LOGO 291 + while logged in */ 292 + #define SISL_ASTATUS_FC0_CRC_T 0x2000ULL /* b50, CRC threshold exceeded */ 293 + #define SISL_ASTATUS_FC0_LOGI_R 0x1000ULL /* b51, login state machine timed out 294 + and retrying */ 295 + #define SISL_ASTATUS_FC0_LOGI_F 0x0800ULL /* b52, login failed, 296 + FC_ERROR[19:0] */ 297 + #define SISL_ASTATUS_FC0_LOGI_S 0x0400ULL /* b53, login succeeded */ 298 + #define SISL_ASTATUS_FC0_LINK_DN 0x0200ULL /* b54, link online to offline */ 299 + #define SISL_ASTATUS_FC0_LINK_UP 0x0100ULL /* b55, link offline to online */ 300 + 301 + #define SISL_ASTATUS_FC1_OTHER 0x0080ULL /* b56 */ 302 + #define SISL_ASTATUS_FC1_LOGO 0x0040ULL /* b57 */ 303 + #define SISL_ASTATUS_FC1_CRC_T 0x0020ULL /* b58 */ 304 + #define SISL_ASTATUS_FC1_LOGI_R 0x0010ULL /* b59 */ 305 + #define SISL_ASTATUS_FC1_LOGI_F 0x0008ULL /* b60 */ 306 + #define SISL_ASTATUS_FC1_LOGI_S 0x0004ULL /* b61 */ 307 + #define SISL_ASTATUS_FC1_LINK_DN 0x0002ULL /* b62 */ 308 + #define SISL_ASTATUS_FC1_LINK_UP 0x0001ULL /* b63 */ 309 + 310 + #define SISL_FC_INTERNAL_UNMASK 0x0000000300000000ULL /* 1 means unmasked */ 311 + #define SISL_FC_INTERNAL_MASK ~(SISL_FC_INTERNAL_UNMASK) 312 + #define SISL_FC_INTERNAL_SHIFT 32 313 + 314 + #define SISL_ASTATUS_UNMASK 0xFFFFULL /* 1 means unmasked */ 315 + #define SISL_ASTATUS_MASK ~(SISL_ASTATUS_UNMASK) /* 1 means masked */ 316 + 317 + __be64 aintr_clear; 318 + __be64 
aintr_mask; 319 + __be64 afu_ctrl; 320 + __be64 afu_hb; 321 + __be64 afu_scratch_pad; 322 + __be64 afu_port_sel; 323 + #define SISL_AFUCONF_AR_IOARCB 0x4000ULL 324 + #define SISL_AFUCONF_AR_LXT 0x2000ULL 325 + #define SISL_AFUCONF_AR_RHT 0x1000ULL 326 + #define SISL_AFUCONF_AR_DATA 0x0800ULL 327 + #define SISL_AFUCONF_AR_RSRC 0x0400ULL 328 + #define SISL_AFUCONF_AR_IOASA 0x0200ULL 329 + #define SISL_AFUCONF_AR_RRQ 0x0100ULL 330 + /* Aggregate all Auto Retry Bits */ 331 + #define SISL_AFUCONF_AR_ALL (SISL_AFUCONF_AR_IOARCB|SISL_AFUCONF_AR_LXT| \ 332 + SISL_AFUCONF_AR_RHT|SISL_AFUCONF_AR_DATA| \ 333 + SISL_AFUCONF_AR_RSRC|SISL_AFUCONF_AR_IOASA| \ 334 + SISL_AFUCONF_AR_RRQ) 335 + #ifdef __BIG_ENDIAN 336 + #define SISL_AFUCONF_ENDIAN 0x0000ULL 337 + #else 338 + #define SISL_AFUCONF_ENDIAN 0x0020ULL 339 + #endif 340 + #define SISL_AFUCONF_MBOX_CLR_READ 0x0010ULL 341 + __be64 afu_config; 342 + __be64 rsvd[0xf8]; 343 + __be64 afu_version; 344 + __be64 interface_version; 345 + }; 346 + 347 + #define CXLFLASH_NUM_FC_PORTS 2 348 + #define CXLFLASH_MAX_CONTEXT 512 /* how many contexts per afu */ 349 + #define CXLFLASH_NUM_VLUNS 512 350 + 351 + struct sisl_global_map { 352 + union { 353 + struct sisl_global_regs regs; 354 + char page0[SIZE_4K]; /* page 0 */ 355 + }; 356 + 357 + char page1[SIZE_4K]; /* page 1 */ 358 + 359 + /* pages 2 & 3 */ 360 + __be64 fc_regs[CXLFLASH_NUM_FC_PORTS][CXLFLASH_NUM_VLUNS]; 361 + 362 + /* pages 4 & 5 (lun tbl) */ 363 + __be64 fc_port[CXLFLASH_NUM_FC_PORTS][CXLFLASH_NUM_VLUNS]; 364 + 365 + }; 366 + 367 + /* 368 + * CXL Flash Memory Map 369 + * 370 + * +-------------------------------+ 371 + * | 512 * 64 KB User MMIO | 372 + * | (per context) | 373 + * | User Accessible | 374 + * +-------------------------------+ 375 + * | 512 * 128 B per context | 376 + * | Provisioning and Control | 377 + * | Trusted Process accessible | 378 + * +-------------------------------+ 379 + * | 64 KB Global | 380 + * | Trusted Process accessible | 381 + * 
+-------------------------------+ 382 + */ 383 + struct cxlflash_afu_map { 384 + union { 385 + struct sisl_host_map host; 386 + char harea[SIZE_64K]; /* 64KB each */ 387 + } hosts[CXLFLASH_MAX_CONTEXT]; 388 + 389 + union { 390 + struct sisl_ctrl_map ctrl; 391 + char carea[cache_line_size()]; /* 128B each */ 392 + } ctrls[CXLFLASH_MAX_CONTEXT]; 393 + 394 + union { 395 + struct sisl_global_map global; 396 + char garea[SIZE_64K]; /* 64KB single block */ 397 + }; 398 + }; 399 + 400 + /* 401 + * LXT - LBA Translation Table 402 + * LXT control blocks 403 + */ 404 + struct sisl_lxt_entry { 405 + u64 rlba_base; /* bits 0:47 is base 406 + * b48:55 is lun index 407 + * b58:59 is write & read perms 408 + * (if no perm, afu_rc=0x15) 409 + * b60:63 is port_sel mask 410 + */ 411 + }; 412 + 413 + /* 414 + * RHT - Resource Handle Table 415 + * Per the SISlite spec, RHT entries are to be 16-byte aligned 416 + */ 417 + struct sisl_rht_entry { 418 + struct sisl_lxt_entry *lxt_start; 419 + u32 lxt_cnt; 420 + u16 rsvd; 421 + u8 fp; /* format & perm nibbles. 422 + * (if no perm, afu_rc=0x05) 423 + */ 424 + u8 nmask; 425 + } __packed __aligned(16); 426 + 427 + struct sisl_rht_entry_f1 { 428 + u64 lun_id; 429 + union { 430 + struct { 431 + u8 valid; 432 + u8 rsvd[5]; 433 + u8 fp; 434 + u8 port_sel; 435 + }; 436 + 437 + u64 dw; 438 + }; 439 + } __packed __aligned(16); 440 + 441 + /* make the fp byte */ 442 + #define SISL_RHT_FP(fmt, perm) (((fmt) << 4) | (perm)) 443 + 444 + /* make the fp byte for a clone from a source fp and clone flags 445 + * flags must be only 2 LSB bits. 
446 + */ 447 + #define SISL_RHT_FP_CLONE(src_fp, cln_flags) ((src_fp) & (0xFC | (cln_flags))) 448 + 449 + #define RHT_PERM_READ 0x01U 450 + #define RHT_PERM_WRITE 0x02U 451 + #define RHT_PERM_RW (RHT_PERM_READ | RHT_PERM_WRITE) 452 + 453 + /* extract the perm bits from a fp */ 454 + #define SISL_RHT_PERM(fp) ((fp) & RHT_PERM_RW) 455 + 456 + #define PORT0 0x01U 457 + #define PORT1 0x02U 458 + #define BOTH_PORTS (PORT0 | PORT1) 459 + 460 + /* AFU Sync Mode byte */ 461 + #define AFU_LW_SYNC 0x0U 462 + #define AFU_HW_SYNC 0x1U 463 + #define AFU_GSYNC 0x2U 464 + 465 + /* Special Task Management Function CDB */ 466 + #define TMF_LUN_RESET 0x1U 467 + #define TMF_CLEAR_ACA 0x2U 468 + 469 + 470 + #define SISLITE_MAX_WS_BLOCKS 512 471 + 472 + #endif /* _SISLITE_H */
+2084
drivers/scsi/cxlflash/superpipe.c
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #include <linux/delay.h> 16 + #include <linux/file.h> 17 + #include <linux/syscalls.h> 18 + #include <misc/cxl.h> 19 + #include <asm/unaligned.h> 20 + 21 + #include <scsi/scsi.h> 22 + #include <scsi/scsi_host.h> 23 + #include <scsi/scsi_cmnd.h> 24 + #include <scsi/scsi_eh.h> 25 + #include <uapi/scsi/cxlflash_ioctl.h> 26 + 27 + #include "sislite.h" 28 + #include "common.h" 29 + #include "vlun.h" 30 + #include "superpipe.h" 31 + 32 + struct cxlflash_global global; 33 + 34 + /** 35 + * marshal_rele_to_resize() - translate release to resize structure 36 + * @release: Source structure from which to translate/copy. 37 + * @resize: Destination structure for the translate/copy. 38 + */ 39 + static void marshal_rele_to_resize(struct dk_cxlflash_release *release, 40 + struct dk_cxlflash_resize *resize) 41 + { 42 + resize->hdr = release->hdr; 43 + resize->context_id = release->context_id; 44 + resize->rsrc_handle = release->rsrc_handle; 45 + } 46 + 47 + /** 48 + * marshal_det_to_rele() - translate detach to release structure 49 + * @detach: Source structure from which to translate/copy. 50 + * @release: Destination structure for the translate/copy. 
51 + */ 52 + static void marshal_det_to_rele(struct dk_cxlflash_detach *detach, 53 + struct dk_cxlflash_release *release) 54 + { 55 + release->hdr = detach->hdr; 56 + release->context_id = detach->context_id; 57 + } 58 + 59 + /** 60 + * cxlflash_free_errpage() - frees resources associated with global error page 61 + */ 62 + void cxlflash_free_errpage(void) 63 + { 64 + 65 + mutex_lock(&global.mutex); 66 + if (global.err_page) { 67 + __free_page(global.err_page); 68 + global.err_page = NULL; 69 + } 70 + mutex_unlock(&global.mutex); 71 + } 72 + 73 + /** 74 + * cxlflash_stop_term_user_contexts() - stops/terminates known user contexts 75 + * @cfg: Internal structure associated with the host. 76 + * 77 + * When the host needs to go down, all users must be quiesced and their 78 + * memory freed. This is accomplished by putting the contexts in error 79 + * state which will notify the user and let them 'drive' the tear-down. 80 + * Meanwhile, this routine camps until all user contexts have been removed. 81 + */ 82 + void cxlflash_stop_term_user_contexts(struct cxlflash_cfg *cfg) 83 + { 84 + struct device *dev = &cfg->dev->dev; 85 + int i, found; 86 + 87 + cxlflash_mark_contexts_error(cfg); 88 + 89 + while (true) { 90 + found = false; 91 + 92 + for (i = 0; i < MAX_CONTEXT; i++) 93 + if (cfg->ctx_tbl[i]) { 94 + found = true; 95 + break; 96 + } 97 + 98 + if (!found && list_empty(&cfg->ctx_err_recovery)) 99 + return; 100 + 101 + dev_dbg(dev, "%s: Wait for user contexts to quiesce...\n", 102 + __func__); 103 + wake_up_all(&cfg->limbo_waitq); 104 + ssleep(1); 105 + } 106 + } 107 + 108 + /** 109 + * find_error_context() - locates a context by cookie on the error recovery list 110 + * @cfg: Internal structure associated with the host. 111 + * @rctxid: Desired context by id. 112 + * @file: Desired context by file. 
113 + * 114 + * Return: Found context on success, NULL on failure 115 + */ 116 + static struct ctx_info *find_error_context(struct cxlflash_cfg *cfg, u64 rctxid, 117 + struct file *file) 118 + { 119 + struct ctx_info *ctxi; 120 + 121 + list_for_each_entry(ctxi, &cfg->ctx_err_recovery, list) 122 + if ((ctxi->ctxid == rctxid) || (ctxi->file == file)) 123 + return ctxi; 124 + 125 + return NULL; 126 + } 127 + 128 + /** 129 + * get_context() - obtains a validated and locked context reference 130 + * @cfg: Internal structure associated with the host. 131 + * @rctxid: Desired context (raw, un-decoded format). 132 + * @arg: LUN information or file associated with request. 133 + * @ctx_ctrl: Control information to 'steer' desired lookup. 134 + * 135 + * NOTE: despite the name pid, in Linux, current->pid actually refers 136 + * to the lightweight process id (tid) and can change if the process is 137 + * multi-threaded. The tgid remains constant for the process and only changes 138 + * when the process forks. For all intents and purposes, think of tgid 139 + * as a pid in the traditional sense. 
140 + * 141 + * Return: Validated context on success, NULL on failure 142 + */ 143 + struct ctx_info *get_context(struct cxlflash_cfg *cfg, u64 rctxid, 144 + void *arg, enum ctx_ctrl ctx_ctrl) 145 + { 146 + struct device *dev = &cfg->dev->dev; 147 + struct ctx_info *ctxi = NULL; 148 + struct lun_access *lun_access = NULL; 149 + struct file *file = NULL; 150 + struct llun_info *lli = arg; 151 + u64 ctxid = DECODE_CTXID(rctxid); 152 + int rc; 153 + pid_t pid = current->tgid, ctxpid = 0; 154 + 155 + if (ctx_ctrl & CTX_CTRL_FILE) { 156 + lli = NULL; 157 + file = (struct file *)arg; 158 + } 159 + 160 + if (ctx_ctrl & CTX_CTRL_CLONE) 161 + pid = current->parent->tgid; 162 + 163 + if (likely(ctxid < MAX_CONTEXT)) { 164 + while (true) { 165 + rc = mutex_lock_interruptible(&cfg->ctx_tbl_list_mutex); 166 + if (rc) 167 + goto out; 168 + 169 + ctxi = cfg->ctx_tbl[ctxid]; 170 + if (ctxi) 171 + if ((file && (ctxi->file != file)) || 172 + (!file && (ctxi->ctxid != rctxid))) 173 + ctxi = NULL; 174 + 175 + if ((ctx_ctrl & CTX_CTRL_ERR) || 176 + (!ctxi && (ctx_ctrl & CTX_CTRL_ERR_FALLBACK))) 177 + ctxi = find_error_context(cfg, rctxid, file); 178 + if (!ctxi) { 179 + mutex_unlock(&cfg->ctx_tbl_list_mutex); 180 + goto out; 181 + } 182 + 183 + /* 184 + * Need to acquire ownership of the context while still 185 + * under the table/list lock to serialize with a remove 186 + * thread. Use the 'try' to avoid stalling the 187 + * table/list lock for a single context. 188 + * 189 + * Note that the lock order is: 190 + * 191 + * cfg->ctx_tbl_list_mutex -> ctxi->mutex 192 + * 193 + * Therefore release ctx_tbl_list_mutex before retrying. 194 + */ 195 + rc = mutex_trylock(&ctxi->mutex); 196 + mutex_unlock(&cfg->ctx_tbl_list_mutex); 197 + if (rc) 198 + break; /* got the context's lock! 
*/ 199 + } 200 + 201 + if (ctxi->unavail) 202 + goto denied; 203 + 204 + ctxpid = ctxi->pid; 205 + if (likely(!(ctx_ctrl & CTX_CTRL_NOPID))) 206 + if (pid != ctxpid) 207 + goto denied; 208 + 209 + if (lli) { 210 + list_for_each_entry(lun_access, &ctxi->luns, list) 211 + if (lun_access->lli == lli) 212 + goto out; 213 + goto denied; 214 + } 215 + } 216 + 217 + out: 218 + dev_dbg(dev, "%s: rctxid=%016llX ctxinfo=%p ctxpid=%u pid=%u " 219 + "ctx_ctrl=%u\n", __func__, rctxid, ctxi, ctxpid, pid, 220 + ctx_ctrl); 221 + 222 + return ctxi; 223 + 224 + denied: 225 + mutex_unlock(&ctxi->mutex); 226 + ctxi = NULL; 227 + goto out; 228 + } 229 + 230 + /** 231 + * put_context() - release a context that was retrieved from get_context() 232 + * @ctxi: Context to release. 233 + * 234 + * For now, releasing the context equates to unlocking its mutex. 235 + */ 236 + void put_context(struct ctx_info *ctxi) 237 + { 238 + mutex_unlock(&ctxi->mutex); 239 + } 240 + 241 + /** 242 + * afu_attach() - attach a context to the AFU 243 + * @cfg: Internal structure associated with the host. 244 + * @ctxi: Context to attach. 245 + * 246 + * Upon setting the context capabilities, they must be confirmed with 247 + * a read back operation as the context might have been closed since 248 + * the mailbox was unlocked. When this occurs, the registration fails. 
249 + * 250 + * Return: 0 on success, -errno on failure 251 + */ 252 + static int afu_attach(struct cxlflash_cfg *cfg, struct ctx_info *ctxi) 253 + { 254 + struct device *dev = &cfg->dev->dev; 255 + struct afu *afu = cfg->afu; 256 + struct sisl_ctrl_map *ctrl_map = ctxi->ctrl_map; 257 + int rc = 0; 258 + u64 val; 259 + 260 + /* Unlock cap and restrict user to read/write cmds in translated mode */ 261 + readq_be(&ctrl_map->mbox_r); 262 + val = (SISL_CTX_CAP_READ_CMD | SISL_CTX_CAP_WRITE_CMD); 263 + writeq_be(val, &ctrl_map->ctx_cap); 264 + val = readq_be(&ctrl_map->ctx_cap); 265 + if (val != (SISL_CTX_CAP_READ_CMD | SISL_CTX_CAP_WRITE_CMD)) { 266 + dev_err(dev, "%s: ctx may be closed val=%016llX\n", 267 + __func__, val); 268 + rc = -EAGAIN; 269 + goto out; 270 + } 271 + 272 + /* Set up MMIO registers pointing to the RHT */ 273 + writeq_be((u64)ctxi->rht_start, &ctrl_map->rht_start); 274 + val = SISL_RHT_CNT_ID((u64)MAX_RHT_PER_CONTEXT, (u64)(afu->ctx_hndl)); 275 + writeq_be(val, &ctrl_map->rht_cnt_id); 276 + out: 277 + dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 278 + return rc; 279 + } 280 + 281 + /** 282 + * read_cap16() - issues a SCSI READ_CAP16 command 283 + * @sdev: SCSI device associated with LUN. 284 + * @lli: LUN destined for capacity request. 
285 + * 286 + * Return: 0 on success, -errno on failure 287 + */ 288 + static int read_cap16(struct scsi_device *sdev, struct llun_info *lli) 289 + { 290 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 291 + struct device *dev = &cfg->dev->dev; 292 + struct glun_info *gli = lli->parent; 293 + u8 *cmd_buf = NULL; 294 + u8 *scsi_cmd = NULL; 295 + u8 *sense_buf = NULL; 296 + int rc = 0; 297 + int result = 0; 298 + int retry_cnt = 0; 299 + u32 tout = (MC_DISCOVERY_TIMEOUT * HZ); 300 + 301 + retry: 302 + cmd_buf = kzalloc(CMD_BUFSIZE, GFP_KERNEL); 303 + scsi_cmd = kzalloc(MAX_COMMAND_SIZE, GFP_KERNEL); 304 + sense_buf = kzalloc(SCSI_SENSE_BUFFERSIZE, GFP_KERNEL); 305 + if (unlikely(!cmd_buf || !scsi_cmd || !sense_buf)) { 306 + rc = -ENOMEM; 307 + goto out; 308 + } 309 + 310 + scsi_cmd[0] = SERVICE_ACTION_IN_16; /* read cap(16) */ 311 + scsi_cmd[1] = SAI_READ_CAPACITY_16; /* service action */ 312 + put_unaligned_be32(CMD_BUFSIZE, &scsi_cmd[10]); 313 + 314 + dev_dbg(dev, "%s: %ssending cmd(0x%x)\n", __func__, 315 + retry_cnt ? 
"re" : "", scsi_cmd[0]); 316 + 317 + result = scsi_execute(sdev, scsi_cmd, DMA_FROM_DEVICE, cmd_buf, 318 + CMD_BUFSIZE, sense_buf, tout, 5, 0, NULL); 319 + 320 + if (driver_byte(result) == DRIVER_SENSE) { 321 + result &= ~(0xFF<<24); /* DRIVER_SENSE is not an error */ 322 + if (result & SAM_STAT_CHECK_CONDITION) { 323 + struct scsi_sense_hdr sshdr; 324 + 325 + scsi_normalize_sense(sense_buf, SCSI_SENSE_BUFFERSIZE, 326 + &sshdr); 327 + switch (sshdr.sense_key) { 328 + case NO_SENSE: 329 + case RECOVERED_ERROR: 330 + /* fall through */ 331 + case NOT_READY: 332 + result &= ~SAM_STAT_CHECK_CONDITION; 333 + break; 334 + case UNIT_ATTENTION: 335 + switch (sshdr.asc) { 336 + case 0x29: /* Power on Reset or Device Reset */ 337 + /* fall through */ 338 + case 0x2A: /* Device capacity changed */ 339 + case 0x3F: /* Report LUNs changed */ 340 + /* Retry the command once more */ 341 + if (retry_cnt++ < 1) { 342 + kfree(cmd_buf); 343 + kfree(scsi_cmd); 344 + kfree(sense_buf); 345 + goto retry; 346 + } 347 + } 348 + break; 349 + default: 350 + break; 351 + } 352 + } 353 + } 354 + 355 + if (result) { 356 + dev_err(dev, "%s: command failed, result=0x%x\n", 357 + __func__, result); 358 + rc = -EIO; 359 + goto out; 360 + } 361 + 362 + /* 363 + * Read cap was successful, grab values from the buffer; 364 + * note that we don't need to worry about unaligned access 365 + * as the buffer is allocated on an aligned boundary. 366 + */ 367 + mutex_lock(&gli->mutex); 368 + gli->max_lba = be64_to_cpu(*((u64 *)&cmd_buf[0])); 369 + gli->blk_len = be32_to_cpu(*((u32 *)&cmd_buf[8])); 370 + mutex_unlock(&gli->mutex); 371 + 372 + out: 373 + kfree(cmd_buf); 374 + kfree(scsi_cmd); 375 + kfree(sense_buf); 376 + 377 + dev_dbg(dev, "%s: maxlba=%lld blklen=%d rc=%d\n", 378 + __func__, gli->max_lba, gli->blk_len, rc); 379 + return rc; 380 + } 381 + 382 + /** 383 + * get_rhte() - obtains validated resource handle table entry reference 384 + * @ctxi: Context owning the resource handle. 
385 + * @rhndl: Resource handle associated with entry. 386 + * @lli: LUN associated with request. 387 + * 388 + * Return: Validated RHTE on success, NULL on failure 389 + */ 390 + struct sisl_rht_entry *get_rhte(struct ctx_info *ctxi, res_hndl_t rhndl, 391 + struct llun_info *lli) 392 + { 393 + struct sisl_rht_entry *rhte = NULL; 394 + 395 + if (unlikely(!ctxi->rht_start)) { 396 + pr_debug("%s: Context does not have allocated RHT!\n", 397 + __func__); 398 + goto out; 399 + } 400 + 401 + if (unlikely(rhndl >= MAX_RHT_PER_CONTEXT)) { 402 + pr_debug("%s: Bad resource handle! (%d)\n", __func__, rhndl); 403 + goto out; 404 + } 405 + 406 + if (unlikely(ctxi->rht_lun[rhndl] != lli)) { 407 + pr_debug("%s: Bad resource handle LUN! (%d)\n", 408 + __func__, rhndl); 409 + goto out; 410 + } 411 + 412 + rhte = &ctxi->rht_start[rhndl]; 413 + if (unlikely(rhte->nmask == 0)) { 414 + pr_debug("%s: Unopened resource handle! (%d)\n", 415 + __func__, rhndl); 416 + rhte = NULL; 417 + goto out; 418 + } 419 + 420 + out: 421 + return rhte; 422 + } 423 + 424 + /** 425 + * rhte_checkout() - obtains free/empty resource handle table entry 426 + * @ctxi: Context owning the resource handle. 427 + * @lli: LUN associated with request. 428 + * 429 + * Return: Free RHTE on success, NULL on failure 430 + */ 431 + struct sisl_rht_entry *rhte_checkout(struct ctx_info *ctxi, 432 + struct llun_info *lli) 433 + { 434 + struct sisl_rht_entry *rhte = NULL; 435 + int i; 436 + 437 + /* Find a free RHT entry */ 438 + for (i = 0; i < MAX_RHT_PER_CONTEXT; i++) 439 + if (ctxi->rht_start[i].nmask == 0) { 440 + rhte = &ctxi->rht_start[i]; 441 + ctxi->rht_out++; 442 + break; 443 + } 444 + 445 + if (likely(rhte)) 446 + ctxi->rht_lun[i] = lli; 447 + 448 + pr_debug("%s: returning rhte=%p (%d)\n", __func__, rhte, i); 449 + return rhte; 450 + } 451 + 452 + /** 453 + * rhte_checkin() - releases a resource handle table entry 454 + * @ctxi: Context owning the resource handle. 455 + * @rhte: RHTE to release. 
456 + */ 457 + void rhte_checkin(struct ctx_info *ctxi, 458 + struct sisl_rht_entry *rhte) 459 + { 460 + u32 rsrc_handle = rhte - ctxi->rht_start; 461 + 462 + rhte->nmask = 0; 463 + rhte->fp = 0; 464 + ctxi->rht_out--; 465 + ctxi->rht_lun[rsrc_handle] = NULL; 466 + ctxi->rht_needs_ws[rsrc_handle] = false; 467 + } 468 + 469 + /** 470 + * rht_format1() - populates a RHTE for format 1 471 + * @rhte: RHTE to populate. 472 + * @lun_id: LUN ID of LUN associated with RHTE. 473 + * @perm: Desired permissions for RHTE. 474 + * @port_sel: Port selection mask 475 + */ 476 + static void rht_format1(struct sisl_rht_entry *rhte, u64 lun_id, u32 perm, 477 + u32 port_sel) 478 + { 479 + /* 480 + * Populate the Format 1 RHT entry for direct access (physical 481 + * LUN) using the synchronization sequence defined in the 482 + * SISLite specification. 483 + */ 484 + struct sisl_rht_entry_f1 dummy = { 0 }; 485 + struct sisl_rht_entry_f1 *rhte_f1 = (struct sisl_rht_entry_f1 *)rhte; 486 + 487 + memset(rhte_f1, 0, sizeof(*rhte_f1)); 488 + rhte_f1->fp = SISL_RHT_FP(1U, 0); 489 + dma_wmb(); /* Make setting of format bit visible */ 490 + 491 + rhte_f1->lun_id = lun_id; 492 + dma_wmb(); /* Make setting of LUN id visible */ 493 + 494 + /* 495 + * Use a dummy RHT Format 1 entry to build the second dword 496 + * of the entry that must be populated in a single write when 497 + * enabled (valid bit set to TRUE). 498 + */ 499 + dummy.valid = 0x80; 500 + dummy.fp = SISL_RHT_FP(1U, perm); 501 + dummy.port_sel = port_sel; 502 + rhte_f1->dw = dummy.dw; 503 + 504 + dma_wmb(); /* Make remaining RHT entry fields visible */ 505 + } 506 + 507 + /** 508 + * cxlflash_lun_attach() - attaches a user to a LUN and manages the LUN's mode 509 + * @gli: LUN to attach. 510 + * @mode: Desired mode of the LUN. 511 + * @locked: Mutex status on current thread. 
512 + * 513 + * Return: 0 on success, -errno on failure 514 + */ 515 + int cxlflash_lun_attach(struct glun_info *gli, enum lun_mode mode, bool locked) 516 + { 517 + int rc = 0; 518 + 519 + if (!locked) 520 + mutex_lock(&gli->mutex); 521 + 522 + if (gli->mode == MODE_NONE) 523 + gli->mode = mode; 524 + else if (gli->mode != mode) { 525 + pr_debug("%s: LUN operating in mode %d, requested mode %d\n", 526 + __func__, gli->mode, mode); 527 + rc = -EINVAL; 528 + goto out; 529 + } 530 + 531 + gli->users++; 532 + WARN_ON(gli->users <= 0); 533 + out: 534 + pr_debug("%s: Returning rc=%d gli->mode=%u gli->users=%u\n", 535 + __func__, rc, gli->mode, gli->users); 536 + if (!locked) 537 + mutex_unlock(&gli->mutex); 538 + return rc; 539 + } 540 + 541 + /** 542 + * cxlflash_lun_detach() - detaches a user from a LUN and resets the LUN's mode 543 + * @gli: LUN to detach. 544 + * 545 + * When resetting the mode, terminate block allocation resources as they 546 + * are no longer required (service is safe to call even when block allocation 547 + * resources were not present - such as when transitioning from physical mode). 548 + * These resources will be reallocated when needed (subsequent transition to 549 + * virtual mode). 550 + */ 551 + void cxlflash_lun_detach(struct glun_info *gli) 552 + { 553 + mutex_lock(&gli->mutex); 554 + WARN_ON(gli->mode == MODE_NONE); 555 + if (--gli->users == 0) { 556 + gli->mode = MODE_NONE; 557 + cxlflash_ba_terminate(&gli->blka.ba_lun); 558 + } 559 + pr_debug("%s: gli->users=%u\n", __func__, gli->users); 560 + WARN_ON(gli->users < 0); 561 + mutex_unlock(&gli->mutex); 562 + } 563 + 564 + /** 565 + * _cxlflash_disk_release() - releases the specified resource entry 566 + * @sdev: SCSI device associated with LUN. 567 + * @ctxi: Context owning resources. 568 + * @release: Release ioctl data structure. 569 + * 570 + * For LUNs in virtual mode, the virtual LUN associated with the specified 571 + * resource handle is resized to 0 prior to releasing the RHTE. 
Note that the 572 + * AFU sync should _not_ be performed when the context is sitting on the error 573 + * recovery list. A context on the error recovery list is not known to the AFU 574 + * due to reset. When the context is recovered, it will be reattached and made 575 + * known again to the AFU. 576 + * 577 + * Return: 0 on success, -errno on failure 578 + */ 579 + int _cxlflash_disk_release(struct scsi_device *sdev, 580 + struct ctx_info *ctxi, 581 + struct dk_cxlflash_release *release) 582 + { 583 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 584 + struct device *dev = &cfg->dev->dev; 585 + struct llun_info *lli = sdev->hostdata; 586 + struct glun_info *gli = lli->parent; 587 + struct afu *afu = cfg->afu; 588 + bool put_ctx = false; 589 + 590 + struct dk_cxlflash_resize size; 591 + res_hndl_t rhndl = release->rsrc_handle; 592 + 593 + int rc = 0; 594 + u64 ctxid = DECODE_CTXID(release->context_id), 595 + rctxid = release->context_id; 596 + 597 + struct sisl_rht_entry *rhte; 598 + struct sisl_rht_entry_f1 *rhte_f1; 599 + 600 + dev_dbg(dev, "%s: ctxid=%llu rhndl=0x%llx gli->mode=%u gli->users=%u\n", 601 + __func__, ctxid, release->rsrc_handle, gli->mode, gli->users); 602 + 603 + if (!ctxi) { 604 + ctxi = get_context(cfg, rctxid, lli, CTX_CTRL_ERR_FALLBACK); 605 + if (unlikely(!ctxi)) { 606 + dev_dbg(dev, "%s: Bad context! (%llu)\n", 607 + __func__, ctxid); 608 + rc = -EINVAL; 609 + goto out; 610 + } 611 + 612 + put_ctx = true; 613 + } 614 + 615 + rhte = get_rhte(ctxi, rhndl, lli); 616 + if (unlikely(!rhte)) { 617 + dev_dbg(dev, "%s: Bad resource handle! (%d)\n", 618 + __func__, rhndl); 619 + rc = -EINVAL; 620 + goto out; 621 + } 622 + 623 + /* 624 + * Resize to 0 for virtual LUNS by setting the size 625 + * to 0. This will clear LXT_START and LXT_CNT fields 626 + * in the RHT entry and properly sync with the AFU. 627 + * 628 + * Afterwards we clear the remaining fields. 
629 + */ 630 + switch (gli->mode) { 631 + case MODE_VIRTUAL: 632 + marshal_rele_to_resize(release, &size); 633 + size.req_size = 0; 634 + rc = _cxlflash_vlun_resize(sdev, ctxi, &size); 635 + if (rc) { 636 + dev_dbg(dev, "%s: resize failed rc %d\n", __func__, rc); 637 + goto out; 638 + } 639 + 640 + break; 641 + case MODE_PHYSICAL: 642 + /* 643 + * Clear the Format 1 RHT entry for direct access 644 + * (physical LUN) using the synchronization sequence 645 + * defined in the SISLite specification. 646 + */ 647 + rhte_f1 = (struct sisl_rht_entry_f1 *)rhte; 648 + 649 + rhte_f1->valid = 0; 650 + dma_wmb(); /* Make revocation of RHT entry visible */ 651 + 652 + rhte_f1->lun_id = 0; 653 + dma_wmb(); /* Make clearing of LUN id visible */ 654 + 655 + rhte_f1->dw = 0; 656 + dma_wmb(); /* Make RHT entry bottom-half clearing visible */ 657 + 658 + if (!ctxi->err_recovery_active) 659 + cxlflash_afu_sync(afu, ctxid, rhndl, AFU_HW_SYNC); 660 + break; 661 + default: 662 + WARN(1, "Unsupported LUN mode!"); 663 + goto out; 664 + } 665 + 666 + rhte_checkin(ctxi, rhte); 667 + cxlflash_lun_detach(gli); 668 + 669 + out: 670 + if (put_ctx) 671 + put_context(ctxi); 672 + dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 673 + return rc; 674 + } 675 + 676 + int cxlflash_disk_release(struct scsi_device *sdev, 677 + struct dk_cxlflash_release *release) 678 + { 679 + return _cxlflash_disk_release(sdev, NULL, release); 680 + } 681 + 682 + /** 683 + * destroy_context() - releases a context 684 + * @cfg: Internal structure associated with the host. 685 + * @ctxi: Context to release. 686 + * 687 + * Note that the rht_lun member of the context was cut from a single 688 + * allocation when the context was created and therefore does not need 689 + * to be explicitly freed. 
Also note that we conditionally check for the 690 + * existence of the context control map before clearing the RHT registers 691 + * and context capabilities because it is possible to destroy a context 692 + * while the context is in the error state (previous mapping was removed 693 + * [so we don't have to worry about clearing] and context is waiting for 694 + * a new mapping). 695 + */ 696 + static void destroy_context(struct cxlflash_cfg *cfg, 697 + struct ctx_info *ctxi) 698 + { 699 + struct afu *afu = cfg->afu; 700 + 701 + WARN_ON(!list_empty(&ctxi->luns)); 702 + 703 + /* Clear RHT registers and drop all capabilities for this context */ 704 + if (afu->afu_map && ctxi->ctrl_map) { 705 + writeq_be(0, &ctxi->ctrl_map->rht_start); 706 + writeq_be(0, &ctxi->ctrl_map->rht_cnt_id); 707 + writeq_be(0, &ctxi->ctrl_map->ctx_cap); 708 + } 709 + 710 + /* Free memory associated with context */ 711 + free_page((ulong)ctxi->rht_start); 712 + kfree(ctxi->rht_needs_ws); 713 + kfree(ctxi->rht_lun); 714 + kfree(ctxi); 715 + atomic_dec_if_positive(&cfg->num_user_contexts); 716 + } 717 + 718 + /** 719 + * create_context() - allocates and initializes a context 720 + * @cfg: Internal structure associated with the host. 721 + * @ctx: Previously obtained CXL context reference. 722 + * @ctxid: Previously obtained process element associated with CXL context. 723 + * @adap_fd: Previously obtained adapter fd associated with CXL context. 724 + * @file: Previously obtained file associated with CXL context. 725 + * @perms: User-specified permissions. 726 + * 727 + * The context's mutex is locked when an allocated context is returned. 
728 + * 729 + * Return: Allocated context on success, NULL on failure 730 + */ 731 + static struct ctx_info *create_context(struct cxlflash_cfg *cfg, 732 + struct cxl_context *ctx, int ctxid, 733 + int adap_fd, struct file *file, 734 + u32 perms) 735 + { 736 + struct device *dev = &cfg->dev->dev; 737 + struct afu *afu = cfg->afu; 738 + struct ctx_info *ctxi = NULL; 739 + struct llun_info **lli = NULL; 740 + bool *ws = NULL; 741 + struct sisl_rht_entry *rhte; 742 + 743 + ctxi = kzalloc(sizeof(*ctxi), GFP_KERNEL); 744 + lli = kzalloc((MAX_RHT_PER_CONTEXT * sizeof(*lli)), GFP_KERNEL); 745 + ws = kzalloc((MAX_RHT_PER_CONTEXT * sizeof(*ws)), GFP_KERNEL); 746 + if (unlikely(!ctxi || !lli || !ws)) { 747 + dev_err(dev, "%s: Unable to allocate context!\n", __func__); 748 + goto err; 749 + } 750 + 751 + rhte = (struct sisl_rht_entry *)get_zeroed_page(GFP_KERNEL); 752 + if (unlikely(!rhte)) { 753 + dev_err(dev, "%s: Unable to allocate RHT!\n", __func__); 754 + goto err; 755 + } 756 + 757 + ctxi->rht_lun = lli; 758 + ctxi->rht_needs_ws = ws; 759 + ctxi->rht_start = rhte; 760 + ctxi->rht_perms = perms; 761 + 762 + ctxi->ctrl_map = &afu->afu_map->ctrls[ctxid].ctrl; 763 + ctxi->ctxid = ENCODE_CTXID(ctxi, ctxid); 764 + ctxi->lfd = adap_fd; 765 + ctxi->pid = current->tgid; /* tgid = pid */ 766 + ctxi->ctx = ctx; 767 + ctxi->file = file; 768 + mutex_init(&ctxi->mutex); 769 + INIT_LIST_HEAD(&ctxi->luns); 770 + INIT_LIST_HEAD(&ctxi->list); /* initialize for list_empty() */ 771 + 772 + atomic_inc(&cfg->num_user_contexts); 773 + mutex_lock(&ctxi->mutex); 774 + out: 775 + return ctxi; 776 + 777 + err: 778 + kfree(ws); 779 + kfree(lli); 780 + kfree(ctxi); 781 + ctxi = NULL; 782 + goto out; 783 + } 784 + 785 + /** 786 + * _cxlflash_disk_detach() - detaches a LUN from a context 787 + * @sdev: SCSI device associated with LUN. 788 + * @ctxi: Context owning resources. 789 + * @detach: Detach ioctl data structure. 
790 + * 791 + * As part of the detach, all per-context resources associated with the LUN 792 + * are cleaned up. When detaching the last LUN for a context, the context 793 + * itself is cleaned up and released. 794 + * 795 + * Return: 0 on success, -errno on failure 796 + */ 797 + static int _cxlflash_disk_detach(struct scsi_device *sdev, 798 + struct ctx_info *ctxi, 799 + struct dk_cxlflash_detach *detach) 800 + { 801 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 802 + struct device *dev = &cfg->dev->dev; 803 + struct llun_info *lli = sdev->hostdata; 804 + struct lun_access *lun_access, *t; 805 + struct dk_cxlflash_release rel; 806 + bool put_ctx = false; 807 + 808 + int i; 809 + int rc = 0; 810 + int lfd; 811 + u64 ctxid = DECODE_CTXID(detach->context_id), 812 + rctxid = detach->context_id; 813 + 814 + dev_dbg(dev, "%s: ctxid=%llu\n", __func__, ctxid); 815 + 816 + if (!ctxi) { 817 + ctxi = get_context(cfg, rctxid, lli, CTX_CTRL_ERR_FALLBACK); 818 + if (unlikely(!ctxi)) { 819 + dev_dbg(dev, "%s: Bad context! (%llu)\n", 820 + __func__, ctxid);
821 + rc = -EINVAL; 822 + goto out; 823 + } 824 + 825 + put_ctx = true; 826 + } 827 + 828 + /* Cleanup outstanding resources tied to this LUN */ 829 + if (ctxi->rht_out) { 830 + marshal_det_to_rele(detach, &rel); 831 + for (i = 0; i < MAX_RHT_PER_CONTEXT; i++) { 832 + if (ctxi->rht_lun[i] == lli) { 833 + rel.rsrc_handle = i; 834 + _cxlflash_disk_release(sdev, ctxi, &rel); 835 + } 836 + 837 + /* No need to loop further if we're done */ 838 + if (ctxi->rht_out == 0) 839 + break; 840 + } 841 + } 842 + 843 + /* Take our LUN out of context, free the node */ 844 + list_for_each_entry_safe(lun_access, t, &ctxi->luns, list) 845 + if (lun_access->lli == lli) { 846 + list_del(&lun_access->list); 847 + kfree(lun_access); 848 + lun_access = NULL; 849 + break; 850 + } 851 + 852 + /* Tear down context following last LUN cleanup */ 853 + if (list_empty(&ctxi->luns)) { 854 + ctxi->unavail = true; 855 + mutex_unlock(&ctxi->mutex); 856 + mutex_lock(&cfg->ctx_tbl_list_mutex); 857 + mutex_lock(&ctxi->mutex); 858 + 859 + /* Might not have been in error list so conditionally remove */ 860 + if (!list_empty(&ctxi->list)) 861 + list_del(&ctxi->list); 862 + cfg->ctx_tbl[ctxid] = NULL; 863 + mutex_unlock(&cfg->ctx_tbl_list_mutex); 864 + mutex_unlock(&ctxi->mutex); 865 + 866 + lfd = ctxi->lfd; 867 + destroy_context(cfg, ctxi); 868 + ctxi = NULL; 869 + put_ctx = false; 870 + 871 + /* 872 + * As a last step, clean up external resources when not 873 + * already on an external cleanup thread, i.e.: close(adap_fd). 874 + * 875 + * NOTE: this will free up the context from the CXL services, 876 + * allowing it to dole out the same context_id on a future 877 + * (or even currently in-flight) disk_attach operation.
878 + */ 879 + if (lfd != -1) 880 + sys_close(lfd); 881 + } 882 + 883 + out: 884 + if (put_ctx) 885 + put_context(ctxi); 886 + dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 887 + return rc; 888 + } 889 + 890 + static int cxlflash_disk_detach(struct scsi_device *sdev, 891 + struct dk_cxlflash_detach *detach) 892 + { 893 + return _cxlflash_disk_detach(sdev, NULL, detach); 894 + } 895 + 896 + /** 897 + * cxlflash_cxl_release() - release handler for adapter file descriptor 898 + * @inode: File-system inode associated with fd. 899 + * @file: File installed with adapter file descriptor. 900 + * 901 + * This routine is the release handler for the fops registered with 902 + * the CXL services on an initial attach for a context. It is called 903 + * when a close is performed on the adapter file descriptor returned 904 + * to the user. Programmatically, the user is not required to perform 905 + * the close, as it is handled internally via the detach ioctl when 906 + * a context is being removed. Note that nothing prevents the user 907 + * from performing a close, but the user should be aware that doing 908 + * so is considered catastrophic and subsequent usage of the superpipe 909 + * API with previously saved off tokens will fail. 910 + * 911 + * When initiated from an external close (either by the user or via 912 + * a process tear down), the routine derives the context reference 913 + * and calls detach for each LUN associated with the context. The 914 + * final detach operation will cause the context itself to be freed. 915 + * Note that the saved off lfd is reset prior to calling detach to 916 + * signify that the final detach should not perform a close. 917 + * 918 + * When initiated from a detach operation as part of the tear down 919 + * of a context, the context is first completely freed and then the 920 + * close is performed. 
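The context teardown in _cxlflash_disk_detach() above performs a deliberate lock dance: it drops ctxi->mutex, takes ctx_tbl_list_mutex, then re-takes ctxi->mutex, so the two locks are always acquired in one global order (table lock before context lock) and deadlock is impossible. A small pthreads sketch of that re-lock step (the mutex names are stand-ins, not the driver's):

```c
#include <pthread.h>

static pthread_mutex_t tbl_mutex = PTHREAD_MUTEX_INITIALIZER; /* plays cfg->ctx_tbl_list_mutex */
static pthread_mutex_t ctx_mutex = PTHREAD_MUTEX_INITIALIZER; /* plays ctxi->mutex */

/* Re-acquire both locks in the global order: table lock first, then
 * the context lock. A caller holding only the context lock must drop
 * it first, exactly as the teardown above does before touching the
 * context table. */
void relock_in_order(void)
{
	pthread_mutex_unlock(&ctx_mutex);
	pthread_mutex_lock(&tbl_mutex);
	pthread_mutex_lock(&ctx_mutex);
}
```

The cost is a window where another thread can observe the context between the unlock and the relock; the driver closes that window by setting ctxi->unavail before dropping the lock.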
This routine will fail to derive the context 921 + * reference (due to the context having already been freed) and then 922 + * call into the CXL release entry point. 923 + * 924 + * Thus, with the exception of when the CXL process element (context id) 925 + * lookup fails (a case that should theoretically never occur), every 926 + * call into this routine results in a complete freeing of a context. 927 + * 928 + * As part of the detach, all per-context resources associated with the LUN 929 + * are cleaned up. When detaching the last LUN for a context, the context 930 + * itself is cleaned up and released. 931 + * 932 + * Return: 0 on success 933 + */ 934 + static int cxlflash_cxl_release(struct inode *inode, struct file *file) 935 + { 936 + struct cxl_context *ctx = cxl_fops_get_context(file); 937 + struct cxlflash_cfg *cfg = container_of(file->f_op, struct cxlflash_cfg, 938 + cxl_fops); 939 + struct device *dev = &cfg->dev->dev; 940 + struct ctx_info *ctxi = NULL; 941 + struct dk_cxlflash_detach detach = { { 0 }, 0 }; 942 + struct lun_access *lun_access, *t; 943 + enum ctx_ctrl ctrl = CTX_CTRL_ERR_FALLBACK | CTX_CTRL_FILE; 944 + int ctxid; 945 + 946 + ctxid = cxl_process_element(ctx); 947 + if (unlikely(ctxid < 0)) { 948 + dev_err(dev, "%s: Context %p was closed! (%d)\n", 949 + __func__, ctx, ctxid);
950 + goto out; 951 + } 952 + 953 + ctxi = get_context(cfg, ctxid, file, ctrl); 954 + if (unlikely(!ctxi)) { 955 + ctxi = get_context(cfg, ctxid, file, ctrl | CTX_CTRL_CLONE); 956 + if (!ctxi) { 957 + dev_dbg(dev, "%s: Context %d already free!\n", 958 + __func__, ctxid); 959 + goto out_release; 960 + } 961 + 962 + dev_dbg(dev, "%s: Another process owns context %d!\n", 963 + __func__, ctxid); 964 + put_context(ctxi); 965 + goto out; 966 + } 967 + 968 + dev_dbg(dev, "%s: close(%d) for context %d\n", 969 + __func__, ctxi->lfd, ctxid); 970 + 971 + /* Reset the file descriptor to indicate we're on a close() thread */ 972 + ctxi->lfd = -1; 973 + detach.context_id = ctxi->ctxid; 974 + list_for_each_entry_safe(lun_access, t, &ctxi->luns, list) 975 + _cxlflash_disk_detach(lun_access->sdev, ctxi, &detach); 976 + out_release: 977 + cxl_fd_release(inode, file); 978 + out: 979 + dev_dbg(dev, "%s: returning\n", __func__); 980 + return 0; 981 + } 982 + 983 + /** 984 + * unmap_context() - clears a previously established mapping 985 + * @ctxi: Context owning the mapping. 986 + * 987 + * This routine is used to switch between the error notification page 988 + * (dummy page of all 1's) and the real mapping (established by the CXL 989 + * fault handler).
990 + */ 991 + static void unmap_context(struct ctx_info *ctxi) 992 + { 993 + unmap_mapping_range(ctxi->file->f_mapping, 0, 0, 1); 994 + } 995 + 996 + /** 997 + * get_err_page() - obtains and allocates the error notification page 998 + * 999 + * Return: error notification page on success, NULL on failure 1000 + */ 1001 + static struct page *get_err_page(void) 1002 + { 1003 + struct page *err_page = global.err_page; 1004 + 1005 + if (unlikely(!err_page)) { 1006 + err_page = alloc_page(GFP_KERNEL); 1007 + if (unlikely(!err_page)) { 1008 + pr_err("%s: Unable to allocate err_page!\n", __func__); 1009 + goto out; 1010 + } 1011 + 1012 + memset(page_address(err_page), -1, PAGE_SIZE); 1013 + 1014 + /* Serialize update w/ other threads to avoid a leak */ 1015 + mutex_lock(&global.mutex); 1016 + if (likely(!global.err_page)) 1017 + global.err_page = err_page; 1018 + else { 1019 + __free_page(err_page); 1020 + err_page = global.err_page; 1021 + } 1022 + mutex_unlock(&global.mutex); 1023 + } 1024 + 1025 + out: 1026 + pr_debug("%s: returning err_page=%p\n", __func__, err_page); 1027 + return err_page; 1028 + } 1029 + 1030 + /** 1031 + * cxlflash_mmap_fault() - mmap fault handler for adapter file descriptor 1032 + * @vma: VM area associated with mapping. 1033 + * @vmf: VM fault associated with current fault. 1034 + * 1035 + * To support error notification via MMIO, faults are 'caught' by this routine 1036 + * that was inserted before passing back the adapter file descriptor on attach. 1037 + * When a fault occurs, this routine evaluates if error recovery is active and 1038 + * if so, installs the error page to 'notify' the user about the error state. 1039 + * During normal operation, the fault is simply handled by the original fault 1040 + * handler that was installed by CXL services as part of initializing the 1041 + * adapter file descriptor. The VMA's page protection bits are toggled to 1042 + * indicate cached/not-cached depending on the memory backing the fault. 
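get_err_page() above is a classic lazily-initialized shared object: an unlocked fast-path read, allocation outside the lock, then revalidation under the mutex with the race loser freeing its copy. A userspace re-creation of that race handling (page size and names assumed for the sketch):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define SIM_PAGE_SIZE 4096	/* assumed page size for the sketch */

static void *err_page;		/* simulates global.err_page */
static pthread_mutex_t err_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Lazily allocate a shared all-ones page. The unlocked fast-path read
 * is revalidated under the mutex; a racing loser frees its copy and
 * adopts the winner's, mirroring get_err_page() above. */
void *get_err_page_sim(void)
{
	void *page = err_page;

	if (!page) {
		page = malloc(SIM_PAGE_SIZE);
		if (!page)
			return NULL;
		memset(page, -1, SIM_PAGE_SIZE);

		pthread_mutex_lock(&err_mutex);
		if (!err_page) {
			err_page = page;	/* we won the race */
		} else {
			free(page);		/* someone beat us; use theirs */
			page = err_page;
		}
		pthread_mutex_unlock(&err_mutex);
	}
	return page;
}
```

Allocating before taking the lock keeps the critical section tiny at the cost of an occasional wasted allocation, which is exactly the trade the driver comment ("Serialize update w/ other threads to avoid a leak") describes.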
1043 + * 1044 + * Return: 0 on success, VM_FAULT_SIGBUS on failure 1045 + */ 1046 + static int cxlflash_mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf) 1047 + { 1048 + struct file *file = vma->vm_file; 1049 + struct cxl_context *ctx = cxl_fops_get_context(file); 1050 + struct cxlflash_cfg *cfg = container_of(file->f_op, struct cxlflash_cfg, 1051 + cxl_fops); 1052 + struct device *dev = &cfg->dev->dev; 1053 + struct ctx_info *ctxi = NULL; 1054 + struct page *err_page = NULL; 1055 + enum ctx_ctrl ctrl = CTX_CTRL_ERR_FALLBACK | CTX_CTRL_FILE; 1056 + int rc = 0; 1057 + int ctxid; 1058 + 1059 + ctxid = cxl_process_element(ctx); 1060 + if (unlikely(ctxid < 0)) { 1061 + dev_err(dev, "%s: Context %p was closed! (%d)\n", 1062 + __func__, ctx, ctxid); 1063 + goto err; 1064 + } 1065 + 1066 + ctxi = get_context(cfg, ctxid, file, ctrl); 1067 + if (unlikely(!ctxi)) { 1068 + dev_dbg(dev, "%s: Bad context! (%d)\n", __func__, ctxid); 1069 + goto err; 1070 + } 1071 + 1072 + dev_dbg(dev, "%s: fault(%d) for context %d\n", 1073 + __func__, ctxi->lfd, ctxid); 1074 + 1075 + if (likely(!ctxi->err_recovery_active)) { 1076 + vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1077 + rc = ctxi->cxl_mmap_vmops->fault(vma, vmf); 1078 + } else { 1079 + dev_dbg(dev, "%s: err recovery active, use err_page!\n", 1080 + __func__); 1081 + 1082 + err_page = get_err_page(); 1083 + if (unlikely(!err_page)) { 1084 + dev_err(dev, "%s: Could not obtain error page!\n", 1085 + __func__); 1086 + rc = VM_FAULT_RETRY; 1087 + goto out; 1088 + } 1089 + 1090 + get_page(err_page); 1091 + vmf->page = err_page; 1092 + vma->vm_page_prot = pgprot_cached(vma->vm_page_prot); 1093 + } 1094 + 1095 + out: 1096 + if (likely(ctxi)) 1097 + put_context(ctxi); 1098 + dev_dbg(dev, "%s: returning rc=%d\n", __func__, rc); 1099 + return rc; 1100 + 1101 + err: 1102 + rc = VM_FAULT_SIGBUS; 1103 + goto out; 1104 + } 1105 + 1106 + /* 1107 + * Local MMAP vmops to 'catch' faults 1108 + */ 1109 + static const struct 
vm_operations_struct cxlflash_mmap_vmops = { 1110 + .fault = cxlflash_mmap_fault, 1111 + }; 1112 + 1113 + /** 1114 + * cxlflash_cxl_mmap() - mmap handler for adapter file descriptor 1115 + * @file: File installed with adapter file descriptor. 1116 + * @vma: VM area associated with mapping. 1117 + * 1118 + * Installs local mmap vmops to 'catch' faults for error notification support. 1119 + * 1120 + * Return: 0 on success, -errno on failure 1121 + */ 1122 + static int cxlflash_cxl_mmap(struct file *file, struct vm_area_struct *vma) 1123 + { 1124 + struct cxl_context *ctx = cxl_fops_get_context(file); 1125 + struct cxlflash_cfg *cfg = container_of(file->f_op, struct cxlflash_cfg, 1126 + cxl_fops); 1127 + struct device *dev = &cfg->dev->dev; 1128 + struct ctx_info *ctxi = NULL; 1129 + enum ctx_ctrl ctrl = CTX_CTRL_ERR_FALLBACK | CTX_CTRL_FILE; 1130 + int ctxid; 1131 + int rc = 0; 1132 + 1133 + ctxid = cxl_process_element(ctx); 1134 + if (unlikely(ctxid < 0)) { 1135 + dev_err(dev, "%s: Context %p was closed! (%d)\n", 1136 + __func__, ctx, ctxid); 1137 + rc = -EIO; 1138 + goto out; 1139 + } 1140 + 1141 + ctxi = get_context(cfg, ctxid, file, ctrl); 1142 + if (unlikely(!ctxi)) { 1143 + dev_dbg(dev, "%s: Bad context! (%d)\n", __func__, ctxid);
1144 + rc = -EIO; 1145 + goto out; 1146 + } 1147 + 1148 + dev_dbg(dev, "%s: mmap(%d) for context %d\n", 1149 + __func__, ctxi->lfd, ctxid); 1150 + 1151 + rc = cxl_fd_mmap(file, vma); 1152 + if (likely(!rc)) { 1153 + /* Insert ourself in the mmap fault handler path */ 1154 + ctxi->cxl_mmap_vmops = vma->vm_ops; 1155 + vma->vm_ops = &cxlflash_mmap_vmops; 1156 + } 1157 + 1158 + out: 1159 + if (likely(ctxi)) 1160 + put_context(ctxi); 1161 + return rc; 1162 + } 1163 + 1164 + /* 1165 + * Local fops for adapter file descriptor 1166 + */ 1167 + static const struct file_operations cxlflash_cxl_fops = { 1168 + .owner = THIS_MODULE, 1169 + .mmap = cxlflash_cxl_mmap, 1170 + .release = cxlflash_cxl_release, 1171 + }; 1172 + 1173 + /** 1174 + * cxlflash_mark_contexts_error() - move contexts to error state and list 1175 + * @cfg: Internal structure associated with the host. 1176 + * 1177 + * A context is only moved over to the error list when there are no outstanding 1178 + * references to it. This ensures that a running operation has completed.
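The interposition trick used by cxlflash_cxl_mmap() and cxlflash_mmap_fault() above, saving the ops table someone else installed, substituting your own, and delegating from the wrapper, reduces to a few lines of C. All types below are hypothetical stand-ins for vm_operations_struct and the VMA:

```c
#include <stddef.h>

/* Hypothetical ops table standing in for vm_operations_struct. */
struct sim_vm_ops {
	int (*fault)(int in_error);
};

struct sim_vma {
	const struct sim_vm_ops *vm_ops;
	const struct sim_vm_ops *saved_ops;	/* plays ctxi->cxl_mmap_vmops */
};

/* The "real" handler installed at mmap time (plays the CXL fault op). */
static int real_fault(int in_error) { (void)in_error; return 0; }
static const struct sim_vm_ops real_ops = { .fault = real_fault };

static struct sim_vma *current_vma;	/* lets the wrapper find its state */

/* Interposed handler: divert to an error result when recovery is
 * active, otherwise delegate to the ops saved at mmap time. */
static int wrapped_fault(int in_error)
{
	if (in_error)
		return -1;	/* stands in for installing the err page */
	return current_vma->saved_ops->fault(in_error);
}
static const struct sim_vm_ops wrapped_ops = { .fault = wrapped_fault };

/* Mirror of cxlflash_cxl_mmap(): remember the original ops, install ours. */
void interpose(struct sim_vma *vma)
{
	vma->saved_ops = vma->vm_ops;
	vma->vm_ops = &wrapped_ops;
	current_vma = vma;
}
```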
1179 + * 1180 + * Return: 0 on success, -errno on failure 1181 + */ 1182 + int cxlflash_mark_contexts_error(struct cxlflash_cfg *cfg) 1183 + { 1184 + int i, rc = 0; 1185 + struct ctx_info *ctxi = NULL; 1186 + 1187 + mutex_lock(&cfg->ctx_tbl_list_mutex); 1188 + 1189 + for (i = 0; i < MAX_CONTEXT; i++) { 1190 + ctxi = cfg->ctx_tbl[i]; 1191 + if (ctxi) { 1192 + mutex_lock(&ctxi->mutex); 1193 + cfg->ctx_tbl[i] = NULL; 1194 + list_add(&ctxi->list, &cfg->ctx_err_recovery); 1195 + ctxi->err_recovery_active = true; 1196 + ctxi->ctrl_map = NULL; 1197 + unmap_context(ctxi); 1198 + mutex_unlock(&ctxi->mutex); 1199 + } 1200 + } 1201 + 1202 + mutex_unlock(&cfg->ctx_tbl_list_mutex); 1203 + return rc; 1204 + } 1205 + 1206 + /* 1207 + * Dummy NULL fops 1208 + */ 1209 + static const struct file_operations null_fops = { 1210 + .owner = THIS_MODULE, 1211 + }; 1212 + 1213 + /** 1214 + * cxlflash_disk_attach() - attach a LUN to a context 1215 + * @sdev: SCSI device associated with LUN. 1216 + * @attach: Attach ioctl data structure. 1217 + * 1218 + * Creates a context and attaches LUN to it. A LUN can only be attached 1219 + * one time to a context (subsequent attaches for the same context/LUN pair 1220 + * are not supported). Additional LUNs can be attached to a context by 1221 + * specifying the 'reuse' flag defined in the cxlflash_ioctl.h header. 
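One line worth unpacking in cxlflash_disk_attach() is `perms = SISL_RHT_PERM(attach->hdr.flags + 1)`. The `+ 1` shifts the fcntl access modes (O_RDONLY=0, O_WRONLY=1, O_RDWR=2) to 1/2/3 so every mode yields a non-zero bit mask. A hypothetical re-creation of the idea; the bit values here are assumptions for the sketch, not the driver's actual sisl.h encoding:

```c
/* Assumed permission bits, for illustration only -- the real layout
 * of SISL_RHT_PERM() lives in the driver's sisl.h. */
#define SIM_PERM_READ  0x1u
#define SIM_PERM_WRITE 0x2u

/* fcntl-style access modes: O_RDONLY=0, O_WRONLY=1, O_RDWR=2.
 * Adding 1 maps them to 1/2/3 so that bit0 can mean "read", bit1
 * "write", and no mode produces an all-zero permission mask. */
unsigned int flags_to_perms(unsigned int oflags)
{
	unsigned int v = oflags + 1;
	unsigned int perms = 0;

	if (v & 0x1)
		perms |= SIM_PERM_READ;
	if (v & 0x2)
		perms |= SIM_PERM_WRITE;
	return perms;
}
```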
1222 + * 1223 + * Return: 0 on success, -errno on failure 1224 + */ 1225 + static int cxlflash_disk_attach(struct scsi_device *sdev, 1226 + struct dk_cxlflash_attach *attach) 1227 + { 1228 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 1229 + struct device *dev = &cfg->dev->dev; 1230 + struct afu *afu = cfg->afu; 1231 + struct llun_info *lli = sdev->hostdata; 1232 + struct glun_info *gli = lli->parent; 1233 + struct cxl_ioctl_start_work *work; 1234 + struct ctx_info *ctxi = NULL; 1235 + struct lun_access *lun_access = NULL; 1236 + int rc = 0; 1237 + u32 perms; 1238 + int ctxid = -1; 1239 + u64 rctxid = 0UL; 1240 + struct file *file; 1241 + 1242 + struct cxl_context *ctx; 1243 + 1244 + int fd = -1; 1245 + 1246 + /* On first attach set fileops */ 1247 + if (atomic_read(&cfg->num_user_contexts) == 0) 1248 + cfg->cxl_fops = cxlflash_cxl_fops; 1249 + 1250 + if (attach->num_interrupts > 4) { 1251 + dev_dbg(dev, "%s: Cannot support this many interrupts %llu\n", 1252 + __func__, attach->num_interrupts); 1253 + rc = -EINVAL; 1254 + goto out; 1255 + } 1256 + 1257 + if (gli->max_lba == 0) { 1258 + dev_dbg(dev, "%s: No capacity info for this LUN (%016llX)\n", 1259 + __func__, lli->lun_id[sdev->channel]); 1260 + rc = read_cap16(sdev, lli); 1261 + if (rc) { 1262 + dev_err(dev, "%s: Invalid device! (%d)\n", 1263 + __func__, rc); 1264 + rc = -ENODEV; 1265 + goto out; 1266 + } 1267 + dev_dbg(dev, "%s: LBA = %016llX\n", __func__, gli->max_lba); 1268 + dev_dbg(dev, "%s: BLK_LEN = %08X\n", __func__, gli->blk_len); 1269 + } 1270 + 1271 + if (attach->hdr.flags & DK_CXLFLASH_ATTACH_REUSE_CONTEXT) { 1272 + rctxid = attach->context_id; 1273 + ctxi = get_context(cfg, rctxid, NULL, 0); 1274 + if (!ctxi) { 1275 + dev_dbg(dev, "%s: Bad context! 
(%016llX)\n", 1276 + __func__, rctxid); 1277 + rc = -EINVAL; 1278 + goto out; 1279 + } 1280 + 1281 + list_for_each_entry(lun_access, &ctxi->luns, list) 1282 + if (lun_access->lli == lli) { 1283 + dev_dbg(dev, "%s: Already attached!\n", 1284 + __func__); 1285 + rc = -EINVAL; 1286 + goto out; 1287 + } 1288 + } 1289 + 1290 + lun_access = kzalloc(sizeof(*lun_access), GFP_KERNEL); 1291 + if (unlikely(!lun_access)) { 1292 + dev_err(dev, "%s: Unable to allocate lun_access!\n", __func__); 1293 + rc = -ENOMEM; 1294 + goto out; 1295 + } 1296 + 1297 + lun_access->lli = lli; 1298 + lun_access->sdev = sdev; 1299 + 1300 + /* Non-NULL context indicates reuse */ 1301 + if (ctxi) { 1302 + dev_dbg(dev, "%s: Reusing context for LUN! (%016llX)\n", 1303 + __func__, rctxid); 1304 + list_add(&lun_access->list, &ctxi->luns); 1305 + fd = ctxi->lfd; 1306 + goto out_attach; 1307 + } 1308 + 1309 + ctx = cxl_dev_context_init(cfg->dev); 1310 + if (unlikely(IS_ERR_OR_NULL(ctx))) { 1311 + dev_err(dev, "%s: Could not initialize context %p\n", 1312 + __func__, ctx); 1313 + rc = -ENODEV; 1314 + goto err0; 1315 + } 1316 + 1317 + ctxid = cxl_process_element(ctx); 1318 + if (unlikely((ctxid > MAX_CONTEXT) || (ctxid < 0))) { 1319 + dev_err(dev, "%s: ctxid (%d) invalid!\n", __func__, ctxid); 1320 + rc = -EPERM; 1321 + goto err1; 1322 + } 1323 + 1324 + file = cxl_get_fd(ctx, &cfg->cxl_fops, &fd); 1325 + if (unlikely(fd < 0)) { 1326 + rc = -ENODEV; 1327 + dev_err(dev, "%s: Could not get file descriptor\n", __func__); 1328 + goto err1; 1329 + } 1330 + 1331 + /* Translate read/write O_* flags from fcntl.h to AFU permission bits */ 1332 + perms = SISL_RHT_PERM(attach->hdr.flags + 1); 1333 + 1334 + ctxi = create_context(cfg, ctx, ctxid, fd, file, perms); 1335 + if (unlikely(!ctxi)) { 1336 + dev_err(dev, "%s: Failed to create context! 
(%d)\n", 1337 + __func__, ctxid); 1338 + rc = -ENODEV; goto err2; 1339 + } 1340 + 1341 + work = &ctxi->work; 1342 + work->num_interrupts = attach->num_interrupts; 1343 + work->flags = CXL_START_WORK_NUM_IRQS; 1344 + 1345 + rc = cxl_start_work(ctx, work); 1346 + if (unlikely(rc)) { 1347 + dev_dbg(dev, "%s: Could not start context rc=%d\n", 1348 + __func__, rc); 1349 + goto err3; 1350 + } 1351 + 1352 + rc = afu_attach(cfg, ctxi); 1353 + if (unlikely(rc)) { 1354 + dev_err(dev, "%s: Could not attach AFU rc %d\n", __func__, rc); 1355 + goto err4; 1356 + } 1357 + 1358 + /* 1359 + * No error paths after this point. Once the fd is installed it's 1360 + * visible to user space and can't be undone safely on this thread. 1361 + * There is no need to worry about a deadlock here because no one 1362 + * knows about us yet; we can be the only one holding our mutex. 1363 + */ 1364 + list_add(&lun_access->list, &ctxi->luns); 1365 + mutex_unlock(&ctxi->mutex); 1366 + mutex_lock(&cfg->ctx_tbl_list_mutex); 1367 + mutex_lock(&ctxi->mutex); 1368 + cfg->ctx_tbl[ctxid] = ctxi; 1369 + mutex_unlock(&cfg->ctx_tbl_list_mutex); 1370 + fd_install(fd, file); 1371 + 1372 + out_attach: 1373 + attach->hdr.return_flags = 0; 1374 + attach->context_id = ctxi->ctxid; 1375 + attach->block_size = gli->blk_len; 1376 + attach->mmio_size = sizeof(afu->afu_map->hosts[0].harea); 1377 + attach->last_lba = gli->max_lba; 1378 + attach->max_xfer = (sdev->host->max_sectors * 512) / gli->blk_len; 1379 + 1380 + out: 1381 + attach->adap_fd = fd; 1382 + 1383 + if (ctxi) 1384 + put_context(ctxi); 1385 + 1386 + dev_dbg(dev, "%s: returning ctxid=%d fd=%d bs=%lld rc=%d llba=%lld\n", 1387 + __func__, ctxid, fd, attach->block_size, rc, attach->last_lba); 1388 + return rc; 1389 + 1390 + err4: 1391 + cxl_stop_context(ctx); 1392 + err3: 1393 + put_context(ctxi); 1394 + destroy_context(cfg, ctxi); 1395 + ctxi = NULL; 1396 + err2: 1397 + /* 1398 + * Here, we're overriding the fops with a dummy all-NULL fops because 1399 + * fput() calls the release fop, which will cause us to mistakenly
1400 + * call into the CXL code. Rather than try to add yet more complexity 1401 + * to that routine (cxlflash_cxl_release) we should try to fix the 1402 + * issue here. 1403 + */ 1404 + file->f_op = &null_fops; 1405 + fput(file); 1406 + put_unused_fd(fd); 1407 + fd = -1; 1408 + err1: 1409 + cxl_release_context(ctx); 1410 + err0: 1411 + kfree(lun_access); 1412 + goto out; 1413 + } 1414 + 1415 + /** 1416 + * recover_context() - recovers a context in error 1417 + * @cfg: Internal structure associated with the host. 1418 + * @ctxi: Context to recover. 1419 + * 1420 + * Reestablishes the state for a context-in-error. 1421 + * 1422 + * Return: 0 on success, -errno on failure 1423 + */ 1424 + static int recover_context(struct cxlflash_cfg *cfg, struct ctx_info *ctxi) 1425 + { 1426 + struct device *dev = &cfg->dev->dev; 1427 + int rc = 0; 1428 + int old_fd, fd = -1; 1429 + int ctxid = -1; 1430 + struct file *file; 1431 + struct cxl_context *ctx; 1432 + struct afu *afu = cfg->afu; 1433 + 1434 + ctx = cxl_dev_context_init(cfg->dev); 1435 + if (unlikely(IS_ERR_OR_NULL(ctx))) { 1436 + dev_err(dev, "%s: Could not initialize context %p\n", 1437 + __func__, ctx); 1438 + rc = -ENODEV; 1439 + goto out; 1440 + } 1441 + 1442 + ctxid = cxl_process_element(ctx); 1443 + if (unlikely((ctxid > MAX_CONTEXT) || (ctxid < 0))) { 1444 + dev_err(dev, "%s: ctxid (%d) invalid!\n", __func__, ctxid); 1445 + rc = -EPERM; 1446 + goto err1; 1447 + } 1448 + 1449 + file = cxl_get_fd(ctx, &cfg->cxl_fops, &fd); 1450 + if (unlikely(fd < 0)) { 1451 + rc = -ENODEV; 1452 + dev_err(dev, "%s: Could not get file descriptor\n", __func__); 1453 + goto err1; 1454 + } 1455 + 1456 + rc = cxl_start_work(ctx, &ctxi->work); 1457 + if (unlikely(rc)) { 1458 + dev_dbg(dev, "%s: Could not start context rc=%d\n", 1459 + __func__, rc); 1460 + goto err2; 1461 + } 1462 + 1463 + /* Update with new MMIO area based on updated context id */ 1464 + ctxi->ctrl_map = &afu->afu_map->ctrls[ctxid].ctrl;
1465 + 1466 + rc = afu_attach(cfg, ctxi); 1467 + if (rc) { 1468 + dev_err(dev, "%s: Could not attach AFU rc %d\n", __func__, rc); 1469 + goto err3; 1470 + } 1471 + 1472 + /* 1473 + * No error paths after this point. Once the fd is installed it's 1474 + * visible to user space and can't be undone safely on this thread. 1475 + */ 1476 + old_fd = ctxi->lfd; 1477 + ctxi->ctxid = ENCODE_CTXID(ctxi, ctxid); 1478 + ctxi->lfd = fd; 1479 + ctxi->ctx = ctx; 1480 + ctxi->file = file; 1481 + 1482 + /* 1483 + * Put context back in table (note the reinit of the context list); 1484 + * we must first drop the context's mutex and then acquire it in 1485 + * order with the table/list mutex to avoid a deadlock - safe to do 1486 + * here because no one can find us at this moment in time. 1487 + */ 1488 + mutex_unlock(&ctxi->mutex); 1489 + mutex_lock(&cfg->ctx_tbl_list_mutex); 1490 + mutex_lock(&ctxi->mutex); 1491 + list_del_init(&ctxi->list); 1492 + cfg->ctx_tbl[ctxid] = ctxi; 1493 + mutex_unlock(&cfg->ctx_tbl_list_mutex); 1494 + fd_install(fd, file); 1495 + 1496 + /* Release the original adapter fd and associated CXL resources */ 1497 + sys_close(old_fd); 1498 + out: 1499 + dev_dbg(dev, "%s: returning ctxid=%d fd=%d rc=%d\n", 1500 + __func__, ctxid, fd, rc); 1501 + return rc; 1502 + 1503 + err3: 1504 + cxl_stop_context(ctx); 1505 + err2: 1506 + fput(file); 1507 + put_unused_fd(fd); 1508 + err1: 1509 + cxl_release_context(ctx); 1510 + goto out; 1511 + } 1512 + 1513 + /** 1514 + * check_state() - checks and responds to the current adapter state 1515 + * @cfg: Internal structure associated with the host. 1516 + * 1517 + * This routine can block and should only be used in process context. 1518 + * Note that when waking up from waiting in limbo, the state is unknown 1519 + * and must be checked again before proceeding.
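check_state() below is the standard wait-and-recheck loop: waking from limbo proves nothing about the new state, so the code loops back to the retry label and re-evaluates, since the adapter may have moved to failed/terminating rather than back to normal. The equivalent pthreads shape, with a condition variable standing in for limbo_waitq (names are stand-ins for the sketch):

```c
#include <pthread.h>

enum sim_state { STATE_NORMAL, STATE_LIMBO, STATE_FAILTERM };

static enum sim_state state = STATE_NORMAL;
static pthread_mutex_t state_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t state_cv = PTHREAD_COND_INITIALIZER;

/* State changes broadcast to waiters, like wake_up_all(&cfg->limbo_waitq). */
void set_state_sim(enum sim_state s)
{
	pthread_mutex_lock(&state_mutex);
	state = s;
	pthread_cond_broadcast(&state_cv);
	pthread_mutex_unlock(&state_mutex);
}

/* Block while in limbo and re-examine the state after every wakeup --
 * the same re-check check_state() performs via its retry label. */
int check_state_sim(void)
{
	int rc = 0;

	pthread_mutex_lock(&state_mutex);
	while (state == STATE_LIMBO)
		pthread_cond_wait(&state_cv, &state_mutex);
	if (state == STATE_FAILTERM)
		rc = -1;	/* plays -ENODEV in the driver */
	pthread_mutex_unlock(&state_mutex);
	return rc;
}
```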
1520 + * 1521 + * Return: 0 on success, -errno on failure 1522 + */ 1523 + static int check_state(struct cxlflash_cfg *cfg) 1524 + { 1525 + struct device *dev = &cfg->dev->dev; 1526 + int rc = 0; 1527 + 1528 + retry: 1529 + switch (cfg->state) { 1530 + case STATE_LIMBO: 1531 + dev_dbg(dev, "%s: Limbo, going to wait...\n", __func__); 1532 + rc = wait_event_interruptible(cfg->limbo_waitq, 1533 + cfg->state != STATE_LIMBO); 1534 + if (unlikely(rc)) 1535 + break; 1536 + goto retry; 1537 + case STATE_FAILTERM: 1538 + dev_dbg(dev, "%s: Failed/Terminating!\n", __func__); 1539 + rc = -ENODEV; 1540 + break; 1541 + default: 1542 + break; 1543 + } 1544 + 1545 + return rc; 1546 + } 1547 + 1548 + /** 1549 + * cxlflash_afu_recover() - initiates AFU recovery 1550 + * @sdev: SCSI device associated with LUN. 1551 + * @recover: Recover ioctl data structure. 1552 + * 1553 + * Only a single recovery is allowed at a time to avoid exhausting CXL 1554 + * resources (leading to recovery failure) in the event that we're up 1555 + * against the maximum number of contexts limit. For similar reasons, 1556 + * a context recovery is retried if there are multiple recoveries taking 1557 + * place at the same time and the failure was due to CXL services being 1558 + * unable to keep up. 1559 + * 1560 + * Because a user can detect an error condition before the kernel, it is 1561 + * quite possible for this routine to act as the kernel's EEH detection 1562 + * source (MMIO read of mbox_r). Because of this, there is a window of 1563 + * time where an EEH might have been detected but not yet 'serviced' 1564 + * (callback invoked, causing the device to enter limbo state). To avoid 1565 + * looping in this routine during that window, a 1 second sleep is in place 1566 + * between the time the MMIO failure is detected and the time a wait on the 1567 + * limbo wait queue is attempted via check_state(). 
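The retry policy described above (retry on -ENODEV while other recoveries are in flight or a retry budget remains, sleeping between attempts) has the following shape. The operation and backoff hooks here are hypothetical stand-ins for recover_context() and msleep():

```c
#include <stddef.h>

static int flaky_calls;

/* Stand-in for recover_context(): fails with -19 (-ENODEV) twice,
 * then succeeds. */
static int flaky_op(void *unused)
{
	(void)unused;
	return ++flaky_calls < 3 ? -19 : 0;
}

/* Shape of the lretry loop in cxlflash_afu_recover(): retry a
 * transiently-failing operation up to 'budget' extra times, invoking
 * an optional backoff hook (msleep(100) in the driver) between tries. */
int retry_recover_sim(int (*op)(void *), void *arg,
		      int budget, void (*backoff)(void))
{
	int rc;

	for (;;) {
		rc = op(arg);
		if (rc == 0 || budget-- <= 0)
			return rc;
		if (backoff)
			backoff();
	}
}
```

The driver additionally drops and re-takes the recovery mutex around the sleep so that concurrent recoveries can make progress while one backs off.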
1568 + * 1569 + * Return: 0 on success, -errno on failure 1570 + */ 1571 + static int cxlflash_afu_recover(struct scsi_device *sdev, 1572 + struct dk_cxlflash_recover_afu *recover) 1573 + { 1574 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 1575 + struct device *dev = &cfg->dev->dev; 1576 + struct llun_info *lli = sdev->hostdata; 1577 + struct afu *afu = cfg->afu; 1578 + struct ctx_info *ctxi = NULL; 1579 + struct mutex *mutex = &cfg->ctx_recovery_mutex; 1580 + u64 ctxid = DECODE_CTXID(recover->context_id), 1581 + rctxid = recover->context_id; 1582 + long reg; 1583 + int lretry = 20; /* up to 2 seconds */ 1584 + int rc = 0; 1585 + 1586 + atomic_inc(&cfg->recovery_threads); 1587 + rc = mutex_lock_interruptible(mutex); 1588 + if (rc) 1589 + goto out; 1590 + 1591 + dev_dbg(dev, "%s: reason 0x%016llX rctxid=%016llX\n", 1592 + __func__, recover->reason, rctxid); 1593 + 1594 + retry: 1595 + /* Ensure that this process is attached to the context */ 1596 + ctxi = get_context(cfg, rctxid, lli, CTX_CTRL_ERR_FALLBACK); 1597 + if (unlikely(!ctxi)) { 1598 + dev_dbg(dev, "%s: Bad context! (%llu)\n", __func__, ctxid);
1599 + rc = -EINVAL; 1600 + goto out; 1601 + } 1602 + 1603 + if (ctxi->err_recovery_active) { 1604 + retry_recover: 1605 + rc = recover_context(cfg, ctxi); 1606 + if (unlikely(rc)) { 1607 + dev_err(dev, "%s: Recovery failed for context %llu (rc=%d)\n", 1608 + __func__, ctxid, rc); 1609 + if ((rc == -ENODEV) && 1610 + ((atomic_read(&cfg->recovery_threads) > 1) || 1611 + (lretry--))) { 1612 + dev_dbg(dev, "%s: Going to try again!\n", 1613 + __func__); 1614 + mutex_unlock(mutex); 1615 + msleep(100); 1616 + rc = mutex_lock_interruptible(mutex); 1617 + if (rc) 1618 + goto out; 1619 + goto retry_recover; 1620 + } 1621 + 1622 + goto out; 1623 + } 1624 + 1625 + ctxi->err_recovery_active = false; 1626 + recover->context_id = ctxi->ctxid; 1627 + recover->adap_fd = ctxi->lfd; 1628 + recover->mmio_size = sizeof(afu->afu_map->hosts[0].harea); 1629 + recover->hdr.return_flags |= 1630 + DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET; 1631 + goto out; 1632 + } 1633 + 1634 + /* Test if in error state */ 1635 + reg = readq_be(&afu->ctrl_map->mbox_r); 1636 + if (reg == -1) { 1637 + dev_dbg(dev, "%s: MMIO read fail! Wait for recovery...\n", 1638 + __func__); 1639 + mutex_unlock(&ctxi->mutex); 1640 + ctxi = NULL; 1641 + ssleep(1); 1642 + rc = check_state(cfg); 1643 + if (unlikely(rc)) 1644 + goto out; 1645 + goto retry; 1646 + } 1647 + 1648 + dev_dbg(dev, "%s: MMIO working, no recovery required!\n", __func__); 1649 + out: 1650 + if (likely(ctxi)) 1651 + put_context(ctxi); 1652 + mutex_unlock(mutex); 1653 + atomic_dec_if_positive(&cfg->recovery_threads); 1654 + return rc; 1655 + } 1656 + 1657 + /** 1658 + * process_sense() - evaluates and processes sense data 1659 + * @sdev: SCSI device associated with LUN. 1660 + * @verify: Verify ioctl data structure.
1661 + * 1662 + * Return: 0 on success, -errno on failure 1663 + */ 1664 + static int process_sense(struct scsi_device *sdev, 1665 + struct dk_cxlflash_verify *verify) 1666 + { 1667 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 1668 + struct device *dev = &cfg->dev->dev; 1669 + struct llun_info *lli = sdev->hostdata; 1670 + struct glun_info *gli = lli->parent; 1671 + u64 prev_lba = gli->max_lba; 1672 + struct scsi_sense_hdr sshdr = { 0 }; 1673 + int rc = 0; 1674 + 1675 + rc = scsi_normalize_sense((const u8 *)&verify->sense_data, 1676 + DK_CXLFLASH_VERIFY_SENSE_LEN, &sshdr); 1677 + if (!rc) { 1678 + dev_err(dev, "%s: Failed to normalize sense data!\n", __func__); 1679 + rc = -EINVAL; 1680 + goto out; 1681 + } 1682 + 1683 + switch (sshdr.sense_key) { 1684 + case NO_SENSE: 1685 + case RECOVERED_ERROR: 1686 + /* fall through */ 1687 + case NOT_READY: 1688 + break; 1689 + case UNIT_ATTENTION: 1690 + switch (sshdr.asc) { 1691 + case 0x29: /* Power on Reset or Device Reset */ 1692 + /* fall through */ 1693 + case 0x2A: /* Device settings/capacity changed */ 1694 + rc = read_cap16(sdev, lli); 1695 + if (rc) { 1696 + rc = -ENODEV; 1697 + break; 1698 + } 1699 + if (prev_lba != gli->max_lba) 1700 + dev_dbg(dev, "%s: Capacity changed old=%lld " 1701 + "new=%lld\n", __func__, prev_lba, 1702 + gli->max_lba); 1703 + break; 1704 + case 0x3F: /* Report LUNs changed, Rescan. */ 1705 + scsi_scan_host(cfg->host); 1706 + break; 1707 + default: 1708 + rc = -EIO; 1709 + break; 1710 + } 1711 + break; 1712 + default: 1713 + rc = -EIO; 1714 + break; 1715 + } 1716 + out: 1717 + dev_dbg(dev, "%s: sense_key %x asc %x ascq %x rc %d\n", __func__, 1718 + sshdr.sense_key, sshdr.asc, sshdr.ascq, rc); 1719 + return rc; 1720 + } 1721 + 1722 + /** 1723 + * cxlflash_disk_verify() - verifies a LUN is the same and handle size changes 1724 + * @sdev: SCSI device associated with LUN. 1725 + * @verify: Verify ioctl data structure. 
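The sense evaluation in process_sense() above reduces to a two-level dispatch on sense key and additional sense code. A pure-function sketch of the same decision table (the SPC sense key and ASC values are the standard ones; the action names are invented for the sketch):

```c
/* Outcomes mirroring process_sense(): ignore, re-read capacity,
 * rescan the host, or fail with an I/O error. */
enum sense_action { ACT_NONE, ACT_READ_CAP, ACT_RESCAN, ACT_FAIL };

enum sense_action classify_sense(unsigned char sense_key, unsigned char asc)
{
	switch (sense_key) {
	case 0x0:		/* NO SENSE */
	case 0x1:		/* RECOVERED ERROR */
	case 0x2:		/* NOT READY */
		return ACT_NONE;
	case 0x6:		/* UNIT ATTENTION */
		switch (asc) {
		case 0x29:	/* power on, reset, or bus device reset */
		case 0x2A:	/* device settings/capacity changed */
			return ACT_READ_CAP;
		case 0x3F:	/* reported LUNs data changed */
			return ACT_RESCAN;
		default:
			return ACT_FAIL;
		}
	default:
		return ACT_FAIL;
	}
}
```

Keeping the dispatch pure like this makes the policy easy to unit-test; the driver version interleaves the side effects (read_cap16(), scsi_scan_host()) with the classification.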
1726 + * 1727 + * Return: 0 on success, -errno on failure 1728 + */ 1729 + static int cxlflash_disk_verify(struct scsi_device *sdev, 1730 + struct dk_cxlflash_verify *verify) 1731 + { 1732 + int rc = 0; 1733 + struct ctx_info *ctxi = NULL; 1734 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 1735 + struct device *dev = &cfg->dev->dev; 1736 + struct llun_info *lli = sdev->hostdata; 1737 + struct glun_info *gli = lli->parent; 1738 + struct sisl_rht_entry *rhte = NULL; 1739 + res_hndl_t rhndl = verify->rsrc_handle; 1740 + u64 ctxid = DECODE_CTXID(verify->context_id), 1741 + rctxid = verify->context_id; 1742 + u64 last_lba = 0; 1743 + 1744 + dev_dbg(dev, "%s: ctxid=%llu rhndl=%016llX, hint=%016llX, " 1745 + "flags=%016llX\n", __func__, ctxid, verify->rsrc_handle, 1746 + verify->hint, verify->hdr.flags); 1747 + 1748 + ctxi = get_context(cfg, rctxid, lli, 0); 1749 + if (unlikely(!ctxi)) { 1750 + dev_dbg(dev, "%s: Bad context! (%llu)\n", __func__, ctxid); 1751 + rc = -EINVAL; 1752 + goto out; 1753 + } 1754 + 1755 + rhte = get_rhte(ctxi, rhndl, lli); 1756 + if (unlikely(!rhte)) { 1757 + dev_dbg(dev, "%s: Bad resource handle! (%d)\n", 1758 + __func__, rhndl); 1759 + rc = -EINVAL; 1760 + goto out; 1761 + } 1762 + 1763 + /* 1764 + * Look at the hint/sense to see if it requires us to redrive 1765 + * inquiry (i.e. the Unit attention is due to the WWN changing). 
1766 + */ 1767 + if (verify->hint & DK_CXLFLASH_VERIFY_HINT_SENSE) { 1768 + rc = process_sense(sdev, verify); 1769 + if (unlikely(rc)) { 1770 + dev_err(dev, "%s: Failed to validate sense data (%d)\n", 1771 + __func__, rc); 1772 + goto out; 1773 + } 1774 + } 1775 + 1776 + switch (gli->mode) { 1777 + case MODE_PHYSICAL: 1778 + last_lba = gli->max_lba; 1779 + break; 1780 + case MODE_VIRTUAL: 1781 + /* Cast lxt_cnt to u64 for multiply to be treated as 64bit op */ 1782 + last_lba = ((u64)rhte->lxt_cnt * MC_CHUNK_SIZE * gli->blk_len); 1783 + last_lba /= CXLFLASH_BLOCK_SIZE; 1784 + last_lba--; 1785 + break; 1786 + default: 1787 + WARN(1, "Unsupported LUN mode!"); 1788 + } 1789 + 1790 + verify->last_lba = last_lba; 1791 + 1792 + out: 1793 + if (likely(ctxi)) 1794 + put_context(ctxi); 1795 + dev_dbg(dev, "%s: returning rc=%d llba=%llX\n", 1796 + __func__, rc, verify->last_lba); 1797 + return rc; 1798 + } 1799 + 1800 + /** 1801 + * decode_ioctl() - translates an encoded ioctl to an easily identifiable string 1802 + * @cmd: The ioctl command to decode. 1803 + * 1804 + * Return: A string identifying the decoded ioctl. 
1805 + */ 1806 + static char *decode_ioctl(int cmd) 1807 + { 1808 + switch (cmd) { 1809 + case DK_CXLFLASH_ATTACH: 1810 + return __stringify_1(DK_CXLFLASH_ATTACH); 1811 + case DK_CXLFLASH_USER_DIRECT: 1812 + return __stringify_1(DK_CXLFLASH_USER_DIRECT); 1813 + case DK_CXLFLASH_USER_VIRTUAL: 1814 + return __stringify_1(DK_CXLFLASH_USER_VIRTUAL); 1815 + case DK_CXLFLASH_VLUN_RESIZE: 1816 + return __stringify_1(DK_CXLFLASH_VLUN_RESIZE); 1817 + case DK_CXLFLASH_RELEASE: 1818 + return __stringify_1(DK_CXLFLASH_RELEASE); 1819 + case DK_CXLFLASH_DETACH: 1820 + return __stringify_1(DK_CXLFLASH_DETACH); 1821 + case DK_CXLFLASH_VERIFY: 1822 + return __stringify_1(DK_CXLFLASH_VERIFY); 1823 + case DK_CXLFLASH_VLUN_CLONE: 1824 + return __stringify_1(DK_CXLFLASH_VLUN_CLONE); 1825 + case DK_CXLFLASH_RECOVER_AFU: 1826 + return __stringify_1(DK_CXLFLASH_RECOVER_AFU); 1827 + case DK_CXLFLASH_MANAGE_LUN: 1828 + return __stringify_1(DK_CXLFLASH_MANAGE_LUN); 1829 + } 1830 + 1831 + return "UNKNOWN"; 1832 + } 1833 + 1834 + /** 1835 + * cxlflash_disk_direct_open() - opens a direct (physical) disk 1836 + * @sdev: SCSI device associated with LUN. 1837 + * @arg: UDirect ioctl data structure. 1838 + * 1839 + * On successful return, the user is informed of the resource handle 1840 + * to be used to identify the direct lun and the size (in blocks) of 1841 + * the direct lun in last LBA format. 
1842 + * 1843 + * Return: 0 on success, -errno on failure 1844 + */ 1845 + static int cxlflash_disk_direct_open(struct scsi_device *sdev, void *arg) 1846 + { 1847 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 1848 + struct device *dev = &cfg->dev->dev; 1849 + struct afu *afu = cfg->afu; 1850 + struct llun_info *lli = sdev->hostdata; 1851 + struct glun_info *gli = lli->parent; 1852 + 1853 + struct dk_cxlflash_udirect *pphys = (struct dk_cxlflash_udirect *)arg; 1854 + 1855 + u64 ctxid = DECODE_CTXID(pphys->context_id), 1856 + rctxid = pphys->context_id; 1857 + u64 lun_size = 0; 1858 + u64 last_lba = 0; 1859 + u64 rsrc_handle = -1; 1860 + u32 port = CHAN2PORT(sdev->channel); 1861 + 1862 + int rc = 0; 1863 + 1864 + struct ctx_info *ctxi = NULL; 1865 + struct sisl_rht_entry *rhte = NULL; 1866 + 1867 + pr_debug("%s: ctxid=%llu ls=0x%llx\n", __func__, ctxid, lun_size); 1868 + 1869 + rc = cxlflash_lun_attach(gli, MODE_PHYSICAL, false); 1870 + if (unlikely(rc)) { 1871 + dev_dbg(dev, "%s: Failed to attach to LUN! (PHYSICAL)\n", 1872 + __func__); 1873 + goto out; 1874 + } 1875 + 1876 + ctxi = get_context(cfg, rctxid, lli, 0); 1877 + if (unlikely(!ctxi)) { 1878 + dev_dbg(dev, "%s: Bad context! 
(%llu)\n", __func__, ctxid); 1879 + rc = -EINVAL; 1880 + goto err1; 1881 + } 1882 + 1883 + rhte = rhte_checkout(ctxi, lli); 1884 + if (unlikely(!rhte)) { 1885 + dev_dbg(dev, "%s: too many opens for this context\n", __func__); 1886 + rc = -EMFILE; /* too many opens */ 1887 + goto err1; 1888 + } 1889 + 1890 + rsrc_handle = (rhte - ctxi->rht_start); 1891 + 1892 + rht_format1(rhte, lli->lun_id[sdev->channel], ctxi->rht_perms, port); 1893 + cxlflash_afu_sync(afu, ctxid, rsrc_handle, AFU_LW_SYNC); 1894 + 1895 + last_lba = gli->max_lba; 1896 + pphys->hdr.return_flags = 0; 1897 + pphys->last_lba = last_lba; 1898 + pphys->rsrc_handle = rsrc_handle; 1899 + 1900 + out: 1901 + if (likely(ctxi)) 1902 + put_context(ctxi); 1903 + dev_dbg(dev, "%s: returning handle 0x%llx rc=%d llba %lld\n", 1904 + __func__, rsrc_handle, rc, last_lba); 1905 + return rc; 1906 + 1907 + err1: 1908 + cxlflash_lun_detach(gli); 1909 + goto out; 1910 + } 1911 + 1912 + /** 1913 + * ioctl_common() - common IOCTL handler for driver 1914 + * @sdev: SCSI device associated with LUN. 1915 + * @cmd: IOCTL command. 1916 + * 1917 + * Handles common fencing operations that are valid for multiple ioctls. Always 1918 + * allow through ioctls that are cleanup oriented in nature, even when operating 1919 + * in a failed/terminating state. 
1920 + * 1921 + * Return: 0 on success, -errno on failure 1922 + */ 1923 + static int ioctl_common(struct scsi_device *sdev, int cmd) 1924 + { 1925 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 1926 + struct device *dev = &cfg->dev->dev; 1927 + struct llun_info *lli = sdev->hostdata; 1928 + int rc = 0; 1929 + 1930 + if (unlikely(!lli)) { 1931 + dev_dbg(dev, "%s: Unknown LUN\n", __func__); 1932 + rc = -EINVAL; 1933 + goto out; 1934 + } 1935 + 1936 + rc = check_state(cfg); 1937 + if (unlikely(rc) && (cfg->state == STATE_FAILTERM)) { 1938 + switch (cmd) { 1939 + case DK_CXLFLASH_VLUN_RESIZE: 1940 + case DK_CXLFLASH_RELEASE: 1941 + case DK_CXLFLASH_DETACH: 1942 + dev_dbg(dev, "%s: Command override! (%d)\n", 1943 + __func__, rc); 1944 + rc = 0; 1945 + break; 1946 + } 1947 + } 1948 + out: 1949 + return rc; 1950 + } 1951 + 1952 + /** 1953 + * cxlflash_ioctl() - IOCTL handler for driver 1954 + * @sdev: SCSI device associated with LUN. 1955 + * @cmd: IOCTL command. 1956 + * @arg: Userspace ioctl data structure. 
1957 + * 1958 + * Return: 0 on success, -errno on failure 1959 + */ 1960 + int cxlflash_ioctl(struct scsi_device *sdev, int cmd, void __user *arg) 1961 + { 1962 + typedef int (*sioctl) (struct scsi_device *, void *); 1963 + 1964 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 1965 + struct device *dev = &cfg->dev->dev; 1966 + struct afu *afu = cfg->afu; 1967 + struct dk_cxlflash_hdr *hdr; 1968 + char buf[sizeof(union cxlflash_ioctls)]; 1969 + size_t size = 0; 1970 + bool known_ioctl = false; 1971 + int idx; 1972 + int rc = 0; 1973 + struct Scsi_Host *shost = sdev->host; 1974 + sioctl do_ioctl = NULL; 1975 + 1976 + static const struct { 1977 + size_t size; 1978 + sioctl ioctl; 1979 + } ioctl_tbl[] = { /* NOTE: order matters here */ 1980 + {sizeof(struct dk_cxlflash_attach), (sioctl)cxlflash_disk_attach}, 1981 + {sizeof(struct dk_cxlflash_udirect), cxlflash_disk_direct_open}, 1982 + {sizeof(struct dk_cxlflash_release), (sioctl)cxlflash_disk_release}, 1983 + {sizeof(struct dk_cxlflash_detach), (sioctl)cxlflash_disk_detach}, 1984 + {sizeof(struct dk_cxlflash_verify), (sioctl)cxlflash_disk_verify}, 1985 + {sizeof(struct dk_cxlflash_recover_afu), (sioctl)cxlflash_afu_recover}, 1986 + {sizeof(struct dk_cxlflash_manage_lun), (sioctl)cxlflash_manage_lun}, 1987 + {sizeof(struct dk_cxlflash_uvirtual), cxlflash_disk_virtual_open}, 1988 + {sizeof(struct dk_cxlflash_resize), (sioctl)cxlflash_vlun_resize}, 1989 + {sizeof(struct dk_cxlflash_clone), (sioctl)cxlflash_disk_clone}, 1990 + }; 1991 + 1992 + /* Restrict command set to physical support only for internal LUN */ 1993 + if (afu->internal_lun) 1994 + switch (cmd) { 1995 + case DK_CXLFLASH_RELEASE: 1996 + case DK_CXLFLASH_USER_VIRTUAL: 1997 + case DK_CXLFLASH_VLUN_RESIZE: 1998 + case DK_CXLFLASH_VLUN_CLONE: 1999 + dev_dbg(dev, "%s: %s not supported for lun_mode=%d\n", 2000 + __func__, decode_ioctl(cmd), afu->internal_lun); 2001 + rc = -EINVAL; 2002 + goto cxlflash_ioctl_exit; 2003 + } 2004 + 2005 + 
switch (cmd) { 2006 + case DK_CXLFLASH_ATTACH: 2007 + case DK_CXLFLASH_USER_DIRECT: 2008 + case DK_CXLFLASH_RELEASE: 2009 + case DK_CXLFLASH_DETACH: 2010 + case DK_CXLFLASH_VERIFY: 2011 + case DK_CXLFLASH_RECOVER_AFU: 2012 + case DK_CXLFLASH_USER_VIRTUAL: 2013 + case DK_CXLFLASH_VLUN_RESIZE: 2014 + case DK_CXLFLASH_VLUN_CLONE: 2015 + dev_dbg(dev, "%s: %s (%08X) on dev(%d/%d/%d/%llu)\n", 2016 + __func__, decode_ioctl(cmd), cmd, shost->host_no, 2017 + sdev->channel, sdev->id, sdev->lun); 2018 + rc = ioctl_common(sdev, cmd); 2019 + if (unlikely(rc)) 2020 + goto cxlflash_ioctl_exit; 2021 + 2022 + /* fall through */ 2023 + 2024 + case DK_CXLFLASH_MANAGE_LUN: 2025 + known_ioctl = true; 2026 + idx = _IOC_NR(cmd) - _IOC_NR(DK_CXLFLASH_ATTACH); 2027 + size = ioctl_tbl[idx].size; 2028 + do_ioctl = ioctl_tbl[idx].ioctl; 2029 + 2030 + if (likely(do_ioctl)) 2031 + break; 2032 + 2033 + /* fall through */ 2034 + default: 2035 + rc = -EINVAL; 2036 + goto cxlflash_ioctl_exit; 2037 + } 2038 + 2039 + if (unlikely(copy_from_user(&buf, arg, size))) { 2040 + dev_err(dev, "%s: copy_from_user() fail! " 2041 + "size=%lu cmd=%d (%s) arg=%p\n", 2042 + __func__, size, cmd, decode_ioctl(cmd), arg); 2043 + rc = -EFAULT; 2044 + goto cxlflash_ioctl_exit; 2045 + } 2046 + 2047 + hdr = (struct dk_cxlflash_hdr *)&buf; 2048 + if (hdr->version != DK_CXLFLASH_VERSION_0) { 2049 + dev_dbg(dev, "%s: Version %u not supported for %s\n", 2050 + __func__, hdr->version, decode_ioctl(cmd)); 2051 + rc = -EINVAL; 2052 + goto cxlflash_ioctl_exit; 2053 + } 2054 + 2055 + if (hdr->rsvd[0] || hdr->rsvd[1] || hdr->rsvd[2] || hdr->return_flags) { 2056 + dev_dbg(dev, "%s: Reserved/rflags populated!\n", __func__); 2057 + rc = -EINVAL; 2058 + goto cxlflash_ioctl_exit; 2059 + } 2060 + 2061 + rc = do_ioctl(sdev, (void *)&buf); 2062 + if (likely(!rc)) 2063 + if (unlikely(copy_to_user(arg, &buf, size))) { 2064 + dev_err(dev, "%s: copy_to_user() fail! 
" 2065 + "size=%lu cmd=%d (%s) arg=%p\n", 2066 + __func__, size, cmd, decode_ioctl(cmd), arg); 2067 + rc = -EFAULT; 2068 + } 2069 + 2070 + /* fall through to exit */ 2071 + 2072 + cxlflash_ioctl_exit: 2073 + if (unlikely(rc && known_ioctl)) 2074 + dev_err(dev, "%s: ioctl %s (%08X) on dev(%d/%d/%d/%llu) " 2075 + "returned rc %d\n", __func__, 2076 + decode_ioctl(cmd), cmd, shost->host_no, 2077 + sdev->channel, sdev->id, sdev->lun, rc); 2078 + else 2079 + dev_dbg(dev, "%s: ioctl %s (%08X) on dev(%d/%d/%d/%llu) " 2080 + "returned rc %d\n", __func__, decode_ioctl(cmd), 2081 + cmd, shost->host_no, sdev->channel, sdev->id, 2082 + sdev->lun, rc); 2083 + return rc; 2084 + }
+147
drivers/scsi/cxlflash/superpipe.h
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #ifndef _CXLFLASH_SUPERPIPE_H 16 + #define _CXLFLASH_SUPERPIPE_H 17 + 18 + extern struct cxlflash_global global; 19 + 20 + /* 21 + * Terminology: use afu (and not adapter) to refer to the HW. 22 + * Adapter is the entire slot and includes PSL out of which 23 + * only the AFU is visible to user space. 24 + */ 25 + 26 + /* Chunk size parms: note sislite minimum chunk size is 27 + 0x10000 LBAs corresponding to an NMASK of 16. 28 + */ 29 + #define MC_CHUNK_SIZE (1 << MC_RHT_NMASK) /* in LBAs */ 30 + 31 + #define MC_DISCOVERY_TIMEOUT 5 /* 5 secs */ 32 + 33 + #define CHAN2PORT(_x) ((_x) + 1) 34 + #define PORT2CHAN(_x) ((_x) - 1) 35 + 36 + enum lun_mode { 37 + MODE_NONE = 0, 38 + MODE_VIRTUAL, 39 + MODE_PHYSICAL 40 + }; 41 + 42 + /* Global (entire driver, spans adapters) lun_info structure */ 43 + struct glun_info { 44 + u64 max_lba; /* from read cap(16) */ 45 + u32 blk_len; /* from read cap(16) */ 46 + enum lun_mode mode; /* NONE, VIRTUAL, PHYSICAL */ 47 + int users; /* Number of users w/ references to LUN */ 48 + 49 + u8 wwid[16]; 50 + 51 + struct mutex mutex; 52 + 53 + struct blka blka; 54 + struct list_head list; 55 + }; 56 + 57 + /* Local (per-adapter) lun_info structure */ 58 + struct llun_info { 59 + u64 lun_id[CXLFLASH_NUM_FC_PORTS]; /* from REPORT_LUNS */ 60 + u32 lun_index; /* Index in the LUN table */ 61 + u32 host_no; /* host_no from Scsi_host */ 62 + u32 port_sel; /* What port to use for this LUN */ 63 + bool newly_created; /* Whether the 
LUN was just discovered */ 64 + bool in_table; /* Whether a LUN table entry was created */ 65 + 66 + u8 wwid[16]; /* Keep a duplicate copy here? */ 67 + 68 + struct glun_info *parent; /* Pointer to entry in global LUN structure */ 69 + struct scsi_device *sdev; 70 + struct list_head list; 71 + }; 72 + 73 + struct lun_access { 74 + struct llun_info *lli; 75 + struct scsi_device *sdev; 76 + struct list_head list; 77 + }; 78 + 79 + enum ctx_ctrl { 80 + CTX_CTRL_CLONE = (1 << 1), 81 + CTX_CTRL_ERR = (1 << 2), 82 + CTX_CTRL_ERR_FALLBACK = (1 << 3), 83 + CTX_CTRL_NOPID = (1 << 4), 84 + CTX_CTRL_FILE = (1 << 5) 85 + }; 86 + 87 + #define ENCODE_CTXID(_ctx, _id) (((((u64)_ctx) & 0xFFFFFFFF) << 32) | _id) 88 + #define DECODE_CTXID(_val) (_val & 0xFFFFFFFF) 89 + 90 + struct ctx_info { 91 + struct sisl_ctrl_map *ctrl_map; /* initialized at startup */ 92 + struct sisl_rht_entry *rht_start; /* 1 page (req'd for alignment), 93 + alloc/free on attach/detach */ 94 + u32 rht_out; /* Number of checked out RHT entries */ 95 + u32 rht_perms; /* User-defined permissions for RHT entries */ 96 + struct llun_info **rht_lun; /* Mapping of RHT entries to LUNs */ 97 + bool *rht_needs_ws; /* User-desired write-same function per RHTE */ 98 + 99 + struct cxl_ioctl_start_work work; 100 + u64 ctxid; 101 + int lfd; 102 + pid_t pid; 103 + bool unavail; 104 + bool err_recovery_active; 105 + struct mutex mutex; /* Context protection */ 106 + struct cxl_context *ctx; 107 + struct list_head luns; /* LUNs attached to this context */ 108 + const struct vm_operations_struct *cxl_mmap_vmops; 109 + struct file *file; 110 + struct list_head list; /* Link contexts in error recovery */ 111 + }; 112 + 113 + struct cxlflash_global { 114 + struct mutex mutex; 115 + struct list_head gluns;/* list of glun_info structs */ 116 + struct page *err_page; /* One page of all 0xF for error notification */ 117 + }; 118 + 119 + int cxlflash_vlun_resize(struct scsi_device *, struct dk_cxlflash_resize *); 120 + int 
_cxlflash_vlun_resize(struct scsi_device *, struct ctx_info *, 121 + struct dk_cxlflash_resize *); 122 + 123 + int cxlflash_disk_release(struct scsi_device *, struct dk_cxlflash_release *); 124 + int _cxlflash_disk_release(struct scsi_device *, struct ctx_info *, 125 + struct dk_cxlflash_release *); 126 + 127 + int cxlflash_disk_clone(struct scsi_device *, struct dk_cxlflash_clone *); 128 + 129 + int cxlflash_disk_virtual_open(struct scsi_device *, void *); 130 + 131 + int cxlflash_lun_attach(struct glun_info *, enum lun_mode, bool); 132 + void cxlflash_lun_detach(struct glun_info *); 133 + 134 + struct ctx_info *get_context(struct cxlflash_cfg *, u64, void *, enum ctx_ctrl); 135 + void put_context(struct ctx_info *); 136 + 137 + struct sisl_rht_entry *get_rhte(struct ctx_info *, res_hndl_t, 138 + struct llun_info *); 139 + 140 + struct sisl_rht_entry *rhte_checkout(struct ctx_info *, struct llun_info *); 141 + void rhte_checkin(struct ctx_info *, struct sisl_rht_entry *); 142 + 143 + void cxlflash_ba_terminate(struct ba_lun *); 144 + 145 + int cxlflash_manage_lun(struct scsi_device *, struct dk_cxlflash_manage_lun *); 146 + 147 + #endif /* ifndef _CXLFLASH_SUPERPIPE_H */
+1243
drivers/scsi/cxlflash/vlun.c
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #include <linux/syscalls.h> 16 + #include <misc/cxl.h> 17 + #include <asm/unaligned.h> 18 + #include <asm/bitsperlong.h> 19 + 20 + #include <scsi/scsi_cmnd.h> 21 + #include <scsi/scsi_host.h> 22 + #include <uapi/scsi/cxlflash_ioctl.h> 23 + 24 + #include "sislite.h" 25 + #include "common.h" 26 + #include "vlun.h" 27 + #include "superpipe.h" 28 + 29 + /** 30 + * marshal_virt_to_resize() - translate uvirtual to resize structure 31 + * @virt: Source structure from which to translate/copy. 32 + * @resize: Destination structure for the translate/copy. 33 + */ 34 + static void marshal_virt_to_resize(struct dk_cxlflash_uvirtual *virt, 35 + struct dk_cxlflash_resize *resize) 36 + { 37 + resize->hdr = virt->hdr; 38 + resize->context_id = virt->context_id; 39 + resize->rsrc_handle = virt->rsrc_handle; 40 + resize->req_size = virt->lun_size; 41 + resize->last_lba = virt->last_lba; 42 + } 43 + 44 + /** 45 + * marshal_clone_to_rele() - translate clone to release structure 46 + * @clone: Source structure from which to translate/copy. 47 + * @rele: Destination structure for the translate/copy. 48 + */ 49 + static void marshal_clone_to_rele(struct dk_cxlflash_clone *clone, 50 + struct dk_cxlflash_release *release) 51 + { 52 + release->hdr = clone->hdr; 53 + release->context_id = clone->context_id_dst; 54 + } 55 + 56 + /** 57 + * ba_init() - initializes a block allocator 58 + * @ba_lun: Block allocator to initialize. 
59 + * 60 + * Return: 0 on success, -errno on failure 61 + */ 62 + static int ba_init(struct ba_lun *ba_lun) 63 + { 64 + struct ba_lun_info *bali = NULL; 65 + int lun_size_au = 0, i = 0; 66 + int last_word_underflow = 0; 67 + u64 *lam; 68 + 69 + pr_debug("%s: Initializing LUN: lun_id = %llX, " 70 + "ba_lun->lsize = %lX, ba_lun->au_size = %lX\n", 71 + __func__, ba_lun->lun_id, ba_lun->lsize, ba_lun->au_size); 72 + 73 + /* Calculate bit map size */ 74 + lun_size_au = ba_lun->lsize / ba_lun->au_size; 75 + if (lun_size_au == 0) { 76 + pr_debug("%s: Requested LUN size of 0!\n", __func__); 77 + return -EINVAL; 78 + } 79 + 80 + /* Allocate lun information container */ 81 + bali = kzalloc(sizeof(struct ba_lun_info), GFP_KERNEL); 82 + if (unlikely(!bali)) { 83 + pr_err("%s: Failed to allocate lun_info for lun_id %llX\n", 84 + __func__, ba_lun->lun_id); 85 + return -ENOMEM; 86 + } 87 + 88 + bali->total_aus = lun_size_au; 89 + bali->lun_bmap_size = lun_size_au / BITS_PER_LONG; 90 + 91 + if (lun_size_au % BITS_PER_LONG) 92 + bali->lun_bmap_size++; 93 + 94 + /* Allocate bitmap space */ 95 + bali->lun_alloc_map = kzalloc((bali->lun_bmap_size * sizeof(u64)), 96 + GFP_KERNEL); 97 + if (unlikely(!bali->lun_alloc_map)) { 98 + pr_err("%s: Failed to allocate lun allocation map: " 99 + "lun_id = %llX\n", __func__, ba_lun->lun_id); 100 + kfree(bali); 101 + return -ENOMEM; 102 + } 103 + 104 + /* Initialize the bit map size and set all bits to '1' */ 105 + bali->free_aun_cnt = lun_size_au; 106 + 107 + for (i = 0; i < bali->lun_bmap_size; i++) 108 + bali->lun_alloc_map[i] = 0xFFFFFFFFFFFFFFFFULL; 109 + 110 + /* If the last word not fully utilized, mark extra bits as allocated */ 111 + last_word_underflow = (bali->lun_bmap_size * BITS_PER_LONG); 112 + last_word_underflow -= bali->free_aun_cnt; 113 + if (last_word_underflow > 0) { 114 + lam = &bali->lun_alloc_map[bali->lun_bmap_size - 1]; 115 + for (i = (HIBIT - last_word_underflow + 1); 116 + i < BITS_PER_LONG; 117 + i++) 118 + clear_bit(i, 
(ulong *)lam); 119 + } 120 + 121 + /* Initialize high elevator index, low/curr already at 0 from kzalloc */ 122 + bali->free_high_idx = bali->lun_bmap_size; 123 + 124 + /* Allocate clone map */ 125 + bali->aun_clone_map = kzalloc((bali->total_aus * sizeof(u8)), 126 + GFP_KERNEL); 127 + if (unlikely(!bali->aun_clone_map)) { 128 + pr_err("%s: Failed to allocate clone map: lun_id = %llX\n", 129 + __func__, ba_lun->lun_id); 130 + kfree(bali->lun_alloc_map); 131 + kfree(bali); 132 + return -ENOMEM; 133 + } 134 + 135 + /* Pass the allocated lun info as a handle to the user */ 136 + ba_lun->ba_lun_handle = bali; 137 + 138 + pr_debug("%s: Successfully initialized the LUN: " 139 + "lun_id = %llX, bitmap size = %X, free_aun_cnt = %llX\n", 140 + __func__, ba_lun->lun_id, bali->lun_bmap_size, 141 + bali->free_aun_cnt); 142 + return 0; 143 + } 144 + 145 + /** 146 + * find_free_range() - locates a free bit within the block allocator 147 + * @low: First word in block allocator to start search. 148 + * @high: Last word in block allocator to search. 149 + * @bali: LUN information structure owning the block allocator to search. 150 + * @bit_word: Passes back the word in the block allocator owning the free bit. 
151 + * 152 + * Return: The bit position within the passed back word, -1 on failure 153 + */ 154 + static int find_free_range(u32 low, 155 + u32 high, 156 + struct ba_lun_info *bali, int *bit_word) 157 + { 158 + int i; 159 + u64 bit_pos = -1; 160 + ulong *lam, num_bits; 161 + 162 + for (i = low; i < high; i++) 163 + if (bali->lun_alloc_map[i] != 0) { 164 + lam = (ulong *)&bali->lun_alloc_map[i]; 165 + num_bits = (sizeof(*lam) * BITS_PER_BYTE); 166 + bit_pos = find_first_bit(lam, num_bits); 167 + 168 + pr_devel("%s: Found free bit %llX in lun " 169 + "map entry %llX at bitmap index = %X\n", 170 + __func__, bit_pos, bali->lun_alloc_map[i], 171 + i); 172 + 173 + *bit_word = i; 174 + bali->free_aun_cnt--; 175 + clear_bit(bit_pos, lam); 176 + break; 177 + } 178 + 179 + return bit_pos; 180 + } 181 + 182 + /** 183 + * ba_alloc() - allocates a block from the block allocator 184 + * @ba_lun: Block allocator from which to allocate a block. 185 + * 186 + * Return: The allocated block, -1 on failure 187 + */ 188 + static u64 ba_alloc(struct ba_lun *ba_lun) 189 + { 190 + u64 bit_pos = -1; 191 + int bit_word = 0; 192 + struct ba_lun_info *bali = NULL; 193 + 194 + bali = ba_lun->ba_lun_handle; 195 + 196 + pr_debug("%s: Received block allocation request: " 197 + "lun_id = %llX, free_aun_cnt = %llX\n", 198 + __func__, ba_lun->lun_id, bali->free_aun_cnt); 199 + 200 + if (bali->free_aun_cnt == 0) { 201 + pr_debug("%s: No space left on LUN: lun_id = %llX\n", 202 + __func__, ba_lun->lun_id); 203 + return -1ULL; 204 + } 205 + 206 + /* Search to find a free entry, curr->high then low->curr */ 207 + bit_pos = find_free_range(bali->free_curr_idx, 208 + bali->free_high_idx, bali, &bit_word); 209 + if (bit_pos == -1) { 210 + bit_pos = find_free_range(bali->free_low_idx, 211 + bali->free_curr_idx, 212 + bali, &bit_word); 213 + if (bit_pos == -1) { 214 + pr_debug("%s: Could not find an allocation unit on LUN:" 215 + " lun_id = %llX\n", __func__, ba_lun->lun_id); 216 + return -1ULL; 217 + } 218 
+ } 219 + 220 + /* Update the free_curr_idx */ 221 + if (bit_pos == HIBIT) 222 + bali->free_curr_idx = bit_word + 1; 223 + else 224 + bali->free_curr_idx = bit_word; 225 + 226 + pr_debug("%s: Allocating AU number %llX, on lun_id %llX, " 227 + "free_aun_cnt = %llX\n", __func__, 228 + ((bit_word * BITS_PER_LONG) + bit_pos), ba_lun->lun_id, 229 + bali->free_aun_cnt); 230 + 231 + return (u64) ((bit_word * BITS_PER_LONG) + bit_pos); 232 + } 233 + 234 + /** 235 + * validate_alloc() - validates the specified block has been allocated 236 + * @ba_lun_info: LUN info owning the block allocator. 237 + * @aun: Block to validate. 238 + * 239 + * Return: 0 on success, -1 on failure 240 + */ 241 + static int validate_alloc(struct ba_lun_info *bali, u64 aun) 242 + { 243 + int idx = 0, bit_pos = 0; 244 + 245 + idx = aun / BITS_PER_LONG; 246 + bit_pos = aun % BITS_PER_LONG; 247 + 248 + if (test_bit(bit_pos, (ulong *)&bali->lun_alloc_map[idx])) 249 + return -1; 250 + 251 + return 0; 252 + } 253 + 254 + /** 255 + * ba_free() - frees a block from the block allocator 256 + * @ba_lun: Block allocator from which to allocate a block. 257 + * @to_free: Block to free. 258 + * 259 + * Return: 0 on success, -1 on failure 260 + */ 261 + static int ba_free(struct ba_lun *ba_lun, u64 to_free) 262 + { 263 + int idx = 0, bit_pos = 0; 264 + struct ba_lun_info *bali = NULL; 265 + 266 + bali = ba_lun->ba_lun_handle; 267 + 268 + if (validate_alloc(bali, to_free)) { 269 + pr_debug("%s: The AUN %llX is not allocated on lun_id %llX\n", 270 + __func__, to_free, ba_lun->lun_id); 271 + return -1; 272 + } 273 + 274 + pr_debug("%s: Received a request to free AU %llX on lun_id %llX, " 275 + "free_aun_cnt = %llX\n", __func__, to_free, ba_lun->lun_id, 276 + bali->free_aun_cnt); 277 + 278 + if (bali->aun_clone_map[to_free] > 0) { 279 + pr_debug("%s: AUN %llX on lun_id %llX has been cloned. 
Clone " 280 + "count = %X\n", __func__, to_free, ba_lun->lun_id, 281 + bali->aun_clone_map[to_free]); 282 + bali->aun_clone_map[to_free]--; 283 + return 0; 284 + } 285 + 286 + idx = to_free / BITS_PER_LONG; 287 + bit_pos = to_free % BITS_PER_LONG; 288 + 289 + set_bit(bit_pos, (ulong *)&bali->lun_alloc_map[idx]); 290 + bali->free_aun_cnt++; 291 + 292 + if (idx < bali->free_low_idx) 293 + bali->free_low_idx = idx; 294 + else if (idx > bali->free_high_idx) 295 + bali->free_high_idx = idx; 296 + 297 + pr_debug("%s: Successfully freed AU at bit_pos %X, bit map index %X on " 298 + "lun_id %llX, free_aun_cnt = %llX\n", __func__, bit_pos, idx, 299 + ba_lun->lun_id, bali->free_aun_cnt); 300 + 301 + return 0; 302 + } 303 + 304 + /** 305 + * ba_clone() - Clone a chunk of the block allocation table 306 + * @ba_lun: Block allocator from which to allocate a block. 307 + * @to_clone: Block to clone. 308 + * 309 + * Return: 0 on success, -1 on failure 310 + */ 311 + static int ba_clone(struct ba_lun *ba_lun, u64 to_clone) 312 + { 313 + struct ba_lun_info *bali = ba_lun->ba_lun_handle; 314 + 315 + if (validate_alloc(bali, to_clone)) { 316 + pr_debug("%s: AUN %llX is not allocated on lun_id %llX\n", 317 + __func__, to_clone, ba_lun->lun_id); 318 + return -1; 319 + } 320 + 321 + pr_debug("%s: Received a request to clone AUN %llX on lun_id %llX\n", 322 + __func__, to_clone, ba_lun->lun_id); 323 + 324 + if (bali->aun_clone_map[to_clone] == MAX_AUN_CLONE_CNT) { 325 + pr_debug("%s: AUN %llX on lun_id %llX hit max clones already\n", 326 + __func__, to_clone, ba_lun->lun_id); 327 + return -1; 328 + } 329 + 330 + bali->aun_clone_map[to_clone]++; 331 + 332 + return 0; 333 + } 334 + 335 + /** 336 + * ba_space() - returns the amount of free space left in the block allocator 337 + * @ba_lun: Block allocator. 
338 + * 339 + * Return: Amount of free space in block allocator 340 + */ 341 + static u64 ba_space(struct ba_lun *ba_lun) 342 + { 343 + struct ba_lun_info *bali = ba_lun->ba_lun_handle; 344 + 345 + return bali->free_aun_cnt; 346 + } 347 + 348 + /** 349 + * cxlflash_ba_terminate() - frees resources associated with the block allocator 350 + * @ba_lun: Block allocator. 351 + * 352 + * Safe to call in a partially allocated state. 353 + */ 354 + void cxlflash_ba_terminate(struct ba_lun *ba_lun) 355 + { 356 + struct ba_lun_info *bali = ba_lun->ba_lun_handle; 357 + 358 + if (bali) { 359 + kfree(bali->aun_clone_map); 360 + kfree(bali->lun_alloc_map); 361 + kfree(bali); 362 + ba_lun->ba_lun_handle = NULL; 363 + } 364 + } 365 + 366 + /** 367 + * init_vlun() - initializes a LUN for virtual use 368 + * @lun_info: LUN information structure that owns the block allocator. 369 + * 370 + * Return: 0 on success, -errno on failure 371 + */ 372 + static int init_vlun(struct llun_info *lli) 373 + { 374 + int rc = 0; 375 + struct glun_info *gli = lli->parent; 376 + struct blka *blka = &gli->blka; 377 + 378 + memset(blka, 0, sizeof(*blka)); 379 + mutex_init(&blka->mutex); 380 + 381 + /* LUN IDs are unique per port, save the index instead */ 382 + blka->ba_lun.lun_id = lli->lun_index; 383 + blka->ba_lun.lsize = gli->max_lba + 1; 384 + blka->ba_lun.lba_size = gli->blk_len; 385 + 386 + blka->ba_lun.au_size = MC_CHUNK_SIZE; 387 + blka->nchunk = blka->ba_lun.lsize / MC_CHUNK_SIZE; 388 + 389 + rc = ba_init(&blka->ba_lun); 390 + if (unlikely(rc)) 391 + pr_debug("%s: cannot init block_alloc, rc=%d\n", __func__, rc); 392 + 393 + pr_debug("%s: returning rc=%d lli=%p\n", __func__, rc, lli); 394 + return rc; 395 + } 396 + 397 + /** 398 + * write_same16() - sends a SCSI WRITE_SAME16 (0) command to specified LUN 399 + * @sdev: SCSI device associated with LUN. 400 + * @lba: Logical block address to start write same. 401 + * @nblks: Number of logical blocks to write same. 
402 + * 403 + * Return: 0 on success, -errno on failure 404 + */ 405 + static int write_same16(struct scsi_device *sdev, 406 + u64 lba, 407 + u32 nblks) 408 + { 409 + u8 *cmd_buf = NULL; 410 + u8 *scsi_cmd = NULL; 411 + u8 *sense_buf = NULL; 412 + int rc = 0; 413 + int result = 0; 414 + int ws_limit = SISLITE_MAX_WS_BLOCKS; 415 + u64 offset = lba; 416 + int left = nblks; 417 + u32 tout = sdev->request_queue->rq_timeout; 418 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 419 + struct device *dev = &cfg->dev->dev; 420 + 421 + cmd_buf = kzalloc(CMD_BUFSIZE, GFP_KERNEL); 422 + scsi_cmd = kzalloc(MAX_COMMAND_SIZE, GFP_KERNEL); 423 + sense_buf = kzalloc(SCSI_SENSE_BUFFERSIZE, GFP_KERNEL); 424 + if (unlikely(!cmd_buf || !scsi_cmd || !sense_buf)) { 425 + rc = -ENOMEM; 426 + goto out; 427 + } 428 + 429 + while (left > 0) { 430 + 431 + scsi_cmd[0] = WRITE_SAME_16; 432 + put_unaligned_be64(offset, &scsi_cmd[2]); 433 + put_unaligned_be32(ws_limit < left ? ws_limit : left, 434 + &scsi_cmd[10]); 435 + 436 + result = scsi_execute(sdev, scsi_cmd, DMA_TO_DEVICE, cmd_buf, 437 + CMD_BUFSIZE, sense_buf, tout, 5, 0, NULL); 438 + if (result) { 439 + dev_err_ratelimited(dev, "%s: command failed for " 440 + "offset %lld result=0x%x\n", 441 + __func__, offset, result); 442 + rc = -EIO; 443 + goto out; 444 + } 445 + left -= ws_limit; 446 + offset += ws_limit; 447 + } 448 + 449 + out: 450 + kfree(cmd_buf); 451 + kfree(scsi_cmd); 452 + kfree(sense_buf); 453 + pr_debug("%s: returning rc=%d\n", __func__, rc); 454 + return rc; 455 + } 456 + 457 + /** 458 + * grow_lxt() - expands the translation table associated with the specified RHTE 459 + * @afu: AFU associated with the host. 460 + * @sdev: SCSI device associated with LUN. 461 + * @ctxid: Context ID of context owning the RHTE. 462 + * @rhndl: Resource handle associated with the RHTE. 463 + * @rhte: Resource handle entry (RHTE). 464 + * @new_size: Number of translation entries associated with RHTE. 
465 + * 466 + * By design, this routine employs a 'best attempt' allocation and will 467 + * truncate the requested size down if there is not sufficient space in 468 + * the block allocator to satisfy the request but there does exist some 469 + * amount of space. The user is made aware of this by returning the size 470 + * allocated. 471 + * 472 + * Return: 0 on success, -errno on failure 473 + */ 474 + static int grow_lxt(struct afu *afu, 475 + struct scsi_device *sdev, 476 + ctx_hndl_t ctxid, 477 + res_hndl_t rhndl, 478 + struct sisl_rht_entry *rhte, 479 + u64 *new_size) 480 + { 481 + struct sisl_lxt_entry *lxt = NULL, *lxt_old = NULL; 482 + struct llun_info *lli = sdev->hostdata; 483 + struct glun_info *gli = lli->parent; 484 + struct blka *blka = &gli->blka; 485 + u32 av_size; 486 + u32 ngrps, ngrps_old; 487 + u64 aun; /* chunk# allocated by block allocator */ 488 + u64 delta = *new_size - rhte->lxt_cnt; 489 + u64 my_new_size; 490 + int i, rc = 0; 491 + 492 + /* 493 + * Check what is available in the block allocator before re-allocating 494 + * LXT array. This is done up front under the mutex which must not be 495 + * released until after allocation is complete. 
496 + */ 497 + mutex_lock(&blka->mutex); 498 + av_size = ba_space(&blka->ba_lun); 499 + if (unlikely(av_size <= 0)) { 500 + pr_debug("%s: ba_space error: av_size %d\n", __func__, av_size); 501 + mutex_unlock(&blka->mutex); 502 + rc = -ENOSPC; 503 + goto out; 504 + } 505 + 506 + if (av_size < delta) 507 + delta = av_size; 508 + 509 + lxt_old = rhte->lxt_start; 510 + ngrps_old = LXT_NUM_GROUPS(rhte->lxt_cnt); 511 + ngrps = LXT_NUM_GROUPS(rhte->lxt_cnt + delta); 512 + 513 + if (ngrps != ngrps_old) { 514 + /* reallocate to fit new size */ 515 + lxt = kzalloc((sizeof(*lxt) * LXT_GROUP_SIZE * ngrps), 516 + GFP_KERNEL); 517 + if (unlikely(!lxt)) { 518 + mutex_unlock(&blka->mutex); 519 + rc = -ENOMEM; 520 + goto out; 521 + } 522 + 523 + /* copy over all old entries */ 524 + memcpy(lxt, lxt_old, (sizeof(*lxt) * rhte->lxt_cnt)); 525 + } else 526 + lxt = lxt_old; 527 + 528 + /* nothing can fail from now on */ 529 + my_new_size = rhte->lxt_cnt + delta; 530 + 531 + /* add new entries to the end */ 532 + for (i = rhte->lxt_cnt; i < my_new_size; i++) { 533 + /* 534 + * Due to the earlier check of available space, ba_alloc 535 + * cannot fail here. If it did due to internal error, 536 + * leave a rlba_base of -1u which will likely be a 537 + * invalid LUN (too large). 538 + */ 539 + aun = ba_alloc(&blka->ba_lun); 540 + if ((aun == -1ULL) || (aun >= blka->nchunk)) 541 + pr_debug("%s: ba_alloc error: allocated chunk# %llX, " 542 + "max %llX\n", __func__, aun, blka->nchunk - 1); 543 + 544 + /* select both ports, use r/w perms from RHT */ 545 + lxt[i].rlba_base = ((aun << MC_CHUNK_SHIFT) | 546 + (lli->lun_index << LXT_LUNIDX_SHIFT) | 547 + (RHT_PERM_RW << LXT_PERM_SHIFT | 548 + lli->port_sel)); 549 + } 550 + 551 + mutex_unlock(&blka->mutex); 552 + 553 + /* 554 + * The following sequence is prescribed in the SISlite spec 555 + * for syncing up with the AFU when adding LXT entries. 
556 + */ 557 + dma_wmb(); /* Make LXT updates visible */ 558 + 559 + rhte->lxt_start = lxt; 560 + dma_wmb(); /* Make RHT entry's LXT table update visible */ 561 + 562 + rhte->lxt_cnt = my_new_size; 563 + dma_wmb(); /* Make RHT entry's LXT table size update visible */ 564 + 565 + cxlflash_afu_sync(afu, ctxid, rhndl, AFU_LW_SYNC); 566 + 567 + /* free old lxt if reallocated */ 568 + if (lxt != lxt_old) 569 + kfree(lxt_old); 570 + *new_size = my_new_size; 571 + out: 572 + pr_debug("%s: returning rc=%d\n", __func__, rc); 573 + return rc; 574 + } 575 + 576 + /** 577 + * shrink_lxt() - reduces translation table associated with the specified RHTE 578 + * @afu: AFU associated with the host. 579 + * @sdev: SCSI device associated with LUN. 580 + * @rhndl: Resource handle associated with the RHTE. 581 + * @rhte: Resource handle entry (RHTE). 582 + * @ctxi: Context owning resources. 583 + * @new_size: Number of translation entries associated with RHTE. 584 + * 585 + * Return: 0 on success, -errno on failure 586 + */ 587 + static int shrink_lxt(struct afu *afu, 588 + struct scsi_device *sdev, 589 + res_hndl_t rhndl, 590 + struct sisl_rht_entry *rhte, 591 + struct ctx_info *ctxi, 592 + u64 *new_size) 593 + { 594 + struct sisl_lxt_entry *lxt, *lxt_old; 595 + struct llun_info *lli = sdev->hostdata; 596 + struct glun_info *gli = lli->parent; 597 + struct blka *blka = &gli->blka; 598 + ctx_hndl_t ctxid = DECODE_CTXID(ctxi->ctxid); 599 + bool needs_ws = ctxi->rht_needs_ws[rhndl]; 600 + bool needs_sync = !ctxi->err_recovery_active; 601 + u32 ngrps, ngrps_old; 602 + u64 aun; /* chunk# allocated by block allocator */ 603 + u64 delta = rhte->lxt_cnt - *new_size; 604 + u64 my_new_size; 605 + int i, rc = 0; 606 + 607 + lxt_old = rhte->lxt_start; 608 + ngrps_old = LXT_NUM_GROUPS(rhte->lxt_cnt); 609 + ngrps = LXT_NUM_GROUPS(rhte->lxt_cnt - delta); 610 + 611 + if (ngrps != ngrps_old) { 612 + /* Reallocate to fit new size unless new size is 0 */ 613 + if (ngrps) { 614 + lxt = 
kzalloc((sizeof(*lxt) * LXT_GROUP_SIZE * ngrps), 615 + GFP_KERNEL); 616 + if (unlikely(!lxt)) { 617 + rc = -ENOMEM; 618 + goto out; 619 + } 620 + 621 + /* Copy over old entries that will remain */ 622 + memcpy(lxt, lxt_old, 623 + (sizeof(*lxt) * (rhte->lxt_cnt - delta))); 624 + } else 625 + lxt = NULL; 626 + } else 627 + lxt = lxt_old; 628 + 629 + /* Nothing can fail from now on */ 630 + my_new_size = rhte->lxt_cnt - delta; 631 + 632 + /* 633 + * The following sequence is prescribed in the SISlite spec 634 + * for syncing up with the AFU when removing LXT entries. 635 + */ 636 + rhte->lxt_cnt = my_new_size; 637 + dma_wmb(); /* Make RHT entry's LXT table size update visible */ 638 + 639 + rhte->lxt_start = lxt; 640 + dma_wmb(); /* Make RHT entry's LXT table update visible */ 641 + 642 + if (needs_sync) 643 + cxlflash_afu_sync(afu, ctxid, rhndl, AFU_HW_SYNC); 644 + 645 + if (needs_ws) { 646 + /* 647 + * Mark the context as unavailable, so that we can release 648 + * the mutex safely. 649 + */ 650 + ctxi->unavail = true; 651 + mutex_unlock(&ctxi->mutex); 652 + } 653 + 654 + /* Free LBAs allocated to freed chunks */ 655 + mutex_lock(&blka->mutex); 656 + for (i = delta - 1; i >= 0; i--) { 657 + /* Mask the higher 48 bits before shifting, even though 658 + * it is a noop 659 + */ 660 + aun = (lxt_old[my_new_size + i].rlba_base & SISL_ASTATUS_MASK); 661 + aun = (aun >> MC_CHUNK_SHIFT); 662 + if (needs_ws) 663 + write_same16(sdev, aun, MC_CHUNK_SIZE); 664 + ba_free(&blka->ba_lun, aun); 665 + } 666 + mutex_unlock(&blka->mutex); 667 + 668 + if (needs_ws) { 669 + /* Make the context visible again */ 670 + mutex_lock(&ctxi->mutex); 671 + ctxi->unavail = false; 672 + } 673 + 674 + /* Free old lxt if reallocated */ 675 + if (lxt != lxt_old) 676 + kfree(lxt_old); 677 + *new_size = my_new_size; 678 + out: 679 + pr_debug("%s: returning rc=%d\n", __func__, rc); 680 + return rc; 681 + } 682 + 683 + /** 684 + * _cxlflash_vlun_resize() - changes the size of a virtual lun 685 + * @sdev: 
SCSI device associated with LUN owning virtual LUN. 686 + * @ctxi: Context owning resources. 687 + * @resize: Resize ioctl data structure. 688 + * 689 + * On successful return, the user is informed of the new size (in blocks) 690 + * of the virtual lun in last LBA format. When the size of the virtual 691 + * lun is zero, the last LBA is reflected as -1. See comment in the 692 + * prologue for _cxlflash_disk_release() regarding AFU syncs and contexts 693 + * on the error recovery list. 694 + * 695 + * Return: 0 on success, -errno on failure 696 + */ 697 + int _cxlflash_vlun_resize(struct scsi_device *sdev, 698 + struct ctx_info *ctxi, 699 + struct dk_cxlflash_resize *resize) 700 + { 701 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 702 + struct llun_info *lli = sdev->hostdata; 703 + struct glun_info *gli = lli->parent; 704 + struct afu *afu = cfg->afu; 705 + bool put_ctx = false; 706 + 707 + res_hndl_t rhndl = resize->rsrc_handle; 708 + u64 new_size; 709 + u64 nsectors; 710 + u64 ctxid = DECODE_CTXID(resize->context_id), 711 + rctxid = resize->context_id; 712 + 713 + struct sisl_rht_entry *rhte; 714 + 715 + int rc = 0; 716 + 717 + /* 718 + * The requested size (req_size) is always assumed to be in 4k blocks, 719 + * so we have to convert it here from 4k to chunk size. 720 + */ 721 + nsectors = (resize->req_size * CXLFLASH_BLOCK_SIZE) / gli->blk_len; 722 + new_size = DIV_ROUND_UP(nsectors, MC_CHUNK_SIZE); 723 + 724 + pr_debug("%s: ctxid=%llu rhndl=0x%llx, req_size=0x%llx," 725 + "new_size=%llx\n", __func__, ctxid, resize->rsrc_handle, 726 + resize->req_size, new_size); 727 + 728 + if (unlikely(gli->mode != MODE_VIRTUAL)) { 729 + pr_debug("%s: LUN mode does not support resize! (%d)\n", 730 + __func__, gli->mode); 731 + rc = -EINVAL; 732 + goto out; 733 + 734 + } 735 + 736 + if (!ctxi) { 737 + ctxi = get_context(cfg, rctxid, lli, CTX_CTRL_ERR_FALLBACK); 738 + if (unlikely(!ctxi)) { 739 + pr_debug("%s: Bad context! 
(%llu)\n", __func__, ctxid); 740 + rc = -EINVAL; 741 + goto out; 742 + } 743 + 744 + put_ctx = true; 745 + } 746 + 747 + rhte = get_rhte(ctxi, rhndl, lli); 748 + if (unlikely(!rhte)) { 749 + pr_debug("%s: Bad resource handle! (%u)\n", __func__, rhndl); 750 + rc = -EINVAL; 751 + goto out; 752 + } 753 + 754 + if (new_size > rhte->lxt_cnt) 755 + rc = grow_lxt(afu, sdev, ctxid, rhndl, rhte, &new_size); 756 + else if (new_size < rhte->lxt_cnt) 757 + rc = shrink_lxt(afu, sdev, rhndl, rhte, ctxi, &new_size); 758 + 759 + resize->hdr.return_flags = 0; 760 + resize->last_lba = (new_size * MC_CHUNK_SIZE * gli->blk_len); 761 + resize->last_lba /= CXLFLASH_BLOCK_SIZE; 762 + resize->last_lba--; 763 + 764 + out: 765 + if (put_ctx) 766 + put_context(ctxi); 767 + pr_debug("%s: resized to %lld returning rc=%d\n", 768 + __func__, resize->last_lba, rc); 769 + return rc; 770 + } 771 + 772 + int cxlflash_vlun_resize(struct scsi_device *sdev, 773 + struct dk_cxlflash_resize *resize) 774 + { 775 + return _cxlflash_vlun_resize(sdev, NULL, resize); 776 + } 777 + 778 + /** 779 + * cxlflash_restore_luntable() - Restore LUN table to prior state 780 + * @cfg: Internal structure associated with the host. 
781 + */ 782 + void cxlflash_restore_luntable(struct cxlflash_cfg *cfg) 783 + { 784 + struct llun_info *lli, *temp; 785 + u32 chan; 786 + u32 lind; 787 + struct afu *afu = cfg->afu; 788 + struct sisl_global_map *agm = &afu->afu_map->global; 789 + 790 + mutex_lock(&global.mutex); 791 + 792 + list_for_each_entry_safe(lli, temp, &cfg->lluns, list) { 793 + if (!lli->in_table) 794 + continue; 795 + 796 + lind = lli->lun_index; 797 + 798 + if (lli->port_sel == BOTH_PORTS) { 799 + writeq_be(lli->lun_id[0], &agm->fc_port[0][lind]); 800 + writeq_be(lli->lun_id[1], &agm->fc_port[1][lind]); 801 + pr_debug("%s: Virtual LUN on slot %d id0=%llx, " 802 + "id1=%llx\n", __func__, lind, 803 + lli->lun_id[0], lli->lun_id[1]); 804 + } else { 805 + chan = PORT2CHAN(lli->port_sel); 806 + writeq_be(lli->lun_id[chan], &agm->fc_port[chan][lind]); 807 + pr_debug("%s: Virtual LUN on slot %d chan=%d, " 808 + "id=%llx\n", __func__, lind, chan, 809 + lli->lun_id[chan]); 810 + } 811 + } 812 + 813 + mutex_unlock(&global.mutex); 814 + } 815 + 816 + /** 817 + * init_luntable() - write an entry in the LUN table 818 + * @cfg: Internal structure associated with the host. 819 + * @lli: Per adapter LUN information structure. 820 + * 821 + * On successful return, a LUN table entry is created. 822 + * At the top for LUNs visible on both ports. 823 + * At the bottom for LUNs visible only on one port. 824 + * 825 + * Return: 0 on success, -errno on failure 826 + */ 827 + static int init_luntable(struct cxlflash_cfg *cfg, struct llun_info *lli) 828 + { 829 + u32 chan; 830 + u32 lind; 831 + int rc = 0; 832 + struct afu *afu = cfg->afu; 833 + struct sisl_global_map *agm = &afu->afu_map->global; 834 + 835 + mutex_lock(&global.mutex); 836 + 837 + if (lli->in_table) 838 + goto out; 839 + 840 + if (lli->port_sel == BOTH_PORTS) { 841 + /* 842 + * If this LUN is visible from both ports, we will put 843 + * it in the top half of the LUN table. 
844 + */ 845 + if ((cfg->promote_lun_index == cfg->last_lun_index[0]) || 846 + (cfg->promote_lun_index == cfg->last_lun_index[1])) { 847 + rc = -ENOSPC; 848 + goto out; 849 + } 850 + 851 + lind = lli->lun_index = cfg->promote_lun_index; 852 + writeq_be(lli->lun_id[0], &agm->fc_port[0][lind]); 853 + writeq_be(lli->lun_id[1], &agm->fc_port[1][lind]); 854 + cfg->promote_lun_index++; 855 + pr_debug("%s: Virtual LUN on slot %d id0=%llx, id1=%llx\n", 856 + __func__, lind, lli->lun_id[0], lli->lun_id[1]); 857 + } else { 858 + /* 859 + * If this LUN is visible only from one port, we will put 860 + * it in the bottom half of the LUN table. 861 + */ 862 + chan = PORT2CHAN(lli->port_sel); 863 + if (cfg->promote_lun_index == cfg->last_lun_index[chan]) { 864 + rc = -ENOSPC; 865 + goto out; 866 + } 867 + 868 + lind = lli->lun_index = cfg->last_lun_index[chan]; 869 + writeq_be(lli->lun_id[chan], &agm->fc_port[chan][lind]); 870 + cfg->last_lun_index[chan]--; 871 + pr_debug("%s: Virtual LUN on slot %d chan=%d, id=%llx\n", 872 + __func__, lind, chan, lli->lun_id[chan]); 873 + } 874 + 875 + lli->in_table = true; 876 + out: 877 + mutex_unlock(&global.mutex); 878 + pr_debug("%s: returning rc=%d\n", __func__, rc); 879 + return rc; 880 + } 881 + 882 + /** 883 + * cxlflash_disk_virtual_open() - open a virtual disk of specified size 884 + * @sdev: SCSI device associated with LUN owning virtual LUN. 885 + * @arg: UVirtual ioctl data structure. 886 + * 887 + * On successful return, the user is informed of the resource handle 888 + * to be used to identify the virtual lun and the size (in blocks) of 889 + * the virtual lun in last LBA format. When the size of the virtual lun 890 + * is zero, the last LBA is reflected as -1. 
891 + * 892 + * Return: 0 on success, -errno on failure 893 + */ 894 + int cxlflash_disk_virtual_open(struct scsi_device *sdev, void *arg) 895 + { 896 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 897 + struct device *dev = &cfg->dev->dev; 898 + struct llun_info *lli = sdev->hostdata; 899 + struct glun_info *gli = lli->parent; 900 + 901 + struct dk_cxlflash_uvirtual *virt = (struct dk_cxlflash_uvirtual *)arg; 902 + struct dk_cxlflash_resize resize; 903 + 904 + u64 ctxid = DECODE_CTXID(virt->context_id), 905 + rctxid = virt->context_id; 906 + u64 lun_size = virt->lun_size; 907 + u64 last_lba = 0; 908 + u64 rsrc_handle = -1; 909 + 910 + int rc = 0; 911 + 912 + struct ctx_info *ctxi = NULL; 913 + struct sisl_rht_entry *rhte = NULL; 914 + 915 + pr_debug("%s: ctxid=%llu ls=0x%llx\n", __func__, ctxid, lun_size); 916 + 917 + mutex_lock(&gli->mutex); 918 + if (gli->mode == MODE_NONE) { 919 + /* Setup the LUN table and block allocator on first call */ 920 + rc = init_luntable(cfg, lli); 921 + if (rc) { 922 + dev_err(dev, "%s: call to init_luntable failed " 923 + "rc=%d!\n", __func__, rc); 924 + goto err0; 925 + } 926 + 927 + rc = init_vlun(lli); 928 + if (rc) { 929 + dev_err(dev, "%s: call to init_vlun failed rc=%d!\n", 930 + __func__, rc); 931 + rc = -ENOMEM; 932 + goto err0; 933 + } 934 + } 935 + 936 + rc = cxlflash_lun_attach(gli, MODE_VIRTUAL, true); 937 + if (unlikely(rc)) { 938 + dev_err(dev, "%s: Failed to attach to LUN! (VIRTUAL)\n", 939 + __func__); 940 + goto err0; 941 + } 942 + mutex_unlock(&gli->mutex); 943 + 944 + ctxi = get_context(cfg, rctxid, lli, 0); 945 + if (unlikely(!ctxi)) { 946 + dev_err(dev, "%s: Bad context! 
(%llu)\n", __func__, ctxid); 947 + rc = -EINVAL; 948 + goto err1; 949 + } 950 + 951 + rhte = rhte_checkout(ctxi, lli); 952 + if (unlikely(!rhte)) { 953 + dev_err(dev, "%s: too many opens for this context\n", __func__); 954 + rc = -EMFILE; /* too many opens */ 955 + goto err1; 956 + } 957 + 958 + rsrc_handle = (rhte - ctxi->rht_start); 959 + 960 + /* Populate RHT format 0 */ 961 + rhte->nmask = MC_RHT_NMASK; 962 + rhte->fp = SISL_RHT_FP(0U, ctxi->rht_perms); 963 + 964 + /* Resize even if requested size is 0 */ 965 + marshal_virt_to_resize(virt, &resize); 966 + resize.rsrc_handle = rsrc_handle; 967 + rc = _cxlflash_vlun_resize(sdev, ctxi, &resize); 968 + if (rc) { 969 + dev_err(dev, "%s: resize failed rc %d\n", __func__, rc); 970 + goto err2; 971 + } 972 + last_lba = resize.last_lba; 973 + 974 + if (virt->hdr.flags & DK_CXLFLASH_UVIRTUAL_NEED_WRITE_SAME) 975 + ctxi->rht_needs_ws[rsrc_handle] = true; 976 + 977 + virt->hdr.return_flags = 0; 978 + virt->last_lba = last_lba; 979 + virt->rsrc_handle = rsrc_handle; 980 + 981 + out: 982 + if (likely(ctxi)) 983 + put_context(ctxi); 984 + pr_debug("%s: returning handle 0x%llx rc=%d llba %lld\n", 985 + __func__, rsrc_handle, rc, last_lba); 986 + return rc; 987 + 988 + err2: 989 + rhte_checkin(ctxi, rhte); 990 + err1: 991 + cxlflash_lun_detach(gli); 992 + goto out; 993 + err0: 994 + /* Special common cleanup prior to successful LUN attach */ 995 + cxlflash_ba_terminate(&gli->blka.ba_lun); 996 + mutex_unlock(&gli->mutex); 997 + goto out; 998 + } 999 + 1000 + /** 1001 + * clone_lxt() - copies translation tables from source to destination RHTE 1002 + * @afu: AFU associated with the host. 1003 + * @blka: Block allocator associated with LUN. 1004 + * @ctxid: Context ID of context owning the RHTE. 1005 + * @rhndl: Resource handle associated with the RHTE. 1006 + * @rhte: Destination resource handle entry (RHTE). 1007 + * @rhte_src: Source resource handle entry (RHTE). 
1008 + * 1009 + * Return: 0 on success, -errno on failure 1010 + */ 1011 + static int clone_lxt(struct afu *afu, 1012 + struct blka *blka, 1013 + ctx_hndl_t ctxid, 1014 + res_hndl_t rhndl, 1015 + struct sisl_rht_entry *rhte, 1016 + struct sisl_rht_entry *rhte_src) 1017 + { 1018 + struct sisl_lxt_entry *lxt; 1019 + u32 ngrps; 1020 + u64 aun; /* chunk# allocated by block allocator */ 1021 + int i, j; 1022 + 1023 + ngrps = LXT_NUM_GROUPS(rhte_src->lxt_cnt); 1024 + 1025 + if (ngrps) { 1026 + /* allocate new LXTs for clone */ 1027 + lxt = kzalloc((sizeof(*lxt) * LXT_GROUP_SIZE * ngrps), 1028 + GFP_KERNEL); 1029 + if (unlikely(!lxt)) 1030 + return -ENOMEM; 1031 + 1032 + /* copy over */ 1033 + memcpy(lxt, rhte_src->lxt_start, 1034 + (sizeof(*lxt) * rhte_src->lxt_cnt)); 1035 + 1036 + /* clone the LBAs in block allocator via ref_cnt */ 1037 + mutex_lock(&blka->mutex); 1038 + for (i = 0; i < rhte_src->lxt_cnt; i++) { 1039 + aun = (lxt[i].rlba_base >> MC_CHUNK_SHIFT); 1040 + if (ba_clone(&blka->ba_lun, aun) == -1ULL) { 1041 + /* free the clones already made */ 1042 + for (j = 0; j < i; j++) { 1043 + aun = (lxt[j].rlba_base >> 1044 + MC_CHUNK_SHIFT); 1045 + ba_free(&blka->ba_lun, aun); 1046 + } 1047 + 1048 + mutex_unlock(&blka->mutex); 1049 + kfree(lxt); 1050 + return -EIO; 1051 + } 1052 + } 1053 + mutex_unlock(&blka->mutex); 1054 + } else { 1055 + lxt = NULL; 1056 + } 1057 + 1058 + /* 1059 + * The following sequence is prescribed in the SISlite spec 1060 + * for syncing up with the AFU when adding LXT entries. 
1061 + */ 1062 + dma_wmb(); /* Make LXT updates visible */ 1063 + 1064 + rhte->lxt_start = lxt; 1065 + dma_wmb(); /* Make RHT entry's LXT table update visible */ 1066 + 1067 + rhte->lxt_cnt = rhte_src->lxt_cnt; 1068 + dma_wmb(); /* Make RHT entry's LXT table size update visible */ 1069 + 1070 + cxlflash_afu_sync(afu, ctxid, rhndl, AFU_LW_SYNC); 1071 + 1072 + pr_debug("%s: returning\n", __func__); 1073 + return 0; 1074 + } 1075 + 1076 + /** 1077 + * cxlflash_disk_clone() - clone a context by making a snapshot of another 1078 + * @sdev: SCSI device associated with LUN owning virtual LUN. 1079 + * @clone: Clone ioctl data structure. 1080 + * 1081 + * This routine effectively performs cxlflash_disk_open operation for each 1082 + * in-use virtual resource in the source context. Note that the destination 1083 + * context must be in pristine state and cannot have any resource handles 1084 + * open at the time of the clone. 1085 + * 1086 + * Return: 0 on success, -errno on failure 1087 + */ 1088 + int cxlflash_disk_clone(struct scsi_device *sdev, 1089 + struct dk_cxlflash_clone *clone) 1090 + { 1091 + struct cxlflash_cfg *cfg = (struct cxlflash_cfg *)sdev->host->hostdata; 1092 + struct llun_info *lli = sdev->hostdata; 1093 + struct glun_info *gli = lli->parent; 1094 + struct blka *blka = &gli->blka; 1095 + struct afu *afu = cfg->afu; 1096 + struct dk_cxlflash_release release = { { 0 }, 0 }; 1097 + 1098 + struct ctx_info *ctxi_src = NULL, 1099 + *ctxi_dst = NULL; 1100 + struct lun_access *lun_access_src, *lun_access_dst; 1101 + u32 perms; 1102 + u64 ctxid_src = DECODE_CTXID(clone->context_id_src), 1103 + ctxid_dst = DECODE_CTXID(clone->context_id_dst), 1104 + rctxid_src = clone->context_id_src, 1105 + rctxid_dst = clone->context_id_dst; 1106 + int adap_fd_src = clone->adap_fd_src; 1107 + int i, j; 1108 + int rc = 0; 1109 + bool found; 1110 + LIST_HEAD(sidecar); 1111 + 1112 + pr_debug("%s: ctxid_src=%llu ctxid_dst=%llu adap_fd_src=%d\n", 1113 + __func__, ctxid_src, 
ctxid_dst, adap_fd_src); 1114 + 1115 + /* Do not clone yourself */ 1116 + if (unlikely(rctxid_src == rctxid_dst)) { 1117 + rc = -EINVAL; 1118 + goto out; 1119 + } 1120 + 1121 + if (unlikely(gli->mode != MODE_VIRTUAL)) { 1122 + rc = -EINVAL; 1123 + pr_debug("%s: Clone not supported on physical LUNs! (%d)\n", 1124 + __func__, gli->mode); 1125 + goto out; 1126 + } 1127 + 1128 + ctxi_src = get_context(cfg, rctxid_src, lli, CTX_CTRL_CLONE); 1129 + ctxi_dst = get_context(cfg, rctxid_dst, lli, 0); 1130 + if (unlikely(!ctxi_src || !ctxi_dst)) { 1131 + pr_debug("%s: Bad context! (%llu,%llu)\n", __func__, 1132 + ctxid_src, ctxid_dst); 1133 + rc = -EINVAL; 1134 + goto out; 1135 + } 1136 + 1137 + if (unlikely(adap_fd_src != ctxi_src->lfd)) { 1138 + pr_debug("%s: Invalid source adapter fd! (%d)\n", 1139 + __func__, adap_fd_src); 1140 + rc = -EINVAL; 1141 + goto out; 1142 + } 1143 + 1144 + /* Verify there is no open resource handle in the destination context */ 1145 + for (i = 0; i < MAX_RHT_PER_CONTEXT; i++) 1146 + if (ctxi_dst->rht_start[i].nmask != 0) { 1147 + rc = -EINVAL; 1148 + goto out; 1149 + } 1150 + 1151 + /* Clone LUN access list */ 1152 + list_for_each_entry(lun_access_src, &ctxi_src->luns, list) { 1153 + found = false; 1154 + list_for_each_entry(lun_access_dst, &ctxi_dst->luns, list) 1155 + if (lun_access_dst->sdev == lun_access_src->sdev) { 1156 + found = true; 1157 + break; 1158 + } 1159 + 1160 + if (!found) { 1161 + lun_access_dst = kzalloc(sizeof(*lun_access_dst), 1162 + GFP_KERNEL); 1163 + if (unlikely(!lun_access_dst)) { 1164 + pr_err("%s: Unable to allocate lun_access!\n", 1165 + __func__); 1166 + rc = -ENOMEM; 1167 + goto out; 1168 + } 1169 + 1170 + *lun_access_dst = *lun_access_src; 1171 + list_add(&lun_access_dst->list, &sidecar); 1172 + } 1173 + } 1174 + 1175 + if (unlikely(!ctxi_src->rht_out)) { 1176 + pr_debug("%s: Nothing to clone!\n", __func__); 1177 + goto out_success; 1178 + } 1179 + 1180 + /* User specified permission on attach */ 1181 + perms = 
ctxi_dst->rht_perms; 1182 + 1183 + /* 1184 + * Copy over checked-out RHT (and their associated LXT) entries by 1185 + * hand, stopping after we've copied all outstanding entries and 1186 + * cleaning up if the clone fails. 1187 + * 1188 + * Note: This loop is equivalent to performing cxlflash_disk_open and 1189 + * cxlflash_vlun_resize. As such, LUN accounting needs to be taken into 1190 + * account by attaching after each successful RHT entry clone. In the 1191 + * event that a clone failure is experienced, the LUN detach is handled 1192 + * via the cleanup performed by _cxlflash_disk_release. 1193 + */ 1194 + for (i = 0; i < MAX_RHT_PER_CONTEXT; i++) { 1195 + if (ctxi_src->rht_out == ctxi_dst->rht_out) 1196 + break; 1197 + if (ctxi_src->rht_start[i].nmask == 0) 1198 + continue; 1199 + 1200 + /* Consume a destination RHT entry */ 1201 + ctxi_dst->rht_out++; 1202 + ctxi_dst->rht_start[i].nmask = ctxi_src->rht_start[i].nmask; 1203 + ctxi_dst->rht_start[i].fp = 1204 + SISL_RHT_FP_CLONE(ctxi_src->rht_start[i].fp, perms); 1205 + ctxi_dst->rht_lun[i] = ctxi_src->rht_lun[i]; 1206 + 1207 + rc = clone_lxt(afu, blka, ctxid_dst, i, 1208 + &ctxi_dst->rht_start[i], 1209 + &ctxi_src->rht_start[i]); 1210 + if (rc) { 1211 + marshal_clone_to_rele(clone, &release); 1212 + for (j = 0; j < i; j++) { 1213 + release.rsrc_handle = j; 1214 + _cxlflash_disk_release(sdev, ctxi_dst, 1215 + &release); 1216 + } 1217 + 1218 + /* Put back the one we failed on */ 1219 + rhte_checkin(ctxi_dst, &ctxi_dst->rht_start[i]); 1220 + goto err; 1221 + } 1222 + 1223 + cxlflash_lun_attach(gli, gli->mode, false); 1224 + } 1225 + 1226 + out_success: 1227 + list_splice(&sidecar, &ctxi_dst->luns); 1228 + sys_close(adap_fd_src); 1229 + 1230 + /* fall through */ 1231 + out: 1232 + if (ctxi_src) 1233 + put_context(ctxi_src); 1234 + if (ctxi_dst) 1235 + put_context(ctxi_dst); 1236 + pr_debug("%s: returning rc=%d\n", __func__, rc); 1237 + return rc; 1238 + 1239 + err: 1240 + list_for_each_entry_safe(lun_access_src, 
lun_access_dst, &sidecar, list) 1241 + kfree(lun_access_src); 1242 + goto out; 1243 + }
+86
drivers/scsi/cxlflash/vlun.h
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #ifndef _CXLFLASH_VLUN_H 16 + #define _CXLFLASH_VLUN_H 17 + 18 + /* RHT - Resource Handle Table */ 19 + #define MC_RHT_NMASK 16 /* in bits */ 20 + #define MC_CHUNK_SHIFT MC_RHT_NMASK /* shift to go from LBA to chunk# */ 21 + 22 + #define HIBIT (BITS_PER_LONG - 1) 23 + 24 + #define MAX_AUN_CLONE_CNT 0xFF 25 + 26 + /* 27 + * LXT - LBA Translation Table 28 + * 29 + * +-------+-------+-------+-------+-------+-------+-------+---+---+ 30 + * | RLBA_BASE |LUN_IDX| P |SEL| 31 + * +-------+-------+-------+-------+-------+-------+-------+---+---+ 32 + * 33 + * The LXT Entry contains the physical LBA where the chunk starts (RLBA_BASE). 34 + * AFU ORes the low order bits from the virtual LBA (offset into the chunk) 35 + * with RLBA_BASE. The result is the physical LBA to be sent to storage. 36 + * The LXT Entry also contains an index to a LUN TBL and a bitmask of which 37 + * outgoing (FC) ports can be selected. The port select bit-mask is ANDed 38 + * with a global port select bit-mask maintained by the driver. 39 + * In addition, it has permission bits that are ANDed with the 40 + * RHT permissions to arrive at the final permissions for the chunk. 41 + * 42 + * LXT tables are allocated dynamically in groups. This is done to avoid 43 + * a malloc/free overhead each time the LXT has to grow or shrink. 44 + * 45 + * Based on the current lxt_cnt (used), it is always possible to know 46 + * how many are allocated (used+free). 
The number of allocated entries is 47 + * not stored anywhere. 48 + * 49 + * The LXT table is re-allocated whenever it needs to cross into another group. 50 + */ 51 + #define LXT_GROUP_SIZE 8 52 + #define LXT_NUM_GROUPS(lxt_cnt) (((lxt_cnt) + 7)/8) /* alloc'ed groups */ 53 + #define LXT_LUNIDX_SHIFT 8 /* LXT entry, shift for LUN index */ 54 + #define LXT_PERM_SHIFT 4 /* LXT entry, shift for permission bits */ 55 + 56 + struct ba_lun_info { 57 + u64 *lun_alloc_map; 58 + u32 lun_bmap_size; 59 + u32 total_aus; 60 + u64 free_aun_cnt; 61 + 62 + /* indices to be used for elevator lookup of free map */ 63 + u32 free_low_idx; 64 + u32 free_curr_idx; 65 + u32 free_high_idx; 66 + 67 + u8 *aun_clone_map; 68 + }; 69 + 70 + struct ba_lun { 71 + u64 lun_id; 72 + u64 wwpn; 73 + size_t lsize; /* LUN size in number of LBAs */ 74 + size_t lba_size; /* LBA size in number of bytes */ 75 + size_t au_size; /* Allocation Unit size in number of LBAs */ 76 + struct ba_lun_info *ba_lun_handle; 77 + }; 78 + 79 + /* Block Allocator */ 80 + struct blka { 81 + struct ba_lun ba_lun; 82 + u64 nchunk; /* number of chunks */ 83 + struct mutex mutex; 84 + }; 85 + 86 + #endif /* ifndef _CXLFLASH_VLUN_H */
+185 -116
drivers/scsi/hpsa.c
··· 1 1 /* 2 2 * Disk Array driver for HP Smart Array SAS controllers 3 - * Copyright 2000, 2014 Hewlett-Packard Development Company, L.P. 3 + * Copyright 2014-2015 PMC-Sierra, Inc. 4 + * Copyright 2000,2009-2015 Hewlett-Packard Development Company, L.P. 4 5 * 5 6 * This program is free software; you can redistribute it and/or modify 6 7 * it under the terms of the GNU General Public License as published by ··· 12 11 * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or 13 12 * NON INFRINGEMENT. See the GNU General Public License for more details. 14 13 * 15 - * You should have received a copy of the GNU General Public License 16 - * along with this program; if not, write to the Free Software 17 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 18 - * 19 - * Questions/Comments/Bugfixes to iss_storagedev@hp.com 14 + * Questions/Comments/Bugfixes to storagedev@pmcs.com 20 15 * 21 16 */ 22 17 ··· 129 132 {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSI, 0x103C, 0x21CD}, 130 133 {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSI, 0x103C, 0x21CE}, 131 134 {PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0580}, 135 + {PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0581}, 136 + {PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0582}, 137 + {PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0583}, 138 + {PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0584}, 139 + {PCI_VENDOR_ID_ADAPTEC2, 0x0290, 0x9005, 0x0585}, 132 140 {PCI_VENDOR_ID_HP_3PAR, 0x0075, 0x1590, 0x0076}, 133 141 {PCI_VENDOR_ID_HP_3PAR, 0x0075, 0x1590, 0x0087}, 134 142 {PCI_VENDOR_ID_HP_3PAR, 0x0075, 0x1590, 0x007D}, ··· 192 190 {0x21CD103C, "Smart Array", &SA5_access}, 193 191 {0x21CE103C, "Smart HBA", &SA5_access}, 194 192 {0x05809005, "SmartHBA-SA", &SA5_access}, 193 + {0x05819005, "SmartHBA-SA 8i", &SA5_access}, 194 + {0x05829005, "SmartHBA-SA 8i8e", &SA5_access}, 195 + {0x05839005, "SmartHBA-SA 8e", &SA5_access}, 196 + {0x05849005, "SmartHBA-SA 16i", &SA5_access}, 197 + {0x05859005, "SmartHBA-SA 4i4e", &SA5_access}, 195 198 
{0x00761590, "HP Storage P1224 Array Controller", &SA5_access}, 196 199 {0x00871590, "HP Storage P1224e Array Controller", &SA5_access}, 197 200 {0x007D1590, "HP Storage P1228 Array Controller", &SA5_access}, ··· 274 267 static void hpsa_command_resubmit_worker(struct work_struct *work); 275 268 static u32 lockup_detected(struct ctlr_info *h); 276 269 static int detect_controller_lockup(struct ctlr_info *h); 270 + static int is_ext_target(struct ctlr_info *h, struct hpsa_scsi_dev_t *device); 277 271 278 272 static inline struct ctlr_info *sdev_to_hba(struct scsi_device *sdev) 279 273 { ··· 333 325 334 326 decode_sense_data(c->err_info->SenseInfo, sense_len, 335 327 &sense_key, &asc, &ascq); 336 - if (sense_key != UNIT_ATTENTION || asc == -1) 328 + if (sense_key != UNIT_ATTENTION || asc == 0xff) 337 329 return 0; 338 330 339 331 switch (asc) { ··· 725 717 return snprintf(buf, 20, "%d\n", offload_enabled); 726 718 } 727 719 720 + #define MAX_PATHS 8 721 + #define PATH_STRING_LEN 50 722 + 723 + static ssize_t path_info_show(struct device *dev, 724 + struct device_attribute *attr, char *buf) 725 + { 726 + struct ctlr_info *h; 727 + struct scsi_device *sdev; 728 + struct hpsa_scsi_dev_t *hdev; 729 + unsigned long flags; 730 + int i; 731 + int output_len = 0; 732 + u8 box; 733 + u8 bay; 734 + u8 path_map_index = 0; 735 + char *active; 736 + unsigned char phys_connector[2]; 737 + unsigned char path[MAX_PATHS][PATH_STRING_LEN]; 738 + 739 + memset(path, 0, MAX_PATHS * PATH_STRING_LEN); 740 + sdev = to_scsi_device(dev); 741 + h = sdev_to_hba(sdev); 742 + spin_lock_irqsave(&h->devlock, flags); 743 + hdev = sdev->hostdata; 744 + if (!hdev) { 745 + spin_unlock_irqrestore(&h->devlock, flags); 746 + return -ENODEV; 747 + } 748 + 749 + bay = hdev->bay; 750 + for (i = 0; i < MAX_PATHS; i++) { 751 + path_map_index = 1<<i; 752 + if (i == hdev->active_path_index) 753 + active = "Active"; 754 + else if (hdev->path_map & path_map_index) 755 + active = "Inactive"; 756 + else 757 + 
continue; 758 + 759 + output_len = snprintf(path[i], 760 + PATH_STRING_LEN, "[%d:%d:%d:%d] %20.20s ", 761 + h->scsi_host->host_no, 762 + hdev->bus, hdev->target, hdev->lun, 763 + scsi_device_type(hdev->devtype)); 764 + 765 + if (is_ext_target(h, hdev) || 766 + (hdev->devtype == TYPE_RAID) || 767 + is_logical_dev_addr_mode(hdev->scsi3addr)) { 768 + output_len += snprintf(path[i] + output_len, 769 + PATH_STRING_LEN, "%s\n", 770 + active); 771 + continue; 772 + } 773 + 774 + box = hdev->box[i]; 775 + memcpy(&phys_connector, &hdev->phys_connector[i], 776 + sizeof(phys_connector)); 777 + if (phys_connector[0] < '0') 778 + phys_connector[0] = '0'; 779 + if (phys_connector[1] < '0') 780 + phys_connector[1] = '0'; 781 + if (hdev->phys_connector[i] > 0) 782 + output_len += snprintf(path[i] + output_len, 783 + PATH_STRING_LEN, 784 + "PORT: %.2s ", 785 + phys_connector); 786 + if (hdev->devtype == TYPE_DISK && 787 + hdev->expose_state != HPSA_DO_NOT_EXPOSE) { 788 + if (box == 0 || box == 0xFF) { 789 + output_len += snprintf(path[i] + output_len, 790 + PATH_STRING_LEN, 791 + "BAY: %hhu %s\n", 792 + bay, active); 793 + } else { 794 + output_len += snprintf(path[i] + output_len, 795 + PATH_STRING_LEN, 796 + "BOX: %hhu BAY: %hhu %s\n", 797 + box, bay, active); 798 + } 799 + } else if (box != 0 && box != 0xFF) { 800 + output_len += snprintf(path[i] + output_len, 801 + PATH_STRING_LEN, "BOX: %hhu %s\n", 802 + box, active); 803 + } else 804 + output_len += snprintf(path[i] + output_len, 805 + PATH_STRING_LEN, "%s\n", active); 806 + } 807 + 808 + spin_unlock_irqrestore(&h->devlock, flags); 809 + return snprintf(buf, output_len+1, "%s%s%s%s%s%s%s%s", 810 + path[0], path[1], path[2], path[3], 811 + path[4], path[5], path[6], path[7]); 812 + } 813 + 728 814 static DEVICE_ATTR(raid_level, S_IRUGO, raid_level_show, NULL); 729 815 static DEVICE_ATTR(lunid, S_IRUGO, lunid_show, NULL); 730 816 static DEVICE_ATTR(unique_id, S_IRUGO, unique_id_show, NULL); 731 817 static DEVICE_ATTR(rescan, 
S_IWUSR, NULL, host_store_rescan); 732 818 static DEVICE_ATTR(hp_ssd_smart_path_enabled, S_IRUGO, 733 819 host_show_hp_ssd_smart_path_enabled, NULL); 820 + static DEVICE_ATTR(path_info, S_IRUGO, path_info_show, NULL); 734 821 static DEVICE_ATTR(hp_ssd_smart_path_status, S_IWUSR|S_IRUGO|S_IROTH, 735 822 host_show_hp_ssd_smart_path_status, 736 823 host_store_hp_ssd_smart_path_status); ··· 847 744 &dev_attr_lunid, 848 745 &dev_attr_unique_id, 849 746 &dev_attr_hp_ssd_smart_path_enabled, 747 + &dev_attr_path_info, 850 748 &dev_attr_lockup_detected, 851 749 NULL, 852 750 }; ··· 1187 1083 1188 1084 /* This is a non-zero lun of a multi-lun device. 1189 1085 * Search through our list and find the device which 1190 - * has the same 8 byte LUN address, excepting byte 4. 1086 + * has the same 8 byte LUN address, excepting byte 4 and 5. 1191 1087 * Assign the same bus and target for this new LUN. 1192 1088 * Use the logical unit number from the firmware. 1193 1089 */ 1194 1090 memcpy(addr1, device->scsi3addr, 8); 1195 1091 addr1[4] = 0; 1092 + addr1[5] = 0; 1196 1093 for (i = 0; i < n; i++) { 1197 1094 sd = h->dev[i]; 1198 1095 memcpy(addr2, sd->scsi3addr, 8); 1199 1096 addr2[4] = 0; 1200 - /* differ only in byte 4? */ 1097 + addr2[5] = 0; 1098 + /* differ only in byte 4 and 5? 
*/ 1201 1099 if (memcmp(addr1, addr2, 8) == 0) { 1202 1100 device->bus = sd->bus; 1203 1101 device->target = sd->target; ··· 1392 1286 return 1; 1393 1287 if (dev1->offload_enabled != dev2->offload_enabled) 1394 1288 return 1; 1395 - if (dev1->queue_depth != dev2->queue_depth) 1396 - return 1; 1289 + if (!is_logical_dev_addr_mode(dev1->scsi3addr)) 1290 + if (dev1->queue_depth != dev2->queue_depth) 1291 + return 1; 1397 1292 return 0; 1398 1293 } 1399 1294 ··· 1483 1376 h->scsi_host->host_no, 1484 1377 sd->bus, sd->target, sd->lun); 1485 1378 break; 1379 + case HPSA_LV_NOT_AVAILABLE: 1380 + dev_info(&h->pdev->dev, 1381 + "C%d:B%d:T%d:L%d Volume is waiting for transforming volume.\n", 1382 + h->scsi_host->host_no, 1383 + sd->bus, sd->target, sd->lun); 1384 + break; 1486 1385 case HPSA_LV_UNDERGOING_RPI: 1487 1386 dev_info(&h->pdev->dev, 1488 - "C%d:B%d:T%d:L%d Volume is undergoing rapid parity initialization process.\n", 1387 + "C%d:B%d:T%d:L%d Volume is undergoing rapid parity init.\n", 1489 1388 h->scsi_host->host_no, 1490 1389 sd->bus, sd->target, sd->lun); 1491 1390 break; 1492 1391 case HPSA_LV_PENDING_RPI: 1493 1392 dev_info(&h->pdev->dev, 1494 - "C%d:B%d:T%d:L%d Volume is queued for rapid parity initialization process.\n", 1495 - h->scsi_host->host_no, 1496 - sd->bus, sd->target, sd->lun); 1393 + "C%d:B%d:T%d:L%d Volume is queued for rapid parity initialization process.\n", 1394 + h->scsi_host->host_no, 1395 + sd->bus, sd->target, sd->lun); 1497 1396 break; 1498 1397 case HPSA_LV_ENCRYPTED_NO_KEY: 1499 1398 dev_info(&h->pdev->dev, ··· 2698 2585 return rc; 2699 2586 } 2700 2587 2701 - static int hpsa_bmic_ctrl_mode_sense(struct ctlr_info *h, 2702 - unsigned char *scsi3addr, unsigned char page, 2703 - struct bmic_controller_parameters *buf, size_t bufsize) 2704 - { 2705 - int rc = IO_OK; 2706 - struct CommandList *c; 2707 - struct ErrorInfo *ei; 2708 - 2709 - c = cmd_alloc(h); 2710 - if (fill_cmd(c, BMIC_SENSE_CONTROLLER_PARAMETERS, h, buf, bufsize, 2711 - page, 
scsi3addr, TYPE_CMD)) { 2712 - rc = -1; 2713 - goto out; 2714 - } 2715 - rc = hpsa_scsi_do_simple_cmd_with_retry(h, c, 2716 - PCI_DMA_FROMDEVICE, NO_TIMEOUT); 2717 - if (rc) 2718 - goto out; 2719 - ei = c->err_info; 2720 - if (ei->CommandStatus != 0 && ei->CommandStatus != CMD_DATA_UNDERRUN) { 2721 - hpsa_scsi_interpret_error(h, c); 2722 - rc = -1; 2723 - } 2724 - out: 2725 - cmd_free(h, c); 2726 - return rc; 2727 - } 2728 - 2729 2588 static int hpsa_send_reset(struct ctlr_info *h, unsigned char *scsi3addr, 2730 2589 u8 reset_type, int reply_queue) 2731 2590 { ··· 2834 2749 lockup_detected(h)); 2835 2750 2836 2751 if (unlikely(lockup_detected(h))) { 2837 - dev_warn(&h->pdev->dev, 2838 - "Controller lockup detected during reset wait\n"); 2839 - mutex_unlock(&h->reset_mutex); 2840 - rc = -ENODEV; 2841 - } 2752 + dev_warn(&h->pdev->dev, 2753 + "Controller lockup detected during reset wait\n"); 2754 + rc = -ENODEV; 2755 + } 2842 2756 2843 2757 if (unlikely(rc)) 2844 2758 atomic_set(&dev->reset_cmds_out, 0); ··· 3270 3186 /* Keep volume offline in certain cases: */ 3271 3187 switch (ldstat) { 3272 3188 case HPSA_LV_UNDERGOING_ERASE: 3189 + case HPSA_LV_NOT_AVAILABLE: 3273 3190 case HPSA_LV_UNDERGOING_RPI: 3274 3191 case HPSA_LV_PENDING_RPI: 3275 3192 case HPSA_LV_ENCRYPTED_NO_KEY: ··· 3647 3562 return NULL; 3648 3563 } 3649 3564 3650 - static int hpsa_hba_mode_enabled(struct ctlr_info *h) 3651 - { 3652 - int rc; 3653 - int hba_mode_enabled; 3654 - struct bmic_controller_parameters *ctlr_params; 3655 - ctlr_params = kzalloc(sizeof(struct bmic_controller_parameters), 3656 - GFP_KERNEL); 3657 - 3658 - if (!ctlr_params) 3659 - return -ENOMEM; 3660 - rc = hpsa_bmic_ctrl_mode_sense(h, RAID_CTLR_LUNID, 0, ctlr_params, 3661 - sizeof(struct bmic_controller_parameters)); 3662 - if (rc) { 3663 - kfree(ctlr_params); 3664 - return rc; 3665 - } 3666 - 3667 - hba_mode_enabled = 3668 - ((ctlr_params->nvram_flags & HBA_MODE_ENABLED_FLAG) != 0); 3669 - kfree(ctlr_params); 3670 - return 
hba_mode_enabled; 3671 - } 3672 - 3673 3565 /* get physical drive ioaccel handle and queue depth */ 3674 3566 static void hpsa_get_ioaccel_drive_info(struct ctlr_info *h, 3675 3567 struct hpsa_scsi_dev_t *dev, ··· 3677 3615 atomic_set(&dev->reset_cmds_out, 0); 3678 3616 } 3679 3617 3618 + static void hpsa_get_path_info(struct hpsa_scsi_dev_t *this_device, 3619 + u8 *lunaddrbytes, 3620 + struct bmic_identify_physical_device *id_phys) 3621 + { 3622 + if (PHYS_IOACCEL(lunaddrbytes) 3623 + && this_device->ioaccel_handle) 3624 + this_device->hba_ioaccel_enabled = 1; 3625 + 3626 + memcpy(&this_device->active_path_index, 3627 + &id_phys->active_path_number, 3628 + sizeof(this_device->active_path_index)); 3629 + memcpy(&this_device->path_map, 3630 + &id_phys->redundant_path_present_map, 3631 + sizeof(this_device->path_map)); 3632 + memcpy(&this_device->box, 3633 + &id_phys->alternate_paths_phys_box_on_port, 3634 + sizeof(this_device->box)); 3635 + memcpy(&this_device->phys_connector, 3636 + &id_phys->alternate_paths_phys_connector, 3637 + sizeof(this_device->phys_connector)); 3638 + memcpy(&this_device->bay, 3639 + &id_phys->phys_bay_in_box, 3640 + sizeof(this_device->bay)); 3641 + } 3642 + 3680 3643 static void hpsa_update_scsi_devices(struct ctlr_info *h, int hostno) 3681 3644 { 3682 3645 /* the idea here is we could get notified ··· 3724 3637 int ncurrent = 0; 3725 3638 int i, n_ext_target_devs, ndevs_to_allocate; 3726 3639 int raid_ctlr_position; 3727 - int rescan_hba_mode; 3728 3640 DECLARE_BITMAP(lunzerobits, MAX_EXT_TARGETS); 3729 3641 3730 3642 currentsd = kzalloc(sizeof(*currentsd) * HPSA_MAX_DEVICES, GFP_KERNEL); ··· 3738 3652 goto out; 3739 3653 } 3740 3654 memset(lunzerobits, 0, sizeof(lunzerobits)); 3741 - 3742 - rescan_hba_mode = hpsa_hba_mode_enabled(h); 3743 - if (rescan_hba_mode < 0) 3744 - goto out; 3745 - 3746 - if (!h->hba_mode_enabled && rescan_hba_mode) 3747 - dev_warn(&h->pdev->dev, "HBA mode enabled\n"); 3748 - else if (h->hba_mode_enabled && 
!rescan_hba_mode) 3749 - dev_warn(&h->pdev->dev, "HBA mode disabled\n"); 3750 - 3751 - h->hba_mode_enabled = rescan_hba_mode; 3752 3655 3753 3656 if (hpsa_gather_lun_info(h, physdev_list, &nphysicals, 3754 3657 logdev_list, &nlogicals)) ··· 3814 3739 /* do not expose masked devices */ 3815 3740 if (MASKED_DEVICE(lunaddrbytes) && 3816 3741 i < nphysicals + (raid_ctlr_position == 0)) { 3817 - if (h->hba_mode_enabled) 3818 - dev_warn(&h->pdev->dev, 3819 - "Masked physical device detected\n"); 3820 3742 this_device->expose_state = HPSA_DO_NOT_EXPOSE; 3821 3743 } else { 3822 3744 this_device->expose_state = ··· 3833 3761 ncurrent++; 3834 3762 break; 3835 3763 case TYPE_DISK: 3836 - if (i >= nphysicals) { 3837 - ncurrent++; 3838 - break; 3839 - } 3840 - 3841 - if (h->hba_mode_enabled) 3842 - /* never use raid mapper in HBA mode */ 3764 + if (i < nphysicals + (raid_ctlr_position == 0)) { 3765 + /* The disk is in HBA mode. */ 3766 + /* Never use RAID mapper in HBA mode. */ 3843 3767 this_device->offload_enabled = 0; 3844 - else if (!(h->transMethod & CFGTBL_Trans_io_accel1 || 3845 - h->transMethod & CFGTBL_Trans_io_accel2)) 3846 - break; 3847 - 3848 - hpsa_get_ioaccel_drive_info(h, this_device, 3849 - lunaddrbytes, id_phys); 3850 - atomic_set(&this_device->ioaccel_cmds_out, 0); 3768 + hpsa_get_ioaccel_drive_info(h, this_device, 3769 + lunaddrbytes, id_phys); 3770 + hpsa_get_path_info(this_device, lunaddrbytes, 3771 + id_phys); 3772 + } 3851 3773 ncurrent++; 3852 3774 break; 3853 3775 case TYPE_TAPE: 3854 3776 case TYPE_MEDIUM_CHANGER: 3855 - ncurrent++; 3856 - break; 3857 3777 case TYPE_ENCLOSURE: 3858 - if (h->hba_mode_enabled) 3859 - ncurrent++; 3778 + ncurrent++; 3860 3779 break; 3861 3780 case TYPE_RAID: 3862 3781 /* Only present the Smartarray HBA as a RAID controller. 
··· 5167 5104 int rc; 5168 5105 struct ctlr_info *h; 5169 5106 struct hpsa_scsi_dev_t *dev; 5170 - char msg[40]; 5107 + char msg[48]; 5171 5108 5172 5109 /* find the controller to which the command to be aborted was sent */ 5173 5110 h = sdev_to_hba(scsicmd->device); ··· 5185 5122 5186 5123 /* if controller locked up, we can guarantee command won't complete */ 5187 5124 if (lockup_detected(h)) { 5188 - sprintf(msg, "cmd %d RESET FAILED, lockup detected", 5189 - hpsa_get_cmd_index(scsicmd)); 5125 + snprintf(msg, sizeof(msg), 5126 + "cmd %d RESET FAILED, lockup detected", 5127 + hpsa_get_cmd_index(scsicmd)); 5190 5128 hpsa_show_dev_msg(KERN_WARNING, h, dev, msg); 5191 5129 return FAILED; 5192 5130 } 5193 5131 5194 5132 /* this reset request might be the result of a lockup; check */ 5195 5133 if (detect_controller_lockup(h)) { 5196 - sprintf(msg, "cmd %d RESET FAILED, new lockup detected", 5197 - hpsa_get_cmd_index(scsicmd)); 5134 + snprintf(msg, sizeof(msg), 5135 + "cmd %d RESET FAILED, new lockup detected", 5136 + hpsa_get_cmd_index(scsicmd)); 5198 5137 hpsa_show_dev_msg(KERN_WARNING, h, dev, msg); 5199 5138 return FAILED; 5200 5139 } ··· 5210 5145 /* send a reset to the SCSI LUN which the command was sent to */ 5211 5146 rc = hpsa_do_reset(h, dev, dev->scsi3addr, HPSA_RESET_TYPE_LUN, 5212 5147 DEFAULT_REPLY_QUEUE); 5213 - sprintf(msg, "reset %s", rc == 0 ? "completed successfully" : "failed"); 5148 + snprintf(msg, sizeof(msg), "reset %s", 5149 + rc == 0 ? "completed successfully" : "failed"); 5214 5150 hpsa_show_dev_msg(KERN_WARNING, h, dev, msg); 5215 5151 return rc == 0 ? SUCCESS : FAILED; 5216 5152 } ··· 8055 7989 8056 7990 pci_set_drvdata(pdev, h); 8057 7991 h->ndevices = 0; 8058 - h->hba_mode_enabled = 0; 8059 7992 8060 7993 spin_lock_init(&h->devlock); 8061 7994 rc = hpsa_put_ctlr_into_performant_mode(h); ··· 8119 8054 rc = hpsa_kdump_soft_reset(h); 8120 8055 if (rc) 8121 8056 /* Neither hard nor soft reset worked, we're hosed. 
*/ 8122 - goto clean9; 8057 + goto clean7; 8123 8058 8124 8059 dev_info(&h->pdev->dev, "Board READY.\n"); 8125 8060 dev_info(&h->pdev->dev, ··· 8165 8100 h->heartbeat_sample_interval); 8166 8101 return 0; 8167 8102 8168 - clean9: /* wq, sh, perf, sg, cmd, irq, shost, pci, lu, aer/h */ 8169 - kfree(h->hba_inquiry_data); 8170 8103 clean7: /* perf, sg, cmd, irq, shost, pci, lu, aer/h */ 8171 8104 hpsa_free_performant_mode(h); 8172 8105 h->access.set_intr_mask(h, HPSA_INTR_OFF); ··· 8272 8209 destroy_workqueue(h->rescan_ctlr_wq); 8273 8210 destroy_workqueue(h->resubmit_wq); 8274 8211 8212 + /* 8213 + * Call before disabling interrupts. 8214 + * scsi_remove_host can trigger I/O operations especially 8215 + * when multipath is enabled. There can be SYNCHRONIZE CACHE 8216 + * operations which cannot complete and will hang the system. 8217 + */ 8218 + if (h->scsi_host) 8219 + scsi_remove_host(h->scsi_host); /* init_one 8 */ 8275 8220 /* includes hpsa_free_irqs - init_one 4 */ 8276 8221 /* includes hpsa_disable_interrupt_mode - pci_init 2 */ 8277 8222 hpsa_shutdown(pdev); ··· 8288 8217 8289 8218 kfree(h->hba_inquiry_data); /* init_one 10 */ 8290 8219 h->hba_inquiry_data = NULL; /* init_one 10 */ 8291 - if (h->scsi_host) 8292 - scsi_remove_host(h->scsi_host); /* init_one 8 */ 8293 8220 hpsa_free_ioaccel2_sg_chain_blocks(h); 8294 8221 hpsa_free_performant_mode(h); /* init_one 7 */ 8295 8222 hpsa_free_sg_chain_blocks(h); /* init_one 6 */
+8 -8
drivers/scsi/hpsa.h
··· 1 1 /* 2 2 * Disk Array driver for HP Smart Array SAS controllers 3 - * Copyright 2000, 2014 Hewlett-Packard Development Company, L.P. 3 + * Copyright 2014-2015 PMC-Sierra, Inc. 4 + * Copyright 2000,2009-2015 Hewlett-Packard Development Company, L.P. 4 5 * 5 6 * This program is free software; you can redistribute it and/or modify 6 7 * it under the terms of the GNU General Public License as published by ··· 12 11 * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or 13 12 * NON INFRINGEMENT. See the GNU General Public License for more details. 14 13 * 15 - * You should have received a copy of the GNU General Public License 16 - * along with this program; if not, write to the Free Software 17 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 18 - * 19 - * Questions/Comments/Bugfixes to iss_storagedev@hp.com 14 + * Questions/Comments/Bugfixes to storagedev@pmcs.com 20 15 * 21 16 */ 22 17 #ifndef HPSA_H ··· 50 53 * device via "ioaccel" path. 51 54 */ 52 55 u32 ioaccel_handle; 56 + u8 active_path_index; 57 + u8 path_map; 58 + u8 bay; 59 + u8 box[8]; 60 + u16 phys_connector[8]; 53 61 int offload_config; /* I/O accel RAID offload configured */ 54 62 int offload_enabled; /* I/O accel RAID offload enabled */ 55 63 int offload_to_be_enabled; ··· 116 114 u8 automatic_drive_slamming; 117 115 u8 reserved1; 118 116 u8 nvram_flags; 119 - #define HBA_MODE_ENABLED_FLAG (1 << 3) 120 117 u8 cache_nvram_flags; 121 118 u8 drive_config_flags; 122 119 u16 reserved2; ··· 154 153 unsigned int msi_vector; 155 154 int intr_mode; /* either PERF_MODE_INT or SIMPLE_MODE_INT */ 156 155 struct access_method access; 157 - char hba_mode_enabled; 158 156 159 157 /* queue and queue Info */ 160 158 unsigned int Qdepth;
+4 -6
drivers/scsi/hpsa_cmd.h
··· 1 1 /* 2 2 * Disk Array driver for HP Smart Array SAS controllers 3 - * Copyright 2000, 2014 Hewlett-Packard Development Company, L.P. 3 + * Copyright 2014-2015 PMC-Sierra, Inc. 4 + * Copyright 2000,2009-2015 Hewlett-Packard Development Company, L.P. 4 5 * 5 6 * This program is free software; you can redistribute it and/or modify 6 7 * it under the terms of the GNU General Public License as published by ··· 12 11 * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or 13 12 * NON INFRINGEMENT. See the GNU General Public License for more details. 14 13 * 15 - * You should have received a copy of the GNU General Public License 16 - * along with this program; if not, write to the Free Software 17 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 18 - * 19 - * Questions/Comments/Bugfixes to iss_storagedev@hp.com 14 + * Questions/Comments/Bugfixes to storagedev@pmcs.com 20 15 * 21 16 */ 22 17 #ifndef HPSA_CMD_H ··· 164 167 /* Logical volume states */ 165 168 #define HPSA_VPD_LV_STATUS_UNSUPPORTED 0xff 166 169 #define HPSA_LV_OK 0x0 170 + #define HPSA_LV_NOT_AVAILABLE 0x0b 167 171 #define HPSA_LV_UNDERGOING_ERASE 0x0F 168 172 #define HPSA_LV_UNDERGOING_RPI 0x12 169 173 #define HPSA_LV_PENDING_RPI 0x13
+64 -37
drivers/scsi/hptiop.c
··· 1 1 /* 2 2 * HighPoint RR3xxx/4xxx controller driver for Linux 3 - * Copyright (C) 2006-2012 HighPoint Technologies, Inc. All Rights Reserved. 3 + * Copyright (C) 2006-2015 HighPoint Technologies, Inc. All Rights Reserved. 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify 6 6 * it under the terms of the GNU General Public License as published by ··· 42 42 43 43 static char driver_name[] = "hptiop"; 44 44 static const char driver_name_long[] = "RocketRAID 3xxx/4xxx Controller driver"; 45 - static const char driver_ver[] = "v1.8"; 45 + static const char driver_ver[] = "v1.10.0"; 46 46 47 47 static int iop_send_sync_msg(struct hptiop_hba *hba, u32 msg, u32 millisec); 48 48 static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag, ··· 764 764 scsi_set_resid(scp, 765 765 scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length)); 766 766 scp->result = SAM_STAT_CHECK_CONDITION; 767 - memcpy(scp->sense_buffer, &req->sg_list, 768 - min_t(size_t, SCSI_SENSE_BUFFERSIZE, 769 - le32_to_cpu(req->dataxfer_length))); 767 + memcpy(scp->sense_buffer, &req->sg_list, SCSI_SENSE_BUFFERSIZE); 770 768 goto skip_resid; 771 769 break; 772 770 ··· 1035 1037 1036 1038 scp->result = 0; 1037 1039 1038 - if (scp->device->channel || scp->device->lun || 1039 - scp->device->id > hba->max_devices) { 1040 + if (scp->device->channel || 1041 + (scp->device->id > hba->max_devices) || 1042 + ((scp->device->id == (hba->max_devices-1)) && scp->device->lun)) { 1040 1043 scp->result = DID_BAD_TARGET << 16; 1041 1044 free_req(hba, _req); 1042 1045 goto cmd_done; ··· 1167 1168 NULL 1168 1169 }; 1169 1170 1171 + static int hptiop_slave_config(struct scsi_device *sdev) 1172 + { 1173 + if (sdev->type == TYPE_TAPE) 1174 + blk_queue_max_hw_sectors(sdev->request_queue, 8192); 1175 + 1176 + return 0; 1177 + } 1178 + 1170 1179 static struct scsi_host_template driver_template = { 1171 1180 .module = THIS_MODULE, 1172 1181 .name = driver_name, ··· 1186 1179 .use_clustering = 
ENABLE_CLUSTERING, 1187 1180 .proc_name = driver_name, 1188 1181 .shost_attrs = hptiop_attrs, 1182 + .slave_configure = hptiop_slave_config, 1189 1183 .this_id = -1, 1190 1184 .change_queue_depth = hptiop_adjust_disk_queue_depth, 1191 1185 }; ··· 1331 1323 } 1332 1324 1333 1325 hba = (struct hptiop_hba *)host->hostdata; 1326 + memset(hba, 0, sizeof(struct hptiop_hba)); 1334 1327 1335 1328 hba->ops = iop_ops; 1336 1329 hba->pcidev = pcidev; ··· 1345 1336 init_waitqueue_head(&hba->reset_wq); 1346 1337 init_waitqueue_head(&hba->ioctl_wq); 1347 1338 1348 - host->max_lun = 1; 1339 + host->max_lun = 128; 1349 1340 host->max_channel = 0; 1350 1341 host->io_port = 0; 1351 1342 host->n_io_port = 0; ··· 1437 1428 dprintk("req_size=%d, max_requests=%d\n", req_size, hba->max_requests); 1438 1429 1439 1430 hba->req_size = req_size; 1440 - start_virt = dma_alloc_coherent(&pcidev->dev, 1441 - hba->req_size*hba->max_requests + 0x20, 1442 - &start_phy, GFP_KERNEL); 1443 - 1444 - if (!start_virt) { 1445 - printk(KERN_ERR "scsi%d: fail to alloc request mem\n", 1446 - hba->host->host_no); 1447 - goto free_request_irq; 1448 - } 1449 - 1450 - hba->dma_coherent = start_virt; 1451 - hba->dma_coherent_handle = start_phy; 1452 - 1453 - if ((start_phy & 0x1f) != 0) { 1454 - offset = ((start_phy + 0x1f) & ~0x1f) - start_phy; 1455 - start_phy += offset; 1456 - start_virt += offset; 1457 - } 1458 - 1459 1431 hba->req_list = NULL; 1432 + 1460 1433 for (i = 0; i < hba->max_requests; i++) { 1434 + start_virt = dma_alloc_coherent(&pcidev->dev, 1435 + hba->req_size + 0x20, 1436 + &start_phy, GFP_KERNEL); 1437 + 1438 + if (!start_virt) { 1439 + printk(KERN_ERR "scsi%d: fail to alloc request mem\n", 1440 + hba->host->host_no); 1441 + goto free_request_mem; 1442 + } 1443 + 1444 + hba->dma_coherent[i] = start_virt; 1445 + hba->dma_coherent_handle[i] = start_phy; 1446 + 1447 + if ((start_phy & 0x1f) != 0) { 1448 + offset = ((start_phy + 0x1f) & ~0x1f) - start_phy; 1449 + start_phy += offset; 1450 + 
start_virt += offset; 1451 + } 1452 + 1461 1453 hba->reqs[i].next = NULL; 1462 1454 hba->reqs[i].req_virt = start_virt; 1463 1455 hba->reqs[i].req_shifted_phy = start_phy >> 5; 1464 1456 hba->reqs[i].index = i; 1465 1457 free_req(hba, &hba->reqs[i]); 1466 - start_virt = (char *)start_virt + hba->req_size; 1467 - start_phy = start_phy + hba->req_size; 1468 1458 } 1469 1459 1470 1460 /* Enable Interrupt and start background task */ ··· 1482 1474 return 0; 1483 1475 1484 1476 free_request_mem: 1485 - dma_free_coherent(&hba->pcidev->dev, 1486 - hba->req_size * hba->max_requests + 0x20, 1487 - hba->dma_coherent, hba->dma_coherent_handle); 1477 + for (i = 0; i < hba->max_requests; i++) { 1478 + if (hba->dma_coherent[i] && hba->dma_coherent_handle[i]) 1479 + dma_free_coherent(&hba->pcidev->dev, 1480 + hba->req_size + 0x20, 1481 + hba->dma_coherent[i], 1482 + hba->dma_coherent_handle[i]); 1483 + else 1484 + break; 1485 + } 1488 1486 1489 - free_request_irq: 1490 1487 free_irq(hba->pcidev->irq, hba); 1491 1488 1492 1489 unmap_pci_bar: ··· 1559 1546 { 1560 1547 struct Scsi_Host *host = pci_get_drvdata(pcidev); 1561 1548 struct hptiop_hba *hba = (struct hptiop_hba *)host->hostdata; 1549 + u32 i; 1562 1550 1563 1551 dprintk("scsi%d: hptiop_remove\n", hba->host->host_no); 1564 1552 ··· 1569 1555 1570 1556 free_irq(hba->pcidev->irq, hba); 1571 1557 1572 - dma_free_coherent(&hba->pcidev->dev, 1573 - hba->req_size * hba->max_requests + 0x20, 1574 - hba->dma_coherent, 1575 - hba->dma_coherent_handle); 1558 + for (i = 0; i < hba->max_requests; i++) { 1559 + if (hba->dma_coherent[i] && hba->dma_coherent_handle[i]) 1560 + dma_free_coherent(&hba->pcidev->dev, 1561 + hba->req_size + 0x20, 1562 + hba->dma_coherent[i], 1563 + hba->dma_coherent_handle[i]); 1564 + else 1565 + break; 1566 + } 1576 1567 1577 1568 hba->ops->internal_memfree(hba); 1578 1569 ··· 1672 1653 { PCI_VDEVICE(TTI, 0x3020), (kernel_ulong_t)&hptiop_mv_ops }, 1673 1654 { PCI_VDEVICE(TTI, 0x4520), 
(kernel_ulong_t)&hptiop_mvfrey_ops }, 1674 1655 { PCI_VDEVICE(TTI, 0x4522), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1656 + { PCI_VDEVICE(TTI, 0x3610), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1657 + { PCI_VDEVICE(TTI, 0x3611), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1658 + { PCI_VDEVICE(TTI, 0x3620), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1659 + { PCI_VDEVICE(TTI, 0x3622), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1660 + { PCI_VDEVICE(TTI, 0x3640), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1661 + { PCI_VDEVICE(TTI, 0x3660), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1662 + { PCI_VDEVICE(TTI, 0x3680), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1663 + { PCI_VDEVICE(TTI, 0x3690), (kernel_ulong_t)&hptiop_mvfrey_ops }, 1675 1664 {}, 1676 1665 }; 1677 1666
+3 -3
drivers/scsi/hptiop.h
··· 1 1 /* 2 2 * HighPoint RR3xxx/4xxx controller driver for Linux 3 - * Copyright (C) 2006-2012 HighPoint Technologies, Inc. All Rights Reserved. 3 + * Copyright (C) 2006-2015 HighPoint Technologies, Inc. All Rights Reserved. 4 4 * 5 5 * This program is free software; you can redistribute it and/or modify 6 6 * it under the terms of the GNU General Public License as published by ··· 327 327 struct hptiop_request reqs[HPTIOP_MAX_REQUESTS]; 328 328 329 329 /* used to free allocated dma area */ 330 - void *dma_coherent; 331 - dma_addr_t dma_coherent_handle; 330 + void *dma_coherent[HPTIOP_MAX_REQUESTS]; 331 + dma_addr_t dma_coherent_handle[HPTIOP_MAX_REQUESTS]; 332 332 333 333 atomic_t reset_count; 334 334 atomic_t resetting;
+8 -7
drivers/scsi/ipr.c
··· 1165 1165 1166 1166 if (ioa_cfg->sis64) { 1167 1167 proto = cfgtew->u.cfgte64->proto; 1168 - res->res_flags = cfgtew->u.cfgte64->res_flags; 1168 + res->flags = be16_to_cpu(cfgtew->u.cfgte64->flags); 1169 + res->res_flags = be16_to_cpu(cfgtew->u.cfgte64->res_flags); 1169 1170 res->qmodel = IPR_QUEUEING_MODEL64(res); 1170 1171 res->type = cfgtew->u.cfgte64->res_type; 1171 1172 ··· 1314 1313 int new_path = 0; 1315 1314 1316 1315 if (res->ioa_cfg->sis64) { 1317 - res->flags = cfgtew->u.cfgte64->flags; 1318 - res->res_flags = cfgtew->u.cfgte64->res_flags; 1316 + res->flags = be16_to_cpu(cfgtew->u.cfgte64->flags); 1317 + res->res_flags = be16_to_cpu(cfgtew->u.cfgte64->res_flags); 1319 1318 res->type = cfgtew->u.cfgte64->res_type; 1320 1319 1321 1320 memcpy(&res->std_inq_data, &cfgtew->u.cfgte64->std_inq_data, ··· 1901 1900 * Return value: 1902 1901 * none 1903 1902 **/ 1904 - static void ipr_log_hex_data(struct ipr_ioa_cfg *ioa_cfg, u32 *data, int len) 1903 + static void ipr_log_hex_data(struct ipr_ioa_cfg *ioa_cfg, __be32 *data, int len) 1905 1904 { 1906 1905 int i; 1907 1906 ··· 2271 2270 ((unsigned long)fabric + be16_to_cpu(fabric->length)); 2272 2271 } 2273 2272 2274 - ipr_log_hex_data(ioa_cfg, (u32 *)fabric, add_len); 2273 + ipr_log_hex_data(ioa_cfg, (__be32 *)fabric, add_len); 2275 2274 } 2276 2275 2277 2276 /** ··· 2365 2364 ((unsigned long)fabric + be16_to_cpu(fabric->length)); 2366 2365 } 2367 2366 2368 - ipr_log_hex_data(ioa_cfg, (u32 *)fabric, add_len); 2367 + ipr_log_hex_data(ioa_cfg, (__be32 *)fabric, add_len); 2369 2368 } 2370 2369 2371 2370 /** ··· 4456 4455 spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 4457 4456 res = (struct ipr_resource_entry *)sdev->hostdata; 4458 4457 if (res && ioa_cfg->sis64) 4459 - len = snprintf(buf, PAGE_SIZE, "0x%llx\n", res->dev_id); 4458 + len = snprintf(buf, PAGE_SIZE, "0x%llx\n", be64_to_cpu(res->dev_id)); 4460 4459 else if (res) 4461 4460 len = snprintf(buf, PAGE_SIZE, "0x%llx\n", res->lun_wwn); 4462 4461
+8 -9
drivers/scsi/ipr.h
··· 39 39 /* 40 40 * Literals 41 41 */ 42 - #define IPR_DRIVER_VERSION "2.6.1" 43 - #define IPR_DRIVER_DATE "(March 12, 2015)" 42 + #define IPR_DRIVER_VERSION "2.6.2" 43 + #define IPR_DRIVER_DATE "(June 11, 2015)" 44 44 45 45 /* 46 46 * IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding ··· 1005 1005 struct ipr_hostrcb_type_07_error { 1006 1006 u8 failure_reason[64]; 1007 1007 struct ipr_vpd vpd; 1008 - u32 data[222]; 1008 + __be32 data[222]; 1009 1009 }__attribute__((packed, aligned (4))); 1010 1010 1011 1011 struct ipr_hostrcb_type_17_error { 1012 1012 u8 failure_reason[64]; 1013 1013 struct ipr_ext_vpd vpd; 1014 - u32 data[476]; 1014 + __be32 data[476]; 1015 1015 }__attribute__((packed, aligned (4))); 1016 1016 1017 1017 struct ipr_hostrcb_config_element { ··· 1289 1289 (((res)->bus << 24) | ((res)->target << 8) | (res)->lun) 1290 1290 1291 1291 u8 ata_class; 1292 - 1293 - u8 flags; 1294 - __be16 res_flags; 1295 - 1296 1292 u8 type; 1293 + 1294 + u16 flags; 1295 + u16 res_flags; 1297 1296 1298 1297 u8 qmodel; 1299 1298 struct ipr_std_inq_data std_inq_data; 1300 1299 1301 1300 __be32 res_handle; 1302 1301 __be64 dev_id; 1303 - __be64 lun_wwn; 1302 + u64 lun_wwn; 1304 1303 struct scsi_lun dev_lun; 1305 1304 u8 res_path[8]; 1306 1305
+1 -1
drivers/scsi/libfc/fc_fcp.c
··· 191 191 } 192 192 193 193 /** 194 - * fc_fcp_pkt_destory() - Release hold on a fcp_pkt 194 + * fc_fcp_pkt_destroy() - Release hold on a fcp_pkt 195 195 * @seq: The sequence that the FCP packet is on (required by destructor API) 196 196 * @fsp: The FCP packet to be released 197 197 *
+1 -1
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 701 701 HA_RXMASK)); 702 702 } 703 703 } 704 - if ((phba->sli_rev == LPFC_SLI_REV4) & 704 + if ((phba->sli_rev == LPFC_SLI_REV4) && 705 705 (!list_empty(&pring->txq))) 706 706 lpfc_drain_txq(phba); 707 707 /*
+66 -74
drivers/scsi/megaraid.c
··· 268 268 raw_mbox[2] = NC_SUBOP_PRODUCT_INFO; /* i.e. 0x0E */ 269 269 270 270 if ((retval = issue_scb_block(adapter, raw_mbox))) 271 - printk(KERN_WARNING 272 - "megaraid: Product_info cmd failed with error: %d\n", 271 + dev_warn(&adapter->dev->dev, 272 + "Product_info cmd failed with error: %d\n", 273 273 retval); 274 274 275 275 pci_unmap_single(adapter->dev, prod_info_dma_handle, ··· 334 334 adapter->bios_version[4] = 0; 335 335 } 336 336 337 - printk(KERN_NOTICE "megaraid: [%s:%s] detected %d logical drives.\n", 337 + dev_notice(&adapter->dev->dev, "[%s:%s] detected %d logical drives\n", 338 338 adapter->fw_version, adapter->bios_version, adapter->numldrv); 339 339 340 340 /* ··· 342 342 */ 343 343 adapter->support_ext_cdb = mega_support_ext_cdb(adapter); 344 344 if (adapter->support_ext_cdb) 345 - printk(KERN_NOTICE "megaraid: supports extended CDBs.\n"); 345 + dev_notice(&adapter->dev->dev, "supports extended CDBs\n"); 346 346 347 347 348 348 return 0; ··· 678 678 679 679 if(!(adapter->flag & (1L << cmd->device->channel))) { 680 680 681 - printk(KERN_NOTICE 682 - "scsi%d: scanning scsi channel %d ", 681 + dev_notice(&adapter->dev->dev, 682 + "scsi%d: scanning scsi channel %d " 683 + "for logical drives\n", 683 684 adapter->host->host_no, 684 685 cmd->device->channel); 685 - printk("for logical drives.\n"); 686 686 687 687 adapter->flag |= (1L << cmd->device->channel); 688 688 } ··· 983 983 case READ_CAPACITY: 984 984 if(!(adapter->flag & (1L << cmd->device->channel))) { 985 985 986 - printk(KERN_NOTICE 987 - "scsi%d: scanning scsi channel %d [P%d] ", 986 + dev_notice(&adapter->dev->dev, 987 + "scsi%d: scanning scsi channel %d [P%d] " 988 + "for physical devices\n", 988 989 adapter->host->host_no, 989 990 cmd->device->channel, channel); 990 - printk("for physical devices.\n"); 991 991 992 992 adapter->flag |= (1L << cmd->device->channel); 993 993 } ··· 1045 1045 case READ_CAPACITY: 1046 1046 if(!(adapter->flag & (1L << cmd->device->channel))) { 1047 1047 
1048 - printk(KERN_NOTICE 1049 - "scsi%d: scanning scsi channel %d [P%d] ", 1048 + dev_notice(&adapter->dev->dev, 1049 + "scsi%d: scanning scsi channel %d [P%d] " 1050 + "for physical devices\n", 1050 1051 adapter->host->host_no, 1051 1052 cmd->device->channel, channel); 1052 - printk("for physical devices.\n"); 1053 1053 1054 1054 adapter->flag |= (1L << cmd->device->channel); 1055 1055 } ··· 1241 1241 return mbox->m_in.status; 1242 1242 1243 1243 bug_blocked_mailbox: 1244 - printk(KERN_WARNING "megaraid: Blocked mailbox......!!\n"); 1244 + dev_warn(&adapter->dev->dev, "Blocked mailbox......!!\n"); 1245 1245 udelay (1000); 1246 1246 return -1; 1247 1247 } ··· 1454 1454 * Make sure f/w has completed a valid command 1455 1455 */ 1456 1456 if( !(scb->state & SCB_ISSUED) || scb->cmd == NULL ) { 1457 - printk(KERN_CRIT 1458 - "megaraid: invalid command "); 1459 - printk("Id %d, scb->state:%x, scsi cmd:%p\n", 1457 + dev_crit(&adapter->dev->dev, "invalid command " 1458 + "Id %d, scb->state:%x, scsi cmd:%p\n", 1460 1459 cmdid, scb->state, scb->cmd); 1461 1460 1462 1461 continue; ··· 1466 1467 */ 1467 1468 if( scb->state & SCB_ABORT ) { 1468 1469 1469 - printk(KERN_WARNING 1470 - "megaraid: aborted cmd [%x] complete.\n", 1470 + dev_warn(&adapter->dev->dev, 1471 + "aborted cmd [%x] complete\n", 1471 1472 scb->idx); 1472 1473 1473 1474 scb->cmd->result = (DID_ABORT << 16); ··· 1485 1486 */ 1486 1487 if( scb->state & SCB_RESET ) { 1487 1488 1488 - printk(KERN_WARNING 1489 - "megaraid: reset cmd [%x] complete.\n", 1489 + dev_warn(&adapter->dev->dev, 1490 + "reset cmd [%x] complete\n", 1490 1491 scb->idx); 1491 1492 1492 1493 scb->cmd->result = (DID_RESET << 16); ··· 1552 1553 if( sg_page(sgl) ) { 1553 1554 c = *(unsigned char *) sg_virt(&sgl[0]); 1554 1555 } else { 1555 - printk(KERN_WARNING 1556 - "megaraid: invalid sg.\n"); 1556 + dev_warn(&adapter->dev->dev, "invalid sg\n"); 1557 1557 c = 0; 1558 1558 } 1559 1559 ··· 1900 1902 mc.opcode = MEGA_RESET_RESERVATIONS; 1901 1903 
1902 1904 if( mega_internal_command(adapter, &mc, NULL) != 0 ) { 1903 - printk(KERN_WARNING 1904 - "megaraid: reservation reset failed.\n"); 1905 + dev_warn(&adapter->dev->dev, "reservation reset failed\n"); 1905 1906 } 1906 1907 else { 1907 - printk(KERN_INFO "megaraid: reservation reset.\n"); 1908 + dev_info(&adapter->dev->dev, "reservation reset\n"); 1908 1909 } 1909 1910 #endif 1910 1911 ··· 1936 1939 struct list_head *pos, *next; 1937 1940 scb_t *scb; 1938 1941 1939 - printk(KERN_WARNING "megaraid: %s cmd=%x <c=%d t=%d l=%d>\n", 1942 + dev_warn(&adapter->dev->dev, "%s cmd=%x <c=%d t=%d l=%d>\n", 1940 1943 (aor == SCB_ABORT)? "ABORTING":"RESET", 1941 1944 cmd->cmnd[0], cmd->device->channel, 1942 1945 cmd->device->id, (u32)cmd->device->lun); ··· 1960 1963 */ 1961 1964 if( scb->state & SCB_ISSUED ) { 1962 1965 1963 - printk(KERN_WARNING 1964 - "megaraid: %s[%x], fw owner.\n", 1966 + dev_warn(&adapter->dev->dev, 1967 + "%s[%x], fw owner\n", 1965 1968 (aor==SCB_ABORT) ? "ABORTING":"RESET", 1966 1969 scb->idx); 1967 1970 ··· 1973 1976 * Not yet issued! Remove from the pending 1974 1977 * list 1975 1978 */ 1976 - printk(KERN_WARNING 1977 - "megaraid: %s-[%x], driver owner.\n", 1979 + dev_warn(&adapter->dev->dev, 1980 + "%s-[%x], driver owner\n", 1978 1981 (aor==SCB_ABORT) ? 
"ABORTING":"RESET", 1979 1982 scb->idx); 1980 1983 ··· 2194 2197 2195 2198 if( mega_adapinq(adapter, dma_handle) != 0 ) { 2196 2199 seq_puts(m, "Adapter inquiry failed.\n"); 2197 - printk(KERN_WARNING "megaraid: inquiry failed.\n"); 2200 + dev_warn(&adapter->dev->dev, "inquiry failed\n"); 2198 2201 goto free_inquiry; 2199 2202 } 2200 2203 ··· 2238 2241 2239 2242 if( mega_adapinq(adapter, dma_handle) != 0 ) { 2240 2243 seq_puts(m, "Adapter inquiry failed.\n"); 2241 - printk(KERN_WARNING "megaraid: inquiry failed.\n"); 2244 + dev_warn(&adapter->dev->dev, "inquiry failed\n"); 2242 2245 goto free_inquiry; 2243 2246 } 2244 2247 ··· 2347 2350 2348 2351 if( mega_adapinq(adapter, dma_handle) != 0 ) { 2349 2352 seq_puts(m, "Adapter inquiry failed.\n"); 2350 - printk(KERN_WARNING "megaraid: inquiry failed.\n"); 2353 + dev_warn(&adapter->dev->dev, "inquiry failed\n"); 2351 2354 goto free_inquiry; 2352 2355 } 2353 2356 ··· 2522 2525 2523 2526 if( mega_adapinq(adapter, dma_handle) != 0 ) { 2524 2527 seq_puts(m, "Adapter inquiry failed.\n"); 2525 - printk(KERN_WARNING "megaraid: inquiry failed.\n"); 2528 + dev_warn(&adapter->dev->dev, "inquiry failed\n"); 2526 2529 goto free_inquiry; 2527 2530 } 2528 2531 ··· 2796 2799 dir = adapter->controller_proc_dir_entry = 2797 2800 proc_mkdir_data(string, 0, parent, adapter); 2798 2801 if(!dir) { 2799 - printk(KERN_WARNING "\nmegaraid: proc_mkdir failed\n"); 2802 + dev_warn(&adapter->dev->dev, "proc_mkdir failed\n"); 2800 2803 return; 2801 2804 } 2802 2805 ··· 2804 2807 de = proc_create_data(f->name, S_IRUSR, dir, &mega_proc_fops, 2805 2808 f->show); 2806 2809 if (!de) { 2807 - printk(KERN_WARNING "\nmegaraid: proc_create failed\n"); 2810 + dev_warn(&adapter->dev->dev, "proc_create failed\n"); 2808 2811 return; 2809 2812 } 2810 2813 ··· 2871 2874 return rval; 2872 2875 } 2873 2876 2874 - printk(KERN_INFO 2875 - "megaraid: invalid partition on this disk on channel %d\n", 2876 - sdev->channel); 2877 + dev_info(&adapter->dev->dev, 2878 + 
"invalid partition on this disk on channel %d\n", 2879 + sdev->channel); 2877 2880 2878 2881 /* Default heads (64) & sectors (32) */ 2879 2882 heads = 64; ··· 2933 2936 scb->sgl = (mega_sglist *)scb->sgl64; 2934 2937 2935 2938 if( !scb->sgl ) { 2936 - printk(KERN_WARNING "RAID: Can't allocate sglist.\n"); 2939 + dev_warn(&adapter->dev->dev, "RAID: Can't allocate sglist\n"); 2937 2940 mega_free_sgl(adapter); 2938 2941 return -1; 2939 2942 } ··· 2943 2946 &scb->pthru_dma_addr); 2944 2947 2945 2948 if( !scb->pthru ) { 2946 - printk(KERN_WARNING "RAID: Can't allocate passthru.\n"); 2949 + dev_warn(&adapter->dev->dev, "RAID: Can't allocate passthru\n"); 2947 2950 mega_free_sgl(adapter); 2948 2951 return -1; 2949 2952 } ··· 2953 2956 &scb->epthru_dma_addr); 2954 2957 2955 2958 if( !scb->epthru ) { 2956 - printk(KERN_WARNING 2957 - "Can't allocate extended passthru.\n"); 2959 + dev_warn(&adapter->dev->dev, 2960 + "Can't allocate extended passthru\n"); 2958 2961 mega_free_sgl(adapter); 2959 2962 return -1; 2960 2963 } ··· 3151 3154 * Do we support this feature 3152 3155 */ 3153 3156 if( !adapter->support_random_del ) { 3154 - printk(KERN_WARNING "megaraid: logdrv "); 3155 - printk("delete on non-supporting F/W.\n"); 3157 + dev_warn(&adapter->dev->dev, "logdrv " 3158 + "delete on non-supporting F/W\n"); 3156 3159 3157 3160 return (-EINVAL); 3158 3161 } ··· 3176 3179 if( uioc.uioc_rmbox[0] == MEGA_MBOXCMD_PASSTHRU64 || 3177 3180 uioc.uioc_rmbox[0] == MEGA_MBOXCMD_EXTPTHRU ) { 3178 3181 3179 - printk(KERN_WARNING "megaraid: rejected passthru.\n"); 3182 + dev_warn(&adapter->dev->dev, "rejected passthru\n"); 3180 3183 3181 3184 return (-EINVAL); 3182 3185 } ··· 3680 3683 3681 3684 for( i = 0; i < adapter->product_info.nchannels; i++ ) { 3682 3685 if( (adapter->mega_ch_class >> i) & 0x01 ) { 3683 - printk(KERN_INFO "megaraid: channel[%d] is raid.\n", 3686 + dev_info(&adapter->dev->dev, "channel[%d] is raid\n", 3684 3687 i); 3685 3688 } 3686 3689 else { 3687 - printk(KERN_INFO 
"megaraid: channel[%d] is scsi.\n", 3690 + dev_info(&adapter->dev->dev, "channel[%d] is scsi\n", 3688 3691 i); 3689 3692 } 3690 3693 } ··· 3890 3893 3891 3894 /* log this event */ 3892 3895 if(rval) { 3893 - printk(KERN_WARNING "megaraid: Delete LD-%d failed.", logdrv); 3896 + dev_warn(&adapter->dev->dev, "Delete LD-%d failed", logdrv); 3894 3897 return rval; 3895 3898 } 3896 3899 ··· 4158 4161 * this information. 4159 4162 */ 4160 4163 if (rval && trace_level) { 4161 - printk("megaraid: cmd [%x, %x, %x] status:[%x]\n", 4164 + dev_info(&adapter->dev->dev, "cmd [%x, %x, %x] status:[%x]\n", 4162 4165 mc->cmd, mc->opcode, mc->subopcode, rval); 4163 4166 } 4164 4167 ··· 4241 4244 subsysvid = pdev->subsystem_vendor; 4242 4245 subsysid = pdev->subsystem_device; 4243 4246 4244 - printk(KERN_NOTICE "megaraid: found 0x%4.04x:0x%4.04x:bus %d:", 4245 - id->vendor, id->device, pci_bus); 4246 - 4247 - printk("slot %d:func %d\n", 4248 - PCI_SLOT(pci_dev_func), PCI_FUNC(pci_dev_func)); 4247 + dev_notice(&pdev->dev, "found 0x%4.04x:0x%4.04x\n", 4248 + id->vendor, id->device); 4249 4249 4250 4250 /* Read the base port and IRQ from PCI */ 4251 4251 mega_baseport = pci_resource_start(pdev, 0); ··· 4253 4259 flag |= BOARD_MEMMAP; 4254 4260 4255 4261 if (!request_mem_region(mega_baseport, 128, "megaraid")) { 4256 - printk(KERN_WARNING "megaraid: mem region busy!\n"); 4262 + dev_warn(&pdev->dev, "mem region busy!\n"); 4257 4263 goto out_disable_device; 4258 4264 } 4259 4265 4260 4266 mega_baseport = (unsigned long)ioremap(mega_baseport, 128); 4261 4267 if (!mega_baseport) { 4262 - printk(KERN_WARNING 4263 - "megaraid: could not map hba memory\n"); 4268 + dev_warn(&pdev->dev, "could not map hba memory\n"); 4264 4269 goto out_release_region; 4265 4270 } 4266 4271 } else { ··· 4278 4285 adapter = (adapter_t *)host->hostdata; 4279 4286 memset(adapter, 0, sizeof(adapter_t)); 4280 4287 4281 - printk(KERN_NOTICE 4288 + dev_notice(&pdev->dev, 4282 4289 "scsi%d:Found MegaRAID controller at 
0x%lx, IRQ:%d\n", 4283 4290 host->host_no, mega_baseport, irq); 4284 4291 ··· 4316 4323 adapter->mega_buffer = pci_alloc_consistent(adapter->dev, 4317 4324 MEGA_BUFFER_SIZE, &adapter->buf_dma_handle); 4318 4325 if (!adapter->mega_buffer) { 4319 - printk(KERN_WARNING "megaraid: out of RAM.\n"); 4326 + dev_warn(&pdev->dev, "out of RAM\n"); 4320 4327 goto out_host_put; 4321 4328 } 4322 4329 4323 4330 adapter->scb_list = kmalloc(sizeof(scb_t) * MAX_COMMANDS, GFP_KERNEL); 4324 4331 if (!adapter->scb_list) { 4325 - printk(KERN_WARNING "megaraid: out of RAM.\n"); 4332 + dev_warn(&pdev->dev, "out of RAM\n"); 4326 4333 goto out_free_cmd_buffer; 4327 4334 } 4328 4335 4329 4336 if (request_irq(irq, (adapter->flag & BOARD_MEMMAP) ? 4330 4337 megaraid_isr_memmapped : megaraid_isr_iomapped, 4331 4338 IRQF_SHARED, "megaraid", adapter)) { 4332 - printk(KERN_WARNING 4333 - "megaraid: Couldn't register IRQ %d!\n", irq); 4339 + dev_warn(&pdev->dev, "Couldn't register IRQ %d!\n", irq); 4334 4340 goto out_free_scb_list; 4335 4341 } 4336 4342 ··· 4349 4357 if (!strcmp(adapter->fw_version, "3.00") || 4350 4358 !strcmp(adapter->fw_version, "3.01")) { 4351 4359 4352 - printk( KERN_WARNING 4353 - "megaraid: Your card is a Dell PERC " 4354 - "2/SC RAID controller with " 4360 + dev_warn(&pdev->dev, 4361 + "Your card is a Dell PERC " 4362 + "2/SC RAID controller with " 4355 4363 "firmware\nmegaraid: 3.00 or 3.01. 
" 4356 4364 "This driver is known to have " 4357 4365 "corruption issues\nmegaraid: with " ··· 4382 4390 if (!strcmp(adapter->fw_version, "H01.07") || 4383 4391 !strcmp(adapter->fw_version, "H01.08") || 4384 4392 !strcmp(adapter->fw_version, "H01.09") ) { 4385 - printk(KERN_WARNING 4386 - "megaraid: Firmware H.01.07, " 4393 + dev_warn(&pdev->dev, 4394 + "Firmware H.01.07, " 4387 4395 "H.01.08, and H.01.09 on 1M/2M " 4388 4396 "controllers\n" 4389 - "megaraid: do not support 64 bit " 4390 - "addressing.\nmegaraid: DISABLING " 4397 + "do not support 64 bit " 4398 + "addressing.\nDISABLING " 4391 4399 "64 bit support.\n"); 4392 4400 adapter->flag &= ~BOARD_64BIT; 4393 4401 } ··· 4495 4503 */ 4496 4504 adapter->has_cluster = mega_support_cluster(adapter); 4497 4505 if (adapter->has_cluster) { 4498 - printk(KERN_NOTICE 4499 - "megaraid: Cluster driver, initiator id:%d\n", 4506 + dev_notice(&pdev->dev, 4507 + "Cluster driver, initiator id:%d\n", 4500 4508 adapter->this_id); 4501 4509 } 4502 4510 #endif ··· 4563 4571 issue_scb_block(adapter, raw_mbox); 4564 4572 4565 4573 if (atomic_read(&adapter->pend_cmds) > 0) 4566 - printk(KERN_WARNING "megaraid: pending commands!!\n"); 4574 + dev_warn(&adapter->dev->dev, "pending commands!!\n"); 4567 4575 4568 4576 /* 4569 4577 * Have a delibrate delay to make sure all the caches are
+266 -278
drivers/scsi/megaraid/megaraid_sas_base.c
··· 216 216 struct megasas_cmd, list); 217 217 list_del_init(&cmd->list); 218 218 } else { 219 - printk(KERN_ERR "megasas: Command pool empty!\n"); 219 + dev_err(&instance->pdev->dev, "Command pool empty!\n"); 220 220 } 221 221 222 222 spin_unlock_irqrestore(&instance->mfi_pool_lock, flags); ··· 273 273 megasas_enable_intr_xscale(struct megasas_instance *instance) 274 274 { 275 275 struct megasas_register_set __iomem *regs; 276 + 276 277 regs = instance->reg_set; 277 278 writel(0, &(regs)->outbound_intr_mask); 278 279 ··· 290 289 { 291 290 struct megasas_register_set __iomem *regs; 292 291 u32 mask = 0x1f; 292 + 293 293 regs = instance->reg_set; 294 294 writel(mask, &regs->outbound_intr_mask); 295 295 /* Dummy readl to force pci flush */ ··· 315 313 { 316 314 u32 status; 317 315 u32 mfiStatus = 0; 316 + 318 317 /* 319 318 * Check if it is our interrupt 320 319 */ ··· 351 348 struct megasas_register_set __iomem *regs) 352 349 { 353 350 unsigned long flags; 351 + 354 352 spin_lock_irqsave(&instance->hba_lock, flags); 355 353 writel((frame_phys_addr >> 3)|(frame_count), 356 354 &(regs)->inbound_queue_port); ··· 368 364 { 369 365 u32 i; 370 366 u32 pcidata; 367 + 371 368 writel(MFI_ADP_RESET, &regs->inbound_doorbell); 372 369 373 370 for (i = 0; i < 3; i++) 374 371 msleep(1000); /* sleep for 3 secs */ 375 372 pcidata = 0; 376 373 pci_read_config_dword(instance->pdev, MFI_1068_PCSR_OFFSET, &pcidata); 377 - printk(KERN_NOTICE "pcidata = %x\n", pcidata); 374 + dev_notice(&instance->pdev->dev, "pcidata = %x\n", pcidata); 378 375 if (pcidata & 0x2) { 379 - printk(KERN_NOTICE "mfi 1068 offset read=%x\n", pcidata); 376 + dev_notice(&instance->pdev->dev, "mfi 1068 offset read=%x\n", pcidata); 380 377 pcidata &= ~0x2; 381 378 pci_write_config_dword(instance->pdev, 382 379 MFI_1068_PCSR_OFFSET, pcidata); ··· 388 383 pcidata = 0; 389 384 pci_read_config_dword(instance->pdev, 390 385 MFI_1068_FW_HANDSHAKE_OFFSET, &pcidata); 391 - printk(KERN_NOTICE "1068 offset handshake 
read=%x\n", pcidata); 386 + dev_notice(&instance->pdev->dev, "1068 offset handshake read=%x\n", pcidata); 392 387 if ((pcidata & 0xffff0000) == MFI_1068_FW_READY) { 393 - printk(KERN_NOTICE "1068 offset pcidt=%x\n", pcidata); 388 + dev_notice(&instance->pdev->dev, "1068 offset pcidt=%x\n", pcidata); 394 389 pcidata = 0; 395 390 pci_write_config_dword(instance->pdev, 396 391 MFI_1068_FW_HANDSHAKE_OFFSET, pcidata); ··· 407 402 megasas_check_reset_xscale(struct megasas_instance *instance, 408 403 struct megasas_register_set __iomem *regs) 409 404 { 410 - 411 405 if ((instance->adprecovery != MEGASAS_HBA_OPERATIONAL) && 412 406 (le32_to_cpu(*instance->consumer) == 413 407 MEGASAS_ADPRESET_INPROG_SIGN)) ··· 437 433 438 434 /** 439 435 * The following functions are defined for ppc (deviceid : 0x60) 440 - * controllers 436 + * controllers 441 437 */ 442 438 443 439 /** ··· 448 444 megasas_enable_intr_ppc(struct megasas_instance *instance) 449 445 { 450 446 struct megasas_register_set __iomem *regs; 447 + 451 448 regs = instance->reg_set; 452 449 writel(0xFFFFFFFF, &(regs)->outbound_doorbell_clear); 453 450 ··· 467 462 { 468 463 struct megasas_register_set __iomem *regs; 469 464 u32 mask = 0xFFFFFFFF; 465 + 470 466 regs = instance->reg_set; 471 467 writel(mask, &regs->outbound_intr_mask); 472 468 /* Dummy readl to force pci flush */ ··· 528 522 struct megasas_register_set __iomem *regs) 529 523 { 530 524 unsigned long flags; 525 + 531 526 spin_lock_irqsave(&instance->hba_lock, flags); 532 527 writel((frame_phys_addr | (frame_count<<1))|1, 533 528 &(regs)->inbound_queue_port); ··· 573 566 megasas_enable_intr_skinny(struct megasas_instance *instance) 574 567 { 575 568 struct megasas_register_set __iomem *regs; 569 + 576 570 regs = instance->reg_set; 577 571 writel(0xFFFFFFFF, &(regs)->outbound_intr_mask); 578 572 ··· 592 584 { 593 585 struct megasas_register_set __iomem *regs; 594 586 u32 mask = 0xFFFFFFFF; 587 + 595 588 regs = instance->reg_set; 596 589 writel(mask, 
&regs->outbound_intr_mask); 597 590 /* Dummy readl to force pci flush */ ··· 643 634 writel(status, &regs->outbound_intr_status); 644 635 645 636 /* 646 - * dummy read to flush PCI 647 - */ 637 + * dummy read to flush PCI 638 + */ 648 639 readl(&regs->outbound_intr_status); 649 640 650 641 return mfiStatus; ··· 663 654 struct megasas_register_set __iomem *regs) 664 655 { 665 656 unsigned long flags; 657 + 666 658 spin_lock_irqsave(&instance->hba_lock, flags); 667 659 writel(upper_32_bits(frame_phys_addr), 668 660 &(regs)->inbound_high_queue_port); ··· 716 706 megasas_enable_intr_gen2(struct megasas_instance *instance) 717 707 { 718 708 struct megasas_register_set __iomem *regs; 709 + 719 710 regs = instance->reg_set; 720 711 writel(0xFFFFFFFF, &(regs)->outbound_doorbell_clear); 721 712 ··· 736 725 { 737 726 struct megasas_register_set __iomem *regs; 738 727 u32 mask = 0xFFFFFFFF; 728 + 739 729 regs = instance->reg_set; 740 730 writel(mask, &regs->outbound_intr_mask); 741 731 /* Dummy readl to force pci flush */ ··· 762 750 { 763 751 u32 status; 764 752 u32 mfiStatus = 0; 753 + 765 754 /* 766 755 * Check if it is our interrupt 767 756 */ ··· 799 786 struct megasas_register_set __iomem *regs) 800 787 { 801 788 unsigned long flags; 789 + 802 790 spin_lock_irqsave(&instance->hba_lock, flags); 803 791 writel((frame_phys_addr | (frame_count<<1))|1, 804 792 &(regs)->inbound_queue_port); ··· 814 800 megasas_adp_reset_gen2(struct megasas_instance *instance, 815 801 struct megasas_register_set __iomem *reg_set) 816 802 { 817 - u32 retry = 0 ; 818 - u32 HostDiag; 819 - u32 __iomem *seq_offset = &reg_set->seq_offset; 820 - u32 __iomem *hostdiag_offset = &reg_set->host_diag; 803 + u32 retry = 0 ; 804 + u32 HostDiag; 805 + u32 __iomem *seq_offset = &reg_set->seq_offset; 806 + u32 __iomem *hostdiag_offset = &reg_set->host_diag; 821 807 822 808 if (instance->instancet == &megasas_instance_template_skinny) { 823 809 seq_offset = &reg_set->fusion_seq_offset; ··· 835 821 836 822 
HostDiag = (u32)readl(hostdiag_offset); 837 823 838 - while ( !( HostDiag & DIAG_WRITE_ENABLE) ) { 824 + while (!(HostDiag & DIAG_WRITE_ENABLE)) { 839 825 msleep(100); 840 826 HostDiag = (u32)readl(hostdiag_offset); 841 - printk(KERN_NOTICE "RESETGEN2: retry=%x, hostdiag=%x\n", 827 + dev_notice(&instance->pdev->dev, "RESETGEN2: retry=%x, hostdiag=%x\n", 842 828 retry, HostDiag); 843 829 844 830 if (retry++ >= 100) ··· 846 832 847 833 } 848 834 849 - printk(KERN_NOTICE "ADP_RESET_GEN2: HostDiag=%x\n", HostDiag); 835 + dev_notice(&instance->pdev->dev, "ADP_RESET_GEN2: HostDiag=%x\n", HostDiag); 850 836 851 837 writel((HostDiag | DIAG_RESET_ADAPTER), hostdiag_offset); 852 838 853 839 ssleep(10); 854 840 855 841 HostDiag = (u32)readl(hostdiag_offset); 856 - while ( ( HostDiag & DIAG_RESET_ADAPTER) ) { 842 + while (HostDiag & DIAG_RESET_ADAPTER) { 857 843 msleep(100); 858 844 HostDiag = (u32)readl(hostdiag_offset); 859 - printk(KERN_NOTICE "RESET_GEN2: retry=%x, hostdiag=%x\n", 845 + dev_notice(&instance->pdev->dev, "RESET_GEN2: retry=%x, hostdiag=%x\n", 860 846 retry, HostDiag); 861 847 862 848 if (retry++ >= 1000) ··· 918 904 megasas_issue_polled(struct megasas_instance *instance, struct megasas_cmd *cmd) 919 905 { 920 906 int seconds; 921 - 922 907 struct megasas_header *frame_hdr = &cmd->frame->hdr; 923 908 924 909 frame_hdr->cmd_status = MFI_CMD_STATUS_POLL_MODE; ··· 953 940 struct megasas_cmd *cmd, int timeout) 954 941 { 955 942 int ret = 0; 943 + 956 944 cmd->cmd_status_drv = MFI_STAT_INVALID_STATUS; 957 945 958 946 instance->instancet->issue_dcmd(instance, cmd); ··· 1134 1120 int num_cnt; 1135 1121 int sge_bytes; 1136 1122 u32 sge_sz; 1137 - u32 frame_count=0; 1123 + u32 frame_count = 0; 1138 1124 1139 1125 sge_sz = (IS_DMA64) ? 
sizeof(struct megasas_sge64) : 1140 1126 sizeof(struct megasas_sge32); ··· 1165 1151 num_cnt = sge_count - 3; 1166 1152 } 1167 1153 1168 - if(num_cnt>0){ 1154 + if (num_cnt > 0) { 1169 1155 sge_bytes = sge_sz * num_cnt; 1170 1156 1171 1157 frame_count = (sge_bytes / MEGAMFI_FRAME_SIZE) + 1172 1158 ((sge_bytes % MEGAMFI_FRAME_SIZE) ? 1 : 0) ; 1173 1159 } 1174 1160 /* Main frame */ 1175 - frame_count +=1; 1161 + frame_count += 1; 1176 1162 1177 1163 if (frame_count > 7) 1178 1164 frame_count = 8; ··· 1229 1215 memcpy(pthru->cdb, scp->cmnd, scp->cmd_len); 1230 1216 1231 1217 /* 1232 - * If the command is for the tape device, set the 1233 - * pthru timeout to the os layer timeout value. 1234 - */ 1218 + * If the command is for the tape device, set the 1219 + * pthru timeout to the os layer timeout value. 1220 + */ 1235 1221 if (scp->device->type == TYPE_TAPE) { 1236 1222 if ((scp->request->timeout / HZ) > 0xFFFF) 1237 1223 pthru->timeout = cpu_to_le16(0xFFFF); ··· 1255 1241 &pthru->sgl); 1256 1242 1257 1243 if (pthru->sge_count > instance->max_num_sge) { 1258 - printk(KERN_ERR "megasas: DCDB two many SGE NUM=%x\n", 1244 + dev_err(&instance->pdev->dev, "DCDB too many SGE NUM=%x\n", 1259 1245 pthru->sge_count); 1260 1246 return 0; 1261 1247 } ··· 1396 1382 ldio->sge_count = megasas_make_sgl32(instance, scp, &ldio->sgl); 1397 1383 1398 1384 if (ldio->sge_count > instance->max_num_sge) { 1399 - printk(KERN_ERR "megasas: build_ld_io: sge_count = %x\n", 1385 + dev_err(&instance->pdev->dev, "build_ld_io: sge_count = %x\n", 1400 1386 ldio->sge_count); 1401 1387 return 0; 1402 1388 } ··· 1449 1435 1450 1436 /** 1451 1437 * megasas_dump_pending_frames - Dumps the frame address of all pending cmds 1452 - * in FW 1438 + * in FW 1453 1439 * @instance: Adapter soft state 1454 1440 */ 1455 1441 static inline void ··· 1463 1449 u32 sgcount; 1464 1450 u32 max_cmd = instance->max_fw_cmds; 1465 1451 1466 - printk(KERN_ERR "\nmegasas[%d]: Dumping Frame Phys Address of all pending cmds in 
FW\n",instance->host->host_no); 1467 - printk(KERN_ERR "megasas[%d]: Total OS Pending cmds : %d\n",instance->host->host_no,atomic_read(&instance->fw_outstanding)); 1452 + dev_err(&instance->pdev->dev, "[%d]: Dumping Frame Phys Address of all pending cmds in FW\n",instance->host->host_no); 1453 + dev_err(&instance->pdev->dev, "[%d]: Total OS Pending cmds : %d\n",instance->host->host_no,atomic_read(&instance->fw_outstanding)); 1468 1454 if (IS_DMA64) 1469 - printk(KERN_ERR "\nmegasas[%d]: 64 bit SGLs were sent to FW\n",instance->host->host_no); 1455 + dev_err(&instance->pdev->dev, "[%d]: 64 bit SGLs were sent to FW\n",instance->host->host_no); 1470 1456 else 1471 - printk(KERN_ERR "\nmegasas[%d]: 32 bit SGLs were sent to FW\n",instance->host->host_no); 1457 + dev_err(&instance->pdev->dev, "[%d]: 32 bit SGLs were sent to FW\n",instance->host->host_no); 1472 1458 1473 - printk(KERN_ERR "megasas[%d]: Pending OS cmds in FW : \n",instance->host->host_no); 1459 + dev_err(&instance->pdev->dev, "[%d]: Pending OS cmds in FW : \n",instance->host->host_no); 1474 1460 for (i = 0; i < max_cmd; i++) { 1475 1461 cmd = instance->cmd_list[i]; 1476 - if(!cmd->scmd) 1462 + if (!cmd->scmd) 1477 1463 continue; 1478 - printk(KERN_ERR "megasas[%d]: Frame addr :0x%08lx : ",instance->host->host_no,(unsigned long)cmd->frame_phys_addr); 1464 + dev_err(&instance->pdev->dev, "[%d]: Frame addr :0x%08lx : ",instance->host->host_no,(unsigned long)cmd->frame_phys_addr); 1479 1465 if (megasas_cmd_type(cmd->scmd) == READ_WRITE_LDIO) { 1480 1466 ldio = (struct megasas_io_frame *)cmd->frame; 1481 1467 mfi_sgl = &ldio->sgl; 1482 1468 sgcount = ldio->sge_count; 1483 - printk(KERN_ERR "megasas[%d]: frame count : 0x%x, Cmd : 0x%x, Tgt id : 0x%x," 1469 + dev_err(&instance->pdev->dev, "[%d]: frame count : 0x%x, Cmd : 0x%x, Tgt id : 0x%x," 1484 1470 " lba lo : 0x%x, lba_hi : 0x%x, sense_buf addr : 0x%x,sge count : 0x%x\n", 1485 1471 instance->host->host_no, cmd->frame_count, ldio->cmd, ldio->target_id, 1486 
1472 le32_to_cpu(ldio->start_lba_lo), le32_to_cpu(ldio->start_lba_hi), 1487 1473 le32_to_cpu(ldio->sense_buf_phys_addr_lo), sgcount); 1488 - } 1489 - else { 1474 + } else { 1490 1475 pthru = (struct megasas_pthru_frame *) cmd->frame; 1491 1476 mfi_sgl = &pthru->sgl; 1492 1477 sgcount = pthru->sge_count; 1493 - printk(KERN_ERR "megasas[%d]: frame count : 0x%x, Cmd : 0x%x, Tgt id : 0x%x, " 1478 + dev_err(&instance->pdev->dev, "[%d]: frame count : 0x%x, Cmd : 0x%x, Tgt id : 0x%x, " 1494 1479 "lun : 0x%x, cdb_len : 0x%x, data xfer len : 0x%x, sense_buf addr : 0x%x,sge count : 0x%x\n", 1495 1480 instance->host->host_no, cmd->frame_count, pthru->cmd, pthru->target_id, 1496 1481 pthru->lun, pthru->cdb_len, le32_to_cpu(pthru->data_xfer_len), 1497 1482 le32_to_cpu(pthru->sense_buf_phys_addr_lo), sgcount); 1498 1483 } 1499 - if(megasas_dbg_lvl & MEGASAS_DBG_LVL){ 1500 - for (n = 0; n < sgcount; n++){ 1501 - if (IS_DMA64) 1502 - printk(KERN_ERR "megasas: sgl len : 0x%x, sgl addr : 0x%llx ", 1503 - le32_to_cpu(mfi_sgl->sge64[n].length), 1504 - le64_to_cpu(mfi_sgl->sge64[n].phys_addr)); 1505 - else 1506 - printk(KERN_ERR "megasas: sgl len : 0x%x, sgl addr : 0x%x ", 1507 - le32_to_cpu(mfi_sgl->sge32[n].length), 1508 - le32_to_cpu(mfi_sgl->sge32[n].phys_addr)); 1484 + if (megasas_dbg_lvl & MEGASAS_DBG_LVL) { 1485 + for (n = 0; n < sgcount; n++) { 1486 + if (IS_DMA64) 1487 + dev_err(&instance->pdev->dev, "sgl len : 0x%x, sgl addr : 0x%llx\n", 1488 + le32_to_cpu(mfi_sgl->sge64[n].length), 1489 + le64_to_cpu(mfi_sgl->sge64[n].phys_addr)); 1490 + else 1491 + dev_err(&instance->pdev->dev, "sgl len : 0x%x, sgl addr : 0x%x\n", 1492 + le32_to_cpu(mfi_sgl->sge32[n].length), 1493 + le32_to_cpu(mfi_sgl->sge32[n].phys_addr)); 1509 1494 } 1510 1495 } 1511 - printk(KERN_ERR "\n"); 1512 1496 } /*for max_cmd*/ 1513 - printk(KERN_ERR "\nmegasas[%d]: Pending Internal cmds in FW : \n",instance->host->host_no); 1497 + dev_err(&instance->pdev->dev, "[%d]: Pending Internal cmds in FW : 
\n",instance->host->host_no); 1514 1498 for (i = 0; i < max_cmd; i++) { 1515 1499 1516 1500 cmd = instance->cmd_list[i]; 1517 1501 1518 - if(cmd->sync_cmd == 1){ 1519 - printk(KERN_ERR "0x%08lx : ", (unsigned long)cmd->frame_phys_addr); 1520 - } 1502 + if (cmd->sync_cmd == 1) 1503 + dev_err(&instance->pdev->dev, "0x%08lx : ", (unsigned long)cmd->frame_phys_addr); 1521 1504 } 1522 - printk(KERN_ERR "megasas[%d]: Dumping Done.\n\n",instance->host->host_no); 1505 + dev_err(&instance->pdev->dev, "[%d]: Dumping Done\n\n",instance->host->host_no); 1523 1506 } 1524 1507 1525 1508 u32 ··· 1634 1623 } 1635 1624 1636 1625 if (instance->instancet->build_and_issue_cmd(instance, scmd)) { 1637 - printk(KERN_ERR "megasas: Err returned from build_and_issue_cmd\n"); 1626 + dev_err(&instance->pdev->dev, "Err returned from build_and_issue_cmd\n"); 1638 1627 return SCSI_MLQUEUE_HOST_BUSY; 1639 1628 } 1640 1629 ··· 1662 1651 static int megasas_slave_configure(struct scsi_device *sdev) 1663 1652 { 1664 1653 /* 1665 - * The RAID firmware may require extended timeouts. 1666 - */ 1654 + * The RAID firmware may require extended timeouts. 
1655 + */ 1667 1656 blk_queue_rq_timeout(sdev->request_queue, 1668 1657 MEGASAS_DEFAULT_CMD_TIMEOUT * HZ); 1669 1658 ··· 1672 1661 1673 1662 static int megasas_slave_alloc(struct scsi_device *sdev) 1674 1663 { 1675 - u16 pd_index = 0; 1664 + u16 pd_index = 0; 1676 1665 struct megasas_instance *instance ; 1666 + 1677 1667 instance = megasas_lookup_instance(sdev->host->host_no); 1678 1668 if (sdev->channel < MEGASAS_MAX_PD_CHANNELS) { 1679 1669 /* ··· 1740 1728 (instance->pdev->device == PCI_DEVICE_ID_LSI_PLASMA) || 1741 1729 (instance->pdev->device == PCI_DEVICE_ID_LSI_INVADER) || 1742 1730 (instance->pdev->device == PCI_DEVICE_ID_LSI_FURY)) { 1743 - writel(MFI_STOP_ADP, 1744 - &instance->reg_set->doorbell); 1731 + writel(MFI_STOP_ADP, &instance->reg_set->doorbell); 1745 1732 /* Flush */ 1746 1733 readl(&instance->reg_set->doorbell); 1747 1734 if (instance->mpio && instance->requestorId) ··· 1794 1783 unsigned long flags; 1795 1784 1796 1785 /* If we have already declared adapter dead, donot complete cmds */ 1797 - if (instance->adprecovery == MEGASAS_HW_CRITICAL_ERROR ) 1786 + if (instance->adprecovery == MEGASAS_HW_CRITICAL_ERROR) 1798 1787 return; 1799 1788 1800 1789 spin_lock_irqsave(&instance->completion_lock, flags); ··· 1805 1794 while (consumer != producer) { 1806 1795 context = le32_to_cpu(instance->reply_queue[consumer]); 1807 1796 if (context >= instance->max_fw_cmds) { 1808 - printk(KERN_ERR "Unexpected context value %x\n", 1797 + dev_err(&instance->pdev->dev, "Unexpected context value %x\n", 1809 1798 context); 1810 1799 BUG(); 1811 1800 } ··· 1884 1873 cmd = megasas_get_cmd(instance); 1885 1874 1886 1875 if (!cmd) { 1887 - printk(KERN_DEBUG "megasas: megasas_get_ld_vf_affiliation_111:" 1888 - "Failed to get cmd for scsi%d.\n", 1876 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "megasas_get_ld_vf_affiliation_111:" 1877 + "Failed to get cmd for scsi%d\n", 1889 1878 instance->host->host_no); 1890 1879 return -ENOMEM; 1891 1880 } ··· 1893 1882 dcmd = 
&cmd->frame->dcmd; 1894 1883 1895 1884 if (!instance->vf_affiliation_111) { 1896 - printk(KERN_WARNING "megasas: SR-IOV: Couldn't get LD/VF " 1897 - "affiliation for scsi%d.\n", instance->host->host_no); 1885 + dev_warn(&instance->pdev->dev, "SR-IOV: Couldn't get LD/VF " 1886 + "affiliation for scsi%d\n", instance->host->host_no); 1898 1887 megasas_return_cmd(instance, cmd); 1899 1888 return -ENOMEM; 1900 1889 } ··· 1908 1897 sizeof(struct MR_LD_VF_AFFILIATION_111), 1909 1898 &new_affiliation_111_h); 1910 1899 if (!new_affiliation_111) { 1911 - printk(KERN_DEBUG "megasas: SR-IOV: Couldn't allocate " 1912 - "memory for new affiliation for scsi%d.\n", 1900 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "SR-IOV: Couldn't allocate " 1901 + "memory for new affiliation for scsi%d\n", 1913 1902 instance->host->host_no); 1914 1903 megasas_return_cmd(instance, cmd); 1915 1904 return -ENOMEM; ··· 1940 1929 dcmd->sgl.sge32[0].length = cpu_to_le32( 1941 1930 sizeof(struct MR_LD_VF_AFFILIATION_111)); 1942 1931 1943 - printk(KERN_WARNING "megasas: SR-IOV: Getting LD/VF affiliation for " 1932 + dev_warn(&instance->pdev->dev, "SR-IOV: Getting LD/VF affiliation for " 1944 1933 "scsi%d\n", instance->host->host_no); 1945 1934 1946 1935 megasas_issue_blocked_cmd(instance, cmd, 0); 1947 1936 1948 1937 if (dcmd->cmd_status) { 1949 - printk(KERN_WARNING "megasas: SR-IOV: LD/VF affiliation DCMD" 1950 - " failed with status 0x%x for scsi%d.\n", 1938 + dev_warn(&instance->pdev->dev, "SR-IOV: LD/VF affiliation DCMD" 1939 + " failed with status 0x%x for scsi%d\n", 1951 1940 dcmd->cmd_status, instance->host->host_no); 1952 1941 retval = 1; /* Do a scan if we couldn't get affiliation */ 1953 1942 goto out; ··· 1958 1947 for (ld = 0 ; ld < new_affiliation_111->vdCount; ld++) 1959 1948 if (instance->vf_affiliation_111->map[ld].policy[thisVf] != 1960 1949 new_affiliation_111->map[ld].policy[thisVf]) { 1961 - printk(KERN_WARNING "megasas: SR-IOV: " 1962 - "Got new LD/VF affiliation " 1963 - "for 
scsi%d.\n", 1950 + dev_warn(&instance->pdev->dev, "SR-IOV: " 1951 + "Got new LD/VF affiliation for scsi%d\n", 1964 1952 instance->host->host_no); 1965 1953 memcpy(instance->vf_affiliation_111, 1966 1954 new_affiliation_111, ··· 1995 1985 cmd = megasas_get_cmd(instance); 1996 1986 1997 1987 if (!cmd) { 1998 - printk(KERN_DEBUG "megasas: megasas_get_ld_vf_affiliation12: " 1999 - "Failed to get cmd for scsi%d.\n", 1988 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "megasas_get_ld_vf_affiliation12: " 1989 + "Failed to get cmd for scsi%d\n", 2000 1990 instance->host->host_no); 2001 1991 return -ENOMEM; 2002 1992 } ··· 2004 1994 dcmd = &cmd->frame->dcmd; 2005 1995 2006 1996 if (!instance->vf_affiliation) { 2007 - printk(KERN_WARNING "megasas: SR-IOV: Couldn't get LD/VF " 2008 - "affiliation for scsi%d.\n", instance->host->host_no); 1997 + dev_warn(&instance->pdev->dev, "SR-IOV: Couldn't get LD/VF " 1998 + "affiliation for scsi%d\n", instance->host->host_no); 2009 1999 megasas_return_cmd(instance, cmd); 2010 2000 return -ENOMEM; 2011 2001 } ··· 2020 2010 sizeof(struct MR_LD_VF_AFFILIATION), 2021 2011 &new_affiliation_h); 2022 2012 if (!new_affiliation) { 2023 - printk(KERN_DEBUG "megasas: SR-IOV: Couldn't allocate " 2024 - "memory for new affiliation for scsi%d.\n", 2013 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "SR-IOV: Couldn't allocate " 2014 + "memory for new affiliation for scsi%d\n", 2025 2015 instance->host->host_no); 2026 2016 megasas_return_cmd(instance, cmd); 2027 2017 return -ENOMEM; ··· 2052 2042 dcmd->sgl.sge32[0].length = cpu_to_le32((MAX_LOGICAL_DRIVES + 1) * 2053 2043 sizeof(struct MR_LD_VF_AFFILIATION)); 2054 2044 2055 - printk(KERN_WARNING "megasas: SR-IOV: Getting LD/VF affiliation for " 2045 + dev_warn(&instance->pdev->dev, "SR-IOV: Getting LD/VF affiliation for " 2056 2046 "scsi%d\n", instance->host->host_no); 2057 2047 2058 2048 megasas_issue_blocked_cmd(instance, cmd, 0); 2059 2049 2060 2050 if (dcmd->cmd_status) { 2061 - printk(KERN_WARNING 
"megasas: SR-IOV: LD/VF affiliation DCMD" 2062 - " failed with status 0x%x for scsi%d.\n", 2051 + dev_warn(&instance->pdev->dev, "SR-IOV: LD/VF affiliation DCMD" 2052 + " failed with status 0x%x for scsi%d\n", 2063 2053 dcmd->cmd_status, instance->host->host_no); 2064 2054 retval = 1; /* Do a scan if we couldn't get affiliation */ 2065 2055 goto out; ··· 2067 2057 2068 2058 if (!initial) { 2069 2059 if (!new_affiliation->ldCount) { 2070 - printk(KERN_WARNING "megasas: SR-IOV: Got new LD/VF " 2071 - "affiliation for passive path for scsi%d.\n", 2060 + dev_warn(&instance->pdev->dev, "SR-IOV: Got new LD/VF " 2061 + "affiliation for passive path for scsi%d\n", 2072 2062 instance->host->host_no); 2073 2063 retval = 1; 2074 2064 goto out; ··· 2133 2123 } 2134 2124 out: 2135 2125 if (doscan) { 2136 - printk(KERN_WARNING "megasas: SR-IOV: Got new LD/VF " 2137 - "affiliation for scsi%d.\n", instance->host->host_no); 2126 + dev_warn(&instance->pdev->dev, "SR-IOV: Got new LD/VF " 2127 + "affiliation for scsi%d\n", instance->host->host_no); 2138 2128 memcpy(instance->vf_affiliation, new_affiliation, 2139 2129 new_affiliation->size); 2140 2130 retval = 1; ··· 2174 2164 cmd = megasas_get_cmd(instance); 2175 2165 2176 2166 if (!cmd) { 2177 - printk(KERN_DEBUG "megasas: megasas_sriov_start_heartbeat: " 2178 - "Failed to get cmd for scsi%d.\n", 2167 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "megasas_sriov_start_heartbeat: " 2168 + "Failed to get cmd for scsi%d\n", 2179 2169 instance->host->host_no); 2180 2170 return -ENOMEM; 2181 2171 } ··· 2188 2178 sizeof(struct MR_CTRL_HB_HOST_MEM), 2189 2179 &instance->hb_host_mem_h); 2190 2180 if (!instance->hb_host_mem) { 2191 - printk(KERN_DEBUG "megasas: SR-IOV: Couldn't allocate" 2192 - " memory for heartbeat host memory for " 2193 - "scsi%d.\n", instance->host->host_no); 2181 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "SR-IOV: Couldn't allocate" 2182 + " memory for heartbeat host memory for scsi%d\n", 2183 + 
instance->host->host_no); 2194 2184 retval = -ENOMEM; 2195 2185 goto out; 2196 2186 } ··· 2210 2200 dcmd->sgl.sge32[0].phys_addr = cpu_to_le32(instance->hb_host_mem_h); 2211 2201 dcmd->sgl.sge32[0].length = cpu_to_le32(sizeof(struct MR_CTRL_HB_HOST_MEM)); 2212 2202 2213 - printk(KERN_WARNING "megasas: SR-IOV: Starting heartbeat for scsi%d\n", 2203 + dev_warn(&instance->pdev->dev, "SR-IOV: Starting heartbeat for scsi%d\n", 2214 2204 instance->host->host_no); 2215 2205 2216 2206 if (instance->ctrl_context && !instance->mask_interrupts) ··· 2246 2236 mod_timer(&instance->sriov_heartbeat_timer, 2247 2237 jiffies + MEGASAS_SRIOV_HEARTBEAT_INTERVAL_VF); 2248 2238 } else { 2249 - printk(KERN_WARNING "megasas: SR-IOV: Heartbeat never " 2239 + dev_warn(&instance->pdev->dev, "SR-IOV: Heartbeat never " 2250 2240 "completed for scsi%d\n", instance->host->host_no); 2251 2241 schedule_work(&instance->work_init); 2252 2242 } ··· 2284 2274 &clist_local); 2285 2275 spin_unlock_irqrestore(&instance->hba_lock, flags); 2286 2276 2287 - printk(KERN_NOTICE "megasas: HBA reset wait ...\n"); 2277 + dev_notice(&instance->pdev->dev, "HBA reset wait ...\n"); 2288 2278 for (i = 0; i < wait_time; i++) { 2289 2279 msleep(1000); 2290 2280 spin_lock_irqsave(&instance->hba_lock, flags); ··· 2295 2285 } 2296 2286 2297 2287 if (adprecovery != MEGASAS_HBA_OPERATIONAL) { 2298 - printk(KERN_NOTICE "megasas: reset: Stopping HBA.\n"); 2288 + dev_notice(&instance->pdev->dev, "reset: Stopping HBA.\n"); 2299 2289 spin_lock_irqsave(&instance->hba_lock, flags); 2300 - instance->adprecovery = MEGASAS_HW_CRITICAL_ERROR; 2290 + instance->adprecovery = MEGASAS_HW_CRITICAL_ERROR; 2301 2291 spin_unlock_irqrestore(&instance->hba_lock, flags); 2302 2292 return FAILED; 2303 2293 } 2304 2294 2305 - reset_index = 0; 2295 + reset_index = 0; 2306 2296 while (!list_empty(&clist_local)) { 2307 - reset_cmd = list_entry((&clist_local)->next, 2297 + reset_cmd = list_entry((&clist_local)->next, 2308 2298 struct megasas_cmd, 
list); 2309 2299 list_del_init(&reset_cmd->list); 2310 2300 if (reset_cmd->scmd) { 2311 2301 reset_cmd->scmd->result = DID_RESET << 16; 2312 - printk(KERN_NOTICE "%d:%p reset [%02x]\n", 2302 + dev_notice(&instance->pdev->dev, "%d:%p reset [%02x]\n", 2313 2303 reset_index, reset_cmd, 2314 2304 reset_cmd->scmd->cmnd[0]); 2315 2305 2316 2306 reset_cmd->scmd->scsi_done(reset_cmd->scmd); 2317 2307 megasas_return_cmd(instance, reset_cmd); 2318 2308 } else if (reset_cmd->sync_cmd) { 2319 - printk(KERN_NOTICE "megasas:%p synch cmds" 2309 + dev_notice(&instance->pdev->dev, "%p synch cmds" 2320 2310 "reset queue\n", 2321 2311 reset_cmd); 2322 2312 ··· 2325 2315 reset_cmd->frame_phys_addr, 2326 2316 0, instance->reg_set); 2327 2317 } else { 2328 - printk(KERN_NOTICE "megasas: %p unexpected" 2318 + dev_notice(&instance->pdev->dev, "%p unexpected" 2329 2319 "cmds lst\n", 2330 2320 reset_cmd); 2331 2321 } ··· 2336 2326 } 2337 2327 2338 2328 for (i = 0; i < resetwaittime; i++) { 2339 - 2340 2329 int outstanding = atomic_read(&instance->fw_outstanding); 2341 2330 2342 2331 if (!outstanding) 2343 2332 break; 2344 2333 2345 2334 if (!(i % MEGASAS_RESET_NOTICE_INTERVAL)) { 2346 - printk(KERN_NOTICE "megasas: [%2d]waiting for %d " 2335 + dev_notice(&instance->pdev->dev, "[%2d]waiting for %d " 2347 2336 "commands to complete\n",i,outstanding); 2348 2337 /* 2349 2338 * Call cmd completion routine. 
Cmd to be ··· 2374 2365 i++; 2375 2366 } while (i <= 3); 2376 2367 2377 - if (atomic_read(&instance->fw_outstanding) && 2378 - !kill_adapter_flag) { 2368 + if (atomic_read(&instance->fw_outstanding) && !kill_adapter_flag) { 2379 2369 if (instance->disableOnlineCtrlReset == 0) { 2380 - 2381 2370 megasas_do_ocr(instance); 2382 2371 2383 2372 /* wait for 5 secs to let FW finish the pending cmds */ ··· 2391 2384 2392 2385 if (atomic_read(&instance->fw_outstanding) || 2393 2386 (kill_adapter_flag == 2)) { 2394 - printk(KERN_NOTICE "megaraid_sas: pending cmds after reset\n"); 2387 + dev_notice(&instance->pdev->dev, "pending cmds after reset\n"); 2395 2388 /* 2396 - * Send signal to FW to stop processing any pending cmds. 2397 - * The controller will be taken offline by the OS now. 2398 - */ 2389 + * Send signal to FW to stop processing any pending cmds. 2390 + * The controller will be taken offline by the OS now. 2391 + */ 2399 2392 if ((instance->pdev->device == 2400 2393 PCI_DEVICE_ID_LSI_SAS0073SKINNY) || 2401 2394 (instance->pdev->device == ··· 2408 2401 } 2409 2402 megasas_dump_pending_frames(instance); 2410 2403 spin_lock_irqsave(&instance->hba_lock, flags); 2411 - instance->adprecovery = MEGASAS_HW_CRITICAL_ERROR; 2404 + instance->adprecovery = MEGASAS_HW_CRITICAL_ERROR; 2412 2405 spin_unlock_irqrestore(&instance->hba_lock, flags); 2413 2406 return FAILED; 2414 2407 } 2415 2408 2416 - printk(KERN_NOTICE "megaraid_sas: no pending cmds after reset\n"); 2409 + dev_notice(&instance->pdev->dev, "no pending cmds after reset\n"); 2417 2410 2418 2411 return SUCCESS; 2419 2412 } ··· 2437 2430 scmd->cmnd[0], scmd->retries); 2438 2431 2439 2432 if (instance->adprecovery == MEGASAS_HW_CRITICAL_ERROR) { 2440 - printk(KERN_ERR "megasas: cannot recover from previous reset " 2441 - "failures\n"); 2433 + dev_err(&instance->pdev->dev, "cannot recover from previous reset failures\n"); 2442 2434 return FAILED; 2443 2435 } 2444 2436 2445 2437 ret_val = 
megasas_wait_for_outstanding(instance); 2446 2438 if (ret_val == SUCCESS) 2447 - printk(KERN_NOTICE "megasas: reset successful \n"); 2439 + dev_notice(&instance->pdev->dev, "reset successful\n"); 2448 2440 else 2449 - printk(KERN_ERR "megasas: failed to do reset\n"); 2441 + dev_err(&instance->pdev->dev, "failed to do reset\n"); 2450 2442 2451 2443 return ret_val; 2452 2444 } ··· 2487 2481 */ 2488 2482 static int megasas_reset_device(struct scsi_cmnd *scmd) 2489 2483 { 2490 - int ret; 2491 - 2492 2484 /* 2493 2485 * First wait for all commands to complete 2494 2486 */ 2495 - ret = megasas_generic_reset(scmd); 2496 - 2497 - return ret; 2487 + return megasas_generic_reset(scmd); 2498 2488 } 2499 2489 2500 2490 /** ··· 2500 2498 { 2501 2499 int ret; 2502 2500 struct megasas_instance *instance; 2501 + 2503 2502 instance = (struct megasas_instance *)scmd->device->host->hostdata; 2504 2503 2505 2504 /* ··· 2519 2516 2520 2517 /** 2521 2518 * megasas_bios_param - Returns disk geometry for a disk 2522 - * @sdev: device handle 2519 + * @sdev: device handle 2523 2520 * @bdev: block device 2524 2521 * @capacity: drive capacity 2525 2522 * @geom: geometry parameters ··· 2532 2529 int sectors; 2533 2530 sector_t cylinders; 2534 2531 unsigned long tmp; 2532 + 2535 2533 /* Default heads (64) & sectors (32) */ 2536 2534 heads = 64; 2537 2535 sectors = 32; ··· 2579 2575 megasas_service_aen(struct megasas_instance *instance, struct megasas_cmd *cmd) 2580 2576 { 2581 2577 unsigned long flags; 2578 + 2582 2579 /* 2583 2580 * Don't signal app if it is just an aborted previously registered aen 2584 2581 */ ··· 2600 2595 if ((instance->unload == 0) && 2601 2596 ((instance->issuepend_done == 1))) { 2602 2597 struct megasas_aen_event *ev; 2598 + 2603 2599 ev = kzalloc(sizeof(*ev), GFP_ATOMIC); 2604 2600 if (!ev) { 2605 - printk(KERN_ERR "megasas_service_aen: out of memory\n"); 2601 + dev_err(&instance->pdev->dev, "megasas_service_aen: out of memory\n"); 2606 2602 } else { 2607 2603 
ev->instance = instance; 2608 2604 instance->ev = ev; ··· 2660 2654 2661 2655 buff_addr = (unsigned long) buf; 2662 2656 2663 - if (buff_offset > 2664 - (instance->fw_crash_buffer_size * dmachunk)) { 2657 + if (buff_offset > (instance->fw_crash_buffer_size * dmachunk)) { 2665 2658 dev_err(&instance->pdev->dev, 2666 2659 "Firmware crash dump offset is out of range\n"); 2667 2660 spin_unlock_irqrestore(&instance->crashdump_lock, flags); ··· 2672 2667 2673 2668 src_addr = (unsigned long)instance->crash_buf[buff_offset / dmachunk] + 2674 2669 (buff_offset % dmachunk); 2675 - memcpy(buf, (void *)src_addr, size); 2670 + memcpy(buf, (void *)src_addr, size); 2676 2671 spin_unlock_irqrestore(&instance->crashdump_lock, flags); 2677 2672 2678 2673 return size; ··· 2732 2727 struct Scsi_Host *shost = class_to_shost(cdev); 2733 2728 struct megasas_instance *instance = 2734 2729 (struct megasas_instance *) shost->hostdata; 2730 + 2735 2731 return snprintf(buf, PAGE_SIZE, "%d\n", instance->fw_crash_state); 2736 2732 } 2737 2733 ··· 2817 2811 cmd->cmd_status_drv = 0; 2818 2812 wake_up(&instance->abort_cmd_wait_q); 2819 2813 } 2820 - 2821 - return; 2822 2814 } 2823 2815 2824 2816 /** ··· 2824 2820 * @instance: Adapter soft state 2825 2821 * @cmd: Command to be completed 2826 2822 * @alt_status: If non-zero, use this value as status to 2827 - * SCSI mid-layer instead of the value returned 2828 - * by the FW. This should be used if caller wants 2829 - * an alternate status (as in the case of aborted 2830 - * commands) 2823 + * SCSI mid-layer instead of the value returned 2824 + * by the FW. This should be used if caller wants 2825 + * an alternate status (as in the case of aborted 2826 + * commands) 2831 2827 */ 2832 2828 void 2833 2829 megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd, ··· 2851 2847 MR_DCMD_CTRL_EVENT_GET_INFO left over from the main kernel 2852 2848 when booting the kdump kernel. 
Ignore this command to 2853 2849 prevent a kernel panic on shutdown of the kdump kernel. */ 2854 - printk(KERN_WARNING "megaraid_sas: MFI_CMD_INVALID command " 2855 - "completed.\n"); 2856 - printk(KERN_WARNING "megaraid_sas: If you have a controller " 2857 - "other than PERC5, please upgrade your firmware.\n"); 2850 + dev_warn(&instance->pdev->dev, "MFI_CMD_INVALID command " 2851 + "completed\n"); 2852 + dev_warn(&instance->pdev->dev, "If you have a controller " 2853 + "other than PERC5, please upgrade your firmware\n"); 2858 2854 break; 2859 2855 case MFI_CMD_PD_SCSI_IO: 2860 2856 case MFI_CMD_LD_SCSI_IO: ··· 2922 2918 break; 2923 2919 2924 2920 default: 2925 - printk(KERN_DEBUG "megasas: MFI FW status %#x\n", 2921 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "MFI FW status %#x\n", 2926 2922 hdr->cmd_status); 2927 2923 cmd->scmd->result = DID_ERROR << 16; 2928 2924 break; ··· 2948 2944 if (cmd->frame->hdr.cmd_status != 0) { 2949 2945 if (cmd->frame->hdr.cmd_status != 2950 2946 MFI_STAT_NOT_FOUND) 2951 - printk(KERN_WARNING "megasas: map sync" 2952 - "failed, status = 0x%x.\n", 2947 + dev_warn(&instance->pdev->dev, "map syncfailed, status = 0x%x\n", 2953 2948 cmd->frame->hdr.cmd_status); 2954 2949 else { 2955 2950 megasas_return_cmd(instance, cmd); ··· 3000 2997 break; 3001 2998 3002 2999 default: 3003 - printk("megasas: Unknown command completed! [0x%X]\n", 3000 + dev_info(&instance->pdev->dev, "Unknown command completed! 
[0x%X]\n", 3004 3001 hdr->cmd); 3005 3002 break; 3006 3003 } ··· 3008 3005 3009 3006 /** 3010 3007 * megasas_issue_pending_cmds_again - issue all pending cmds 3011 - * in FW again because of the fw reset 3008 + * in FW again because of the fw reset 3012 3009 * @instance: Adapter soft state 3013 3010 */ 3014 3011 static inline void ··· 3026 3023 spin_unlock_irqrestore(&instance->hba_lock, flags); 3027 3024 3028 3025 while (!list_empty(&clist_local)) { 3029 - cmd = list_entry((&clist_local)->next, 3026 + cmd = list_entry((&clist_local)->next, 3030 3027 struct megasas_cmd, list); 3031 3028 list_del_init(&cmd->list); 3032 3029 3033 3030 if (cmd->sync_cmd || cmd->scmd) { 3034 - printk(KERN_NOTICE "megaraid_sas: command %p, %p:%d" 3035 - "detected to be pending while HBA reset.\n", 3031 + dev_notice(&instance->pdev->dev, "command %p, %p:%d" 3032 + "detected to be pending while HBA reset\n", 3036 3033 cmd, cmd->scmd, cmd->sync_cmd); 3037 3034 3038 3035 cmd->retry_for_fw_reset++; 3039 3036 3040 3037 if (cmd->retry_for_fw_reset == 3) { 3041 - printk(KERN_NOTICE "megaraid_sas: cmd %p, %p:%d" 3038 + dev_notice(&instance->pdev->dev, "cmd %p, %p:%d" 3042 3039 "was tried multiple times during reset." 
3043 3040 "Shutting down the HBA\n", 3044 3041 cmd, cmd->scmd, cmd->sync_cmd); ··· 3051 3048 3052 3049 if (cmd->sync_cmd == 1) { 3053 3050 if (cmd->scmd) { 3054 - printk(KERN_NOTICE "megaraid_sas: unexpected" 3051 + dev_notice(&instance->pdev->dev, "unexpected" 3055 3052 "cmd attached to internal command!\n"); 3056 3053 } 3057 - printk(KERN_NOTICE "megasas: %p synchronous cmd" 3054 + dev_notice(&instance->pdev->dev, "%p synchronous cmd" 3058 3055 "on the internal reset queue," 3059 3056 "issue it again.\n", cmd); 3060 3057 cmd->cmd_status_drv = MFI_STAT_INVALID_STATUS; 3061 3058 instance->instancet->fire_cmd(instance, 3062 - cmd->frame_phys_addr , 3059 + cmd->frame_phys_addr, 3063 3060 0, instance->reg_set); 3064 3061 } else if (cmd->scmd) { 3065 - printk(KERN_NOTICE "megasas: %p scsi cmd [%02x]" 3062 + dev_notice(&instance->pdev->dev, "%p scsi cmd [%02x]" 3066 3063 "detected on the internal queue, issue again.\n", 3067 3064 cmd, cmd->scmd->cmnd[0]); 3068 3065 ··· 3071 3068 cmd->frame_phys_addr, 3072 3069 cmd->frame_count-1, instance->reg_set); 3073 3070 } else { 3074 - printk(KERN_NOTICE "megasas: %p unexpected cmd on the" 3071 + dev_notice(&instance->pdev->dev, "%p unexpected cmd on the" 3075 3072 "internal reset defer list while re-issue!!\n", 3076 3073 cmd); 3077 3074 } 3078 3075 } 3079 3076 3080 3077 if (instance->aen_cmd) { 3081 - printk(KERN_NOTICE "megaraid_sas: aen_cmd in def process\n"); 3078 + dev_notice(&instance->pdev->dev, "aen_cmd in def process\n"); 3082 3079 megasas_return_cmd(instance, instance->aen_cmd); 3083 3080 3084 - instance->aen_cmd = NULL; 3081 + instance->aen_cmd = NULL; 3085 3082 } 3086 3083 3087 3084 /* 3088 - * Initiate AEN (Asynchronous Event Notification) 3089 - */ 3085 + * Initiate AEN (Asynchronous Event Notification) 3086 + */ 3090 3087 seq_num = instance->last_seq_num; 3091 3088 class_locale.members.reserved = 0; 3092 3089 class_locale.members.locale = MR_EVT_LOCALE_ALL; ··· 3113 3110 u32 defer_index; 3114 3111 unsigned long 
flags; 3115 3112 3116 - defer_index = 0; 3113 + defer_index = 0; 3117 3114 spin_lock_irqsave(&instance->mfi_pool_lock, flags); 3118 3115 for (i = 0; i < max_cmd; i++) { 3119 3116 cmd = instance->cmd_list[i]; 3120 3117 if (cmd->sync_cmd == 1 || cmd->scmd) { 3121 - printk(KERN_NOTICE "megasas: moving cmd[%d]:%p:%d:%p" 3118 + dev_notice(&instance->pdev->dev, "moving cmd[%d]:%p:%d:%p" 3122 3119 "on the defer queue as internal\n", 3123 3120 defer_index, cmd, cmd->sync_cmd, cmd->scmd); 3124 3121 3125 3122 if (!list_empty(&cmd->list)) { 3126 - printk(KERN_NOTICE "megaraid_sas: ERROR while" 3123 + dev_notice(&instance->pdev->dev, "ERROR while" 3127 3124 " moving this cmd:%p, %d %p, it was" 3128 3125 "discovered on some list?\n", 3129 3126 cmd, cmd->sync_cmd, cmd->scmd); ··· 3148 3145 unsigned long flags; 3149 3146 3150 3147 if (instance->adprecovery != MEGASAS_ADPRESET_SM_INFAULT) { 3151 - printk(KERN_NOTICE "megaraid_sas: error, recovery st %x \n", 3148 + dev_notice(&instance->pdev->dev, "error, recovery st %x\n", 3152 3149 instance->adprecovery); 3153 3150 return ; 3154 3151 } 3155 3152 3156 3153 if (instance->adprecovery == MEGASAS_ADPRESET_SM_INFAULT) { 3157 - printk(KERN_NOTICE "megaraid_sas: FW detected to be in fault" 3154 + dev_notice(&instance->pdev->dev, "FW detected to be in fault" 3158 3155 "state, restarting it...\n"); 3159 3156 3160 3157 instance->instancet->disable_intr(instance); ··· 3162 3159 3163 3160 atomic_set(&instance->fw_reset_no_pci_access, 1); 3164 3161 instance->instancet->adp_reset(instance, instance->reg_set); 3165 - atomic_set(&instance->fw_reset_no_pci_access, 0 ); 3162 + atomic_set(&instance->fw_reset_no_pci_access, 0); 3166 3163 3167 - printk(KERN_NOTICE "megaraid_sas: FW restarted successfully," 3164 + dev_notice(&instance->pdev->dev, "FW restarted successfully," 3168 3165 "initiating next stage...\n"); 3169 3166 3170 - printk(KERN_NOTICE "megaraid_sas: HBA recovery state machine," 3167 + dev_notice(&instance->pdev->dev, "HBA recovery state 
machine," 3171 3168 "state 2 starting...\n"); 3172 3169 3173 - /*waitting for about 20 second before start the second init*/ 3170 + /* waiting for about 20 second before start the second init */ 3174 3171 for (wait = 0; wait < 30; wait++) { 3175 3172 msleep(1000); 3176 3173 } 3177 3174 3178 3175 if (megasas_transition_to_ready(instance, 1)) { 3179 - printk(KERN_NOTICE "megaraid_sas:adapter not ready\n"); 3176 + dev_notice(&instance->pdev->dev, "adapter not ready\n"); 3180 3177 3181 3178 atomic_set(&instance->fw_reset_no_pci_access, 1); 3182 3179 megaraid_sas_kill_hba(instance); ··· 3203 3200 megasas_issue_pending_cmds_again(instance); 3204 3201 instance->issuepend_done = 1; 3205 3202 } 3206 - return ; 3207 3203 } 3208 3204 3209 3205 /** 3210 3206 * megasas_deplete_reply_queue - Processes all completed commands 3211 3207 * @instance: Adapter soft state 3212 3208 * @alt_status: Alternate status to be returned to 3213 - * SCSI mid-layer instead of the status 3214 - * returned by the FW 3209 + * SCSI mid-layer instead of the status 3210 + * returned by the FW 3215 3211 * Note: this must be called with hba lock held 3216 3212 */ 3217 3213 static int ··· 3240 3238 instance->reg_set) & MFI_STATE_MASK; 3241 3239 3242 3240 if (fw_state != MFI_STATE_FAULT) { 3243 - printk(KERN_NOTICE "megaraid_sas: fw state:%x\n", 3241 + dev_notice(&instance->pdev->dev, "fw state:%x\n", 3244 3242 fw_state); 3245 3243 } 3246 3244 3247 3245 if ((fw_state == MFI_STATE_FAULT) && 3248 3246 (instance->disableOnlineCtrlReset == 0)) { 3249 - printk(KERN_NOTICE "megaraid_sas: wait adp restart\n"); 3247 + dev_notice(&instance->pdev->dev, "wait adp restart\n"); 3250 3248 3251 3249 if ((instance->pdev->device == 3252 3250 PCI_DEVICE_ID_LSI_SAS1064R) || ··· 3267 3265 atomic_set(&instance->fw_outstanding, 0); 3268 3266 megasas_internal_reset_defer_cmds(instance); 3269 3267 3270 - printk(KERN_NOTICE "megasas: fwState=%x, stage:%d\n", 3268 + dev_notice(&instance->pdev->dev, "fwState=%x, stage:%d\n", 3271 
3269 fw_state, instance->adprecovery); 3272 3270 3273 3271 schedule_work(&instance->work_init); 3274 3272 return IRQ_HANDLED; 3275 3273 3276 3274 } else { 3277 - printk(KERN_NOTICE "megasas: fwstate:%x, dis_OCR=%x\n", 3275 + dev_notice(&instance->pdev->dev, "fwstate:%x, dis_OCR=%x\n", 3278 3276 fw_state, instance->disableOnlineCtrlReset); 3279 3277 } 3280 3278 } ··· 3290 3288 struct megasas_irq_context *irq_context = devp; 3291 3289 struct megasas_instance *instance = irq_context->instance; 3292 3290 unsigned long flags; 3293 - irqreturn_t rc; 3291 + irqreturn_t rc; 3294 3292 3295 3293 if (atomic_read(&instance->fw_reset_no_pci_access)) 3296 3294 return IRQ_HANDLED; 3297 3295 3298 3296 spin_lock_irqsave(&instance->hba_lock, flags); 3299 - rc = megasas_deplete_reply_queue(instance, DID_OK); 3297 + rc = megasas_deplete_reply_queue(instance, DID_OK); 3300 3298 spin_unlock_irqrestore(&instance->hba_lock, flags); 3301 3299 3302 3300 return rc; ··· 3324 3322 fw_state = abs_state & MFI_STATE_MASK; 3325 3323 3326 3324 if (fw_state != MFI_STATE_READY) 3327 - printk(KERN_INFO "megasas: Waiting for FW to come to ready" 3325 + dev_info(&instance->pdev->dev, "Waiting for FW to come to ready" 3328 3326 " state\n"); 3329 3327 3330 3328 while (fw_state != MFI_STATE_READY) { ··· 3332 3330 switch (fw_state) { 3333 3331 3334 3332 case MFI_STATE_FAULT: 3335 - printk(KERN_DEBUG "megasas: FW in FAULT state!!\n"); 3333 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "FW in FAULT state!!\n"); 3336 3334 if (ocr) { 3337 3335 max_wait = MEGASAS_RESET_WAIT_TIME; 3338 3336 cur_state = MFI_STATE_FAULT; ··· 3471 3469 break; 3472 3470 3473 3471 default: 3474 - printk(KERN_DEBUG "megasas: Unknown state 0x%x\n", 3472 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Unknown state 0x%x\n", 3475 3473 fw_state); 3476 3474 return -ENODEV; 3477 3475 } ··· 3493 3491 * Return error if fw_state hasn't changed after max_wait 3494 3492 */ 3495 3493 if (curr_abs_state == abs_state) { 3496 - printk(KERN_DEBUG "FW 
state [%d] hasn't changed " 3494 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "FW state [%d] hasn't changed " 3497 3495 "in %d secs\n", fw_state, max_wait); 3498 3496 return -ENODEV; 3499 3497 } ··· 3501 3499 abs_state = curr_abs_state; 3502 3500 fw_state = curr_abs_state & MFI_STATE_MASK; 3503 3501 } 3504 - printk(KERN_INFO "megasas: FW now in Ready state\n"); 3502 + dev_info(&instance->pdev->dev, "FW now in Ready state\n"); 3505 3503 3506 3504 return 0; 3507 3505 } ··· 3572 3570 sge_sz = (IS_DMA64) ? sizeof(struct megasas_sge64) : 3573 3571 sizeof(struct megasas_sge32); 3574 3572 3575 - if (instance->flag_ieee) { 3573 + if (instance->flag_ieee) 3576 3574 sge_sz = sizeof(struct megasas_sge_skinny); 3577 - } 3578 3575 3579 3576 /* 3580 3577 * For MFI controllers. ··· 3595 3594 instance->pdev, total_sz, 256, 0); 3596 3595 3597 3596 if (!instance->frame_dma_pool) { 3598 - printk(KERN_DEBUG "megasas: failed to setup frame pool\n"); 3597 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "failed to setup frame pool\n"); 3599 3598 return -ENOMEM; 3600 3599 } 3601 3600 ··· 3603 3602 instance->pdev, 128, 4, 0); 3604 3603 3605 3604 if (!instance->sense_dma_pool) { 3606 - printk(KERN_DEBUG "megasas: failed to setup sense pool\n"); 3605 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "failed to setup sense pool\n"); 3607 3606 3608 3607 pci_pool_destroy(instance->frame_dma_pool); 3609 3608 instance->frame_dma_pool = NULL; ··· 3631 3630 * whatever has been allocated 3632 3631 */ 3633 3632 if (!cmd->frame || !cmd->sense) { 3634 - printk(KERN_DEBUG "megasas: pci_pool_alloc failed \n"); 3633 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "pci_pool_alloc failed\n"); 3635 3634 megasas_teardown_frame_pool(instance); 3636 3635 return -ENOMEM; 3637 3636 } ··· 3657 3656 void megasas_free_cmds(struct megasas_instance *instance) 3658 3657 { 3659 3658 int i; 3659 + 3660 3660 /* First free the MFI frame pool */ 3661 3661 megasas_teardown_frame_pool(instance); 3662 3662 ··· 3710 3708 
instance->cmd_list = kcalloc(max_cmd, sizeof(struct megasas_cmd*), GFP_KERNEL); 3711 3709 3712 3710 if (!instance->cmd_list) { 3713 - printk(KERN_DEBUG "megasas: out of memory\n"); 3711 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "out of memory\n"); 3714 3712 return -ENOMEM; 3715 3713 } 3716 3714 ··· 3746 3744 * Create a frame pool and assign one frame to each cmd 3747 3745 */ 3748 3746 if (megasas_create_frame_pool(instance)) { 3749 - printk(KERN_DEBUG "megasas: Error creating frame DMA pool\n"); 3747 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Error creating frame DMA pool\n"); 3750 3748 megasas_free_cmds(instance); 3751 3749 } 3752 3750 ··· 3775 3773 cmd = megasas_get_cmd(instance); 3776 3774 3777 3775 if (!cmd) { 3778 - printk(KERN_DEBUG "megasas (get_pd_list): Failed to get cmd\n"); 3776 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "(get_pd_list): Failed to get cmd\n"); 3779 3777 return -ENOMEM; 3780 3778 } 3781 3779 ··· 3785 3783 MEGASAS_MAX_PD * sizeof(struct MR_PD_LIST), &ci_h); 3786 3784 3787 3785 if (!ci) { 3788 - printk(KERN_DEBUG "Failed to alloc mem for pd_list\n"); 3786 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to alloc mem for pd_list\n"); 3789 3787 megasas_return_cmd(instance, cmd); 3790 3788 return -ENOMEM; 3791 3789 } ··· 3813 3811 ret = megasas_issue_polled(instance, cmd); 3814 3812 3815 3813 /* 3816 - * the following function will get the instance PD LIST. 3817 - */ 3814 + * the following function will get the instance PD LIST. 
3815 + */ 3818 3816 3819 3817 pd_addr = ci->addr; 3820 3818 3821 - if ( ret == 0 && 3819 + if (ret == 0 && 3822 3820 (le32_to_cpu(ci->count) < 3823 3821 (MEGASAS_MAX_PD_CHANNELS * MEGASAS_MAX_DEV_PER_CHANNEL))) { 3824 3822 ··· 3870 3868 cmd = megasas_get_cmd(instance); 3871 3869 3872 3870 if (!cmd) { 3873 - printk(KERN_DEBUG "megasas_get_ld_list: Failed to get cmd\n"); 3871 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "megasas_get_ld_list: Failed to get cmd\n"); 3874 3872 return -ENOMEM; 3875 3873 } 3876 3874 ··· 3881 3879 &ci_h); 3882 3880 3883 3881 if (!ci) { 3884 - printk(KERN_DEBUG "Failed to alloc mem in get_ld_list\n"); 3882 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to alloc mem in get_ld_list\n"); 3885 3883 megasas_return_cmd(instance, cmd); 3886 3884 return -ENOMEM; 3887 3885 } ··· 3956 3954 cmd = megasas_get_cmd(instance); 3957 3955 3958 3956 if (!cmd) { 3959 - printk(KERN_WARNING 3960 - "megasas:(megasas_ld_list_query): Failed to get cmd\n"); 3957 + dev_warn(&instance->pdev->dev, 3958 + "megasas_ld_list_query: Failed to get cmd\n"); 3961 3959 return -ENOMEM; 3962 3960 } 3963 3961 ··· 3967 3965 sizeof(struct MR_LD_TARGETID_LIST), &ci_h); 3968 3966 3969 3967 if (!ci) { 3970 - printk(KERN_WARNING 3971 - "megasas: Failed to alloc mem for ld_list_query\n"); 3968 + dev_warn(&instance->pdev->dev, 3969 + "Failed to alloc mem for ld_list_query\n"); 3972 3970 megasas_return_cmd(instance, cmd); 3973 3971 return -ENOMEM; 3974 3972 } ··· 4054 4052 instance->supportmax256vd ? 
"Extended VD(240 VD)firmware" : 4055 4053 "Legacy(64 VD) firmware"); 4056 4054 4057 - old_map_sz = sizeof(struct MR_FW_RAID_MAP) + 4055 + old_map_sz = sizeof(struct MR_FW_RAID_MAP) + 4058 4056 (sizeof(struct MR_LD_SPAN_MAP) * 4059 4057 (instance->fw_supported_vd_count - 1)); 4060 - new_map_sz = sizeof(struct MR_FW_RAID_MAP_EXT); 4061 - fusion->drv_map_sz = sizeof(struct MR_DRV_RAID_MAP) + 4058 + new_map_sz = sizeof(struct MR_FW_RAID_MAP_EXT); 4059 + fusion->drv_map_sz = sizeof(struct MR_DRV_RAID_MAP) + 4062 4060 (sizeof(struct MR_LD_SPAN_MAP) * 4063 4061 (instance->drv_supported_vd_count - 1)); 4064 4062 ··· 4069 4067 fusion->current_map_sz = new_map_sz; 4070 4068 else 4071 4069 fusion->current_map_sz = old_map_sz; 4072 - 4073 4070 } 4074 4071 4075 4072 /** ··· 4094 4093 cmd = megasas_get_cmd(instance); 4095 4094 4096 4095 if (!cmd) { 4097 - printk(KERN_DEBUG "megasas: Failed to get a free cmd\n"); 4096 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to get a free cmd\n"); 4098 4097 return -ENOMEM; 4099 4098 } 4100 4099 ··· 4104 4103 sizeof(struct megasas_ctrl_info), &ci_h); 4105 4104 4106 4105 if (!ci) { 4107 - printk(KERN_DEBUG "Failed to alloc mem for ctrl info\n"); 4106 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to alloc mem for ctrl info\n"); 4108 4107 megasas_return_cmd(instance, cmd); 4109 4108 return -ENOMEM; 4110 4109 } ··· 4215 4214 megasas_issue_init_mfi(struct megasas_instance *instance) 4216 4215 { 4217 4216 __le32 context; 4218 - 4219 4217 struct megasas_cmd *cmd; 4220 - 4221 4218 struct megasas_init_frame *init_frame; 4222 4219 struct megasas_init_queue_info *initq_info; 4223 4220 dma_addr_t init_frame_h; ··· 4268 4269 */ 4269 4270 4270 4271 if (megasas_issue_polled(instance, cmd)) { 4271 - printk(KERN_ERR "megasas: Failed to init firmware\n"); 4272 + dev_err(&instance->pdev->dev, "Failed to init firmware\n"); 4272 4273 megasas_return_cmd(instance, cmd); 4273 4274 goto fail_fw_init; 4274 4275 } ··· 4341 4342 
&instance->reply_queue_h); 4342 4343 4343 4344 if (!instance->reply_queue) { 4344 - printk(KERN_DEBUG "megasas: Out of DMA mem for reply queue\n"); 4345 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Out of DMA mem for reply queue\n"); 4345 4346 goto fail_reply_queue; 4346 4347 } 4347 4348 ··· 4360 4361 (instance->instancet->read_fw_status_reg(reg_set) & 4361 4362 0x04000000); 4362 4363 4363 - printk(KERN_NOTICE "megasas_init_mfi: fw_support_ieee=%d", 4364 + dev_notice(&instance->pdev->dev, "megasas_init_mfi: fw_support_ieee=%d", 4364 4365 instance->fw_support_ieee); 4365 4366 4366 4367 if (instance->fw_support_ieee) ··· 4504 4505 instance->bar = find_first_bit(&bar_list, sizeof(unsigned long)); 4505 4506 if (pci_request_selected_regions(instance->pdev, instance->bar, 4506 4507 "megasas: LSI")) { 4507 - printk(KERN_DEBUG "megasas: IO memory region busy!\n"); 4508 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "IO memory region busy!\n"); 4508 4509 return -EBUSY; 4509 4510 } 4510 4511 ··· 4512 4513 instance->reg_set = ioremap_nocache(base_addr, 8192); 4513 4514 4514 4515 if (!instance->reg_set) { 4515 - printk(KERN_DEBUG "megasas: Failed to map IO mem\n"); 4516 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to map IO mem\n"); 4516 4517 goto fail_ioremap; 4517 4518 } 4518 4519 ··· 4550 4551 (instance, instance->reg_set); 4551 4552 atomic_set(&instance->fw_reset_no_pci_access, 0); 4552 4553 dev_info(&instance->pdev->dev, 4553 - "megasas: FW restarted successfully from %s!\n", 4554 + "FW restarted successfully from %s!\n", 4554 4555 __func__); 4555 4556 4556 4557 /*waitting for about 30 second before retry*/ ··· 4651 4652 4652 4653 instance->instancet->enable_intr(instance); 4653 4654 4654 - printk(KERN_ERR "megasas: INIT adapter done\n"); 4655 + dev_err(&instance->pdev->dev, "INIT adapter done\n"); 4655 4656 4656 4657 /** for passthrough 4657 - * the following function will get the PD LIST. 
4658 - */ 4659 - 4660 - memset(instance->pd_list, 0 , 4658 + * the following function will get the PD LIST. 4659 + */ 4660 + memset(instance->pd_list, 0, 4661 4661 (MEGASAS_MAX_PD * sizeof(struct megasas_pd_list))); 4662 4662 if (megasas_get_pd_list(instance) < 0) { 4663 - printk(KERN_ERR "megasas: failed to get PD list\n"); 4663 + dev_err(&instance->pdev->dev, "failed to get PD list\n"); 4664 4664 goto fail_get_pd_list; 4665 4665 } 4666 4666 ··· 4684 4686 le16_to_cpu(ctrl_info->max_strips_per_io); 4685 4687 max_sectors_2 = le32_to_cpu(ctrl_info->max_request_size); 4686 4688 4687 - tmp_sectors = min_t(u32, max_sectors_1 , max_sectors_2); 4689 + tmp_sectors = min_t(u32, max_sectors_1, max_sectors_2); 4688 4690 4689 4691 instance->disableOnlineCtrlReset = 4690 4692 ctrl_info->properties.OnOffProperties.disableOnlineCtrlReset; ··· 4958 4960 aen_cmd, 30); 4959 4961 4960 4962 if (ret_val) { 4961 - printk(KERN_DEBUG "megasas: Failed to abort " 4963 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to abort " 4962 4964 "previous AEN command\n"); 4963 4965 return ret_val; 4964 4966 } ··· 5049 5051 static int megasas_io_attach(struct megasas_instance *instance) 5050 5052 { 5051 5053 struct Scsi_Host *host = instance->host; 5052 - u32 error; 5054 + u32 error; 5053 5055 5054 5056 /* 5055 5057 * Export parameters required by SCSI mid-layer ··· 5077 5079 (max_sectors <= MEGASAS_MAX_SECTORS)) { 5078 5080 instance->max_sectors_per_req = max_sectors; 5079 5081 } else { 5080 - printk(KERN_INFO "megasas: max_sectors should be > 0" 5082 + dev_info(&instance->pdev->dev, "max_sectors should be > 0" 5081 5083 "and <= %d (or < 1MB for GEN2 controller)\n", 5082 5084 instance->max_sectors_per_req); 5083 5085 } ··· 5124 5126 megasas_set_dma_mask(struct pci_dev *pdev) 5125 5127 { 5126 5128 /* 5127 - * All our contollers are capable of performing 64-bit DMA 5129 + * All our controllers are capable of performing 64-bit DMA 5128 5130 */ 5129 5131 if (IS_DMA64) { 5130 5132 if 
(pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) != 0) { ··· 5204 5206 sizeof(struct megasas_instance)); 5205 5207 5206 5208 if (!host) { 5207 - printk(KERN_DEBUG "megasas: scsi_host_alloc failed\n"); 5209 + dev_printk(KERN_DEBUG, &pdev->dev, "scsi_host_alloc failed\n"); 5208 5210 goto fail_alloc_instance; 5209 5211 } 5210 5212 5211 5213 instance = (struct megasas_instance *)host->hostdata; 5212 5214 memset(instance, 0, sizeof(*instance)); 5213 - atomic_set( &instance->fw_reset_no_pci_access, 0 ); 5215 + atomic_set(&instance->fw_reset_no_pci_access, 0); 5214 5216 instance->pdev = pdev; 5215 5217 5216 5218 switch (instance->pdev->device) { ··· 5224 5226 instance->ctrl_context = (void *)__get_free_pages(GFP_KERNEL, 5225 5227 instance->ctrl_context_pages); 5226 5228 if (!instance->ctrl_context) { 5227 - printk(KERN_DEBUG "megasas: Failed to allocate " 5229 + dev_printk(KERN_DEBUG, &pdev->dev, "Failed to allocate " 5228 5230 "memory for Fusion context info\n"); 5229 5231 goto fail_alloc_dma_buf; 5230 5232 } ··· 5243 5245 &instance->consumer_h); 5244 5246 5245 5247 if (!instance->producer || !instance->consumer) { 5246 - printk(KERN_DEBUG "megasas: Failed to allocate" 5248 + dev_printk(KERN_DEBUG, &pdev->dev, "Failed to allocate" 5247 5249 "memory for producer, consumer\n"); 5248 5250 goto fail_alloc_dma_buf; 5249 5251 } ··· 5274 5276 CRASH_DMA_BUF_SIZE, 5275 5277 &instance->crash_dump_h); 5276 5278 if (!instance->crash_dump_buf) 5277 - dev_err(&instance->pdev->dev, "Can't allocate Firmware " 5279 + dev_err(&pdev->dev, "Can't allocate Firmware " 5278 5280 "crash dump DMA buffer\n"); 5279 5281 5280 5282 megasas_poll_wait_aen = 0; ··· 5290 5292 &instance->evt_detail_h); 5291 5293 5292 5294 if (!instance->evt_detail) { 5293 - printk(KERN_DEBUG "megasas: Failed to allocate memory for " 5295 + dev_printk(KERN_DEBUG, &pdev->dev, "Failed to allocate memory for " 5294 5296 "event detail structure\n"); 5295 5297 goto fail_alloc_dma_buf; 5296 5298 } ··· 5354 5356 
pci_alloc_consistent(pdev, sizeof(struct MR_LD_VF_AFFILIATION_111), 5355 5357 &instance->vf_affiliation_111_h); 5356 5358 if (!instance->vf_affiliation_111) 5357 - printk(KERN_WARNING "megasas: Can't allocate " 5359 + dev_warn(&pdev->dev, "Can't allocate " 5358 5360 "memory for VF affiliation buffer\n"); 5359 5361 } else { 5360 5362 instance->vf_affiliation = ··· 5363 5365 sizeof(struct MR_LD_VF_AFFILIATION), 5364 5366 &instance->vf_affiliation_h); 5365 5367 if (!instance->vf_affiliation) 5366 - printk(KERN_WARNING "megasas: Can't allocate " 5368 + dev_warn(&pdev->dev, "Can't allocate " 5367 5369 "memory for VF affiliation buffer\n"); 5368 5370 } 5369 5371 } ··· 5397 5399 * Initiate AEN (Asynchronous Event Notification) 5398 5400 */ 5399 5401 if (megasas_start_aen(instance)) { 5400 - printk(KERN_DEBUG "megasas: start aen failed\n"); 5402 + dev_printk(KERN_DEBUG, &pdev->dev, "start aen failed\n"); 5401 5403 goto fail_start_aen; 5402 5404 } 5403 5405 ··· 5407 5409 5408 5410 return 0; 5409 5411 5410 - fail_start_aen: 5411 - fail_io_attach: 5412 + fail_start_aen: 5413 + fail_io_attach: 5412 5414 megasas_mgmt_info.count--; 5413 5415 megasas_mgmt_info.instance[megasas_mgmt_info.max_index] = NULL; 5414 5416 megasas_mgmt_info.max_index--; ··· 5426 5428 if (instance->msix_vectors) 5427 5429 pci_disable_msix(instance->pdev); 5428 5430 fail_init_mfi: 5429 - fail_alloc_dma_buf: 5431 + fail_alloc_dma_buf: 5430 5432 if (instance->evt_detail) 5431 5433 pci_free_consistent(pdev, sizeof(struct megasas_evt_detail), 5432 5434 instance->evt_detail, ··· 5440 5442 instance->consumer_h); 5441 5443 scsi_host_put(host); 5442 5444 5443 - fail_alloc_instance: 5444 - fail_set_dma_mask: 5445 + fail_alloc_instance: 5446 + fail_set_dma_mask: 5445 5447 pci_disable_device(pdev); 5446 5448 5447 5449 return -ENODEV; ··· 5483 5485 " from %s\n", __func__); 5484 5486 5485 5487 megasas_return_cmd(instance, cmd); 5486 - 5487 - return; 5488 5488 } 5489 5489 5490 5490 /** ··· 5528 5532 "from %s\n", 
__func__); 5529 5533 5530 5534 megasas_return_cmd(instance, cmd); 5531 - 5532 - return; 5533 5535 } 5534 5536 5535 5537 #ifdef CONFIG_PM ··· 5601 5607 rval = pci_enable_device_mem(pdev); 5602 5608 5603 5609 if (rval) { 5604 - printk(KERN_ERR "megasas: Enable device failed\n"); 5610 + dev_err(&pdev->dev, "Enable device failed\n"); 5605 5611 return rval; 5606 5612 } 5607 5613 ··· 5680 5686 * Initiate AEN (Asynchronous Event Notification) 5681 5687 */ 5682 5688 if (megasas_start_aen(instance)) 5683 - printk(KERN_ERR "megasas: Start AEN failed\n"); 5689 + dev_err(&instance->pdev->dev, "Start AEN failed\n"); 5684 5690 5685 5691 return 0; 5686 5692 ··· 5833 5839 scsi_host_put(host); 5834 5840 5835 5841 pci_disable_device(pdev); 5836 - 5837 - return; 5838 5842 } 5839 5843 5840 5844 /** ··· 5901 5909 { 5902 5910 unsigned int mask; 5903 5911 unsigned long flags; 5912 + 5904 5913 poll_wait(file, &megasas_poll_wait, wait); 5905 5914 spin_lock_irqsave(&poll_aen_lock, flags); 5906 5915 if (megasas_poll_wait_aen) 5907 - mask = (POLLIN | POLLRDNORM); 5908 - 5916 + mask = (POLLIN | POLLRDNORM); 5909 5917 else 5910 5918 mask = 0; 5911 5919 megasas_poll_wait_aen = 0; ··· 5919 5927 * @cmd: MFI command frame 5920 5928 */ 5921 5929 5922 - static int megasas_set_crash_dump_params_ioctl( 5923 - struct megasas_cmd *cmd) 5930 + static int megasas_set_crash_dump_params_ioctl(struct megasas_cmd *cmd) 5924 5931 { 5925 5932 struct megasas_instance *local_instance; 5926 5933 int i, error = 0; ··· 5973 5982 memset(kbuff_arr, 0, sizeof(kbuff_arr)); 5974 5983 5975 5984 if (ioc->sge_count > MAX_IOCTL_SGE) { 5976 - printk(KERN_DEBUG "megasas: SGE count [%d] > max limit [%d]\n", 5985 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "SGE count [%d] > max limit [%d]\n", 5977 5986 ioc->sge_count, MAX_IOCTL_SGE); 5978 5987 return -EINVAL; 5979 5988 } 5980 5989 5981 5990 cmd = megasas_get_cmd(instance); 5982 5991 if (!cmd) { 5983 - printk(KERN_DEBUG "megasas: Failed to get a cmd packet\n"); 5992 + 
dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to get a cmd packet\n"); 5984 5993 return -ENOMEM; 5985 5994 } 5986 5995 ··· 6025 6034 ioc->sgl[i].iov_len, 6026 6035 &buf_handle, GFP_KERNEL); 6027 6036 if (!kbuff_arr[i]) { 6028 - printk(KERN_DEBUG "megasas: Failed to alloc " 6029 - "kernel SGL buffer for IOCTL \n"); 6037 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to alloc " 6038 + "kernel SGL buffer for IOCTL\n"); 6030 6039 error = -ENOMEM; 6031 6040 goto out; 6032 6041 } ··· 6099 6108 6100 6109 if (copy_to_user((void __user *)((unsigned long)(*sense_ptr)), 6101 6110 sense, ioc->sense_len)) { 6102 - printk(KERN_ERR "megasas: Failed to copy out to user " 6111 + dev_err(&instance->pdev->dev, "Failed to copy out to user " 6103 6112 "sense data\n"); 6104 6113 error = -EFAULT; 6105 6114 goto out; ··· 6111 6120 */ 6112 6121 if (copy_to_user(&user_ioc->frame.hdr.cmd_status, 6113 6122 &cmd->frame->hdr.cmd_status, sizeof(u8))) { 6114 - printk(KERN_DEBUG "megasas: Error copying out cmd_status\n"); 6123 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Error copying out cmd_status\n"); 6115 6124 error = -EFAULT; 6116 6125 } 6117 6126 6118 - out: 6127 + out: 6119 6128 if (sense) { 6120 6129 dma_free_coherent(&instance->pdev->dev, ioc->sense_len, 6121 6130 sense, sense_handle); ··· 6171 6180 } 6172 6181 6173 6182 if (instance->adprecovery == MEGASAS_HW_CRITICAL_ERROR) { 6174 - printk(KERN_ERR "Controller in crit error\n"); 6183 + dev_err(&instance->pdev->dev, "Controller in crit error\n"); 6175 6184 error = -ENODEV; 6176 6185 goto out_kfree_ioc; 6177 6186 } ··· 6196 6205 spin_unlock_irqrestore(&instance->hba_lock, flags); 6197 6206 6198 6207 if (!(i % MEGASAS_RESET_NOTICE_INTERVAL)) { 6199 - printk(KERN_NOTICE "megasas: waiting" 6208 + dev_notice(&instance->pdev->dev, "waiting" 6200 6209 "for controller reset to finish\n"); 6201 6210 } 6202 6211 ··· 6207 6216 if (instance->adprecovery != MEGASAS_HBA_OPERATIONAL) { 6208 6217 
spin_unlock_irqrestore(&instance->hba_lock, flags); 6209 6218 6210 - printk(KERN_ERR "megaraid_sas: timed out while" 6219 + dev_err(&instance->pdev->dev, "timed out while" 6211 6220 "waiting for HBA to recover\n"); 6212 6221 error = -ENODEV; 6213 6222 goto out_up; ··· 6215 6224 spin_unlock_irqrestore(&instance->hba_lock, flags); 6216 6225 6217 6226 error = megasas_mgmt_fw_ioctl(instance, user_ioc, ioc); 6218 - out_up: 6227 + out_up: 6219 6228 up(&instance->ioctl_sem); 6220 6229 6221 - out_kfree_ioc: 6230 + out_kfree_ioc: 6222 6231 kfree(ioc); 6223 6232 return error; 6224 6233 } ··· 6266 6275 spin_unlock_irqrestore(&instance->hba_lock, flags); 6267 6276 6268 6277 if (!(i % MEGASAS_RESET_NOTICE_INTERVAL)) { 6269 - printk(KERN_NOTICE "megasas: waiting for" 6278 + dev_notice(&instance->pdev->dev, "waiting for" 6270 6279 "controller reset to finish\n"); 6271 6280 } 6272 6281 ··· 6276 6285 spin_lock_irqsave(&instance->hba_lock, flags); 6277 6286 if (instance->adprecovery != MEGASAS_HBA_OPERATIONAL) { 6278 6287 spin_unlock_irqrestore(&instance->hba_lock, flags); 6279 - printk(KERN_ERR "megaraid_sas: timed out while waiting" 6280 - "for HBA to recover.\n"); 6288 + dev_err(&instance->pdev->dev, "timed out while waiting" 6289 + "for HBA to recover\n"); 6281 6290 return -ENODEV; 6282 6291 } 6283 6292 spin_unlock_irqrestore(&instance->hba_lock, flags); ··· 6453 6462 megasas_sysfs_set_dbg_lvl(struct device_driver *dd, const char *buf, size_t count) 6454 6463 { 6455 6464 int retval = count; 6456 - if(sscanf(buf,"%u",&megasas_dbg_lvl)<1){ 6465 + 6466 + if (sscanf(buf, "%u", &megasas_dbg_lvl) < 1) { 6457 6467 printk(KERN_ERR "megasas: could not set dbg_lvl\n"); 6458 6468 retval = -EINVAL; 6459 6469 } ··· 6494 6502 if (instance->adprecovery == MEGASAS_HBA_OPERATIONAL) 6495 6503 break; 6496 6504 if (!(i % MEGASAS_RESET_NOTICE_INTERVAL)) { 6497 - printk(KERN_NOTICE "megasas: %s waiting for " 6505 + dev_notice(&instance->pdev->dev, "%s waiting for " 6498 6506 "controller reset to 
finish for scsi%d\n", 6499 6507 __func__, instance->host->host_no); 6500 6508 } ··· 6516 6524 pd_index = 6517 6525 (i * MEGASAS_MAX_DEV_PER_CHANNEL) + j; 6518 6526 6519 - sdev1 = 6520 - scsi_device_lookup(host, i, j, 0); 6527 + sdev1 = scsi_device_lookup(host, i, j, 0); 6521 6528 6522 6529 if (instance->pd_list[pd_index].driveState 6523 6530 == MR_PD_STATE_SYSTEM) { 6524 - if (!sdev1) { 6531 + if (!sdev1) 6525 6532 scsi_add_device(host, i, j, 0); 6526 - } 6527 6533 6528 6534 if (sdev1) 6529 6535 scsi_device_put(sdev1); ··· 6542 6552 pd_index = 6543 6553 (i * MEGASAS_MAX_DEV_PER_CHANNEL) + j; 6544 6554 6545 - sdev1 = 6546 - scsi_device_lookup(host, i, j, 0); 6555 + sdev1 = scsi_device_lookup(host, i, j, 0); 6547 6556 6548 6557 if (instance->pd_list[pd_index].driveState 6549 6558 == MR_PD_STATE_SYSTEM) { 6550 - if (sdev1) { 6559 + if (sdev1) 6551 6560 scsi_device_put(sdev1); 6552 - } 6553 6561 } else { 6554 6562 if (sdev1) { 6555 6563 scsi_remove_device(sdev1); ··· 6632 6644 break; 6633 6645 } 6634 6646 } else { 6635 - printk(KERN_ERR "invalid evt_detail!\n"); 6647 + dev_err(&instance->pdev->dev, "invalid evt_detail!\n"); 6636 6648 kfree(ev); 6637 6649 return; 6638 6650 } 6639 6651 6640 6652 if (doscan) { 6641 - printk(KERN_INFO "megaraid_sas: scanning for scsi%d...\n", 6653 + dev_info(&instance->pdev->dev, "scanning for scsi%d...\n", 6642 6654 instance->host->host_no); 6643 6655 if (megasas_get_pd_list(instance) == 0) { 6644 6656 for (i = 0; i < MEGASAS_MAX_PD_CHANNELS; i++) { ··· 6693 6705 } 6694 6706 } 6695 6707 6696 - if ( instance->aen_cmd != NULL ) { 6708 + if (instance->aen_cmd != NULL) { 6697 6709 kfree(ev); 6698 6710 return ; 6699 6711 } ··· 6710 6722 mutex_unlock(&instance->aen_mutex); 6711 6723 6712 6724 if (error) 6713 - printk(KERN_ERR "register aen failed error %x\n", error); 6725 + dev_err(&instance->pdev->dev, "register aen failed error %x\n", error); 6714 6726 6715 6727 kfree(ev); 6716 6728 }
+46 -49
drivers/scsi/megaraid/megaraid_sas_fusion.c
··· 221 221 struct megasas_cmd_fusion *cmd; 222 222 223 223 if (!fusion->sg_dma_pool || !fusion->sense_dma_pool) { 224 - printk(KERN_ERR "megasas: dma pool is null. SG Pool %p, " 224 + dev_err(&instance->pdev->dev, "dma pool is null. SG Pool %p, " 225 225 "sense pool : %p\n", fusion->sg_dma_pool, 226 226 fusion->sense_dma_pool); 227 227 return; ··· 332 332 total_sz_chain_frame, 4, 333 333 0); 334 334 if (!fusion->sg_dma_pool) { 335 - printk(KERN_DEBUG "megasas: failed to setup request pool " 336 - "fusion\n"); 335 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "failed to setup request pool fusion\n"); 337 336 return -ENOMEM; 338 337 } 339 338 fusion->sense_dma_pool = pci_pool_create("megasas sense pool fusion", ··· 340 341 SCSI_SENSE_BUFFERSIZE, 64, 0); 341 342 342 343 if (!fusion->sense_dma_pool) { 343 - printk(KERN_DEBUG "megasas: failed to setup sense pool " 344 - "fusion\n"); 344 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "failed to setup sense pool fusion\n"); 345 345 pci_pool_destroy(fusion->sg_dma_pool); 346 346 fusion->sg_dma_pool = NULL; 347 347 return -ENOMEM; ··· 364 366 * whatever has been allocated 365 367 */ 366 368 if (!cmd->sg_frame || !cmd->sense) { 367 - printk(KERN_DEBUG "megasas: pci_pool_alloc failed\n"); 369 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "pci_pool_alloc failed\n"); 368 370 megasas_teardown_frame_pool_fusion(instance); 369 371 return -ENOMEM; 370 372 } ··· 410 412 &fusion->req_frames_desc_phys, GFP_KERNEL); 411 413 412 414 if (!fusion->req_frames_desc) { 413 - printk(KERN_ERR "megasas; Could not allocate memory for " 415 + dev_err(&instance->pdev->dev, "Could not allocate memory for " 414 416 "request_frames\n"); 415 417 goto fail_req_desc; 416 418 } ··· 421 423 fusion->reply_alloc_sz * count, 16, 0); 422 424 423 425 if (!fusion->reply_frames_desc_pool) { 424 - printk(KERN_ERR "megasas; Could not allocate memory for " 426 + dev_err(&instance->pdev->dev, "Could not allocate memory for " 425 427 "reply_frame pool\n"); 426 428 
goto fail_reply_desc; 427 429 } ··· 430 432 pci_pool_alloc(fusion->reply_frames_desc_pool, GFP_KERNEL, 431 433 &fusion->reply_frames_desc_phys); 432 434 if (!fusion->reply_frames_desc) { 433 - printk(KERN_ERR "megasas; Could not allocate memory for " 435 + dev_err(&instance->pdev->dev, "Could not allocate memory for " 434 436 "reply_frame pool\n"); 435 437 pci_pool_destroy(fusion->reply_frames_desc_pool); 436 438 goto fail_reply_desc; ··· 447 449 fusion->io_frames_alloc_sz, 16, 0); 448 450 449 451 if (!fusion->io_request_frames_pool) { 450 - printk(KERN_ERR "megasas: Could not allocate memory for " 452 + dev_err(&instance->pdev->dev, "Could not allocate memory for " 451 453 "io_request_frame pool\n"); 452 454 goto fail_io_frames; 453 455 } ··· 456 458 pci_pool_alloc(fusion->io_request_frames_pool, GFP_KERNEL, 457 459 &fusion->io_request_frames_phys); 458 460 if (!fusion->io_request_frames) { 459 - printk(KERN_ERR "megasas: Could not allocate memory for " 461 + dev_err(&instance->pdev->dev, "Could not allocate memory for " 460 462 "io_request_frames frames\n"); 461 463 pci_pool_destroy(fusion->io_request_frames_pool); 462 464 goto fail_io_frames; ··· 471 473 * max_cmd, GFP_KERNEL); 472 474 473 475 if (!fusion->cmd_list) { 474 - printk(KERN_DEBUG "megasas: out of memory. Could not alloc " 476 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "out of memory. 
Could not alloc " 475 477 "memory for cmd_list_fusion\n"); 476 478 goto fail_cmd_list; 477 479 } ··· 481 483 fusion->cmd_list[i] = kmalloc(sizeof(struct megasas_cmd_fusion), 482 484 GFP_KERNEL); 483 485 if (!fusion->cmd_list[i]) { 484 - printk(KERN_ERR "Could not alloc cmd list fusion\n"); 486 + dev_err(&instance->pdev->dev, "Could not alloc cmd list fusion\n"); 485 487 486 488 for (j = 0; j < i; j++) 487 489 kfree(fusion->cmd_list[j]); ··· 525 527 * Create a frame pool and assign one frame to each cmd 526 528 */ 527 529 if (megasas_create_frame_pool_fusion(instance)) { 528 - printk(KERN_DEBUG "megasas: Error creating frame DMA pool\n"); 530 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Error creating frame DMA pool\n"); 529 531 megasas_free_cmds_fusion(instance); 530 532 goto fail_req_desc; 531 533 } ··· 611 613 cmd = megasas_get_cmd(instance); 612 614 613 615 if (!cmd) { 614 - printk(KERN_ERR "Could not allocate cmd for INIT Frame\n"); 616 + dev_err(&instance->pdev->dev, "Could not allocate cmd for INIT Frame\n"); 615 617 ret = 1; 616 618 goto fail_get_cmd; 617 619 } ··· 622 624 &ioc_init_handle, GFP_KERNEL); 623 625 624 626 if (!IOCInitMessage) { 625 - printk(KERN_ERR "Could not allocate memory for " 627 + dev_err(&instance->pdev->dev, "Could not allocate memory for " 626 628 "IOCInitMessage\n"); 627 629 ret = 1; 628 630 goto fail_fw_init; ··· 712 714 ret = 1; 713 715 goto fail_fw_init; 714 716 } 715 - printk(KERN_ERR "megasas:IOC Init cmd success\n"); 717 + dev_err(&instance->pdev->dev, "Init cmd success\n"); 716 718 717 719 ret = 0; 718 720 ··· 755 757 cmd = megasas_get_cmd(instance); 756 758 757 759 if (!cmd) { 758 - printk(KERN_DEBUG "megasas: Failed to get cmd for map info.\n"); 760 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to get cmd for map info\n"); 759 761 return -ENOMEM; 760 762 } 761 763 ··· 774 776 ci_h = fusion->ld_map_phys[(instance->map_id & 1)]; 775 777 776 778 if (!ci) { 777 - printk(KERN_DEBUG "Failed to alloc mem for 
ld_map_info\n"); 779 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to alloc mem for ld_map_info\n"); 778 780 megasas_return_cmd(instance, cmd); 779 781 return -ENOMEM; 780 782 } ··· 849 851 cmd = megasas_get_cmd(instance); 850 852 851 853 if (!cmd) { 852 - printk(KERN_DEBUG "megasas: Failed to get cmd for sync" 853 - "info.\n"); 854 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to get cmd for sync info\n"); 854 855 return -ENOMEM; 855 856 } 856 857 ··· 1094 1097 &fusion->ld_map_phys[i], 1095 1098 GFP_KERNEL); 1096 1099 if (!fusion->ld_map[i]) { 1097 - printk(KERN_ERR "megasas: Could not allocate memory " 1100 + dev_err(&instance->pdev->dev, "Could not allocate memory " 1098 1101 "for map info\n"); 1099 1102 goto fail_map_info; 1100 1103 } ··· 1159 1162 cmd->scmd->result = DID_IMM_RETRY << 16; 1160 1163 break; 1161 1164 default: 1162 - printk(KERN_DEBUG "megasas: FW status %#x\n", status); 1165 + dev_printk(KERN_DEBUG, &cmd->instance->pdev->dev, "FW status %#x\n", status); 1163 1166 cmd->scmd->result = DID_ERROR << 16; 1164 1167 break; 1165 1168 } ··· 1848 1851 &io_request->SGL, cmd); 1849 1852 1850 1853 if (sge_count > instance->max_num_sge) { 1851 - printk(KERN_ERR "megasas: Error. sge_count (0x%x) exceeds " 1854 + dev_err(&instance->pdev->dev, "Error. 
sge_count (0x%x) exceeds " 1852 1855 "max (0x%x) allowed\n", sge_count, 1853 1856 instance->max_num_sge); 1854 1857 return 1; ··· 1882 1885 struct fusion_context *fusion; 1883 1886 1884 1887 if (index >= instance->max_fw_cmds) { 1885 - printk(KERN_ERR "megasas: Invalid SMID (0x%x)request for " 1888 + dev_err(&instance->pdev->dev, "Invalid SMID (0x%x)request for " 1886 1889 "descriptor for scsi%d\n", index, 1887 1890 instance->host->host_no); 1888 1891 return NULL; ··· 1924 1927 1925 1928 if (megasas_build_io_fusion(instance, scmd, cmd)) { 1926 1929 megasas_return_cmd_fusion(instance, cmd); 1927 - printk(KERN_ERR "megasas: Error building command.\n"); 1930 + dev_err(&instance->pdev->dev, "Error building command\n"); 1928 1931 cmd->request_desc = NULL; 1929 1932 return 1; 1930 1933 } ··· 1934 1937 1935 1938 if (cmd->io_request->ChainOffset != 0 && 1936 1939 cmd->io_request->ChainOffset != 0xF) 1937 - printk(KERN_ERR "megasas: The chain offset value is not " 1940 + dev_err(&instance->pdev->dev, "The chain offset value is not " 1938 1941 "correct : %x\n", cmd->io_request->ChainOffset); 1939 1942 1940 1943 /* ··· 2022 2025 if (reply_descript_type == 2023 2026 MPI2_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS) { 2024 2027 if (megasas_dbg_lvl == 5) 2025 - printk(KERN_ERR "\nmegasas: FAST Path " 2028 + dev_err(&instance->pdev->dev, "\nFAST Path " 2026 2029 "IO Success\n"); 2027 2030 } 2028 2031 /* Fall thru and complete IO */ ··· 2183 2186 else if (fw_state == MFI_STATE_FAULT) 2184 2187 schedule_work(&instance->work_init); 2185 2188 } else if (fw_state == MFI_STATE_FAULT) { 2186 - printk(KERN_WARNING "megaraid_sas: Iop2SysDoorbellInt" 2189 + dev_warn(&instance->pdev->dev, "Iop2SysDoorbellInt" 2187 2190 "for scsi%d\n", instance->host->host_no); 2188 2191 schedule_work(&instance->work_init); 2189 2192 } ··· 2266 2269 u16 index; 2267 2270 2268 2271 if (build_mpt_mfi_pass_thru(instance, cmd)) { 2269 - printk(KERN_ERR "Couldn't build MFI pass thru cmd\n"); 2272 + 
dev_err(&instance->pdev->dev, "Couldn't build MFI pass thru cmd\n"); 2270 2273 return NULL; 2271 2274 } 2272 2275 ··· 2300 2303 2301 2304 req_desc = build_mpt_cmd(instance, cmd); 2302 2305 if (!req_desc) { 2303 - printk(KERN_ERR "Couldn't issue MFI pass thru cmd\n"); 2306 + dev_err(&instance->pdev->dev, "Couldn't issue MFI pass thru cmd\n"); 2304 2307 return; 2305 2308 } 2306 2309 megasas_fire_cmd_fusion(instance, req_desc); ··· 2410 2413 fw_state = instance->instancet->read_fw_status_reg( 2411 2414 instance->reg_set) & MFI_STATE_MASK; 2412 2415 if (fw_state == MFI_STATE_FAULT) { 2413 - printk(KERN_WARNING "megasas: Found FW in FAULT state," 2416 + dev_warn(&instance->pdev->dev, "Found FW in FAULT state," 2414 2417 " will reset adapter scsi%d.\n", 2415 2418 instance->host->host_no); 2416 2419 retval = 1; ··· 2433 2436 hb_seconds_missed++; 2434 2437 if (hb_seconds_missed == 2435 2438 (MEGASAS_SRIOV_HEARTBEAT_INTERVAL_VF/HZ)) { 2436 - printk(KERN_WARNING "megasas: SR-IOV:" 2439 + dev_warn(&instance->pdev->dev, "SR-IOV:" 2437 2440 " Heartbeat never completed " 2438 2441 " while polling during I/O " 2439 2442 " timeout handling for " ··· 2451 2454 goto out; 2452 2455 2453 2456 if (!(i % MEGASAS_RESET_NOTICE_INTERVAL)) { 2454 - printk(KERN_NOTICE "megasas: [%2d]waiting for %d " 2457 + dev_notice(&instance->pdev->dev, "[%2d]waiting for %d " 2455 2458 "commands to complete for scsi%d\n", i, 2456 2459 outstanding, instance->host->host_no); 2457 2460 megasas_complete_cmd_dpc_fusion( ··· 2461 2464 } 2462 2465 2463 2466 if (atomic_read(&instance->fw_outstanding)) { 2464 - printk("megaraid_sas: pending commands remain after waiting, " 2467 + dev_err(&instance->pdev->dev, "pending commands remain after waiting, " 2465 2468 "will reset adapter scsi%d.\n", 2466 2469 instance->host->host_no); 2467 2470 retval = 1; ··· 2561 2564 mutex_lock(&instance->reset_mutex); 2562 2565 2563 2566 if (instance->adprecovery == MEGASAS_HW_CRITICAL_ERROR) { 2564 - printk(KERN_WARNING "megaraid_sas: 
Hardware critical error, " 2567 + dev_warn(&instance->pdev->dev, "Hardware critical error, " 2565 2568 "returning FAILED for scsi%d.\n", 2566 2569 instance->host->host_no); 2567 2570 mutex_unlock(&instance->reset_mutex); ··· 2615 2618 if (megasas_wait_for_outstanding_fusion(instance, iotimeout, 2616 2619 &convert)) { 2617 2620 instance->adprecovery = MEGASAS_ADPRESET_SM_INFAULT; 2618 - printk(KERN_WARNING "megaraid_sas: resetting fusion " 2621 + dev_warn(&instance->pdev->dev, "resetting fusion " 2619 2622 "adapter scsi%d.\n", instance->host->host_no); 2620 2623 if (convert) 2621 2624 iotimeout = 0; ··· 2642 2645 if (instance->disableOnlineCtrlReset || 2643 2646 (abs_state == MFI_STATE_FAULT && !reset_adapter)) { 2644 2647 /* Reset not supported, kill adapter */ 2645 - printk(KERN_WARNING "megaraid_sas: Reset not supported" 2648 + dev_warn(&instance->pdev->dev, "Reset not supported" 2646 2649 ", killing adapter scsi%d.\n", 2647 2650 instance->host->host_no); 2648 2651 megaraid_sas_kill_hba(instance); ··· 2660 2663 instance->hb_host_mem->HB.driverCounter)) { 2661 2664 instance->hb_host_mem->HB.driverCounter = 2662 2665 instance->hb_host_mem->HB.fwCounter; 2663 - printk(KERN_WARNING "megasas: SR-IOV:" 2666 + dev_warn(&instance->pdev->dev, "SR-IOV:" 2664 2667 "Late FW heartbeat update for " 2665 2668 "scsi%d.\n", 2666 2669 instance->host->host_no); ··· 2676 2679 abs_state = status_reg & 2677 2680 MFI_STATE_MASK; 2678 2681 if (abs_state == MFI_STATE_READY) { 2679 - printk(KERN_WARNING "megasas" 2680 - ": SR-IOV: FW was found" 2682 + dev_warn(&instance->pdev->dev, 2683 + "SR-IOV: FW was found" 2681 2684 "to be in ready state " 2682 2685 "for scsi%d.\n", 2683 2686 instance->host->host_no); ··· 2686 2689 msleep(20); 2687 2690 } 2688 2691 if (abs_state != MFI_STATE_READY) { 2689 - printk(KERN_WARNING "megasas: SR-IOV: " 2692 + dev_warn(&instance->pdev->dev, "SR-IOV: " 2690 2693 "FW not in ready state after %d" 2691 2694 " seconds for scsi%d, status_reg = " 2692 2695 
"0x%x.\n", ··· 2728 2731 host_diag = 2729 2732 readl(&instance->reg_set->fusion_host_diag); 2730 2733 if (retry++ == 100) { 2731 - printk(KERN_WARNING "megaraid_sas: " 2734 + dev_warn(&instance->pdev->dev, 2732 2735 "Host diag unlock failed! " 2733 2736 "for scsi%d\n", 2734 2737 instance->host->host_no); ··· 2751 2754 host_diag = 2752 2755 readl(&instance->reg_set->fusion_host_diag); 2753 2756 if (retry++ == 1000) { 2754 - printk(KERN_WARNING "megaraid_sas: " 2757 + dev_warn(&instance->pdev->dev, 2755 2758 "Diag reset adapter never " 2756 2759 "cleared for scsi%d!\n", 2757 2760 instance->host->host_no); ··· 2774 2777 instance->reg_set) & MFI_STATE_MASK; 2775 2778 } 2776 2779 if (abs_state <= MFI_STATE_FW_INIT) { 2777 - printk(KERN_WARNING "megaraid_sas: firmware " 2780 + dev_warn(&instance->pdev->dev, "firmware " 2778 2781 "state < MFI_STATE_FW_INIT, state = " 2779 2782 "0x%x for scsi%d\n", abs_state, 2780 2783 instance->host->host_no); ··· 2783 2786 2784 2787 /* Wait for FW to become ready */ 2785 2788 if (megasas_transition_to_ready(instance, 1)) { 2786 - printk(KERN_WARNING "megaraid_sas: Failed to " 2789 + dev_warn(&instance->pdev->dev, "Failed to " 2787 2790 "transition controller to ready " 2788 2791 "for scsi%d.\n", 2789 2792 instance->host->host_no); ··· 2792 2795 2793 2796 megasas_reset_reply_desc(instance); 2794 2797 if (megasas_ioc_init_fusion(instance)) { 2795 - printk(KERN_WARNING "megaraid_sas: " 2798 + dev_warn(&instance->pdev->dev, 2796 2799 "megasas_ioc_init_fusion() failed!" 
2797 2800 " for scsi%d\n", 2798 2801 instance->host->host_no); ··· 2833 2836 } 2834 2837 2835 2838 /* Adapter reset completed successfully */ 2836 - printk(KERN_WARNING "megaraid_sas: Reset " 2839 + dev_warn(&instance->pdev->dev, "Reset " 2837 2840 "successful for scsi%d.\n", 2838 2841 instance->host->host_no); 2839 2842 ··· 2849 2852 goto out; 2850 2853 } 2851 2854 /* Reset failed, kill the adapter */ 2852 - printk(KERN_WARNING "megaraid_sas: Reset failed, killing " 2855 + dev_warn(&instance->pdev->dev, "Reset failed, killing " 2853 2856 "adapter scsi%d.\n", instance->host->host_no); 2854 2857 megaraid_sas_kill_hba(instance); 2855 2858 instance->skip_heartbeat_timer_del = 1;
+9 -7
drivers/scsi/mpt2sas/mpt2sas_base.c
··· 1557 1557 goto out_fail; 1558 1558 } 1559 1559 1560 - for (i = 0, memap_sz = 0, pio_sz = 0 ; i < DEVICE_COUNT_RESOURCE; i++) { 1560 + for (i = 0, memap_sz = 0, pio_sz = 0; (i < DEVICE_COUNT_RESOURCE) && 1561 + (!memap_sz || !pio_sz); i++) { 1561 1562 if (pci_resource_flags(pdev, i) & IORESOURCE_IO) { 1562 1563 if (pio_sz) 1563 1564 continue; ··· 1573 1572 chip_phys = (u64)ioc->chip_phys; 1574 1573 memap_sz = pci_resource_len(pdev, i); 1575 1574 ioc->chip = ioremap(ioc->chip_phys, memap_sz); 1576 - if (ioc->chip == NULL) { 1577 - printk(MPT2SAS_ERR_FMT "unable to map " 1578 - "adapter memory!\n", ioc->name); 1579 - r = -EINVAL; 1580 - goto out_fail; 1581 - } 1582 1575 } 1583 1576 } 1577 + } 1578 + 1579 + if (ioc->chip == NULL) { 1580 + printk(MPT2SAS_ERR_FMT "unable to map adapter memory! " 1581 + "or resource not found\n", ioc->name); 1582 + r = -EINVAL; 1583 + goto out_fail; 1584 1584 } 1585 1585 1586 1586 _base_mask_interrupts(ioc);
+9 -7
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 1843 1843 goto out_fail; 1844 1844 } 1845 1845 1846 - for (i = 0, memap_sz = 0, pio_sz = 0 ; i < DEVICE_COUNT_RESOURCE; i++) { 1846 + for (i = 0, memap_sz = 0, pio_sz = 0; (i < DEVICE_COUNT_RESOURCE) && 1847 + (!memap_sz || !pio_sz); i++) { 1847 1848 if (pci_resource_flags(pdev, i) & IORESOURCE_IO) { 1848 1849 if (pio_sz) 1849 1850 continue; ··· 1857 1856 chip_phys = (u64)ioc->chip_phys; 1858 1857 memap_sz = pci_resource_len(pdev, i); 1859 1858 ioc->chip = ioremap(ioc->chip_phys, memap_sz); 1860 - if (ioc->chip == NULL) { 1861 - pr_err(MPT3SAS_FMT "unable to map adapter memory!\n", 1862 - ioc->name); 1863 - r = -EINVAL; 1864 - goto out_fail; 1865 - } 1866 1859 } 1860 + } 1861 + 1862 + if (ioc->chip == NULL) { 1863 + pr_err(MPT3SAS_FMT "unable to map adapter memory! " 1864 + " or resource not found\n", ioc->name); 1865 + r = -EINVAL; 1866 + goto out_fail; 1867 1867 } 1868 1868 1869 1869 _base_mask_interrupts(ioc);
+4 -1
drivers/scsi/mvsas/mv_init.c
··· 338 338 339 339 res_start = pci_resource_start(pdev, bar); 340 340 res_len = pci_resource_len(pdev, bar); 341 - if (!res_start || !res_len) 341 + if (!res_start || !res_len) { 342 + iounmap(mvi->regs_ex); 343 + mvi->regs_ex = NULL; 342 344 goto err_out; 345 + } 343 346 344 347 res_flag = pci_resource_flags(pdev, bar); 345 348 if (res_flag & IORESOURCE_CACHEABLE)
+3 -1
drivers/scsi/pm8001/pm8001_defs.h
··· 49 49 chip_8019, 50 50 chip_8074, 51 51 chip_8076, 52 - chip_8077 52 + chip_8077, 53 + chip_8006, 53 54 }; 54 55 55 56 enum phy_speed { 56 57 PHY_SPEED_15 = 0x01, 57 58 PHY_SPEED_30 = 0x02, 58 59 PHY_SPEED_60 = 0x04, 60 + PHY_SPEED_120 = 0x08, 59 61 }; 60 62 61 63 enum data_direction {
+4
drivers/scsi/pm8001/pm8001_hwi.c
··· 3263 3263 struct sas_phy *sas_phy = phy->sas_phy.phy; 3264 3264 3265 3265 switch (link_rate) { 3266 + case PHY_SPEED_120: 3267 + phy->sas_phy.linkrate = SAS_LINK_RATE_12_0_GBPS; 3268 + phy->sas_phy.phy->negotiated_linkrate = SAS_LINK_RATE_12_0_GBPS; 3269 + break; 3266 3270 case PHY_SPEED_60: 3267 3271 phy->sas_phy.linkrate = SAS_LINK_RATE_6_0_GBPS; 3268 3272 phy->sas_phy.phy->negotiated_linkrate = SAS_LINK_RATE_6_0_GBPS;
+4 -1
drivers/scsi/pm8001/pm8001_init.c
··· 57 57 [chip_8074] = {0, 8, &pm8001_80xx_dispatch,}, 58 58 [chip_8076] = {0, 16, &pm8001_80xx_dispatch,}, 59 59 [chip_8077] = {0, 16, &pm8001_80xx_dispatch,}, 60 + [chip_8006] = {0, 16, &pm8001_80xx_dispatch,}, 60 61 }; 61 62 static int pm8001_id; 62 63 ··· 1108 1107 */ 1109 1108 static struct pci_device_id pm8001_pci_table[] = { 1110 1109 { PCI_VDEVICE(PMC_Sierra, 0x8001), chip_8001 }, 1110 + { PCI_VDEVICE(PMC_Sierra, 0x8006), chip_8006 }, 1111 + { PCI_VDEVICE(ADAPTEC2, 0x8006), chip_8006 }, 1111 1112 { PCI_VDEVICE(ATTO, 0x0042), chip_8001 }, 1112 1113 /* Support for SPC/SPCv/SPCve controllers */ 1113 1114 { PCI_VDEVICE(ADAPTEC2, 0x8001), chip_8001 }, ··· 1220 1217 MODULE_AUTHOR("Sangeetha Gnanasekaran <Sangeetha.Gnanasekaran@pmcs.com>"); 1221 1218 MODULE_AUTHOR("Nikith Ganigarakoppal <Nikith.Ganigarakoppal@pmcs.com>"); 1222 1219 MODULE_DESCRIPTION( 1223 - "PMC-Sierra PM8001/8081/8088/8089/8074/8076/8077 " 1220 + "PMC-Sierra PM8001/8006/8081/8088/8089/8074/8076/8077 " 1224 1221 "SAS/SATA controller driver"); 1225 1222 MODULE_VERSION(DRV_VERSION); 1226 1223 MODULE_LICENSE("GPL");
+14 -5
drivers/scsi/pm8001/pm8001_sas.c
··· 790 790 ccb->device = pm8001_dev; 791 791 ccb->ccb_tag = ccb_tag; 792 792 ccb->task = task; 793 + ccb->n_elem = 0; 793 794 794 795 res = PM8001_CHIP_DISP->task_abort(pm8001_ha, 795 796 pm8001_dev, flag, task_tag, ccb_tag); ··· 976 975 phy = sas_get_local_phy(dev); 977 976 978 977 if (dev_is_sata(dev)) { 979 - DECLARE_COMPLETION_ONSTACK(completion_setstate); 980 978 if (scsi_is_sas_phy_local(phy)) { 981 979 rc = 0; 982 980 goto out; 983 981 } 984 982 rc = sas_phy_reset(phy, 1); 983 + if (rc) { 984 + PM8001_EH_DBG(pm8001_ha, 985 + pm8001_printk("phy reset failed for device %x\n" 986 + "with rc %d\n", pm8001_dev->device_id, rc)); 987 + rc = TMF_RESP_FUNC_FAILED; 988 + goto out; 989 + } 985 990 msleep(2000); 986 991 rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev , 987 992 dev, 1, 0); 988 - pm8001_dev->setds_completion = &completion_setstate; 989 - rc = PM8001_CHIP_DISP->set_dev_state_req(pm8001_ha, 990 - pm8001_dev, 0x01); 991 - wait_for_completion(&completion_setstate); 993 + if (rc) { 994 + PM8001_EH_DBG(pm8001_ha, 995 + pm8001_printk("task abort failed %x\n" 996 + "with rc %d\n", pm8001_dev->device_id, rc)); 997 + rc = TMF_RESP_FUNC_FAILED; 998 + } 992 999 } else { 993 1000 rc = sas_phy_reset(phy, 1); 994 1001 msleep(2000);
+10 -2
drivers/scsi/pm8001/pm8001_sas.h
··· 58 58 #include "pm8001_defs.h" 59 59 60 60 #define DRV_NAME "pm80xx" 61 - #define DRV_VERSION "0.1.37" 61 + #define DRV_VERSION "0.1.38" 62 62 #define PM8001_FAIL_LOGGING 0x01 /* Error message logging */ 63 63 #define PM8001_INIT_LOGGING 0x02 /* driver init logging */ 64 64 #define PM8001_DISC_LOGGING 0x04 /* discovery layer logging */ ··· 241 241 struct pm8001_port { 242 242 struct asd_sas_port sas_port; 243 243 u8 port_attached; 244 - u8 wide_port_phymap; 244 + u16 wide_port_phymap; 245 245 u8 port_state; 246 246 struct list_head list; 247 247 }; ··· 569 569 #define NCQ_READ_LOG_FLAG 0x80000000 570 570 #define NCQ_ABORT_ALL_FLAG 0x40000000 571 571 #define NCQ_2ND_RLE_FLAG 0x20000000 572 + 573 + /* Device states */ 574 + #define DS_OPERATIONAL 0x01 575 + #define DS_PORT_IN_RESET 0x02 576 + #define DS_IN_RECOVERY 0x03 577 + #define DS_IN_ERROR 0x04 578 + #define DS_NON_OPERATIONAL 0x07 579 + 572 580 /** 573 581 * brief param structure for firmware flash update. 574 582 */
+82 -29
drivers/scsi/pm8001/pm80xx_hwi.c
··· 309 309 pm8001_mr32(address, MAIN_INT_VECTOR_TABLE_OFFSET); 310 310 pm8001_ha->main_cfg_tbl.pm80xx_tbl.phy_attr_table_offset = 311 311 pm8001_mr32(address, MAIN_SAS_PHY_ATTR_TABLE_OFFSET); 312 + /* read port recover and reset timeout */ 313 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.port_recovery_timer = 314 + pm8001_mr32(address, MAIN_PORT_RECOVERY_TIMER); 312 315 } 313 316 314 317 /** ··· 588 585 pm8001_ha->main_cfg_tbl.pm80xx_tbl.port_recovery_timer); 589 586 pm8001_mw32(address, MAIN_INT_REASSERTION_DELAY, 590 587 pm8001_ha->main_cfg_tbl.pm80xx_tbl.interrupt_reassertion_delay); 588 + 589 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.port_recovery_timer &= 0xffff0000; 590 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.port_recovery_timer |= 591 + PORT_RECOVERY_TIMEOUT; 592 + pm8001_mw32(address, MAIN_PORT_RECOVERY_TIMER, 593 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.port_recovery_timer); 591 594 } 592 595 593 596 /** ··· 852 843 int rc; 853 844 u32 tag; 854 845 u32 opc = OPC_INB_SET_CONTROLLER_CONFIG; 846 + u32 page_code; 855 847 856 848 memset(&payload, 0, sizeof(struct set_ctrl_cfg_req)); 857 849 rc = pm8001_tag_alloc(pm8001_ha, &tag); ··· 861 851 862 852 circularQ = &pm8001_ha->inbnd_q_tbl[0]; 863 853 payload.tag = cpu_to_le32(tag); 854 + 855 + if (IS_SPCV_12G(pm8001_ha->pdev)) 856 + page_code = THERMAL_PAGE_CODE_7H; 857 + else 858 + page_code = THERMAL_PAGE_CODE_8H; 859 + 864 860 payload.cfg_pg[0] = (THERMAL_LOG_ENABLE << 9) | 865 - (THERMAL_ENABLE << 8) | THERMAL_OP_CODE; 861 + (THERMAL_ENABLE << 8) | page_code; 866 862 payload.cfg_pg[1] = (LTEMPHIL << 24) | (RTEMPHIL << 8); 867 863 868 864 rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); ··· 1605 1589 case IO_XFER_ERROR_PHY_NOT_READY: 1606 1590 PM8001_IO_DBG(pm8001_ha, 1607 1591 pm8001_printk("IO_XFER_ERROR_PHY_NOT_READY\n")); 1592 + ts->resp = SAS_TASK_COMPLETE; 1593 + ts->stat = SAS_OPEN_REJECT; 1594 + ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; 1595 + break; 1596 + case IO_XFER_ERROR_INVALID_SSP_RSP_FRAME: 1597 
+ PM8001_IO_DBG(pm8001_ha, 1598 + pm8001_printk("IO_XFER_ERROR_INVALID_SSP_RSP_FRAME\n")); 1608 1599 ts->resp = SAS_TASK_COMPLETE; 1609 1600 ts->stat = SAS_OPEN_REJECT; 1610 1601 ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; ··· 2852 2829 static int pm80xx_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha, 2853 2830 u32 phyId, u32 phy_op); 2854 2831 2832 + static void hw_event_port_recover(struct pm8001_hba_info *pm8001_ha, 2833 + void *piomb) 2834 + { 2835 + struct hw_event_resp *pPayload = (struct hw_event_resp *)(piomb + 4); 2836 + u32 phyid_npip_portstate = le32_to_cpu(pPayload->phyid_npip_portstate); 2837 + u8 phy_id = (u8)((phyid_npip_portstate & 0xFF0000) >> 16); 2838 + u32 lr_status_evt_portid = 2839 + le32_to_cpu(pPayload->lr_status_evt_portid); 2840 + u8 deviceType = pPayload->sas_identify.dev_type; 2841 + u8 link_rate = (u8)((lr_status_evt_portid & 0xF0000000) >> 28); 2842 + struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; 2843 + u8 port_id = (u8)(lr_status_evt_portid & 0x000000FF); 2844 + struct pm8001_port *port = &pm8001_ha->port[port_id]; 2845 + 2846 + if (deviceType == SAS_END_DEVICE) { 2847 + pm80xx_chip_phy_ctl_req(pm8001_ha, phy_id, 2848 + PHY_NOTIFY_ENABLE_SPINUP); 2849 + } 2850 + 2851 + port->wide_port_phymap |= (1U << phy_id); 2852 + pm8001_get_lrate_mode(phy, link_rate); 2853 + phy->sas_phy.oob_mode = SAS_OOB_MODE; 2854 + phy->phy_state = PHY_STATE_LINK_UP_SPCV; 2855 + phy->phy_attached = 1; 2856 + } 2857 + 2855 2858 /** 2856 2859 * hw_event_sas_phy_up -FW tells me a SAS phy up event. 
2857 2860 * @pm8001_ha: our hba card information ··· 2905 2856 unsigned long flags; 2906 2857 u8 deviceType = pPayload->sas_identify.dev_type; 2907 2858 port->port_state = portstate; 2859 + port->wide_port_phymap |= (1U << phy_id); 2908 2860 phy->phy_state = PHY_STATE_LINK_UP_SPCV; 2909 2861 PM8001_MSG_DBG(pm8001_ha, pm8001_printk( 2910 2862 "portid:%d; phyid:%d; linkrate:%d; " ··· 3031 2981 struct pm8001_port *port = &pm8001_ha->port[port_id]; 3032 2982 struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; 3033 2983 port->port_state = portstate; 3034 - phy->phy_type = 0; 3035 2984 phy->identify.device_type = 0; 3036 2985 phy->phy_attached = 0; 3037 2986 memset(&phy->dev_sas_addr, 0, SAS_ADDR_SIZE); ··· 3042 2993 pm8001_printk(" PortInvalid portID %d\n", port_id)); 3043 2994 PM8001_MSG_DBG(pm8001_ha, 3044 2995 pm8001_printk(" Last phy Down and port invalid\n")); 3045 - port->port_attached = 0; 3046 - pm80xx_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN, 3047 - port_id, phy_id, 0, 0); 2996 + if (phy->phy_type & PORT_TYPE_SATA) { 2997 + phy->phy_type = 0; 2998 + port->port_attached = 0; 2999 + pm80xx_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN, 3000 + port_id, phy_id, 0, 0); 3001 + } 3002 + sas_phy_disconnected(&phy->sas_phy); 3048 3003 break; 3049 3004 case PORT_IN_RESET: 3050 3005 PM8001_MSG_DBG(pm8001_ha, ··· 3056 3003 break; 3057 3004 case PORT_NOT_ESTABLISHED: 3058 3005 PM8001_MSG_DBG(pm8001_ha, 3059 - pm8001_printk(" phy Down and PORT_NOT_ESTABLISHED\n")); 3006 + pm8001_printk(" Phy Down and PORT_NOT_ESTABLISHED\n")); 3060 3007 port->port_attached = 0; 3061 3008 break; 3062 3009 case PORT_LOSTCOMM: 3063 3010 PM8001_MSG_DBG(pm8001_ha, 3064 - pm8001_printk(" phy Down and PORT_LOSTCOMM\n")); 3011 + pm8001_printk(" Phy Down and PORT_LOSTCOMM\n")); 3065 3012 PM8001_MSG_DBG(pm8001_ha, 3066 3013 pm8001_printk(" Last phy Down and port invalid\n")); 3067 - port->port_attached = 0; 3068 - pm80xx_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN, 3069 - port_id, 
phy_id, 0, 0); 3014 + if (phy->phy_type & PORT_TYPE_SATA) { 3015 + port->port_attached = 0; 3016 + phy->phy_type = 0; 3017 + pm80xx_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN, 3018 + port_id, phy_id, 0, 0); 3019 + } 3020 + sas_phy_disconnected(&phy->sas_phy); 3070 3021 break; 3071 3022 default: 3072 3023 port->port_attached = 0; 3073 3024 PM8001_MSG_DBG(pm8001_ha, 3074 - pm8001_printk(" phy Down and(default) = 0x%x\n", 3025 + pm8001_printk(" Phy Down and(default) = 0x%x\n", 3075 3026 portstate)); 3076 3027 break; 3077 3028 ··· 3141 3084 */ 3142 3085 static int mpi_hw_event(struct pm8001_hba_info *pm8001_ha, void *piomb) 3143 3086 { 3144 - unsigned long flags; 3087 + unsigned long flags, i; 3145 3088 struct hw_event_resp *pPayload = 3146 3089 (struct hw_event_resp *)(piomb + 4); 3147 3090 u32 lr_status_evt_portid = ··· 3154 3097 (u16)((lr_status_evt_portid & 0x00FFFF00) >> 8); 3155 3098 u8 status = 3156 3099 (u8)((lr_status_evt_portid & 0x0F000000) >> 24); 3157 - 3158 3100 struct sas_ha_struct *sas_ha = pm8001_ha->sas; 3159 3101 struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; 3102 + struct pm8001_port *port = &pm8001_ha->port[port_id]; 3160 3103 struct asd_sas_phy *sas_phy = sas_ha->sas_phy[phy_id]; 3161 3104 PM8001_MSG_DBG(pm8001_ha, 3162 3105 pm8001_printk("portid:%d phyid:%d event:0x%x status:0x%x\n", ··· 3182 3125 case HW_EVENT_PHY_DOWN: 3183 3126 PM8001_MSG_DBG(pm8001_ha, 3184 3127 pm8001_printk("HW_EVENT_PHY_DOWN\n")); 3185 - sas_ha->notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL); 3128 + if (phy->phy_type & PORT_TYPE_SATA) 3129 + sas_ha->notify_phy_event(&phy->sas_phy, 3130 + PHYE_LOSS_OF_SIGNAL); 3186 3131 phy->phy_attached = 0; 3187 3132 phy->phy_state = 0; 3188 3133 hw_event_phy_down(pm8001_ha, piomb); ··· 3228 3169 pm8001_printk("HW_EVENT_LINK_ERR_INVALID_DWORD\n")); 3229 3170 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3230 3171 HW_EVENT_LINK_ERR_INVALID_DWORD, port_id, phy_id, 0, 0); 3231 - sas_phy_disconnected(sas_phy); 3232 - phy->phy_attached 
= 0; 3233 - sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); 3234 3172 break; 3235 3173 case HW_EVENT_LINK_ERR_DISPARITY_ERROR: 3236 3174 PM8001_MSG_DBG(pm8001_ha, ··· 3235 3179 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3236 3180 HW_EVENT_LINK_ERR_DISPARITY_ERROR, 3237 3181 port_id, phy_id, 0, 0); 3238 - sas_phy_disconnected(sas_phy); 3239 - phy->phy_attached = 0; 3240 - sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); 3241 3182 break; 3242 3183 case HW_EVENT_LINK_ERR_CODE_VIOLATION: 3243 3184 PM8001_MSG_DBG(pm8001_ha, ··· 3242 3189 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3243 3190 HW_EVENT_LINK_ERR_CODE_VIOLATION, 3244 3191 port_id, phy_id, 0, 0); 3245 - sas_phy_disconnected(sas_phy); 3246 - phy->phy_attached = 0; 3247 - sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); 3248 3192 break; 3249 3193 case HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH: 3250 3194 PM8001_MSG_DBG(pm8001_ha, pm8001_printk( ··· 3249 3199 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3250 3200 HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH, 3251 3201 port_id, phy_id, 0, 0); 3252 - sas_phy_disconnected(sas_phy); 3253 - phy->phy_attached = 0; 3254 - sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); 3255 3202 break; 3256 3203 case HW_EVENT_MALFUNCTION: 3257 3204 PM8001_MSG_DBG(pm8001_ha, ··· 3304 3257 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3305 3258 HW_EVENT_PORT_RECOVERY_TIMER_TMO, 3306 3259 port_id, phy_id, 0, 0); 3307 - sas_phy_disconnected(sas_phy); 3308 - phy->phy_attached = 0; 3309 - sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR); 3260 + for (i = 0; i < pm8001_ha->chip->n_phy; i++) { 3261 + if (port->wide_port_phymap & (1 << i)) { 3262 + phy = &pm8001_ha->phy[i]; 3263 + sas_ha->notify_phy_event(&phy->sas_phy, 3264 + PHYE_LOSS_OF_SIGNAL); 3265 + port->wide_port_phymap &= ~(1 << i); 3266 + } 3267 + } 3310 3268 break; 3311 3269 case HW_EVENT_PORT_RECOVER: 3312 3270 PM8001_MSG_DBG(pm8001_ha, 3313 3271 pm8001_printk("HW_EVENT_PORT_RECOVER\n")); 3272 + 
hw_event_port_recover(pm8001_ha, piomb); 3314 3273 break; 3315 3274 case HW_EVENT_PORT_RESET_COMPLETE: 3316 3275 PM8001_MSG_DBG(pm8001_ha,
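The new `hw_event_port_recover()` above decodes packed little-endian event words: the phy id sits in bits 23:16 of `phyid_npip_portstate`, while `lr_status_evt_portid` packs the link rate in bits 31:28 and the port id in bits 7:0, and wide-port membership is tracked one bit per phy in the new `wide_port_phymap`. A minimal user-space sketch of that decoding (helper names here are illustrative, not from the driver):

```c
#include <stdint.h>

/* Field extraction matching the masks/shifts used by
 * hw_event_port_recover() in pm80xx_hwi.c. */
static inline uint8_t event_phy_id(uint32_t phyid_npip_portstate)
{
    return (uint8_t)((phyid_npip_portstate & 0x00FF0000u) >> 16);
}

static inline uint8_t event_link_rate(uint32_t lr_status_evt_portid)
{
    return (uint8_t)((lr_status_evt_portid & 0xF0000000u) >> 28);
}

static inline uint8_t event_port_id(uint32_t lr_status_evt_portid)
{
    return (uint8_t)(lr_status_evt_portid & 0x000000FFu);
}

/* Wide-port bookkeeping: one bit per phy, as the new
 * wide_port_phymap field is used in the diff. */
static inline uint32_t phymap_add(uint32_t map, uint8_t phy_id)
{
    return map | (1u << phy_id);
}

static inline uint32_t phymap_remove(uint32_t map, uint8_t phy_id)
{
    return map & ~(1u << phy_id);
}
```

The port-recovery-timeout handler walks exactly this bitmap, signalling `PHYE_LOSS_OF_SIGNAL` for each set bit and then clearing it.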
+3 -2
drivers/scsi/pm8001/pm80xx_hwi.h
··· 177 177 /* Thermal related */ 178 178 #define THERMAL_ENABLE 0x1 179 179 #define THERMAL_LOG_ENABLE 0x1 180 - #define THERMAL_OP_CODE 0x6 180 + #define THERMAL_PAGE_CODE_7H 0x6 181 + #define THERMAL_PAGE_CODE_8H 0x7 181 182 #define LTEMPHIL 70 182 183 #define RTEMPHIL 100 183 184 ··· 1175 1174 #define IO_XFER_ERROR_INTERNAL_CRC_ERROR 0x54 1176 1175 #define MPI_IO_RQE_BUSY_FULL 0x55 1177 1176 #define IO_XFER_ERR_EOB_DATA_OVERRUN 0x56 1178 - #define IO_XFR_ERROR_INVALID_SSP_RSP_FRAME 0x57 1177 + #define IO_XFER_ERROR_INVALID_SSP_RSP_FRAME 0x57 1179 1178 #define IO_OPEN_CNX_ERROR_OPEN_PREEMPTED 0x58 1180 1179 1181 1180 #define MPI_ERR_IO_RESOURCE_UNAVAILABLE 0x1004
+19 -5
drivers/scsi/qla2xxx/qla_attr.c
··· 884 884 struct device, kobj))); 885 885 struct qla_hw_data *ha = vha->hw; 886 886 int rval; 887 - uint16_t actual_size; 888 887 889 888 if (!capable(CAP_SYS_ADMIN) || off != 0 || count > DCBX_TLV_DATA_SIZE) 890 889 return 0; ··· 900 901 } 901 902 902 903 do_read: 903 - actual_size = 0; 904 904 memset(ha->dcbx_tlv, 0, DCBX_TLV_DATA_SIZE); 905 905 906 906 rval = qla2x00_get_dcbx_params(vha, ha->dcbx_tlv_dma, ··· 1077 1079 char *buf) 1078 1080 { 1079 1081 scsi_qla_host_t *vha = shost_priv(class_to_shost(dev)); 1080 - return scnprintf(buf, PAGE_SIZE, "%s\n", 1081 - vha->hw->model_desc ? vha->hw->model_desc : ""); 1082 + return scnprintf(buf, PAGE_SIZE, "%s\n", vha->hw->model_desc); 1082 1083 } 1083 1084 1084 1085 static ssize_t ··· 1345 1348 scsi_qla_host_t *vha = shost_priv(class_to_shost(dev)); 1346 1349 struct qla_hw_data *ha = vha->hw; 1347 1350 1348 - if (!IS_QLA81XX(ha) && !IS_QLA8031(ha) && !IS_QLA8044(ha)) 1351 + if (!IS_QLA81XX(ha) && !IS_QLA8031(ha) && !IS_QLA8044(ha) && 1352 + !IS_QLA27XX(ha)) 1349 1353 return scnprintf(buf, PAGE_SIZE, "\n"); 1350 1354 1351 1355 return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d (%x)\n", ··· 1535 1537 return strlen(buf); 1536 1538 } 1537 1539 1540 + static ssize_t 1541 + qla2x00_pep_version_show(struct device *dev, struct device_attribute *attr, 1542 + char *buf) 1543 + { 1544 + scsi_qla_host_t *vha = shost_priv(class_to_shost(dev)); 1545 + struct qla_hw_data *ha = vha->hw; 1546 + 1547 + if (!IS_QLA27XX(ha)) 1548 + return scnprintf(buf, PAGE_SIZE, "\n"); 1549 + 1550 + return scnprintf(buf, PAGE_SIZE, "%d.%02d.%02d\n", 1551 + ha->pep_version[0], ha->pep_version[1], ha->pep_version[2]); 1552 + } 1553 + 1538 1554 static DEVICE_ATTR(driver_version, S_IRUGO, qla2x00_drvr_version_show, NULL); 1539 1555 static DEVICE_ATTR(fw_version, S_IRUGO, qla2x00_fw_version_show, NULL); 1540 1556 static DEVICE_ATTR(serial_num, S_IRUGO, qla2x00_serial_num_show, NULL); ··· 1593 1581 static DEVICE_ATTR(allow_cna_fw_dump, S_IRUGO | S_IWUSR, 1594 
1582 qla2x00_allow_cna_fw_dump_show, 1595 1583 qla2x00_allow_cna_fw_dump_store); 1584 + static DEVICE_ATTR(pep_version, S_IRUGO, qla2x00_pep_version_show, NULL); 1596 1585 1597 1586 struct device_attribute *qla2x00_host_attrs[] = { 1598 1587 &dev_attr_driver_version, ··· 1627 1614 &dev_attr_diag_megabytes, 1628 1615 &dev_attr_fw_dump_size, 1629 1616 &dev_attr_allow_cna_fw_dump, 1617 + &dev_attr_pep_version, 1630 1618 NULL, 1631 1619 }; 1632 1620
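The new `pep_version` sysfs attribute renders a three-byte version array as `major.minor.patch` with zero-padded minor/patch fields. A small stand-alone sketch of that formatting (`format_pep_version()` is a hypothetical stand-in for the `scnprintf()` call in `qla2x00_pep_version_show()`):

```c
#include <stdio.h>
#include <stdint.h>

/* Render a 3-byte firmware version the way the new sysfs
 * attribute does: "%d.%02d.%02d" plus a trailing newline. */
static int format_pep_version(char *buf, size_t size, const uint8_t v[3])
{
    return snprintf(buf, size, "%d.%02d.%02d\n", v[0], v[1], v[2]);
}
```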
+1 -6
drivers/scsi/qla2xxx/qla_bsg.c
··· 405 405 return rval; 406 406 } 407 407 408 - inline uint16_t 408 + static inline uint16_t 409 409 qla24xx_calc_ct_iocbs(uint16_t dsds) 410 410 { 411 411 uint16_t iocbs; ··· 1733 1733 struct Scsi_Host *host = bsg_job->shost; 1734 1734 scsi_qla_host_t *vha = shost_priv(host); 1735 1735 struct qla_hw_data *ha = vha->hw; 1736 - uint16_t thread_id; 1737 1736 uint32_t rval = EXT_STATUS_OK; 1738 1737 uint16_t req_sg_cnt = 0; 1739 1738 uint16_t rsp_sg_cnt = 0; ··· 1788 1789 rval = EXT_STATUS_INVALID_CFG; 1789 1790 goto done; 1790 1791 } 1791 - 1792 - thread_id = bsg_job->request->rqst_data.h_vendor.vendor_cmd[1]; 1793 1792 1794 1793 mutex_lock(&ha->selflogin_lock); 1795 1794 if (vha->self_login_loop_id == 0) { ··· 2171 2174 { 2172 2175 int ret = -EINVAL; 2173 2176 struct fc_rport *rport; 2174 - fc_port_t *fcport = NULL; 2175 2177 struct Scsi_Host *host; 2176 2178 scsi_qla_host_t *vha; 2177 2179 ··· 2179 2183 2180 2184 if (bsg_job->request->msgcode == FC_BSG_RPT_ELS) { 2181 2185 rport = bsg_job->rport; 2182 - fcport = *(fc_port_t **) rport->dd_data; 2183 2186 host = rport_to_shost(rport); 2184 2187 vha = shost_priv(host); 2185 2188 } else {
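`qla24xx_calc_ct_iocbs()`, made `static` above, sizes the IOCB chain for a scatter list: the command IOCB carries the first two data segment descriptors and each continuation IOCB carries five more. A sketch of that calculation, based on the shape of the qla2xxx code (treat it as illustrative rather than a verbatim copy):

```c
#include <stdint.h>

/* One command IOCB holds 2 DSDs; each continuation IOCB holds 5.
 * Round the remainder up so a partial continuation still counts. */
static uint16_t calc_ct_iocbs(uint16_t dsds)
{
    uint16_t iocbs = 1;

    if (dsds > 2) {
        iocbs += (dsds - 2) / 5;
        if ((dsds - 2) % 5)
            iocbs++;
    }
    return iocbs;
}
```

Marking it `static` lets the compiler inline it and drops the needless external symbol, which is the whole point of the one-line change.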
+63 -39
drivers/scsi/qla2xxx/qla_dbg.c
··· 19 19 * | Device Discovery | 0x2016 | 0x2020-0x2022, | 20 20 * | | | 0x2011-0x2012, | 21 21 * | | | 0x2099-0x20a4 | 22 - * | Queue Command and IO tracing | 0x3059 | 0x300b | 22 + * | Queue Command and IO tracing | 0x3075 | 0x300b | 23 23 * | | | 0x3027-0x3028 | 24 24 * | | | 0x303d-0x3041 | 25 25 * | | | 0x302d,0x3033 | 26 26 * | | | 0x3036,0x3038 | 27 27 * | | | 0x303a | 28 28 * | DPC Thread | 0x4023 | 0x4002,0x4013 | 29 - * | Async Events | 0x5087 | 0x502b-0x502f | 29 + * | Async Events | 0x508a | 0x502b-0x502f | 30 30 * | | | 0x5047 | 31 31 * | | | 0x5084,0x5075 | 32 32 * | | | 0x503d,0x5044 | ··· 117 117 { 118 118 int rval; 119 119 uint32_t cnt, stat, timer, dwords, idx; 120 - uint16_t mb0, mb1; 120 + uint16_t mb0; 121 121 struct device_reg_24xx __iomem *reg = &ha->iobase->isp24; 122 122 dma_addr_t dump_dma = ha->gid_list_dma; 123 123 uint32_t *dump = (uint32_t *)ha->gid_list; ··· 161 161 &ha->mbx_cmd_flags); 162 162 163 163 mb0 = RD_REG_WORD(&reg->mailbox0); 164 - mb1 = RD_REG_WORD(&reg->mailbox1); 164 + RD_REG_WORD(&reg->mailbox1); 165 165 166 166 WRT_REG_DWORD(&reg->hccr, 167 167 HCCRX_CLR_RISC_INT); ··· 486 486 return ptr; 487 487 488 488 *last_chain = &fcec->type; 489 - fcec->type = __constant_htonl(DUMP_CHAIN_FCE); 489 + fcec->type = htonl(DUMP_CHAIN_FCE); 490 490 fcec->chain_size = htonl(sizeof(struct qla2xxx_fce_chain) + 491 491 fce_calc_size(ha->fce_bufs)); 492 492 fcec->size = htonl(fce_calc_size(ha->fce_bufs)); ··· 527 527 /* aqp = ha->atio_q_map[que]; */ 528 528 q = ptr; 529 529 *last_chain = &q->type; 530 - q->type = __constant_htonl(DUMP_CHAIN_QUEUE); 530 + q->type = htonl(DUMP_CHAIN_QUEUE); 531 531 q->chain_size = htonl( 532 532 sizeof(struct qla2xxx_mqueue_chain) + 533 533 sizeof(struct qla2xxx_mqueue_header) + ··· 536 536 537 537 /* Add header. 
*/ 538 538 qh = ptr; 539 - qh->queue = __constant_htonl(TYPE_ATIO_QUEUE); 539 + qh->queue = htonl(TYPE_ATIO_QUEUE); 540 540 qh->number = htonl(que); 541 541 qh->size = htonl(aqp->length * sizeof(request_t)); 542 542 ptr += sizeof(struct qla2xxx_mqueue_header); ··· 571 571 /* Add chain. */ 572 572 q = ptr; 573 573 *last_chain = &q->type; 574 - q->type = __constant_htonl(DUMP_CHAIN_QUEUE); 574 + q->type = htonl(DUMP_CHAIN_QUEUE); 575 575 q->chain_size = htonl( 576 576 sizeof(struct qla2xxx_mqueue_chain) + 577 577 sizeof(struct qla2xxx_mqueue_header) + ··· 580 580 581 581 /* Add header. */ 582 582 qh = ptr; 583 - qh->queue = __constant_htonl(TYPE_REQUEST_QUEUE); 583 + qh->queue = htonl(TYPE_REQUEST_QUEUE); 584 584 qh->number = htonl(que); 585 585 qh->size = htonl(req->length * sizeof(request_t)); 586 586 ptr += sizeof(struct qla2xxx_mqueue_header); ··· 599 599 /* Add chain. */ 600 600 q = ptr; 601 601 *last_chain = &q->type; 602 - q->type = __constant_htonl(DUMP_CHAIN_QUEUE); 602 + q->type = htonl(DUMP_CHAIN_QUEUE); 603 603 q->chain_size = htonl( 604 604 sizeof(struct qla2xxx_mqueue_chain) + 605 605 sizeof(struct qla2xxx_mqueue_header) + ··· 608 608 609 609 /* Add header. 
*/ 610 610 qh = ptr; 611 - qh->queue = __constant_htonl(TYPE_RESPONSE_QUEUE); 611 + qh->queue = htonl(TYPE_RESPONSE_QUEUE); 612 612 qh->number = htonl(que); 613 613 qh->size = htonl(rsp->length * sizeof(response_t)); 614 614 ptr += sizeof(struct qla2xxx_mqueue_header); ··· 627 627 uint32_t cnt, que_idx; 628 628 uint8_t que_cnt; 629 629 struct qla2xxx_mq_chain *mq = ptr; 630 - device_reg_t __iomem *reg; 630 + device_reg_t *reg; 631 631 632 632 if (!ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha)) 633 633 return ptr; 634 634 635 635 mq = ptr; 636 636 *last_chain = &mq->type; 637 - mq->type = __constant_htonl(DUMP_CHAIN_MQ); 638 - mq->chain_size = __constant_htonl(sizeof(struct qla2xxx_mq_chain)); 637 + mq->type = htonl(DUMP_CHAIN_MQ); 638 + mq->chain_size = htonl(sizeof(struct qla2xxx_mq_chain)); 639 639 640 640 que_cnt = ha->max_req_queues > ha->max_rsp_queues ? 641 641 ha->max_req_queues : ha->max_rsp_queues; ··· 695 695 696 696 flags = 0; 697 697 698 + #ifndef __CHECKER__ 698 699 if (!hardware_locked) 699 700 spin_lock_irqsave(&ha->hardware_lock, flags); 701 + #endif 700 702 701 703 if (!ha->fw_dump) { 702 704 ql_log(ql_log_warn, vha, 0xd002, ··· 834 832 qla2xxx_dump_post_process(base_vha, rval); 835 833 836 834 qla2300_fw_dump_failed: 835 + #ifndef __CHECKER__ 837 836 if (!hardware_locked) 838 837 spin_unlock_irqrestore(&ha->hardware_lock, flags); 838 + #else 839 + ; 840 + #endif 839 841 } 840 842 841 843 /** ··· 865 859 mb0 = mb2 = 0; 866 860 flags = 0; 867 861 862 + #ifndef __CHECKER__ 868 863 if (!hardware_locked) 869 864 spin_lock_irqsave(&ha->hardware_lock, flags); 865 + #endif 870 866 871 867 if (!ha->fw_dump) { 872 868 ql_log(ql_log_warn, vha, 0xd004, ··· 1038 1030 qla2xxx_dump_post_process(base_vha, rval); 1039 1031 1040 1032 qla2100_fw_dump_failed: 1033 + #ifndef __CHECKER__ 1041 1034 if (!hardware_locked) 1042 1035 spin_unlock_irqrestore(&ha->hardware_lock, flags); 1036 + #else 1037 + ; 1038 + #endif 1043 1039 } 1044 1040 1045 1041 void ··· 1051 1039 { 
1052 1040 int rval; 1053 1041 uint32_t cnt; 1054 - uint32_t risc_address; 1055 1042 struct qla_hw_data *ha = vha->hw; 1056 1043 struct device_reg_24xx __iomem *reg = &ha->iobase->isp24; 1057 1044 uint32_t __iomem *dmp_reg; ··· 1058 1047 uint16_t __iomem *mbx_reg; 1059 1048 unsigned long flags; 1060 1049 struct qla24xx_fw_dump *fw; 1061 - uint32_t ext_mem_cnt; 1062 1050 void *nxt; 1063 1051 void *nxt_chain; 1064 1052 uint32_t *last_chain = NULL; ··· 1066 1056 if (IS_P3P_TYPE(ha)) 1067 1057 return; 1068 1058 1069 - risc_address = ext_mem_cnt = 0; 1070 1059 flags = 0; 1071 1060 ha->fw_dump_cap_flags = 0; 1072 1061 1062 + #ifndef __CHECKER__ 1073 1063 if (!hardware_locked) 1074 1064 spin_lock_irqsave(&ha->hardware_lock, flags); 1065 + #endif 1075 1066 1076 1067 if (!ha->fw_dump) { 1077 1068 ql_log(ql_log_warn, vha, 0xd006, ··· 1285 1274 nxt_chain = (void *)ha->fw_dump + ha->chain_offset; 1286 1275 nxt_chain = qla2xxx_copy_atioqueues(ha, nxt_chain, &last_chain); 1287 1276 if (last_chain) { 1288 - ha->fw_dump->version |= __constant_htonl(DUMP_CHAIN_VARIANT); 1289 - *last_chain |= __constant_htonl(DUMP_CHAIN_LAST); 1277 + ha->fw_dump->version |= htonl(DUMP_CHAIN_VARIANT); 1278 + *last_chain |= htonl(DUMP_CHAIN_LAST); 1290 1279 } 1291 1280 1292 1281 /* Adjust valid length. 
*/ ··· 1296 1285 qla2xxx_dump_post_process(base_vha, rval); 1297 1286 1298 1287 qla24xx_fw_dump_failed: 1288 + #ifndef __CHECKER__ 1299 1289 if (!hardware_locked) 1300 1290 spin_unlock_irqrestore(&ha->hardware_lock, flags); 1291 + #else 1292 + ; 1293 + #endif 1301 1294 } 1302 1295 1303 1296 void ··· 1309 1294 { 1310 1295 int rval; 1311 1296 uint32_t cnt; 1312 - uint32_t risc_address; 1313 1297 struct qla_hw_data *ha = vha->hw; 1314 1298 struct device_reg_24xx __iomem *reg = &ha->iobase->isp24; 1315 1299 uint32_t __iomem *dmp_reg; ··· 1316 1302 uint16_t __iomem *mbx_reg; 1317 1303 unsigned long flags; 1318 1304 struct qla25xx_fw_dump *fw; 1319 - uint32_t ext_mem_cnt; 1320 1305 void *nxt, *nxt_chain; 1321 1306 uint32_t *last_chain = NULL; 1322 1307 struct scsi_qla_host *base_vha = pci_get_drvdata(ha->pdev); 1323 1308 1324 - risc_address = ext_mem_cnt = 0; 1325 1309 flags = 0; 1326 1310 ha->fw_dump_cap_flags = 0; 1327 1311 1312 + #ifndef __CHECKER__ 1328 1313 if (!hardware_locked) 1329 1314 spin_lock_irqsave(&ha->hardware_lock, flags); 1315 + #endif 1330 1316 1331 1317 if (!ha->fw_dump) { 1332 1318 ql_log(ql_log_warn, vha, 0xd008, ··· 1343 1329 } 1344 1330 fw = &ha->fw_dump->isp.isp25; 1345 1331 qla2xxx_prep_dump(ha, ha->fw_dump); 1346 - ha->fw_dump->version = __constant_htonl(2); 1332 + ha->fw_dump->version = htonl(2); 1347 1333 1348 1334 fw->host_status = htonl(RD_REG_DWORD(&reg->host_status)); 1349 1335 ··· 1607 1593 nxt_chain = qla25xx_copy_mqueues(ha, nxt_chain, &last_chain); 1608 1594 nxt_chain = qla2xxx_copy_atioqueues(ha, nxt_chain, &last_chain); 1609 1595 if (last_chain) { 1610 - ha->fw_dump->version |= __constant_htonl(DUMP_CHAIN_VARIANT); 1611 - *last_chain |= __constant_htonl(DUMP_CHAIN_LAST); 1596 + ha->fw_dump->version |= htonl(DUMP_CHAIN_VARIANT); 1597 + *last_chain |= htonl(DUMP_CHAIN_LAST); 1612 1598 } 1613 1599 1614 1600 /* Adjust valid length. 
*/ ··· 1618 1604 qla2xxx_dump_post_process(base_vha, rval); 1619 1605 1620 1606 qla25xx_fw_dump_failed: 1607 + #ifndef __CHECKER__ 1621 1608 if (!hardware_locked) 1622 1609 spin_unlock_irqrestore(&ha->hardware_lock, flags); 1610 + #else 1611 + ; 1612 + #endif 1623 1613 } 1624 1614 1625 1615 void ··· 1631 1613 { 1632 1614 int rval; 1633 1615 uint32_t cnt; 1634 - uint32_t risc_address; 1635 1616 struct qla_hw_data *ha = vha->hw; 1636 1617 struct device_reg_24xx __iomem *reg = &ha->iobase->isp24; 1637 1618 uint32_t __iomem *dmp_reg; ··· 1638 1621 uint16_t __iomem *mbx_reg; 1639 1622 unsigned long flags; 1640 1623 struct qla81xx_fw_dump *fw; 1641 - uint32_t ext_mem_cnt; 1642 1624 void *nxt, *nxt_chain; 1643 1625 uint32_t *last_chain = NULL; 1644 1626 struct scsi_qla_host *base_vha = pci_get_drvdata(ha->pdev); 1645 1627 1646 - risc_address = ext_mem_cnt = 0; 1647 1628 flags = 0; 1648 1629 ha->fw_dump_cap_flags = 0; 1649 1630 1631 + #ifndef __CHECKER__ 1650 1632 if (!hardware_locked) 1651 1633 spin_lock_irqsave(&ha->hardware_lock, flags); 1634 + #endif 1652 1635 1653 1636 if (!ha->fw_dump) { 1654 1637 ql_log(ql_log_warn, vha, 0xd00a, ··· 1931 1914 nxt_chain = qla25xx_copy_mqueues(ha, nxt_chain, &last_chain); 1932 1915 nxt_chain = qla2xxx_copy_atioqueues(ha, nxt_chain, &last_chain); 1933 1916 if (last_chain) { 1934 - ha->fw_dump->version |= __constant_htonl(DUMP_CHAIN_VARIANT); 1935 - *last_chain |= __constant_htonl(DUMP_CHAIN_LAST); 1917 + ha->fw_dump->version |= htonl(DUMP_CHAIN_VARIANT); 1918 + *last_chain |= htonl(DUMP_CHAIN_LAST); 1936 1919 } 1937 1920 1938 1921 /* Adjust valid length. 
*/ ··· 1942 1925 qla2xxx_dump_post_process(base_vha, rval); 1943 1926 1944 1927 qla81xx_fw_dump_failed: 1928 + #ifndef __CHECKER__ 1945 1929 if (!hardware_locked) 1946 1930 spin_unlock_irqrestore(&ha->hardware_lock, flags); 1931 + #else 1932 + ; 1933 + #endif 1947 1934 } 1948 1935 1949 1936 void 1950 1937 qla83xx_fw_dump(scsi_qla_host_t *vha, int hardware_locked) 1951 1938 { 1952 1939 int rval; 1953 - uint32_t cnt, reg_data; 1954 - uint32_t risc_address; 1940 + uint32_t cnt; 1955 1941 struct qla_hw_data *ha = vha->hw; 1956 1942 struct device_reg_24xx __iomem *reg = &ha->iobase->isp24; 1957 1943 uint32_t __iomem *dmp_reg; ··· 1962 1942 uint16_t __iomem *mbx_reg; 1963 1943 unsigned long flags; 1964 1944 struct qla83xx_fw_dump *fw; 1965 - uint32_t ext_mem_cnt; 1966 1945 void *nxt, *nxt_chain; 1967 1946 uint32_t *last_chain = NULL; 1968 1947 struct scsi_qla_host *base_vha = pci_get_drvdata(ha->pdev); 1969 1948 1970 - risc_address = ext_mem_cnt = 0; 1971 1949 flags = 0; 1972 1950 ha->fw_dump_cap_flags = 0; 1973 1951 1952 + #ifndef __CHECKER__ 1974 1953 if (!hardware_locked) 1975 1954 spin_lock_irqsave(&ha->hardware_lock, flags); 1955 + #endif 1976 1956 1977 1957 if (!ha->fw_dump) { 1978 1958 ql_log(ql_log_warn, vha, 0xd00c, ··· 1999 1979 2000 1980 WRT_REG_DWORD(&reg->iobase_addr, 0x6000); 2001 1981 dmp_reg = &reg->iobase_window; 2002 - reg_data = RD_REG_DWORD(dmp_reg); 1982 + RD_REG_DWORD(dmp_reg); 2003 1983 WRT_REG_DWORD(dmp_reg, 0); 2004 1984 2005 1985 dmp_reg = &reg->unused_4_1[0]; 2006 - reg_data = RD_REG_DWORD(dmp_reg); 1986 + RD_REG_DWORD(dmp_reg); 2007 1987 WRT_REG_DWORD(dmp_reg, 0); 2008 1988 2009 1989 WRT_REG_DWORD(&reg->iobase_addr, 0x6010); 2010 1990 dmp_reg = &reg->unused_4_1[2]; 2011 - reg_data = RD_REG_DWORD(dmp_reg); 1991 + RD_REG_DWORD(dmp_reg); 2012 1992 WRT_REG_DWORD(dmp_reg, 0); 2013 1993 2014 1994 /* select PCR and disable ecc checking and correction */ ··· 2440 2420 nxt_chain = qla25xx_copy_mqueues(ha, nxt_chain, &last_chain); 2441 2421 nxt_chain = 
qla2xxx_copy_atioqueues(ha, nxt_chain, &last_chain); 2442 2422 if (last_chain) { 2443 - ha->fw_dump->version |= __constant_htonl(DUMP_CHAIN_VARIANT); 2444 - *last_chain |= __constant_htonl(DUMP_CHAIN_LAST); 2423 + ha->fw_dump->version |= htonl(DUMP_CHAIN_VARIANT); 2424 + *last_chain |= htonl(DUMP_CHAIN_LAST); 2445 2425 } 2446 2426 2447 2427 /* Adjust valid length. */ ··· 2451 2431 qla2xxx_dump_post_process(base_vha, rval); 2452 2432 2453 2433 qla83xx_fw_dump_failed: 2434 + #ifndef __CHECKER__ 2454 2435 if (!hardware_locked) 2455 2436 spin_unlock_irqrestore(&ha->hardware_lock, flags); 2437 + #else 2438 + ; 2439 + #endif 2456 2440 } 2457 2441 2458 2442 /****************************************************************************/
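The fw-dump routines above wrap their conditional `spin_lock_irqsave()`/`spin_unlock_irqrestore()` pair in `#ifndef __CHECKER__`: sparse cannot model a lock that is taken only when `!hardware_locked`, so the pair is hidden from the checker to silence context-imbalance warnings. A user-space toy with the same shape (the toy lock counter is purely illustrative):

```c
/* Toy lock: a depth counter so acquire/release balance is checkable. */
static int lock_depth;

static void toy_lock(void)   { lock_depth++; }
static void toy_unlock(void) { lock_depth--; }

/* If the caller already holds the lock (hardware_locked != 0),
 * skip taking it; otherwise bracket the critical section with
 * the pair. The kernel change hides exactly this conditional
 * pair from sparse via #ifndef __CHECKER__. */
static int do_dump(int hardware_locked)
{
    int depth_inside;

#ifndef __CHECKER__
    if (!hardware_locked)
        toy_lock();
#endif
    depth_inside = lock_depth;   /* stand-in for the register dump */
#ifndef __CHECKER__
    if (!hardware_locked)
        toy_unlock();
#endif
    return depth_inside;
}
```

When sparse runs (it defines `__CHECKER__`), it sees a function with no locking at all and stays quiet; normal builds keep the conditional locking unchanged.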
+10 -5
drivers/scsi/qla2xxx/qla_def.h
··· 3061 3061 #define PCI_DEVICE_ID_QLOGIC_ISP2031 0x2031 3062 3062 #define PCI_DEVICE_ID_QLOGIC_ISP2071 0x2071 3063 3063 #define PCI_DEVICE_ID_QLOGIC_ISP2271 0x2271 3064 + #define PCI_DEVICE_ID_QLOGIC_ISP2261 0x2261 3064 3065 3065 3066 uint32_t device_type; 3066 3067 #define DT_ISP2100 BIT_0 ··· 3085 3084 #define DT_ISP8044 BIT_18 3086 3085 #define DT_ISP2071 BIT_19 3087 3086 #define DT_ISP2271 BIT_20 3088 - #define DT_ISP_LAST (DT_ISP2271 << 1) 3087 + #define DT_ISP2261 BIT_21 3088 + #define DT_ISP_LAST (DT_ISP2261 << 1) 3089 3089 3090 3090 #define DT_T10_PI BIT_25 3091 3091 #define DT_IIDMA BIT_26 ··· 3118 3116 #define IS_QLAFX00(ha) (DT_MASK(ha) & DT_ISPFX00) 3119 3117 #define IS_QLA2071(ha) (DT_MASK(ha) & DT_ISP2071) 3120 3118 #define IS_QLA2271(ha) (DT_MASK(ha) & DT_ISP2271) 3119 + #define IS_QLA2261(ha) (DT_MASK(ha) & DT_ISP2261) 3121 3120 3122 3121 #define IS_QLA23XX(ha) (IS_QLA2300(ha) || IS_QLA2312(ha) || IS_QLA2322(ha) || \ 3123 3122 IS_QLA6312(ha) || IS_QLA6322(ha)) ··· 3127 3124 #define IS_QLA25XX(ha) (IS_QLA2532(ha)) 3128 3125 #define IS_QLA83XX(ha) (IS_QLA2031(ha) || IS_QLA8031(ha)) 3129 3126 #define IS_QLA84XX(ha) (IS_QLA8432(ha)) 3130 - #define IS_QLA27XX(ha) (IS_QLA2071(ha) || IS_QLA2271(ha)) 3127 + #define IS_QLA27XX(ha) (IS_QLA2071(ha) || IS_QLA2271(ha) || IS_QLA2261(ha)) 3131 3128 #define IS_QLA24XX_TYPE(ha) (IS_QLA24XX(ha) || IS_QLA54XX(ha) || \ 3132 3129 IS_QLA84XX(ha)) 3133 3130 #define IS_CNA_CAPABLE(ha) (IS_QLA81XX(ha) || IS_QLA82XX(ha) || \ ··· 3169 3166 #define IS_TGT_MODE_CAPABLE(ha) (ha->tgt.atio_q_length) 3170 3167 #define IS_SHADOW_REG_CAPABLE(ha) (IS_QLA27XX(ha)) 3171 3168 #define IS_DPORT_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha)) 3169 + #define IS_FAWWN_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha)) 3172 3170 3173 3171 /* HBA serial number */ 3174 3172 uint8_t serial0; ··· 3292 3288 uint8_t mpi_version[3]; 3293 3289 uint32_t mpi_capabilities; 3294 3290 uint8_t phy_version[3]; 3291 + uint8_t pep_version[3]; 3295 3292 3296 3293 
/* Firmware dump template */ 3297 3294 void *fw_dump_template; ··· 3425 3420 mempool_t *ctx_mempool; 3426 3421 #define FCP_CMND_DMA_POOL_SIZE 512 3427 3422 3428 - unsigned long nx_pcibase; /* Base I/O address */ 3429 - uint8_t *nxdb_rd_ptr; /* Doorbell read pointer */ 3430 - unsigned long nxdb_wr_ptr; /* Door bell write pointer */ 3423 + void __iomem *nx_pcibase; /* Base I/O address */ 3424 + void __iomem *nxdb_rd_ptr; /* Doorbell read pointer */ 3425 + void __iomem *nxdb_wr_ptr; /* Door bell write pointer */ 3431 3426 3432 3427 uint32_t crb_win; 3433 3428 uint32_t curr_window;
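Adding PCI id 0x2261 follows the qla_def.h pattern above: define the next `DT_ISP*` bit, bump `DT_ISP_LAST`, and fold the new bit into the family macro (here `IS_QLA27XX`). A condensed sketch of that bitmask scheme, with the macros simplified to take the device-type word directly rather than an `ha` pointer:

```c
#include <stdint.h>

/* Device-type bits mirroring the diff (BIT_19..BIT_21). */
#define DT_ISP2071  (1u << 19)
#define DT_ISP2271  (1u << 20)
#define DT_ISP2261  (1u << 21)   /* new in this series */
#define DT_ISP_LAST (DT_ISP2261 << 1)

#define IS_QLA2071(dt) (((dt) & DT_ISP2071) != 0)
#define IS_QLA2271(dt) (((dt) & DT_ISP2271) != 0)
#define IS_QLA2261(dt) (((dt) & DT_ISP2261) != 0)

/* ISP2261 joins the 27xx family, as the IS_QLA27XX change does. */
#define IS_QLA27XX(dt) (IS_QLA2071(dt) || IS_QLA2271(dt) || IS_QLA2261(dt))
```

Because every feature test in the driver goes through family macros like `IS_QLA27XX`, the new part automatically inherits 27xx-only behavior (shadow registers, D-Port, the new FA-WWN capability) without touching each call site.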
+26 -26
drivers/scsi/qla2xxx/qla_gs.c
··· 35 35 ms_pkt->entry_type = MS_IOCB_TYPE; 36 36 ms_pkt->entry_count = 1; 37 37 SET_TARGET_ID(ha, ms_pkt->loop_id, SIMPLE_NAME_SERVER); 38 - ms_pkt->control_flags = __constant_cpu_to_le16(CF_READ | CF_HEAD_TAG); 38 + ms_pkt->control_flags = cpu_to_le16(CF_READ | CF_HEAD_TAG); 39 39 ms_pkt->timeout = cpu_to_le16(ha->r_a_tov / 10 * 2); 40 - ms_pkt->cmd_dsd_count = __constant_cpu_to_le16(1); 41 - ms_pkt->total_dsd_count = __constant_cpu_to_le16(2); 40 + ms_pkt->cmd_dsd_count = cpu_to_le16(1); 41 + ms_pkt->total_dsd_count = cpu_to_le16(2); 42 42 ms_pkt->rsp_bytecount = cpu_to_le32(rsp_size); 43 43 ms_pkt->req_bytecount = cpu_to_le32(req_size); 44 44 ··· 74 74 75 75 ct_pkt->entry_type = CT_IOCB_TYPE; 76 76 ct_pkt->entry_count = 1; 77 - ct_pkt->nport_handle = __constant_cpu_to_le16(NPH_SNS); 77 + ct_pkt->nport_handle = cpu_to_le16(NPH_SNS); 78 78 ct_pkt->timeout = cpu_to_le16(ha->r_a_tov / 10 * 2); 79 - ct_pkt->cmd_dsd_count = __constant_cpu_to_le16(1); 80 - ct_pkt->rsp_dsd_count = __constant_cpu_to_le16(1); 79 + ct_pkt->cmd_dsd_count = cpu_to_le16(1); 80 + ct_pkt->rsp_dsd_count = cpu_to_le16(1); 81 81 ct_pkt->rsp_byte_count = cpu_to_le32(rsp_size); 82 82 ct_pkt->cmd_byte_count = cpu_to_le32(req_size); 83 83 ··· 142 142 case CS_DATA_UNDERRUN: 143 143 case CS_DATA_OVERRUN: /* Overrun? 
*/ 144 144 if (ct_rsp->header.response != 145 - __constant_cpu_to_be16(CT_ACCEPT_RESPONSE)) { 145 + cpu_to_be16(CT_ACCEPT_RESPONSE)) { 146 146 ql_dbg(ql_dbg_disc + ql_dbg_buffer, vha, 0x2077, 147 147 "%s failed rejected request on port_id: %02x%02x%02x Compeltion status 0x%x, response 0x%x\n", 148 148 routine, vha->d_id.b.domain, ··· 1153 1153 ms_pkt->entry_type = MS_IOCB_TYPE; 1154 1154 ms_pkt->entry_count = 1; 1155 1155 SET_TARGET_ID(ha, ms_pkt->loop_id, vha->mgmt_svr_loop_id); 1156 - ms_pkt->control_flags = __constant_cpu_to_le16(CF_READ | CF_HEAD_TAG); 1156 + ms_pkt->control_flags = cpu_to_le16(CF_READ | CF_HEAD_TAG); 1157 1157 ms_pkt->timeout = cpu_to_le16(ha->r_a_tov / 10 * 2); 1158 - ms_pkt->cmd_dsd_count = __constant_cpu_to_le16(1); 1159 - ms_pkt->total_dsd_count = __constant_cpu_to_le16(2); 1158 + ms_pkt->cmd_dsd_count = cpu_to_le16(1); 1159 + ms_pkt->total_dsd_count = cpu_to_le16(2); 1160 1160 ms_pkt->rsp_bytecount = cpu_to_le32(rsp_size); 1161 1161 ms_pkt->req_bytecount = cpu_to_le32(req_size); 1162 1162 ··· 1193 1193 ct_pkt->entry_count = 1; 1194 1194 ct_pkt->nport_handle = cpu_to_le16(vha->mgmt_svr_loop_id); 1195 1195 ct_pkt->timeout = cpu_to_le16(ha->r_a_tov / 10 * 2); 1196 - ct_pkt->cmd_dsd_count = __constant_cpu_to_le16(1); 1197 - ct_pkt->rsp_dsd_count = __constant_cpu_to_le16(1); 1196 + ct_pkt->cmd_dsd_count = cpu_to_le16(1); 1197 + ct_pkt->rsp_dsd_count = cpu_to_le16(1); 1198 1198 ct_pkt->rsp_byte_count = cpu_to_le32(rsp_size); 1199 1199 ct_pkt->cmd_byte_count = cpu_to_le32(req_size); 1200 1200 ··· 1281 1281 1282 1282 /* Prepare FDMI command arguments -- attribute block, attributes. 
*/ 1283 1283 memcpy(ct_req->req.rhba.hba_identifier, vha->port_name, WWN_SIZE); 1284 - ct_req->req.rhba.entry_count = __constant_cpu_to_be32(1); 1284 + ct_req->req.rhba.entry_count = cpu_to_be32(1); 1285 1285 memcpy(ct_req->req.rhba.port_name, vha->port_name, WWN_SIZE); 1286 1286 size = 2 * WWN_SIZE + 4 + 4; 1287 1287 1288 1288 /* Attributes */ 1289 1289 ct_req->req.rhba.attrs.count = 1290 - __constant_cpu_to_be32(FDMI_HBA_ATTR_COUNT); 1290 + cpu_to_be32(FDMI_HBA_ATTR_COUNT); 1291 1291 entries = ct_req->req.rhba.hba_identifier; 1292 1292 1293 1293 /* Nodename. */ 1294 1294 eiter = entries + size; 1295 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_NODE_NAME); 1296 - eiter->len = __constant_cpu_to_be16(4 + WWN_SIZE); 1295 + eiter->type = cpu_to_be16(FDMI_HBA_NODE_NAME); 1296 + eiter->len = cpu_to_be16(4 + WWN_SIZE); 1297 1297 memcpy(eiter->a.node_name, vha->node_name, WWN_SIZE); 1298 1298 size += 4 + WWN_SIZE; 1299 1299 ··· 1302 1302 1303 1303 /* Manufacturer. */ 1304 1304 eiter = entries + size; 1305 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_MANUFACTURER); 1305 + eiter->type = cpu_to_be16(FDMI_HBA_MANUFACTURER); 1306 1306 alen = strlen(QLA2XXX_MANUFACTURER); 1307 1307 snprintf(eiter->a.manufacturer, sizeof(eiter->a.manufacturer), 1308 1308 "%s", "QLogic Corporation"); ··· 1315 1315 1316 1316 /* Serial number. */ 1317 1317 eiter = entries + size; 1318 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_SERIAL_NUMBER); 1318 + eiter->type = cpu_to_be16(FDMI_HBA_SERIAL_NUMBER); 1319 1319 if (IS_FWI2_CAPABLE(ha)) 1320 1320 qla2xxx_get_vpd_field(vha, "SN", eiter->a.serial_num, 1321 1321 sizeof(eiter->a.serial_num)); ··· 1335 1335 1336 1336 /* Model name. 
*/ 1337 1337 eiter = entries + size; 1338 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_MODEL); 1338 + eiter->type = cpu_to_be16(FDMI_HBA_MODEL); 1339 1339 snprintf(eiter->a.model, sizeof(eiter->a.model), 1340 1340 "%s", ha->model_number); 1341 1341 alen = strlen(eiter->a.model); ··· 1348 1348 1349 1349 /* Model description. */ 1350 1350 eiter = entries + size; 1351 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_MODEL_DESCRIPTION); 1351 + eiter->type = cpu_to_be16(FDMI_HBA_MODEL_DESCRIPTION); 1352 1352 snprintf(eiter->a.model_desc, sizeof(eiter->a.model_desc), 1353 1353 "%s", ha->model_desc); 1354 1354 alen = strlen(eiter->a.model_desc); ··· 1361 1361 1362 1362 /* Hardware version. */ 1363 1363 eiter = entries + size; 1364 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_HARDWARE_VERSION); 1364 + eiter->type = cpu_to_be16(FDMI_HBA_HARDWARE_VERSION); 1365 1365 if (!IS_FWI2_CAPABLE(ha)) { 1366 1366 snprintf(eiter->a.hw_version, sizeof(eiter->a.hw_version), 1367 1367 "HW:%s", ha->adapter_id); ··· 1385 1385 1386 1386 /* Driver version. */ 1387 1387 eiter = entries + size; 1388 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_DRIVER_VERSION); 1388 + eiter->type = cpu_to_be16(FDMI_HBA_DRIVER_VERSION); 1389 1389 snprintf(eiter->a.driver_version, sizeof(eiter->a.driver_version), 1390 1390 "%s", qla2x00_version_str); 1391 1391 alen = strlen(eiter->a.driver_version); ··· 1398 1398 1399 1399 /* Option ROM version. 
*/ 1400 1400 eiter = entries + size; 1401 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_OPTION_ROM_VERSION); 1401 + eiter->type = cpu_to_be16(FDMI_HBA_OPTION_ROM_VERSION); 1402 1402 snprintf(eiter->a.orom_version, sizeof(eiter->a.orom_version), 1403 1403 "%d.%02d", ha->bios_revision[1], ha->bios_revision[0]); 1404 1404 alen = strlen(eiter->a.orom_version); ··· 1411 1411 1412 1412 /* Firmware version */ 1413 1413 eiter = entries + size; 1414 - eiter->type = __constant_cpu_to_be16(FDMI_HBA_FIRMWARE_VERSION); 1414 + eiter->type = cpu_to_be16(FDMI_HBA_FIRMWARE_VERSION); 1415 1415 ha->isp_ops->fw_version_str(vha, eiter->a.fw_version, 1416 1416 sizeof(eiter->a.fw_version)); 1417 1417 alen = strlen(eiter->a.fw_version); ··· 2484 2484 ct_pkt->entry_count = 1; 2485 2485 ct_pkt->nport_handle = cpu_to_le16(vha->mgmt_svr_loop_id); 2486 2486 ct_pkt->timeout = cpu_to_le16(ha->r_a_tov / 10 * 2); 2487 - ct_pkt->cmd_dsd_count = __constant_cpu_to_le16(1); 2488 - ct_pkt->rsp_dsd_count = __constant_cpu_to_le16(1); 2487 + ct_pkt->cmd_dsd_count = cpu_to_le16(1); 2488 + ct_pkt->rsp_dsd_count = cpu_to_le16(1); 2489 2489 ct_pkt->rsp_byte_count = cpu_to_le32(rsp_size); 2490 2490 ct_pkt->cmd_byte_count = cpu_to_le32(req_size); 2491 2491
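The sweep above replaces `__constant_cpu_to_le16()`/`__constant_htonl()` with plain `cpu_to_le16()`/`htonl()`: the modern kernel helpers fold constant arguments at compile time, so the `__constant_` variants are redundant. A user-space check that a big-endian conversion really yields network byte order regardless of the host CPU (using POSIX `htonl()` as the stand-in):

```c
#include <arpa/inet.h>
#include <string.h>
#include <stdint.h>

/* Convert to big-endian and expose the byte layout:
 * out[0] ends up as the most significant byte on any host. */
static void be32_bytes(uint32_t host_value, unsigned char out[4])
{
    uint32_t be = htonl(host_value);
    memcpy(out, &be, 4);
}
```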
+79 -83
drivers/scsi/qla2xxx/qla_init.c
··· 1132 1132 unsigned long flags = 0; 1133 1133 struct qla_hw_data *ha = vha->hw; 1134 1134 struct device_reg_24xx __iomem *reg = &ha->iobase->isp24; 1135 - uint32_t cnt, d2; 1135 + uint32_t cnt; 1136 1136 uint16_t wd; 1137 1137 static int abts_cnt; /* ISP abort retry counts */ 1138 1138 int rval = QLA_SUCCESS; ··· 1164 1164 udelay(100); 1165 1165 1166 1166 /* Wait for firmware to complete NVRAM accesses. */ 1167 - d2 = (uint32_t) RD_REG_WORD(&reg->mailbox0); 1167 + RD_REG_WORD(&reg->mailbox0); 1168 1168 for (cnt = 10000; RD_REG_WORD(&reg->mailbox0) != 0 && 1169 1169 rval == QLA_SUCCESS; cnt--) { 1170 1170 barrier(); ··· 1183 1183 RD_REG_DWORD(&reg->mailbox0)); 1184 1184 1185 1185 /* Wait for soft-reset to complete. */ 1186 - d2 = RD_REG_DWORD(&reg->ctrl_status); 1186 + RD_REG_DWORD(&reg->ctrl_status); 1187 1187 for (cnt = 0; cnt < 6000000; cnt++) { 1188 1188 barrier(); 1189 1189 if ((RD_REG_DWORD(&reg->ctrl_status) & ··· 1226 1226 WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_RESET); 1227 1227 RD_REG_DWORD(&reg->hccr); 1228 1228 1229 - d2 = (uint32_t) RD_REG_WORD(&reg->mailbox0); 1229 + RD_REG_WORD(&reg->mailbox0); 1230 1230 for (cnt = 6000000; RD_REG_WORD(&reg->mailbox0) != 0 && 1231 1231 rval == QLA_SUCCESS; cnt--) { 1232 1232 barrier(); ··· 1277 1277 static void 1278 1278 qla25xx_manipulate_risc_semaphore(scsi_qla_host_t *vha) 1279 1279 { 1280 - struct qla_hw_data *ha = vha->hw; 1281 1280 uint32_t wd32 = 0; 1282 1281 uint delta_msec = 100; 1283 1282 uint elapsed_msec = 0; 1284 1283 uint timeout_msec; 1285 1284 ulong n; 1286 1285 1287 - if (!IS_QLA25XX(ha) && !IS_QLA2031(ha)) 1286 + if (vha->hw->pdev->subsystem_device != 0x0175 && 1287 + vha->hw->pdev->subsystem_device != 0x0240) 1288 1288 return; 1289 + 1290 + WRT_REG_DWORD(&vha->hw->iobase->isp24.hccr, HCCRX_SET_RISC_PAUSE); 1291 + udelay(100); 1289 1292 1290 1293 attempt: 1291 1294 timeout_msec = TIMEOUT_SEMAPHORE; ··· 1693 1690 ha->fw_dump->signature[1] = 'L'; 1694 1691 ha->fw_dump->signature[2] = 'G'; 1695 1692 
ha->fw_dump->signature[3] = 'C'; 1696 - ha->fw_dump->version = __constant_htonl(1); 1693 + ha->fw_dump->version = htonl(1); 1697 1694 1698 1695 ha->fw_dump->fixed_size = htonl(fixed_size); 1699 1696 ha->fw_dump->mem_size = htonl(mem_size); ··· 2073 2070 struct rsp_que *rsp = ha->rsp_q_map[0]; 2074 2071 2075 2072 /* Setup ring parameters in initialization control block. */ 2076 - ha->init_cb->request_q_outpointer = __constant_cpu_to_le16(0); 2077 - ha->init_cb->response_q_inpointer = __constant_cpu_to_le16(0); 2073 + ha->init_cb->request_q_outpointer = cpu_to_le16(0); 2074 + ha->init_cb->response_q_inpointer = cpu_to_le16(0); 2078 2075 ha->init_cb->request_q_length = cpu_to_le16(req->length); 2079 2076 ha->init_cb->response_q_length = cpu_to_le16(rsp->length); 2080 2077 ha->init_cb->request_q_address[0] = cpu_to_le32(LSD(req->dma)); ··· 2093 2090 qla24xx_config_rings(struct scsi_qla_host *vha) 2094 2091 { 2095 2092 struct qla_hw_data *ha = vha->hw; 2096 - device_reg_t __iomem *reg = ISP_QUE_REG(ha, 0); 2093 + device_reg_t *reg = ISP_QUE_REG(ha, 0); 2097 2094 struct device_reg_2xxx __iomem *ioreg = &ha->iobase->isp; 2098 2095 struct qla_msix_entry *msix; 2099 2096 struct init_cb_24xx *icb; ··· 2103 2100 2104 2101 /* Setup ring parameters in initialization control block. 
*/ 2105 2102 icb = (struct init_cb_24xx *)ha->init_cb; 2106 - icb->request_q_outpointer = __constant_cpu_to_le16(0); 2107 - icb->response_q_inpointer = __constant_cpu_to_le16(0); 2103 + icb->request_q_outpointer = cpu_to_le16(0); 2104 + icb->response_q_inpointer = cpu_to_le16(0); 2108 2105 icb->request_q_length = cpu_to_le16(req->length); 2109 2106 icb->response_q_length = cpu_to_le16(rsp->length); 2110 2107 icb->request_q_address[0] = cpu_to_le32(LSD(req->dma)); ··· 2113 2110 icb->response_q_address[1] = cpu_to_le32(MSD(rsp->dma)); 2114 2111 2115 2112 /* Setup ATIO queue dma pointers for target mode */ 2116 - icb->atio_q_inpointer = __constant_cpu_to_le16(0); 2113 + icb->atio_q_inpointer = cpu_to_le16(0); 2117 2114 icb->atio_q_length = cpu_to_le16(ha->tgt.atio_q_length); 2118 2115 icb->atio_q_address[0] = cpu_to_le32(LSD(ha->tgt.atio_dma)); 2119 2116 icb->atio_q_address[1] = cpu_to_le32(MSD(ha->tgt.atio_dma)); 2120 2117 2121 2118 if (IS_SHADOW_REG_CAPABLE(ha)) 2122 - icb->firmware_options_2 |= 2123 - __constant_cpu_to_le32(BIT_30|BIT_29); 2119 + icb->firmware_options_2 |= cpu_to_le32(BIT_30|BIT_29); 2124 2120 2125 2121 if (ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha)) { 2126 - icb->qos = __constant_cpu_to_le16(QLA_DEFAULT_QUE_QOS); 2127 - icb->rid = __constant_cpu_to_le16(rid); 2122 + icb->qos = cpu_to_le16(QLA_DEFAULT_QUE_QOS); 2123 + icb->rid = cpu_to_le16(rid); 2128 2124 if (ha->flags.msix_enabled) { 2129 2125 msix = &ha->msix_entries[1]; 2130 2126 ql_dbg(ql_dbg_init, vha, 0x00fd, ··· 2133 2131 } 2134 2132 /* Use alternate PCI bus number */ 2135 2133 if (MSB(rid)) 2136 - icb->firmware_options_2 |= 2137 - __constant_cpu_to_le32(BIT_19); 2134 + icb->firmware_options_2 |= cpu_to_le32(BIT_19); 2138 2135 /* Use alternate PCI devfn */ 2139 2136 if (LSB(rid)) 2140 - icb->firmware_options_2 |= 2141 - __constant_cpu_to_le32(BIT_18); 2137 + icb->firmware_options_2 |= cpu_to_le32(BIT_18); 2142 2138 2143 2139 /* Use Disable MSIX Handshake mode for capable adapters */ 
2144 2140 if ((ha->fw_attributes & BIT_6) && (IS_MSIX_NACK_CAPABLE(ha)) && 2145 2141 (ha->flags.msix_enabled)) { 2146 - icb->firmware_options_2 &= 2147 - __constant_cpu_to_le32(~BIT_22); 2142 + icb->firmware_options_2 &= cpu_to_le32(~BIT_22); 2148 2143 ha->flags.disable_msix_handshake = 1; 2149 2144 ql_dbg(ql_dbg_init, vha, 0x00fe, 2150 2145 "MSIX Handshake Disable Mode turned on.\n"); 2151 2146 } else { 2152 - icb->firmware_options_2 |= 2153 - __constant_cpu_to_le32(BIT_22); 2147 + icb->firmware_options_2 |= cpu_to_le32(BIT_22); 2154 2148 } 2155 - icb->firmware_options_2 |= __constant_cpu_to_le32(BIT_23); 2149 + icb->firmware_options_2 |= cpu_to_le32(BIT_23); 2156 2150 2157 2151 WRT_REG_DWORD(&reg->isp25mq.req_q_in, 0); 2158 2152 WRT_REG_DWORD(&reg->isp25mq.req_q_out, 0); ··· 2246 2248 } 2247 2249 2248 2250 if (IS_FWI2_CAPABLE(ha)) { 2249 - mid_init_cb->options = __constant_cpu_to_le16(BIT_1); 2251 + mid_init_cb->options = cpu_to_le16(BIT_1); 2250 2252 mid_init_cb->init_cb.execution_throttle = 2251 2253 cpu_to_le16(ha->fw_xcb_count); 2252 2254 /* D-Port Status */ ··· 2675 2677 nv->frame_payload_size = 1024; 2676 2678 } 2677 2679 2678 - nv->max_iocb_allocation = __constant_cpu_to_le16(256); 2679 - nv->execution_throttle = __constant_cpu_to_le16(16); 2680 + nv->max_iocb_allocation = cpu_to_le16(256); 2681 + nv->execution_throttle = cpu_to_le16(16); 2680 2682 nv->retry_count = 8; 2681 2683 nv->retry_delay = 1; 2682 2684 ··· 2694 2696 nv->host_p[1] = BIT_2; 2695 2697 nv->reset_delay = 5; 2696 2698 nv->port_down_retry_count = 8; 2697 - nv->max_luns_per_target = __constant_cpu_to_le16(8); 2699 + nv->max_luns_per_target = cpu_to_le16(8); 2698 2700 nv->link_down_timeout = 60; 2699 2701 2700 2702 rval = 1; ··· 2822 2824 memcpy(vha->node_name, icb->node_name, WWN_SIZE); 2823 2825 memcpy(vha->port_name, icb->port_name, WWN_SIZE); 2824 2826 2825 - icb->execution_throttle = __constant_cpu_to_le16(0xFFFF); 2827 + icb->execution_throttle = cpu_to_le16(0xFFFF); 2826 2828 2827 
2829 ha->retry_count = nv->retry_count; 2828 2830 ··· 2874 2876 if (ql2xloginretrycount) 2875 2877 ha->login_retry_count = ql2xloginretrycount; 2876 2878 2877 - icb->lun_enables = __constant_cpu_to_le16(0); 2879 + icb->lun_enables = cpu_to_le16(0); 2878 2880 icb->command_resource_count = 0; 2879 2881 icb->immediate_notify_resource_count = 0; 2880 - icb->timeout = __constant_cpu_to_le16(0); 2882 + icb->timeout = cpu_to_le16(0); 2881 2883 2882 2884 if (IS_QLA2100(ha) || IS_QLA2200(ha)) { 2883 2885 /* Enable RIO */ ··· 3956 3958 uint16_t *next_loopid) 3957 3959 { 3958 3960 int rval; 3959 - int retry; 3960 3961 uint8_t opts; 3961 3962 struct qla_hw_data *ha = vha->hw; 3962 3963 3963 3964 rval = QLA_SUCCESS; 3964 - retry = 0; 3965 3965 3966 3966 if (IS_ALOGIO_CAPABLE(ha)) { 3967 3967 if (fcport->flags & FCF_ASYNC_SENT) ··· 5113 5117 /* Bad NVRAM data, set defaults parameters. */ 5114 5118 if (chksum || nv->id[0] != 'I' || nv->id[1] != 'S' || nv->id[2] != 'P' 5115 5119 || nv->id[3] != ' ' || 5116 - nv->nvram_version < __constant_cpu_to_le16(ICB_VERSION)) { 5120 + nv->nvram_version < cpu_to_le16(ICB_VERSION)) { 5117 5121 /* Reset NVRAM data. */ 5118 5122 ql_log(ql_log_warn, vha, 0x006b, 5119 5123 "Inconsistent NVRAM detected: checksum=0x%x id=%c " ··· 5126 5130 * Set default initialization control block. 
5127 5131 */ 5128 5132 memset(nv, 0, ha->nvram_size); 5129 - nv->nvram_version = __constant_cpu_to_le16(ICB_VERSION); 5130 - nv->version = __constant_cpu_to_le16(ICB_VERSION); 5133 + nv->nvram_version = cpu_to_le16(ICB_VERSION); 5134 + nv->version = cpu_to_le16(ICB_VERSION); 5131 5135 nv->frame_payload_size = 2048; 5132 - nv->execution_throttle = __constant_cpu_to_le16(0xFFFF); 5133 - nv->exchange_count = __constant_cpu_to_le16(0); 5134 - nv->hard_address = __constant_cpu_to_le16(124); 5136 + nv->execution_throttle = cpu_to_le16(0xFFFF); 5137 + nv->exchange_count = cpu_to_le16(0); 5138 + nv->hard_address = cpu_to_le16(124); 5135 5139 nv->port_name[0] = 0x21; 5136 5140 nv->port_name[1] = 0x00 + ha->port_no + 1; 5137 5141 nv->port_name[2] = 0x00; ··· 5149 5153 nv->node_name[6] = 0x55; 5150 5154 nv->node_name[7] = 0x86; 5151 5155 qla24xx_nvram_wwn_from_ofw(vha, nv); 5152 - nv->login_retry_count = __constant_cpu_to_le16(8); 5153 - nv->interrupt_delay_timer = __constant_cpu_to_le16(0); 5154 - nv->login_timeout = __constant_cpu_to_le16(0); 5156 + nv->login_retry_count = cpu_to_le16(8); 5157 + nv->interrupt_delay_timer = cpu_to_le16(0); 5158 + nv->login_timeout = cpu_to_le16(0); 5155 5159 nv->firmware_options_1 = 5156 - __constant_cpu_to_le32(BIT_14|BIT_13|BIT_2|BIT_1); 5157 - nv->firmware_options_2 = __constant_cpu_to_le32(2 << 4); 5158 - nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_12); 5159 - nv->firmware_options_3 = __constant_cpu_to_le32(2 << 13); 5160 - nv->host_p = __constant_cpu_to_le32(BIT_11|BIT_10); 5161 - nv->efi_parameters = __constant_cpu_to_le32(0); 5160 + cpu_to_le32(BIT_14|BIT_13|BIT_2|BIT_1); 5161 + nv->firmware_options_2 = cpu_to_le32(2 << 4); 5162 + nv->firmware_options_2 |= cpu_to_le32(BIT_12); 5163 + nv->firmware_options_3 = cpu_to_le32(2 << 13); 5164 + nv->host_p = cpu_to_le32(BIT_11|BIT_10); 5165 + nv->efi_parameters = cpu_to_le32(0); 5162 5166 nv->reset_delay = 5; 5163 - nv->max_luns_per_target = __constant_cpu_to_le16(128); 5164 - 
nv->port_down_retry_count = __constant_cpu_to_le16(30); 5165 - nv->link_down_timeout = __constant_cpu_to_le16(30); 5167 + nv->max_luns_per_target = cpu_to_le16(128); 5168 + nv->port_down_retry_count = cpu_to_le16(30); 5169 + nv->link_down_timeout = cpu_to_le16(30); 5166 5170 5167 5171 rval = 1; 5168 5172 } 5169 5173 5170 5174 if (!qla_ini_mode_enabled(vha)) { 5171 5175 /* Don't enable full login after initial LIP */ 5172 - nv->firmware_options_1 &= __constant_cpu_to_le32(~BIT_13); 5176 + nv->firmware_options_1 &= cpu_to_le32(~BIT_13); 5173 5177 /* Don't enable LIP full login for initiator */ 5174 - nv->host_p &= __constant_cpu_to_le32(~BIT_10); 5178 + nv->host_p &= cpu_to_le32(~BIT_10); 5175 5179 } 5176 5180 5177 5181 qlt_24xx_config_nvram_stage1(vha, nv); ··· 5205 5209 5206 5210 qlt_24xx_config_nvram_stage2(vha, icb); 5207 5211 5208 - if (nv->host_p & __constant_cpu_to_le32(BIT_15)) { 5212 + if (nv->host_p & cpu_to_le32(BIT_15)) { 5209 5213 /* Use alternate WWN? */ 5210 5214 memcpy(icb->node_name, nv->alternate_node_name, WWN_SIZE); 5211 5215 memcpy(icb->port_name, nv->alternate_port_name, WWN_SIZE); 5212 5216 } 5213 5217 5214 5218 /* Prepare nodename */ 5215 - if ((icb->firmware_options_1 & __constant_cpu_to_le32(BIT_14)) == 0) { 5219 + if ((icb->firmware_options_1 & cpu_to_le32(BIT_14)) == 0) { 5216 5220 /* 5217 5221 * Firmware will apply the following mask if the nodename was 5218 5222 * not provided. 
··· 5244 5248 memcpy(vha->node_name, icb->node_name, WWN_SIZE); 5245 5249 memcpy(vha->port_name, icb->port_name, WWN_SIZE); 5246 5250 5247 - icb->execution_throttle = __constant_cpu_to_le16(0xFFFF); 5251 + icb->execution_throttle = cpu_to_le16(0xFFFF); 5248 5252 5249 5253 ha->retry_count = le16_to_cpu(nv->login_retry_count); 5250 5254 ··· 5252 5256 if (le16_to_cpu(nv->login_timeout) < ql2xlogintimeout) 5253 5257 nv->login_timeout = cpu_to_le16(ql2xlogintimeout); 5254 5258 if (le16_to_cpu(nv->login_timeout) < 4) 5255 - nv->login_timeout = __constant_cpu_to_le16(4); 5259 + nv->login_timeout = cpu_to_le16(4); 5256 5260 ha->login_timeout = le16_to_cpu(nv->login_timeout); 5257 5261 icb->login_timeout = nv->login_timeout; 5258 5262 ··· 5303 5307 ha->zio_timer = le16_to_cpu(icb->interrupt_delay_timer) ? 5304 5308 le16_to_cpu(icb->interrupt_delay_timer): 2; 5305 5309 } 5306 - icb->firmware_options_2 &= __constant_cpu_to_le32( 5310 + icb->firmware_options_2 &= cpu_to_le32( 5307 5311 ~(BIT_3 | BIT_2 | BIT_1 | BIT_0)); 5308 5312 vha->flags.process_response_queue = 0; 5309 5313 if (ha->zio_mode != QLA_ZIO_DISABLED) { ··· 6059 6063 /* Bad NVRAM data, set defaults parameters. */ 6060 6064 if (chksum || nv->id[0] != 'I' || nv->id[1] != 'S' || nv->id[2] != 'P' 6061 6065 || nv->id[3] != ' ' || 6062 - nv->nvram_version < __constant_cpu_to_le16(ICB_VERSION)) { 6066 + nv->nvram_version < cpu_to_le16(ICB_VERSION)) { 6063 6067 /* Reset NVRAM data. */ 6064 6068 ql_log(ql_log_info, vha, 0x0073, 6065 6069 "Inconsistent NVRAM detected: checksum=0x%x id=%c " ··· 6073 6077 * Set default initialization control block. 
6074 6078 */ 6075 6079 memset(nv, 0, ha->nvram_size); 6076 - nv->nvram_version = __constant_cpu_to_le16(ICB_VERSION); 6077 - nv->version = __constant_cpu_to_le16(ICB_VERSION); 6080 + nv->nvram_version = cpu_to_le16(ICB_VERSION); 6081 + nv->version = cpu_to_le16(ICB_VERSION); 6078 6082 nv->frame_payload_size = 2048; 6079 - nv->execution_throttle = __constant_cpu_to_le16(0xFFFF); 6080 - nv->exchange_count = __constant_cpu_to_le16(0); 6083 + nv->execution_throttle = cpu_to_le16(0xFFFF); 6084 + nv->exchange_count = cpu_to_le16(0); 6081 6085 nv->port_name[0] = 0x21; 6082 6086 nv->port_name[1] = 0x00 + ha->port_no + 1; 6083 6087 nv->port_name[2] = 0x00; ··· 6094 6098 nv->node_name[5] = 0x1c; 6095 6099 nv->node_name[6] = 0x55; 6096 6100 nv->node_name[7] = 0x86; 6097 - nv->login_retry_count = __constant_cpu_to_le16(8); 6098 - nv->interrupt_delay_timer = __constant_cpu_to_le16(0); 6099 - nv->login_timeout = __constant_cpu_to_le16(0); 6101 + nv->login_retry_count = cpu_to_le16(8); 6102 + nv->interrupt_delay_timer = cpu_to_le16(0); 6103 + nv->login_timeout = cpu_to_le16(0); 6100 6104 nv->firmware_options_1 = 6101 - __constant_cpu_to_le32(BIT_14|BIT_13|BIT_2|BIT_1); 6102 - nv->firmware_options_2 = __constant_cpu_to_le32(2 << 4); 6103 - nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_12); 6104 - nv->firmware_options_3 = __constant_cpu_to_le32(2 << 13); 6105 - nv->host_p = __constant_cpu_to_le32(BIT_11|BIT_10); 6106 - nv->efi_parameters = __constant_cpu_to_le32(0); 6105 + cpu_to_le32(BIT_14|BIT_13|BIT_2|BIT_1); 6106 + nv->firmware_options_2 = cpu_to_le32(2 << 4); 6107 + nv->firmware_options_2 |= cpu_to_le32(BIT_12); 6108 + nv->firmware_options_3 = cpu_to_le32(2 << 13); 6109 + nv->host_p = cpu_to_le32(BIT_11|BIT_10); 6110 + nv->efi_parameters = cpu_to_le32(0); 6107 6111 nv->reset_delay = 5; 6108 - nv->max_luns_per_target = __constant_cpu_to_le16(128); 6109 - nv->port_down_retry_count = __constant_cpu_to_le16(30); 6110 - nv->link_down_timeout = __constant_cpu_to_le16(180); 
6112 + nv->max_luns_per_target = cpu_to_le16(128); 6113 + nv->port_down_retry_count = cpu_to_le16(30); 6114 + nv->link_down_timeout = cpu_to_le16(180); 6111 6115 nv->enode_mac[0] = 0x00; 6112 6116 nv->enode_mac[1] = 0xC0; 6113 6117 nv->enode_mac[2] = 0xDD; ··· 6166 6170 qlt_81xx_config_nvram_stage2(vha, icb); 6167 6171 6168 6172 /* Use alternate WWN? */ 6169 - if (nv->host_p & __constant_cpu_to_le32(BIT_15)) { 6173 + if (nv->host_p & cpu_to_le32(BIT_15)) { 6170 6174 memcpy(icb->node_name, nv->alternate_node_name, WWN_SIZE); 6171 6175 memcpy(icb->port_name, nv->alternate_port_name, WWN_SIZE); 6172 6176 } 6173 6177 6174 6178 /* Prepare nodename */ 6175 - if ((icb->firmware_options_1 & __constant_cpu_to_le32(BIT_14)) == 0) { 6179 + if ((icb->firmware_options_1 & cpu_to_le32(BIT_14)) == 0) { 6176 6180 /* 6177 6181 * Firmware will apply the following mask if the nodename was 6178 6182 * not provided. ··· 6201 6205 memcpy(vha->node_name, icb->node_name, WWN_SIZE); 6202 6206 memcpy(vha->port_name, icb->port_name, WWN_SIZE); 6203 6207 6204 - icb->execution_throttle = __constant_cpu_to_le16(0xFFFF); 6208 + icb->execution_throttle = cpu_to_le16(0xFFFF); 6205 6209 6206 6210 ha->retry_count = le16_to_cpu(nv->login_retry_count); 6207 6211 ··· 6209 6213 if (le16_to_cpu(nv->login_timeout) < ql2xlogintimeout) 6210 6214 nv->login_timeout = cpu_to_le16(ql2xlogintimeout); 6211 6215 if (le16_to_cpu(nv->login_timeout) < 4) 6212 - nv->login_timeout = __constant_cpu_to_le16(4); 6216 + nv->login_timeout = cpu_to_le16(4); 6213 6217 ha->login_timeout = le16_to_cpu(nv->login_timeout); 6214 6218 icb->login_timeout = nv->login_timeout; 6215 6219 ··· 6255 6259 6256 6260 /* if not running MSI-X we need handshaking on interrupts */ 6257 6261 if (!vha->hw->flags.msix_enabled && (IS_QLA83XX(ha) || IS_QLA27XX(ha))) 6258 - icb->firmware_options_2 |= __constant_cpu_to_le32(BIT_22); 6262 + icb->firmware_options_2 |= cpu_to_le32(BIT_22); 6259 6263 6260 6264 /* Enable ZIO. 
*/ 6261 6265 if (!vha->flags.init_done) { ··· 6264 6268 ha->zio_timer = le16_to_cpu(icb->interrupt_delay_timer) ? 6265 6269 le16_to_cpu(icb->interrupt_delay_timer): 2; 6266 6270 } 6267 - icb->firmware_options_2 &= __constant_cpu_to_le32( 6271 + icb->firmware_options_2 &= cpu_to_le32( 6268 6272 ~(BIT_3 | BIT_2 | BIT_1 | BIT_0)); 6269 6273 vha->flags.process_response_queue = 0; 6270 6274 if (ha->zio_mode != QLA_ZIO_DISABLED) {
+49 -83
drivers/scsi/qla2xxx/qla_iocb.c
··· 108 108 cont_pkt = (cont_entry_t *)req->ring_ptr; 109 109 110 110 /* Load packet defaults. */ 111 - *((uint32_t *)(&cont_pkt->entry_type)) = 112 - __constant_cpu_to_le32(CONTINUE_TYPE); 111 + *((uint32_t *)(&cont_pkt->entry_type)) = cpu_to_le32(CONTINUE_TYPE); 113 112 114 113 return (cont_pkt); 115 114 } ··· 137 138 138 139 /* Load packet defaults. */ 139 140 *((uint32_t *)(&cont_pkt->entry_type)) = IS_QLAFX00(vha->hw) ? 140 - __constant_cpu_to_le32(CONTINUE_A64_TYPE_FX00) : 141 - __constant_cpu_to_le32(CONTINUE_A64_TYPE); 141 + cpu_to_le32(CONTINUE_A64_TYPE_FX00) : 142 + cpu_to_le32(CONTINUE_A64_TYPE); 142 143 143 144 return (cont_pkt); 144 145 } ··· 203 204 204 205 /* Update entry type to indicate Command Type 2 IOCB */ 205 206 *((uint32_t *)(&cmd_pkt->entry_type)) = 206 - __constant_cpu_to_le32(COMMAND_TYPE); 207 + cpu_to_le32(COMMAND_TYPE); 207 208 208 209 /* No data transfer */ 209 210 if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) { 210 - cmd_pkt->byte_count = __constant_cpu_to_le32(0); 211 + cmd_pkt->byte_count = cpu_to_le32(0); 211 212 return; 212 213 } 213 214 ··· 260 261 cmd = GET_CMD_SP(sp); 261 262 262 263 /* Update entry type to indicate Command Type 3 IOCB */ 263 - *((uint32_t *)(&cmd_pkt->entry_type)) = 264 - __constant_cpu_to_le32(COMMAND_A64_TYPE); 264 + *((uint32_t *)(&cmd_pkt->entry_type)) = cpu_to_le32(COMMAND_A64_TYPE); 265 265 266 266 /* No data transfer */ 267 267 if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) { 268 - cmd_pkt->byte_count = __constant_cpu_to_le32(0); 268 + cmd_pkt->byte_count = cpu_to_le32(0); 269 269 return; 270 270 } 271 271 ··· 308 310 int 309 311 qla2x00_start_scsi(srb_t *sp) 310 312 { 311 - int ret, nseg; 313 + int nseg; 312 314 unsigned long flags; 313 315 scsi_qla_host_t *vha; 314 316 struct scsi_cmnd *cmd; ··· 325 327 struct rsp_que *rsp; 326 328 327 329 /* Setup device pointers. 
*/ 328 - ret = 0; 329 330 vha = sp->fcport->vha; 330 331 ha = vha->hw; 331 332 reg = &ha->iobase->isp; ··· 400 403 /* Set target ID and LUN number*/ 401 404 SET_TARGET_ID(ha, cmd_pkt->target, sp->fcport->loop_id); 402 405 cmd_pkt->lun = cpu_to_le16(cmd->device->lun); 403 - cmd_pkt->control_flags = __constant_cpu_to_le16(CF_SIMPLE_TAG); 406 + cmd_pkt->control_flags = cpu_to_le16(CF_SIMPLE_TAG); 404 407 405 408 /* Load SCSI command packet. */ 406 409 memcpy(cmd_pkt->scsi_cdb, cmd->cmnd, cmd->cmd_len); ··· 451 454 qla2x00_start_iocbs(struct scsi_qla_host *vha, struct req_que *req) 452 455 { 453 456 struct qla_hw_data *ha = vha->hw; 454 - device_reg_t __iomem *reg = ISP_QUE_REG(ha, req->id); 457 + device_reg_t *reg = ISP_QUE_REG(ha, req->id); 455 458 456 459 if (IS_P3P_TYPE(ha)) { 457 460 qla82xx_start_iocbs(vha); ··· 594 597 cmd = GET_CMD_SP(sp); 595 598 596 599 /* Update entry type to indicate Command Type 3 IOCB */ 597 - *((uint32_t *)(&cmd_pkt->entry_type)) = 598 - __constant_cpu_to_le32(COMMAND_TYPE_6); 600 + *((uint32_t *)(&cmd_pkt->entry_type)) = cpu_to_le32(COMMAND_TYPE_6); 599 601 600 602 /* No data transfer */ 601 603 if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) { 602 - cmd_pkt->byte_count = __constant_cpu_to_le32(0); 604 + cmd_pkt->byte_count = cpu_to_le32(0); 603 605 return 0; 604 606 } 605 607 ··· 607 611 608 612 /* Set transfer direction */ 609 613 if (cmd->sc_data_direction == DMA_TO_DEVICE) { 610 - cmd_pkt->control_flags = 611 - __constant_cpu_to_le16(CF_WRITE_DATA); 614 + cmd_pkt->control_flags = cpu_to_le16(CF_WRITE_DATA); 612 615 vha->qla_stats.output_bytes += scsi_bufflen(cmd); 613 616 vha->qla_stats.output_requests++; 614 617 } else if (cmd->sc_data_direction == DMA_FROM_DEVICE) { 615 - cmd_pkt->control_flags = 616 - __constant_cpu_to_le16(CF_READ_DATA); 618 + cmd_pkt->control_flags = cpu_to_le16(CF_READ_DATA); 617 619 vha->qla_stats.input_bytes += scsi_bufflen(cmd); 618 620 vha->qla_stats.input_requests++; 619 621 } ··· 674 680 * 
675 681 * Returns the number of dsd list needed to store @dsds. 676 682 */ 677 - inline uint16_t 683 + static inline uint16_t 678 684 qla24xx_calc_dsd_lists(uint16_t dsds) 679 685 { 680 686 uint16_t dsd_lists = 0; ··· 694 700 * @cmd_pkt: Command type 3 IOCB 695 701 * @tot_dsds: Total number of segments to transfer 696 702 */ 697 - inline void 703 + static inline void 698 704 qla24xx_build_scsi_iocbs(srb_t *sp, struct cmd_type_7 *cmd_pkt, 699 705 uint16_t tot_dsds) 700 706 { ··· 704 710 struct scsi_cmnd *cmd; 705 711 struct scatterlist *sg; 706 712 int i; 707 - struct req_que *req; 708 713 709 714 cmd = GET_CMD_SP(sp); 710 715 711 716 /* Update entry type to indicate Command Type 3 IOCB */ 712 - *((uint32_t *)(&cmd_pkt->entry_type)) = 713 - __constant_cpu_to_le32(COMMAND_TYPE_7); 717 + *((uint32_t *)(&cmd_pkt->entry_type)) = cpu_to_le32(COMMAND_TYPE_7); 714 718 715 719 /* No data transfer */ 716 720 if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) { 717 - cmd_pkt->byte_count = __constant_cpu_to_le32(0); 721 + cmd_pkt->byte_count = cpu_to_le32(0); 718 722 return; 719 723 } 720 724 721 725 vha = sp->fcport->vha; 722 - req = vha->req; 723 726 724 727 /* Set transfer direction */ 725 728 if (cmd->sc_data_direction == DMA_TO_DEVICE) { 726 - cmd_pkt->task_mgmt_flags = 727 - __constant_cpu_to_le16(TMF_WRITE_DATA); 729 + cmd_pkt->task_mgmt_flags = cpu_to_le16(TMF_WRITE_DATA); 728 730 vha->qla_stats.output_bytes += scsi_bufflen(cmd); 729 731 vha->qla_stats.output_requests++; 730 732 } else if (cmd->sc_data_direction == DMA_FROM_DEVICE) { 731 - cmd_pkt->task_mgmt_flags = 732 - __constant_cpu_to_le16(TMF_READ_DATA); 733 + cmd_pkt->task_mgmt_flags = cpu_to_le16(TMF_READ_DATA); 733 734 vha->qla_stats.input_bytes += scsi_bufflen(cmd); 734 735 vha->qla_stats.input_requests++; 735 736 } ··· 798 809 * match LBA in CDB + N 799 810 */ 800 811 case SCSI_PROT_DIF_TYPE2: 801 - pkt->app_tag = __constant_cpu_to_le16(0); 812 + pkt->app_tag = cpu_to_le16(0); 802 813 
pkt->app_tag_mask[0] = 0x0; 803 814 pkt->app_tag_mask[1] = 0x0; 804 815 ··· 829 840 case SCSI_PROT_DIF_TYPE1: 830 841 pkt->ref_tag = cpu_to_le32((uint32_t) 831 842 (0xffffffff & scsi_get_lba(cmd))); 832 - pkt->app_tag = __constant_cpu_to_le16(0); 843 + pkt->app_tag = cpu_to_le16(0); 833 844 pkt->app_tag_mask[0] = 0x0; 834 845 pkt->app_tag_mask[1] = 0x0; 835 846 ··· 922 933 dma_addr_t sle_dma; 923 934 uint32_t sle_dma_len, tot_prot_dma_len = 0; 924 935 struct scsi_cmnd *cmd; 925 - struct scsi_qla_host *vha; 926 936 927 937 memset(&sgx, 0, sizeof(struct qla2_sgx)); 928 938 if (sp) { 929 - vha = sp->fcport->vha; 930 939 cmd = GET_CMD_SP(sp); 931 940 prot_int = cmd->device->sector_size; 932 941 ··· 934 947 935 948 sg_prot = scsi_prot_sglist(cmd); 936 949 } else if (tc) { 937 - vha = tc->vha; 938 950 prot_int = tc->blk_sz; 939 951 sgx.tot_bytes = tc->bufflen; 940 952 sgx.cur_sg = tc->sg; ··· 1033 1047 int i; 1034 1048 uint16_t used_dsds = tot_dsds; 1035 1049 struct scsi_cmnd *cmd; 1036 - struct scsi_qla_host *vha; 1037 1050 1038 1051 if (sp) { 1039 1052 cmd = GET_CMD_SP(sp); 1040 1053 sgl = scsi_sglist(cmd); 1041 - vha = sp->fcport->vha; 1042 1054 } else if (tc) { 1043 1055 sgl = tc->sg; 1044 - vha = tc->vha; 1045 1056 } else { 1046 1057 BUG(); 1047 1058 return 1; ··· 1214 1231 uint32_t *cur_dsd, *fcp_dl; 1215 1232 scsi_qla_host_t *vha; 1216 1233 struct scsi_cmnd *cmd; 1217 - int sgc; 1218 1234 uint32_t total_bytes = 0; 1219 1235 uint32_t data_bytes; 1220 1236 uint32_t dif_bytes; ··· 1229 1247 1230 1248 cmd = GET_CMD_SP(sp); 1231 1249 1232 - sgc = 0; 1233 1250 /* Update entry type to indicate Command Type CRC_2 IOCB */ 1234 - *((uint32_t *)(&cmd_pkt->entry_type)) = 1235 - __constant_cpu_to_le32(COMMAND_TYPE_CRC_2); 1251 + *((uint32_t *)(&cmd_pkt->entry_type)) = cpu_to_le32(COMMAND_TYPE_CRC_2); 1236 1252 1237 1253 vha = sp->fcport->vha; 1238 1254 ha = vha->hw; ··· 1238 1258 /* No data transfer */ 1239 1259 data_bytes = scsi_bufflen(cmd); 1240 1260 if (!data_bytes || 
cmd->sc_data_direction == DMA_NONE) { 1241 - cmd_pkt->byte_count = __constant_cpu_to_le32(0); 1261 + cmd_pkt->byte_count = cpu_to_le32(0); 1242 1262 return QLA_SUCCESS; 1243 1263 } 1244 1264 ··· 1247 1267 /* Set transfer direction */ 1248 1268 if (cmd->sc_data_direction == DMA_TO_DEVICE) { 1249 1269 cmd_pkt->control_flags = 1250 - __constant_cpu_to_le16(CF_WRITE_DATA); 1270 + cpu_to_le16(CF_WRITE_DATA); 1251 1271 } else if (cmd->sc_data_direction == DMA_FROM_DEVICE) { 1252 1272 cmd_pkt->control_flags = 1253 - __constant_cpu_to_le16(CF_READ_DATA); 1273 + cpu_to_le16(CF_READ_DATA); 1254 1274 } 1255 1275 1256 1276 if ((scsi_get_prot_op(cmd) == SCSI_PROT_READ_INSERT) || ··· 1372 1392 crc_ctx_pkt->blk_size = cpu_to_le16(blk_size); 1373 1393 crc_ctx_pkt->prot_opts = cpu_to_le16(fw_prot_opts); 1374 1394 crc_ctx_pkt->byte_count = cpu_to_le32(data_bytes); 1375 - crc_ctx_pkt->guard_seed = __constant_cpu_to_le16(0); 1395 + crc_ctx_pkt->guard_seed = cpu_to_le16(0); 1376 1396 /* Fibre channel byte count */ 1377 1397 cmd_pkt->byte_count = cpu_to_le32(total_bytes); 1378 1398 fcp_dl = (uint32_t *)(crc_ctx_pkt->fcp_cmnd.cdb + 16 + ··· 1380 1400 *fcp_dl = htonl(total_bytes); 1381 1401 1382 1402 if (!data_bytes || cmd->sc_data_direction == DMA_NONE) { 1383 - cmd_pkt->byte_count = __constant_cpu_to_le32(0); 1403 + cmd_pkt->byte_count = cpu_to_le32(0); 1384 1404 return QLA_SUCCESS; 1385 1405 } 1386 1406 /* Walks data segments */ 1387 1407 1388 - cmd_pkt->control_flags |= 1389 - __constant_cpu_to_le16(CF_DATA_SEG_DESCR_ENABLE); 1408 + cmd_pkt->control_flags |= cpu_to_le16(CF_DATA_SEG_DESCR_ENABLE); 1390 1409 1391 1410 if (!bundling && tot_prot_dsds) { 1392 1411 if (qla24xx_walk_and_build_sglist_no_difb(ha, sp, ··· 1397 1418 1398 1419 if (bundling && tot_prot_dsds) { 1399 1420 /* Walks dif segments */ 1400 - cmd_pkt->control_flags |= 1401 - __constant_cpu_to_le16(CF_DIF_SEG_DESCR_ENABLE); 1421 + cmd_pkt->control_flags |= cpu_to_le16(CF_DIF_SEG_DESCR_ENABLE); 1402 1422 cur_dsd = (uint32_t 
*) &crc_ctx_pkt->u.bundling.dif_address; 1403 1423 if (qla24xx_walk_and_build_prot_sglist(ha, sp, cur_dsd, 1404 1424 tot_prot_dsds, NULL)) ··· 1420 1442 int 1421 1443 qla24xx_start_scsi(srb_t *sp) 1422 1444 { 1423 - int ret, nseg; 1445 + int nseg; 1424 1446 unsigned long flags; 1425 1447 uint32_t *clr_ptr; 1426 1448 uint32_t index; ··· 1436 1458 struct qla_hw_data *ha = vha->hw; 1437 1459 1438 1460 /* Setup device pointers. */ 1439 - ret = 0; 1440 - 1441 1461 qla25xx_set_que(sp, &rsp); 1442 1462 req = vha->req; 1443 1463 ··· 1729 1753 cmd_pkt->entry_count = (uint8_t)req_cnt; 1730 1754 /* Specify response queue number where completion should happen */ 1731 1755 cmd_pkt->entry_status = (uint8_t) rsp->id; 1732 - cmd_pkt->timeout = __constant_cpu_to_le16(0); 1756 + cmd_pkt->timeout = cpu_to_le16(0); 1733 1757 wmb(); 1734 1758 1735 1759 /* Adjust ring index. */ ··· 1795 1819 { 1796 1820 struct qla_hw_data *ha = vha->hw; 1797 1821 struct req_que *req = ha->req_q_map[0]; 1798 - device_reg_t __iomem *reg = ISP_QUE_REG(ha, req->id); 1822 + device_reg_t *reg = ISP_QUE_REG(ha, req->id); 1799 1823 uint32_t index, handle; 1800 1824 request_t *pkt; 1801 1825 uint16_t cnt, req_cnt; ··· 2020 2044 els_iocb->entry_status = 0; 2021 2045 els_iocb->handle = sp->handle; 2022 2046 els_iocb->nport_handle = cpu_to_le16(sp->fcport->loop_id); 2023 - els_iocb->tx_dsd_count = __constant_cpu_to_le16(bsg_job->request_payload.sg_cnt); 2047 + els_iocb->tx_dsd_count = cpu_to_le16(bsg_job->request_payload.sg_cnt); 2024 2048 els_iocb->vp_index = sp->fcport->vha->vp_idx; 2025 2049 els_iocb->sof_type = EST_SOFI3; 2026 - els_iocb->rx_dsd_count = __constant_cpu_to_le16(bsg_job->reply_payload.sg_cnt); 2050 + els_iocb->rx_dsd_count = cpu_to_le16(bsg_job->reply_payload.sg_cnt); 2027 2051 2028 2052 els_iocb->opcode = 2029 2053 sp->type == SRB_ELS_CMD_RPT ? 
··· 2067 2091 struct qla_hw_data *ha = vha->hw; 2068 2092 struct fc_bsg_job *bsg_job = sp->u.bsg_job; 2069 2093 int loop_iterartion = 0; 2070 - int cont_iocb_prsnt = 0; 2071 2094 int entry_count = 1; 2072 2095 2073 2096 memset(ct_iocb, 0, sizeof(ms_iocb_entry_t)); ··· 2074 2099 ct_iocb->entry_status = 0; 2075 2100 ct_iocb->handle1 = sp->handle; 2076 2101 SET_TARGET_ID(ha, ct_iocb->loop_id, sp->fcport->loop_id); 2077 - ct_iocb->status = __constant_cpu_to_le16(0); 2078 - ct_iocb->control_flags = __constant_cpu_to_le16(0); 2102 + ct_iocb->status = cpu_to_le16(0); 2103 + ct_iocb->control_flags = cpu_to_le16(0); 2079 2104 ct_iocb->timeout = 0; 2080 2105 ct_iocb->cmd_dsd_count = 2081 - __constant_cpu_to_le16(bsg_job->request_payload.sg_cnt); 2106 + cpu_to_le16(bsg_job->request_payload.sg_cnt); 2082 2107 ct_iocb->total_dsd_count = 2083 - __constant_cpu_to_le16(bsg_job->request_payload.sg_cnt + 1); 2108 + cpu_to_le16(bsg_job->request_payload.sg_cnt + 1); 2084 2109 ct_iocb->req_bytecount = 2085 2110 cpu_to_le32(bsg_job->request_payload.payload_len); 2086 2111 ct_iocb->rsp_bytecount = ··· 2117 2142 vha->hw->req_q_map[0]); 2118 2143 cur_dsd = (uint32_t *) cont_pkt->dseg_0_address; 2119 2144 avail_dsds = 5; 2120 - cont_iocb_prsnt = 1; 2121 2145 entry_count++; 2122 2146 } 2123 2147 ··· 2144 2170 struct qla_hw_data *ha = vha->hw; 2145 2171 struct fc_bsg_job *bsg_job = sp->u.bsg_job; 2146 2172 int loop_iterartion = 0; 2147 - int cont_iocb_prsnt = 0; 2148 2173 int entry_count = 1; 2149 2174 2150 2175 ct_iocb->entry_type = CT_IOCB_TYPE; ··· 2153 2180 2154 2181 ct_iocb->nport_handle = cpu_to_le16(sp->fcport->loop_id); 2155 2182 ct_iocb->vp_index = sp->fcport->vha->vp_idx; 2156 - ct_iocb->comp_status = __constant_cpu_to_le16(0); 2183 + ct_iocb->comp_status = cpu_to_le16(0); 2157 2184 2158 2185 ct_iocb->cmd_dsd_count = 2159 - __constant_cpu_to_le16(bsg_job->request_payload.sg_cnt); 2186 + cpu_to_le16(bsg_job->request_payload.sg_cnt); 2160 2187 ct_iocb->timeout = 0; 2161 2188 
ct_iocb->rsp_dsd_count = 2162 - __constant_cpu_to_le16(bsg_job->reply_payload.sg_cnt); 2189 + cpu_to_le16(bsg_job->reply_payload.sg_cnt); 2163 2190 ct_iocb->rsp_byte_count = 2164 2191 cpu_to_le32(bsg_job->reply_payload.payload_len); 2165 2192 ct_iocb->cmd_byte_count = ··· 2190 2217 ha->req_q_map[0]); 2191 2218 cur_dsd = (uint32_t *) cont_pkt->dseg_0_address; 2192 2219 avail_dsds = 5; 2193 - cont_iocb_prsnt = 1; 2194 2220 entry_count++; 2195 2221 } 2196 2222 ··· 2212 2240 int 2213 2241 qla82xx_start_scsi(srb_t *sp) 2214 2242 { 2215 - int ret, nseg; 2243 + int nseg; 2216 2244 unsigned long flags; 2217 2245 struct scsi_cmnd *cmd; 2218 2246 uint32_t *clr_ptr; ··· 2232 2260 struct rsp_que *rsp = NULL; 2233 2261 2234 2262 /* Setup device pointers. */ 2235 - ret = 0; 2236 2263 reg = &ha->iobase->isp82; 2237 2264 cmd = GET_CMD_SP(sp); 2238 2265 req = vha->req; ··· 2510 2539 /* write, read and verify logic */ 2511 2540 dbval = dbval | (req->id << 8) | (req->ring_index << 16); 2512 2541 if (ql2xdbwr) 2513 - qla82xx_wr_32(ha, ha->nxdb_wr_ptr, dbval); 2542 + qla82xx_wr_32(ha, (uintptr_t __force)ha->nxdb_wr_ptr, dbval); 2514 2543 else { 2515 - WRT_REG_DWORD( 2516 - (unsigned long __iomem *)ha->nxdb_wr_ptr, 2517 - dbval); 2544 + WRT_REG_DWORD(ha->nxdb_wr_ptr, dbval); 2518 2545 wmb(); 2519 - while (RD_REG_DWORD((void __iomem *)ha->nxdb_rd_ptr) != dbval) { 2520 - WRT_REG_DWORD( 2521 - (unsigned long __iomem *)ha->nxdb_wr_ptr, 2522 - dbval); 2546 + while (RD_REG_DWORD(ha->nxdb_rd_ptr) != dbval) { 2547 + WRT_REG_DWORD(ha->nxdb_wr_ptr, dbval); 2523 2548 wmb(); 2524 2549 } 2525 2550 } ··· 2649 2682 2650 2683 /*Update entry type to indicate bidir command */ 2651 2684 *((uint32_t *)(&cmd_pkt->entry_type)) = 2652 - __constant_cpu_to_le32(COMMAND_BIDIRECTIONAL); 2685 + cpu_to_le32(COMMAND_BIDIRECTIONAL); 2653 2686 2654 2687 /* Set the transfer direction, in this set both flags 2655 2688 * Also set the BD_WRAP_BACK flag, firmware will take care ··· 2657 2690 */ 2658 2691 
cmd_pkt->wr_dseg_count = cpu_to_le16(bsg_job->request_payload.sg_cnt); 2659 2692 cmd_pkt->rd_dseg_count = cpu_to_le16(bsg_job->reply_payload.sg_cnt); 2660 - cmd_pkt->control_flags = 2661 - __constant_cpu_to_le16(BD_WRITE_DATA | BD_READ_DATA | 2693 + cmd_pkt->control_flags = cpu_to_le16(BD_WRITE_DATA | BD_READ_DATA | 2662 2694 BD_WRAP_BACK); 2663 2695 2664 2696 req_data_len = rsp_data_len = bsg_job->request_payload.payload_len;
+48 -22
drivers/scsi/qla2xxx/qla_isr.c
··· 116 116 qla2x00_check_reg32_for_disconnect(scsi_qla_host_t *vha, uint32_t reg) 117 117 { 118 118 /* Check for PCI disconnection */ 119 - if (reg == 0xffffffff) { 119 + if (reg == 0xffffffff && !pci_channel_offline(vha->hw->pdev)) { 120 120 if (!test_and_set_bit(PFLG_DISCONNECTED, &vha->pci_flags) && 121 121 !test_bit(PFLG_DRIVER_REMOVING, &vha->pci_flags) && 122 122 !test_bit(PFLG_DRIVER_PROBING, &vha->pci_flags)) { ··· 560 560 return ret; 561 561 } 562 562 563 + static inline fc_port_t * 564 + qla2x00_find_fcport_by_loopid(scsi_qla_host_t *vha, uint16_t loop_id) 565 + { 566 + fc_port_t *fcport; 567 + 568 + list_for_each_entry(fcport, &vha->vp_fcports, list) 569 + if (fcport->loop_id == loop_id) 570 + return fcport; 571 + return NULL; 572 + } 573 + 563 574 /** 564 575 * qla2x00_async_event() - Process aynchronous events. 565 576 * @ha: SCSI driver HA context ··· 586 575 struct device_reg_2xxx __iomem *reg = &ha->iobase->isp; 587 576 struct device_reg_24xx __iomem *reg24 = &ha->iobase->isp24; 588 577 struct device_reg_82xx __iomem *reg82 = &ha->iobase->isp82; 589 - uint32_t rscn_entry, host_pid, tmp_pid; 578 + uint32_t rscn_entry, host_pid; 590 579 unsigned long flags; 591 580 fc_port_t *fcport = NULL; 592 581 ··· 908 897 (mb[1] != 0xffff)) && vha->vp_idx != (mb[3] & 0xff)) 909 898 break; 910 899 911 - /* Global event -- port logout or port unavailable. */ 912 - if (mb[1] == 0xffff && mb[2] == 0x7) { 900 + if (mb[2] == 0x7) { 913 901 ql_dbg(ql_dbg_async, vha, 0x5010, 914 - "Port unavailable %04x %04x %04x.\n", 902 + "Port %s %04x %04x %04x.\n", 903 + mb[1] == 0xffff ? 
"unavailable" : "logout", 915 904 mb[1], mb[2], mb[3]); 905 + 906 + if (mb[1] == 0xffff) 907 + goto global_port_update; 908 + 909 + /* Port logout */ 910 + fcport = qla2x00_find_fcport_by_loopid(vha, mb[1]); 911 + if (!fcport) 912 + break; 913 + if (atomic_read(&fcport->state) != FCS_ONLINE) 914 + break; 915 + ql_dbg(ql_dbg_async, vha, 0x508a, 916 + "Marking port lost loopid=%04x portid=%06x.\n", 917 + fcport->loop_id, fcport->d_id.b24); 918 + qla2x00_mark_device_lost(fcport->vha, fcport, 1, 1); 919 + break; 920 + 921 + global_port_update: 922 + /* Port unavailable. */ 916 923 ql_log(ql_log_warn, vha, 0x505e, 917 924 "Link is offline.\n"); 918 925 ··· 1027 998 list_for_each_entry(fcport, &vha->vp_fcports, list) { 1028 999 if (atomic_read(&fcport->state) != FCS_ONLINE) 1029 1000 continue; 1030 - tmp_pid = fcport->d_id.b24; 1031 1001 if (fcport->d_id.b24 == rscn_entry) { 1032 1002 qla2x00_mark_device_lost(vha, fcport, 0, 0); 1033 1003 break; ··· 1593 1565 "Async-%s error - hdl=%x entry-status(%x).\n", 1594 1566 type, sp->handle, sts->entry_status); 1595 1567 iocb->u.tmf.data = QLA_FUNCTION_FAILED; 1596 - } else if (sts->comp_status != __constant_cpu_to_le16(CS_COMPLETE)) { 1568 + } else if (sts->comp_status != cpu_to_le16(CS_COMPLETE)) { 1597 1569 ql_log(ql_log_warn, fcport->vha, 0x5039, 1598 1570 "Async-%s error - hdl=%x completion status(%x).\n", 1599 1571 type, sp->handle, sts->comp_status); ··· 2073 2045 } 2074 2046 2075 2047 /* Validate handle. 
*/ 2076 - if (handle < req->num_outstanding_cmds) 2048 + if (handle < req->num_outstanding_cmds) { 2077 2049 sp = req->outstanding_cmds[handle]; 2078 - else 2079 - sp = NULL; 2080 - 2081 - if (sp == NULL) { 2050 + if (!sp) { 2051 + ql_dbg(ql_dbg_io, vha, 0x3075, 2052 + "%s(%ld): Already returned command for status handle (0x%x).\n", 2053 + __func__, vha->host_no, sts->handle); 2054 + return; 2055 + } 2056 + } else { 2082 2057 ql_dbg(ql_dbg_io, vha, 0x3017, 2083 - "Invalid status handle (0x%x).\n", sts->handle); 2058 + "Invalid status handle, out of range (0x%x).\n", 2059 + sts->handle); 2084 2060 2085 2061 if (!test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags)) { 2086 2062 if (IS_P3P_TYPE(ha)) ··· 2371 2339 ql_dbg(ql_dbg_io, fcport->vha, 0x3022, 2372 2340 "FCP command status: 0x%x-0x%x (0x%x) nexus=%ld:%d:%llu " 2373 2341 "portid=%02x%02x%02x oxid=0x%x cdb=%10phN len=0x%x " 2374 - "rsp_info=0x%x resid=0x%x fw_resid=0x%x.\n", 2342 + "rsp_info=0x%x resid=0x%x fw_resid=0x%x sp=%p cp=%p.\n", 2375 2343 comp_status, scsi_status, res, vha->host_no, 2376 2344 cp->device->id, cp->device->lun, fcport->d_id.b.domain, 2377 2345 fcport->d_id.b.area, fcport->d_id.b.al_pa, ox_id, 2378 2346 cp->cmnd, scsi_bufflen(cp), rsp_info_len, 2379 - resid_len, fw_resid_len); 2347 + resid_len, fw_resid_len, sp, cp); 2380 2348 2381 2349 if (rsp->status_srb == NULL) 2382 2350 sp->done(ha, sp, res); ··· 2473 2441 } 2474 2442 fatal: 2475 2443 ql_log(ql_log_warn, vha, 0x5030, 2476 - "Error entry - invalid handle/queue.\n"); 2477 - 2478 - if (IS_P3P_TYPE(ha)) 2479 - set_bit(FCOE_CTX_RESET_NEEDED, &vha->dpc_flags); 2480 - else 2481 - set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); 2482 - qla2xxx_wake_dpc(vha); 2444 + "Error entry - invalid handle/queue (%04x).\n", que); 2483 2445 } 2484 2446 2485 2447 /**
+45 -35
drivers/scsi/qla2xxx/qla_mbx.c
··· 555 555 if (IS_FWI2_CAPABLE(ha)) 556 556 mcp->in_mb |= MBX_17|MBX_16|MBX_15; 557 557 if (IS_QLA27XX(ha)) 558 - mcp->in_mb |= MBX_21|MBX_20|MBX_19|MBX_18; 558 + mcp->in_mb |= MBX_23 | MBX_22 | MBX_21 | MBX_20 | MBX_19 | 559 + MBX_18 | MBX_14 | MBX_13 | MBX_11 | MBX_10 | MBX_9 | MBX_8; 560 + 559 561 mcp->flags = 0; 560 562 mcp->tov = MBX_TOV_SECONDS; 561 563 rval = qla2x00_mailbox_command(vha, mcp); ··· 573 571 ha->fw_memory_size = 0x1FFFF; /* Defaults to 128KB. */ 574 572 else 575 573 ha->fw_memory_size = (mcp->mb[5] << 16) | mcp->mb[4]; 574 + 576 575 if (IS_QLA81XX(vha->hw) || IS_QLA8031(vha->hw) || IS_QLA8044(ha)) { 577 576 ha->mpi_version[0] = mcp->mb[10] & 0xff; 578 577 ha->mpi_version[1] = mcp->mb[11] >> 8; ··· 583 580 ha->phy_version[1] = mcp->mb[9] >> 8; 584 581 ha->phy_version[2] = mcp->mb[9] & 0xff; 585 582 } 583 + 586 584 if (IS_FWI2_CAPABLE(ha)) { 587 585 ha->fw_attributes_h = mcp->mb[15]; 588 586 ha->fw_attributes_ext[0] = mcp->mb[16]; ··· 595 591 "%s: Ext_FwAttributes Upper: 0x%x, Lower: 0x%x.\n", 596 592 __func__, mcp->mb[17], mcp->mb[16]); 597 593 } 594 + 598 595 if (IS_QLA27XX(ha)) { 596 + ha->mpi_version[0] = mcp->mb[10] & 0xff; 597 + ha->mpi_version[1] = mcp->mb[11] >> 8; 598 + ha->mpi_version[2] = mcp->mb[11] & 0xff; 599 + ha->pep_version[0] = mcp->mb[13] & 0xff; 600 + ha->pep_version[1] = mcp->mb[14] >> 8; 601 + ha->pep_version[2] = mcp->mb[14] & 0xff; 599 602 ha->fw_shared_ram_start = (mcp->mb[19] << 16) | mcp->mb[18]; 600 603 ha->fw_shared_ram_end = (mcp->mb[21] << 16) | mcp->mb[20]; 601 604 } ··· 1146 1135 vha->fcoe_vn_port_mac[0] = mcp->mb[13] & 0xff; 1147 1136 } 1148 1137 /* If FA-WWN supported */ 1149 - if (mcp->mb[7] & BIT_14) { 1150 - vha->port_name[0] = MSB(mcp->mb[16]); 1151 - vha->port_name[1] = LSB(mcp->mb[16]); 1152 - vha->port_name[2] = MSB(mcp->mb[17]); 1153 - vha->port_name[3] = LSB(mcp->mb[17]); 1154 - vha->port_name[4] = MSB(mcp->mb[18]); 1155 - vha->port_name[5] = LSB(mcp->mb[18]); 1156 - vha->port_name[6] = 
MSB(mcp->mb[19]); 1157 - vha->port_name[7] = LSB(mcp->mb[19]); 1158 - fc_host_port_name(vha->host) = 1159 - wwn_to_u64(vha->port_name); 1160 - ql_dbg(ql_dbg_mbx, vha, 0x10ca, 1161 - "FA-WWN acquired %016llx\n", 1162 - wwn_to_u64(vha->port_name)); 1138 + if (IS_FAWWN_CAPABLE(vha->hw)) { 1139 + if (mcp->mb[7] & BIT_14) { 1140 + vha->port_name[0] = MSB(mcp->mb[16]); 1141 + vha->port_name[1] = LSB(mcp->mb[16]); 1142 + vha->port_name[2] = MSB(mcp->mb[17]); 1143 + vha->port_name[3] = LSB(mcp->mb[17]); 1144 + vha->port_name[4] = MSB(mcp->mb[18]); 1145 + vha->port_name[5] = LSB(mcp->mb[18]); 1146 + vha->port_name[6] = MSB(mcp->mb[19]); 1147 + vha->port_name[7] = LSB(mcp->mb[19]); 1148 + fc_host_port_name(vha->host) = 1149 + wwn_to_u64(vha->port_name); 1150 + ql_dbg(ql_dbg_mbx, vha, 0x10ca, 1151 + "FA-WWN acquired %016llx\n", 1152 + wwn_to_u64(vha->port_name)); 1153 + } 1163 1154 } 1164 1155 } 1165 1156 ··· 1252 1239 "Entered %s.\n", __func__); 1253 1240 1254 1241 if (IS_P3P_TYPE(ha) && ql2xdbwr) 1255 - qla82xx_wr_32(ha, ha->nxdb_wr_ptr, 1242 + qla82xx_wr_32(ha, (uintptr_t __force)ha->nxdb_wr_ptr, 1256 1243 (0x04 | (ha->portnum << 5) | (0 << 8) | (0 << 16))); 1257 1244 1258 1245 if (ha->flags.npiv_supported) ··· 1878 1865 uint32_t iop[2]; 1879 1866 struct qla_hw_data *ha = vha->hw; 1880 1867 struct req_que *req; 1881 - struct rsp_que *rsp; 1882 1868 1883 1869 ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1061, 1884 1870 "Entered %s.\n", __func__); ··· 1886 1874 req = ha->req_q_map[0]; 1887 1875 else 1888 1876 req = vha->req; 1889 - rsp = req->rsp; 1890 1877 1891 1878 lg = dma_pool_alloc(ha->s_dma_pool, GFP_KERNEL, &lg_dma); 1892 1879 if (lg == NULL) { ··· 1899 1888 lg->entry_count = 1; 1900 1889 lg->handle = MAKE_HANDLE(req->id, lg->handle); 1901 1890 lg->nport_handle = cpu_to_le16(loop_id); 1902 - lg->control_flags = __constant_cpu_to_le16(LCF_COMMAND_PLOGI); 1891 + lg->control_flags = cpu_to_le16(LCF_COMMAND_PLOGI); 1903 1892 if (opt & BIT_0) 1904 - lg->control_flags |= 
__constant_cpu_to_le16(LCF_COND_PLOGI); 1893 + lg->control_flags |= cpu_to_le16(LCF_COND_PLOGI); 1905 1894 if (opt & BIT_1) 1906 - lg->control_flags |= __constant_cpu_to_le16(LCF_SKIP_PRLI); 1895 + lg->control_flags |= cpu_to_le16(LCF_SKIP_PRLI); 1907 1896 lg->port_id[0] = al_pa; 1908 1897 lg->port_id[1] = area; 1909 1898 lg->port_id[2] = domain; ··· 1918 1907 "Failed to complete IOCB -- error status (%x).\n", 1919 1908 lg->entry_status); 1920 1909 rval = QLA_FUNCTION_FAILED; 1921 - } else if (lg->comp_status != __constant_cpu_to_le16(CS_COMPLETE)) { 1910 + } else if (lg->comp_status != cpu_to_le16(CS_COMPLETE)) { 1922 1911 iop[0] = le32_to_cpu(lg->io_parameter[0]); 1923 1912 iop[1] = le32_to_cpu(lg->io_parameter[1]); 1924 1913 ··· 1972 1961 mb[10] |= BIT_0; /* Class 2. */ 1973 1962 if (lg->io_parameter[9] || lg->io_parameter[10]) 1974 1963 mb[10] |= BIT_1; /* Class 3. */ 1975 - if (lg->io_parameter[0] & __constant_cpu_to_le32(BIT_7)) 1964 + if (lg->io_parameter[0] & cpu_to_le32(BIT_7)) 1976 1965 mb[10] |= BIT_7; /* Confirmed Completion 1977 1966 * Allowed 1978 1967 */ ··· 2153 2142 dma_addr_t lg_dma; 2154 2143 struct qla_hw_data *ha = vha->hw; 2155 2144 struct req_que *req; 2156 - struct rsp_que *rsp; 2157 2145 2158 2146 ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x106d, 2159 2147 "Entered %s.\n", __func__); ··· 2169 2159 req = ha->req_q_map[0]; 2170 2160 else 2171 2161 req = vha->req; 2172 - rsp = req->rsp; 2173 2162 lg->entry_type = LOGINOUT_PORT_IOCB_TYPE; 2174 2163 lg->entry_count = 1; 2175 2164 lg->handle = MAKE_HANDLE(req->id, lg->handle); 2176 2165 lg->nport_handle = cpu_to_le16(loop_id); 2177 2166 lg->control_flags = 2178 - __constant_cpu_to_le16(LCF_COMMAND_LOGO|LCF_IMPL_LOGO| 2167 + cpu_to_le16(LCF_COMMAND_LOGO|LCF_IMPL_LOGO| 2179 2168 LCF_FREE_NPORT); 2180 2169 lg->port_id[0] = al_pa; 2181 2170 lg->port_id[1] = area; ··· 2190 2181 "Failed to complete IOCB -- error status (%x).\n", 2191 2182 lg->entry_status); 2192 2183 rval = QLA_FUNCTION_FAILED; 2193 - } 
else if (lg->comp_status != __constant_cpu_to_le16(CS_COMPLETE)) { 2184 + } else if (lg->comp_status != cpu_to_le16(CS_COMPLETE)) { 2194 2185 ql_dbg(ql_dbg_mbx, vha, 0x1071, 2195 2186 "Failed to complete IOCB -- completion status (%x) " 2196 2187 "ioparam=%x/%x.\n", le16_to_cpu(lg->comp_status), ··· 2682 2673 "Failed to complete IOCB -- error status (%x).\n", 2683 2674 abt->entry_status); 2684 2675 rval = QLA_FUNCTION_FAILED; 2685 - } else if (abt->nport_handle != __constant_cpu_to_le16(0)) { 2676 + } else if (abt->nport_handle != cpu_to_le16(0)) { 2686 2677 ql_dbg(ql_dbg_mbx, vha, 0x1090, 2687 2678 "Failed to complete IOCB -- completion status (%x).\n", 2688 2679 le16_to_cpu(abt->nport_handle)); ··· 2765 2756 "Failed to complete IOCB -- error status (%x).\n", 2766 2757 sts->entry_status); 2767 2758 rval = QLA_FUNCTION_FAILED; 2768 - } else if (sts->comp_status != 2769 - __constant_cpu_to_le16(CS_COMPLETE)) { 2759 + } else if (sts->comp_status != cpu_to_le16(CS_COMPLETE)) { 2770 2760 ql_dbg(ql_dbg_mbx, vha, 0x1096, 2771 2761 "Failed to complete IOCB -- completion status (%x).\n", 2772 2762 le16_to_cpu(sts->comp_status)); ··· 2861 2853 mbx_cmd_t mc; 2862 2854 mbx_cmd_t *mcp = &mc; 2863 2855 2864 - if (!IS_QLA2031(vha->hw) && !IS_QLA27XX(vha->hw)) 2856 + if (!IS_QLA25XX(vha->hw) && !IS_QLA2031(vha->hw) && 2857 + !IS_QLA27XX(vha->hw)) 2865 2858 return QLA_FUNCTION_FAILED; 2866 2859 2867 2860 ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1182, ··· 2900 2891 mbx_cmd_t mc; 2901 2892 mbx_cmd_t *mcp = &mc; 2902 2893 2903 - if (!IS_QLA2031(vha->hw) && !IS_QLA27XX(vha->hw)) 2894 + if (!IS_QLA25XX(vha->hw) && !IS_QLA2031(vha->hw) && 2895 + !IS_QLA27XX(vha->hw)) 2904 2896 return QLA_FUNCTION_FAILED; 2905 2897 2906 2898 ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1185, ··· 3493 3483 "Failed to complete IOCB -- error status (%x).\n", 3494 3484 vpmod->comp_status); 3495 3485 rval = QLA_FUNCTION_FAILED; 3496 - } else if (vpmod->comp_status != __constant_cpu_to_le16(CS_COMPLETE)) { 
3486 + } else if (vpmod->comp_status != cpu_to_le16(CS_COMPLETE)) { 3497 3487 ql_dbg(ql_dbg_mbx, vha, 0x10bf, 3498 3488 "Failed to complete IOCB -- completion status (%x).\n", 3499 3489 le16_to_cpu(vpmod->comp_status)); ··· 3552 3542 vce->entry_type = VP_CTRL_IOCB_TYPE; 3553 3543 vce->entry_count = 1; 3554 3544 vce->command = cpu_to_le16(cmd); 3555 - vce->vp_count = __constant_cpu_to_le16(1); 3545 + vce->vp_count = cpu_to_le16(1); 3556 3546 3557 3547 /* index map in firmware starts with 1; decrement index 3558 3548 * this is ok as we never use index 0 ··· 3572 3562 "Failed to complete IOCB -- error status (%x).\n", 3573 3563 vce->entry_status); 3574 3564 rval = QLA_FUNCTION_FAILED; 3575 - } else if (vce->comp_status != __constant_cpu_to_le16(CS_COMPLETE)) { 3565 + } else if (vce->comp_status != cpu_to_le16(CS_COMPLETE)) { 3576 3566 ql_dbg(ql_dbg_mbx, vha, 0x10c5, 3577 3567 "Failed to complet IOCB -- completion status (%x).\n", 3578 3568 le16_to_cpu(vce->comp_status));
+1 -2
drivers/scsi/qla2xxx/qla_mid.c
··· 371 371 void 372 372 qla2x00_do_dpc_all_vps(scsi_qla_host_t *vha) 373 373 { 374 - int ret; 375 374 struct qla_hw_data *ha = vha->hw; 376 375 scsi_qla_host_t *vp; 377 376 unsigned long flags = 0; ··· 391 392 atomic_inc(&vp->vref_count); 392 393 spin_unlock_irqrestore(&ha->vport_slock, flags); 393 394 394 - ret = qla2x00_do_dpc_vp(vp); 395 + qla2x00_do_dpc_vp(vp); 395 396 396 397 spin_lock_irqsave(&ha->vport_slock, flags); 397 398 atomic_dec(&vp->vref_count);
+9 -13
drivers/scsi/qla2xxx/qla_mr.c
··· 862 862 dma_addr_t bar2_hdl = pci_resource_start(ha->pdev, 2); 863 863 864 864 req->length = ha->req_que_len; 865 - req->ring = (void *)ha->iobase + ha->req_que_off; 865 + req->ring = (void __force *)ha->iobase + ha->req_que_off; 866 866 req->dma = bar2_hdl + ha->req_que_off; 867 867 if ((!req->ring) || (req->length == 0)) { 868 868 ql_log_pci(ql_log_info, ha->pdev, 0x012f, ··· 877 877 ha->req_que_off, (u64)req->dma); 878 878 879 879 rsp->length = ha->rsp_que_len; 880 - rsp->ring = (void *)ha->iobase + ha->rsp_que_off; 880 + rsp->ring = (void __force *)ha->iobase + ha->rsp_que_off; 881 881 rsp->dma = bar2_hdl + ha->rsp_que_off; 882 882 if ((!rsp->ring) || (rsp->length == 0)) { 883 883 ql_log_pci(ql_log_info, ha->pdev, 0x0131, ··· 1317 1317 qlafx00_configure_devices(scsi_qla_host_t *vha) 1318 1318 { 1319 1319 int rval; 1320 - unsigned long flags, save_flags; 1320 + unsigned long flags; 1321 1321 rval = QLA_SUCCESS; 1322 1322 1323 - save_flags = flags = vha->dpc_flags; 1323 + flags = vha->dpc_flags; 1324 1324 1325 1325 ql_dbg(ql_dbg_disc, vha, 0x2090, 1326 1326 "Configure devices -- dpc flags =0x%lx\n", flags); ··· 1425 1425 pkt = rsp->ring_ptr; 1426 1426 for (cnt = 0; cnt < rsp->length; cnt++) { 1427 1427 pkt->signature = RESPONSE_PROCESSED; 1428 - WRT_REG_DWORD((void __iomem *)&pkt->signature, 1428 + WRT_REG_DWORD((void __force __iomem *)&pkt->signature, 1429 1429 RESPONSE_PROCESSED); 1430 1430 pkt++; 1431 1431 } ··· 2279 2279 struct sts_entry_fx00 *sts; 2280 2280 __le16 comp_status; 2281 2281 __le16 scsi_status; 2282 - uint16_t ox_id; 2283 2282 __le16 lscsi_status; 2284 2283 int32_t resid; 2285 2284 uint32_t sense_len, par_sense_len, rsp_info_len, resid_len, ··· 2343 2344 2344 2345 fcport = sp->fcport; 2345 2346 2346 - ox_id = 0; 2347 2347 sense_len = par_sense_len = rsp_info_len = resid_len = 2348 2348 fw_resid_len = 0; 2349 2349 if (scsi_status & cpu_to_le16((uint16_t)SS_SENSE_LEN_VALID)) ··· 2526 2528 ql_dbg(ql_dbg_io, fcport->vha, 0x3058, 2527 2529 "FCP 
command status: 0x%x-0x%x (0x%x) nexus=%ld:%d:%llu " 2528 2530 "tgt_id: 0x%x lscsi_status: 0x%x cdb=%10phN len=0x%x " 2529 - "rsp_info=0x%x resid=0x%x fw_resid=0x%x sense_len=0x%x, " 2531 + "rsp_info=%p resid=0x%x fw_resid=0x%x sense_len=0x%x, " 2530 2532 "par_sense_len=0x%x, rsp_info_len=0x%x\n", 2531 2533 comp_status, scsi_status, res, vha->host_no, 2532 2534 cp->device->id, cp->device->lun, fcport->tgt_id, 2533 2535 lscsi_status, cp->cmnd, scsi_bufflen(cp), 2534 - rsp_info_len, resid_len, fw_resid_len, sense_len, 2536 + rsp_info, resid_len, fw_resid_len, sense_len, 2535 2537 par_sense_len, rsp_info_len); 2536 2538 2537 2539 if (rsp->status_srb == NULL) ··· 3007 3009 3008 3010 /* No data transfer */ 3009 3011 if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) { 3010 - lcmd_pkt->byte_count = __constant_cpu_to_le32(0); 3012 + lcmd_pkt->byte_count = cpu_to_le32(0); 3011 3013 return; 3012 3014 } 3013 3015 ··· 3069 3071 int 3070 3072 qlafx00_start_scsi(srb_t *sp) 3071 3073 { 3072 - int ret, nseg; 3074 + int nseg; 3073 3075 unsigned long flags; 3074 3076 uint32_t index; 3075 3077 uint32_t handle; ··· 3086 3088 struct scsi_lun llun; 3087 3089 3088 3090 /* Setup device pointers. */ 3089 - ret = 0; 3090 - 3091 3091 rsp = ha->rsp_q_map[0]; 3092 3092 req = vha->req; 3093 3093
+79 -86
drivers/scsi/qla2xxx/qla_nx.c
··· 347 347 } 348 348 349 349 /* 350 - * In: 'off' is offset from CRB space in 128M pci map 351 - * Out: 'off' is 2M pci map addr 350 + * In: 'off_in' is offset from CRB space in 128M pci map 351 + * Out: 'off_out' is 2M pci map addr 352 352 * side effect: lock crb window 353 353 */ 354 354 static void 355 - qla82xx_pci_set_crbwindow_2M(struct qla_hw_data *ha, ulong *off) 355 + qla82xx_pci_set_crbwindow_2M(struct qla_hw_data *ha, ulong off_in, 356 + void __iomem **off_out) 356 357 { 357 358 u32 win_read; 358 359 scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev); 359 360 360 - ha->crb_win = CRB_HI(*off); 361 - writel(ha->crb_win, 362 - (void __iomem *)(CRB_WINDOW_2M + ha->nx_pcibase)); 361 + ha->crb_win = CRB_HI(off_in); 362 + writel(ha->crb_win, CRB_WINDOW_2M + ha->nx_pcibase); 363 363 364 364 /* Read back value to make sure write has gone through before trying 365 365 * to use it. 366 366 */ 367 - win_read = RD_REG_DWORD((void __iomem *) 368 - (CRB_WINDOW_2M + ha->nx_pcibase)); 367 + win_read = RD_REG_DWORD(CRB_WINDOW_2M + ha->nx_pcibase); 369 368 if (win_read != ha->crb_win) { 370 369 ql_dbg(ql_dbg_p3p, vha, 0xb000, 371 370 "%s: Written crbwin (0x%x) " 372 371 "!= Read crbwin (0x%x), off=0x%lx.\n", 373 - __func__, ha->crb_win, win_read, *off); 372 + __func__, ha->crb_win, win_read, off_in); 374 373 } 375 - *off = (*off & MASK(16)) + CRB_INDIRECT_2M + ha->nx_pcibase; 374 + *off_out = (off_in & MASK(16)) + CRB_INDIRECT_2M + ha->nx_pcibase; 376 375 } 377 376 378 377 static inline unsigned long ··· 416 417 } 417 418 418 419 static int 419 - qla82xx_pci_get_crb_addr_2M(struct qla_hw_data *ha, ulong *off) 420 + qla82xx_pci_get_crb_addr_2M(struct qla_hw_data *ha, ulong off_in, 421 + void __iomem **off_out) 420 422 { 421 423 struct crb_128M_2M_sub_block_map *m; 422 424 423 - if (*off >= QLA82XX_CRB_MAX) 425 + if (off_in >= QLA82XX_CRB_MAX) 424 426 return -1; 425 427 426 - if (*off >= QLA82XX_PCI_CAMQM && (*off < QLA82XX_PCI_CAMQM_2M_END)) { 427 - *off = (*off - 
QLA82XX_PCI_CAMQM) + 428 + if (off_in >= QLA82XX_PCI_CAMQM && off_in < QLA82XX_PCI_CAMQM_2M_END) { 429 + *off_out = (off_in - QLA82XX_PCI_CAMQM) + 428 430 QLA82XX_PCI_CAMQM_2M_BASE + ha->nx_pcibase; 429 431 return 0; 430 432 } 431 433 432 - if (*off < QLA82XX_PCI_CRBSPACE) 434 + if (off_in < QLA82XX_PCI_CRBSPACE) 433 435 return -1; 434 436 435 - *off -= QLA82XX_PCI_CRBSPACE; 437 + *off_out = (void __iomem *)(off_in - QLA82XX_PCI_CRBSPACE); 436 438 437 439 /* Try direct map */ 438 - m = &crb_128M_2M_map[CRB_BLK(*off)].sub_block[CRB_SUBBLK(*off)]; 440 + m = &crb_128M_2M_map[CRB_BLK(off_in)].sub_block[CRB_SUBBLK(off_in)]; 439 441 440 - if (m->valid && (m->start_128M <= *off) && (m->end_128M > *off)) { 441 - *off = *off + m->start_2M - m->start_128M + ha->nx_pcibase; 442 + if (m->valid && (m->start_128M <= off_in) && (m->end_128M > off_in)) { 443 + *off_out = off_in + m->start_2M - m->start_128M + ha->nx_pcibase; 442 444 return 0; 443 445 } 444 446 /* Not in direct map, use crb window */ ··· 465 465 } 466 466 467 467 int 468 - qla82xx_wr_32(struct qla_hw_data *ha, ulong off, u32 data) 468 + qla82xx_wr_32(struct qla_hw_data *ha, ulong off_in, u32 data) 469 469 { 470 + void __iomem *off; 470 471 unsigned long flags = 0; 471 472 int rv; 472 473 473 - rv = qla82xx_pci_get_crb_addr_2M(ha, &off); 474 + rv = qla82xx_pci_get_crb_addr_2M(ha, off_in, &off); 474 475 475 476 BUG_ON(rv == -1); 476 477 477 478 if (rv == 1) { 479 + #ifndef __CHECKER__ 478 480 write_lock_irqsave(&ha->hw_lock, flags); 481 + #endif 479 482 qla82xx_crb_win_lock(ha); 480 - qla82xx_pci_set_crbwindow_2M(ha, &off); 483 + qla82xx_pci_set_crbwindow_2M(ha, off_in, &off); 481 484 } 482 485 483 486 writel(data, (void __iomem *)off); 484 487 485 488 if (rv == 1) { 486 489 qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM7_UNLOCK)); 490 + #ifndef __CHECKER__ 487 491 write_unlock_irqrestore(&ha->hw_lock, flags); 492 + #endif 488 493 } 489 494 return 0; 490 495 } 491 496 492 497 int 493 - qla82xx_rd_32(struct qla_hw_data 
*ha, ulong off) 498 + qla82xx_rd_32(struct qla_hw_data *ha, ulong off_in) 494 499 { 500 + void __iomem *off; 495 501 unsigned long flags = 0; 496 502 int rv; 497 503 u32 data; 498 504 499 - rv = qla82xx_pci_get_crb_addr_2M(ha, &off); 505 + rv = qla82xx_pci_get_crb_addr_2M(ha, off_in, &off); 500 506 501 507 BUG_ON(rv == -1); 502 508 503 509 if (rv == 1) { 510 + #ifndef __CHECKER__ 504 511 write_lock_irqsave(&ha->hw_lock, flags); 512 + #endif 505 513 qla82xx_crb_win_lock(ha); 506 - qla82xx_pci_set_crbwindow_2M(ha, &off); 514 + qla82xx_pci_set_crbwindow_2M(ha, off_in, &off); 507 515 } 508 - data = RD_REG_DWORD((void __iomem *)off); 516 + data = RD_REG_DWORD(off); 509 517 510 518 if (rv == 1) { 511 519 qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM7_UNLOCK)); 520 + #ifndef __CHECKER__ 512 521 write_unlock_irqrestore(&ha->hw_lock, flags); 522 + #endif 513 523 } 514 524 return data; 515 525 } ··· 557 547 qla82xx_rd_32(ha, QLA82XX_PCIE_REG(PCIE_SEM5_UNLOCK)); 558 548 } 559 549 560 - /* PCI Windowing for DDR regions. */ 561 - #define QLA82XX_ADDR_IN_RANGE(addr, low, high) \ 562 - (((addr) <= (high)) && ((addr) >= (low))) 563 550 /* 564 551 * check memory access boundary. 565 552 * used by test agent. 
support ddr access only for now ··· 565 558 qla82xx_pci_mem_bound_check(struct qla_hw_data *ha, 566 559 unsigned long long addr, int size) 567 560 { 568 - if (!QLA82XX_ADDR_IN_RANGE(addr, QLA82XX_ADDR_DDR_NET, 561 + if (!addr_in_range(addr, QLA82XX_ADDR_DDR_NET, 569 562 QLA82XX_ADDR_DDR_NET_MAX) || 570 - !QLA82XX_ADDR_IN_RANGE(addr + size - 1, QLA82XX_ADDR_DDR_NET, 563 + !addr_in_range(addr + size - 1, QLA82XX_ADDR_DDR_NET, 571 564 QLA82XX_ADDR_DDR_NET_MAX) || 572 565 ((size != 1) && (size != 2) && (size != 4) && (size != 8))) 573 566 return 0; ··· 584 577 u32 win_read; 585 578 scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev); 586 579 587 - if (QLA82XX_ADDR_IN_RANGE(addr, QLA82XX_ADDR_DDR_NET, 580 + if (addr_in_range(addr, QLA82XX_ADDR_DDR_NET, 588 581 QLA82XX_ADDR_DDR_NET_MAX)) { 589 582 /* DDR network side */ 590 583 window = MN_WIN(addr); ··· 599 592 __func__, window, win_read); 600 593 } 601 594 addr = GET_MEM_OFFS_2M(addr) + QLA82XX_PCI_DDR_NET; 602 - } else if (QLA82XX_ADDR_IN_RANGE(addr, QLA82XX_ADDR_OCM0, 595 + } else if (addr_in_range(addr, QLA82XX_ADDR_OCM0, 603 596 QLA82XX_ADDR_OCM0_MAX)) { 604 597 unsigned int temp1; 605 598 if ((addr & 0x00ff800) == 0xff800) { ··· 622 615 } 623 616 addr = GET_MEM_OFFS_2M(addr) + QLA82XX_PCI_OCM0_2M; 624 617 625 - } else if (QLA82XX_ADDR_IN_RANGE(addr, QLA82XX_ADDR_QDR_NET, 618 + } else if (addr_in_range(addr, QLA82XX_ADDR_QDR_NET, 626 619 QLA82XX_P3_ADDR_QDR_NET_MAX)) { 627 620 /* QDR network side */ 628 621 window = MS_WIN(addr); ··· 663 656 qdr_max = QLA82XX_P3_ADDR_QDR_NET_MAX; 664 657 665 658 /* DDR network side */ 666 - if (QLA82XX_ADDR_IN_RANGE(addr, QLA82XX_ADDR_DDR_NET, 659 + if (addr_in_range(addr, QLA82XX_ADDR_DDR_NET, 667 660 QLA82XX_ADDR_DDR_NET_MAX)) 668 661 BUG(); 669 - else if (QLA82XX_ADDR_IN_RANGE(addr, QLA82XX_ADDR_OCM0, 662 + else if (addr_in_range(addr, QLA82XX_ADDR_OCM0, 670 663 QLA82XX_ADDR_OCM0_MAX)) 671 664 return 1; 672 - else if (QLA82XX_ADDR_IN_RANGE(addr, QLA82XX_ADDR_OCM1, 665 + else if 
(addr_in_range(addr, QLA82XX_ADDR_OCM1, 673 666 QLA82XX_ADDR_OCM1_MAX)) 674 667 return 1; 675 - else if (QLA82XX_ADDR_IN_RANGE(addr, QLA82XX_ADDR_QDR_NET, qdr_max)) { 668 + else if (addr_in_range(addr, QLA82XX_ADDR_QDR_NET, qdr_max)) { 676 669 /* QDR network side */ 677 670 window = ((addr - QLA82XX_ADDR_QDR_NET) >> 22) & 0x3f; 678 671 if (ha->qdr_sn_window == window) ··· 929 922 { 930 923 uint32_t off_value, rval = 0; 931 924 932 - WRT_REG_DWORD((void __iomem *)(CRB_WINDOW_2M + ha->nx_pcibase), 933 - (off & 0xFFFF0000)); 925 + WRT_REG_DWORD(CRB_WINDOW_2M + ha->nx_pcibase, off & 0xFFFF0000); 934 926 935 927 /* Read back value to make sure write has gone through */ 936 - RD_REG_DWORD((void __iomem *)(CRB_WINDOW_2M + ha->nx_pcibase)); 928 + RD_REG_DWORD(CRB_WINDOW_2M + ha->nx_pcibase); 937 929 off_value = (off & 0x0000FFFF); 938 930 939 931 if (flag) 940 - WRT_REG_DWORD((void __iomem *) 941 - (off_value + CRB_INDIRECT_2M + ha->nx_pcibase), 942 - data); 932 + WRT_REG_DWORD(off_value + CRB_INDIRECT_2M + ha->nx_pcibase, 933 + data); 943 934 else 944 - rval = RD_REG_DWORD((void __iomem *) 945 - (off_value + CRB_INDIRECT_2M + ha->nx_pcibase)); 935 + rval = RD_REG_DWORD(off_value + CRB_INDIRECT_2M + 936 + ha->nx_pcibase); 946 937 947 938 return rval; 948 939 } ··· 1668 1663 } 1669 1664 1670 1665 len = pci_resource_len(ha->pdev, 0); 1671 - ha->nx_pcibase = 1672 - (unsigned long)ioremap(pci_resource_start(ha->pdev, 0), len); 1666 + ha->nx_pcibase = ioremap(pci_resource_start(ha->pdev, 0), len); 1673 1667 if (!ha->nx_pcibase) { 1674 1668 ql_log_pci(ql_log_fatal, ha->pdev, 0x000e, 1675 1669 "Cannot remap pcibase MMIO, aborting.\n"); ··· 1677 1673 1678 1674 /* Mapping of IO base pointer */ 1679 1675 if (IS_QLA8044(ha)) { 1680 - ha->iobase = 1681 - (device_reg_t *)((uint8_t *)ha->nx_pcibase); 1676 + ha->iobase = ha->nx_pcibase; 1682 1677 } else if (IS_QLA82XX(ha)) { 1683 - ha->iobase = 1684 - (device_reg_t *)((uint8_t *)ha->nx_pcibase + 1685 - 0xbc000 + (ha->pdev->devfn << 11)); 
1678 + ha->iobase = ha->nx_pcibase + 0xbc000 + (ha->pdev->devfn << 11); 1686 1679 } 1687 1680 1688 1681 if (!ql2xdbwr) { 1689 - ha->nxdb_wr_ptr = 1690 - (unsigned long)ioremap((pci_resource_start(ha->pdev, 4) + 1682 + ha->nxdb_wr_ptr = ioremap((pci_resource_start(ha->pdev, 4) + 1691 1683 (ha->pdev->devfn << 12)), 4); 1692 1684 if (!ha->nxdb_wr_ptr) { 1693 1685 ql_log_pci(ql_log_fatal, ha->pdev, 0x000f, ··· 1694 1694 /* Mapping of IO base pointer, 1695 1695 * door bell read and write pointer 1696 1696 */ 1697 - ha->nxdb_rd_ptr = (uint8_t *) ha->nx_pcibase + (512 * 1024) + 1697 + ha->nxdb_rd_ptr = ha->nx_pcibase + (512 * 1024) + 1698 1698 (ha->pdev->devfn * 8); 1699 1699 } else { 1700 - ha->nxdb_wr_ptr = (ha->pdev->devfn == 6 ? 1700 + ha->nxdb_wr_ptr = (void __iomem *)(ha->pdev->devfn == 6 ? 1701 1701 QLA82XX_CAMRAM_DB1 : 1702 1702 QLA82XX_CAMRAM_DB2); 1703 1703 } ··· 1707 1707 ql_dbg_pci(ql_dbg_multiq, ha->pdev, 0xc006, 1708 1708 "nx_pci_base=%p iobase=%p " 1709 1709 "max_req_queues=%d msix_count=%d.\n", 1710 - (void *)ha->nx_pcibase, ha->iobase, 1710 + ha->nx_pcibase, ha->iobase, 1711 1711 ha->max_req_queues, ha->msix_count); 1712 1712 ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0010, 1713 1713 "nx_pci_base=%p iobase=%p " 1714 1714 "max_req_queues=%d msix_count=%d.\n", 1715 - (void *)ha->nx_pcibase, ha->iobase, 1715 + ha->nx_pcibase, ha->iobase, 1716 1716 ha->max_req_queues, ha->msix_count); 1717 1717 return 0; 1718 1718 ··· 1740 1740 ret = pci_set_mwi(ha->pdev); 1741 1741 ha->chip_revision = ha->pdev->revision; 1742 1742 ql_dbg(ql_dbg_init, vha, 0x0043, 1743 - "Chip revision:%d.\n", 1744 - ha->chip_revision); 1743 + "Chip revision:%d; pci_set_mwi() returned %d.\n", 1744 + ha->chip_revision, ret); 1745 1745 return 0; 1746 1746 } 1747 1747 ··· 1768 1768 1769 1769 /* Setup ring parameters in initialization control block. 
*/ 1770 1770 icb = (struct init_cb_81xx *)ha->init_cb; 1771 - icb->request_q_outpointer = __constant_cpu_to_le16(0); 1772 - icb->response_q_inpointer = __constant_cpu_to_le16(0); 1771 + icb->request_q_outpointer = cpu_to_le16(0); 1772 + icb->response_q_inpointer = cpu_to_le16(0); 1773 1773 icb->request_q_length = cpu_to_le16(req->length); 1774 1774 icb->response_q_length = cpu_to_le16(rsp->length); 1775 1775 icb->request_q_address[0] = cpu_to_le32(LSD(req->dma)); ··· 1777 1777 icb->response_q_address[0] = cpu_to_le32(LSD(rsp->dma)); 1778 1778 icb->response_q_address[1] = cpu_to_le32(MSD(rsp->dma)); 1779 1779 1780 - WRT_REG_DWORD((unsigned long __iomem *)&reg->req_q_out[0], 0); 1781 - WRT_REG_DWORD((unsigned long __iomem *)&reg->rsp_q_in[0], 0); 1782 - WRT_REG_DWORD((unsigned long __iomem *)&reg->rsp_q_out[0], 0); 1780 + WRT_REG_DWORD(&reg->req_q_out[0], 0); 1781 + WRT_REG_DWORD(&reg->rsp_q_in[0], 0); 1782 + WRT_REG_DWORD(&reg->rsp_q_out[0], 0); 1783 1783 } 1784 1784 1785 1785 static int ··· 2298 2298 ha->nx_legacy_intr.pci_int_reg = nx_legacy_intr->pci_int_reg; 2299 2299 } 2300 2300 2301 - inline void 2301 + static inline void 2302 2302 qla82xx_set_idc_version(scsi_qla_host_t *vha) 2303 2303 { 2304 2304 int idc_ver; ··· 2481 2481 ql_log(ql_log_info, vha, 0x00a5, 2482 2482 "Firmware loaded successfully from binary blob.\n"); 2483 2483 return QLA_SUCCESS; 2484 - } else { 2485 - ql_log(ql_log_fatal, vha, 0x00a6, 2486 - "Firmware load failed for binary blob.\n"); 2487 - blob->fw = NULL; 2488 - blob = NULL; 2489 - goto fw_load_failed; 2490 2484 } 2491 - return QLA_SUCCESS; 2485 + 2486 + ql_log(ql_log_fatal, vha, 0x00a6, 2487 + "Firmware load failed for binary blob.\n"); 2488 + blob->fw = NULL; 2489 + blob = NULL; 2492 2490 2493 2491 fw_load_failed: 2494 2492 return QLA_FUNCTION_FAILED; ··· 2547 2549 "Do ROM fast read failed.\n"); 2548 2550 goto done_read; 2549 2551 } 2550 - dwptr[i] = __constant_cpu_to_le32(val); 2552 + dwptr[i] = cpu_to_le32(val); 2551 2553 } 2552 2554 
done_read: 2553 2555 return dwptr; ··· 2669 2671 { 2670 2672 int ret; 2671 2673 uint32_t liter; 2672 - uint32_t sec_mask, rest_addr; 2674 + uint32_t rest_addr; 2673 2675 dma_addr_t optrom_dma; 2674 2676 void *optrom = NULL; 2675 2677 int page_mode = 0; ··· 2691 2693 } 2692 2694 2693 2695 rest_addr = ha->fdt_block_size - 1; 2694 - sec_mask = ~rest_addr; 2695 2696 2696 2697 ret = qla82xx_unprotect_flash(ha); 2697 2698 if (ret) { ··· 2786 2789 { 2787 2790 struct qla_hw_data *ha = vha->hw; 2788 2791 struct req_que *req = ha->req_q_map[0]; 2789 - struct device_reg_82xx __iomem *reg; 2790 2792 uint32_t dbval; 2791 2793 2792 2794 /* Adjust ring index. */ ··· 2796 2800 } else 2797 2801 req->ring_ptr++; 2798 2802 2799 - reg = &ha->iobase->isp82; 2800 2803 dbval = 0x04 | (ha->portnum << 5); 2801 2804 2802 2805 dbval = dbval | (req->id << 8) | (req->ring_index << 16); 2803 2806 if (ql2xdbwr) 2804 - qla82xx_wr_32(ha, ha->nxdb_wr_ptr, dbval); 2807 + qla82xx_wr_32(ha, (unsigned long)ha->nxdb_wr_ptr, dbval); 2805 2808 else { 2806 - WRT_REG_DWORD((unsigned long __iomem *)ha->nxdb_wr_ptr, dbval); 2809 + WRT_REG_DWORD(ha->nxdb_wr_ptr, dbval); 2807 2810 wmb(); 2808 - while (RD_REG_DWORD((void __iomem *)ha->nxdb_rd_ptr) != dbval) { 2809 - WRT_REG_DWORD((unsigned long __iomem *)ha->nxdb_wr_ptr, 2810 - dbval); 2811 + while (RD_REG_DWORD(ha->nxdb_rd_ptr) != dbval) { 2812 + WRT_REG_DWORD(ha->nxdb_wr_ptr, dbval); 2811 2813 wmb(); 2812 2814 } 2813 2815 } ··· 3836 3842 loop_cnt = ocm_hdr->op_count; 3837 3843 3838 3844 for (i = 0; i < loop_cnt; i++) { 3839 - r_value = RD_REG_DWORD((void __iomem *) 3840 - (r_addr + ha->nx_pcibase)); 3845 + r_value = RD_REG_DWORD(r_addr + ha->nx_pcibase); 3841 3846 *data_ptr++ = cpu_to_le32(r_value); 3842 3847 r_addr += r_stride; 3843 3848 }
+8 -12
drivers/scsi/qla2xxx/qla_nx2.c
··· 462 462 static void 463 463 qla8044_flash_unlock(scsi_qla_host_t *vha) 464 464 { 465 - int ret_val; 466 465 struct qla_hw_data *ha = vha->hw; 467 466 468 467 /* Reading FLASH_UNLOCK register unlocks the Flash */ 469 468 qla8044_wr_reg(ha, QLA8044_FLASH_LOCK_ID, 0xFF); 470 - ret_val = qla8044_rd_reg(ha, QLA8044_FLASH_UNLOCK); 469 + qla8044_rd_reg(ha, QLA8044_FLASH_UNLOCK); 471 470 } 472 471 473 472 ··· 560 561 return buf; 561 562 } 562 563 563 - inline int 564 + static inline int 564 565 qla8044_need_reset(struct scsi_qla_host *vha) 565 566 { 566 567 uint32_t drv_state, drv_active; ··· 1129 1130 } 1130 1131 1131 1132 for (i = 0; i < count; i++, addr += 16) { 1132 - if (!((QLA8044_ADDR_IN_RANGE(addr, QLA8044_ADDR_QDR_NET, 1133 + if (!((addr_in_range(addr, QLA8044_ADDR_QDR_NET, 1133 1134 QLA8044_ADDR_QDR_NET_MAX)) || 1134 - (QLA8044_ADDR_IN_RANGE(addr, QLA8044_ADDR_DDR_NET, 1135 + (addr_in_range(addr, QLA8044_ADDR_DDR_NET, 1135 1136 QLA8044_ADDR_DDR_NET_MAX)))) { 1136 1137 ret_val = QLA_FUNCTION_FAILED; 1137 1138 goto exit_ms_mem_write_unlock; ··· 1604 1605 qla8044_wr_reg(ha, QLA8044_IDC_DRV_CTRL, idc_ctrl); 1605 1606 } 1606 1607 1607 - inline void 1608 + static inline void 1608 1609 qla8044_set_rst_ready(struct scsi_qla_host *vha) 1609 1610 { 1610 1611 uint32_t drv_state; ··· 2991 2992 uint32_t addr1, addr2, value, data, temp, wrVal; 2992 2993 uint8_t stride, stride2; 2993 2994 uint16_t count; 2994 - uint32_t poll, mask, data_size, modify_mask; 2995 + uint32_t poll, mask, modify_mask; 2995 2996 uint32_t wait_count = 0; 2996 2997 2997 2998 uint32_t *data_ptr = *d_ptr; ··· 3008 3009 poll = rddfe->poll; 3009 3010 mask = rddfe->mask; 3010 3011 modify_mask = rddfe->modify_mask; 3011 - data_size = rddfe->data_size; 3012 3012 3013 3013 addr2 = addr1 + stride; 3014 3014 ··· 3089 3091 uint8_t stride1, stride2; 3090 3092 uint32_t addr3, addr4, addr5, addr6, addr7; 3091 3093 uint16_t count, loop_cnt; 3092 - uint32_t poll, mask; 3094 + uint32_t mask; 3093 3095 uint32_t 
*data_ptr = *d_ptr; 3094 3096 3095 3097 struct qla8044_minidump_entry_rdmdio *rdmdio; ··· 3103 3105 stride2 = rdmdio->stride_2; 3104 3106 count = rdmdio->count; 3105 3107 3106 - poll = rdmdio->poll; 3107 3108 mask = rdmdio->mask; 3108 3109 value2 = rdmdio->value_2; 3109 3110 ··· 3161 3164 static uint32_t qla8044_minidump_process_pollwr(struct scsi_qla_host *vha, 3162 3165 struct qla8044_minidump_entry_hdr *entry_hdr, uint32_t **d_ptr) 3163 3166 { 3164 - uint32_t addr1, addr2, value1, value2, poll, mask, r_value; 3167 + uint32_t addr1, addr2, value1, value2, poll, r_value; 3165 3168 uint32_t wait_count = 0; 3166 3169 struct qla8044_minidump_entry_pollwr *pollwr_hdr; 3167 3170 ··· 3172 3175 value2 = pollwr_hdr->value_2; 3173 3176 3174 3177 poll = pollwr_hdr->poll; 3175 - mask = pollwr_hdr->mask; 3176 3178 3177 3179 while (wait_count < poll) { 3178 3180 qla8044_rd_reg_indirect(vha, addr1, &r_value);
+4 -2
drivers/scsi/qla2xxx/qla_nx2.h
··· 58 58 #define QLA8044_PCI_QDR_NET_MAX ((unsigned long)0x043fffff) 59 59 60 60 /* PCI Windowing for DDR regions. */ 61 - #define QLA8044_ADDR_IN_RANGE(addr, low, high) \ 62 - (((addr) <= (high)) && ((addr) >= (low))) 61 + static inline bool addr_in_range(u64 addr, u64 low, u64 high) 62 + { 63 + return addr <= high && addr >= low; 64 + } 63 65 64 66 /* Indirectly Mapped Registers */ 65 67 #define QLA8044_FLASH_SPI_STATUS 0x2808E010
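The qla_nx2.h change converts the `QLA8044_ADDR_IN_RANGE` function-like macro into the `addr_in_range()` static inline, which gives the arguments real types and evaluates each operand exactly once. A standalone sketch of the before/after, with illustrative values in the test only:

```c
#include <stdbool.h>
#include <stdint.h>

/* Old style: the macro does no type checking and re-evaluates every
 * argument, which bites if a caller passes an expression with side
 * effects. */
#define ADDR_IN_RANGE_MACRO(addr, low, high) \
    (((addr) <= (high)) && ((addr) >= (low)))

/* New style, as in the patch: typed, single-evaluation, and visible to
 * the debugger as a real function. */
static inline bool addr_in_range(uint64_t addr, uint64_t low, uint64_t high)
{
    return addr <= high && addr >= low;
}
```

Both forms treat the range as inclusive at both ends, matching the original macro's `<=`/`>=` comparisons.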
+19 -22
drivers/scsi/qla2xxx/qla_os.c
··· 656 656 "SP reference-count to ZERO -- sp=%p cmd=%p.\n", 657 657 sp, GET_CMD_SP(sp)); 658 658 if (ql2xextended_error_logging & ql_dbg_io) 659 - BUG(); 659 + WARN_ON(atomic_read(&sp->ref_count) == 0); 660 660 return; 661 661 } 662 662 if (!atomic_dec_and_test(&sp->ref_count)) ··· 958 958 } 959 959 960 960 ql_dbg(ql_dbg_taskm, vha, 0x8002, 961 - "Aborting from RISC nexus=%ld:%d:%llu sp=%p cmd=%p\n", 962 - vha->host_no, id, lun, sp, cmd); 961 + "Aborting from RISC nexus=%ld:%d:%llu sp=%p cmd=%p handle=%x\n", 962 + vha->host_no, id, lun, sp, cmd, sp->handle); 963 963 964 964 /* Get a reference to the sp and drop the lock.*/ 965 965 sp_get(sp); ··· 967 967 spin_unlock_irqrestore(&ha->hardware_lock, flags); 968 968 rval = ha->isp_ops->abort_command(sp); 969 969 if (rval) { 970 - if (rval == QLA_FUNCTION_PARAMETER_ERROR) { 971 - /* 972 - * Decrement the ref_count since we can't find the 973 - * command 974 - */ 975 - atomic_dec(&sp->ref_count); 970 + if (rval == QLA_FUNCTION_PARAMETER_ERROR) 976 971 ret = SUCCESS; 977 - } else 972 + else 978 973 ret = FAILED; 979 974 980 975 ql_dbg(ql_dbg_taskm, vha, 0x8003, ··· 981 986 } 982 987 983 988 spin_lock_irqsave(&ha->hardware_lock, flags); 984 - /* 985 - * Clear the slot in the oustanding_cmds array if we can't find the 986 - * command to reclaim the resources. 
987 - */ 988 - if (rval == QLA_FUNCTION_PARAMETER_ERROR) 989 - vha->req->outstanding_cmds[sp->handle] = NULL; 990 989 sp->done(ha, sp, 0); 991 990 spin_unlock_irqrestore(&ha->hardware_lock, flags); 992 991 ··· 2208 2219 ha->device_type |= DT_IIDMA; 2209 2220 ha->fw_srisc_address = RISC_START_ADDRESS_2400; 2210 2221 break; 2222 + case PCI_DEVICE_ID_QLOGIC_ISP2261: 2223 + ha->device_type |= DT_ISP2261; 2224 + ha->device_type |= DT_ZIO_SUPPORTED; 2225 + ha->device_type |= DT_FWI2; 2226 + ha->device_type |= DT_IIDMA; 2227 + ha->fw_srisc_address = RISC_START_ADDRESS_2400; 2228 + break; 2211 2229 } 2212 2230 2213 2231 if (IS_QLA82XX(ha)) ··· 2292 2296 pdev->device == PCI_DEVICE_ID_QLOGIC_ISPF001 || 2293 2297 pdev->device == PCI_DEVICE_ID_QLOGIC_ISP8044 || 2294 2298 pdev->device == PCI_DEVICE_ID_QLOGIC_ISP2071 || 2295 - pdev->device == PCI_DEVICE_ID_QLOGIC_ISP2271) { 2299 + pdev->device == PCI_DEVICE_ID_QLOGIC_ISP2271 || 2300 + pdev->device == PCI_DEVICE_ID_QLOGIC_ISP2261) { 2296 2301 bars = pci_select_bars(pdev, IORESOURCE_MEM); 2297 2302 mem_only = 1; 2298 2303 ql_dbg_pci(ql_dbg_init, pdev, 0x0007, ··· 2971 2974 static void 2972 2975 qla2x00_delete_all_vps(struct qla_hw_data *ha, scsi_qla_host_t *base_vha) 2973 2976 { 2974 - struct Scsi_Host *scsi_host; 2975 2977 scsi_qla_host_t *vha; 2976 2978 unsigned long flags; 2977 2979 ··· 2981 2985 BUG_ON(base_vha->list.next == &ha->vp_list); 2982 2986 /* This assumes first entry in ha->vp_list is always base vha */ 2983 2987 vha = list_first_entry(&base_vha->list, scsi_qla_host_t, list); 2984 - scsi_host = scsi_host_get(vha->host); 2988 + scsi_host_get(vha->host); 2985 2989 2986 2990 spin_unlock_irqrestore(&ha->vport_slock, flags); 2987 2991 mutex_unlock(&ha->vport_lock); ··· 3271 3275 if (!do_login) 3272 3276 return; 3273 3277 3278 + set_bit(RELOGIN_NEEDED, &vha->dpc_flags); 3279 + 3274 3280 if (fcport->login_retry == 0) { 3275 3281 fcport->login_retry = vha->hw->login_retry_count; 3276 - set_bit(RELOGIN_NEEDED, 
&vha->dpc_flags); 3277 3282 3278 3283 ql_dbg(ql_dbg_disc, vha, 0x2067, 3279 3284 "Port login retry %8phN, id = 0x%04x retry cnt=%d.\n", ··· 4798 4801 static int 4799 4802 qla2x00_do_dpc(void *data) 4800 4803 { 4801 - int rval; 4802 4804 scsi_qla_host_t *base_vha; 4803 4805 struct qla_hw_data *ha; 4804 4806 ··· 5029 5033 if (!(test_and_set_bit(LOOP_RESYNC_ACTIVE, 5030 5034 &base_vha->dpc_flags))) { 5031 5035 5032 - rval = qla2x00_loop_resync(base_vha); 5036 + qla2x00_loop_resync(base_vha); 5033 5037 5034 5038 clear_bit(LOOP_RESYNC_ACTIVE, 5035 5039 &base_vha->dpc_flags); ··· 5713 5717 { PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP8044) }, 5714 5718 { PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP2071) }, 5715 5719 { PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP2271) }, 5720 + { PCI_DEVICE(PCI_VENDOR_ID_QLOGIC, PCI_DEVICE_ID_QLOGIC_ISP2261) }, 5716 5721 { 0 }, 5717 5722 }; 5718 5723 MODULE_DEVICE_TABLE(pci, qla2xxx_pci_tbl);
+7 -7
drivers/scsi/qla2xxx/qla_sup.c
··· 316 316 317 317 wprot_old = cpu_to_le16(qla2x00_get_nvram_word(ha, ha->nvram_base)); 318 318 stat = qla2x00_write_nvram_word_tmo(ha, ha->nvram_base, 319 - __constant_cpu_to_le16(0x1234), 100000); 319 + cpu_to_le16(0x1234), 100000); 320 320 wprot = cpu_to_le16(qla2x00_get_nvram_word(ha, ha->nvram_base)); 321 321 if (stat != QLA_SUCCESS || wprot != 0x1234) { 322 322 /* Write enable. */ ··· 691 691 region = (struct qla_flt_region *)&flt[1]; 692 692 ha->isp_ops->read_optrom(vha, (uint8_t *)req->ring, 693 693 flt_addr << 2, OPTROM_BURST_SIZE); 694 - if (*wptr == __constant_cpu_to_le16(0xffff)) 694 + if (*wptr == cpu_to_le16(0xffff)) 695 695 goto no_flash_data; 696 - if (flt->version != __constant_cpu_to_le16(1)) { 696 + if (flt->version != cpu_to_le16(1)) { 697 697 ql_log(ql_log_warn, vha, 0x0047, 698 698 "Unsupported FLT detected: version=0x%x length=0x%x checksum=0x%x.\n", 699 699 le16_to_cpu(flt->version), le16_to_cpu(flt->length), ··· 892 892 fdt = (struct qla_fdt_layout *)req->ring; 893 893 ha->isp_ops->read_optrom(vha, (uint8_t *)req->ring, 894 894 ha->flt_region_fdt << 2, OPTROM_BURST_SIZE); 895 - if (*wptr == __constant_cpu_to_le16(0xffff)) 895 + if (*wptr == cpu_to_le16(0xffff)) 896 896 goto no_flash_data; 897 897 if (fdt->sig[0] != 'Q' || fdt->sig[1] != 'L' || fdt->sig[2] != 'I' || 898 898 fdt->sig[3] != 'D') ··· 991 991 ha->isp_ops->read_optrom(vha, (uint8_t *)req->ring, 992 992 QLA82XX_IDC_PARAM_ADDR , 8); 993 993 994 - if (*wptr == __constant_cpu_to_le32(0xffffffff)) { 994 + if (*wptr == cpu_to_le32(0xffffffff)) { 995 995 ha->fcoe_dev_init_timeout = QLA82XX_ROM_DEV_INIT_TIMEOUT; 996 996 ha->fcoe_reset_timeout = QLA82XX_ROM_DRV_RESET_ACK_TIMEOUT; 997 997 } else { ··· 1051 1051 1052 1052 ha->isp_ops->read_optrom(vha, (uint8_t *)&hdr, 1053 1053 ha->flt_region_npiv_conf << 2, sizeof(struct qla_npiv_header)); 1054 - if (hdr.version == __constant_cpu_to_le16(0xffff)) 1054 + if (hdr.version == cpu_to_le16(0xffff)) 1055 1055 return; 1056 - if (hdr.version != 
__constant_cpu_to_le16(1)) { 1056 + if (hdr.version != cpu_to_le16(1)) { 1057 1057 ql_dbg(ql_dbg_user, vha, 0x7090, 1058 1058 "Unsupported NPIV-Config " 1059 1059 "detected: version=0x%x entries=0x%x checksum=0x%x.\n",
+64 -75
drivers/scsi/qla2xxx/qla_target.c
··· 1141 1141 nack->u.isp24.nport_handle = ntfy->u.isp24.nport_handle; 1142 1142 if (le16_to_cpu(ntfy->u.isp24.status) == IMM_NTFY_ELS) { 1143 1143 nack->u.isp24.flags = ntfy->u.isp24.flags & 1144 - __constant_cpu_to_le32(NOTIFY24XX_FLAGS_PUREX_IOCB); 1144 + cpu_to_le32(NOTIFY24XX_FLAGS_PUREX_IOCB); 1145 1145 } 1146 1146 nack->u.isp24.srr_rx_id = ntfy->u.isp24.srr_rx_id; 1147 1147 nack->u.isp24.status = ntfy->u.isp24.status; ··· 1199 1199 resp->sof_type = abts->sof_type; 1200 1200 resp->exchange_address = abts->exchange_address; 1201 1201 resp->fcp_hdr_le = abts->fcp_hdr_le; 1202 - f_ctl = __constant_cpu_to_le32(F_CTL_EXCH_CONTEXT_RESP | 1202 + f_ctl = cpu_to_le32(F_CTL_EXCH_CONTEXT_RESP | 1203 1203 F_CTL_LAST_SEQ | F_CTL_END_SEQ | 1204 1204 F_CTL_SEQ_INITIATIVE); 1205 1205 p = (uint8_t *)&f_ctl; ··· 1274 1274 ctio->entry_count = 1; 1275 1275 ctio->nport_handle = entry->nport_handle; 1276 1276 ctio->handle = QLA_TGT_SKIP_HANDLE | CTIO_COMPLETION_HANDLE_MARK; 1277 - ctio->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT); 1277 + ctio->timeout = cpu_to_le16(QLA_TGT_TIMEOUT); 1278 1278 ctio->vp_index = vha->vp_idx; 1279 1279 ctio->initiator_id[0] = entry->fcp_hdr_le.d_id[0]; 1280 1280 ctio->initiator_id[1] = entry->fcp_hdr_le.d_id[1]; 1281 1281 ctio->initiator_id[2] = entry->fcp_hdr_le.d_id[2]; 1282 1282 ctio->exchange_addr = entry->exchange_addr_to_abort; 1283 - ctio->u.status1.flags = 1284 - __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | 1285 - CTIO7_FLAGS_TERMINATE); 1283 + ctio->u.status1.flags = cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | 1284 + CTIO7_FLAGS_TERMINATE); 1286 1285 ctio->u.status1.ox_id = cpu_to_le16(entry->fcp_hdr_le.ox_id); 1287 1286 1288 1287 /* Memory Barrier */ ··· 1521 1522 ctio->entry_count = 1; 1522 1523 ctio->handle = QLA_TGT_SKIP_HANDLE | CTIO_COMPLETION_HANDLE_MARK; 1523 1524 ctio->nport_handle = mcmd->sess->loop_id; 1524 - ctio->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT); 1525 + ctio->timeout = cpu_to_le16(QLA_TGT_TIMEOUT); 1525 
1526 ctio->vp_index = ha->vp_idx; 1526 1527 ctio->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2]; 1527 1528 ctio->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1]; 1528 1529 ctio->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0]; 1529 1530 ctio->exchange_addr = atio->u.isp24.exchange_addr; 1530 1531 ctio->u.status1.flags = (atio->u.isp24.attr << 9) | 1531 - __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | 1532 - CTIO7_FLAGS_SEND_STATUS); 1532 + cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_SEND_STATUS); 1533 1533 temp = be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id); 1534 1534 ctio->u.status1.ox_id = cpu_to_le16(temp); 1535 1535 ctio->u.status1.scsi_status = 1536 - __constant_cpu_to_le16(SS_RESPONSE_INFO_LEN_VALID); 1537 - ctio->u.status1.response_len = __constant_cpu_to_le16(8); 1536 + cpu_to_le16(SS_RESPONSE_INFO_LEN_VALID); 1537 + ctio->u.status1.response_len = cpu_to_le16(8); 1538 1538 ctio->u.status1.sense_data[0] = resp_code; 1539 1539 1540 1540 /* Memory Barrier */ ··· 1784 1786 1785 1787 pkt->handle = h | CTIO_COMPLETION_HANDLE_MARK; 1786 1788 pkt->nport_handle = prm->cmd->loop_id; 1787 - pkt->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT); 1789 + pkt->timeout = cpu_to_le16(QLA_TGT_TIMEOUT); 1788 1790 pkt->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2]; 1789 1791 pkt->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1]; 1790 1792 pkt->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0]; ··· 2085 2087 { 2086 2088 prm->sense_buffer_len = min_t(uint32_t, prm->sense_buffer_len, 2087 2089 (uint32_t)sizeof(ctio->u.status1.sense_data)); 2088 - ctio->u.status0.flags |= 2089 - __constant_cpu_to_le16(CTIO7_FLAGS_SEND_STATUS); 2090 + ctio->u.status0.flags |= cpu_to_le16(CTIO7_FLAGS_SEND_STATUS); 2090 2091 if (qlt_need_explicit_conf(prm->tgt->ha, prm->cmd, 0)) { 2091 - ctio->u.status0.flags |= __constant_cpu_to_le16( 2092 + ctio->u.status0.flags |= cpu_to_le16( 2092 2093 CTIO7_FLAGS_EXPLICIT_CONFORM | 2093 2094 CTIO7_FLAGS_CONFORM_REQ); 2094 2095 } ··· 2104 2107 "non GOOD 
status\n"); 2105 2108 goto skip_explict_conf; 2106 2109 } 2107 - ctio->u.status1.flags |= __constant_cpu_to_le16( 2110 + ctio->u.status1.flags |= cpu_to_le16( 2108 2111 CTIO7_FLAGS_EXPLICIT_CONFORM | 2109 2112 CTIO7_FLAGS_CONFORM_REQ); 2110 2113 } 2111 2114 skip_explict_conf: 2112 2115 ctio->u.status1.flags &= 2113 - ~__constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_0); 2116 + ~cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_0); 2114 2117 ctio->u.status1.flags |= 2115 - __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1); 2118 + cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1); 2116 2119 ctio->u.status1.scsi_status |= 2117 - __constant_cpu_to_le16(SS_SENSE_LEN_VALID); 2120 + cpu_to_le16(SS_SENSE_LEN_VALID); 2118 2121 ctio->u.status1.sense_length = 2119 2122 cpu_to_le16(prm->sense_buffer_len); 2120 2123 for (i = 0; i < prm->sense_buffer_len/4; i++) ··· 2134 2137 #endif 2135 2138 } else { 2136 2139 ctio->u.status1.flags &= 2137 - ~__constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_0); 2140 + ~cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_0); 2138 2141 ctio->u.status1.flags |= 2139 - __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1); 2142 + cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1); 2140 2143 ctio->u.status1.sense_length = 0; 2141 2144 memset(ctio->u.status1.sense_data, 0, 2142 2145 sizeof(ctio->u.status1.sense_data)); ··· 2258 2261 qlt_build_ctio_crc2_pkt(struct qla_tgt_prm *prm, scsi_qla_host_t *vha) 2259 2262 { 2260 2263 uint32_t *cur_dsd; 2261 - int sgc; 2262 2264 uint32_t transfer_length = 0; 2263 2265 uint32_t data_bytes; 2264 2266 uint32_t dif_bytes; ··· 2274 2278 struct atio_from_isp *atio = &prm->cmd->atio; 2275 2279 uint16_t t16; 2276 2280 2277 - sgc = 0; 2278 2281 ha = vha->hw; 2279 2282 2280 2283 pkt = (struct ctio_crc2_to_fw *)vha->req->ring_ptr; ··· 2363 2368 2364 2369 pkt->handle = h | CTIO_COMPLETION_HANDLE_MARK; 2365 2370 pkt->nport_handle = prm->cmd->loop_id; 2366 - pkt->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT); 2371 + pkt->timeout = cpu_to_le16(QLA_TGT_TIMEOUT); 2367 2372 
pkt->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2]; 2368 2373 pkt->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1]; 2369 2374 pkt->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0]; ··· 2379 2384 2380 2385 /* Set transfer direction */ 2381 2386 if (cmd->dma_data_direction == DMA_TO_DEVICE) 2382 - pkt->flags = __constant_cpu_to_le16(CTIO7_FLAGS_DATA_IN); 2387 + pkt->flags = cpu_to_le16(CTIO7_FLAGS_DATA_IN); 2383 2388 else if (cmd->dma_data_direction == DMA_FROM_DEVICE) 2384 - pkt->flags = __constant_cpu_to_le16(CTIO7_FLAGS_DATA_OUT); 2389 + pkt->flags = cpu_to_le16(CTIO7_FLAGS_DATA_OUT); 2385 2390 2386 2391 2387 2392 pkt->dseg_count = prm->tot_dsds; ··· 2433 2438 crc_ctx_pkt->blk_size = cpu_to_le16(cmd->blk_sz); 2434 2439 crc_ctx_pkt->prot_opts = cpu_to_le16(fw_prot_opts); 2435 2440 crc_ctx_pkt->byte_count = cpu_to_le32(data_bytes); 2436 - crc_ctx_pkt->guard_seed = __constant_cpu_to_le16(0); 2441 + crc_ctx_pkt->guard_seed = cpu_to_le16(0); 2437 2442 2438 2443 2439 2444 /* Walks data segments */ 2440 - pkt->flags |= __constant_cpu_to_le16(CTIO7_FLAGS_DSD_PTR); 2445 + pkt->flags |= cpu_to_le16(CTIO7_FLAGS_DSD_PTR); 2441 2446 2442 2447 if (!bundling && prm->prot_seg_cnt) { 2443 2448 if (qla24xx_walk_and_build_sglist_no_difb(ha, NULL, cur_dsd, ··· 2543 2548 2544 2549 if (qlt_has_data(cmd) && (xmit_type & QLA_TGT_XMIT_DATA)) { 2545 2550 pkt->u.status0.flags |= 2546 - __constant_cpu_to_le16(CTIO7_FLAGS_DATA_IN | 2551 + cpu_to_le16(CTIO7_FLAGS_DATA_IN | 2547 2552 CTIO7_FLAGS_STATUS_MODE_0); 2548 2553 2549 2554 if (cmd->se_cmd.prot_op == TARGET_PROT_NORMAL) ··· 2555 2560 cpu_to_le16(prm.rq_result); 2556 2561 pkt->u.status0.residual = 2557 2562 cpu_to_le32(prm.residual); 2558 - pkt->u.status0.flags |= __constant_cpu_to_le16( 2563 + pkt->u.status0.flags |= cpu_to_le16( 2559 2564 CTIO7_FLAGS_SEND_STATUS); 2560 2565 if (qlt_need_explicit_conf(ha, cmd, 0)) { 2561 2566 pkt->u.status0.flags |= 2562 - __constant_cpu_to_le16( 2567 + cpu_to_le16( 2563 2568 
CTIO7_FLAGS_EXPLICIT_CONFORM | 2564 2569 CTIO7_FLAGS_CONFORM_REQ); 2565 2570 } ··· 2587 2592 ctio->entry_count = 1; 2588 2593 ctio->entry_type = CTIO_TYPE7; 2589 2594 ctio->dseg_count = 0; 2590 - ctio->u.status1.flags &= ~__constant_cpu_to_le16( 2595 + ctio->u.status1.flags &= ~cpu_to_le16( 2591 2596 CTIO7_FLAGS_DATA_IN); 2592 2597 2593 2598 /* Real finish is ctio_m1's finish */ 2594 2599 pkt->handle |= CTIO_INTERMEDIATE_HANDLE_MARK; 2595 - pkt->u.status0.flags |= __constant_cpu_to_le16( 2600 + pkt->u.status0.flags |= cpu_to_le16( 2596 2601 CTIO7_FLAGS_DONT_RET_CTIO); 2597 2602 2598 2603 /* qlt_24xx_init_ctio_to_isp will correct ··· 2682 2687 } 2683 2688 2684 2689 pkt = (struct ctio7_to_24xx *)prm.pkt; 2685 - pkt->u.status0.flags |= __constant_cpu_to_le16(CTIO7_FLAGS_DATA_OUT | 2690 + pkt->u.status0.flags |= cpu_to_le16(CTIO7_FLAGS_DATA_OUT | 2686 2691 CTIO7_FLAGS_STATUS_MODE_0); 2687 2692 2688 2693 if (cmd->se_cmd.prot_op == TARGET_PROT_NORMAL) ··· 2757 2762 2758 2763 /* Update protection tag */ 2759 2764 if (cmd->prot_sg_cnt) { 2760 - uint32_t i, j = 0, k = 0, num_ent; 2765 + uint32_t i, k = 0, num_ent; 2761 2766 struct scatterlist *sg, *sgl; 2762 2767 2763 2768 ··· 2770 2775 k += num_ent; 2771 2776 continue; 2772 2777 } 2773 - j = blocks_done - k - 1; 2774 2778 k = blocks_done; 2775 2779 break; 2776 2780 } ··· 2963 2969 ctio24 = (struct ctio7_to_24xx *)pkt; 2964 2970 ctio24->entry_type = CTIO_TYPE7; 2965 2971 ctio24->nport_handle = cmd ? 
cmd->loop_id : CTIO7_NHANDLE_UNRECOGNIZED; 2966 - ctio24->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT); 2972 + ctio24->timeout = cpu_to_le16(QLA_TGT_TIMEOUT); 2967 2973 ctio24->vp_index = vha->vp_idx; 2968 2974 ctio24->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2]; 2969 2975 ctio24->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1]; 2970 2976 ctio24->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0]; 2971 2977 ctio24->exchange_addr = atio->u.isp24.exchange_addr; 2972 2978 ctio24->u.status1.flags = (atio->u.isp24.attr << 9) | 2973 - __constant_cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | 2979 + cpu_to_le16(CTIO7_FLAGS_STATUS_MODE_1 | 2974 2980 CTIO7_FLAGS_TERMINATE); 2975 2981 temp = be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id); 2976 2982 ctio24->u.status1.ox_id = cpu_to_le16(temp); ··· 3210 3216 if (ctio != NULL) { 3211 3217 struct ctio7_from_24xx *c = (struct ctio7_from_24xx *)ctio; 3212 3218 term = !(c->flags & 3213 - __constant_cpu_to_le16(OF_TERM_EXCH)); 3219 + cpu_to_le16(OF_TERM_EXCH)); 3214 3220 } else 3215 3221 term = 1; 3216 3222 ··· 3358 3364 { 3359 3365 struct qla_hw_data *ha = vha->hw; 3360 3366 struct se_cmd *se_cmd; 3361 - const struct target_core_fabric_ops *tfo; 3362 3367 struct qla_tgt_cmd *cmd; 3363 3368 3364 3369 if (handle & CTIO_INTERMEDIATE_HANDLE_MARK) { ··· 3375 3382 return; 3376 3383 3377 3384 se_cmd = &cmd->se_cmd; 3378 - tfo = se_cmd->se_tfo; 3379 3385 cmd->cmd_sent_to_fw = 0; 3380 3386 3381 3387 qlt_unmap_sg(vha, cmd); ··· 3472 3480 if (cmd->state == QLA_TGT_STATE_PROCESSED) { 3473 3481 cmd->cmd_flags |= BIT_12; 3474 3482 } else if (cmd->state == QLA_TGT_STATE_NEED_DATA) { 3475 - int rx_status = 0; 3476 - 3477 3483 cmd->state = QLA_TGT_STATE_DATA_IN; 3478 3484 3479 - if (unlikely(status != CTIO_SUCCESS)) 3480 - rx_status = -EIO; 3481 - else 3485 + if (status == CTIO_SUCCESS) 3482 3486 cmd->write_data_transferred = 1; 3483 3487 3484 3488 ha->tgt.tgt_ops->handle_data(cmd); ··· 3916 3928 struct qla_tgt *tgt; 3917 3929 struct qla_tgt_sess *sess; 
3918 3930 uint32_t lun, unpacked_lun; 3919 - int lun_size, fn; 3931 + int fn; 3920 3932 3921 3933 tgt = vha->vha_tgt.qla_tgt; 3922 3934 3923 3935 lun = a->u.isp24.fcp_cmnd.lun; 3924 - lun_size = sizeof(a->u.isp24.fcp_cmnd.lun); 3925 3936 fn = a->u.isp24.fcp_cmnd.task_mgmt_flags; 3926 3937 sess = ha->tgt.tgt_ops->find_sess_by_s_id(vha, 3927 3938 a->u.isp24.fcp_hdr.s_id); ··· 4565 4578 struct qla_hw_data *ha = vha->hw; 4566 4579 unsigned long flags = 0; 4567 4580 4581 + #ifndef __CHECKER__ 4568 4582 if (!ha_locked) 4569 4583 spin_lock_irqsave(&ha->hardware_lock, flags); 4584 + #endif 4570 4585 4571 4586 qlt_send_notify_ack(vha, (void *)&imm->imm_ntfy, 0, 0, 0, 4572 4587 NOTIFY_ACK_SRR_FLAGS_REJECT, 4573 4588 NOTIFY_ACK_SRR_REJECT_REASON_UNABLE_TO_PERFORM, 4574 4589 NOTIFY_ACK_SRR_FLAGS_REJECT_EXPL_NO_EXPL); 4575 4590 4591 + #ifndef __CHECKER__ 4576 4592 if (!ha_locked) 4577 4593 spin_unlock_irqrestore(&ha->hardware_lock, flags); 4594 + #endif 4578 4595 4579 4596 kfree(imm); 4580 4597 } ··· 4922 4931 ctio24 = (struct ctio7_to_24xx *)pkt; 4923 4932 ctio24->entry_type = CTIO_TYPE7; 4924 4933 ctio24->nport_handle = sess->loop_id; 4925 - ctio24->timeout = __constant_cpu_to_le16(QLA_TGT_TIMEOUT); 4934 + ctio24->timeout = cpu_to_le16(QLA_TGT_TIMEOUT); 4926 4935 ctio24->vp_index = vha->vp_idx; 4927 4936 ctio24->initiator_id[0] = atio->u.isp24.fcp_hdr.s_id[2]; 4928 4937 ctio24->initiator_id[1] = atio->u.isp24.fcp_hdr.s_id[1]; 4929 4938 ctio24->initiator_id[2] = atio->u.isp24.fcp_hdr.s_id[0]; 4930 4939 ctio24->exchange_addr = atio->u.isp24.exchange_addr; 4931 4940 ctio24->u.status1.flags = (atio->u.isp24.attr << 9) | 4932 - __constant_cpu_to_le16( 4941 + cpu_to_le16( 4933 4942 CTIO7_FLAGS_STATUS_MODE_1 | CTIO7_FLAGS_SEND_STATUS | 4934 4943 CTIO7_FLAGS_DONT_RET_CTIO); 4935 4944 /* ··· 5257 5266 struct atio_from_isp *atio = (struct atio_from_isp *)pkt; 5258 5267 int rc; 5259 5268 if (atio->u.isp2x.status != 5260 - __constant_cpu_to_le16(ATIO_CDB_VALID)) { 5269 + 
cpu_to_le16(ATIO_CDB_VALID)) { 5261 5270 ql_dbg(ql_dbg_tgt, vha, 0xe05e, 5262 5271 "qla_target(%d): ATIO with error " 5263 5272 "status %x received\n", vha->vp_idx, ··· 5331 5340 le16_to_cpu(entry->u.isp2x.status)); 5332 5341 tgt->notify_ack_expected--; 5333 5342 if (entry->u.isp2x.status != 5334 - __constant_cpu_to_le16(NOTIFY_ACK_SUCCESS)) { 5343 + cpu_to_le16(NOTIFY_ACK_SUCCESS)) { 5335 5344 ql_dbg(ql_dbg_tgt, vha, 0xe061, 5336 5345 "qla_target(%d): NOTIFY_ACK " 5337 5346 "failed %x\n", vha->vp_idx, ··· 5650 5659 uint8_t *s_id = NULL; /* to hide compiler warnings */ 5651 5660 int rc; 5652 5661 uint32_t lun, unpacked_lun; 5653 - int lun_size, fn; 5662 + int fn; 5654 5663 void *iocb; 5655 5664 5656 5665 spin_lock_irqsave(&ha->hardware_lock, flags); ··· 5682 5691 5683 5692 iocb = a; 5684 5693 lun = a->u.isp24.fcp_cmnd.lun; 5685 - lun_size = sizeof(lun); 5686 5694 fn = a->u.isp24.fcp_cmnd.task_mgmt_flags; 5687 5695 unpacked_lun = scsilun_to_int((struct scsi_lun *)&lun); 5688 5696 ··· 6205 6215 ha->tgt.saved_set = 1; 6206 6216 } 6207 6217 6208 - nv->exchange_count = __constant_cpu_to_le16(0xFFFF); 6218 + nv->exchange_count = cpu_to_le16(0xFFFF); 6209 6219 6210 6220 /* Enable target mode */ 6211 - nv->firmware_options_1 |= __constant_cpu_to_le32(BIT_4); 6221 + nv->firmware_options_1 |= cpu_to_le32(BIT_4); 6212 6222 6213 6223 /* Disable ini mode, if requested */ 6214 6224 if (!qla_ini_mode_enabled(vha)) 6215 - nv->firmware_options_1 |= __constant_cpu_to_le32(BIT_5); 6225 + nv->firmware_options_1 |= cpu_to_le32(BIT_5); 6216 6226 6217 6227 /* Disable Full Login after LIP */ 6218 - nv->firmware_options_1 &= __constant_cpu_to_le32(~BIT_13); 6228 + nv->firmware_options_1 &= cpu_to_le32(~BIT_13); 6219 6229 /* Enable initial LIP */ 6220 - nv->firmware_options_1 &= __constant_cpu_to_le32(~BIT_9); 6230 + nv->firmware_options_1 &= cpu_to_le32(~BIT_9); 6221 6231 if (ql2xtgt_tape_enable) 6222 6232 /* Enable FC Tape support */ 6223 6233 nv->firmware_options_2 |= 
cpu_to_le32(BIT_12); ··· 6226 6236 nv->firmware_options_2 &= cpu_to_le32(~BIT_12); 6227 6237 6228 6238 /* Disable Full Login after LIP */ 6229 - nv->host_p &= __constant_cpu_to_le32(~BIT_10); 6239 + nv->host_p &= cpu_to_le32(~BIT_10); 6230 6240 /* Enable target PRLI control */ 6231 - nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_14); 6241 + nv->firmware_options_2 |= cpu_to_le32(BIT_14); 6232 6242 } else { 6233 6243 if (ha->tgt.saved_set) { 6234 6244 nv->exchange_count = ha->tgt.saved_exchange_count; ··· 6250 6260 fc_host_supported_classes(vha->host) = 6251 6261 FC_COS_CLASS2 | FC_COS_CLASS3; 6252 6262 6253 - nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_8); 6263 + nv->firmware_options_2 |= cpu_to_le32(BIT_8); 6254 6264 } else { 6255 6265 if (vha->flags.init_done) 6256 6266 fc_host_supported_classes(vha->host) = FC_COS_CLASS3; 6257 6267 6258 - nv->firmware_options_2 &= ~__constant_cpu_to_le32(BIT_8); 6268 + nv->firmware_options_2 &= ~cpu_to_le32(BIT_8); 6259 6269 } 6260 6270 } 6261 6271 ··· 6267 6277 6268 6278 if (ha->tgt.node_name_set) { 6269 6279 memcpy(icb->node_name, ha->tgt.tgt_node_name, WWN_SIZE); 6270 - icb->firmware_options_1 |= __constant_cpu_to_le32(BIT_14); 6280 + icb->firmware_options_1 |= cpu_to_le32(BIT_14); 6271 6281 } 6272 6282 } 6273 6283 ··· 6292 6302 ha->tgt.saved_set = 1; 6293 6303 } 6294 6304 6295 - nv->exchange_count = __constant_cpu_to_le16(0xFFFF); 6305 + nv->exchange_count = cpu_to_le16(0xFFFF); 6296 6306 6297 6307 /* Enable target mode */ 6298 - nv->firmware_options_1 |= __constant_cpu_to_le32(BIT_4); 6308 + nv->firmware_options_1 |= cpu_to_le32(BIT_4); 6299 6309 6300 6310 /* Disable ini mode, if requested */ 6301 6311 if (!qla_ini_mode_enabled(vha)) 6302 - nv->firmware_options_1 |= 6303 - __constant_cpu_to_le32(BIT_5); 6312 + nv->firmware_options_1 |= cpu_to_le32(BIT_5); 6304 6313 6305 6314 /* Disable Full Login after LIP */ 6306 - nv->firmware_options_1 &= __constant_cpu_to_le32(~BIT_13); 6315 + nv->firmware_options_1 &= 
cpu_to_le32(~BIT_13); 6307 6316 /* Enable initial LIP */ 6308 - nv->firmware_options_1 &= __constant_cpu_to_le32(~BIT_9); 6317 + nv->firmware_options_1 &= cpu_to_le32(~BIT_9); 6309 6318 if (ql2xtgt_tape_enable) 6310 6319 /* Enable FC tape support */ 6311 6320 nv->firmware_options_2 |= cpu_to_le32(BIT_12); ··· 6313 6324 nv->firmware_options_2 &= cpu_to_le32(~BIT_12); 6314 6325 6315 6326 /* Disable Full Login after LIP */ 6316 - nv->host_p &= __constant_cpu_to_le32(~BIT_10); 6327 + nv->host_p &= cpu_to_le32(~BIT_10); 6317 6328 /* Enable target PRLI control */ 6318 - nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_14); 6329 + nv->firmware_options_2 |= cpu_to_le32(BIT_14); 6319 6330 } else { 6320 6331 if (ha->tgt.saved_set) { 6321 6332 nv->exchange_count = ha->tgt.saved_exchange_count; ··· 6337 6348 fc_host_supported_classes(vha->host) = 6338 6349 FC_COS_CLASS2 | FC_COS_CLASS3; 6339 6350 6340 - nv->firmware_options_2 |= __constant_cpu_to_le32(BIT_8); 6351 + nv->firmware_options_2 |= cpu_to_le32(BIT_8); 6341 6352 } else { 6342 6353 if (vha->flags.init_done) 6343 6354 fc_host_supported_classes(vha->host) = FC_COS_CLASS3; 6344 6355 6345 - nv->firmware_options_2 &= ~__constant_cpu_to_le32(BIT_8); 6356 + nv->firmware_options_2 &= ~cpu_to_le32(BIT_8); 6346 6357 } 6347 6358 } 6348 6359 ··· 6357 6368 6358 6369 if (ha->tgt.node_name_set) { 6359 6370 memcpy(icb->node_name, ha->tgt.tgt_node_name, WWN_SIZE); 6360 - icb->firmware_options_1 |= __constant_cpu_to_le32(BIT_14); 6371 + icb->firmware_options_1 |= cpu_to_le32(BIT_14); 6361 6372 } 6362 6373 } 6363 6374
+15 -12
drivers/scsi/qla2xxx/qla_tmpl.c
··· 137 137 } 138 138 139 139 static inline void 140 - qla27xx_read8(void *window, void *buf, ulong *len) 140 + qla27xx_read8(void __iomem *window, void *buf, ulong *len) 141 141 { 142 142 uint8_t value = ~0; 143 143 144 144 if (buf) { 145 - value = RD_REG_BYTE((__iomem void *)window); 145 + value = RD_REG_BYTE(window); 146 146 } 147 147 qla27xx_insert32(value, buf, len); 148 148 } 149 149 150 150 static inline void 151 - qla27xx_read16(void *window, void *buf, ulong *len) 151 + qla27xx_read16(void __iomem *window, void *buf, ulong *len) 152 152 { 153 153 uint16_t value = ~0; 154 154 155 155 if (buf) { 156 - value = RD_REG_WORD((__iomem void *)window); 156 + value = RD_REG_WORD(window); 157 157 } 158 158 qla27xx_insert32(value, buf, len); 159 159 } 160 160 161 161 static inline void 162 - qla27xx_read32(void *window, void *buf, ulong *len) 162 + qla27xx_read32(void __iomem *window, void *buf, ulong *len) 163 163 { 164 164 uint32_t value = ~0; 165 165 166 166 if (buf) { 167 - value = RD_REG_DWORD((__iomem void *)window); 167 + value = RD_REG_DWORD(window); 168 168 } 169 169 qla27xx_insert32(value, buf, len); 170 170 } 171 171 172 - static inline void (*qla27xx_read_vector(uint width))(void *, void *, ulong *) 172 + static inline void (*qla27xx_read_vector(uint width))(void __iomem*, void *, ulong *) 173 173 { 174 174 return 175 175 (width == 1) ? 
qla27xx_read8 : ··· 181 181 qla27xx_read_reg(__iomem struct device_reg_24xx *reg, 182 182 uint offset, void *buf, ulong *len) 183 183 { 184 - void *window = (void *)reg + offset; 184 + void __iomem *window = (void __iomem *)reg + offset; 185 185 186 186 qla27xx_read32(window, buf, len); 187 187 } ··· 202 202 uint32_t addr, uint offset, uint count, uint width, void *buf, 203 203 ulong *len) 204 204 { 205 - void *window = (void *)reg + offset; 206 - void (*readn)(void *, void *, ulong *) = qla27xx_read_vector(width); 205 + void __iomem *window = (void __iomem *)reg + offset; 206 + void (*readn)(void __iomem*, void *, ulong *) = qla27xx_read_vector(width); 207 207 208 208 qla27xx_write_reg(reg, IOBASE_ADDR, addr, buf); 209 209 while (count--) { ··· 805 805 qla27xx_driver_info(struct qla27xx_fwdt_template *tmp) 806 806 { 807 807 uint8_t v[] = { 0, 0, 0, 0, 0, 0 }; 808 - int rval = 0; 809 808 810 - rval = sscanf(qla2x00_version_str, "%hhu.%hhu.%hhu.%hhu.%hhu.%hhu", 809 + sscanf(qla2x00_version_str, "%hhu.%hhu.%hhu.%hhu.%hhu.%hhu", 811 810 v+0, v+1, v+2, v+3, v+4, v+5); 812 811 813 812 tmp->driver_info[0] = v[3] << 24 | v[2] << 16 | v[1] << 8 | v[0]; ··· 939 940 { 940 941 ulong flags = 0; 941 942 943 + #ifndef __CHECKER__ 942 944 if (!hardware_locked) 943 945 spin_lock_irqsave(&vha->hw->hardware_lock, flags); 946 + #endif 944 947 945 948 if (!vha->hw->fw_dump) 946 949 ql_log(ql_log_warn, vha, 0xd01e, "fwdump buffer missing.\n"); ··· 955 954 else 956 955 qla27xx_execute_fwdt_template(vha); 957 956 957 + #ifndef __CHECKER__ 958 958 if (!hardware_locked) 959 959 spin_unlock_irqrestore(&vha->hw->hardware_lock, flags); 960 + #endif 960 961 }
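The qla_tmpl.c (and qla_target.c) hunks wrap conditional `spin_lock_irqsave`/`spin_unlock_irqrestore` pairs in `#ifndef __CHECKER__`: sparse cannot model "take the lock only if `!hardware_locked`", so the conditional pair is hidden from the checker to silence a false context-imbalance warning. The pattern itself looks like the sketch below, where a plain flag stands in for the hardware spinlock and everything except the `hardware_locked` convention is invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

static bool hw_locked;   /* toy stand-in for the hardware spinlock */
static int dump_count;

static void lock_hw(void)   { assert(!hw_locked); hw_locked = true;  }
static void unlock_hw(void) { assert(hw_locked);  hw_locked = false; }

/* Conditional locking in the style of qla27xx_fwdump(): acquire the
 * lock only when the caller does not already hold it. A path-sensitive
 * checker cannot pair these lock/unlock sites, hence the kernel patch
 * compiles them out under __CHECKER__. */
void fwdump(bool hardware_locked)
{
    if (!hardware_locked)
        lock_hw();

    dump_count++;            /* the work done under the lock */

    if (!hardware_locked)
        unlock_hw();
}
```

Either call style leaves the lock state exactly as the caller established it, which is the invariant the checker fails to prove.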
+1 -1
drivers/scsi/qla2xxx/qla_version.h
··· 7 7 /* 8 8 * Driver version 9 9 */ 10 - #define QLA2XXX_VERSION "8.07.00.18-k" 10 + #define QLA2XXX_VERSION "8.07.00.26-k" 11 11 12 12 #define QLA_DRIVER_MAJOR_VER 8 13 13 #define QLA_DRIVER_MINOR_VER 7
+6
drivers/scsi/qla2xxx/tcm_qla2xxx.c
··· 420 420 421 421 static int tcm_qla2xxx_get_cmd_state(struct se_cmd *se_cmd) 422 422 { 423 + if (!(se_cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)) { 424 + struct qla_tgt_cmd *cmd = container_of(se_cmd, 425 + struct qla_tgt_cmd, se_cmd); 426 + return cmd->state; 427 + } 428 + 423 429 return 0; 424 430 } 425 431
+9
drivers/scsi/scsi_error.c
··· 420 420 evt_type = SDEV_EVT_MODE_PARAMETER_CHANGE_REPORTED; 421 421 sdev_printk(KERN_WARNING, sdev, 422 422 "Mode parameters changed"); 423 + } else if (sshdr->asc == 0x2a && sshdr->ascq == 0x06) { 424 + evt_type = SDEV_EVT_ALUA_STATE_CHANGE_REPORTED; 425 + sdev_printk(KERN_WARNING, sdev, 426 + "Asymmetric access state changed"); 423 427 } else if (sshdr->asc == 0x2a && sshdr->ascq == 0x09) { 424 428 evt_type = SDEV_EVT_CAPACITY_CHANGE_REPORTED; 425 429 sdev_printk(KERN_WARNING, sdev, ··· 1159 1155 struct Scsi_Host *shost; 1160 1156 int rtn; 1161 1157 1158 + /* 1159 + * If SCSI_EH_ABORT_SCHEDULED has been set, it is timeout IO, 1160 + * should not get sense. 1161 + */ 1162 1162 list_for_each_entry_safe(scmd, next, work_q, eh_entry) { 1163 1163 if ((scmd->eh_eflags & SCSI_EH_CANCEL_CMD) || 1164 + (scmd->eh_eflags & SCSI_EH_ABORT_SCHEDULED) || 1164 1165 SCSI_SENSE_VALID(scmd)) 1165 1166 continue; 1166 1167
+10 -1
drivers/scsi/scsi_lib.c
··· 2423 2423 unsigned char cmd[12]; 2424 2424 int use_10_for_ms; 2425 2425 int header_length; 2426 - int result; 2426 + int result, retry_count = retries; 2427 2427 struct scsi_sense_hdr my_sshdr; 2428 2428 2429 2429 memset(data, 0, sizeof(*data)); ··· 2502 2502 data->block_descriptor_length = buffer[3]; 2503 2503 } 2504 2504 data->header_length = header_length; 2505 + } else if ((status_byte(result) == CHECK_CONDITION) && 2506 + scsi_sense_valid(sshdr) && 2507 + sshdr->sense_key == UNIT_ATTENTION && retry_count) { 2508 + retry_count--; 2509 + goto retry; 2505 2510 } 2506 2511 2507 2512 return result; ··· 2712 2707 case SDEV_EVT_LUN_CHANGE_REPORTED: 2713 2708 envp[idx++] = "SDEV_UA=REPORTED_LUNS_DATA_HAS_CHANGED"; 2714 2709 break; 2710 + case SDEV_EVT_ALUA_STATE_CHANGE_REPORTED: 2711 + envp[idx++] = "SDEV_UA=ASYMMETRIC_ACCESS_STATE_CHANGED"; 2712 + break; 2715 2713 default: 2716 2714 /* do nothing */ 2717 2715 break; ··· 2818 2810 case SDEV_EVT_SOFT_THRESHOLD_REACHED_REPORTED: 2819 2811 case SDEV_EVT_MODE_PARAMETER_CHANGE_REPORTED: 2820 2812 case SDEV_EVT_LUN_CHANGE_REPORTED: 2813 + case SDEV_EVT_ALUA_STATE_CHANGE_REPORTED: 2821 2814 default: 2822 2815 /* do nothing */ 2823 2816 break;
+8 -3
drivers/scsi/scsi_transport_iscsi.c
··· 2042 2042 session->transport = transport; 2043 2043 session->creator = -1; 2044 2044 session->recovery_tmo = 120; 2045 + session->recovery_tmo_sysfs_override = false; 2045 2046 session->state = ISCSI_SESSION_FREE; 2046 2047 INIT_DELAYED_WORK(&session->recovery_work, session_recovery_timedout); 2047 2048 INIT_LIST_HEAD(&session->sess_list); ··· 2787 2786 switch (ev->u.set_param.param) { 2788 2787 case ISCSI_PARAM_SESS_RECOVERY_TMO: 2789 2788 sscanf(data, "%d", &value); 2790 - session->recovery_tmo = value; 2789 + if (!session->recovery_tmo_sysfs_override) 2790 + session->recovery_tmo = value; 2791 2791 break; 2792 2792 default: 2793 2793 err = transport->set_param(conn, ev->u.set_param.param, ··· 4051 4049 if ((session->state == ISCSI_SESSION_FREE) || \ 4052 4050 (session->state == ISCSI_SESSION_FAILED)) \ 4053 4051 return -EBUSY; \ 4054 - if (strncmp(buf, "off", 3) == 0) \ 4052 + if (strncmp(buf, "off", 3) == 0) { \ 4055 4053 session->field = -1; \ 4056 - else { \ 4054 + session->field##_sysfs_override = true; \ 4055 + } else { \ 4057 4056 val = simple_strtoul(buf, &cp, 0); \ 4058 4057 if (*cp != '\0' && *cp != '\n') \ 4059 4058 return -EINVAL; \ 4060 4059 session->field = val; \ 4060 + session->field##_sysfs_override = true; \ 4061 4061 } \ 4062 4062 return count; \ 4063 4063 } ··· 4070 4066 static ISCSI_CLASS_ATTR(priv_sess, field, S_IRUGO | S_IWUSR, \ 4071 4067 show_priv_session_##field, \ 4072 4068 store_priv_session_##field) 4069 + 4073 4070 iscsi_priv_session_rw_attr(recovery_tmo, "%d"); 4074 4071 4075 4072 static struct attribute *iscsi_session_attrs[] = {
+23 -60
drivers/scsi/st.c
··· 85 85 86 86 static struct class st_sysfs_class; 87 87 static const struct attribute_group *st_dev_groups[]; 88 + static const struct attribute_group *st_drv_groups[]; 88 89 89 90 MODULE_AUTHOR("Kai Makisara"); 90 91 MODULE_DESCRIPTION("SCSI tape (st) driver"); ··· 199 198 static int st_probe(struct device *); 200 199 static int st_remove(struct device *); 201 200 202 - static int do_create_sysfs_files(void); 203 - static void do_remove_sysfs_files(void); 204 - 205 201 static struct scsi_driver st_template = { 206 202 .gendrv = { 207 203 .name = "st", 208 204 .owner = THIS_MODULE, 209 205 .probe = st_probe, 210 206 .remove = st_remove, 207 + .groups = st_drv_groups, 211 208 }, 212 209 }; 213 210 ··· 4403 4404 if (err) 4404 4405 goto err_chrdev; 4405 4406 4406 - err = do_create_sysfs_files(); 4407 - if (err) 4408 - goto err_scsidrv; 4409 - 4410 4407 return 0; 4411 4408 4412 - err_scsidrv: 4413 - scsi_unregister_driver(&st_template.gendrv); 4414 4409 err_chrdev: 4415 4410 unregister_chrdev_region(MKDEV(SCSI_TAPE_MAJOR, 0), 4416 4411 ST_MAX_TAPE_ENTRIES); ··· 4415 4422 4416 4423 static void __exit exit_st(void) 4417 4424 { 4418 - do_remove_sysfs_files(); 4419 4425 scsi_unregister_driver(&st_template.gendrv); 4420 4426 unregister_chrdev_region(MKDEV(SCSI_TAPE_MAJOR, 0), 4421 4427 ST_MAX_TAPE_ENTRIES); 4422 4428 class_unregister(&st_sysfs_class); 4429 + idr_destroy(&st_index_idr); 4423 4430 printk(KERN_INFO "st: Unloaded.\n"); 4424 4431 } 4425 4432 ··· 4428 4435 4429 4436 4430 4437 /* The sysfs driver interface. 
Read-only at the moment */ 4431 - static ssize_t st_try_direct_io_show(struct device_driver *ddp, char *buf) 4438 + static ssize_t try_direct_io_show(struct device_driver *ddp, char *buf) 4432 4439 { 4433 - return snprintf(buf, PAGE_SIZE, "%d\n", try_direct_io); 4440 + return scnprintf(buf, PAGE_SIZE, "%d\n", try_direct_io); 4434 4441 } 4435 - static DRIVER_ATTR(try_direct_io, S_IRUGO, st_try_direct_io_show, NULL); 4442 + static DRIVER_ATTR_RO(try_direct_io); 4436 4443 4437 - static ssize_t st_fixed_buffer_size_show(struct device_driver *ddp, char *buf) 4444 + static ssize_t fixed_buffer_size_show(struct device_driver *ddp, char *buf) 4438 4445 { 4439 - return snprintf(buf, PAGE_SIZE, "%d\n", st_fixed_buffer_size); 4446 + return scnprintf(buf, PAGE_SIZE, "%d\n", st_fixed_buffer_size); 4440 4447 } 4441 - static DRIVER_ATTR(fixed_buffer_size, S_IRUGO, st_fixed_buffer_size_show, NULL); 4448 + static DRIVER_ATTR_RO(fixed_buffer_size); 4442 4449 4443 - static ssize_t st_max_sg_segs_show(struct device_driver *ddp, char *buf) 4450 + static ssize_t max_sg_segs_show(struct device_driver *ddp, char *buf) 4444 4451 { 4445 - return snprintf(buf, PAGE_SIZE, "%d\n", st_max_sg_segs); 4452 + return scnprintf(buf, PAGE_SIZE, "%d\n", st_max_sg_segs); 4446 4453 } 4447 - static DRIVER_ATTR(max_sg_segs, S_IRUGO, st_max_sg_segs_show, NULL); 4454 + static DRIVER_ATTR_RO(max_sg_segs); 4448 4455 4449 - static ssize_t st_version_show(struct device_driver *ddd, char *buf) 4456 + static ssize_t version_show(struct device_driver *ddd, char *buf) 4450 4457 { 4451 - return snprintf(buf, PAGE_SIZE, "[%s]\n", verstr); 4458 + return scnprintf(buf, PAGE_SIZE, "[%s]\n", verstr); 4452 4459 } 4453 - static DRIVER_ATTR(version, S_IRUGO, st_version_show, NULL); 4460 + static DRIVER_ATTR_RO(version); 4454 4461 4455 - static int do_create_sysfs_files(void) 4456 - { 4457 - struct device_driver *sysfs = &st_template.gendrv; 4458 - int err; 4459 - 4460 - err = driver_create_file(sysfs, 
&driver_attr_try_direct_io); 4461 - if (err) 4462 - return err; 4463 - err = driver_create_file(sysfs, &driver_attr_fixed_buffer_size); 4464 - if (err) 4465 - goto err_try_direct_io; 4466 - err = driver_create_file(sysfs, &driver_attr_max_sg_segs); 4467 - if (err) 4468 - goto err_attr_fixed_buf; 4469 - err = driver_create_file(sysfs, &driver_attr_version); 4470 - if (err) 4471 - goto err_attr_max_sg; 4472 - 4473 - return 0; 4474 - 4475 - err_attr_max_sg: 4476 - driver_remove_file(sysfs, &driver_attr_max_sg_segs); 4477 - err_attr_fixed_buf: 4478 - driver_remove_file(sysfs, &driver_attr_fixed_buffer_size); 4479 - err_try_direct_io: 4480 - driver_remove_file(sysfs, &driver_attr_try_direct_io); 4481 - return err; 4482 - } 4483 - 4484 - static void do_remove_sysfs_files(void) 4485 - { 4486 - struct device_driver *sysfs = &st_template.gendrv; 4487 - 4488 - driver_remove_file(sysfs, &driver_attr_version); 4489 - driver_remove_file(sysfs, &driver_attr_max_sg_segs); 4490 - driver_remove_file(sysfs, &driver_attr_fixed_buffer_size); 4491 - driver_remove_file(sysfs, &driver_attr_try_direct_io); 4492 - } 4462 + static struct attribute *st_drv_attrs[] = { 4463 + &driver_attr_try_direct_io.attr, 4464 + &driver_attr_fixed_buffer_size.attr, 4465 + &driver_attr_max_sg_segs.attr, 4466 + &driver_attr_version.attr, 4467 + NULL, 4468 + }; 4469 + ATTRIBUTE_GROUPS(st_drv); 4493 4470 4494 4471 /* The sysfs simple class interface */ 4495 4472 static ssize_t
+141 -83
drivers/scsi/storvsc_drv.c
··· 56 56 * V1 RC > 2008/1/31: 2.0 57 57 * Win7: 4.2 58 58 * Win8: 5.1 59 + * Win8.1: 6.0 60 + * Win10: 6.2 59 61 */ 60 62 63 + #define VMSTOR_PROTO_VERSION(MAJOR_, MINOR_) ((((MAJOR_) & 0xff) << 8) | \ 64 + (((MINOR_) & 0xff))) 61 65 62 - #define VMSTOR_WIN7_MAJOR 4 63 - #define VMSTOR_WIN7_MINOR 2 64 - 65 - #define VMSTOR_WIN8_MAJOR 5 66 - #define VMSTOR_WIN8_MINOR 1 67 - 66 + #define VMSTOR_PROTO_VERSION_WIN6 VMSTOR_PROTO_VERSION(2, 0) 67 + #define VMSTOR_PROTO_VERSION_WIN7 VMSTOR_PROTO_VERSION(4, 2) 68 + #define VMSTOR_PROTO_VERSION_WIN8 VMSTOR_PROTO_VERSION(5, 1) 69 + #define VMSTOR_PROTO_VERSION_WIN8_1 VMSTOR_PROTO_VERSION(6, 0) 70 + #define VMSTOR_PROTO_VERSION_WIN10 VMSTOR_PROTO_VERSION(6, 2) 68 71 69 72 /* Packet structure describing virtual storage requests. */ 70 73 enum vstor_packet_operation { ··· 151 148 152 149 /* 153 150 * Sense buffer size changed in win8; have a run-time 154 - * variable to track the size we should use. 151 + * variable to track the size we should use. This value will 152 + * likely change during protocol negotiation but it is valid 153 + * to start by assuming pre-Win8. 155 154 */ 156 - static int sense_buffer_size; 155 + static int sense_buffer_size = PRE_WIN8_STORVSC_SENSE_BUFFER_SIZE; 157 156 158 157 /* 159 - * The size of the vmscsi_request has changed in win8. The 160 - * additional size is because of new elements added to the 161 - * structure. These elements are valid only when we are talking 162 - * to a win8 host. 163 - * Track the correction to size we need to apply. 164 - */ 165 - 166 - static int vmscsi_size_delta; 167 - static int vmstor_current_major; 168 - static int vmstor_current_minor; 158 + * The storage protocol version is determined during the 159 + * initial exchange with the host. It will indicate which 160 + * storage functionality is available in the host. 
161 + */ 162 + static int vmstor_proto_version; 169 163 170 164 struct vmscsi_win8_extension { 171 165 /* ··· 204 204 struct vmscsi_win8_extension win8_extension; 205 205 206 206 } __attribute((packed)); 207 + 208 + 209 + /* 210 + * The size of the vmscsi_request has changed in win8. The 211 + * additional size is because of new elements added to the 212 + * structure. These elements are valid only when we are talking 213 + * to a win8 host. 214 + * Track the correction to size we need to apply. This value 215 + * will likely change during protocol negotiation but it is 216 + * valid to start by assuming pre-Win8. 217 + */ 218 + static int vmscsi_size_delta = sizeof(struct vmscsi_win8_extension); 219 + 220 + /* 221 + * The list of storage protocols in order of preference. 222 + */ 223 + struct vmstor_protocol { 224 + int protocol_version; 225 + int sense_buffer_size; 226 + int vmscsi_size_delta; 227 + }; 228 + 229 + 230 + static const struct vmstor_protocol vmstor_protocols[] = { 231 + { 232 + VMSTOR_PROTO_VERSION_WIN10, 233 + POST_WIN7_STORVSC_SENSE_BUFFER_SIZE, 234 + 0 235 + }, 236 + { 237 + VMSTOR_PROTO_VERSION_WIN8_1, 238 + POST_WIN7_STORVSC_SENSE_BUFFER_SIZE, 239 + 0 240 + }, 241 + { 242 + VMSTOR_PROTO_VERSION_WIN8, 243 + POST_WIN7_STORVSC_SENSE_BUFFER_SIZE, 244 + 0 245 + }, 246 + { 247 + VMSTOR_PROTO_VERSION_WIN7, 248 + PRE_WIN8_STORVSC_SENSE_BUFFER_SIZE, 249 + sizeof(struct vmscsi_win8_extension), 250 + }, 251 + { 252 + VMSTOR_PROTO_VERSION_WIN6, 253 + PRE_WIN8_STORVSC_SENSE_BUFFER_SIZE, 254 + sizeof(struct vmscsi_win8_extension), 255 + } 256 + }; 207 257 208 258 209 259 /* ··· 476 426 struct storvsc_scan_work *wrk; 477 427 struct Scsi_Host *host; 478 428 struct scsi_device *sdev; 479 - unsigned long flags; 480 429 481 430 wrk = container_of(work, struct storvsc_scan_work, work); 482 431 host = wrk->host; ··· 492 443 * may have been removed this way. 
493 444 */ 494 445 mutex_lock(&host->scan_mutex); 495 - spin_lock_irqsave(host->host_lock, flags); 496 - list_for_each_entry(sdev, &host->__devices, siblings) { 497 - spin_unlock_irqrestore(host->host_lock, flags); 446 + shost_for_each_device(sdev, host) 498 447 scsi_test_unit_ready(sdev, 1, 1, NULL); 499 - spin_lock_irqsave(host->host_lock, flags); 500 - continue; 501 - } 502 - spin_unlock_irqrestore(host->host_lock, flags); 503 448 mutex_unlock(&host->scan_mutex); 504 449 /* 505 450 * Now scan the host to discover LUNs that may have been added. ··· 524 481 kfree(wrk); 525 482 } 526 483 527 - /* 528 - * Major/minor macros. Minor version is in LSB, meaning that earlier flat 529 - * version numbers will be interpreted as "0.x" (i.e., 1 becomes 0.1). 530 - */ 531 - 532 - static inline u16 storvsc_get_version(u8 major, u8 minor) 533 - { 534 - u16 version; 535 - 536 - version = ((major << 8) | minor); 537 - return version; 538 - } 539 484 540 485 /* 541 486 * We can get incoming messages from the host that are not in response to ··· 916 885 struct storvsc_device *stor_device; 917 886 struct storvsc_cmd_request *request; 918 887 struct vstor_packet *vstor_packet; 919 - int ret, t; 888 + int ret, t, i; 920 889 int max_chns; 921 890 bool process_sub_channels = false; 922 891 ··· 952 921 } 953 922 954 923 if (vstor_packet->operation != VSTOR_OPERATION_COMPLETE_IO || 955 - vstor_packet->status != 0) 924 + vstor_packet->status != 0) { 925 + ret = -EINVAL; 956 926 goto cleanup; 927 + } 957 928 958 929 959 - /* reuse the packet for version range supported */ 960 - memset(vstor_packet, 0, sizeof(struct vstor_packet)); 961 - vstor_packet->operation = VSTOR_OPERATION_QUERY_PROTOCOL_VERSION; 962 - vstor_packet->flags = REQUEST_COMPLETION_FLAG; 930 + for (i = 0; i < ARRAY_SIZE(vmstor_protocols); i++) { 931 + /* reuse the packet for version range supported */ 932 + memset(vstor_packet, 0, sizeof(struct vstor_packet)); 933 + vstor_packet->operation = 934 + 
VSTOR_OPERATION_QUERY_PROTOCOL_VERSION; 935 + vstor_packet->flags = REQUEST_COMPLETION_FLAG; 963 936 964 - vstor_packet->version.major_minor = 965 - storvsc_get_version(vmstor_current_major, vmstor_current_minor); 937 + vstor_packet->version.major_minor = 938 + vmstor_protocols[i].protocol_version; 966 939 967 - /* 968 - * The revision number is only used in Windows; set it to 0. 969 - */ 970 - vstor_packet->version.revision = 0; 940 + /* 941 + * The revision number is only used in Windows; set it to 0. 942 + */ 943 + vstor_packet->version.revision = 0; 971 944 972 - ret = vmbus_sendpacket(device->channel, vstor_packet, 945 + ret = vmbus_sendpacket(device->channel, vstor_packet, 973 946 (sizeof(struct vstor_packet) - 974 947 vmscsi_size_delta), 975 948 (unsigned long)request, 976 949 VM_PKT_DATA_INBAND, 977 950 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 978 - if (ret != 0) 979 - goto cleanup; 951 + if (ret != 0) 952 + goto cleanup; 980 953 981 - t = wait_for_completion_timeout(&request->wait_event, 5*HZ); 982 - if (t == 0) { 983 - ret = -ETIMEDOUT; 984 - goto cleanup; 954 + t = wait_for_completion_timeout(&request->wait_event, 5*HZ); 955 + if (t == 0) { 956 + ret = -ETIMEDOUT; 957 + goto cleanup; 958 + } 959 + 960 + if (vstor_packet->operation != VSTOR_OPERATION_COMPLETE_IO) { 961 + ret = -EINVAL; 962 + goto cleanup; 963 + } 964 + 965 + if (vstor_packet->status == 0) { 966 + vmstor_proto_version = 967 + vmstor_protocols[i].protocol_version; 968 + 969 + sense_buffer_size = 970 + vmstor_protocols[i].sense_buffer_size; 971 + 972 + vmscsi_size_delta = 973 + vmstor_protocols[i].vmscsi_size_delta; 974 + 975 + break; 976 + } 985 977 } 986 978 987 - if (vstor_packet->operation != VSTOR_OPERATION_COMPLETE_IO || 988 - vstor_packet->status != 0) 979 + if (vstor_packet->status != 0) { 980 + ret = -EINVAL; 989 981 goto cleanup; 982 + } 990 983 991 984 992 985 memset(vstor_packet, 0, sizeof(struct vstor_packet)); ··· 1034 979 } 1035 980 1036 981 if (vstor_packet->operation != 
VSTOR_OPERATION_COMPLETE_IO || 1037 - vstor_packet->status != 0) 982 + vstor_packet->status != 0) { 983 + ret = -EINVAL; 1038 984 goto cleanup; 985 + } 1039 986 1040 987 /* 1041 988 * Check to see if multi-channel support is there. ··· 1045 988 * support multi-channel. 1046 989 */ 1047 990 max_chns = vstor_packet->storage_channel_properties.max_channel_cnt; 1048 - if ((vmbus_proto_version != VERSION_WIN7) && 1049 - (vmbus_proto_version != VERSION_WS2008)) { 991 + if (vmstor_proto_version >= VMSTOR_PROTO_VERSION_WIN8) { 1050 992 if (vstor_packet->storage_channel_properties.flags & 1051 993 STORAGE_CHANNEL_SUPPORTS_MULTI_CHANNEL) 1052 994 process_sub_channels = true; ··· 1074 1018 } 1075 1019 1076 1020 if (vstor_packet->operation != VSTOR_OPERATION_COMPLETE_IO || 1077 - vstor_packet->status != 0) 1021 + vstor_packet->status != 0) { 1022 + ret = -EINVAL; 1078 1023 goto cleanup; 1024 + } 1079 1025 1080 1026 if (process_sub_channels) 1081 1027 handle_multichannel_storage(device, max_chns); ··· 1486 1428 1487 1429 /* 1488 1430 * If the host is WIN8 or WIN8 R2, claim conformance to SPC-3 1489 - * if the device is a MSFT virtual device. 1431 + * if the device is a MSFT virtual device. If the host is 1432 + * WIN10 or newer, allow write_same. 
1490 1433 */ 1491 1434 if (!strncmp(sdevice->vendor, "Msft", 4)) { 1492 - switch (vmbus_proto_version) { 1493 - case VERSION_WIN8: 1494 - case VERSION_WIN8_1: 1435 + switch (vmstor_proto_version) { 1436 + case VMSTOR_PROTO_VERSION_WIN8: 1437 + case VMSTOR_PROTO_VERSION_WIN8_1: 1495 1438 sdevice->scsi_level = SCSI_SPC_3; 1496 1439 break; 1497 1440 } 1441 + 1442 + if (vmstor_proto_version >= VMSTOR_PROTO_VERSION_WIN10) 1443 + sdevice->no_write_same = 0; 1498 1444 } 1499 1445 1500 1446 return 0; ··· 1625 1563 u32 payload_sz; 1626 1564 u32 length; 1627 1565 1628 - if (vmstor_current_major <= VMSTOR_WIN8_MAJOR) { 1566 + if (vmstor_proto_version <= VMSTOR_PROTO_VERSION_WIN8) { 1629 1567 /* 1630 1568 * On legacy hosts filter unimplemented commands. 1631 1569 * Future hosts are expected to correctly handle ··· 1660 1598 vm_srb->data_in = READ_TYPE; 1661 1599 vm_srb->win8_extension.srb_flags |= SRB_FLAGS_DATA_IN; 1662 1600 break; 1663 - default: 1601 + case DMA_NONE: 1664 1602 vm_srb->data_in = UNKNOWN_TYPE; 1665 1603 vm_srb->win8_extension.srb_flags |= SRB_FLAGS_NO_DATA_TRANSFER; 1666 1604 break; 1605 + default: 1606 + /* 1607 + * This is DMA_BIDIRECTIONAL or something else we are never 1608 + * supposed to see here. 1609 + */ 1610 + WARN(1, "Unexpected data direction: %d\n", 1611 + scmnd->sc_data_direction); 1612 + return -EINVAL; 1667 1613 } 1668 1614 1669 1615 ··· 1828 1758 * set state to properly communicate with the host. 
1829 1759 */ 1830 1760 1831 - switch (vmbus_proto_version) { 1832 - case VERSION_WS2008: 1833 - case VERSION_WIN7: 1834 - sense_buffer_size = PRE_WIN8_STORVSC_SENSE_BUFFER_SIZE; 1835 - vmscsi_size_delta = sizeof(struct vmscsi_win8_extension); 1836 - vmstor_current_major = VMSTOR_WIN7_MAJOR; 1837 - vmstor_current_minor = VMSTOR_WIN7_MINOR; 1761 + if (vmbus_proto_version < VERSION_WIN8) { 1838 1762 max_luns_per_target = STORVSC_IDE_MAX_LUNS_PER_TARGET; 1839 1763 max_targets = STORVSC_IDE_MAX_TARGETS; 1840 1764 max_channels = STORVSC_IDE_MAX_CHANNELS; 1841 - break; 1842 - default: 1843 - sense_buffer_size = POST_WIN7_STORVSC_SENSE_BUFFER_SIZE; 1844 - vmscsi_size_delta = 0; 1845 - vmstor_current_major = VMSTOR_WIN8_MAJOR; 1846 - vmstor_current_minor = VMSTOR_WIN8_MINOR; 1765 + } else { 1847 1766 max_luns_per_target = STORVSC_MAX_LUNS_PER_TARGET; 1848 1767 max_targets = STORVSC_MAX_TARGETS; 1849 1768 max_channels = STORVSC_MAX_CHANNELS; ··· 1842 1783 * VCPUs in the guest. 1843 1784 */ 1844 1785 max_sub_channels = (num_cpus / storvsc_vcpus_per_sub_channel); 1845 - break; 1846 1786 } 1847 1787 1848 1788 scsi_driver.can_queue = (max_outstanding_req_per_channel *
+2 -1
include/scsi/scsi_device.h
··· 57 57 SDEV_EVT_SOFT_THRESHOLD_REACHED_REPORTED, /* 38 07 UA reported */ 58 58 SDEV_EVT_MODE_PARAMETER_CHANGE_REPORTED, /* 2A 01 UA reported */ 59 59 SDEV_EVT_LUN_CHANGE_REPORTED, /* 3F 0E UA reported */ 60 + SDEV_EVT_ALUA_STATE_CHANGE_REPORTED, /* 2A 06 UA reported */ 60 61 61 62 SDEV_EVT_FIRST = SDEV_EVT_MEDIA_CHANGE, 62 - SDEV_EVT_LAST = SDEV_EVT_LUN_CHANGE_REPORTED, 63 + SDEV_EVT_LAST = SDEV_EVT_ALUA_STATE_CHANGE_REPORTED, 63 64 64 65 SDEV_EVT_MAXBITS = SDEV_EVT_LAST + 1 65 66 };
+1
include/scsi/scsi_transport_iscsi.h
··· 241 241 242 242 /* recovery fields */ 243 243 int recovery_tmo; 244 + bool recovery_tmo_sysfs_override; 244 245 struct delayed_work recovery_work; 245 246 246 247 unsigned int target_id;
+1
include/uapi/scsi/Kbuild
··· 3 3 header-y += scsi_bsg_fc.h 4 4 header-y += scsi_netlink.h 5 5 header-y += scsi_netlink_fc.h 6 + header-y += cxlflash_ioctl.h
+174
include/uapi/scsi/cxlflash_ioctl.h
··· 1 + /* 2 + * CXL Flash Device Driver 3 + * 4 + * Written by: Manoj N. Kumar <manoj@linux.vnet.ibm.com>, IBM Corporation 5 + * Matthew R. Ochs <mrochs@linux.vnet.ibm.com>, IBM Corporation 6 + * 7 + * Copyright (C) 2015 IBM Corporation 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #ifndef _CXLFLASH_IOCTL_H 16 + #define _CXLFLASH_IOCTL_H 17 + 18 + #include <linux/types.h> 19 + 20 + /* 21 + * Structure and flag definitions CXL Flash superpipe ioctls 22 + */ 23 + 24 + #define DK_CXLFLASH_VERSION_0 0 25 + 26 + struct dk_cxlflash_hdr { 27 + __u16 version; /* Version data */ 28 + __u16 rsvd[3]; /* Reserved for future use */ 29 + __u64 flags; /* Input flags */ 30 + __u64 return_flags; /* Returned flags */ 31 + }; 32 + 33 + /* 34 + * Notes: 35 + * ----- 36 + * The 'context_id' field of all ioctl structures contains the context 37 + * identifier for a context in the lower 32-bits (upper 32-bits are not 38 + * to be used when identifying a context to the AFU). That said, the value 39 + * in its entirety (all 64-bits) is to be treated as an opaque cookie and 40 + * should be presented as such when issuing ioctls. 41 + * 42 + * For DK_CXLFLASH_ATTACH ioctl, user specifies read/write access 43 + * permissions via the O_RDONLY, O_WRONLY, and O_RDWR flags defined in 44 + * the fcntl.h header file. 
45 + */ 46 + #define DK_CXLFLASH_ATTACH_REUSE_CONTEXT 0x8000000000000000ULL 47 + 48 + struct dk_cxlflash_attach { 49 + struct dk_cxlflash_hdr hdr; /* Common fields */ 50 + __u64 num_interrupts; /* Requested number of interrupts */ 51 + __u64 context_id; /* Returned context */ 52 + __u64 mmio_size; /* Returned size of MMIO area */ 53 + __u64 block_size; /* Returned block size, in bytes */ 54 + __u64 adap_fd; /* Returned adapter file descriptor */ 55 + __u64 last_lba; /* Returned last LBA on the device */ 56 + __u64 max_xfer; /* Returned max transfer size, blocks */ 57 + __u64 reserved[8]; /* Reserved for future use */ 58 + }; 59 + 60 + struct dk_cxlflash_detach { 61 + struct dk_cxlflash_hdr hdr; /* Common fields */ 62 + __u64 context_id; /* Context to detach */ 63 + __u64 reserved[8]; /* Reserved for future use */ 64 + }; 65 + 66 + struct dk_cxlflash_udirect { 67 + struct dk_cxlflash_hdr hdr; /* Common fields */ 68 + __u64 context_id; /* Context to own physical resources */ 69 + __u64 rsrc_handle; /* Returned resource handle */ 70 + __u64 last_lba; /* Returned last LBA on the device */ 71 + __u64 reserved[8]; /* Reserved for future use */ 72 + }; 73 + 74 + #define DK_CXLFLASH_UVIRTUAL_NEED_WRITE_SAME 0x8000000000000000ULL 75 + 76 + struct dk_cxlflash_uvirtual { 77 + struct dk_cxlflash_hdr hdr; /* Common fields */ 78 + __u64 context_id; /* Context to own virtual resources */ 79 + __u64 lun_size; /* Requested size, in 4K blocks */ 80 + __u64 rsrc_handle; /* Returned resource handle */ 81 + __u64 last_lba; /* Returned last LBA of LUN */ 82 + __u64 reserved[8]; /* Reserved for future use */ 83 + }; 84 + 85 + struct dk_cxlflash_release { 86 + struct dk_cxlflash_hdr hdr; /* Common fields */ 87 + __u64 context_id; /* Context owning resources */ 88 + __u64 rsrc_handle; /* Resource handle to release */ 89 + __u64 reserved[8]; /* Reserved for future use */ 90 + }; 91 + 92 + struct dk_cxlflash_resize { 93 + struct dk_cxlflash_hdr hdr; /* Common fields */ 94 + __u64 context_id; 
/* Context owning resources */ 95 + __u64 rsrc_handle; /* Resource handle of LUN to resize */ 96 + __u64 req_size; /* New requested size, in 4K blocks */ 97 + __u64 last_lba; /* Returned last LBA of LUN */ 98 + __u64 reserved[8]; /* Reserved for future use */ 99 + }; 100 + 101 + struct dk_cxlflash_clone { 102 + struct dk_cxlflash_hdr hdr; /* Common fields */ 103 + __u64 context_id_src; /* Context to clone from */ 104 + __u64 context_id_dst; /* Context to clone to */ 105 + __u64 adap_fd_src; /* Source context adapter fd */ 106 + __u64 reserved[8]; /* Reserved for future use */ 107 + }; 108 + 109 + #define DK_CXLFLASH_VERIFY_SENSE_LEN 18 110 + #define DK_CXLFLASH_VERIFY_HINT_SENSE 0x8000000000000000ULL 111 + 112 + struct dk_cxlflash_verify { 113 + struct dk_cxlflash_hdr hdr; /* Common fields */ 114 + __u64 context_id; /* Context owning resources to verify */ 115 + __u64 rsrc_handle; /* Resource handle of LUN */ 116 + __u64 hint; /* Reasons for verify */ 117 + __u64 last_lba; /* Returned last LBA of device */ 118 + __u8 sense_data[DK_CXLFLASH_VERIFY_SENSE_LEN]; /* SCSI sense data */ 119 + __u8 pad[6]; /* Pad to next 8-byte boundary */ 120 + __u64 reserved[8]; /* Reserved for future use */ 121 + }; 122 + 123 + #define DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET 0x8000000000000000ULL 124 + 125 + struct dk_cxlflash_recover_afu { 126 + struct dk_cxlflash_hdr hdr; /* Common fields */ 127 + __u64 reason; /* Reason for recovery request */ 128 + __u64 context_id; /* Context to recover / updated ID */ 129 + __u64 mmio_size; /* Returned size of MMIO area */ 130 + __u64 adap_fd; /* Returned adapter file descriptor */ 131 + __u64 reserved[8]; /* Reserved for future use */ 132 + }; 133 + 134 + #define DK_CXLFLASH_MANAGE_LUN_WWID_LEN 16 135 + #define DK_CXLFLASH_MANAGE_LUN_ENABLE_SUPERPIPE 0x8000000000000000ULL 136 + #define DK_CXLFLASH_MANAGE_LUN_DISABLE_SUPERPIPE 0x4000000000000000ULL 137 + #define DK_CXLFLASH_MANAGE_LUN_ALL_PORTS_ACCESSIBLE 0x2000000000000000ULL 138 + 139 + struct 
dk_cxlflash_manage_lun { 140 + struct dk_cxlflash_hdr hdr; /* Common fields */ 141 + __u8 wwid[DK_CXLFLASH_MANAGE_LUN_WWID_LEN]; /* Page83 WWID, NAA-6 */ 142 + __u64 reserved[8]; /* Rsvd, future use */ 143 + }; 144 + 145 + union cxlflash_ioctls { 146 + struct dk_cxlflash_attach attach; 147 + struct dk_cxlflash_detach detach; 148 + struct dk_cxlflash_udirect udirect; 149 + struct dk_cxlflash_uvirtual uvirtual; 150 + struct dk_cxlflash_release release; 151 + struct dk_cxlflash_resize resize; 152 + struct dk_cxlflash_clone clone; 153 + struct dk_cxlflash_verify verify; 154 + struct dk_cxlflash_recover_afu recover_afu; 155 + struct dk_cxlflash_manage_lun manage_lun; 156 + }; 157 + 158 + #define MAX_CXLFLASH_IOCTL_SZ (sizeof(union cxlflash_ioctls)) 159 + 160 + #define CXL_MAGIC 0xCA 161 + #define CXL_IOWR(_n, _s) _IOWR(CXL_MAGIC, _n, struct _s) 162 + 163 + #define DK_CXLFLASH_ATTACH CXL_IOWR(0x80, dk_cxlflash_attach) 164 + #define DK_CXLFLASH_USER_DIRECT CXL_IOWR(0x81, dk_cxlflash_udirect) 165 + #define DK_CXLFLASH_RELEASE CXL_IOWR(0x82, dk_cxlflash_release) 166 + #define DK_CXLFLASH_DETACH CXL_IOWR(0x83, dk_cxlflash_detach) 167 + #define DK_CXLFLASH_VERIFY CXL_IOWR(0x84, dk_cxlflash_verify) 168 + #define DK_CXLFLASH_RECOVER_AFU CXL_IOWR(0x85, dk_cxlflash_recover_afu) 169 + #define DK_CXLFLASH_MANAGE_LUN CXL_IOWR(0x86, dk_cxlflash_manage_lun) 170 + #define DK_CXLFLASH_USER_VIRTUAL CXL_IOWR(0x87, dk_cxlflash_uvirtual) 171 + #define DK_CXLFLASH_VLUN_RESIZE CXL_IOWR(0x88, dk_cxlflash_resize) 172 + #define DK_CXLFLASH_VLUN_CLONE CXL_IOWR(0x89, dk_cxlflash_clone) 173 + 174 + #endif /* ifndef _CXLFLASH_IOCTL_H */