Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
"This is mostly update of the usual drivers: aacraid, ufs, zfcp,
NCR5380, lpfc, qla2xxx, smartpqi, hisi_sas, target, mpt3sas, pm80xx
plus a whole load of minor updates and fixes.

The major core changes are Al Viro's reworking of sg's handling of
copy to/from user, Ming Lei's removal of the host busy counter to
avoid contention in the multiqueue case and Damien Le Moal's fixing of
residual tracking across error handling"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (251 commits)
scsi: bnx2fc: timeout calculation invalid for bnx2fc_eh_abort()
scsi: target: core: Fix a pr_debug() argument
scsi: iscsi: Don't send data to unbound connection
scsi: target: iscsi: Wait for all commands to finish before freeing a session
scsi: target: core: Release SPC-2 reservations when closing a session
scsi: target: core: Document target_cmd_size_check()
scsi: bnx2i: fix potential use after free
Revert "scsi: qla2xxx: Fix memory leak when sending I/O fails"
scsi: NCR5380: Add disconnect_mask module parameter
scsi: NCR5380: Unconditionally clear ICR after do_abort()
scsi: NCR5380: Call scsi_set_resid() on command completion
scsi: scsi_debug: num_tgts must be >= 0
scsi: lpfc: use hdwq assigned cpu for allocation
scsi: arcmsr: fix indentation issues
scsi: qla4xxx: fix double free bug
scsi: pm80xx: Modified the logic to collect fatal dump
scsi: pm80xx: Tie the interrupt name to the module instance
scsi: pm80xx: Controller fatal error through sysfs
scsi: pm80xx: Do not request 12G sas speeds
scsi: pm80xx: Cleanup command when a reset times out
...

+5655 -1986
+68
Documentation/devicetree/bindings/ufs/ti,j721e-ufs.yaml
···
        1 +# SPDX-License-Identifier: GPL-2.0
        2 +%YAML 1.2
        3 +---
        4 +$id: http://devicetree.org/schemas/ufs/ti,j721e-ufs.yaml#
        5 +$schema: http://devicetree.org/meta-schemas/core.yaml#
        6 +
        7 +title: TI J721e UFS Host Controller Glue Driver
        8 +
        9 +maintainers:
       10 +  - Vignesh Raghavendra <vigneshr@ti.com>
       11 +
       12 +properties:
       13 +  compatible:
       14 +    items:
       15 +      - const: ti,j721e-ufs
       16 +
       17 +  reg:
       18 +    maxItems: 1
       19 +    description: address of TI UFS glue registers
       20 +
       21 +  clocks:
       22 +    maxItems: 1
       23 +    description: phandle to the M-PHY clock
       24 +
       25 +  power-domains:
       26 +    maxItems: 1
       27 +
       28 +required:
       29 +  - compatible
       30 +  - reg
       31 +  - clocks
       32 +  - power-domains
       33 +
       34 +patternProperties:
       35 +  "^ufs@[0-9a-f]+$":
       36 +    type: object
       37 +    description: |
       38 +      Cadence UFS controller node must be the child node. Refer
       39 +      Documentation/devicetree/bindings/ufs/cdns,ufshc.txt for binding
       40 +      documentation of child node
       41 +
       42 +examples:
       43 +  - |
       44 +    #include <dt-bindings/interrupt-controller/irq.h>
       45 +    #include <dt-bindings/interrupt-controller/arm-gic.h>
       46 +
       47 +    ufs_wrapper: ufs-wrapper@4e80000 {
       48 +        compatible = "ti,j721e-ufs";
       49 +        reg = <0x0 0x4e80000 0x0 0x100>;
       50 +        power-domains = <&k3_pds 277>;
       51 +        clocks = <&k3_clks 277 1>;
       52 +        assigned-clocks = <&k3_clks 277 1>;
       53 +        assigned-clock-parents = <&k3_clks 277 4>;
       54 +        #address-cells = <2>;
       55 +        #size-cells = <2>;
       56 +
       57 +        ufs@4e84000 {
       58 +            compatible = "cdns,ufshc-m31-16nm", "jedec,ufs-2.0";
       59 +            reg = <0x0 0x4e84000 0x0 0x10000>;
       60 +            interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
       61 +            freq-table-hz = <19200000 19200000>;
       62 +            power-domains = <&k3_pds 277>;
       63 +            clocks = <&k3_clks 277 1>;
       64 +            assigned-clocks = <&k3_clks 277 1>;
       65 +            assigned-clock-parents = <&k3_clks 277 4>;
       66 +            clock-names = "core_clk";
       67 +        };
       68 +    };
+1
Documentation/devicetree/bindings/ufs/ufshcd-pltfrm.txt
···
   13   13  	"qcom,msm8996-ufshc", "qcom,ufshc", "jedec,ufs-2.0"
   14   14  	"qcom,msm8998-ufshc", "qcom,ufshc", "jedec,ufs-2.0"
   15   15  	"qcom,sdm845-ufshc", "qcom,ufshc", "jedec,ufs-2.0"
        16 +	"qcom,sm8150-ufshc", "qcom,ufshc", "jedec,ufs-2.0"
   16   17  - interrupts        : <interrupt mapping for UFS host controller IRQ>
   17   18  - reg               : <registers mapping>
   18   19
+2 -1
Documentation/scsi/scsi_mid_low_api.txt
···
 1084 1084      commands to the adapter.
 1085 1085      this_id - scsi id of host (scsi initiator) or -1 if not known
 1086 1086      sg_tablesize - maximum scatter gather elements allowed by host.
 1087      -    0 implies scatter gather not supported by host
      1087 +    Set this to SG_ALL or less to avoid chained SG lists.
      1088 +    Must be at least 1.
 1088 1089      max_sectors - maximum number of sectors (usually 512 bytes) allowed
 1089 1090      in a single SCSI command. The default value of 0 leads
 1090 1091      to a setting of SCSI_DEFAULT_MAX_SECTORS (defined in
-1
drivers/ata/pata_arasan_cf.c
···
  219  219
  220  220  static struct scsi_host_template arasan_cf_sht = {
  221  221  	ATA_BASE_SHT(DRIVER_NAME),
  222      -	.sg_tablesize = SG_NONE,
  223  222  	.dma_boundary = 0xFFFFFFFFUL,
  224  223  };
  225  224
+1 -1
drivers/s390/scsi/Makefile
···
    5    5
    6    6  zfcp-objs := zfcp_aux.o zfcp_ccw.o zfcp_dbf.o zfcp_erp.o \
    7    7  	     zfcp_fc.o zfcp_fsf.o zfcp_qdio.o zfcp_scsi.o zfcp_sysfs.o \
    8       -	     zfcp_unit.o
         8 +	     zfcp_unit.o zfcp_diag.o
    9    9
   10   10  obj-$(CONFIG_ZFCP) += zfcp.o
+11 -1
drivers/s390/scsi/zfcp_aux.c
···
    4    4   *
    5    5   * Module interface and handling of zfcp data structures.
    6    6   *
    7       - * Copyright IBM Corp. 2002, 2017
         7 + * Copyright IBM Corp. 2002, 2018
    8    8   */
    9    9
   10   10  /*
···
   25   25   *            Martin Petermann
   26   26   *            Sven Schuetz
   27   27   *            Steffen Maier
        28 + *            Benjamin Block
   28   29   */
   29   30
   30   31  #define KMSG_COMPONENT "zfcp"
···
   37   36  #include "zfcp_ext.h"
   38   37  #include "zfcp_fc.h"
   39   38  #include "zfcp_reqlist.h"
        39 +#include "zfcp_diag.h"
   40   40
   41   41  #define ZFCP_BUS_ID_SIZE	20
   42   42
···
  358  356
  359  357  	adapter->erp_action.adapter = adapter;
  360  358
       359 +	if (zfcp_diag_adapter_setup(adapter))
       360 +		goto failed;
       361 +
  361  362  	if (zfcp_qdio_setup(adapter))
  362  363  		goto failed;
  363  364
···
  407  402  			       &zfcp_sysfs_adapter_attrs))
  408  403  		goto failed;
  409  404
       405 +	if (zfcp_diag_sysfs_setup(adapter))
       406 +		goto failed;
       407 +
  410  408  	/* report size limit per scatter-gather segment */
  411  409  	adapter->ccw_device->dev.dma_parms = &adapter->dma_parms;
  412  410
···
  434  426
  435  427  	zfcp_fc_wka_ports_force_offline(adapter->gs);
  436  428  	zfcp_scsi_adapter_unregister(adapter);
       429 +	zfcp_diag_sysfs_destroy(adapter);
  437  430  	sysfs_remove_group(&cdev->dev.kobj, &zfcp_sysfs_adapter_attrs);
  438  431
  439  432  	zfcp_erp_thread_kill(adapter);
···
  458  449  	dev_set_drvdata(&adapter->ccw_device->dev, NULL);
  459  450  	zfcp_fc_gs_destroy(adapter);
  460  451  	zfcp_free_low_mem_buffers(adapter);
       452 +	zfcp_diag_adapter_free(adapter);
  461  453  	kfree(adapter->req_list);
  462  454  	kfree(adapter->fc_stats);
  463  455  	kfree(adapter->stats_reset_data);
+3 -5
drivers/s390/scsi/zfcp_dbf.c
···
   95   95  	memcpy(rec->u.res.fsf_status_qual, &q_head->fsf_status_qual,
   96   96  	       FSF_STATUS_QUALIFIER_SIZE);
   97   97
   98      -	if (q_head->fsf_command != FSF_QTCB_FCP_CMND) {
   99      -		rec->pl_len = q_head->log_length;
  100      -		zfcp_dbf_pl_write(dbf, (char *)q_pref + q_head->log_start,
  101      -				  rec->pl_len, "fsf_res", req->req_id);
  102      -	}
        98 +	rec->pl_len = q_head->log_length;
        99 +	zfcp_dbf_pl_write(dbf, (char *)q_pref + q_head->log_start,
       100 +			  rec->pl_len, "fsf_res", req->req_id);
  103  101
  104  102  	debug_event(dbf->hba, level, rec, sizeof(*rec));
  105  103  	spin_unlock_irqrestore(&dbf->hba_lock, flags);
+3 -1
drivers/s390/scsi/zfcp_def.h
···
    4    4   *
    5    5   * Global definitions for the zfcp device driver.
    6    6   *
    7       - * Copyright IBM Corp. 2002, 2017
         7 + * Copyright IBM Corp. 2002, 2018
    8    8   */
    9    9
   10   10  #ifndef ZFCP_DEF_H
···
   86   86  #define ZFCP_STATUS_FSFREQ_ABORTNOTNEEDED	0x00000080
   87   87  #define ZFCP_STATUS_FSFREQ_TMFUNCFAILED		0x00000200
   88   88  #define ZFCP_STATUS_FSFREQ_DISMISSED		0x00001000
        89 +#define ZFCP_STATUS_FSFREQ_XDATAINCOMPLETE	0x00020000
   89   90
   90   91  /************************* STRUCTURE DEFINITIONS *****************************/
   91   92
···
  198  197  	struct device_dma_parameters dma_parms;
  199  198  	struct zfcp_fc_events events;
  200  199  	unsigned long next_port_scan;
       200 +	struct zfcp_diag_adapter	*diagnostics;
  201  201  };
  202  202
  203  203  struct zfcp_port {
+305
drivers/s390/scsi/zfcp_diag.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * zfcp device driver 4 + * 5 + * Functions to handle diagnostics. 6 + * 7 + * Copyright IBM Corp. 2018 8 + */ 9 + 10 + #include <linux/spinlock.h> 11 + #include <linux/jiffies.h> 12 + #include <linux/string.h> 13 + #include <linux/kernfs.h> 14 + #include <linux/sysfs.h> 15 + #include <linux/errno.h> 16 + #include <linux/slab.h> 17 + 18 + #include "zfcp_diag.h" 19 + #include "zfcp_ext.h" 20 + #include "zfcp_def.h" 21 + 22 + static DECLARE_WAIT_QUEUE_HEAD(__zfcp_diag_publish_wait); 23 + 24 + /** 25 + * zfcp_diag_adapter_setup() - Setup storage for adapter diagnostics. 26 + * @adapter: the adapter to setup diagnostics for. 27 + * 28 + * Creates the data-structures to store the diagnostics for an adapter. This 29 + * overwrites whatever was stored before at &zfcp_adapter->diagnostics! 30 + * 31 + * Return: 32 + * * 0 - Everyting is OK 33 + * * -ENOMEM - Could not allocate all/parts of the data-structures; 34 + * &zfcp_adapter->diagnostics remains unchanged 35 + */ 36 + int zfcp_diag_adapter_setup(struct zfcp_adapter *const adapter) 37 + { 38 + struct zfcp_diag_adapter *diag; 39 + struct zfcp_diag_header *hdr; 40 + 41 + diag = kzalloc(sizeof(*diag), GFP_KERNEL); 42 + if (diag == NULL) 43 + return -ENOMEM; 44 + 45 + diag->max_age = (5 * 1000); /* default value: 5 s */ 46 + 47 + /* setup header for port_data */ 48 + hdr = &diag->port_data.header; 49 + 50 + spin_lock_init(&hdr->access_lock); 51 + hdr->buffer = &diag->port_data.data; 52 + hdr->buffer_size = sizeof(diag->port_data.data); 53 + /* set the timestamp so that the first test on age will always fail */ 54 + hdr->timestamp = jiffies - msecs_to_jiffies(diag->max_age); 55 + 56 + /* setup header for config_data */ 57 + hdr = &diag->config_data.header; 58 + 59 + spin_lock_init(&hdr->access_lock); 60 + hdr->buffer = &diag->config_data.data; 61 + hdr->buffer_size = sizeof(diag->config_data.data); 62 + /* set the timestamp so that the first test on age will always fail 
*/ 63 + hdr->timestamp = jiffies - msecs_to_jiffies(diag->max_age); 64 + 65 + adapter->diagnostics = diag; 66 + return 0; 67 + } 68 + 69 + /** 70 + * zfcp_diag_adapter_free() - Frees all adapter diagnostics allocations. 71 + * @adapter: the adapter whose diagnostic structures should be freed. 72 + * 73 + * Frees all data-structures in the given adapter that store diagnostics 74 + * information. Can savely be called with partially setup diagnostics. 75 + */ 76 + void zfcp_diag_adapter_free(struct zfcp_adapter *const adapter) 77 + { 78 + kfree(adapter->diagnostics); 79 + adapter->diagnostics = NULL; 80 + } 81 + 82 + /** 83 + * zfcp_diag_sysfs_setup() - Setup the sysfs-group for adapter-diagnostics. 84 + * @adapter: target adapter to which the group should be added. 85 + * 86 + * Return: 0 on success; Something else otherwise (see sysfs_create_group()). 87 + */ 88 + int zfcp_diag_sysfs_setup(struct zfcp_adapter *const adapter) 89 + { 90 + int rc = sysfs_create_group(&adapter->ccw_device->dev.kobj, 91 + &zfcp_sysfs_diag_attr_group); 92 + if (rc == 0) 93 + adapter->diagnostics->sysfs_established = 1; 94 + 95 + return rc; 96 + } 97 + 98 + /** 99 + * zfcp_diag_sysfs_destroy() - Remove the sysfs-group for adapter-diagnostics. 100 + * @adapter: target adapter from which the group should be removed. 101 + */ 102 + void zfcp_diag_sysfs_destroy(struct zfcp_adapter *const adapter) 103 + { 104 + if (adapter->diagnostics == NULL || 105 + !adapter->diagnostics->sysfs_established) 106 + return; 107 + 108 + /* 109 + * We need this state-handling so we can prevent warnings being printed 110 + * on the kernel-console in case we have to abort a halfway done 111 + * zfcp_adapter_enqueue(), in which the sysfs-group was not yet 112 + * established. 
sysfs_remove_group() does this checking as well, but 113 + * still prints a warning in case we try to remove a group that has not 114 + * been established before 115 + */ 116 + adapter->diagnostics->sysfs_established = 0; 117 + sysfs_remove_group(&adapter->ccw_device->dev.kobj, 118 + &zfcp_sysfs_diag_attr_group); 119 + } 120 + 121 + 122 + /** 123 + * zfcp_diag_update_xdata() - Update a diagnostics buffer. 124 + * @hdr: the meta data to update. 125 + * @data: data to use for the update. 126 + * @incomplete: flag stating whether the data in @data is incomplete. 127 + */ 128 + void zfcp_diag_update_xdata(struct zfcp_diag_header *const hdr, 129 + const void *const data, const bool incomplete) 130 + { 131 + const unsigned long capture_timestamp = jiffies; 132 + unsigned long flags; 133 + 134 + spin_lock_irqsave(&hdr->access_lock, flags); 135 + 136 + /* make sure we never go into the past with an update */ 137 + if (!time_after_eq(capture_timestamp, hdr->timestamp)) 138 + goto out; 139 + 140 + hdr->timestamp = capture_timestamp; 141 + hdr->incomplete = incomplete; 142 + memcpy(hdr->buffer, data, hdr->buffer_size); 143 + out: 144 + spin_unlock_irqrestore(&hdr->access_lock, flags); 145 + } 146 + 147 + /** 148 + * zfcp_diag_update_port_data_buffer() - Implementation of 149 + * &typedef zfcp_diag_update_buffer_func 150 + * to collect and update Port Data. 151 + * @adapter: Adapter to collect Port Data from. 152 + * 153 + * This call is SYNCHRONOUS ! It blocks till the respective command has 154 + * finished completely, or has failed in some way. 155 + * 156 + * Return: 157 + * * 0 - Successfully retrieved new Diagnostics and Updated the buffer; 158 + * this also includes cases where data was retrieved, but 159 + * incomplete; you'll have to check the flag ``incomplete`` 160 + * of &struct zfcp_diag_header. 
161 + * * see zfcp_fsf_exchange_port_data_sync() for possible error-codes ( 162 + * excluding -EAGAIN) 163 + */ 164 + int zfcp_diag_update_port_data_buffer(struct zfcp_adapter *const adapter) 165 + { 166 + int rc; 167 + 168 + rc = zfcp_fsf_exchange_port_data_sync(adapter->qdio, NULL); 169 + if (rc == -EAGAIN) 170 + rc = 0; /* signaling incomplete via struct zfcp_diag_header */ 171 + 172 + /* buffer-data was updated in zfcp_fsf_exchange_port_data_handler() */ 173 + 174 + return rc; 175 + } 176 + 177 + /** 178 + * zfcp_diag_update_config_data_buffer() - Implementation of 179 + * &typedef zfcp_diag_update_buffer_func 180 + * to collect and update Config Data. 181 + * @adapter: Adapter to collect Config Data from. 182 + * 183 + * This call is SYNCHRONOUS ! It blocks till the respective command has 184 + * finished completely, or has failed in some way. 185 + * 186 + * Return: 187 + * * 0 - Successfully retrieved new Diagnostics and Updated the buffer; 188 + * this also includes cases where data was retrieved, but 189 + * incomplete; you'll have to check the flag ``incomplete`` 190 + * of &struct zfcp_diag_header. 
191 + * * see zfcp_fsf_exchange_config_data_sync() for possible error-codes ( 192 + * excluding -EAGAIN) 193 + */ 194 + int zfcp_diag_update_config_data_buffer(struct zfcp_adapter *const adapter) 195 + { 196 + int rc; 197 + 198 + rc = zfcp_fsf_exchange_config_data_sync(adapter->qdio, NULL); 199 + if (rc == -EAGAIN) 200 + rc = 0; /* signaling incomplete via struct zfcp_diag_header */ 201 + 202 + /* buffer-data was updated in zfcp_fsf_exchange_config_data_handler() */ 203 + 204 + return rc; 205 + } 206 + 207 + static int __zfcp_diag_update_buffer(struct zfcp_adapter *const adapter, 208 + struct zfcp_diag_header *const hdr, 209 + zfcp_diag_update_buffer_func buffer_update, 210 + unsigned long *const flags) 211 + __must_hold(hdr->access_lock) 212 + { 213 + int rc; 214 + 215 + if (hdr->updating == 1) { 216 + rc = wait_event_interruptible_lock_irq(__zfcp_diag_publish_wait, 217 + hdr->updating == 0, 218 + hdr->access_lock); 219 + rc = (rc == 0 ? -EAGAIN : -EINTR); 220 + } else { 221 + hdr->updating = 1; 222 + spin_unlock_irqrestore(&hdr->access_lock, *flags); 223 + 224 + /* unlocked, because update function sleeps */ 225 + rc = buffer_update(adapter); 226 + 227 + spin_lock_irqsave(&hdr->access_lock, *flags); 228 + hdr->updating = 0; 229 + 230 + /* 231 + * every thread waiting here went via an interruptible wait, 232 + * so its fine to only wake those 233 + */ 234 + wake_up_interruptible_all(&__zfcp_diag_publish_wait); 235 + } 236 + 237 + return rc; 238 + } 239 + 240 + static bool 241 + __zfcp_diag_test_buffer_age_isfresh(const struct zfcp_diag_adapter *const diag, 242 + const struct zfcp_diag_header *const hdr) 243 + __must_hold(hdr->access_lock) 244 + { 245 + const unsigned long now = jiffies; 246 + 247 + /* 248 + * Should not happen (data is from the future).. 
if it does, still 249 + * signal that it needs refresh 250 + */ 251 + if (!time_after_eq(now, hdr->timestamp)) 252 + return false; 253 + 254 + if (jiffies_to_msecs(now - hdr->timestamp) >= diag->max_age) 255 + return false; 256 + 257 + return true; 258 + } 259 + 260 + /** 261 + * zfcp_diag_update_buffer_limited() - Collect diagnostics and update a 262 + * diagnostics buffer rate limited. 263 + * @adapter: Adapter to collect the diagnostics from. 264 + * @hdr: buffer-header for which to update with the collected diagnostics. 265 + * @buffer_update: Specific implementation for collecting and updating. 266 + * 267 + * This function will cause an update of the given @hdr by calling the also 268 + * given @buffer_update function. If called by multiple sources at the same 269 + * time, it will synchornize the update by only allowing one source to call 270 + * @buffer_update and the others to wait for that source to complete instead 271 + * (the wait is interruptible). 272 + * 273 + * Additionally this version is rate-limited and will only exit if either the 274 + * buffer is fresh enough (within the limit) - it will do nothing if the buffer 275 + * is fresh enough to begin with -, or if the source/thread that started this 276 + * update is the one that made the update (to prevent endless loops). 
277 + * 278 + * Return: 279 + * * 0 - If the update was successfully published and/or the buffer is 280 + * fresh enough 281 + * * -EINTR - If the thread went into the wait-state and was interrupted 282 + * * whatever @buffer_update returns 283 + */ 284 + int zfcp_diag_update_buffer_limited(struct zfcp_adapter *const adapter, 285 + struct zfcp_diag_header *const hdr, 286 + zfcp_diag_update_buffer_func buffer_update) 287 + { 288 + unsigned long flags; 289 + int rc; 290 + 291 + spin_lock_irqsave(&hdr->access_lock, flags); 292 + 293 + for (rc = 0; 294 + !__zfcp_diag_test_buffer_age_isfresh(adapter->diagnostics, hdr); 295 + rc = 0) { 296 + rc = __zfcp_diag_update_buffer(adapter, hdr, buffer_update, 297 + &flags); 298 + if (rc != -EAGAIN) 299 + break; 300 + } 301 + 302 + spin_unlock_irqrestore(&hdr->access_lock, flags); 303 + 304 + return rc; 305 + }
+101
drivers/s390/scsi/zfcp_diag.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * zfcp device driver 4 + * 5 + * Definitions for handling diagnostics in the the zfcp device driver. 6 + * 7 + * Copyright IBM Corp. 2018 8 + */ 9 + 10 + #ifndef ZFCP_DIAG_H 11 + #define ZFCP_DIAG_H 12 + 13 + #include <linux/spinlock.h> 14 + 15 + #include "zfcp_fsf.h" 16 + #include "zfcp_def.h" 17 + 18 + /** 19 + * struct zfcp_diag_header - general part of a diagnostic buffer. 20 + * @access_lock: lock protecting all the data in this buffer. 21 + * @updating: flag showing that an update for this buffer is currently running. 22 + * @incomplete: flag showing that the data in @buffer is incomplete. 23 + * @timestamp: time in jiffies when the data of this buffer was last captured. 24 + * @buffer: implementation-depending data of this buffer 25 + * @buffer_size: size of @buffer 26 + */ 27 + struct zfcp_diag_header { 28 + spinlock_t access_lock; 29 + 30 + /* Flags */ 31 + u64 updating :1; 32 + u64 incomplete :1; 33 + 34 + unsigned long timestamp; 35 + 36 + void *buffer; 37 + size_t buffer_size; 38 + }; 39 + 40 + /** 41 + * struct zfcp_diag_adapter - central storage for all diagnostics concerning an 42 + * adapter. 43 + * @sysfs_established: flag showing that the associated sysfs-group was created 44 + * during run of zfcp_adapter_enqueue(). 45 + * @max_age: maximum age of data in diagnostic buffers before they need to be 46 + * refreshed (in ms). 47 + * @port_data: data retrieved using exchange port data. 48 + * @port_data.header: header with metadata for the cache in @port_data.data. 49 + * @port_data.data: cached QTCB Bottom of command exchange port data. 50 + * @config_data: data retrieved using exchange config data. 51 + * @config_data.header: header with metadata for the cache in @config_data.data. 52 + * @config_data.data: cached QTCB Bottom of command exchange config data. 
53 + */ 54 + struct zfcp_diag_adapter { 55 + u64 sysfs_established :1; 56 + 57 + unsigned long max_age; 58 + 59 + struct { 60 + struct zfcp_diag_header header; 61 + struct fsf_qtcb_bottom_port data; 62 + } port_data; 63 + struct { 64 + struct zfcp_diag_header header; 65 + struct fsf_qtcb_bottom_config data; 66 + } config_data; 67 + }; 68 + 69 + int zfcp_diag_adapter_setup(struct zfcp_adapter *const adapter); 70 + void zfcp_diag_adapter_free(struct zfcp_adapter *const adapter); 71 + 72 + int zfcp_diag_sysfs_setup(struct zfcp_adapter *const adapter); 73 + void zfcp_diag_sysfs_destroy(struct zfcp_adapter *const adapter); 74 + 75 + void zfcp_diag_update_xdata(struct zfcp_diag_header *const hdr, 76 + const void *const data, const bool incomplete); 77 + 78 + /* 79 + * Function-Type used in zfcp_diag_update_buffer_limited() for the function 80 + * that does the buffer-implementation dependent work. 81 + */ 82 + typedef int (*zfcp_diag_update_buffer_func)(struct zfcp_adapter *const adapter); 83 + 84 + int zfcp_diag_update_config_data_buffer(struct zfcp_adapter *const adapter); 85 + int zfcp_diag_update_port_data_buffer(struct zfcp_adapter *const adapter); 86 + int zfcp_diag_update_buffer_limited(struct zfcp_adapter *const adapter, 87 + struct zfcp_diag_header *const hdr, 88 + zfcp_diag_update_buffer_func buffer_update); 89 + 90 + /** 91 + * zfcp_diag_support_sfp() - Return %true if the @adapter supports reporting 92 + * SFP Data. 93 + * @adapter: adapter to test the availability of SFP Data reporting for. 94 + */ 95 + static inline bool 96 + zfcp_diag_support_sfp(const struct zfcp_adapter *const adapter) 97 + { 98 + return !!(adapter->adapter_features & FSF_FEATURE_REPORT_SFP_DATA); 99 + } 100 + 101 + #endif /* ZFCP_DIAG_H */
+2 -2
drivers/s390/scsi/zfcp_erp.c
··· 174 174 return 0; 175 175 p_status = atomic_read(&port->status); 176 176 if (!(p_status & ZFCP_STATUS_COMMON_RUNNING) || 177 - p_status & ZFCP_STATUS_COMMON_ERP_FAILED) 177 + p_status & ZFCP_STATUS_COMMON_ERP_FAILED) 178 178 return 0; 179 179 if (!(p_status & ZFCP_STATUS_COMMON_UNBLOCKED)) 180 180 need = ZFCP_ERP_ACTION_REOPEN_PORT; ··· 190 190 return 0; 191 191 a_status = atomic_read(&adapter->status); 192 192 if (!(a_status & ZFCP_STATUS_COMMON_RUNNING) || 193 - a_status & ZFCP_STATUS_COMMON_ERP_FAILED) 193 + a_status & ZFCP_STATUS_COMMON_ERP_FAILED) 194 194 return 0; 195 195 if (p_status & ZFCP_STATUS_COMMON_NOESC) 196 196 return need;
+1
drivers/s390/scsi/zfcp_ext.h
···
  167  167  extern struct mutex zfcp_sysfs_port_units_mutex;
  168  168  extern struct device_attribute *zfcp_sysfs_sdev_attrs[];
  169  169  extern struct device_attribute *zfcp_sysfs_shost_attrs[];
       170 +extern const struct attribute_group zfcp_sysfs_diag_attr_group;
  170  171  bool zfcp_sysfs_port_is_removing(const struct zfcp_port *const port);
  171  172
  172  173  /* zfcp_unit.c */
+66 -7
drivers/s390/scsi/zfcp_fsf.c
··· 11 11 #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt 12 12 13 13 #include <linux/blktrace_api.h> 14 + #include <linux/jiffies.h> 14 15 #include <linux/types.h> 15 16 #include <linux/slab.h> 16 17 #include <scsi/fc/fc_els.h> ··· 20 19 #include "zfcp_dbf.h" 21 20 #include "zfcp_qdio.h" 22 21 #include "zfcp_reqlist.h" 22 + #include "zfcp_diag.h" 23 23 24 24 /* timeout for FSF requests sent during scsi_eh: abort or FCP TMF */ 25 25 #define ZFCP_FSF_SCSI_ER_TIMEOUT (10*HZ) ··· 556 554 static void zfcp_fsf_exchange_config_data_handler(struct zfcp_fsf_req *req) 557 555 { 558 556 struct zfcp_adapter *adapter = req->adapter; 557 + struct zfcp_diag_header *const diag_hdr = 558 + &adapter->diagnostics->config_data.header; 559 559 struct fsf_qtcb *qtcb = req->qtcb; 560 560 struct fsf_qtcb_bottom_config *bottom = &qtcb->bottom.config; 561 561 struct Scsi_Host *shost = adapter->scsi_host; ··· 574 570 575 571 switch (qtcb->header.fsf_status) { 576 572 case FSF_GOOD: 573 + /* 574 + * usually we wait with an update till the cache is too old, 575 + * but because we have the data available, update it anyway 576 + */ 577 + zfcp_diag_update_xdata(diag_hdr, bottom, false); 578 + 577 579 if (zfcp_fsf_exchange_config_evaluate(req)) 578 580 return; 579 581 ··· 595 585 &adapter->status); 596 586 break; 597 587 case FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE: 588 + zfcp_diag_update_xdata(diag_hdr, bottom, true); 589 + req->status |= ZFCP_STATUS_FSFREQ_XDATAINCOMPLETE; 590 + 598 591 fc_host_node_name(shost) = 0; 599 592 fc_host_port_name(shost) = 0; 600 593 fc_host_port_id(shost) = 0; ··· 666 653 667 654 static void zfcp_fsf_exchange_port_data_handler(struct zfcp_fsf_req *req) 668 655 { 656 + struct zfcp_diag_header *const diag_hdr = 657 + &req->adapter->diagnostics->port_data.header; 669 658 struct fsf_qtcb *qtcb = req->qtcb; 659 + struct fsf_qtcb_bottom_port *bottom = &qtcb->bottom.port; 670 660 671 661 if (req->status & ZFCP_STATUS_FSFREQ_ERROR) 672 662 return; 673 663 674 664 switch 
(qtcb->header.fsf_status) { 675 665 case FSF_GOOD: 666 + /* 667 + * usually we wait with an update till the cache is too old, 668 + * but because we have the data available, update it anyway 669 + */ 670 + zfcp_diag_update_xdata(diag_hdr, bottom, false); 671 + 676 672 zfcp_fsf_exchange_port_evaluate(req); 677 673 break; 678 674 case FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE: 675 + zfcp_diag_update_xdata(diag_hdr, bottom, true); 676 + req->status |= ZFCP_STATUS_FSFREQ_XDATAINCOMPLETE; 677 + 679 678 zfcp_fsf_exchange_port_evaluate(req); 680 679 zfcp_fsf_link_down_info_eval(req, 681 680 &qtcb->header.fsf_status_qual.link_down_info); ··· 1286 1261 1287 1262 req->qtcb->bottom.config.feature_selection = 1288 1263 FSF_FEATURE_NOTIFICATION_LOST | 1289 - FSF_FEATURE_UPDATE_ALERT; 1264 + FSF_FEATURE_UPDATE_ALERT | 1265 + FSF_FEATURE_REQUEST_SFP_DATA; 1290 1266 req->erp_action = erp_action; 1291 1267 req->handler = zfcp_fsf_exchange_config_data_handler; 1292 1268 erp_action->fsf_req_id = req->req_id; ··· 1304 1278 return retval; 1305 1279 } 1306 1280 1281 + 1282 + /** 1283 + * zfcp_fsf_exchange_config_data_sync() - Request information about FCP channel. 1284 + * @qdio: pointer to the QDIO-Queue to use for sending the command. 1285 + * @data: pointer to the QTCB-Bottom for storing the result of the command, 1286 + * might be %NULL. 
1287 + * 1288 + * Returns: 1289 + * * 0 - Exchange Config Data was successful, @data is complete 1290 + * * -EIO - Exchange Config Data was not successful, @data is invalid 1291 + * * -EAGAIN - @data contains incomplete data 1292 + * * -ENOMEM - Some memory allocation failed along the way 1293 + */ 1307 1294 int zfcp_fsf_exchange_config_data_sync(struct zfcp_qdio *qdio, 1308 1295 struct fsf_qtcb_bottom_config *data) 1309 1296 { ··· 1340 1301 1341 1302 req->qtcb->bottom.config.feature_selection = 1342 1303 FSF_FEATURE_NOTIFICATION_LOST | 1343 - FSF_FEATURE_UPDATE_ALERT; 1304 + FSF_FEATURE_UPDATE_ALERT | 1305 + FSF_FEATURE_REQUEST_SFP_DATA; 1344 1306 1345 1307 if (data) 1346 1308 req->data = data; ··· 1349 1309 zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT); 1350 1310 retval = zfcp_fsf_req_send(req); 1351 1311 spin_unlock_irq(&qdio->req_q_lock); 1312 + 1352 1313 if (!retval) { 1353 1314 /* NOTE: ONLY TOUCH SYNC req AGAIN ON req->completion. */ 1354 1315 wait_for_completion(&req->completion); 1316 + 1317 + if (req->status & 1318 + (ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_DISMISSED)) 1319 + retval = -EIO; 1320 + else if (req->status & ZFCP_STATUS_FSFREQ_XDATAINCOMPLETE) 1321 + retval = -EAGAIN; 1355 1322 } 1356 1323 1357 1324 zfcp_fsf_req_free(req); ··· 1416 1369 } 1417 1370 1418 1371 /** 1419 - * zfcp_fsf_exchange_port_data_sync - request information about local port 1420 - * @qdio: pointer to struct zfcp_qdio 1421 - * @data: pointer to struct fsf_qtcb_bottom_port 1422 - * Returns: 0 on success, error otherwise 1372 + * zfcp_fsf_exchange_port_data_sync() - Request information about local port. 1373 + * @qdio: pointer to the QDIO-Queue to use for sending the command. 1374 + * @data: pointer to the QTCB-Bottom for storing the result of the command, 1375 + * might be %NULL. 
1376 + * 1377 + * Returns: 1378 + * * 0 - Exchange Port Data was successful, @data is complete 1379 + * * -EIO - Exchange Port Data was not successful, @data is invalid 1380 + * * -EAGAIN - @data contains incomplete data 1381 + * * -ENOMEM - Some memory allocation failed along the way 1382 + * * -EOPNOTSUPP - This operation is not supported 1423 1383 */ 1424 1384 int zfcp_fsf_exchange_port_data_sync(struct zfcp_qdio *qdio, 1425 1385 struct fsf_qtcb_bottom_port *data) ··· 1462 1408 if (!retval) { 1463 1409 /* NOTE: ONLY TOUCH SYNC req AGAIN ON req->completion. */ 1464 1410 wait_for_completion(&req->completion); 1411 + 1412 + if (req->status & 1413 + (ZFCP_STATUS_FSFREQ_ERROR | ZFCP_STATUS_FSFREQ_DISMISSED)) 1414 + retval = -EIO; 1415 + else if (req->status & ZFCP_STATUS_FSFREQ_XDATAINCOMPLETE) 1416 + retval = -EAGAIN; 1465 1417 } 1466 1418 1467 1419 zfcp_fsf_req_free(req); 1468 - 1469 1420 return retval; 1470 1421 1471 1422 out_unlock:
+20 -1
drivers/s390/scsi/zfcp_fsf.h
···
  163  163  #define FSF_FEATURE_ELS_CT_CHAINED_SBALS	0x00000020
  164  164  #define FSF_FEATURE_UPDATE_ALERT		0x00000100
  165  165  #define FSF_FEATURE_MEASUREMENT_DATA		0x00000200
       166 +#define FSF_FEATURE_REQUEST_SFP_DATA		0x00000200
       167 +#define FSF_FEATURE_REPORT_SFP_DATA		0x00000800
  166  168  #define FSF_FEATURE_DIF_PROT_TYPE1		0x00010000
  167  169  #define FSF_FEATURE_DIX_PROT_TCPIP		0x00020000
  168  170
···
  409  407  	u8 cp_util;
  410  408  	u8 cb_util;
  411  409  	u8 a_util;
  412      -	u8 res2[253];
       410 +	u8 res2;
       411 +	u16 temperature;
       412 +	u16 vcc;
       413 +	u16 tx_bias;
       414 +	u16 tx_power;
       415 +	u16 rx_power;
       416 +	union {
       417 +		u16 raw;
       418 +		struct {
       419 +			u16 fec_active		:1;
       420 +			u16:7;
       421 +			u16 connector_type	:2;
       422 +			u16 sfp_invalid		:1;
       423 +			u16 optical_port	:1;
       424 +			u16 port_tx_type	:4;
       425 +		};
       426 +	} sfp_flags;
       427 +	u8 res3[240];
  413  428  } __attribute__ ((packed));
  414  429
  415  430  union fsf_qtcb_bottom {
+2 -2
drivers/s390/scsi/zfcp_scsi.c
···
  605  605  		return NULL;
  606  606
  607  607  	ret = zfcp_fsf_exchange_port_data_sync(adapter->qdio, data);
  608      -	if (ret) {
       608 +	if (ret != 0 && ret != -EAGAIN) {
  609  609  		kfree(data);
  610  610  		return NULL;
  611  611  	}
···
  634  634  		return;
  635  635
  636  636  	ret = zfcp_fsf_exchange_port_data_sync(adapter->qdio, data);
  637      -	if (ret)
       637 +	if (ret != 0 && ret != -EAGAIN)
  638  638  		kfree(data);
  639  639  	else {
  640  640  		adapter->stats_reset = jiffies/HZ;
+168 -2
drivers/s390/scsi/zfcp_sysfs.c
··· 11 11 #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt 12 12 13 13 #include <linux/slab.h> 14 + #include "zfcp_diag.h" 14 15 #include "zfcp_ext.h" 15 16 16 17 #define ZFCP_DEV_ATTR(_feat, _name, _mode, _show, _store) \ ··· 326 325 static ZFCP_DEV_ATTR(adapter, port_remove, S_IWUSR, NULL, 327 326 zfcp_sysfs_port_remove_store); 328 327 328 + static ssize_t 329 + zfcp_sysfs_adapter_diag_max_age_show(struct device *dev, 330 + struct device_attribute *attr, char *buf) 331 + { 332 + struct zfcp_adapter *adapter = zfcp_ccw_adapter_by_cdev(to_ccwdev(dev)); 333 + ssize_t rc; 334 + 335 + if (!adapter) 336 + return -ENODEV; 337 + 338 + /* ceil(log(2^64 - 1) / log(10)) = 20 */ 339 + rc = scnprintf(buf, 20 + 2, "%lu\n", adapter->diagnostics->max_age); 340 + 341 + zfcp_ccw_adapter_put(adapter); 342 + return rc; 343 + } 344 + 345 + static ssize_t 346 + zfcp_sysfs_adapter_diag_max_age_store(struct device *dev, 347 + struct device_attribute *attr, 348 + const char *buf, size_t count) 349 + { 350 + struct zfcp_adapter *adapter = zfcp_ccw_adapter_by_cdev(to_ccwdev(dev)); 351 + unsigned long max_age; 352 + ssize_t rc; 353 + 354 + if (!adapter) 355 + return -ENODEV; 356 + 357 + rc = kstrtoul(buf, 10, &max_age); 358 + if (rc != 0) 359 + goto out; 360 + 361 + adapter->diagnostics->max_age = max_age; 362 + 363 + rc = count; 364 + out: 365 + zfcp_ccw_adapter_put(adapter); 366 + return rc; 367 + } 368 + static ZFCP_DEV_ATTR(adapter, diag_max_age, 0644, 369 + zfcp_sysfs_adapter_diag_max_age_show, 370 + zfcp_sysfs_adapter_diag_max_age_store); 371 + 329 372 static struct attribute *zfcp_adapter_attrs[] = { 330 373 &dev_attr_adapter_failed.attr, 331 374 &dev_attr_adapter_in_recovery.attr, ··· 382 337 &dev_attr_adapter_lic_version.attr, 383 338 &dev_attr_adapter_status.attr, 384 339 &dev_attr_adapter_hardware_version.attr, 340 + &dev_attr_adapter_diag_max_age.attr, 385 341 NULL 386 342 }; 387 343 ··· 623 577 return -ENOMEM; 624 578 625 579 retval = zfcp_fsf_exchange_port_data_sync(adapter->qdio, 
qtcb_port); 626 - if (!retval) 580 + if (retval == 0 || retval == -EAGAIN) 627 581 retval = sprintf(buf, "%u %u %u\n", qtcb_port->cp_util, 628 582 qtcb_port->cb_util, qtcb_port->a_util); 629 583 kfree(qtcb_port); ··· 649 603 return -ENOMEM; 650 604 651 605 retval = zfcp_fsf_exchange_config_data_sync(adapter->qdio, qtcb_config); 652 - if (!retval) 606 + if (retval == 0 || retval == -EAGAIN) 653 607 *stat_inf = qtcb_config->stat_info; 654 608 655 609 kfree(qtcb_config); ··· 709 663 &dev_attr_seconds_active, 710 664 &dev_attr_queue_full, 711 665 NULL 666 + }; 667 + 668 + static ssize_t zfcp_sysfs_adapter_diag_b2b_credit_show( 669 + struct device *dev, struct device_attribute *attr, char *buf) 670 + { 671 + struct zfcp_adapter *adapter = zfcp_ccw_adapter_by_cdev(to_ccwdev(dev)); 672 + struct zfcp_diag_header *diag_hdr; 673 + struct fc_els_flogi *nsp; 674 + ssize_t rc = -ENOLINK; 675 + unsigned long flags; 676 + unsigned int status; 677 + 678 + if (!adapter) 679 + return -ENODEV; 680 + 681 + status = atomic_read(&adapter->status); 682 + if (0 == (status & ZFCP_STATUS_COMMON_OPEN) || 683 + 0 == (status & ZFCP_STATUS_COMMON_UNBLOCKED) || 684 + 0 != (status & ZFCP_STATUS_COMMON_ERP_FAILED)) 685 + goto out; 686 + 687 + diag_hdr = &adapter->diagnostics->config_data.header; 688 + 689 + rc = zfcp_diag_update_buffer_limited( 690 + adapter, diag_hdr, zfcp_diag_update_config_data_buffer); 691 + if (rc != 0) 692 + goto out; 693 + 694 + spin_lock_irqsave(&diag_hdr->access_lock, flags); 695 + /* nport_serv_param doesn't contain the ELS_Command code */ 696 + nsp = (struct fc_els_flogi *)((unsigned long) 697 + adapter->diagnostics->config_data 698 + .data.nport_serv_param - 699 + sizeof(u32)); 700 + 701 + rc = scnprintf(buf, 5 + 2, "%hu\n", 702 + be16_to_cpu(nsp->fl_csp.sp_bb_cred)); 703 + spin_unlock_irqrestore(&diag_hdr->access_lock, flags); 704 + 705 + out: 706 + zfcp_ccw_adapter_put(adapter); 707 + return rc; 708 + } 709 + static ZFCP_DEV_ATTR(adapter_diag, b2b_credit, 0400, 710 + 
zfcp_sysfs_adapter_diag_b2b_credit_show, NULL); 711 + 712 + #define ZFCP_DEFINE_DIAG_SFP_ATTR(_name, _qtcb_member, _prtsize, _prtfmt) \ 713 + static ssize_t zfcp_sysfs_adapter_diag_sfp_##_name##_show( \ 714 + struct device *dev, struct device_attribute *attr, char *buf) \ 715 + { \ 716 + struct zfcp_adapter *const adapter = \ 717 + zfcp_ccw_adapter_by_cdev(to_ccwdev(dev)); \ 718 + struct zfcp_diag_header *diag_hdr; \ 719 + ssize_t rc = -ENOLINK; \ 720 + unsigned long flags; \ 721 + unsigned int status; \ 722 + \ 723 + if (!adapter) \ 724 + return -ENODEV; \ 725 + \ 726 + status = atomic_read(&adapter->status); \ 727 + if (0 == (status & ZFCP_STATUS_COMMON_OPEN) || \ 728 + 0 == (status & ZFCP_STATUS_COMMON_UNBLOCKED) || \ 729 + 0 != (status & ZFCP_STATUS_COMMON_ERP_FAILED)) \ 730 + goto out; \ 731 + \ 732 + if (!zfcp_diag_support_sfp(adapter)) { \ 733 + rc = -EOPNOTSUPP; \ 734 + goto out; \ 735 + } \ 736 + \ 737 + diag_hdr = &adapter->diagnostics->port_data.header; \ 738 + \ 739 + rc = zfcp_diag_update_buffer_limited( \ 740 + adapter, diag_hdr, zfcp_diag_update_port_data_buffer); \ 741 + if (rc != 0) \ 742 + goto out; \ 743 + \ 744 + spin_lock_irqsave(&diag_hdr->access_lock, flags); \ 745 + rc = scnprintf( \ 746 + buf, (_prtsize) + 2, _prtfmt "\n", \ 747 + adapter->diagnostics->port_data.data._qtcb_member); \ 748 + spin_unlock_irqrestore(&diag_hdr->access_lock, flags); \ 749 + \ 750 + out: \ 751 + zfcp_ccw_adapter_put(adapter); \ 752 + return rc; \ 753 + } \ 754 + static ZFCP_DEV_ATTR(adapter_diag_sfp, _name, 0400, \ 755 + zfcp_sysfs_adapter_diag_sfp_##_name##_show, NULL) 756 + 757 + ZFCP_DEFINE_DIAG_SFP_ATTR(temperature, temperature, 5, "%hu"); 758 + ZFCP_DEFINE_DIAG_SFP_ATTR(vcc, vcc, 5, "%hu"); 759 + ZFCP_DEFINE_DIAG_SFP_ATTR(tx_bias, tx_bias, 5, "%hu"); 760 + ZFCP_DEFINE_DIAG_SFP_ATTR(tx_power, tx_power, 5, "%hu"); 761 + ZFCP_DEFINE_DIAG_SFP_ATTR(rx_power, rx_power, 5, "%hu"); 762 + ZFCP_DEFINE_DIAG_SFP_ATTR(port_tx_type, sfp_flags.port_tx_type, 2, "%hu"); 763 + 
ZFCP_DEFINE_DIAG_SFP_ATTR(optical_port, sfp_flags.optical_port, 1, "%hu"); 764 + ZFCP_DEFINE_DIAG_SFP_ATTR(sfp_invalid, sfp_flags.sfp_invalid, 1, "%hu"); 765 + ZFCP_DEFINE_DIAG_SFP_ATTR(connector_type, sfp_flags.connector_type, 1, "%hu"); 766 + ZFCP_DEFINE_DIAG_SFP_ATTR(fec_active, sfp_flags.fec_active, 1, "%hu"); 767 + 768 + static struct attribute *zfcp_sysfs_diag_attrs[] = { 769 + &dev_attr_adapter_diag_sfp_temperature.attr, 770 + &dev_attr_adapter_diag_sfp_vcc.attr, 771 + &dev_attr_adapter_diag_sfp_tx_bias.attr, 772 + &dev_attr_adapter_diag_sfp_tx_power.attr, 773 + &dev_attr_adapter_diag_sfp_rx_power.attr, 774 + &dev_attr_adapter_diag_sfp_port_tx_type.attr, 775 + &dev_attr_adapter_diag_sfp_optical_port.attr, 776 + &dev_attr_adapter_diag_sfp_sfp_invalid.attr, 777 + &dev_attr_adapter_diag_sfp_connector_type.attr, 778 + &dev_attr_adapter_diag_sfp_fec_active.attr, 779 + &dev_attr_adapter_diag_b2b_credit.attr, 780 + NULL, 781 + }; 782 + 783 + const struct attribute_group zfcp_sysfs_diag_attr_group = { 784 + .name = "diagnostics", 785 + .attrs = zfcp_sysfs_diag_attrs, 712 786 };
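The zfcp `diag_max_age` show handler above sizes its buffer as `20 + 2` because a 64-bit value has at most `ceil(log10(2^64 - 1)) = 20` decimal digits, plus one byte for the newline and one for the terminating NUL. A minimal sketch of that sizing, using standard `snprintf` in place of the kernel's `scnprintf` (no truncation occurs here, so their return values agree; `format_max_age` is a hypothetical stand-in, not the driver's function):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* format_max_age() is a hypothetical stand-in for the sysfs show handler:
 * a 64-bit value in decimal needs at most 20 digits, so 20 digits + '\n'
 * + NUL = 22 bytes is the worst case, matching scnprintf(buf, 20 + 2, ...) */
static int format_max_age(char *buf, size_t size, unsigned long long v)
{
	return snprintf(buf, size, "%llu\n", v);
}
```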
+28 -9
drivers/scsi/NCR5380.c
··· 129 129 #define NCR5380_release_dma_irq(x) 130 130 #endif 131 131 132 + static unsigned int disconnect_mask = ~0; 133 + module_param(disconnect_mask, int, 0444); 134 + 132 135 static int do_abort(struct Scsi_Host *); 133 136 static void do_reset(struct Scsi_Host *); 134 137 static void bus_reset_cleanup(struct Scsi_Host *); ··· 173 170 cmd->SCp.ptr = sg_virt(cmd->SCp.buffer); 174 171 cmd->SCp.this_residual = cmd->SCp.buffer->length; 175 172 } 173 + } 174 + 175 + static inline void set_resid_from_SCp(struct scsi_cmnd *cmd) 176 + { 177 + int resid = cmd->SCp.this_residual; 178 + struct scatterlist *s = cmd->SCp.buffer; 179 + 180 + if (s) 181 + while (!sg_is_last(s)) { 182 + s = sg_next(s); 183 + resid += s->length; 184 + } 185 + scsi_set_resid(cmd, resid); 176 186 } 177 187 178 188 /** ··· 970 954 int err; 971 955 bool ret = true; 972 956 bool can_disconnect = instance->irq != NO_IRQ && 973 - cmd->cmnd[0] != REQUEST_SENSE; 957 + cmd->cmnd[0] != REQUEST_SENSE && 958 + (disconnect_mask & BIT(scmd_id(cmd))); 974 959 975 960 NCR5380_dprint(NDEBUG_ARBITRATION, instance); 976 961 dsprintk(NDEBUG_ARBITRATION, instance, "starting arbitration, id = %d\n", ··· 1396 1379 * MESSAGE OUT phase and sending an ABORT message. 1397 1380 * @instance: relevant scsi host instance 1398 1381 * 1399 - * Returns 0 on success, -1 on failure. 1382 + * Returns 0 on success, negative error code on failure. 
1400 1383 */ 1401 1384 1402 1385 static int do_abort(struct Scsi_Host *instance) ··· 1421 1404 1422 1405 rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, SR_REQ, 10 * HZ); 1423 1406 if (rc < 0) 1424 - goto timeout; 1407 + goto out; 1425 1408 1426 1409 tmp = NCR5380_read(STATUS_REG) & PHASE_MASK; 1427 1410 ··· 1432 1415 ICR_BASE | ICR_ASSERT_ATN | ICR_ASSERT_ACK); 1433 1416 rc = NCR5380_poll_politely(hostdata, STATUS_REG, SR_REQ, 0, 3 * HZ); 1434 1417 if (rc < 0) 1435 - goto timeout; 1418 + goto out; 1436 1419 NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE | ICR_ASSERT_ATN); 1437 1420 } 1438 1421 ··· 1441 1424 len = 1; 1442 1425 phase = PHASE_MSGOUT; 1443 1426 NCR5380_transfer_pio(instance, &phase, &len, &msgptr); 1427 + if (len) 1428 + rc = -ENXIO; 1444 1429 1445 1430 /* 1446 1431 * If we got here, and the command completed successfully, 1447 1432 * we're about to go into bus free state. 1448 1433 */ 1449 1434 1450 - return len ? -1 : 0; 1451 - 1452 - timeout: 1435 + out: 1453 1436 NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); 1454 - return -1; 1437 + return rc; 1455 1438 } 1456 1439 1457 1440 /* ··· 1819 1802 cmd->result &= ~0xffff; 1820 1803 cmd->result |= cmd->SCp.Status; 1821 1804 cmd->result |= cmd->SCp.Message << 8; 1805 + 1806 + set_resid_from_SCp(cmd); 1822 1807 1823 1808 if (cmd->cmnd[0] == REQUEST_SENSE) 1824 1809 complete_cmd(instance, cmd); ··· 2283 2264 dsprintk(NDEBUG_ABORT, instance, "abort: cmd %p is connected\n", cmd); 2284 2265 hostdata->connected = NULL; 2285 2266 hostdata->dma_len = 0; 2286 - if (do_abort(instance)) { 2267 + if (do_abort(instance) < 0) { 2287 2268 set_host_byte(cmd, DID_ERROR); 2288 2269 complete_cmd(instance, cmd); 2289 2270 result = FAILED;
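The new `set_resid_from_SCp()` above implements the residual-tracking fix called out in the merge message: the residual is the unread remainder of the current scatterlist entry plus the full length of every entry after it. A toy model of that summation, with `struct seg` standing in for `struct scatterlist` and a NULL `next` playing the role of `sg_is_last()` (names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct scatterlist; NULL next marks the last entry,
 * like sg_is_last() in the real driver. */
struct seg {
	unsigned int length;
	struct seg *next;
};

/* Mirrors set_resid_from_SCp(): start from the bytes left in the
 * current segment, then add every segment that was never reached. */
static unsigned int resid_from_segments(unsigned int this_residual,
					struct seg *cur)
{
	unsigned int resid = this_residual;

	if (cur)
		while (cur->next) {
			cur = cur->next;
			resid += cur->length;
		}
	return resid;
}
```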
+6 -5
drivers/scsi/aacraid/aachba.c
··· 1477 1477 struct aac_srb * srbcmd; 1478 1478 u32 flag; 1479 1479 u32 timeout; 1480 + struct aac_dev *dev = fib->dev; 1480 1481 1481 1482 aac_fib_init(fib); 1482 1483 switch(cmd->sc_data_direction){ ··· 1504 1503 srbcmd->flags = cpu_to_le32(flag); 1505 1504 timeout = cmd->request->timeout/HZ; 1506 1505 if (timeout == 0) 1507 - timeout = 1; 1506 + timeout = (dev->sa_firmware ? AAC_SA_TIMEOUT : AAC_ARC_TIMEOUT); 1508 1507 srbcmd->timeout = cpu_to_le32(timeout); // timeout in seconds 1509 1508 srbcmd->retry_limit = 0; /* Obsolete parameter */ 1510 1509 srbcmd->cdb_size = cpu_to_le32(cmd->cmd_len); ··· 2468 2467 scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | 2469 2468 SAM_STAT_CHECK_CONDITION; 2470 2469 set_sense(&dev->fsa_dev[cid].sense_data, 2471 - HARDWARE_ERROR, SENCODE_INTERNAL_TARGET_FAILURE, 2470 + ILLEGAL_REQUEST, SENCODE_LBA_OUT_OF_RANGE, 2472 2471 ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0); 2473 2472 memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data, 2474 2473 min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data), 2475 2474 SCSI_SENSE_BUFFERSIZE)); 2476 2475 scsicmd->scsi_done(scsicmd); 2477 - return 1; 2476 + return 0; 2478 2477 } 2479 2478 2480 2479 dprintk((KERN_DEBUG "aac_read[cpu %d]: lba = %llu, t = %ld.\n", ··· 2560 2559 scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | 2561 2560 SAM_STAT_CHECK_CONDITION; 2562 2561 set_sense(&dev->fsa_dev[cid].sense_data, 2563 - HARDWARE_ERROR, SENCODE_INTERNAL_TARGET_FAILURE, 2562 + ILLEGAL_REQUEST, SENCODE_LBA_OUT_OF_RANGE, 2564 2563 ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0); 2565 2564 memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data, 2566 2565 min_t(size_t, sizeof(dev->fsa_dev[cid].sense_data), 2567 2566 SCSI_SENSE_BUFFERSIZE)); 2568 2567 scsicmd->scsi_done(scsicmd); 2569 - return 1; 2568 + return 0; 2570 2569 } 2571 2570 2572 2571 dprintk((KERN_DEBUG "aac_write[cpu %d]: lba = %llu, t = %ld.\n",
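The aachba.c change above replaces the old 1-second fallback with a per-family default (`AAC_SA_TIMEOUT` for sa_firmware/Thor controllers, `AAC_ARC_TIMEOUT` for ARC ones, defined in aacraid.h later in this series). A sketch of the selection logic, with `aac_cmd_timeout` as a hypothetical helper name:

```c
#include <assert.h>

/* Values taken from aacraid.h in this series, in seconds */
#define AAC_SA_TIMEOUT	180	/* Thor (sa_firmware) controllers */
#define AAC_ARC_TIMEOUT	60	/* ARC controllers */

/* Sketch of the new default: a zero request timeout now falls back to
 * a per-controller-family value instead of the old 1-second floor. */
static unsigned int aac_cmd_timeout(int sa_firmware, unsigned int req_secs)
{
	if (req_secs)
		return req_secs;
	return sa_firmware ? AAC_SA_TIMEOUT : AAC_ARC_TIMEOUT;
}
```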
+17 -6
drivers/scsi/aacraid/aacraid.h
··· 85 85 #define PMC_GLOBAL_INT_BIT0 0x00000001 86 86 87 87 #ifndef AAC_DRIVER_BUILD 88 - # define AAC_DRIVER_BUILD 50877 88 + # define AAC_DRIVER_BUILD 50983 89 89 # define AAC_DRIVER_BRANCH "-custom" 90 90 #endif 91 91 #define MAXIMUM_NUM_CONTAINERS 32 ··· 108 108 #define AAC_BUS_TARGET_LOOP (AAC_MAX_BUSES * AAC_MAX_TARGETS) 109 109 #define AAC_MAX_NATIVE_SIZE 2048 110 110 #define FW_ERROR_BUFFER_SIZE 512 111 + #define AAC_SA_TIMEOUT 180 112 + #define AAC_ARC_TIMEOUT 60 111 113 112 114 #define get_bus_number(x) (x/AAC_MAX_TARGETS) 113 115 #define get_target_number(x) (x%AAC_MAX_TARGETS) ··· 1330 1328 #define AAC_DEVTYPE_ARC_RAW 2 1331 1329 #define AAC_DEVTYPE_NATIVE_RAW 3 1332 1330 1333 - #define AAC_SAFW_RESCAN_DELAY (10 * HZ) 1331 + #define AAC_RESCAN_DELAY (10 * HZ) 1334 1332 1335 1333 struct aac_hba_map_info { 1336 1334 __le32 rmw_nexus; /* nexus for native HBA devices */ ··· 1603 1601 struct fsa_dev_info *fsa_dev; 1604 1602 struct task_struct *thread; 1605 1603 struct delayed_work safw_rescan_work; 1604 + struct delayed_work src_reinit_aif_worker; 1606 1605 int cardtype; 1607 1606 /* 1608 1607 *This lock will protect the two 32-bit ··· 1676 1673 u8 adapter_shutdown; 1677 1674 u32 handle_pci_error; 1678 1675 bool init_reset; 1676 + u8 soft_reset_support; 1679 1677 }; 1680 1678 1681 1679 #define aac_adapter_interrupt(dev) \ ··· 2648 2644 2649 2645 static inline void aac_schedule_safw_scan_worker(struct aac_dev *dev) 2650 2646 { 2651 - schedule_delayed_work(&dev->safw_rescan_work, AAC_SAFW_RESCAN_DELAY); 2647 + schedule_delayed_work(&dev->safw_rescan_work, AAC_RESCAN_DELAY); 2648 + } 2649 + 2650 + static inline void aac_schedule_src_reinit_aif_worker(struct aac_dev *dev) 2651 + { 2652 + schedule_delayed_work(&dev->src_reinit_aif_worker, AAC_RESCAN_DELAY); 2652 2653 } 2653 2654 2654 2655 static inline void aac_safw_rescan_worker(struct work_struct *work) ··· 2667 2658 aac_scan_host(dev); 2668 2659 } 2669 2660 2670 - static inline void 
aac_cancel_safw_rescan_worker(struct aac_dev *dev) 2661 + static inline void aac_cancel_rescan_worker(struct aac_dev *dev) 2671 2662 { 2672 - if (dev->sa_firmware) 2673 - cancel_delayed_work_sync(&dev->safw_rescan_work); 2663 + cancel_delayed_work_sync(&dev->safw_rescan_work); 2664 + cancel_delayed_work_sync(&dev->src_reinit_aif_worker); 2674 2665 } 2675 2666 2676 2667 /* SCp.phase values */ ··· 2680 2671 #define AAC_OWNER_FIRMWARE 0x106 2681 2672 2682 2673 void aac_safw_rescan_worker(struct work_struct *work); 2674 + void aac_src_reinit_aif_worker(struct work_struct *work); 2683 2675 int aac_acquire_irq(struct aac_dev *dev); 2684 2676 void aac_free_irq(struct aac_dev *dev); 2685 2677 int aac_setup_safw_adapter(struct aac_dev *dev); ··· 2738 2728 int _aac_rx_init(struct aac_dev *dev); 2739 2729 int aac_rx_select_comm(struct aac_dev *dev, int comm); 2740 2730 int aac_rx_deliver_producer(struct fib * fib); 2731 + void aac_reinit_aif(struct aac_dev *aac, unsigned int index); 2741 2732 2742 2733 static inline int aac_is_src(struct aac_dev *dev) 2743 2734 {
+5
drivers/scsi/aacraid/comminit.c
··· 571 571 else 572 572 dev->sa_firmware = 0; 573 573 574 + if (status[4] & le32_to_cpu(AAC_EXTOPT_SOFT_RESET)) 575 + dev->soft_reset_support = 1; 576 + else 577 + dev->soft_reset_support = 0; 578 + 574 579 if ((dev->comm_interface == AAC_COMM_MESSAGE) && 575 580 (status[2] > dev->base_size)) { 576 581 aac_adapter_ioremap(dev, 0);
+20 -1
drivers/scsi/aacraid/commsup.c
··· 232 232 fibptr->type = FSAFS_NTC_FIB_CONTEXT; 233 233 fibptr->callback_data = NULL; 234 234 fibptr->callback = NULL; 235 + fibptr->flags = 0; 235 236 236 237 return fibptr; 237 238 } ··· 1464 1463 } 1465 1464 } 1466 1465 1466 + static void aac_schedule_bus_scan(struct aac_dev *aac) 1467 + { 1468 + if (aac->sa_firmware) 1469 + aac_schedule_safw_scan_worker(aac); 1470 + else 1471 + aac_schedule_src_reinit_aif_worker(aac); 1472 + } 1473 + 1467 1474 static int _aac_reset_adapter(struct aac_dev *aac, int forced, u8 reset_type) 1468 1475 { 1469 1476 int index, quirks; ··· 1647 1638 */ 1648 1639 if (!retval && !is_kdump_kernel()) { 1649 1640 dev_info(&aac->pdev->dev, "Scheduling bus rescan\n"); 1650 - aac_schedule_safw_scan_worker(aac); 1641 + aac_schedule_bus_scan(aac); 1651 1642 } 1652 1643 1653 1644 if (jafo) { ··· 1966 1957 mutex_unlock(&dev->scan_mutex); 1967 1958 1968 1959 return rcode; 1960 + } 1961 + 1962 + void aac_src_reinit_aif_worker(struct work_struct *work) 1963 + { 1964 + struct aac_dev *dev = container_of(to_delayed_work(work), 1965 + struct aac_dev, src_reinit_aif_worker); 1966 + 1967 + wait_event(dev->scsi_host_ptr->host_wait, 1968 + !scsi_host_in_recovery(dev->scsi_host_ptr)); 1969 + aac_reinit_aif(dev, dev->cardtype); 1969 1970 } 1970 1971 1971 1972 /**
+29 -6
drivers/scsi/aacraid/linit.c
··· 391 391 int chn, tid; 392 392 unsigned int depth = 0; 393 393 unsigned int set_timeout = 0; 394 + int timeout = 0; 394 395 bool set_qd_dev_type = false; 395 396 u8 devtype = 0; 396 397 ··· 484 483 485 484 /* 486 485 * Firmware has an individual device recovery time typically 487 - * of 35 seconds, give us a margin. 486 + * of 35 seconds, give us a margin. Thor devices can take longer in 487 + * error recovery, hence different value. 488 488 */ 489 - if (set_timeout && sdev->request_queue->rq_timeout < (45 * HZ)) 490 - blk_queue_rq_timeout(sdev->request_queue, 45*HZ); 489 + if (set_timeout) { 490 + timeout = aac->sa_firmware ? AAC_SA_TIMEOUT : AAC_ARC_TIMEOUT; 491 + blk_queue_rq_timeout(sdev->request_queue, timeout * HZ); 492 + } 491 493 492 494 if (depth > 256) 493 495 depth = 256; ··· 612 608 static int aac_ioctl(struct scsi_device *sdev, unsigned int cmd, 613 609 void __user *arg) 614 610 { 611 + int retval; 615 612 struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata; 616 613 if (!capable(CAP_SYS_RAWIO)) 617 614 return -EPERM; 615 + retval = aac_adapter_check_health(dev); 616 + if (retval) 617 + return -EBUSY; 618 618 return aac_do_ioctl(dev, cmd, arg); 619 619 } 620 620 ··· 1593 1585 } 1594 1586 } 1595 1587 1588 + void aac_reinit_aif(struct aac_dev *aac, unsigned int index) 1589 + { 1590 + /* 1591 + * Firmware may send a AIF messages very early and the Driver may have 1592 + * ignored as it is not fully ready to process the messages. Send 1593 + * AIF to firmware so that if there are any unprocessed events they 1594 + * can be processed now. 
1595 + */ 1596 + if (aac_drivers[index].quirks & AAC_QUIRK_SRC) 1597 + aac_intr_normal(aac, 0, 2, 0, NULL); 1598 + 1599 + } 1600 + 1596 1601 static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) 1597 1602 { 1598 1603 unsigned index = id->driver_data; ··· 1703 1682 mutex_init(&aac->scan_mutex); 1704 1683 1705 1684 INIT_DELAYED_WORK(&aac->safw_rescan_work, aac_safw_rescan_worker); 1685 + INIT_DELAYED_WORK(&aac->src_reinit_aif_worker, 1686 + aac_src_reinit_aif_worker); 1706 1687 /* 1707 1688 * Map in the registers from the adapter. 1708 1689 */ ··· 1895 1872 struct aac_dev *aac = (struct aac_dev *)shost->hostdata; 1896 1873 1897 1874 scsi_block_requests(shost); 1898 - aac_cancel_safw_rescan_worker(aac); 1875 + aac_cancel_rescan_worker(aac); 1899 1876 aac_send_shutdown(aac); 1900 1877 1901 1878 aac_release_resources(aac); ··· 1954 1931 struct Scsi_Host *shost = pci_get_drvdata(pdev); 1955 1932 struct aac_dev *aac = (struct aac_dev *)shost->hostdata; 1956 1933 1957 - aac_cancel_safw_rescan_worker(aac); 1934 + aac_cancel_rescan_worker(aac); 1958 1935 scsi_remove_host(shost); 1959 1936 1960 1937 __aac_shutdown(aac); ··· 2012 1989 aac->handle_pci_error = 1; 2013 1990 2014 1991 scsi_block_requests(aac->scsi_host_ptr); 2015 - aac_cancel_safw_rescan_worker(aac); 1992 + aac_cancel_rescan_worker(aac); 2016 1993 aac_flush_ios(aac); 2017 1994 aac_release_resources(aac); 2018 1995
+10
drivers/scsi/aacraid/src.c
··· 733 733 return ctrl_up; 734 734 } 735 735 736 + static void aac_src_drop_io(struct aac_dev *dev) 737 + { 738 + if (!dev->soft_reset_support) 739 + return; 740 + 741 + aac_adapter_sync_cmd(dev, DROP_IO, 742 + 0, 0, 0, 0, 0, 0, NULL, NULL, NULL, NULL, NULL); 743 + } 744 + 736 745 static void aac_notify_fw_of_iop_reset(struct aac_dev *dev) 737 746 { 738 747 aac_adapter_sync_cmd(dev, IOP_RESET_ALWAYS, 0, 0, 0, 0, 0, 0, NULL, 739 748 NULL, NULL, NULL, NULL); 749 + aac_src_drop_io(dev); 740 750 } 741 751 742 752 static void aac_send_iop_reset(struct aac_dev *dev)
+3 -3
drivers/scsi/arcmsr/arcmsr_hba.c
··· 1400 1400 , pCCB->acb 1401 1401 , pCCB->startdone 1402 1402 , atomic_read(&acb->ccboutstandingcount)); 1403 - return; 1403 + return; 1404 1404 } 1405 1405 arcmsr_report_ccb_state(acb, pCCB, error); 1406 1406 } ··· 3476 3476 , pCCB->pcmd->device->id 3477 3477 , (u32)pCCB->pcmd->device->lun 3478 3478 , pCCB); 3479 - pCCB->pcmd->result = DID_ABORT << 16; 3480 - arcmsr_ccb_complete(pCCB); 3479 + pCCB->pcmd->result = DID_ABORT << 16; 3480 + arcmsr_ccb_complete(pCCB); 3481 3481 continue; 3482 3482 } 3483 3483 printk(KERN_NOTICE "arcmsr%d: polling get an illegal ccb"
+2 -2
drivers/scsi/arm/acornscsi.c
··· 1067 1067 * Purpose : ensure that all DMA transfers are up-to-date & host->scsi.SCp is correct 1068 1068 * Params : host - host to finish 1069 1069 * Notes : This is called when a command is: 1070 - * terminating, RESTORE_POINTERS, SAVE_POINTERS, DISCONECT 1070 + * terminating, RESTORE_POINTERS, SAVE_POINTERS, DISCONNECT 1071 1071 * : This must not return until all transfers are completed. 1072 1072 */ 1073 1073 static ··· 1816 1816 } 1817 1817 1818 1818 /* 1819 - * Function: int acornscsi_reconect_finish(AS_Host *host) 1819 + * Function: int acornscsi_reconnect_finish(AS_Host *host) 1820 1820 * Purpose : finish reconnecting a command 1821 1821 * Params : host - host to complete 1822 1822 * Returns : 0 if failed
+3 -3
drivers/scsi/atari_scsi.c
··· 742 742 atari_scsi_template.sg_tablesize = SG_ALL; 743 743 } else { 744 744 atari_scsi_template.can_queue = 1; 745 - atari_scsi_template.sg_tablesize = SG_NONE; 745 + atari_scsi_template.sg_tablesize = 1; 746 746 } 747 747 748 748 if (setup_can_queue > 0) ··· 751 751 if (setup_cmd_per_lun > 0) 752 752 atari_scsi_template.cmd_per_lun = setup_cmd_per_lun; 753 753 754 - /* Leave sg_tablesize at 0 on a Falcon! */ 755 - if (ATARIHW_PRESENT(TT_SCSI) && setup_sg_tablesize >= 0) 754 + /* Don't increase sg_tablesize on Falcon! */ 755 + if (ATARIHW_PRESENT(TT_SCSI) && setup_sg_tablesize > 0) 756 756 atari_scsi_template.sg_tablesize = setup_sg_tablesize; 757 757 758 758 if (setup_hostid >= 0) {
+1 -1
drivers/scsi/atp870u.c
··· 1680 1680 .bios_param = atp870u_biosparam /* biosparm */, 1681 1681 .can_queue = qcnt /* can_queue */, 1682 1682 .this_id = 7 /* SCSI ID */, 1683 - .sg_tablesize = ATP870U_SCATTER /*SG_ALL*/ /*SG_NONE*/, 1683 + .sg_tablesize = ATP870U_SCATTER /*SG_ALL*/, 1684 1684 .max_sectors = ATP870U_MAX_SECTORS, 1685 1685 }; 1686 1686
+1 -2
drivers/scsi/bfa/bfad.c
··· 1487 1487 return ret; 1488 1488 } 1489 1489 1490 - int 1491 - restart_bfa(struct bfad_s *bfad) 1490 + static int restart_bfa(struct bfad_s *bfad) 1492 1491 { 1493 1492 unsigned long flags; 1494 1493 struct pci_dev *pdev = bfad->pcidev;
+3 -1
drivers/scsi/bfa/bfad_attr.c
··· 275 275 rc = bfa_port_get_stats(BFA_FCPORT(&bfad->bfa), 276 276 fcstats, bfad_hcb_comp, &fcomp); 277 277 spin_unlock_irqrestore(&bfad->bfad_lock, flags); 278 - if (rc != BFA_STATUS_OK) 278 + if (rc != BFA_STATUS_OK) { 279 + kfree(fcstats); 279 280 return NULL; 281 + } 280 282 281 283 wait_for_completion(&fcomp.comp); 282 284
+1 -1
drivers/scsi/bnx2fc/57xx_hsi_bnx2fc.h
··· 813 813 814 814 815 815 /* 816 - * FCoE conection data base 816 + * FCoE connection data base 817 817 */ 818 818 struct fcoe_conn_db { 819 819 #if defined(__BIG_ENDIAN)
+1 -1
drivers/scsi/bnx2fc/bnx2fc_io.c
··· 1242 1242 1243 1243 /* Wait 2 * RA_TOV + 1 to be sure timeout function hasn't fired */ 1244 1244 time_left = wait_for_completion_timeout(&io_req->abts_done, 1245 - (2 * rp->r_a_tov + 1) * HZ); 1245 + msecs_to_jiffies(2 * rp->r_a_tov + 1)); 1246 1246 if (time_left) 1247 1247 BNX2FC_IO_DBG(io_req, 1248 1248 "Timed out in eh_abort waiting for abts_done");
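The bnx2fc fix above is a units bug: `rp->r_a_tov` is in milliseconds, so multiplying it by `HZ` treated milliseconds as seconds and made the `eh_abort` wait roughly 1000x too long; `msecs_to_jiffies()` is the correct conversion. A sketch under an assumed tick rate (`HZ = 250` and the simplified round-up conversion are illustrative, not the kernel's exact implementation):

```c
#include <assert.h>

#define HZ 250	/* example tick rate; an assumption for this sketch */

/* Simplified round-up ms-to-jiffies conversion, enough to show the
 * magnitude of the bug the patch fixes. */
static unsigned long ms_to_jiffies(unsigned long ms)
{
	return (ms * HZ + 999) / 1000;
}
```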
+1 -1
drivers/scsi/bnx2i/bnx2i_iscsi.c
··· 915 915 INIT_LIST_HEAD(&hba->ep_ofld_list); 916 916 INIT_LIST_HEAD(&hba->ep_active_list); 917 917 INIT_LIST_HEAD(&hba->ep_destroy_list); 918 - pci_dev_put(hba->pcidev); 919 918 920 919 if (hba->regview) { 921 920 pci_iounmap(hba->pcidev, hba->regview); 922 921 hba->regview = NULL; 923 922 } 923 + pci_dev_put(hba->pcidev); 924 924 bnx2i_free_mp_bdt(hba); 925 925 bnx2i_release_free_cid_que(hba); 926 926 iscsi_host_free(shost);
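The bnx2i fix above is purely an ordering change: `pci_dev_put()` can drop the last reference and free the device, while `pci_iounmap()` still dereferences it, so the reference drop must come after the unmap. A toy model of that hazard (`struct dev`, `dev_put`, and `teardown` are invented for illustration; the `freed` flag stands in for actually freed memory):

```c
#include <assert.h>

/* Toy refcounted object: dropping the last reference "frees" it. */
struct dev {
	int refs;
	int freed;
};

static void dev_put(struct dev *d)
{
	if (--d->refs == 0)
		d->freed = 1;
}

/* Returns 0 on a safe teardown, -1 if the unmap step would have
 * touched freed memory (the pre-fix ordering). */
static int teardown(struct dev *d, int put_before_unmap)
{
	if (put_before_unmap)
		dev_put(d);	/* buggy order: may free d right here */
	if (d->freed)
		return -1;	/* pci_iounmap() would now use-after-free */
	/* ... pci_iounmap(d, ...) would run here ... */
	if (!put_before_unmap)
		dev_put(d);	/* fixed order: drop the ref after unmap */
	return 0;
}
```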
+10 -10
drivers/scsi/csiostor/csio_hw.c
··· 793 793 goto found; 794 794 } 795 795 796 - /* Decode Flash part size. The code below looks repetative with 796 + /* Decode Flash part size. The code below looks repetitive with 797 797 * common encodings, but that's not guaranteed in the JEDEC 798 - * specification for the Read JADEC ID command. The only thing that 799 - * we're guaranteed by the JADEC specification is where the 798 + * specification for the Read JEDEC ID command. The only thing that 799 + * we're guaranteed by the JEDEC specification is where the 800 800 * Manufacturer ID is in the returned result. After that each 801 801 * Manufacturer ~could~ encode things completely differently. 802 802 * Note, all Flash parts must have 64KB sectors. ··· 983 983 waiting -= 50; 984 984 985 985 /* 986 - * If neither Error nor Initialialized are indicated 987 - * by the firmware keep waiting till we exaust our 986 + * If neither Error nor Initialized are indicated 987 + * by the firmware keep waiting till we exhaust our 988 988 * timeout ... and then retry if we haven't exhausted 989 989 * our retries ... 990 990 */ ··· 1738 1738 * Convert Common Code Forward Error Control settings into the 1739 1739 * Firmware's API. If the current Requested FEC has "Automatic" 1740 1740 * (IEEE 802.3) specified, then we use whatever the Firmware 1741 - * sent us as part of it's IEEE 802.3-based interpratation of 1741 + * sent us as part of it's IEEE 802.3-based interpretation of 1742 1742 * the Transceiver Module EPROM FEC parameters. Otherwise we 1743 1743 * use whatever is in the current Requested FEC settings. 
1744 1744 */ ··· 2834 2834 } 2835 2835 2836 2836 /* 2837 - * csio_hws_initializing - Initialiazing state 2837 + * csio_hws_initializing - Initializing state 2838 2838 * @hw - HW module 2839 2839 * @evt - Event 2840 2840 * ··· 3049 3049 if (!csio_is_hw_master(hw)) 3050 3050 break; 3051 3051 /* 3052 - * The BYE should have alerady been issued, so we cant 3052 + * The BYE should have already been issued, so we can't 3053 3053 * use the mailbox interface. Hence we use the PL_RST 3054 3054 * register directly. 3055 3055 */ ··· 3104 3104 * 3105 3105 * A table driven interrupt handler that applies a set of masks to an 3106 3106 * interrupt status word and performs the corresponding actions if the 3107 - * interrupts described by the mask have occured. The actions include 3107 + * interrupts described by the mask have occurred. The actions include 3108 3108 * optionally emitting a warning or alert message. The table is terminated 3109 3109 * by an entry specifying mask 0. Returns the number of fatal interrupt 3110 3110 * conditions. ··· 4219 4219 * @hw: Pointer to HW module. 4220 4220 * 4221 4221 * It is assumed that the initialization is a synchronous operation. 4222 - * So when we return afer posting the event, the HW SM should be in 4222 + * So when we return after posting the event, the HW SM should be in 4223 4223 * the ready state, if there were no errors during init. 4224 4224 */ 4225 4225 int
+2 -5
drivers/scsi/csiostor/csio_init.c
··· 154 154 /* 155 155 * csio_dfs_destroy - Destroys per-hw debugfs. 156 156 */ 157 - static int 157 + static void 158 158 csio_dfs_destroy(struct csio_hw *hw) 159 159 { 160 - if (hw->debugfs_root) 161 - debugfs_remove_recursive(hw->debugfs_root); 162 - 163 - return 0; 160 + debugfs_remove_recursive(hw->debugfs_root); 164 161 } 165 162 166 163 /*
+10 -8
drivers/scsi/csiostor/csio_lnode.c
··· 301 301 struct fc_fdmi_port_name *port_name; 302 302 uint8_t buf[64]; 303 303 uint8_t *fc4_type; 304 + unsigned long flags; 304 305 305 306 if (fdmi_req->wr_status != FW_SUCCESS) { 306 307 csio_ln_dbg(ln, "WR error:%x in processing fdmi rhba cmd\n", ··· 386 385 len = (uint32_t)(pld - (uint8_t *)cmd); 387 386 388 387 /* Submit FDMI RPA request */ 389 - spin_lock_irq(&hw->lock); 388 + spin_lock_irqsave(&hw->lock, flags); 390 389 if (csio_ln_mgmt_submit_req(fdmi_req, csio_ln_fdmi_done, 391 390 FCOE_CT, &fdmi_req->dma_buf, len)) { 392 391 CSIO_INC_STATS(ln, n_fdmi_err); 393 392 csio_ln_dbg(ln, "Failed to issue fdmi rpa req\n"); 394 393 } 395 - spin_unlock_irq(&hw->lock); 394 + spin_unlock_irqrestore(&hw->lock, flags); 396 395 } 397 396 398 397 /* ··· 413 412 struct fc_fdmi_rpl *reg_pl; 414 413 struct fs_fdmi_attrs *attrib_blk; 415 414 uint8_t buf[64]; 415 + unsigned long flags; 416 416 417 417 if (fdmi_req->wr_status != FW_SUCCESS) { 418 418 csio_ln_dbg(ln, "WR error:%x in processing fdmi dprt cmd\n", ··· 493 491 attrib_blk->numattrs = htonl(numattrs); 494 492 495 493 /* Submit FDMI RHBA request */ 496 - spin_lock_irq(&hw->lock); 494 + spin_lock_irqsave(&hw->lock, flags); 497 495 if (csio_ln_mgmt_submit_req(fdmi_req, csio_ln_fdmi_rhba_cbfn, 498 496 FCOE_CT, &fdmi_req->dma_buf, len)) { 499 497 CSIO_INC_STATS(ln, n_fdmi_err); 500 498 csio_ln_dbg(ln, "Failed to issue fdmi rhba req\n"); 501 499 } 502 - spin_unlock_irq(&hw->lock); 500 + spin_unlock_irqrestore(&hw->lock, flags); 503 501 } 504 502 505 503 /* ··· 514 512 void *cmd; 515 513 struct fc_fdmi_port_name *port_name; 516 514 uint32_t len; 515 + unsigned long flags; 517 516 518 517 if (fdmi_req->wr_status != FW_SUCCESS) { 519 518 csio_ln_dbg(ln, "WR error:%x in processing fdmi dhba cmd\n", ··· 545 542 len += sizeof(*port_name); 546 543 547 544 /* Submit FDMI request */ 548 - spin_lock_irq(&hw->lock); 545 + spin_lock_irqsave(&hw->lock, flags); 549 546 if (csio_ln_mgmt_submit_req(fdmi_req, csio_ln_fdmi_dprt_cbfn, 550 
547 FCOE_CT, &fdmi_req->dma_buf, len)) { 551 548 CSIO_INC_STATS(ln, n_fdmi_err); 552 549 csio_ln_dbg(ln, "Failed to issue fdmi dprt req\n"); 553 550 } 554 - spin_unlock_irq(&hw->lock); 551 + spin_unlock_irqrestore(&hw->lock, flags); 555 552 } 556 553 557 554 /** ··· 1992 1989 csio_ln_init(struct csio_lnode *ln) 1993 1990 { 1994 1991 int rv = -EINVAL; 1995 - struct csio_lnode *rln, *pln; 1992 + struct csio_lnode *pln; 1996 1993 struct csio_hw *hw = csio_lnode_to_hw(ln); 1997 1994 1998 1995 csio_init_state(&ln->sm, csio_lns_uninit); ··· 2022 2019 * THe rest is common for non-root physical and NPIV lnodes. 2023 2020 * Just get references to all other modules 2024 2021 */ 2025 - rln = csio_root_lnode(ln); 2026 2022 2027 2023 if (csio_is_npiv_ln(ln)) { 2028 2024 /* NPIV */
+1 -1
drivers/scsi/csiostor/csio_mb.c
··· 1210 1210 !csio_is_hw_intr_enabled(hw)) { 1211 1211 csio_err(hw, "Cannot issue mailbox in interrupt mode 0x%x\n", 1212 1212 *((uint8_t *)mbp->mb)); 1213 - goto error_out; 1213 + goto error_out; 1214 1214 } 1215 1215 1216 1216 if (mbm->mcurrent != NULL) {
-2
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
··· 2073 2073 struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev); 2074 2074 struct net_device *ndev = cdev->ports[0]; 2075 2075 struct cxgbi_tag_format tformat; 2076 - unsigned int ppmax; 2077 2076 int i, err; 2078 2077 2079 2078 if (!lldi->vr->iscsi.size) { ··· 2081 2082 } 2082 2083 2083 2084 cdev->flags |= CXGBI_FLAG_USE_PPOD_OFLDQ; 2084 - ppmax = lldi->vr->iscsi.size >> PPOD_SIZE_SHIFT; 2085 2085 2086 2086 memset(&tformat, 0, sizeof(struct cxgbi_tag_format)); 2087 2087 for (i = 0; i < 4; i++)
-28
drivers/scsi/cxgbi/libcxgbi.c
··· 2284 2284 } 2285 2285 EXPORT_SYMBOL_GPL(cxgbi_set_conn_param); 2286 2286 2287 - static inline int csk_print_port(struct cxgbi_sock *csk, char *buf) 2288 - { 2289 - int len; 2290 - 2291 - cxgbi_sock_get(csk); 2292 - len = sprintf(buf, "%hu\n", ntohs(csk->daddr.sin_port)); 2293 - cxgbi_sock_put(csk); 2294 - 2295 - return len; 2296 - } 2297 - 2298 - static inline int csk_print_ip(struct cxgbi_sock *csk, char *buf) 2299 - { 2300 - int len; 2301 - 2302 - cxgbi_sock_get(csk); 2303 - if (csk->csk_family == AF_INET) 2304 - len = sprintf(buf, "%pI4", 2305 - &csk->daddr.sin_addr.s_addr); 2306 - else 2307 - len = sprintf(buf, "%pI6", 2308 - &csk->daddr6.sin6_addr); 2309 - 2310 - cxgbi_sock_put(csk); 2311 - 2312 - return len; 2313 - } 2314 - 2315 2287 int cxgbi_get_ep_param(struct iscsi_endpoint *ep, enum iscsi_param param, 2316 2288 char *buf) 2317 2289 {
-2
drivers/scsi/cxlflash/main.c
··· 44 44 struct afu *afu = cmd->parent; 45 45 struct cxlflash_cfg *cfg = afu->parent; 46 46 struct device *dev = &cfg->dev->dev; 47 - struct sisl_ioarcb *ioarcb; 48 47 struct sisl_ioasa *ioasa; 49 48 u32 resid; 50 49 51 50 if (unlikely(!cmd)) 52 51 return; 53 52 54 - ioarcb = &(cmd->rcb); 55 53 ioasa = &(cmd->sa); 56 54 57 55 if (ioasa->rc.flags & SISL_RC_FLAGS_UNDERRUN) {
+1
drivers/scsi/esas2r/esas2r_flash.c
··· 1197 1197 if (!esas2r_read_flash_block(a, a->nvram, FLS_OFFSET_NVR, 1198 1198 sizeof(struct esas2r_sas_nvram))) { 1199 1199 esas2r_hdebug("NVRAM read failed, using defaults"); 1200 + up(&a->nvram_semaphore); 1200 1201 return false; 1201 1202 } 1202 1203
+2 -1
drivers/scsi/fnic/fnic_scsi.c
··· 1024 1024 atomic64_inc(&fnic_stats->io_stats.io_completions); 1025 1025 1026 1026 1027 - io_duration_time = jiffies_to_msecs(jiffies) - jiffies_to_msecs(io_req->start_time); 1027 + io_duration_time = jiffies_to_msecs(jiffies) - 1028 + jiffies_to_msecs(start_time); 1028 1029 1029 1030 if(io_duration_time <= 10) 1030 1031 atomic64_inc(&fnic_stats->io_stats.io_btw_0_to_10_msec);
+1 -1
drivers/scsi/fnic/vnic_dev.c
··· 259 259 struct vnic_devcmd __iomem *devcmd = vdev->devcmd; 260 260 int delay; 261 261 u32 status; 262 - int dev_cmd_err[] = { 262 + static const int dev_cmd_err[] = { 263 263 /* convert from fw's version of error.h to host's version */ 264 264 0, /* ERR_SUCCESS */ 265 265 EINVAL, /* ERR_EINVAL */
+54 -11
drivers/scsi/hisi_sas/hisi_sas.h
··· 21 21 #include <linux/platform_device.h> 22 22 #include <linux/property.h> 23 23 #include <linux/regmap.h> 24 + #include <linux/timer.h> 24 25 #include <scsi/sas_ata.h> 25 26 #include <scsi/libsas.h> 26 27 ··· 85 84 #define HISI_SAS_PROT_MASK (HISI_SAS_DIF_PROT_MASK | HISI_SAS_DIX_PROT_MASK) 86 85 87 86 #define HISI_SAS_WAIT_PHYUP_TIMEOUT 20 87 + #define CLEAR_ITCT_TIMEOUT 20 88 88 89 89 struct hisi_hba; 90 90 ··· 169 167 enum sas_linkrate minimum_linkrate; 170 168 enum sas_linkrate maximum_linkrate; 171 169 int enable; 170 + atomic_t down_cnt; 172 171 }; 173 172 174 173 struct hisi_sas_port { ··· 299 296 void (*phy_set_linkrate)(struct hisi_hba *hisi_hba, int phy_no, 300 297 struct sas_phy_linkrates *linkrates); 301 298 enum sas_linkrate (*phy_get_max_linkrate)(void); 302 - void (*clear_itct)(struct hisi_hba *hisi_hba, 303 - struct hisi_sas_device *dev); 299 + int (*clear_itct)(struct hisi_hba *hisi_hba, 300 + struct hisi_sas_device *dev); 304 301 void (*free_device)(struct hisi_sas_device *sas_dev); 305 302 int (*get_wideport_bitmap)(struct hisi_hba *hisi_hba, int port_id); 306 303 void (*dereg_device)(struct hisi_hba *hisi_hba, ··· 322 319 323 320 const struct hisi_sas_debugfs_reg *debugfs_reg_array[DEBUGFS_REGS_NUM]; 324 321 const struct hisi_sas_debugfs_reg *debugfs_reg_port; 322 + }; 323 + 324 + #define HISI_SAS_MAX_DEBUGFS_DUMP (50) 325 + 326 + struct hisi_sas_debugfs_cq { 327 + struct hisi_sas_cq *cq; 328 + void *complete_hdr; 329 + }; 330 + 331 + struct hisi_sas_debugfs_dq { 332 + struct hisi_sas_dq *dq; 333 + struct hisi_sas_cmd_hdr *hdr; 334 + }; 335 + 336 + struct hisi_sas_debugfs_regs { 337 + struct hisi_hba *hisi_hba; 338 + u32 *data; 339 + }; 340 + 341 + struct hisi_sas_debugfs_port { 342 + struct hisi_sas_phy *phy; 343 + u32 *data; 344 + }; 345 + 346 + struct hisi_sas_debugfs_iost { 347 + struct hisi_sas_iost *iost; 348 + }; 349 + 350 + struct hisi_sas_debugfs_itct { 351 + struct hisi_sas_itct *itct; 352 + }; 353 + 354 + struct 
hisi_sas_debugfs_iost_cache { 355 + struct hisi_sas_iost_itct_cache *cache; 356 + }; 357 + 358 + struct hisi_sas_debugfs_itct_cache { 359 + struct hisi_sas_iost_itct_cache *cache; 325 360 }; 326 361 327 362 struct hisi_hba { ··· 443 402 444 403 /* debugfs memories */ 445 404 /* Put Global AXI and RAS Register into register array */ 446 - u32 *debugfs_regs[DEBUGFS_REGS_NUM]; 447 - u32 *debugfs_port_reg[HISI_SAS_MAX_PHYS]; 448 - void *debugfs_complete_hdr[HISI_SAS_MAX_QUEUES]; 449 - struct hisi_sas_cmd_hdr *debugfs_cmd_hdr[HISI_SAS_MAX_QUEUES]; 450 - struct hisi_sas_iost *debugfs_iost; 451 - struct hisi_sas_itct *debugfs_itct; 452 - u64 *debugfs_iost_cache; 453 - u64 *debugfs_itct_cache; 405 + struct hisi_sas_debugfs_regs debugfs_regs[HISI_SAS_MAX_DEBUGFS_DUMP][DEBUGFS_REGS_NUM]; 406 + struct hisi_sas_debugfs_port debugfs_port_reg[HISI_SAS_MAX_DEBUGFS_DUMP][HISI_SAS_MAX_PHYS]; 407 + struct hisi_sas_debugfs_cq debugfs_cq[HISI_SAS_MAX_DEBUGFS_DUMP][HISI_SAS_MAX_QUEUES]; 408 + struct hisi_sas_debugfs_dq debugfs_dq[HISI_SAS_MAX_DEBUGFS_DUMP][HISI_SAS_MAX_QUEUES]; 409 + struct hisi_sas_debugfs_iost debugfs_iost[HISI_SAS_MAX_DEBUGFS_DUMP]; 410 + struct hisi_sas_debugfs_itct debugfs_itct[HISI_SAS_MAX_DEBUGFS_DUMP]; 411 + struct hisi_sas_debugfs_iost_cache debugfs_iost_cache[HISI_SAS_MAX_DEBUGFS_DUMP]; 412 + struct hisi_sas_debugfs_itct_cache debugfs_itct_cache[HISI_SAS_MAX_DEBUGFS_DUMP]; 454 413 414 + u64 debugfs_timestamp[HISI_SAS_MAX_DEBUGFS_DUMP]; 415 + int debugfs_dump_index; 455 416 struct dentry *debugfs_dir; 456 417 struct dentry *debugfs_dump_dentry; 457 418 struct dentry *debugfs_bist_dentry; 458 - bool debugfs_snapshot; 459 419 }; 460 420 461 421 /* Generic HW DMA host memory structures */ ··· 598 556 extern struct scsi_transport_template *hisi_sas_stt; 599 557 600 558 extern bool hisi_sas_debugfs_enable; 559 + extern u32 hisi_sas_debugfs_dump_count; 601 560 extern struct dentry *hisi_sas_debugfs_dir; 602 561 603 562 extern void hisi_sas_stop_phys(struct hisi_hba 
*hisi_hba);
+250 -126
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 587 587 dev = hisi_hba->dev; 588 588 589 589 if (unlikely(test_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags))) { 590 - if (in_softirq()) 590 + /* 591 + * For IOs from upper layer, it may already disable preempt 592 + * in the IO path, if disable preempt again in down(), 593 + * function schedule() will report schedule_bug(), so check 594 + * preemptible() before goto down(). 595 + */ 596 + if (!preemptible()) 591 597 return -EINVAL; 592 598 593 599 down(&hisi_hba->sem); ··· 974 968 struct hisi_hba *hisi_hba = sas_ha->lldd_ha; 975 969 struct hisi_sas_phy *phy = sas_phy->lldd_phy; 976 970 struct asd_sas_port *sas_port = sas_phy->port; 977 - struct hisi_sas_port *port = to_hisi_sas_port(sas_port); 971 + struct hisi_sas_port *port; 978 972 unsigned long flags; 979 973 980 974 if (!sas_port) 981 975 return; 982 976 977 + port = to_hisi_sas_port(sas_port); 983 978 spin_lock_irqsave(&hisi_hba->lock, flags); 984 979 port->port_attached = 1; 985 980 port->id = phy->port_id; ··· 1052 1045 struct hisi_sas_device *sas_dev = device->lldd_dev; 1053 1046 struct hisi_hba *hisi_hba = dev_to_hisi_hba(device); 1054 1047 struct device *dev = hisi_hba->dev; 1048 + int ret = 0; 1055 1049 1056 1050 dev_info(dev, "dev[%d:%x] is gone\n", 1057 1051 sas_dev->device_id, sas_dev->dev_type); ··· 1064 1056 1065 1057 hisi_sas_dereg_device(hisi_hba, device); 1066 1058 1067 - hisi_hba->hw->clear_itct(hisi_hba, sas_dev); 1059 + ret = hisi_hba->hw->clear_itct(hisi_hba, sas_dev); 1068 1060 device->lldd_dev = NULL; 1069 1061 } 1070 1062 1071 1063 if (hisi_hba->hw->free_device) 1072 1064 hisi_hba->hw->free_device(sas_dev); 1073 - sas_dev->dev_type = SAS_PHY_UNUSED; 1065 + 1066 + /* Don't mark it as SAS_PHY_UNUSED if failed to clear ITCT */ 1067 + if (!ret) 1068 + sas_dev->dev_type = SAS_PHY_UNUSED; 1074 1069 sas_dev->sas_device = NULL; 1075 1070 up(&hisi_hba->sem); 1076 1071 } ··· 1413 1402 struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no]; 1414 1403 struct asd_sas_phy *sas_phy = &phy->sas_phy; 
1415 1404 struct asd_sas_port *sas_port = sas_phy->port; 1416 - bool do_port_check = !!(_sas_port != sas_port); 1405 + bool do_port_check = _sas_port != sas_port; 1417 1406 1418 1407 if (!sas_phy->phy->enabled) 1419 1408 continue; ··· 1574 1563 struct Scsi_Host *shost = hisi_hba->shost; 1575 1564 int rc; 1576 1565 1577 - if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct) 1566 + if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct) 1578 1567 queue_work(hisi_hba->wq, &hisi_hba->debugfs_work); 1579 1568 1580 1569 if (!hisi_hba->hw->soft_reset) ··· 2066 2055 2067 2056 /* Internal abort timed out */ 2068 2057 if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) { 2069 - if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct) 2058 + if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct) 2070 2059 queue_work(hisi_hba->wq, &hisi_hba->debugfs_work); 2071 2060 2072 2061 if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) { ··· 2687 2676 err_out_register_ha: 2688 2677 scsi_remove_host(shost); 2689 2678 err_out_ha: 2679 + hisi_sas_debugfs_exit(hisi_hba); 2690 2680 hisi_sas_free(hisi_hba); 2691 2681 scsi_host_put(shost); 2692 2682 return rc; ··· 2699 2687 static void hisi_sas_debugfs_snapshot_cq_reg(struct hisi_hba *hisi_hba) 2700 2688 { 2701 2689 int queue_entry_size = hisi_hba->hw->complete_hdr_size; 2690 + int dump_index = hisi_hba->debugfs_dump_index; 2702 2691 int i; 2703 2692 2704 2693 for (i = 0; i < hisi_hba->queue_count; i++) 2705 - memcpy(hisi_hba->debugfs_complete_hdr[i], 2694 + memcpy(hisi_hba->debugfs_cq[dump_index][i].complete_hdr, 2706 2695 hisi_hba->complete_hdr[i], 2707 2696 HISI_SAS_QUEUE_SLOTS * queue_entry_size); 2708 2697 } ··· 2711 2698 static void hisi_sas_debugfs_snapshot_dq_reg(struct hisi_hba *hisi_hba) 2712 2699 { 2713 2700 int queue_entry_size = sizeof(struct hisi_sas_cmd_hdr); 2701 + int dump_index = hisi_hba->debugfs_dump_index; 2714 2702 int i; 2715 2703 2716 2704 for (i = 0; i < hisi_hba->queue_count; i++) { 2717 - struct 
hisi_sas_cmd_hdr *debugfs_cmd_hdr, *cmd_hdr; 2705 + struct hisi_sas_cmd_hdr *debugfs_cmd_hdr, *cmd_hdr; 2718 2706 int j; 2719 2707 2720 - debugfs_cmd_hdr = hisi_hba->debugfs_cmd_hdr[i]; 2708 + debugfs_cmd_hdr = hisi_hba->debugfs_dq[dump_index][i].hdr; 2721 2709 cmd_hdr = hisi_hba->cmd_hdr[i]; 2722 2710 2723 2711 for (j = 0; j < HISI_SAS_QUEUE_SLOTS; j++) ··· 2729 2715 2730 2716 static void hisi_sas_debugfs_snapshot_port_reg(struct hisi_hba *hisi_hba) 2731 2717 { 2718 + int dump_index = hisi_hba->debugfs_dump_index; 2732 2719 const struct hisi_sas_debugfs_reg *port = 2733 2720 hisi_hba->hw->debugfs_reg_port; 2734 2721 int i, phy_cnt; ··· 2737 2722 u32 *databuf; 2738 2723 2739 2724 for (phy_cnt = 0; phy_cnt < hisi_hba->n_phy; phy_cnt++) { 2740 - databuf = (u32 *)hisi_hba->debugfs_port_reg[phy_cnt]; 2725 + databuf = hisi_hba->debugfs_port_reg[dump_index][phy_cnt].data; 2741 2726 for (i = 0; i < port->count; i++, databuf++) { 2742 2727 offset = port->base_off + 4 * i; 2743 2728 *databuf = port->read_port_reg(hisi_hba, phy_cnt, ··· 2748 2733 2749 2734 static void hisi_sas_debugfs_snapshot_global_reg(struct hisi_hba *hisi_hba) 2750 2735 { 2751 - u32 *databuf = hisi_hba->debugfs_regs[DEBUGFS_GLOBAL]; 2736 + int dump_index = hisi_hba->debugfs_dump_index; 2737 + u32 *databuf = hisi_hba->debugfs_regs[dump_index][DEBUGFS_GLOBAL].data; 2752 2738 const struct hisi_sas_hw *hw = hisi_hba->hw; 2753 2739 const struct hisi_sas_debugfs_reg *global = 2754 2740 hw->debugfs_reg_array[DEBUGFS_GLOBAL]; ··· 2761 2745 2762 2746 static void hisi_sas_debugfs_snapshot_axi_reg(struct hisi_hba *hisi_hba) 2763 2747 { 2764 - u32 *databuf = hisi_hba->debugfs_regs[DEBUGFS_AXI]; 2748 + int dump_index = hisi_hba->debugfs_dump_index; 2749 + u32 *databuf = hisi_hba->debugfs_regs[dump_index][DEBUGFS_AXI].data; 2765 2750 const struct hisi_sas_hw *hw = hisi_hba->hw; 2766 2751 const struct hisi_sas_debugfs_reg *axi = 2767 2752 hw->debugfs_reg_array[DEBUGFS_AXI]; ··· 2775 2758 2776 2759 static void 
hisi_sas_debugfs_snapshot_ras_reg(struct hisi_hba *hisi_hba) 2777 2760 { 2778 - u32 *databuf = hisi_hba->debugfs_regs[DEBUGFS_RAS]; 2761 + int dump_index = hisi_hba->debugfs_dump_index; 2762 + u32 *databuf = hisi_hba->debugfs_regs[dump_index][DEBUGFS_RAS].data; 2779 2763 const struct hisi_sas_hw *hw = hisi_hba->hw; 2780 2764 const struct hisi_sas_debugfs_reg *ras = 2781 2765 hw->debugfs_reg_array[DEBUGFS_RAS]; ··· 2789 2771 2790 2772 static void hisi_sas_debugfs_snapshot_itct_reg(struct hisi_hba *hisi_hba) 2791 2773 { 2792 - void *cachebuf = hisi_hba->debugfs_itct_cache; 2793 - void *databuf = hisi_hba->debugfs_itct; 2774 + int dump_index = hisi_hba->debugfs_dump_index; 2775 + void *cachebuf = hisi_hba->debugfs_itct_cache[dump_index].cache; 2776 + void *databuf = hisi_hba->debugfs_itct[dump_index].itct; 2794 2777 struct hisi_sas_itct *itct; 2795 2778 int i; 2796 2779 ··· 2808 2789 2809 2790 static void hisi_sas_debugfs_snapshot_iost_reg(struct hisi_hba *hisi_hba) 2810 2791 { 2792 + int dump_index = hisi_hba->debugfs_dump_index; 2811 2793 int max_command_entries = HISI_SAS_MAX_COMMANDS; 2812 - void *cachebuf = hisi_hba->debugfs_iost_cache; 2813 - void *databuf = hisi_hba->debugfs_iost; 2794 + void *cachebuf = hisi_hba->debugfs_iost_cache[dump_index].cache; 2795 + void *databuf = hisi_hba->debugfs_iost[dump_index].iost; 2814 2796 struct hisi_sas_iost *iost; 2815 2797 int i; 2816 2798 ··· 2862 2842 2863 2843 static int hisi_sas_debugfs_global_show(struct seq_file *s, void *p) 2864 2844 { 2865 - struct hisi_hba *hisi_hba = s->private; 2845 + struct hisi_sas_debugfs_regs *global = s->private; 2846 + struct hisi_hba *hisi_hba = global->hisi_hba; 2866 2847 const struct hisi_sas_hw *hw = hisi_hba->hw; 2867 2848 const void *reg_global = hw->debugfs_reg_array[DEBUGFS_GLOBAL]; 2868 2849 2869 - hisi_sas_debugfs_print_reg(hisi_hba->debugfs_regs[DEBUGFS_GLOBAL], 2850 + hisi_sas_debugfs_print_reg(global->data, 2870 2851 reg_global, s); 2871 2852 2872 2853 return 0; ··· 2889 2868 
2890 2869 static int hisi_sas_debugfs_axi_show(struct seq_file *s, void *p) 2891 2870 { 2892 - struct hisi_hba *hisi_hba = s->private; 2871 + struct hisi_sas_debugfs_regs *axi = s->private; 2872 + struct hisi_hba *hisi_hba = axi->hisi_hba; 2893 2873 const struct hisi_sas_hw *hw = hisi_hba->hw; 2894 2874 const void *reg_axi = hw->debugfs_reg_array[DEBUGFS_AXI]; 2895 2875 2896 - hisi_sas_debugfs_print_reg(hisi_hba->debugfs_regs[DEBUGFS_AXI], 2876 + hisi_sas_debugfs_print_reg(axi->data, 2897 2877 reg_axi, s); 2898 2878 2899 2879 return 0; ··· 2916 2894 2917 2895 static int hisi_sas_debugfs_ras_show(struct seq_file *s, void *p) 2918 2896 { 2919 - struct hisi_hba *hisi_hba = s->private; 2897 + struct hisi_sas_debugfs_regs *ras = s->private; 2898 + struct hisi_hba *hisi_hba = ras->hisi_hba; 2920 2899 const struct hisi_sas_hw *hw = hisi_hba->hw; 2921 2900 const void *reg_ras = hw->debugfs_reg_array[DEBUGFS_RAS]; 2922 2901 2923 - hisi_sas_debugfs_print_reg(hisi_hba->debugfs_regs[DEBUGFS_RAS], 2902 + hisi_sas_debugfs_print_reg(ras->data, 2924 2903 reg_ras, s); 2925 2904 2926 2905 return 0; ··· 2943 2920 2944 2921 static int hisi_sas_debugfs_port_show(struct seq_file *s, void *p) 2945 2922 { 2946 - struct hisi_sas_phy *phy = s->private; 2923 + struct hisi_sas_debugfs_port *port = s->private; 2924 + struct hisi_sas_phy *phy = port->phy; 2947 2925 struct hisi_hba *hisi_hba = phy->hisi_hba; 2948 2926 const struct hisi_sas_hw *hw = hisi_hba->hw; 2949 2927 const struct hisi_sas_debugfs_reg *reg_port = hw->debugfs_reg_port; 2950 - u32 *databuf = hisi_hba->debugfs_port_reg[phy->sas_phy.id]; 2951 2928 2952 - hisi_sas_debugfs_print_reg(databuf, reg_port, s); 2929 + hisi_sas_debugfs_print_reg(port->data, reg_port, s); 2953 2930 2954 2931 return 0; 2955 2932 } ··· 2998 2975 seq_puts(s, "\n"); 2999 2976 } 3000 2977 3001 - static void hisi_sas_cq_show_slot(struct seq_file *s, int slot, void *cq_ptr) 2978 + static void hisi_sas_cq_show_slot(struct seq_file *s, int slot, 2979 + struct 
hisi_sas_debugfs_cq *debugfs_cq) 3002 2980 { 3003 - struct hisi_sas_cq *cq = cq_ptr; 2981 + struct hisi_sas_cq *cq = debugfs_cq->cq; 3004 2982 struct hisi_hba *hisi_hba = cq->hisi_hba; 3005 - void *complete_queue = hisi_hba->debugfs_complete_hdr[cq->id]; 3006 - __le32 *complete_hdr = complete_queue + 3007 - (hisi_hba->hw->complete_hdr_size * slot); 2983 + __le32 *complete_hdr = debugfs_cq->complete_hdr + 2984 + (hisi_hba->hw->complete_hdr_size * slot); 3008 2985 3009 2986 hisi_sas_show_row_32(s, slot, 3010 2987 hisi_hba->hw->complete_hdr_size, ··· 3013 2990 3014 2991 static int hisi_sas_debugfs_cq_show(struct seq_file *s, void *p) 3015 2992 { 3016 - struct hisi_sas_cq *cq = s->private; 2993 + struct hisi_sas_debugfs_cq *debugfs_cq = s->private; 3017 2994 int slot; 3018 2995 3019 2996 for (slot = 0; slot < HISI_SAS_QUEUE_SLOTS; slot++) { 3020 - hisi_sas_cq_show_slot(s, slot, cq); 2997 + hisi_sas_cq_show_slot(s, slot, debugfs_cq); 3021 2998 } 3022 2999 return 0; 3023 3000 } ··· 3037 3014 3038 3015 static void hisi_sas_dq_show_slot(struct seq_file *s, int slot, void *dq_ptr) 3039 3016 { 3040 - struct hisi_sas_dq *dq = dq_ptr; 3041 - struct hisi_hba *hisi_hba = dq->hisi_hba; 3042 - void *cmd_queue = hisi_hba->debugfs_cmd_hdr[dq->id]; 3017 + struct hisi_sas_debugfs_dq *debugfs_dq = dq_ptr; 3018 + void *cmd_queue = debugfs_dq->hdr; 3043 3019 __le32 *cmd_hdr = cmd_queue + 3044 3020 sizeof(struct hisi_sas_cmd_hdr) * slot; 3045 3021 ··· 3070 3048 3071 3049 static int hisi_sas_debugfs_iost_show(struct seq_file *s, void *p) 3072 3050 { 3073 - struct hisi_hba *hisi_hba = s->private; 3074 - struct hisi_sas_iost *debugfs_iost = hisi_hba->debugfs_iost; 3051 + struct hisi_sas_debugfs_iost *debugfs_iost = s->private; 3052 + struct hisi_sas_iost *iost = debugfs_iost->iost; 3075 3053 int i, max_command_entries = HISI_SAS_MAX_COMMANDS; 3076 3054 3077 - for (i = 0; i < max_command_entries; i++, debugfs_iost++) { 3078 - __le64 *iost = &debugfs_iost->qw0; 3055 + for (i = 0; i < 
max_command_entries; i++, iost++) { 3056 + __le64 *data = &iost->qw0; 3079 3057 3080 - hisi_sas_show_row_64(s, i, sizeof(*debugfs_iost), iost); 3058 + hisi_sas_show_row_64(s, i, sizeof(*iost), data); 3081 3059 } 3082 3060 3083 3061 return 0; ··· 3098 3076 3099 3077 static int hisi_sas_debugfs_iost_cache_show(struct seq_file *s, void *p) 3100 3078 { 3101 - struct hisi_hba *hisi_hba = s->private; 3102 - struct hisi_sas_iost_itct_cache *iost_cache = 3103 - (struct hisi_sas_iost_itct_cache *)hisi_hba->debugfs_iost_cache; 3079 + struct hisi_sas_debugfs_iost_cache *debugfs_iost_cache = s->private; 3080 + struct hisi_sas_iost_itct_cache *iost_cache = debugfs_iost_cache->cache; 3104 3081 u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * 4; 3105 3082 int i, tab_idx; 3106 3083 __le64 *iost; ··· 3138 3117 static int hisi_sas_debugfs_itct_show(struct seq_file *s, void *p) 3139 3118 { 3140 3119 int i; 3141 - struct hisi_hba *hisi_hba = s->private; 3142 - struct hisi_sas_itct *debugfs_itct = hisi_hba->debugfs_itct; 3120 + struct hisi_sas_debugfs_itct *debugfs_itct = s->private; 3121 + struct hisi_sas_itct *itct = debugfs_itct->itct; 3143 3122 3144 - for (i = 0; i < HISI_SAS_MAX_ITCT_ENTRIES; i++, debugfs_itct++) { 3145 - __le64 *itct = &debugfs_itct->qw0; 3123 + for (i = 0; i < HISI_SAS_MAX_ITCT_ENTRIES; i++, itct++) { 3124 + __le64 *data = &itct->qw0; 3146 3125 3147 - hisi_sas_show_row_64(s, i, sizeof(*debugfs_itct), itct); 3126 + hisi_sas_show_row_64(s, i, sizeof(*itct), data); 3148 3127 } 3149 3128 3150 3129 return 0; ··· 3165 3144 3166 3145 static int hisi_sas_debugfs_itct_cache_show(struct seq_file *s, void *p) 3167 3146 { 3168 - struct hisi_hba *hisi_hba = s->private; 3169 - struct hisi_sas_iost_itct_cache *itct_cache = 3170 - (struct hisi_sas_iost_itct_cache *)hisi_hba->debugfs_itct_cache; 3147 + struct hisi_sas_debugfs_itct_cache *debugfs_itct_cache = s->private; 3148 + struct hisi_sas_iost_itct_cache *itct_cache = debugfs_itct_cache->cache; 3171 3149 u32 cache_size = 
HISI_SAS_IOST_ITCT_CACHE_DW_SZ * 4; 3172 3150 int i, tab_idx; 3173 3151 __le64 *itct; ··· 3204 3184 3205 3185 static void hisi_sas_debugfs_create_files(struct hisi_hba *hisi_hba) 3206 3186 { 3187 + u64 *debugfs_timestamp; 3188 + int dump_index = hisi_hba->debugfs_dump_index; 3207 3189 struct dentry *dump_dentry; 3208 3190 struct dentry *dentry; 3209 3191 char name[256]; ··· 3213 3191 int c; 3214 3192 int d; 3215 3193 3216 - /* Create dump dir inside device dir */ 3217 - dump_dentry = debugfs_create_dir("dump", hisi_hba->debugfs_dir); 3218 - hisi_hba->debugfs_dump_dentry = dump_dentry; 3194 + snprintf(name, 256, "%d", dump_index); 3219 3195 3220 - debugfs_create_file("global", 0400, dump_dentry, hisi_hba, 3221 - &hisi_sas_debugfs_global_fops); 3196 + dump_dentry = debugfs_create_dir(name, hisi_hba->debugfs_dump_dentry); 3197 + 3198 + debugfs_timestamp = &hisi_hba->debugfs_timestamp[dump_index]; 3199 + 3200 + debugfs_create_u64("timestamp", 0400, dump_dentry, 3201 + debugfs_timestamp); 3202 + 3203 + debugfs_create_file("global", 0400, dump_dentry, 3204 + &hisi_hba->debugfs_regs[dump_index][DEBUGFS_GLOBAL], 3205 + &hisi_sas_debugfs_global_fops); 3222 3206 3223 3207 /* Create port dir and files */ 3224 3208 dentry = debugfs_create_dir("port", dump_dentry); 3225 3209 for (p = 0; p < hisi_hba->n_phy; p++) { 3226 3210 snprintf(name, 256, "%d", p); 3227 3211 3228 - debugfs_create_file(name, 0400, dentry, &hisi_hba->phy[p], 3212 + debugfs_create_file(name, 0400, dentry, 3213 + &hisi_hba->debugfs_port_reg[dump_index][p], 3229 3214 &hisi_sas_debugfs_port_fops); 3230 3215 } 3231 3216 ··· 3241 3212 for (c = 0; c < hisi_hba->queue_count; c++) { 3242 3213 snprintf(name, 256, "%d", c); 3243 3214 3244 - debugfs_create_file(name, 0400, dentry, &hisi_hba->cq[c], 3215 + debugfs_create_file(name, 0400, dentry, 3216 + &hisi_hba->debugfs_cq[dump_index][c], 3245 3217 &hisi_sas_debugfs_cq_fops); 3246 3218 } 3247 3219 ··· 3251 3221 for (d = 0; d < hisi_hba->queue_count; d++) { 3252 3222 
snprintf(name, 256, "%d", d); 3253 3223 3254 - debugfs_create_file(name, 0400, dentry, &hisi_hba->dq[d], 3224 + debugfs_create_file(name, 0400, dentry, 3225 + &hisi_hba->debugfs_dq[dump_index][d], 3255 3226 &hisi_sas_debugfs_dq_fops); 3256 3227 } 3257 3228 3258 - debugfs_create_file("iost", 0400, dump_dentry, hisi_hba, 3229 + debugfs_create_file("iost", 0400, dump_dentry, 3230 + &hisi_hba->debugfs_iost[dump_index], 3259 3231 &hisi_sas_debugfs_iost_fops); 3260 3232 3261 - debugfs_create_file("iost_cache", 0400, dump_dentry, hisi_hba, 3233 + debugfs_create_file("iost_cache", 0400, dump_dentry, 3234 + &hisi_hba->debugfs_iost_cache[dump_index], 3262 3235 &hisi_sas_debugfs_iost_cache_fops); 3263 3236 3264 - debugfs_create_file("itct", 0400, dump_dentry, hisi_hba, 3237 + debugfs_create_file("itct", 0400, dump_dentry, 3238 + &hisi_hba->debugfs_itct[dump_index], 3265 3239 &hisi_sas_debugfs_itct_fops); 3266 3240 3267 - debugfs_create_file("itct_cache", 0400, dump_dentry, hisi_hba, 3241 + debugfs_create_file("itct_cache", 0400, dump_dentry, 3242 + &hisi_hba->debugfs_itct_cache[dump_index], 3268 3243 &hisi_sas_debugfs_itct_cache_fops); 3269 3244 3270 - debugfs_create_file("axi", 0400, dump_dentry, hisi_hba, 3245 + debugfs_create_file("axi", 0400, dump_dentry, 3246 + &hisi_hba->debugfs_regs[dump_index][DEBUGFS_AXI], 3271 3247 &hisi_sas_debugfs_axi_fops); 3272 3248 3273 - debugfs_create_file("ras", 0400, dump_dentry, hisi_hba, 3249 + debugfs_create_file("ras", 0400, dump_dentry, 3250 + &hisi_hba->debugfs_regs[dump_index][DEBUGFS_RAS], 3274 3251 &hisi_sas_debugfs_ras_fops); 3275 3252 3276 3253 return; ··· 3308 3271 struct hisi_hba *hisi_hba = file->f_inode->i_private; 3309 3272 char buf[8]; 3310 3273 3311 - /* A bit racy, but don't care too much since it's only debugfs */ 3312 - if (hisi_hba->debugfs_snapshot) 3274 + if (hisi_hba->debugfs_dump_index >= hisi_sas_debugfs_dump_count) 3313 3275 return -EFAULT; 3314 3276 3315 3277 if (count > 8) ··· 3575 3539 int value; 3576 3540 
char *name; 3577 3541 } hisi_sas_debugfs_loop_modes[] = { 3578 - { HISI_SAS_BIST_LOOPBACK_MODE_DIGITAL, "digial" }, 3542 + { HISI_SAS_BIST_LOOPBACK_MODE_DIGITAL, "digital" }, 3579 3543 { HISI_SAS_BIST_LOOPBACK_MODE_SERDES, "serdes" }, 3580 3544 { HISI_SAS_BIST_LOOPBACK_MODE_REMOTE, "remote" }, 3581 3545 }; ··· 3706 3670 .owner = THIS_MODULE, 3707 3671 }; 3708 3672 3673 + static ssize_t hisi_sas_debugfs_phy_down_cnt_write(struct file *filp, 3674 + const char __user *buf, 3675 + size_t count, loff_t *ppos) 3676 + { 3677 + struct seq_file *s = filp->private_data; 3678 + struct hisi_sas_phy *phy = s->private; 3679 + unsigned int set_val; 3680 + int res; 3681 + 3682 + res = kstrtouint_from_user(buf, count, 0, &set_val); 3683 + if (res) 3684 + return res; 3685 + 3686 + if (set_val > 0) 3687 + return -EINVAL; 3688 + 3689 + atomic_set(&phy->down_cnt, 0); 3690 + 3691 + return count; 3692 + } 3693 + 3694 + static int hisi_sas_debugfs_phy_down_cnt_show(struct seq_file *s, void *p) 3695 + { 3696 + struct hisi_sas_phy *phy = s->private; 3697 + 3698 + seq_printf(s, "%d\n", atomic_read(&phy->down_cnt)); 3699 + 3700 + return 0; 3701 + } 3702 + 3703 + static int hisi_sas_debugfs_phy_down_cnt_open(struct inode *inode, 3704 + struct file *filp) 3705 + { 3706 + return single_open(filp, hisi_sas_debugfs_phy_down_cnt_show, 3707 + inode->i_private); 3708 + } 3709 + 3710 + static const struct file_operations hisi_sas_debugfs_phy_down_cnt_ops = { 3711 + .open = hisi_sas_debugfs_phy_down_cnt_open, 3712 + .read = seq_read, 3713 + .write = hisi_sas_debugfs_phy_down_cnt_write, 3714 + .llseek = seq_lseek, 3715 + .release = single_release, 3716 + .owner = THIS_MODULE, 3717 + }; 3718 + 3709 3719 void hisi_sas_debugfs_work_handler(struct work_struct *work) 3710 3720 { 3711 3721 struct hisi_hba *hisi_hba = 3712 3722 container_of(work, struct hisi_hba, debugfs_work); 3723 + int debugfs_dump_index = hisi_hba->debugfs_dump_index; 3724 + struct device *dev = hisi_hba->dev; 3725 + u64 timestamp = 
local_clock(); 3713 3726 3714 - if (hisi_hba->debugfs_snapshot) 3727 + if (debugfs_dump_index >= hisi_sas_debugfs_dump_count) { 3728 + dev_warn(dev, "dump count exceeded!\n"); 3715 3729 return; 3716 - hisi_hba->debugfs_snapshot = true; 3730 + } 3731 + 3732 + do_div(timestamp, NSEC_PER_MSEC); 3733 + hisi_hba->debugfs_timestamp[debugfs_dump_index] = timestamp; 3717 3734 3718 3735 hisi_sas_debugfs_snapshot_regs(hisi_hba); 3736 + hisi_hba->debugfs_dump_index++; 3719 3737 } 3720 3738 EXPORT_SYMBOL_GPL(hisi_sas_debugfs_work_handler); 3721 3739 3722 - static void hisi_sas_debugfs_release(struct hisi_hba *hisi_hba) 3740 + static void hisi_sas_debugfs_release(struct hisi_hba *hisi_hba, int dump_index) 3723 3741 { 3724 3742 struct device *dev = hisi_hba->dev; 3725 3743 int i; 3726 3744 3727 - devm_kfree(dev, hisi_hba->debugfs_iost_cache); 3728 - devm_kfree(dev, hisi_hba->debugfs_itct_cache); 3729 - devm_kfree(dev, hisi_hba->debugfs_iost); 3745 + devm_kfree(dev, hisi_hba->debugfs_iost_cache[dump_index].cache); 3746 + devm_kfree(dev, hisi_hba->debugfs_itct_cache[dump_index].cache); 3747 + devm_kfree(dev, hisi_hba->debugfs_iost[dump_index].iost); 3748 + devm_kfree(dev, hisi_hba->debugfs_itct[dump_index].itct); 3730 3749 3731 3750 for (i = 0; i < hisi_hba->queue_count; i++) 3732 - devm_kfree(dev, hisi_hba->debugfs_cmd_hdr[i]); 3751 + devm_kfree(dev, hisi_hba->debugfs_dq[dump_index][i].hdr); 3733 3752 3734 3753 for (i = 0; i < hisi_hba->queue_count; i++) 3735 - devm_kfree(dev, hisi_hba->debugfs_complete_hdr[i]); 3754 + devm_kfree(dev, 3755 + hisi_hba->debugfs_cq[dump_index][i].complete_hdr); 3736 3756 3737 3757 for (i = 0; i < DEBUGFS_REGS_NUM; i++) 3738 - devm_kfree(dev, hisi_hba->debugfs_regs[i]); 3758 + devm_kfree(dev, hisi_hba->debugfs_regs[dump_index][i].data); 3739 3759 3740 3760 for (i = 0; i < hisi_hba->n_phy; i++) 3741 - devm_kfree(dev, hisi_hba->debugfs_port_reg[i]); 3761 + devm_kfree(dev, hisi_hba->debugfs_port_reg[dump_index][i].data); 3742 3762 } 3743 3763 3744 - 
static int hisi_sas_debugfs_alloc(struct hisi_hba *hisi_hba) 3764 + static int hisi_sas_debugfs_alloc(struct hisi_hba *hisi_hba, int dump_index) 3745 3765 { 3746 3766 const struct hisi_sas_hw *hw = hisi_hba->hw; 3747 3767 struct device *dev = hisi_hba->dev; 3748 - int p, c, d; 3768 + int p, c, d, r, i; 3749 3769 size_t sz; 3750 3770 3751 - hisi_hba->debugfs_dump_dentry = 3752 - debugfs_create_dir("dump", hisi_hba->debugfs_dir); 3771 + for (r = 0; r < DEBUGFS_REGS_NUM; r++) { 3772 + struct hisi_sas_debugfs_regs *regs = 3773 + &hisi_hba->debugfs_regs[dump_index][r]; 3753 3774 3754 - sz = hw->debugfs_reg_array[DEBUGFS_GLOBAL]->count * 4; 3755 - hisi_hba->debugfs_regs[DEBUGFS_GLOBAL] = 3756 - devm_kmalloc(dev, sz, GFP_KERNEL); 3757 - 3758 - if (!hisi_hba->debugfs_regs[DEBUGFS_GLOBAL]) 3759 - goto fail; 3775 + sz = hw->debugfs_reg_array[r]->count * 4; 3776 + regs->data = devm_kmalloc(dev, sz, GFP_KERNEL); 3777 + if (!regs->data) 3778 + goto fail; 3779 + regs->hisi_hba = hisi_hba; 3780 + } 3760 3781 3761 3782 sz = hw->debugfs_reg_port->count * 4; 3762 3783 for (p = 0; p < hisi_hba->n_phy; p++) { 3763 - hisi_hba->debugfs_port_reg[p] = 3764 - devm_kmalloc(dev, sz, GFP_KERNEL); 3784 + struct hisi_sas_debugfs_port *port = 3785 + &hisi_hba->debugfs_port_reg[dump_index][p]; 3765 3786 3766 - if (!hisi_hba->debugfs_port_reg[p]) 3787 + port->data = devm_kmalloc(dev, sz, GFP_KERNEL); 3788 + if (!port->data) 3767 3789 goto fail; 3790 + port->phy = &hisi_hba->phy[p]; 3768 3791 } 3769 - 3770 - sz = hw->debugfs_reg_array[DEBUGFS_AXI]->count * 4; 3771 - hisi_hba->debugfs_regs[DEBUGFS_AXI] = 3772 - devm_kmalloc(dev, sz, GFP_KERNEL); 3773 - 3774 - if (!hisi_hba->debugfs_regs[DEBUGFS_AXI]) 3775 - goto fail; 3776 - 3777 - sz = hw->debugfs_reg_array[DEBUGFS_RAS]->count * 4; 3778 - hisi_hba->debugfs_regs[DEBUGFS_RAS] = 3779 - devm_kmalloc(dev, sz, GFP_KERNEL); 3780 - 3781 - if (!hisi_hba->debugfs_regs[DEBUGFS_RAS]) 3782 - goto fail; 3783 3792 3784 3793 sz = hw->complete_hdr_size * 
HISI_SAS_QUEUE_SLOTS; 3785 3794 for (c = 0; c < hisi_hba->queue_count; c++) { 3786 - hisi_hba->debugfs_complete_hdr[c] = 3787 - devm_kmalloc(dev, sz, GFP_KERNEL); 3795 + struct hisi_sas_debugfs_cq *cq = 3796 + &hisi_hba->debugfs_cq[dump_index][c]; 3788 3797 3789 - if (!hisi_hba->debugfs_complete_hdr[c]) 3798 + cq->complete_hdr = devm_kmalloc(dev, sz, GFP_KERNEL); 3799 + if (!cq->complete_hdr) 3790 3800 goto fail; 3801 + cq->cq = &hisi_hba->cq[c]; 3791 3802 } 3792 3803 3793 3804 sz = sizeof(struct hisi_sas_cmd_hdr) * HISI_SAS_QUEUE_SLOTS; 3794 3805 for (d = 0; d < hisi_hba->queue_count; d++) { 3795 - hisi_hba->debugfs_cmd_hdr[d] = 3796 - devm_kmalloc(dev, sz, GFP_KERNEL); 3806 + struct hisi_sas_debugfs_dq *dq = 3807 + &hisi_hba->debugfs_dq[dump_index][d]; 3797 3808 3798 - if (!hisi_hba->debugfs_cmd_hdr[d]) 3809 + dq->hdr = devm_kmalloc(dev, sz, GFP_KERNEL); 3810 + if (!dq->hdr) 3799 3811 goto fail; 3812 + dq->dq = &hisi_hba->dq[d]; 3800 3813 } 3801 3814 3802 3815 sz = HISI_SAS_MAX_COMMANDS * sizeof(struct hisi_sas_iost); 3803 3816 3804 - hisi_hba->debugfs_iost = devm_kmalloc(dev, sz, GFP_KERNEL); 3805 - if (!hisi_hba->debugfs_iost) 3817 + hisi_hba->debugfs_iost[dump_index].iost = 3818 + devm_kmalloc(dev, sz, GFP_KERNEL); 3819 + if (!hisi_hba->debugfs_iost[dump_index].iost) 3806 3820 goto fail; 3807 3821 3808 3822 sz = HISI_SAS_IOST_ITCT_CACHE_NUM * 3809 3823 sizeof(struct hisi_sas_iost_itct_cache); 3810 3824 3811 - hisi_hba->debugfs_iost_cache = devm_kmalloc(dev, sz, GFP_KERNEL); 3812 - if (!hisi_hba->debugfs_iost_cache) 3825 + hisi_hba->debugfs_iost_cache[dump_index].cache = 3826 + devm_kmalloc(dev, sz, GFP_KERNEL); 3827 + if (!hisi_hba->debugfs_iost_cache[dump_index].cache) 3813 3828 goto fail; 3814 3829 3815 3830 sz = HISI_SAS_IOST_ITCT_CACHE_NUM * 3816 3831 sizeof(struct hisi_sas_iost_itct_cache); 3817 3832 3818 - hisi_hba->debugfs_itct_cache = devm_kmalloc(dev, sz, GFP_KERNEL); 3819 - if (!hisi_hba->debugfs_itct_cache) 3833 + 
hisi_hba->debugfs_itct_cache[dump_index].cache = 3834 + devm_kmalloc(dev, sz, GFP_KERNEL); 3835 + if (!hisi_hba->debugfs_itct_cache[dump_index].cache) 3820 3836 goto fail; 3821 3837 3822 3838 /* New memory allocation must be locate before itct */ 3823 3839 sz = HISI_SAS_MAX_ITCT_ENTRIES * sizeof(struct hisi_sas_itct); 3824 3840 3825 - hisi_hba->debugfs_itct = devm_kmalloc(dev, sz, GFP_KERNEL); 3826 - if (!hisi_hba->debugfs_itct) 3841 + hisi_hba->debugfs_itct[dump_index].itct = 3842 + devm_kmalloc(dev, sz, GFP_KERNEL); 3843 + if (!hisi_hba->debugfs_itct[dump_index].itct) 3827 3844 goto fail; 3828 3845 3829 3846 return 0; 3830 3847 fail: 3831 - hisi_sas_debugfs_release(hisi_hba); 3848 + for (i = 0; i < hisi_sas_debugfs_dump_count; i++) 3849 + hisi_sas_debugfs_release(hisi_hba, i); 3832 3850 return -ENOMEM; 3851 + } 3852 + 3853 + static void hisi_sas_debugfs_phy_down_cnt_init(struct hisi_hba *hisi_hba) 3854 + { 3855 + struct dentry *dir = debugfs_create_dir("phy_down_cnt", 3856 + hisi_hba->debugfs_dir); 3857 + char name[16]; 3858 + int phy_no; 3859 + 3860 + for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) { 3861 + snprintf(name, 16, "%d", phy_no); 3862 + debugfs_create_file(name, 0600, dir, 3863 + &hisi_hba->phy[phy_no], 3864 + &hisi_sas_debugfs_phy_down_cnt_ops); 3865 + } 3833 3866 } 3834 3867 3835 3868 static void hisi_sas_debugfs_bist_init(struct hisi_hba *hisi_hba) ··· 3932 3827 void hisi_sas_debugfs_init(struct hisi_hba *hisi_hba) 3933 3828 { 3934 3829 struct device *dev = hisi_hba->dev; 3830 + int i; 3935 3831 3936 3832 hisi_hba->debugfs_dir = debugfs_create_dir(dev_name(dev), 3937 3833 hisi_sas_debugfs_dir); ··· 3944 3838 /* create bist structures */ 3945 3839 hisi_sas_debugfs_bist_init(hisi_hba); 3946 3840 3947 - if (hisi_sas_debugfs_alloc(hisi_hba)) { 3948 - debugfs_remove_recursive(hisi_hba->debugfs_dir); 3949 - dev_dbg(dev, "failed to init debugfs!\n"); 3841 + hisi_hba->debugfs_dump_dentry = 3842 + debugfs_create_dir("dump", hisi_hba->debugfs_dir); 3843 
+ 3844 + hisi_sas_debugfs_phy_down_cnt_init(hisi_hba); 3845 + 3846 + for (i = 0; i < hisi_sas_debugfs_dump_count; i++) { 3847 + if (hisi_sas_debugfs_alloc(hisi_hba, i)) { 3848 + debugfs_remove_recursive(hisi_hba->debugfs_dir); 3849 + dev_dbg(dev, "failed to init debugfs!\n"); 3850 + break; 3851 + } 3950 3852 } 3951 3853 } 3952 3854 EXPORT_SYMBOL_GPL(hisi_sas_debugfs_init); ··· 3988 3874 module_param_named(debugfs_enable, hisi_sas_debugfs_enable, bool, 0444); 3989 3875 MODULE_PARM_DESC(hisi_sas_debugfs_enable, "Enable driver debugfs (default disabled)"); 3990 3876 3877 + u32 hisi_sas_debugfs_dump_count = 1; 3878 + EXPORT_SYMBOL_GPL(hisi_sas_debugfs_dump_count); 3879 + module_param_named(debugfs_dump_count, hisi_sas_debugfs_dump_count, uint, 0444); 3880 + MODULE_PARM_DESC(hisi_sas_debugfs_dump_count, "Number of debugfs dumps to allow"); 3881 + 3991 3882 static __init int hisi_sas_init(void) 3992 3883 { 3993 3884 hisi_sas_stt = sas_domain_attach_transport(&hisi_sas_transport_ops); 3994 3885 if (!hisi_sas_stt) 3995 3886 return -ENOMEM; 3996 3887 3997 - if (hisi_sas_debugfs_enable) 3888 + if (hisi_sas_debugfs_enable) { 3998 3889 hisi_sas_debugfs_dir = debugfs_create_dir("hisi_sas", NULL); 3890 + if (hisi_sas_debugfs_dump_count > HISI_SAS_MAX_DEBUGFS_DUMP) { 3891 + pr_info("hisi_sas: Limiting debugfs dump count\n"); 3892 + hisi_sas_debugfs_dump_count = HISI_SAS_MAX_DEBUGFS_DUMP; 3893 + } 3894 + } 3999 3895 4000 3896 return 0; 4001 3897 }
+4 -2
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
··· 531 531 (0xff00ULL << ITCT_HDR_REJ_OPEN_TL_OFF)); 532 532 } 533 533 534 - static void clear_itct_v1_hw(struct hisi_hba *hisi_hba, 535 - struct hisi_sas_device *sas_dev) 534 + static int clear_itct_v1_hw(struct hisi_hba *hisi_hba, 535 + struct hisi_sas_device *sas_dev) 536 536 { 537 537 u64 dev_id = sas_dev->device_id; 538 538 struct hisi_sas_itct *itct = &hisi_hba->itct[dev_id]; ··· 551 551 qw0 = le64_to_cpu(itct->qw0); 552 552 qw0 &= ~ITCT_HDR_VALID_MSK; 553 553 itct->qw0 = cpu_to_le64(qw0); 554 + 555 + return 0; 554 556 } 555 557 556 558 static int reset_hw_v1_hw(struct hisi_hba *hisi_hba)
+10 -3
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
··· 974 974 (0x1ULL << ITCT_HDR_RTOLT_OFF)); 975 975 } 976 976 977 - static void clear_itct_v2_hw(struct hisi_hba *hisi_hba, 978 - struct hisi_sas_device *sas_dev) 977 + static int clear_itct_v2_hw(struct hisi_hba *hisi_hba, 978 + struct hisi_sas_device *sas_dev) 979 979 { 980 980 DECLARE_COMPLETION_ONSTACK(completion); 981 981 u64 dev_id = sas_dev->device_id; 982 982 struct hisi_sas_itct *itct = &hisi_hba->itct[dev_id]; 983 983 u32 reg_val = hisi_sas_read32(hisi_hba, ENT_INT_SRC3); 984 + struct device *dev = hisi_hba->dev; 984 985 int i; 985 986 986 987 sas_dev->completion = &completion; ··· 991 990 hisi_sas_write32(hisi_hba, ENT_INT_SRC3, 992 991 ENT_INT_SRC3_ITC_INT_MSK); 993 992 993 + /* need to set register twice to clear ITCT for v2 hw */ 994 994 for (i = 0; i < 2; i++) { 995 995 reg_val = ITCT_CLR_EN_MSK | (dev_id & ITCT_DEV_MSK); 996 996 hisi_sas_write32(hisi_hba, ITCT_CLR, reg_val); 997 - wait_for_completion(sas_dev->completion); 997 + if (!wait_for_completion_timeout(sas_dev->completion, 998 + CLEAR_ITCT_TIMEOUT * HZ)) { 999 + dev_warn(dev, "failed to clear ITCT\n"); 1000 + return -ETIMEDOUT; 1001 + } 998 1002 999 1003 memset(itct, 0, sizeof(struct hisi_sas_itct)); 1000 1004 } 1005 + return 0; 1001 1006 } 1002 1007 1003 1008 static void free_device_v2_hw(struct hisi_sas_device *sas_dev)
+20 -10
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 795 795 (0x1ULL << ITCT_HDR_RTOLT_OFF)); 796 796 } 797 797 798 - static void clear_itct_v3_hw(struct hisi_hba *hisi_hba, 799 - struct hisi_sas_device *sas_dev) 798 + static int clear_itct_v3_hw(struct hisi_hba *hisi_hba, 799 + struct hisi_sas_device *sas_dev) 800 800 { 801 801 DECLARE_COMPLETION_ONSTACK(completion); 802 802 u64 dev_id = sas_dev->device_id; 803 803 struct hisi_sas_itct *itct = &hisi_hba->itct[dev_id]; 804 804 u32 reg_val = hisi_sas_read32(hisi_hba, ENT_INT_SRC3); 805 + struct device *dev = hisi_hba->dev; 805 806 806 807 sas_dev->completion = &completion; 807 808 ··· 815 814 reg_val = ITCT_CLR_EN_MSK | (dev_id & ITCT_DEV_MSK); 816 815 hisi_sas_write32(hisi_hba, ITCT_CLR, reg_val); 817 816 818 - wait_for_completion(sas_dev->completion); 817 + if (!wait_for_completion_timeout(sas_dev->completion, 818 + CLEAR_ITCT_TIMEOUT * HZ)) { 819 + dev_warn(dev, "failed to clear ITCT\n"); 820 + return -ETIMEDOUT; 821 + } 822 + 819 823 memset(itct, 0, sizeof(struct hisi_sas_itct)); 824 + return 0; 820 825 } 821 826 822 827 static void dereg_device_v3_hw(struct hisi_hba *hisi_hba, ··· 1548 1541 struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no]; 1549 1542 u32 phy_state, sl_ctrl, txid_auto; 1550 1543 struct device *dev = hisi_hba->dev; 1544 + 1545 + atomic_inc(&phy->down_cnt); 1551 1546 1552 1547 del_timer(&phy->timer); 1553 1548 hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 1); ··· 3031 3022 hisi_sas_phy_write32(hisi_hba, phy_id, 3032 3023 SAS_PHY_BIST_CTRL, reg_val); 3033 3024 3034 - mdelay(100); 3035 - reg_val |= (CFG_RX_BIST_EN_MSK | CFG_TX_BIST_EN_MSK); 3036 - hisi_sas_phy_write32(hisi_hba, phy_id, 3037 - SAS_PHY_BIST_CTRL, reg_val); 3038 - 3039 3025 /* set the bist init value */ 3040 3026 hisi_sas_phy_write32(hisi_hba, phy_id, 3041 3027 SAS_PHY_BIST_CODE, ··· 3038 3034 hisi_sas_phy_write32(hisi_hba, phy_id, 3039 3035 SAS_PHY_BIST_CODE1, 3040 3036 SAS_PHY_BIST_CODE1_INIT); 3037 + 3038 + mdelay(100); 3039 + reg_val |= (CFG_RX_BIST_EN_MSK | 
CFG_TX_BIST_EN_MSK); 3040 + hisi_sas_phy_write32(hisi_hba, phy_id, 3041 + SAS_PHY_BIST_CTRL, reg_val); 3041 3042 3042 3043 /* clear error bit */ 3043 3044 mdelay(100); ··· 3268 3259 err_out_register_ha: 3269 3260 scsi_remove_host(shost); 3270 3261 err_out_ha: 3262 + hisi_sas_debugfs_exit(hisi_hba); 3271 3263 scsi_host_put(shost); 3272 3264 err_out_regions: 3273 3265 pci_release_regions(pdev); ··· 3302 3292 struct hisi_hba *hisi_hba = sha->lldd_ha; 3303 3293 struct Scsi_Host *shost = sha->core.shost; 3304 3294 3305 - hisi_sas_debugfs_exit(hisi_hba); 3306 - 3307 3295 if (timer_pending(&hisi_hba->timer)) 3308 3296 del_timer(&hisi_hba->timer); 3309 3297 ··· 3313 3305 pci_release_regions(pdev); 3314 3306 pci_disable_device(pdev); 3315 3307 hisi_sas_free(hisi_hba); 3308 + hisi_sas_debugfs_exit(hisi_hba); 3316 3309 scsi_host_put(shost); 3317 3310 } 3318 3311 ··· 3431 3422 if (rc) { 3432 3423 scsi_remove_host(shost); 3433 3424 pci_disable_device(pdev); 3425 + return rc; 3434 3426 } 3435 3427 hisi_hba->hw->phys_init(hisi_hba); 3436 3428 sas_resume_ha(sha);
+18 -1
drivers/scsi/hosts.c
··· 38 38 #include <scsi/scsi_device.h> 39 39 #include <scsi/scsi_host.h> 40 40 #include <scsi/scsi_transport.h> 41 + #include <scsi/scsi_cmnd.h> 41 42 42 43 #include "scsi_priv.h" 43 44 #include "scsi_logging.h" ··· 555 554 } 556 555 EXPORT_SYMBOL(scsi_host_get); 557 556 557 + static bool scsi_host_check_in_flight(struct request *rq, void *data, 558 + bool reserved) 559 + { 560 + int *count = data; 561 + struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq); 562 + 563 + if (test_bit(SCMD_STATE_INFLIGHT, &cmd->state)) 564 + (*count)++; 565 + 566 + return true; 567 + } 568 + 558 569 /** 559 570 * scsi_host_busy - Return the host busy counter 560 571 * @shost: Pointer to Scsi_Host to inc. 561 572 **/ 562 573 int scsi_host_busy(struct Scsi_Host *shost) 563 574 { 564 - return atomic_read(&shost->host_busy); 575 + int cnt = 0; 576 + 577 + blk_mq_tagset_busy_iter(&shost->tag_set, 578 + scsi_host_check_in_flight, &cnt); 579 + return cnt; 565 580 } 566 581 EXPORT_SYMBOL(scsi_host_busy); 567 582
+1 -1
drivers/scsi/ips.c
··· 498 498 int i; 499 499 char *key; 500 500 char *value; 501 - IPS_OPTION options[] = { 501 + static const IPS_OPTION options[] = { 502 502 {"noi2o", &ips_force_i2o, 0}, 503 503 {"nommap", &ips_force_memio, 0}, 504 504 {"ioctlsize", &ips_ioctlsize, IPS_IOCTL_SIZE},
+1 -1
drivers/scsi/isci/port_config.c
··· 147 147 /** 148 148 * 149 149 * @controller: This is the controller object that contains the port agent 150 - * @port_agent: This is the port configruation agent for the controller. 150 + * @port_agent: This is the port configuration agent for the controller. 151 151 * 152 152 * This routine will validate the port configuration is correct for the SCU 153 153 * hardware. The SCU hardware allows for port configurations as follows. LP0
+1 -1
drivers/scsi/isci/remote_device.c
··· 1504 1504 * This function builds the isci_remote_device when a libsas dev_found message 1505 1505 * is received. 1506 1506 * @isci_host: This parameter specifies the isci host object. 1507 - * @port: This parameter specifies the isci_port conected to this device. 1507 + * @port: This parameter specifies the isci_port connected to this device. 1508 1508 * 1509 1509 * pointer to new isci_remote_device. 1510 1510 */
+8
drivers/scsi/iscsi_tcp.c
··· 369 369 { 370 370 struct iscsi_conn *conn = task->conn; 371 371 unsigned int noreclaim_flag; 372 + struct iscsi_tcp_conn *tcp_conn = conn->dd_data; 373 + struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data; 372 374 int rc = 0; 375 + 376 + if (!tcp_sw_conn->sock) { 377 + iscsi_conn_printk(KERN_ERR, conn, 378 + "Transport not bound to socket!\n"); 379 + return -EINVAL; 380 + } 373 381 374 382 noreclaim_flag = memalloc_noreclaim_save(); 375 383
+37 -3
drivers/scsi/lpfc/lpfc.h
··· 605 605 spinlock_t lock; /* lock for expedite pool */ 606 606 }; 607 607 608 + enum ras_state { 609 + INACTIVE, 610 + REG_INPROGRESS, 611 + ACTIVE 612 + }; 613 + 608 614 struct lpfc_ras_fwlog { 609 615 uint8_t *fwlog_buff; 610 616 uint32_t fw_buffcount; /* Buffer size posted to FW */ ··· 627 621 bool ras_enabled; /* Ras Enabled for the function */ 628 622 #define LPFC_RAS_DISABLE_LOGGING 0x00 629 623 #define LPFC_RAS_ENABLE_LOGGING 0x01 630 - bool ras_active; /* RAS logging running state */ 624 + enum ras_state state; /* RAS logging running state */ 631 625 }; 632 626 633 627 struct lpfc_hba { ··· 731 725 #define HBA_FCOE_MODE 0x4 /* HBA function in FCoE Mode */ 732 726 #define HBA_SP_QUEUE_EVT 0x8 /* Slow-path qevt posted to worker thread*/ 733 727 #define HBA_POST_RECEIVE_BUFFER 0x10 /* Rcv buffers need to be posted */ 728 + #define HBA_PERSISTENT_TOPO 0x20 /* Persistent topology support in hba */ 734 729 #define ELS_XRI_ABORT_EVENT 0x40 735 730 #define ASYNC_EVENT 0x80 736 731 #define LINK_DISABLED 0x100 /* Link disabled by user */ ··· 837 830 uint32_t cfg_fcp_mq_threshold; 838 831 uint32_t cfg_hdw_queue; 839 832 uint32_t cfg_irq_chann; 833 + uint32_t cfg_irq_numa; 840 834 uint32_t cfg_suppress_rsp; 841 835 uint32_t cfg_nvme_oas; 842 836 uint32_t cfg_nvme_embed_cmd; ··· 880 872 uint32_t cfg_aer_support; 881 873 uint32_t cfg_sriov_nr_virtfn; 882 874 uint32_t cfg_request_firmware_upgrade; 883 - uint32_t cfg_iocb_cnt; 884 875 uint32_t cfg_suppress_link_up; 885 876 uint32_t cfg_rrq_xri_bitmap_sz; 886 877 uint32_t cfg_delay_discovery; ··· 997 990 struct dma_pool *lpfc_drb_pool; /* data receive buffer pool */ 998 991 struct dma_pool *lpfc_nvmet_drb_pool; /* data receive buffer pool */ 999 992 struct dma_pool *lpfc_hbq_pool; /* SLI3 hbq buffer pool */ 1000 - struct dma_pool *txrdy_payload_pool; 1001 993 struct dma_pool *lpfc_cmd_rsp_buf_pool; 1002 994 struct lpfc_dma_pool lpfc_mbuf_safety_pool; 1003 995 ··· 1061 1055 #ifdef LPFC_HDWQ_LOCK_STAT 1062 1056 struct 
dentry *debug_lockstat; 1063 1057 #endif 1058 + struct dentry *debug_ras_log; 1064 1059 atomic_t nvmeio_trc_cnt; 1065 1060 uint32_t nvmeio_trc_size; 1066 1061 uint32_t nvmeio_trc_output_idx; ··· 1216 1209 uint64_t ktime_seg10_min; 1217 1210 uint64_t ktime_seg10_max; 1218 1211 #endif 1212 + 1213 + struct hlist_node cpuhp; /* used for cpuhp per hba callback */ 1214 + struct timer_list cpuhp_poll_timer; 1215 + struct list_head poll_list; /* slowpath eq polling list */ 1216 + #define LPFC_POLL_HB 1 /* slowpath heartbeat */ 1217 + #define LPFC_POLL_FASTPATH 0 /* called from fastpath */ 1218 + #define LPFC_POLL_SLOWPATH 1 /* called from slowpath */ 1219 1219 }; 1220 1220 1221 1221 static inline struct Scsi_Host * ··· 1312 1298 return &phba->sli.sli3_ring[LPFC_ELS_RING]; 1313 1299 } 1314 1300 1301 + /** 1302 + * lpfc_next_online_numa_cpu - Finds next online CPU on NUMA node 1303 + * @numa_mask: Pointer to phba's numa_mask member. 1304 + * @start: starting cpu index 1305 + * 1306 + * Note: If no valid cpu found, then nr_cpu_ids is returned. 1307 + * 1308 + **/ 1309 + static inline unsigned int 1310 + lpfc_next_online_numa_cpu(const struct cpumask *numa_mask, unsigned int start) 1311 + { 1312 + unsigned int cpu_it; 1313 + 1314 + for_each_cpu_wrap(cpu_it, numa_mask, start) { 1315 + if (cpu_online(cpu_it)) 1316 + break; 1317 + } 1318 + 1319 + return cpu_it; 1320 + } 1315 1321 /** 1316 1322 * lpfc_sli4_mod_hba_eq_delay - update EQ delay 1317 1323 * @phba: Pointer to HBA context object.
+244 -54
drivers/scsi/lpfc/lpfc_attr.c
··· 176 176 int i; 177 177 int len = 0; 178 178 char tmp[LPFC_MAX_NVME_INFO_TMP_LEN] = {0}; 179 - unsigned long iflags = 0; 180 179 181 180 if (!(vport->cfg_enable_fc4_type & LPFC_ENABLE_NVME)) { 182 181 len = scnprintf(buf, PAGE_SIZE, "NVME Disabled\n"); ··· 346 347 if (strlcat(buf, "\nNVME Initiator Enabled\n", PAGE_SIZE) >= PAGE_SIZE) 347 348 goto buffer_done; 348 349 349 - rcu_read_lock(); 350 350 scnprintf(tmp, sizeof(tmp), 351 351 "XRI Dist lpfc%d Total %d IO %d ELS %d\n", 352 352 phba->brd_no, ··· 353 355 phba->sli4_hba.io_xri_max, 354 356 lpfc_sli4_get_els_iocb_cnt(phba)); 355 357 if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE) 356 - goto rcu_unlock_buf_done; 358 + goto buffer_done; 357 359 358 360 /* Port state is only one of two values for now. */ 359 361 if (localport->port_id) ··· 369 371 wwn_to_u64(vport->fc_nodename.u.wwn), 370 372 localport->port_id, statep); 371 373 if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE) 372 - goto rcu_unlock_buf_done; 374 + goto buffer_done; 375 + 376 + spin_lock_irq(shost->host_lock); 373 377 374 378 list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { 375 379 nrport = NULL; 376 - spin_lock_irqsave(&vport->phba->hbalock, iflags); 380 + spin_lock(&vport->phba->hbalock); 377 381 rport = lpfc_ndlp_get_nrport(ndlp); 378 382 if (rport) 379 383 nrport = rport->remoteport; 380 - spin_unlock_irqrestore(&vport->phba->hbalock, iflags); 384 + spin_unlock(&vport->phba->hbalock); 381 385 if (!nrport) 382 386 continue; 383 387 ··· 398 398 399 399 /* Tab in to show lport ownership. 
*/ 400 400 if (strlcat(buf, "NVME RPORT ", PAGE_SIZE) >= PAGE_SIZE) 401 - goto rcu_unlock_buf_done; 401 + goto unlock_buf_done; 402 402 if (phba->brd_no >= 10) { 403 403 if (strlcat(buf, " ", PAGE_SIZE) >= PAGE_SIZE) 404 - goto rcu_unlock_buf_done; 404 + goto unlock_buf_done; 405 405 } 406 406 407 407 scnprintf(tmp, sizeof(tmp), "WWPN x%llx ", 408 408 nrport->port_name); 409 409 if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE) 410 - goto rcu_unlock_buf_done; 410 + goto unlock_buf_done; 411 411 412 412 scnprintf(tmp, sizeof(tmp), "WWNN x%llx ", 413 413 nrport->node_name); 414 414 if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE) 415 - goto rcu_unlock_buf_done; 415 + goto unlock_buf_done; 416 416 417 417 scnprintf(tmp, sizeof(tmp), "DID x%06x ", 418 418 nrport->port_id); 419 419 if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE) 420 - goto rcu_unlock_buf_done; 420 + goto unlock_buf_done; 421 421 422 422 /* An NVME rport can have multiple roles. */ 423 423 if (nrport->port_role & FC_PORT_ROLE_NVME_INITIATOR) { 424 424 if (strlcat(buf, "INITIATOR ", PAGE_SIZE) >= PAGE_SIZE) 425 - goto rcu_unlock_buf_done; 425 + goto unlock_buf_done; 426 426 } 427 427 if (nrport->port_role & FC_PORT_ROLE_NVME_TARGET) { 428 428 if (strlcat(buf, "TARGET ", PAGE_SIZE) >= PAGE_SIZE) 429 - goto rcu_unlock_buf_done; 429 + goto unlock_buf_done; 430 430 } 431 431 if (nrport->port_role & FC_PORT_ROLE_NVME_DISCOVERY) { 432 432 if (strlcat(buf, "DISCSRVC ", PAGE_SIZE) >= PAGE_SIZE) 433 - goto rcu_unlock_buf_done; 433 + goto unlock_buf_done; 434 434 } 435 435 if (nrport->port_role & ~(FC_PORT_ROLE_NVME_INITIATOR | 436 436 FC_PORT_ROLE_NVME_TARGET | ··· 438 438 scnprintf(tmp, sizeof(tmp), "UNKNOWN ROLE x%x", 439 439 nrport->port_role); 440 440 if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE) 441 - goto rcu_unlock_buf_done; 441 + goto unlock_buf_done; 442 442 } 443 443 444 444 scnprintf(tmp, sizeof(tmp), "%s\n", statep); 445 445 if (strlcat(buf, tmp, PAGE_SIZE) >= PAGE_SIZE) 446 - goto rcu_unlock_buf_done; 446 
+ goto unlock_buf_done; 447 447 } 448 - rcu_read_unlock(); 448 + spin_unlock_irq(shost->host_lock); 449 449 450 450 if (!lport) 451 451 goto buffer_done; ··· 505 505 atomic_read(&lport->cmpl_fcp_err)); 506 506 strlcat(buf, tmp, PAGE_SIZE); 507 507 508 - /* RCU is already unlocked. */ 508 + /* host_lock is already unlocked. */ 509 509 goto buffer_done; 510 510 511 - rcu_unlock_buf_done: 512 - rcu_read_unlock(); 511 + unlock_buf_done: 512 + spin_unlock_irq(shost->host_lock); 513 513 514 514 buffer_done: 515 515 len = strnlen(buf, PAGE_SIZE); ··· 1475 1475 int i; 1476 1476 1477 1477 msleep(100); 1478 - lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr, 1479 - &portstat_reg.word0); 1478 + if (lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr, 1479 + &portstat_reg.word0)) 1480 + return -EIO; 1480 1481 1481 1482 /* verify if privileged for the request operation */ 1482 1483 if (!bf_get(lpfc_sliport_status_rn, &portstat_reg) && ··· 1487 1486 /* wait for the SLI port firmware ready after firmware reset */ 1488 1487 for (i = 0; i < LPFC_FW_RESET_MAXIMUM_WAIT_10MS_CNT; i++) { 1489 1488 msleep(10); 1490 - lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr, 1491 - &portstat_reg.word0); 1489 + if (lpfc_readl(phba->sli4_hba.u.if_type2.STATUSregaddr, 1490 + &portstat_reg.word0)) 1491 + continue; 1492 1492 if (!bf_get(lpfc_sliport_status_err, &portstat_reg)) 1493 1493 continue; 1494 1494 if (!bf_get(lpfc_sliport_status_rn, &portstat_reg)) ··· 1644 1642 { 1645 1643 LPFC_MBOXQ_t *mbox = NULL; 1646 1644 unsigned long val = 0; 1647 - char *pval = 0; 1645 + char *pval = NULL; 1648 1646 int rc = 0; 1649 1647 1650 1648 if (!strncmp("enable", buff_out, ··· 3535 3533 LPFC_ATTR_R(suppress_link_up, LPFC_INITIALIZE_LINK, LPFC_INITIALIZE_LINK, 3536 3534 LPFC_DELAY_INIT_LINK_INDEFINITELY, 3537 3535 "Suppress Link Up at initialization"); 3536 + 3537 + static ssize_t 3538 + lpfc_pls_show(struct device *dev, struct device_attribute *attr, char *buf) 3539 + { 3540 + struct Scsi_Host *shost = 
class_to_shost(dev); 3541 + struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba; 3542 + 3543 + return scnprintf(buf, PAGE_SIZE, "%d\n", 3544 + phba->sli4_hba.pc_sli4_params.pls); 3545 + } 3546 + static DEVICE_ATTR(pls, 0444, 3547 + lpfc_pls_show, NULL); 3548 + 3549 + static ssize_t 3550 + lpfc_pt_show(struct device *dev, struct device_attribute *attr, char *buf) 3551 + { 3552 + struct Scsi_Host *shost = class_to_shost(dev); 3553 + struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba; 3554 + 3555 + return scnprintf(buf, PAGE_SIZE, "%d\n", 3556 + (phba->hba_flag & HBA_PERSISTENT_TOPO) ? 1 : 0); 3557 + } 3558 + static DEVICE_ATTR(pt, 0444, 3559 + lpfc_pt_show, NULL); 3560 + 3538 3561 /* 3539 3562 # lpfc_cnt: Number of IOCBs allocated for ELS, CT, and ABTS 3540 3563 # 1 - (1024) ··· 3606 3579 3607 3580 static DEVICE_ATTR(txcmplq_hw, S_IRUGO, 3608 3581 lpfc_txcmplq_hw_show, NULL); 3609 - 3610 - LPFC_ATTR_R(iocb_cnt, 2, 1, 5, 3611 - "Number of IOCBs alloc for ELS, CT, and ABTS: 1k to 5k IOCBs"); 3612 3582 3613 3583 /* 3614 3584 # lpfc_nodev_tmo: If set, it will hold all I/O errors on devices that disappear ··· 4120 4096 val); 4121 4097 return -EINVAL; 4122 4098 } 4123 - if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC || 4099 + /* 4100 + * The 'topology' is not a configurable parameter if : 4101 + * - persistent topology enabled 4102 + * - G7 adapters 4103 + * - G6 with no private loop support 4104 + */ 4105 + 4106 + if (((phba->hba_flag & HBA_PERSISTENT_TOPO) || 4107 + (!phba->sli4_hba.pc_sli4_params.pls && 4108 + phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC) || 4124 4109 phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) && 4125 4110 val == 4) { 4126 4111 lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, ··· 5331 5298 len += scnprintf(buf + len, PAGE_SIZE - len, 5332 5299 "CPU %02d not present\n", 5333 5300 phba->sli4_hba.curr_disp_cpu); 5334 - else if (cpup->irq == LPFC_VECTOR_MAP_EMPTY) { 5301 + else if (cpup->eq == 
LPFC_VECTOR_MAP_EMPTY) { 5335 5302 if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY) 5336 5303 len += scnprintf( 5337 5304 buf + len, PAGE_SIZE - len, ··· 5344 5311 else 5345 5312 len += scnprintf( 5346 5313 buf + len, PAGE_SIZE - len, 5347 - "CPU %02d EQ %04d hdwq %04d " 5314 + "CPU %02d EQ None hdwq %04d " 5348 5315 "physid %d coreid %d ht %d ua %d\n", 5349 5316 phba->sli4_hba.curr_disp_cpu, 5350 - cpup->eq, cpup->hdwq, cpup->phys_id, 5317 + cpup->hdwq, cpup->phys_id, 5351 5318 cpup->core_id, 5352 5319 (cpup->flag & LPFC_CPU_MAP_HYPER), 5353 5320 (cpup->flag & LPFC_CPU_MAP_UNASSIGN)); ··· 5362 5329 cpup->core_id, 5363 5330 (cpup->flag & LPFC_CPU_MAP_HYPER), 5364 5331 (cpup->flag & LPFC_CPU_MAP_UNASSIGN), 5365 - cpup->irq); 5332 + lpfc_get_irq(cpup->eq)); 5366 5333 else 5367 5334 len += scnprintf( 5368 5335 buf + len, PAGE_SIZE - len, ··· 5373 5340 cpup->core_id, 5374 5341 (cpup->flag & LPFC_CPU_MAP_HYPER), 5375 5342 (cpup->flag & LPFC_CPU_MAP_UNASSIGN), 5376 - cpup->irq); 5343 + lpfc_get_irq(cpup->eq)); 5377 5344 } 5378 5345 5379 5346 phba->sli4_hba.curr_disp_cpu++; ··· 5744 5711 * the driver will advertise it supports to the SCSI layer. 5745 5712 * 5746 5713 * 0 = Set nr_hw_queues by the number of CPUs or HW queues. 5747 - * 1,128 = Manually specify the maximum nr_hw_queue value to be set, 5714 + * 1,256 = Manually specify nr_hw_queue value to be advertised, 5748 5715 * 5749 5716 * Value range is [0,256]. Default value is 8. 5750 5717 */ ··· 5762 5729 * A hardware IO queue maps (qidx) to a specific driver CQ/WQ. 5763 5730 * 5764 5731 * 0 = Configure the number of hdw queues to the number of active CPUs. 5765 - * 1,128 = Manually specify how many hdw queues to use. 5732 + * 1,256 = Manually specify how many hdw queues to use. 5766 5733 * 5767 - * Value range is [0,128]. Default value is 0. 5734 + * Value range is [0,256]. Default value is 0. 
5768 5735 */ 5769 5736 LPFC_ATTR_R(hdw_queue, 5770 5737 LPFC_HBA_HDWQ_DEF, 5771 5738 LPFC_HBA_HDWQ_MIN, LPFC_HBA_HDWQ_MAX, 5772 5739 "Set the number of I/O Hardware Queues"); 5740 + 5741 + static inline void 5742 + lpfc_assign_default_irq_numa(struct lpfc_hba *phba) 5743 + { 5744 + #if IS_ENABLED(CONFIG_X86) 5745 + /* If AMD architecture, then default is LPFC_IRQ_CHANN_NUMA */ 5746 + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) 5747 + phba->cfg_irq_numa = 1; 5748 + else 5749 + phba->cfg_irq_numa = 0; 5750 + #else 5751 + phba->cfg_irq_numa = 0; 5752 + #endif 5753 + } 5773 5754 5774 5755 /* 5775 5756 * lpfc_irq_chann: Set the number of IRQ vectors that are available ··· 5791 5744 * of EQ / MSI-X vectors the driver will create. This should never be 5792 5745 * more than the number of Hardware Queues 5793 5746 * 5794 - * 0 = Configure number of IRQ Channels to the number of active CPUs. 5795 - * 1,128 = Manually specify how many IRQ Channels to use. 5747 + * 0 = Configure number of IRQ Channels to: 5748 + * if AMD architecture, number of CPUs on HBA's NUMA node 5749 + * otherwise, number of active CPUs. 5750 + * [1,256] = Manually specify how many IRQ Channels to use. 5796 5751 * 5797 - * Value range is [0,128]. Default value is 0. 5752 + * Value range is [0,256]. Default value is [0]. 5798 5753 */ 5799 - LPFC_ATTR_R(irq_chann, 5800 - LPFC_HBA_HDWQ_DEF, 5801 - LPFC_HBA_HDWQ_MIN, LPFC_HBA_HDWQ_MAX, 5802 - "Set the number of I/O IRQ Channels"); 5754 + static uint lpfc_irq_chann = LPFC_IRQ_CHANN_DEF; 5755 + module_param(lpfc_irq_chann, uint, 0444); 5756 + MODULE_PARM_DESC(lpfc_irq_chann, "Set number of interrupt vectors to allocate"); 5757 + 5758 + /* lpfc_irq_chann_init - Set the hba irq_chann initial value 5759 + * @phba: lpfc_hba pointer. 5760 + * @val: contains the initial value 5761 + * 5762 + * Description: 5763 + * Validates the initial value is within range and assigns it to the 5764 + * adapter. 
If not in range, an error message is posted and the 5765 + * default value is assigned. 5766 + * 5767 + * Returns: 5768 + * zero if value is in range and is set 5769 + * -EINVAL if value was out of range 5770 + **/ 5771 + static int 5772 + lpfc_irq_chann_init(struct lpfc_hba *phba, uint32_t val) 5773 + { 5774 + const struct cpumask *numa_mask; 5775 + 5776 + if (phba->cfg_use_msi != 2) { 5777 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 5778 + "8532 use_msi = %u ignoring cfg_irq_numa\n", 5779 + phba->cfg_use_msi); 5780 + phba->cfg_irq_numa = 0; 5781 + phba->cfg_irq_chann = LPFC_IRQ_CHANN_MIN; 5782 + return 0; 5783 + } 5784 + 5785 + /* Check if default setting was passed */ 5786 + if (val == LPFC_IRQ_CHANN_DEF) 5787 + lpfc_assign_default_irq_numa(phba); 5788 + 5789 + if (phba->cfg_irq_numa) { 5790 + numa_mask = &phba->sli4_hba.numa_mask; 5791 + 5792 + if (cpumask_empty(numa_mask)) { 5793 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 5794 + "8533 Could not identify NUMA node, " 5795 + "ignoring cfg_irq_numa\n"); 5796 + phba->cfg_irq_numa = 0; 5797 + phba->cfg_irq_chann = LPFC_IRQ_CHANN_MIN; 5798 + } else { 5799 + phba->cfg_irq_chann = cpumask_weight(numa_mask); 5800 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 5801 + "8543 lpfc_irq_chann set to %u " 5802 + "(numa)\n", phba->cfg_irq_chann); 5803 + } 5804 + } else { 5805 + if (val > LPFC_IRQ_CHANN_MAX) { 5806 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 5807 + "8545 lpfc_irq_chann attribute cannot " 5808 + "be set to %u, allowed range is " 5809 + "[%u,%u]\n", 5810 + val, 5811 + LPFC_IRQ_CHANN_MIN, 5812 + LPFC_IRQ_CHANN_MAX); 5813 + phba->cfg_irq_chann = LPFC_IRQ_CHANN_MIN; 5814 + return -EINVAL; 5815 + } 5816 + phba->cfg_irq_chann = val; 5817 + } 5818 + 5819 + return 0; 5820 + } 5821 + 5822 + /** 5823 + * lpfc_irq_chann_show - Display value of irq_chann 5824 + * @dev: class converted to a Scsi_host structure. 5825 + * @attr: device attribute, not used. 
5826 + * @buf: on return contains a string with the list sizes 5827 + * 5828 + * Returns: size of formatted string. 5829 + **/ 5830 + static ssize_t 5831 + lpfc_irq_chann_show(struct device *dev, struct device_attribute *attr, 5832 + char *buf) 5833 + { 5834 + struct Scsi_Host *shost = class_to_shost(dev); 5835 + struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata; 5836 + struct lpfc_hba *phba = vport->phba; 5837 + 5838 + return scnprintf(buf, PAGE_SIZE, "%u\n", phba->cfg_irq_chann); 5839 + } 5840 + 5841 + static DEVICE_ATTR_RO(lpfc_irq_chann); 5803 5842 5804 5843 /* 5805 5844 # lpfc_enable_hba_reset: Allow or prevent HBA resets to the hardware. ··· 6066 5933 * [1-4] = Multiple of 1/4th Mb of host memory for FW logging 6067 5934 * Value range [0..4]. Default value is 0 6068 5935 */ 6069 - LPFC_ATTR_RW(ras_fwlog_buffsize, 0, 0, 4, "Host memory for FW logging"); 5936 + LPFC_ATTR(ras_fwlog_buffsize, 0, 0, 4, "Host memory for FW logging"); 5937 + lpfc_param_show(ras_fwlog_buffsize); 5938 + 5939 + static ssize_t 5940 + lpfc_ras_fwlog_buffsize_set(struct lpfc_hba *phba, uint val) 5941 + { 5942 + int ret = 0; 5943 + enum ras_state state; 5944 + 5945 + if (!lpfc_rangecheck(val, 0, 4)) 5946 + return -EINVAL; 5947 + 5948 + if (phba->cfg_ras_fwlog_buffsize == val) 5949 + return 0; 5950 + 5951 + if (phba->cfg_ras_fwlog_func != PCI_FUNC(phba->pcidev->devfn)) 5952 + return -EINVAL; 5953 + 5954 + spin_lock_irq(&phba->hbalock); 5955 + state = phba->ras_fwlog.state; 5956 + spin_unlock_irq(&phba->hbalock); 5957 + 5958 + if (state == REG_INPROGRESS) { 5959 + lpfc_printf_log(phba, KERN_ERR, LOG_SLI, "6147 RAS Logging " 5960 + "registration is in progress\n"); 5961 + return -EBUSY; 5962 + } 5963 + 5964 + /* For disable logging: stop the logs and free the DMA. 5965 + * For ras_fwlog_buffsize size change we still need to free and 5966 + * reallocate the DMA in lpfc_sli4_ras_fwlog_init. 
5967 + */ 5968 + phba->cfg_ras_fwlog_buffsize = val; 5969 + if (state == ACTIVE) { 5970 + lpfc_ras_stop_fwlog(phba); 5971 + lpfc_sli4_ras_dma_free(phba); 5972 + } 5973 + 5974 + lpfc_sli4_ras_init(phba); 5975 + if (phba->ras_fwlog.ras_enabled) 5976 + ret = lpfc_sli4_ras_fwlog_init(phba, phba->cfg_ras_fwlog_level, 5977 + LPFC_RAS_ENABLE_LOGGING); 5978 + return ret; 5979 + } 5980 + 5981 + lpfc_param_store(ras_fwlog_buffsize); 5982 + static DEVICE_ATTR_RW(lpfc_ras_fwlog_buffsize); 6070 5983 6071 5984 /* 6072 5985 * lpfc_ras_fwlog_level: Firmware logging verbosity level ··· 6250 6071 &dev_attr_lpfc_sriov_nr_virtfn, 6251 6072 &dev_attr_lpfc_req_fw_upgrade, 6252 6073 &dev_attr_lpfc_suppress_link_up, 6253 - &dev_attr_lpfc_iocb_cnt, 6254 6074 &dev_attr_iocb_hw, 6075 + &dev_attr_pls, 6076 + &dev_attr_pt, 6255 6077 &dev_attr_txq_hw, 6256 6078 &dev_attr_txcmplq_hw, 6257 6079 &dev_attr_lpfc_fips_level, ··· 7265 7085 static void 7266 7086 lpfc_get_hba_function_mode(struct lpfc_hba *phba) 7267 7087 { 7268 - /* If it's a SkyHawk FCoE adapter */ 7269 - if (phba->pcidev->device == PCI_DEVICE_ID_SKYHAWK) 7088 + /* If the adapter supports FCoE mode */ 7089 + switch (phba->pcidev->device) { 7090 + case PCI_DEVICE_ID_SKYHAWK: 7091 + case PCI_DEVICE_ID_SKYHAWK_VF: 7092 + case PCI_DEVICE_ID_LANCER_FCOE: 7093 + case PCI_DEVICE_ID_LANCER_FCOE_VF: 7094 + case PCI_DEVICE_ID_ZEPHYR_DCSP: 7095 + case PCI_DEVICE_ID_HORNET: 7096 + case PCI_DEVICE_ID_TIGERSHARK: 7097 + case PCI_DEVICE_ID_TOMCAT: 7270 7098 phba->hba_flag |= HBA_FCOE_MODE; 7271 - else 7099 + break; 7100 + default: 7101 + /* for others, clear the flag */ 7272 7102 phba->hba_flag &= ~HBA_FCOE_MODE; 7103 + } 7273 7104 } 7274 7105 7275 7106 /** ··· 7290 7099 void 7291 7100 lpfc_get_cfgparam(struct lpfc_hba *phba) 7292 7101 { 7102 + lpfc_hba_log_verbose_init(phba, lpfc_log_verbose); 7293 7103 lpfc_fcp_io_sched_init(phba, lpfc_fcp_io_sched); 7294 7104 lpfc_ns_query_init(phba, lpfc_ns_query); 7295 7105 lpfc_fcp2_no_tgt_reset_init(phba, 
lpfc_fcp2_no_tgt_reset); ··· 7397 7205 phba->cfg_soft_wwpn = 0L; 7398 7206 lpfc_sg_seg_cnt_init(phba, lpfc_sg_seg_cnt); 7399 7207 lpfc_hba_queue_depth_init(phba, lpfc_hba_queue_depth); 7400 - lpfc_hba_log_verbose_init(phba, lpfc_log_verbose); 7401 7208 lpfc_aer_support_init(phba, lpfc_aer_support); 7402 7209 lpfc_sriov_nr_virtfn_init(phba, lpfc_sriov_nr_virtfn); 7403 7210 lpfc_request_firmware_upgrade_init(phba, lpfc_req_fw_upgrade); 7404 7211 lpfc_suppress_link_up_init(phba, lpfc_suppress_link_up); 7405 - lpfc_iocb_cnt_init(phba, lpfc_iocb_cnt); 7406 7212 lpfc_delay_discovery_init(phba, lpfc_delay_discovery); 7407 7213 lpfc_sli_mode_init(phba, lpfc_sli_mode); 7408 7214 phba->cfg_enable_dss = 1; ··· 7446 7256 } 7447 7257 7448 7258 if (!phba->cfg_nvmet_mrq) 7449 - phba->cfg_nvmet_mrq = phba->cfg_irq_chann; 7259 + phba->cfg_nvmet_mrq = phba->cfg_hdw_queue; 7450 7260 7451 7261 /* Adjust lpfc_nvmet_mrq to avoid running out of WQE slots */ 7452 - if (phba->cfg_nvmet_mrq > phba->cfg_irq_chann) { 7453 - phba->cfg_nvmet_mrq = phba->cfg_irq_chann; 7262 + if (phba->cfg_nvmet_mrq > phba->cfg_hdw_queue) { 7263 + phba->cfg_nvmet_mrq = phba->cfg_hdw_queue; 7454 7264 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC, 7455 7265 "6018 Adjust lpfc_nvmet_mrq to %d\n", 7456 7266 phba->cfg_nvmet_mrq);
+14 -4
drivers/scsi/lpfc/lpfc_bsg.c
··· 5435 5435 bsg_reply->reply_data.vendor_reply.vendor_rsp; 5436 5436 5437 5437 /* Current logging state */ 5438 - if (ras_fwlog->ras_active == true) 5438 + spin_lock_irq(&phba->hbalock); 5439 + if (ras_fwlog->state == ACTIVE) 5439 5440 ras_reply->state = LPFC_RASLOG_STATE_RUNNING; 5440 5441 else 5441 5442 ras_reply->state = LPFC_RASLOG_STATE_STOPPED; 5443 + spin_unlock_irq(&phba->hbalock); 5442 5444 5443 5445 ras_reply->log_level = phba->ras_fwlog.fw_loglevel; 5444 5446 ras_reply->log_buff_sz = phba->cfg_ras_fwlog_buffsize; ··· 5497 5495 5498 5496 if (action == LPFC_RASACTION_STOP_LOGGING) { 5499 5497 /* Check if already disabled */ 5500 - if (ras_fwlog->ras_active == false) { 5498 + spin_lock_irq(&phba->hbalock); 5499 + if (ras_fwlog->state != ACTIVE) { 5500 + spin_unlock_irq(&phba->hbalock); 5501 5501 rc = -ESRCH; 5502 5502 goto ras_job_error; 5503 5503 } 5504 + spin_unlock_irq(&phba->hbalock); 5504 5505 5505 5506 /* Disable logging */ 5506 5507 lpfc_ras_stop_fwlog(phba); ··· 5514 5509 * FW-logging with new log-level. Return status 5515 5510 * "Logging already Running" to caller. 5516 5511 **/ 5517 - if (ras_fwlog->ras_active) 5512 + spin_lock_irq(&phba->hbalock); 5513 + if (ras_fwlog->state != INACTIVE) 5518 5514 action_status = -EINPROGRESS; 5515 + spin_unlock_irq(&phba->hbalock); 5519 5516 5520 5517 /* Enable logging */ 5521 5518 rc = lpfc_sli4_ras_fwlog_init(phba, log_level, ··· 5633 5626 goto ras_job_error; 5634 5627 5635 5628 /* Logging to be stopped before reading */ 5636 - if (ras_fwlog->ras_active == true) { 5629 + spin_lock_irq(&phba->hbalock); 5630 + if (ras_fwlog->state == ACTIVE) { 5631 + spin_unlock_irq(&phba->hbalock); 5637 5632 rc = -EINPROGRESS; 5638 5633 goto ras_job_error; 5639 5634 } 5635 + spin_unlock_irq(&phba->hbalock); 5640 5636 5641 5637 if (job->request_len < 5642 5638 sizeof(struct fc_bsg_request) +
+7
drivers/scsi/lpfc/lpfc_crtn.h
··· 215 215 irqreturn_t lpfc_sli4_intr_handler(int, void *); 216 216 irqreturn_t lpfc_sli4_hba_intr_handler(int, void *); 217 217 218 + void lpfc_sli4_cleanup_poll_list(struct lpfc_hba *phba); 219 + int lpfc_sli4_poll_eq(struct lpfc_queue *q, uint8_t path); 220 + void lpfc_sli4_poll_hbtimer(struct timer_list *t); 221 + void lpfc_sli4_start_polling(struct lpfc_queue *q); 222 + void lpfc_sli4_stop_polling(struct lpfc_queue *q); 223 + 218 224 void lpfc_read_rev(struct lpfc_hba *, LPFC_MBOXQ_t *); 219 225 void lpfc_sli4_swap_str(struct lpfc_hba *, LPFC_MBOXQ_t *); 220 226 void lpfc_config_ring(struct lpfc_hba *, int, LPFC_MBOXQ_t *); ··· 592 586 void lpfc_nvme_cmd_template(void); 593 587 void lpfc_nvmet_cmd_template(void); 594 588 void lpfc_nvme_cancel_iocb(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn); 589 + void lpfc_nvme_prep_abort_wqe(struct lpfc_iocbq *pwqeq, u16 xritag, u8 opt); 595 590 extern int lpfc_enable_nvmet_cnt; 596 591 extern unsigned long long lpfc_enable_nvmet[]; 597 592 extern int lpfc_no_hba_reset_cnt;
+25 -3
drivers/scsi/lpfc/lpfc_ct.c
··· 763 763 cpu_to_be16(SLI_CT_RESPONSE_FS_ACC)) { 764 764 lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 765 765 "0208 NameServer Rsp Data: x%x x%x " 766 - "sz x%x\n", 766 + "x%x x%x sz x%x\n", 767 767 vport->fc_flag, 768 768 CTreq->un.gid.Fc4Type, 769 + vport->num_disc_nodes, 770 + vport->gidft_inp, 769 771 irsp->un.genreq64.bdl.bdeSize); 770 772 771 773 lpfc_ns_rsp(vport, ··· 963 961 if (CTrsp->CommandResponse.bits.CmdRsp == 964 962 cpu_to_be16(SLI_CT_RESPONSE_FS_ACC)) { 965 963 lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 966 - "4105 NameServer Rsp Data: x%x x%x\n", 964 + "4105 NameServer Rsp Data: x%x x%x " 965 + "x%x x%x sz x%x\n", 967 966 vport->fc_flag, 968 - CTreq->un.gid.Fc4Type); 967 + CTreq->un.gid.Fc4Type, 968 + vport->num_disc_nodes, 969 + vport->gidft_inp, 970 + irsp->un.genreq64.bdl.bdeSize); 969 971 970 972 lpfc_ns_rsp(vport, 971 973 outp, ··· 1031 1025 } 1032 1026 vport->gidft_inp--; 1033 1027 } 1028 + 1029 + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 1030 + "6450 GID_PT cmpl inp %d disc %d\n", 1031 + vport->gidft_inp, vport->num_disc_nodes); 1032 + 1034 1033 /* Link up / RSCN discovery */ 1035 1034 if ((vport->num_disc_nodes == 0) && 1036 1035 (vport->gidft_inp == 0)) { ··· 1170 1159 /* Link up / RSCN discovery */ 1171 1160 if (vport->num_disc_nodes) 1172 1161 vport->num_disc_nodes--; 1162 + 1163 + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 1164 + "6451 GFF_ID cmpl inp %d disc %d\n", 1165 + vport->gidft_inp, vport->num_disc_nodes); 1166 + 1173 1167 if (vport->num_disc_nodes == 0) { 1174 1168 /* 1175 1169 * The driver has cycled through all Nports in the RSCN payload. ··· 1884 1868 if (irsp->ulpStatus == IOSTAT_LOCAL_REJECT) { 1885 1869 switch ((irsp->un.ulpWord[4] & IOERR_PARAM_MASK)) { 1886 1870 case IOERR_SLI_ABORTED: 1871 + case IOERR_SLI_DOWN: 1872 + /* Driver aborted this IO. No retry as error 1873 + * is likely Offline->Online or some adapter 1874 + * error. Recovery will try again. 
1875 + */ 1876 + break; 1887 1877 case IOERR_ABORT_IN_PROGRESS: 1888 1878 case IOERR_SEQUENCE_TIMEOUT: 1889 1879 case IOERR_ILLEGAL_FRAME:
+117 -1
drivers/scsi/lpfc/lpfc_debugfs.c
··· 31 31 #include <linux/pci.h> 32 32 #include <linux/spinlock.h> 33 33 #include <linux/ctype.h> 34 + #include <linux/vmalloc.h> 34 35 35 36 #include <scsi/scsi.h> 36 37 #include <scsi/scsi_device.h> ··· 2078 2077 return nbytes; 2079 2078 } 2080 2079 #endif 2080 + 2081 + static int lpfc_debugfs_ras_log_data(struct lpfc_hba *phba, 2082 + char *buffer, int size) 2083 + { 2084 + int copied = 0; 2085 + struct lpfc_dmabuf *dmabuf, *next; 2086 + 2087 + spin_lock_irq(&phba->hbalock); 2088 + if (phba->ras_fwlog.state != ACTIVE) { 2089 + spin_unlock_irq(&phba->hbalock); 2090 + return -EINVAL; 2091 + } 2092 + spin_unlock_irq(&phba->hbalock); 2093 + 2094 + list_for_each_entry_safe(dmabuf, next, 2095 + &phba->ras_fwlog.fwlog_buff_list, list) { 2096 + memcpy(buffer + copied, dmabuf->virt, LPFC_RAS_MAX_ENTRY_SIZE); 2097 + copied += LPFC_RAS_MAX_ENTRY_SIZE; 2098 + if (size > copied) 2099 + break; 2100 + } 2101 + return copied; 2102 + } 2103 + 2104 + static int 2105 + lpfc_debugfs_ras_log_release(struct inode *inode, struct file *file) 2106 + { 2107 + struct lpfc_debug *debug = file->private_data; 2108 + 2109 + vfree(debug->buffer); 2110 + kfree(debug); 2111 + 2112 + return 0; 2113 + } 2114 + 2115 + /** 2116 + * lpfc_debugfs_ras_log_open - Open the RAS log debugfs buffer 2117 + * @inode: The inode pointer that contains a vport pointer. 2118 + * @file: The file pointer to attach the log output. 2119 + * 2120 + * Description: 2121 + * This routine is the entry point for the debugfs open file operation. It gets 2122 + * the vport from the i_private field in @inode, allocates the necessary buffer 2123 + * for the log, fills the buffer from the in-memory log for this vport, and then 2124 + * returns a pointer to that log in the private_data field in @file. 2125 + * 2126 + * Returns: 2127 + * This function returns zero if successful. On error it will return a negative 2128 + * error value. 
2129 + **/ 2130 + static int 2131 + lpfc_debugfs_ras_log_open(struct inode *inode, struct file *file) 2132 + { 2133 + struct lpfc_hba *phba = inode->i_private; 2134 + struct lpfc_debug *debug; 2135 + int size; 2136 + int rc = -ENOMEM; 2137 + 2138 + spin_lock_irq(&phba->hbalock); 2139 + if (phba->ras_fwlog.state != ACTIVE) { 2140 + spin_unlock_irq(&phba->hbalock); 2141 + rc = -EINVAL; 2142 + goto out; 2143 + } 2144 + spin_unlock_irq(&phba->hbalock); 2145 + debug = kmalloc(sizeof(*debug), GFP_KERNEL); 2146 + if (!debug) 2147 + goto out; 2148 + 2149 + size = LPFC_RAS_MIN_BUFF_POST_SIZE * phba->cfg_ras_fwlog_buffsize; 2150 + debug->buffer = vmalloc(size); 2151 + if (!debug->buffer) 2152 + goto free_debug; 2153 + 2154 + debug->len = lpfc_debugfs_ras_log_data(phba, debug->buffer, size); 2155 + if (debug->len < 0) { 2156 + rc = -EINVAL; 2157 + goto free_buffer; 2158 + } 2159 + file->private_data = debug; 2160 + 2161 + return 0; 2162 + 2163 + free_buffer: 2164 + vfree(debug->buffer); 2165 + free_debug: 2166 + kfree(debug); 2167 + out: 2168 + return rc; 2169 + } 2081 2170 2082 2171 /** 2083 2172 * lpfc_debugfs_dumpHBASlim_open - Open the Dump HBA SLIM debugfs buffer ··· 5377 5286 }; 5378 5287 #endif 5379 5288 5289 + #undef lpfc_debugfs_ras_log 5290 + static const struct file_operations lpfc_debugfs_ras_log = { 5291 + .owner = THIS_MODULE, 5292 + .open = lpfc_debugfs_ras_log_open, 5293 + .llseek = lpfc_debugfs_lseek, 5294 + .read = lpfc_debugfs_read, 5295 + .release = lpfc_debugfs_ras_log_release, 5296 + }; 5297 + #endif 5298 + 5380 5299 #undef lpfc_debugfs_op_dumpHBASlim 5381 5300 static const struct file_operations lpfc_debugfs_op_dumpHBASlim = { 5382 5301 .owner = THIS_MODULE, ··· 5558 5457 .release = lpfc_idiag_cmd_release, 5559 5458 }; 5560 5459 5561 - #endif 5562 5460 5563 5461 /* lpfc_idiag_mbxacc_dump_bsg_mbox - idiag debugfs dump bsg mailbox command 5564 5462 * @phba: Pointer to HBA context object. 
··· 5804 5704 if (!phba->debug_multixri_pools) { 5805 5705 lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 5806 5706 "0527 Cannot create debugfs multixripools\n"); 5707 + goto debug_failed; 5708 + } 5709 + 5710 + /* RAS log */ 5711 + snprintf(name, sizeof(name), "ras_log"); 5712 + phba->debug_ras_log = 5713 + debugfs_create_file(name, 0644, 5714 + phba->hba_debugfs_root, 5715 + phba, &lpfc_debugfs_ras_log); 5716 + if (!phba->debug_ras_log) { 5717 + lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 5718 + "6148 Cannot create debugfs" 5719 + " ras_log\n"); 5807 5720 goto debug_failed; 5808 5721 } 5809 5722 ··· 6229 6116 6230 6117 debugfs_remove(phba->debug_hbqinfo); /* hbqinfo */ 6231 6118 phba->debug_hbqinfo = NULL; 6119 + 6120 + debugfs_remove(phba->debug_ras_log); 6121 + phba->debug_ras_log = NULL; 6232 6122 6233 6123 #ifdef LPFC_HDWQ_LOCK_STAT 6234 6124 debugfs_remove(phba->debug_lockstat); /* lockstat */
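The new `ras_log` debugfs file above uses a snapshot-on-open pattern: `lpfc_debugfs_ras_log_open()` vmalloc's a buffer, fills it from the in-memory log, and stashes it in `private_data`; reads are served from that stable copy and `lpfc_debugfs_ras_log_release()` frees it. A userspace sketch of the same pattern, with malloc standing in for vmalloc and illustrative names:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of snapshot-on-open: copy the live log into a private
 * buffer at open time so reads see a consistent image.  Struct and
 * function names are illustrative, not the lpfc debugfs types. */
struct log_snapshot {
	char *buffer;
	size_t len;
};

static struct log_snapshot *snapshot_open(const char *live, size_t live_len)
{
	struct log_snapshot *s = malloc(sizeof(*s));

	if (!s)
		return NULL;
	s->buffer = malloc(live_len);
	if (!s->buffer) {
		free(s);
		return NULL;
	}
	memcpy(s->buffer, live, live_len);	/* fill from the live log */
	s->len = live_len;
	return s;
}

static void snapshot_release(struct log_snapshot *s)
{
	free(s->buffer);	/* mirrors vfree() in ..._ras_log_release */
	free(s);
}
```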
+45 -12
drivers/scsi/lpfc/lpfc_els.c
··· 2236 2236 struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 2237 2237 IOCB_t *irsp; 2238 2238 struct lpfc_nodelist *ndlp; 2239 + char *mode; 2239 2240 2240 2241 /* we pass cmdiocb to state machine which needs rspiocb as well */ 2241 2242 cmdiocb->context_un.rsp_iocb = rspiocb; ··· 2274 2273 goto out; 2275 2274 } 2276 2275 2276 + /* If we don't send GFT_ID to Fabric, a PRLI error 2277 + * could be expected. 2278 + */ 2279 + if ((vport->fc_flag & FC_FABRIC) || 2280 + (vport->cfg_enable_fc4_type != LPFC_ENABLE_BOTH)) 2281 + mode = KERN_ERR; 2282 + else 2283 + mode = KERN_INFO; 2284 + 2277 2285 /* PRLI failed */ 2278 - lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS, 2286 + lpfc_printf_vlog(vport, mode, LOG_ELS, 2279 2287 "2754 PRLI failure DID:%06X Status:x%x/x%x, " 2280 2288 "data: x%x\n", 2281 2289 ndlp->nlp_DID, irsp->ulpStatus, ··· 4301 4291 4302 4292 irsp = &rspiocb->iocb; 4303 4293 4294 + if (!vport) { 4295 + lpfc_printf_log(phba, KERN_ERR, LOG_ELS, 4296 + "3177 ELS response failed\n"); 4297 + goto out; 4298 + } 4304 4299 if (cmdiocb->context_un.mbox) 4305 4300 mbox = cmdiocb->context_un.mbox; 4306 4301 ··· 4445 4430 mempool_free(mbox, phba->mbox_mem_pool); 4446 4431 } 4447 4432 out: 4448 - if (ndlp && NLP_CHK_NODE_ACT(ndlp)) { 4433 + if (ndlp && NLP_CHK_NODE_ACT(ndlp) && shost) { 4449 4434 spin_lock_irq(shost->host_lock); 4450 4435 ndlp->nlp_flag &= ~(NLP_ACC_REGLOGIN | NLP_RM_DFLT_RPI); 4451 4436 spin_unlock_irq(shost->host_lock); ··· 5275 5260 } 5276 5261 } 5277 5262 } 5263 + 5264 + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 5265 + "6452 Discover PLOGI %d flag x%x\n", 5266 + sentplogi, vport->fc_flag); 5267 + 5278 5268 if (sentplogi) { 5279 5269 lpfc_set_disctmo(vport); 5280 5270 } ··· 6475 6455 uint32_t payload_len, length, nportid, *cmd; 6476 6456 int rscn_cnt; 6477 6457 int rscn_id = 0, hba_id = 0; 6478 - int i; 6458 + int i, tmo; 6479 6459 6480 6460 pcmd = (struct lpfc_dmabuf *) cmdiocb->context2; 6481 6461 lp = (uint32_t *) pcmd->virt; ··· 
6581 6561 6582 6562 spin_lock_irq(shost->host_lock); 6583 6563 vport->fc_flag |= FC_RSCN_DEFERRED; 6564 + 6565 + /* Restart disctmo if its already running */ 6566 + if (vport->fc_flag & FC_DISC_TMO) { 6567 + tmo = ((phba->fc_ratov * 3) + 3); 6568 + mod_timer(&vport->fc_disctmo, 6569 + jiffies + msecs_to_jiffies(1000 * tmo)); 6570 + } 6584 6571 if ((rscn_cnt < FC_MAX_HOLD_RSCN) && 6585 6572 !(vport->fc_flag & FC_RSCN_DISCOVERY)) { 6586 6573 vport->fc_flag |= FC_RSCN_MODE; ··· 6690 6663 6691 6664 /* RSCN processed */ 6692 6665 lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 6693 - "0215 RSCN processed Data: x%x x%x x%x x%x\n", 6666 + "0215 RSCN processed Data: x%x x%x x%x x%x x%x x%x\n", 6694 6667 vport->fc_flag, 0, vport->fc_rscn_id_cnt, 6695 - vport->port_state); 6668 + vport->port_state, vport->num_disc_nodes, 6669 + vport->gidft_inp); 6696 6670 6697 6671 /* To process RSCN, first compare RSCN data with NameServer */ 6698 6672 vport->fc_ns_retry = 0; ··· 8014 7986 struct lpfc_sli_ring *pring; 8015 7987 struct lpfc_iocbq *tmp_iocb, *piocb; 8016 7988 IOCB_t *cmd = NULL; 7989 + unsigned long iflags = 0; 8017 7990 8018 7991 lpfc_fabric_abort_vport(vport); 7992 + 8019 7993 /* 8020 7994 * For SLI3, only the hbalock is required. But SLI4 needs to coordinate 8021 7995 * with the ring insert operation. Because lpfc_sli_issue_abort_iotag 8022 7996 * ultimately grabs the ring_lock, the driver must splice the list into 8023 7997 * a working list and release the locks before calling the abort. 8024 7998 */ 8025 - spin_lock_irq(&phba->hbalock); 7999 + spin_lock_irqsave(&phba->hbalock, iflags); 8026 8000 pring = lpfc_phba_elsring(phba); 8027 8001 8028 8002 /* Bail out if we've no ELS wq, like in PCI error recovery case. 
*/ 8029 8003 if (unlikely(!pring)) { 8030 - spin_unlock_irq(&phba->hbalock); 8004 + spin_unlock_irqrestore(&phba->hbalock, iflags); 8031 8005 return; 8032 8006 } 8033 8007 ··· 8042 8012 continue; 8043 8013 8044 8014 if (piocb->vport != vport) 8015 + continue; 8016 + 8017 + if (piocb->iocb_flag & LPFC_DRIVER_ABORTED) 8045 8018 continue; 8046 8019 8047 8020 /* On the ELS ring we can have ELS_REQUESTs or ··· 8070 8037 8071 8038 if (phba->sli_rev == LPFC_SLI_REV4) 8072 8039 spin_unlock(&pring->ring_lock); 8073 - spin_unlock_irq(&phba->hbalock); 8040 + spin_unlock_irqrestore(&phba->hbalock, iflags); 8074 8041 8075 8042 /* Abort each txcmpl iocb on aborted list and remove the dlist links. */ 8076 8043 list_for_each_entry_safe(piocb, tmp_iocb, &abort_list, dlist) { 8077 - spin_lock_irq(&phba->hbalock); 8044 + spin_lock_irqsave(&phba->hbalock, iflags); 8078 8045 list_del_init(&piocb->dlist); 8079 8046 lpfc_sli_issue_abort_iotag(phba, pring, piocb); 8080 - spin_unlock_irq(&phba->hbalock); 8047 + spin_unlock_irqrestore(&phba->hbalock, iflags); 8081 8048 } 8082 8049 if (!list_empty(&abort_list)) 8083 8050 lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS, 8084 8051 "3387 abort list for txq not empty\n"); 8085 8052 INIT_LIST_HEAD(&abort_list); 8086 8053 8087 - spin_lock_irq(&phba->hbalock); 8054 + spin_lock_irqsave(&phba->hbalock, iflags); 8088 8055 if (phba->sli_rev == LPFC_SLI_REV4) 8089 8056 spin_lock(&pring->ring_lock); 8090 8057 ··· 8124 8091 8125 8092 if (phba->sli_rev == LPFC_SLI_REV4) 8126 8093 spin_unlock(&pring->ring_lock); 8127 - spin_unlock_irq(&phba->hbalock); 8094 + spin_unlock_irqrestore(&phba->hbalock, iflags); 8128 8095 8129 8096 /* Cancel all the IOCBs from the completions list */ 8130 8097 lpfc_sli_cancel_iocbs(phba, &abort_list,
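The RSCN hunk in lpfc_els.c above re-arms the discovery timer at `(phba->fc_ratov * 3) + 3` seconds when an RSCN is deferred while `FC_DISC_TMO` is already set. A standalone sketch of that timeout arithmetic — the helper name is illustrative; the driver feeds the millisecond value through `msecs_to_jiffies()` into `mod_timer()`:

```c
#include <assert.h>

/* Discovery timeout used when restarting fc_disctmo on a deferred
 * RSCN: three fabric R_A_TOV intervals plus a 3 second pad,
 * converted to milliseconds. */
static unsigned int rscn_disc_tmo_ms(unsigned int fc_ratov)
{
	unsigned int tmo = (fc_ratov * 3) + 3;	/* seconds */

	return tmo * 1000;	/* argument to msecs_to_jiffies() */
}
```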
+136 -64
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 700 700 if (!(phba->hba_flag & HBA_SP_QUEUE_EVT)) 701 701 set_bit(LPFC_DATA_READY, &phba->data_flags); 702 702 } else { 703 - if (phba->link_state >= LPFC_LINK_UP || 703 + /* Driver could have abort request completed in queue 704 + * when link goes down. Allow for this transition. 705 + */ 706 + if (phba->link_state >= LPFC_LINK_DOWN || 704 707 phba->link_flag & LS_MDS_LOOPBACK) { 705 708 pring->flag &= ~LPFC_DEFERRED_RING_EVENT; 706 709 lpfc_sli_handle_slow_ring_event(phba, pring, ··· 1138 1135 lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) 1139 1136 { 1140 1137 struct lpfc_vport *vport = pmb->vport; 1141 - uint8_t bbscn = 0; 1142 1138 1143 1139 if (pmb->u.mb.mbxStatus) 1144 1140 goto out; ··· 1164 1162 /* Start discovery by sending a FLOGI. port_state is identically 1165 1163 * LPFC_FLOGI while waiting for FLOGI cmpl 1166 1164 */ 1167 - if (vport->port_state != LPFC_FLOGI) { 1168 - if (phba->bbcredit_support && phba->cfg_enable_bbcr) { 1169 - bbscn = bf_get(lpfc_bbscn_def, 1170 - &phba->sli4_hba.bbscn_params); 1171 - vport->fc_sparam.cmn.bbRcvSizeMsb &= 0xf; 1172 - vport->fc_sparam.cmn.bbRcvSizeMsb |= (bbscn << 4); 1173 - } 1165 + if (vport->port_state != LPFC_FLOGI) 1174 1166 lpfc_initial_flogi(vport); 1175 - } else if (vport->fc_flag & FC_PT2PT) { 1167 + else if (vport->fc_flag & FC_PT2PT) 1176 1168 lpfc_disc_start(vport); 1177 - } 1169 + 1178 1170 return; 1179 1171 1180 1172 out: ··· 3452 3456 phba->pport->port_state, vport->fc_flag); 3453 3457 else if (attn_type == LPFC_ATT_UNEXP_WWPN) 3454 3458 lpfc_printf_log(phba, KERN_ERR, LOG_LINK_EVENT, 3455 - "1313 Link Down UNEXP WWPN Event x%x received " 3456 - "Data: x%x x%x x%x x%x x%x\n", 3459 + "1313 Link Down Unexpected FA WWPN Event x%x " 3460 + "received Data: x%x x%x x%x x%x x%x\n", 3457 3461 la->eventTag, phba->fc_eventTag, 3458 3462 phba->pport->port_state, vport->fc_flag, 3459 3463 bf_get(lpfc_mbx_read_top_mm, la), ··· 4042 4046 ndlp->nlp_flag |= NLP_RPI_REGISTERED; 4043 4047 
ndlp->nlp_type |= NLP_FABRIC; 4044 4048 lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE); 4045 - lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI, 4049 + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_DISCOVERY, 4046 4050 "0003 rpi:%x DID:%x flg:%x %d map%x x%px\n", 4047 4051 ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag, 4048 4052 kref_read(&ndlp->kref), ··· 4571 4575 return ndlp; 4572 4576 4573 4577 free_rpi: 4574 - if (phba->sli_rev == LPFC_SLI_REV4) 4578 + if (phba->sli_rev == LPFC_SLI_REV4) { 4575 4579 lpfc_sli4_free_rpi(vport->phba, rpi); 4580 + ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; 4581 + } 4576 4582 return NULL; 4577 4583 } 4578 4584 ··· 4833 4835 if (ndlp->nlp_flag & NLP_RELEASE_RPI) { 4834 4836 lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi); 4835 4837 ndlp->nlp_flag &= ~NLP_RELEASE_RPI; 4838 + ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; 4836 4839 } 4837 4840 ndlp->nlp_flag &= ~NLP_UNREG_INP; 4841 + } 4842 + } 4843 + 4844 + /* 4845 + * Sets the mailbox completion handler to be used for the 4846 + * unreg_rpi command. The handler varies based on the state of 4847 + * the port and what will be happening to the rpi next. 
4848 + */ 4849 + static void 4850 + lpfc_set_unreg_login_mbx_cmpl(struct lpfc_hba *phba, struct lpfc_vport *vport, 4851 + struct lpfc_nodelist *ndlp, LPFC_MBOXQ_t *mbox) 4852 + { 4853 + unsigned long iflags; 4854 + 4855 + if (ndlp->nlp_flag & NLP_ISSUE_LOGO) { 4856 + mbox->ctx_ndlp = ndlp; 4857 + mbox->mbox_cmpl = lpfc_nlp_logo_unreg; 4858 + 4859 + } else if (phba->sli_rev == LPFC_SLI_REV4 && 4860 + (!(vport->load_flag & FC_UNLOADING)) && 4861 + (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >= 4862 + LPFC_SLI_INTF_IF_TYPE_2) && 4863 + (kref_read(&ndlp->kref) > 0)) { 4864 + mbox->ctx_ndlp = lpfc_nlp_get(ndlp); 4865 + mbox->mbox_cmpl = lpfc_sli4_unreg_rpi_cmpl_clr; 4866 + } else { 4867 + if (vport->load_flag & FC_UNLOADING) { 4868 + if (phba->sli_rev == LPFC_SLI_REV4) { 4869 + spin_lock_irqsave(&vport->phba->ndlp_lock, 4870 + iflags); 4871 + ndlp->nlp_flag |= NLP_RELEASE_RPI; 4872 + spin_unlock_irqrestore(&vport->phba->ndlp_lock, 4873 + iflags); 4874 + } 4875 + lpfc_nlp_get(ndlp); 4876 + } 4877 + mbox->ctx_ndlp = ndlp; 4878 + mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 4838 4879 } 4839 4880 } 4840 4881 ··· 4897 4860 if (ndlp->nlp_flag & NLP_RPI_REGISTERED || 4898 4861 ndlp->nlp_flag & NLP_REG_LOGIN_SEND) { 4899 4862 if (ndlp->nlp_flag & NLP_REG_LOGIN_SEND) 4900 - lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI, 4863 + lpfc_printf_vlog(vport, KERN_INFO, 4864 + LOG_NODE | LOG_DISCOVERY, 4901 4865 "3366 RPI x%x needs to be " 4902 4866 "unregistered nlp_flag x%x " 4903 4867 "did x%x\n", ··· 4909 4871 * no need to queue up another one. 
4910 4872 */ 4911 4873 if (ndlp->nlp_flag & NLP_UNREG_INP) { 4912 - lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 4874 + lpfc_printf_vlog(vport, KERN_INFO, 4875 + LOG_NODE | LOG_DISCOVERY, 4913 4876 "1436 unreg_rpi SKIP UNREG x%x on " 4914 4877 "NPort x%x deferred x%x flg x%x " 4915 4878 "Data: x%px\n", ··· 4929 4890 4930 4891 lpfc_unreg_login(phba, vport->vpi, rpi, mbox); 4931 4892 mbox->vport = vport; 4932 - if (ndlp->nlp_flag & NLP_ISSUE_LOGO) { 4933 - mbox->ctx_ndlp = ndlp; 4934 - mbox->mbox_cmpl = lpfc_nlp_logo_unreg; 4935 - } else { 4936 - if (phba->sli_rev == LPFC_SLI_REV4 && 4937 - (!(vport->load_flag & FC_UNLOADING)) && 4938 - (bf_get(lpfc_sli_intf_if_type, 4939 - &phba->sli4_hba.sli_intf) >= 4940 - LPFC_SLI_INTF_IF_TYPE_2) && 4941 - (kref_read(&ndlp->kref) > 0)) { 4942 - mbox->ctx_ndlp = lpfc_nlp_get(ndlp); 4943 - mbox->mbox_cmpl = 4944 - lpfc_sli4_unreg_rpi_cmpl_clr; 4945 - /* 4946 - * accept PLOGIs after unreg_rpi_cmpl 4947 - */ 4948 - acc_plogi = 0; 4949 - } else if (vport->load_flag & FC_UNLOADING) { 4950 - mbox->ctx_ndlp = NULL; 4951 - mbox->mbox_cmpl = 4952 - lpfc_sli_def_mbox_cmpl; 4953 - } else { 4954 - mbox->ctx_ndlp = ndlp; 4955 - mbox->mbox_cmpl = 4956 - lpfc_sli_def_mbox_cmpl; 4957 - } 4958 - } 4893 + lpfc_set_unreg_login_mbx_cmpl(phba, vport, ndlp, mbox); 4894 + if (mbox->mbox_cmpl == lpfc_sli4_unreg_rpi_cmpl_clr) 4895 + /* 4896 + * accept PLOGIs after unreg_rpi_cmpl 4897 + */ 4898 + acc_plogi = 0; 4959 4899 if (((ndlp->nlp_DID & Fabric_DID_MASK) != 4960 4900 Fabric_DID_MASK) && 4961 4901 (!(vport->fc_flag & FC_OFFLINE_MODE))) 4962 4902 ndlp->nlp_flag |= NLP_UNREG_INP; 4963 4903 4964 - lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 4904 + lpfc_printf_vlog(vport, KERN_INFO, 4905 + LOG_NODE | LOG_DISCOVERY, 4965 4906 "1433 unreg_rpi UNREG x%x on " 4966 4907 "NPort x%x deferred flg x%x " 4967 4908 "Data:x%px\n", ··· 5076 5057 struct lpfc_hba *phba = vport->phba; 5077 5058 LPFC_MBOXQ_t *mb, *nextmb; 5078 5059 struct lpfc_dmabuf *mp; 5060 + 
unsigned long iflags; 5079 5061 5080 5062 /* Cleanup node for NPort <nlp_DID> */ 5081 5063 lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, ··· 5158 5138 lpfc_cleanup_vports_rrqs(vport, ndlp); 5159 5139 if (phba->sli_rev == LPFC_SLI_REV4) 5160 5140 ndlp->nlp_flag |= NLP_RELEASE_RPI; 5161 - lpfc_unreg_rpi(vport, ndlp); 5162 - 5141 + if (!lpfc_unreg_rpi(vport, ndlp)) { 5142 + /* Clean up unregistered and non freed rpis */ 5143 + if ((ndlp->nlp_flag & NLP_RELEASE_RPI) && 5144 + !(ndlp->nlp_rpi == LPFC_RPI_ALLOC_ERROR)) { 5145 + lpfc_sli4_free_rpi(vport->phba, 5146 + ndlp->nlp_rpi); 5147 + spin_lock_irqsave(&vport->phba->ndlp_lock, 5148 + iflags); 5149 + ndlp->nlp_flag &= ~NLP_RELEASE_RPI; 5150 + ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; 5151 + spin_unlock_irqrestore(&vport->phba->ndlp_lock, 5152 + iflags); 5153 + } 5154 + } 5163 5155 return 0; 5164 5156 } 5165 5157 ··· 5197 5165 /* For this case we need to cleanup the default rpi 5198 5166 * allocated by the firmware. 5199 5167 */ 5200 - lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, 5201 - "0005 rpi:%x DID:%x flg:%x %d map:%x x%px\n", 5168 + lpfc_printf_vlog(vport, KERN_INFO, 5169 + LOG_NODE | LOG_DISCOVERY, 5170 + "0005 Cleanup Default rpi:x%x DID:x%x flg:x%x " 5171 + "ref %d map:x%x ndlp x%px\n", 5202 5172 ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag, 5203 5173 kref_read(&ndlp->kref), 5204 5174 ndlp->nlp_usg_map, ndlp); ··· 5237 5203 */ 5238 5204 lpfc_printf_vlog(vport, KERN_WARNING, LOG_NODE, 5239 5205 "0940 removed node x%px DID x%x " 5240 - " rport not null x%px\n", 5241 - ndlp, ndlp->nlp_DID, ndlp->rport); 5206 + "rpi %d rport not null x%px\n", 5207 + ndlp, ndlp->nlp_DID, ndlp->nlp_rpi, 5208 + ndlp->rport); 5242 5209 rport = ndlp->rport; 5243 5210 rdata = rport->dd_data; 5244 5211 rdata->pnode = NULL; ··· 5397 5362 if (!ndlp) 5398 5363 return NULL; 5399 5364 lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); 5365 + 5366 + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 5367 + "6453 Setup New Node 2B_DISC x%x " 5368 + 
"Data:x%x x%x x%x\n", 5369 + ndlp->nlp_DID, ndlp->nlp_flag, 5370 + ndlp->nlp_state, vport->fc_flag); 5371 + 5400 5372 spin_lock_irq(shost->host_lock); 5401 5373 ndlp->nlp_flag |= NLP_NPR_2B_DISC; 5402 5374 spin_unlock_irq(shost->host_lock); ··· 5417 5375 "0014 Could not enable ndlp\n"); 5418 5376 return NULL; 5419 5377 } 5378 + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 5379 + "6454 Setup Enabled Node 2B_DISC x%x " 5380 + "Data:x%x x%x x%x\n", 5381 + ndlp->nlp_DID, ndlp->nlp_flag, 5382 + ndlp->nlp_state, vport->fc_flag); 5383 + 5420 5384 spin_lock_irq(shost->host_lock); 5421 5385 ndlp->nlp_flag |= NLP_NPR_2B_DISC; 5422 5386 spin_unlock_irq(shost->host_lock); ··· 5442 5394 */ 5443 5395 lpfc_cancel_retry_delay_tmo(vport, ndlp); 5444 5396 5397 + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 5398 + "6455 Setup RSCN Node 2B_DISC x%x " 5399 + "Data:x%x x%x x%x\n", 5400 + ndlp->nlp_DID, ndlp->nlp_flag, 5401 + ndlp->nlp_state, vport->fc_flag); 5402 + 5445 5403 /* NVME Target mode waits until rport is known to be 5446 5404 * impacted by the RSCN before it transitions. No 5447 5405 * active management - just go to NPR provided the ··· 5459 5405 /* If we've already received a PLOGI from this NPort 5460 5406 * we don't need to try to discover it again. 
5461 5407 */ 5462 - if (ndlp->nlp_flag & NLP_RCV_PLOGI) 5408 + if (ndlp->nlp_flag & NLP_RCV_PLOGI && 5409 + !(ndlp->nlp_type & 5410 + (NLP_FCP_TARGET | NLP_NVME_TARGET))) 5463 5411 return NULL; 5412 + 5413 + ndlp->nlp_prev_state = ndlp->nlp_state; 5414 + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); 5464 5415 5465 5416 spin_lock_irq(shost->host_lock); 5466 5417 ndlp->nlp_flag |= NLP_NPR_2B_DISC; 5467 5418 spin_unlock_irq(shost->host_lock); 5468 - } else 5419 + } else { 5420 + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 5421 + "6456 Skip Setup RSCN Node x%x " 5422 + "Data:x%x x%x x%x\n", 5423 + ndlp->nlp_DID, ndlp->nlp_flag, 5424 + ndlp->nlp_state, vport->fc_flag); 5469 5425 ndlp = NULL; 5426 + } 5470 5427 } else { 5428 + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 5429 + "6457 Setup Active Node 2B_DISC x%x " 5430 + "Data:x%x x%x x%x\n", 5431 + ndlp->nlp_DID, ndlp->nlp_flag, 5432 + ndlp->nlp_state, vport->fc_flag); 5433 + 5471 5434 /* If the initiator received a PLOGI from this NPort or if the 5472 5435 * initiator is already in the process of discovery on it, 5473 5436 * there's no need to try to discover it again. 
··· 5636 5565 5637 5566 /* Start Discovery state <hba_state> */ 5638 5567 lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 5639 - "0202 Start Discovery hba state x%x " 5640 - "Data: x%x x%x x%x\n", 5568 + "0202 Start Discovery port state x%x " 5569 + "flg x%x Data: x%x x%x x%x\n", 5641 5570 vport->port_state, vport->fc_flag, vport->fc_plogi_cnt, 5642 - vport->fc_adisc_cnt); 5571 + vport->fc_adisc_cnt, vport->fc_npr_cnt); 5643 5572 5644 5573 /* First do ADISCs - if any */ 5645 5574 num_sent = lpfc_els_disc_adisc(vport); ··· 6067 5996 ndlp->nlp_flag |= NLP_RPI_REGISTERED; 6068 5997 ndlp->nlp_type |= NLP_FABRIC; 6069 5998 lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE); 6070 - lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI, 5999 + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_DISCOVERY, 6071 6000 "0004 rpi:%x DID:%x flg:%x %d map:%x x%px\n", 6072 6001 ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag, 6073 6002 kref_read(&ndlp->kref), ··· 6256 6185 INIT_LIST_HEAD(&ndlp->nlp_listp); 6257 6186 if (vport->phba->sli_rev == LPFC_SLI_REV4) { 6258 6187 ndlp->nlp_rpi = rpi; 6259 - lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, 6260 - "0007 rpi:%x DID:%x flg:%x refcnt:%d " 6261 - "map:%x x%px\n", ndlp->nlp_rpi, ndlp->nlp_DID, 6262 - ndlp->nlp_flag, 6263 - kref_read(&ndlp->kref), 6264 - ndlp->nlp_usg_map, ndlp); 6188 + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_DISCOVERY, 6189 + "0007 Init New ndlp x%px, rpi:x%x DID:%x " 6190 + "flg:x%x refcnt:%d map:x%x\n", 6191 + ndlp, ndlp->nlp_rpi, ndlp->nlp_DID, 6192 + ndlp->nlp_flag, kref_read(&ndlp->kref), 6193 + ndlp->nlp_usg_map); 6265 6194 6266 6195 ndlp->active_rrqs_xri_bitmap = 6267 6196 mempool_alloc(vport->phba->active_rrq_pool, ··· 6490 6419 goto out; 6491 6420 } else if (ndlp->nlp_flag & NLP_RPI_REGISTERED) { 6492 6421 ret = 1; 6493 - lpfc_printf_log(phba, KERN_INFO, LOG_ELS, 6422 + lpfc_printf_log(phba, KERN_INFO, 6423 + LOG_NODE | LOG_DISCOVERY, 6494 6424 "2624 RPI %x DID %x flag %x " 6495 6425 "still logged in\n", 
6496 6426 ndlp->nlp_rpi, ndlp->nlp_DID,
+28 -3
drivers/scsi/lpfc/lpfc_hw4.h
··· 210 210 #define LPFC_MAX_IMAX 5000000 211 211 #define LPFC_DEF_IMAX 0 212 212 213 - #define LPFC_IMAX_THRESHOLD 1000 214 213 #define LPFC_MAX_AUTO_EQ_DELAY 120 215 214 #define LPFC_EQ_DELAY_STEP 15 216 215 #define LPFC_EQD_ISR_TRIGGER 20000 ··· 2319 2320 #define ADD_STATUS_OPERATION_ALREADY_ACTIVE 0x67 2320 2321 #define ADD_STATUS_FW_NOT_SUPPORTED 0xEB 2321 2322 #define ADD_STATUS_INVALID_REQUEST 0x4B 2323 + #define ADD_STATUS_FW_DOWNLOAD_HW_DISABLED 0x58 2322 2324 2323 2325 struct lpfc_mbx_sli4_config { 2324 2326 struct mbox_header header; ··· 2809 2809 #define lpfc_mbx_rd_conf_trunk_SHIFT 12 2810 2810 #define lpfc_mbx_rd_conf_trunk_MASK 0x0000000F 2811 2811 #define lpfc_mbx_rd_conf_trunk_WORD word2 2812 + #define lpfc_mbx_rd_conf_pt_SHIFT 20 2813 + #define lpfc_mbx_rd_conf_pt_MASK 0x00000003 2814 + #define lpfc_mbx_rd_conf_pt_WORD word2 2815 + #define lpfc_mbx_rd_conf_tf_SHIFT 22 2816 + #define lpfc_mbx_rd_conf_tf_MASK 0x00000001 2817 + #define lpfc_mbx_rd_conf_tf_WORD word2 2818 + #define lpfc_mbx_rd_conf_ptv_SHIFT 23 2819 + #define lpfc_mbx_rd_conf_ptv_MASK 0x00000001 2820 + #define lpfc_mbx_rd_conf_ptv_WORD word2 2812 2821 #define lpfc_mbx_rd_conf_topology_SHIFT 24 2813 2822 #define lpfc_mbx_rd_conf_topology_MASK 0x000000FF 2814 2823 #define lpfc_mbx_rd_conf_topology_WORD word2 ··· 3488 3479 #define cfg_bv1s_SHIFT 10 3489 3480 #define cfg_bv1s_MASK 0x00000001 3490 3481 #define cfg_bv1s_WORD word19 3482 + #define cfg_pvl_SHIFT 13 3483 + #define cfg_pvl_MASK 0x00000001 3484 + #define cfg_pvl_WORD word19 3491 3485 3492 3486 #define cfg_nsler_SHIFT 12 3493 3487 #define cfg_nsler_MASK 0x00000001 ··· 3530 3518 3531 3519 #define LPFC_SET_UE_RECOVERY 0x10 3532 3520 #define LPFC_SET_MDS_DIAGS 0x11 3521 + #define LPFC_SET_DUAL_DUMP 0x1e 3533 3522 struct lpfc_mbx_set_feature { 3534 3523 struct mbox_header header; 3535 3524 uint32_t feature; ··· 3545 3532 #define lpfc_mbx_set_feature_mds_deep_loopbk_SHIFT 1 3546 3533 #define lpfc_mbx_set_feature_mds_deep_loopbk_MASK 
0x00000001 3547 3534 #define lpfc_mbx_set_feature_mds_deep_loopbk_WORD word6 3535 + #define lpfc_mbx_set_feature_dd_SHIFT 0 3536 + #define lpfc_mbx_set_feature_dd_MASK 0x00000001 3537 + #define lpfc_mbx_set_feature_dd_WORD word6 3538 + #define lpfc_mbx_set_feature_ddquery_SHIFT 1 3539 + #define lpfc_mbx_set_feature_ddquery_MASK 0x00000001 3540 + #define lpfc_mbx_set_feature_ddquery_WORD word6 3541 + #define LPFC_DISABLE_DUAL_DUMP 0 3542 + #define LPFC_ENABLE_DUAL_DUMP 1 3543 + #define LPFC_QUERY_OP_DUAL_DUMP 2 3548 3544 uint32_t word7; 3549 3545 #define lpfc_mbx_set_feature_UERP_SHIFT 0 3550 3546 #define lpfc_mbx_set_feature_UERP_MASK 0x0000ffff ··· 4283 4261 #define LPFC_SLI_EVENT_TYPE_DIAG_DUMP 0x5 4284 4262 #define LPFC_SLI_EVENT_TYPE_MISCONFIGURED 0x9 4285 4263 #define LPFC_SLI_EVENT_TYPE_REMOTE_DPORT 0xA 4264 + #define LPFC_SLI_EVENT_TYPE_MISCONF_FAWWN 0xF 4265 + #define LPFC_SLI_EVENT_TYPE_EEPROM_FAILURE 0x10 4286 4266 }; 4287 4267 4288 4268 /* ··· 4683 4659 uint32_t rsvd_12_15[4]; /* word 12-15 */ 4684 4660 }; 4685 4661 4662 + #define INHIBIT_ABORT 1 4686 4663 #define T_REQUEST_TAG 3 4687 4664 #define T_XRI_TAG 1 4688 4665 ··· 4832 4807 struct send_frame_wqe send_frame; 4833 4808 }; 4834 4809 4835 - #define MAGIC_NUMER_G6 0xFEAA0003 4836 - #define MAGIC_NUMER_G7 0xFEAA0005 4810 + #define MAGIC_NUMBER_G6 0xFEAA0003 4811 + #define MAGIC_NUMBER_G7 0xFEAA0005 4837 4812 4838 4813 struct lpfc_grp_hdr { 4839 4814 uint32_t size;
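The new `pt`/`tf`/`ptv` READ_CONFIG fields added to lpfc_hw4.h above follow the driver's `_SHIFT`/`_MASK`/`_WORD` accessor convention: a field is extracted by shifting the containing word right by `_SHIFT` and masking with `_MASK`, which is what lpfc's `bf_get()` macro expands to. A standalone sketch (the uppercase macro names here merely mirror the `lpfc_mbx_rd_conf_pt_*` definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors of lpfc_mbx_rd_conf_pt_SHIFT/_MASK from the hunk above. */
#define LPFC_MBX_RD_CONF_PT_SHIFT 20
#define LPFC_MBX_RD_CONF_PT_MASK  0x00000003

/* What bf_get() reduces to for a SHIFT/MASK field definition. */
static uint32_t field_get(uint32_t word, unsigned int shift, uint32_t mask)
{
	return (word >> shift) & mask;
}
```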
+748 -214
drivers/scsi/lpfc/lpfc_init.c
··· 40 40 #include <linux/irq.h> 41 41 #include <linux/bitops.h> 42 42 #include <linux/crash_dump.h> 43 + #include <linux/cpu.h> 44 + #include <linux/cpuhotplug.h> 43 45 44 46 #include <scsi/scsi.h> 45 47 #include <scsi/scsi_device.h> ··· 68 66 #include "lpfc_version.h" 69 67 #include "lpfc_ids.h" 70 68 69 + static enum cpuhp_state lpfc_cpuhp_state; 71 70 /* Used when mapping IRQ vectors in a driver centric manner */ 72 71 static uint32_t lpfc_present_cpu; 73 72 73 + static void __lpfc_cpuhp_remove(struct lpfc_hba *phba); 74 + static void lpfc_cpuhp_remove(struct lpfc_hba *phba); 75 + static void lpfc_cpuhp_add(struct lpfc_hba *phba); 74 76 static void lpfc_get_hba_model_desc(struct lpfc_hba *, uint8_t *, uint8_t *); 75 77 static int lpfc_post_rcv_buf(struct lpfc_hba *); 76 78 static int lpfc_sli4_queue_verify(struct lpfc_hba *); ··· 1241 1235 struct lpfc_hba, eq_delay_work); 1242 1236 struct lpfc_eq_intr_info *eqi, *eqi_new; 1243 1237 struct lpfc_queue *eq, *eq_next; 1244 - unsigned char *eqcnt = NULL; 1238 + unsigned char *ena_delay = NULL; 1245 1239 uint32_t usdelay; 1246 1240 int i; 1247 - bool update = false; 1248 1241 1249 1242 if (!phba->cfg_auto_imax || phba->pport->load_flag & FC_UNLOADING) 1250 1243 return; ··· 1252 1247 phba->pport->fc_flag & FC_OFFLINE_MODE) 1253 1248 goto requeue; 1254 1249 1255 - eqcnt = kcalloc(num_possible_cpus(), sizeof(unsigned char), 1256 - GFP_KERNEL); 1257 - if (!eqcnt) 1250 + ena_delay = kcalloc(phba->sli4_hba.num_possible_cpu, sizeof(*ena_delay), 1251 + GFP_KERNEL); 1252 + if (!ena_delay) 1258 1253 goto requeue; 1259 1254 1260 - if (phba->cfg_irq_chann > 1) { 1261 - /* Loop thru all IRQ vectors */ 1262 - for (i = 0; i < phba->cfg_irq_chann; i++) { 1263 - /* Get the EQ corresponding to the IRQ vector */ 1264 - eq = phba->sli4_hba.hba_eq_hdl[i].eq; 1265 - if (!eq) 1266 - continue; 1267 - if (eq->q_mode) { 1268 - update = true; 1269 - break; 1270 - } 1271 - if (eqcnt[eq->last_cpu] < 2) 1272 - eqcnt[eq->last_cpu]++; 1255 + for (i 
= 0; i < phba->cfg_irq_chann; i++) { 1256 + /* Get the EQ corresponding to the IRQ vector */ 1257 + eq = phba->sli4_hba.hba_eq_hdl[i].eq; 1258 + if (!eq) 1259 + continue; 1260 + if (eq->q_mode || eq->q_flag & HBA_EQ_DELAY_CHK) { 1261 + eq->q_flag &= ~HBA_EQ_DELAY_CHK; 1262 + ena_delay[eq->last_cpu] = 1; 1273 1263 } 1274 - } else 1275 - update = true; 1264 + } 1276 1265 1277 1266 for_each_present_cpu(i) { 1278 1267 eqi = per_cpu_ptr(phba->sli4_hba.eq_info, i); 1279 - if (!update && eqcnt[i] < 2) { 1280 - eqi->icnt = 0; 1281 - continue; 1268 + if (ena_delay[i]) { 1269 + usdelay = (eqi->icnt >> 10) * LPFC_EQ_DELAY_STEP; 1270 + if (usdelay > LPFC_MAX_AUTO_EQ_DELAY) 1271 + usdelay = LPFC_MAX_AUTO_EQ_DELAY; 1272 + } else { 1273 + usdelay = 0; 1282 1274 } 1283 - 1284 - usdelay = (eqi->icnt / LPFC_IMAX_THRESHOLD) * 1285 - LPFC_EQ_DELAY_STEP; 1286 - if (usdelay > LPFC_MAX_AUTO_EQ_DELAY) 1287 - usdelay = LPFC_MAX_AUTO_EQ_DELAY; 1288 1275 1289 1276 eqi->icnt = 0; 1290 1277 1291 1278 list_for_each_entry_safe(eq, eq_next, &eqi->list, cpu_list) { 1292 - if (eq->last_cpu != i) { 1279 + if (unlikely(eq->last_cpu != i)) { 1293 1280 eqi_new = per_cpu_ptr(phba->sli4_hba.eq_info, 1294 1281 eq->last_cpu); 1295 1282 list_move_tail(&eq->cpu_list, &eqi_new->list); ··· 1293 1296 } 1294 1297 } 1295 1298 1296 - kfree(eqcnt); 1299 + kfree(ena_delay); 1297 1300 1298 1301 requeue: 1299 1302 queue_delayed_work(phba->wq, &phba->eq_delay_work, ··· 3050 3053 continue; 3051 3054 } 3052 3055 ndlp->nlp_rpi = rpi; 3053 - lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE, 3054 - "0009 rpi:%x DID:%x " 3055 - "flg:%x map:%x x%px\n", ndlp->nlp_rpi, 3056 - ndlp->nlp_DID, ndlp->nlp_flag, 3057 - ndlp->nlp_usg_map, ndlp); 3056 + lpfc_printf_vlog(ndlp->vport, KERN_INFO, 3057 + LOG_NODE | LOG_DISCOVERY, 3058 + "0009 Assign RPI x%x to ndlp x%px " 3059 + "DID:x%06x flg:x%x map:x%x\n", 3060 + ndlp->nlp_rpi, ndlp, ndlp->nlp_DID, 3061 + ndlp->nlp_flag, ndlp->nlp_usg_map); 3058 3062 } 3059 3063 } 3060 3064 
lpfc_destroy_vport_work_array(phba, vports); ··· 3385 3387 if (phba->cfg_xri_rebalancing) 3386 3388 lpfc_create_multixri_pools(phba); 3387 3389 3390 + lpfc_cpuhp_add(phba); 3391 + 3388 3392 lpfc_unblock_mgmt_io(phba); 3389 3393 return 0; 3390 3394 } ··· 3453 3453 list_for_each_entry_safe(ndlp, next_ndlp, 3454 3454 &vports[i]->fc_nodes, 3455 3455 nlp_listp) { 3456 - if (!NLP_CHK_NODE_ACT(ndlp)) 3456 + if ((!NLP_CHK_NODE_ACT(ndlp)) || 3457 + ndlp->nlp_state == NLP_STE_UNUSED_NODE) { 3458 + /* Driver must assume RPI is invalid for 3459 + * any unused or inactive node. 3460 + */ 3461 + ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; 3457 3462 continue; 3458 - if (ndlp->nlp_state == NLP_STE_UNUSED_NODE) 3459 - continue; 3463 + } 3464 + 3460 3465 if (ndlp->nlp_type & NLP_FABRIC) { 3461 3466 lpfc_disc_state_machine(vports[i], ndlp, 3462 3467 NULL, NLP_EVT_DEVICE_RECOVERY); ··· 3477 3472 * comes back online. 3478 3473 */ 3479 3474 if (phba->sli_rev == LPFC_SLI_REV4) { 3480 - lpfc_printf_vlog(ndlp->vport, 3481 - KERN_INFO, LOG_NODE, 3482 - "0011 lpfc_offline: " 3483 - "ndlp:x%px did %x " 3484 - "usgmap:x%x rpi:%x\n", 3485 - ndlp, ndlp->nlp_DID, 3486 - ndlp->nlp_usg_map, 3487 - ndlp->nlp_rpi); 3488 - 3475 + lpfc_printf_vlog(ndlp->vport, KERN_INFO, 3476 + LOG_NODE | LOG_DISCOVERY, 3477 + "0011 Free RPI x%x on " 3478 + "ndlp:x%px did x%x " 3479 + "usgmap:x%x\n", 3480 + ndlp->nlp_rpi, ndlp, 3481 + ndlp->nlp_DID, 3482 + ndlp->nlp_usg_map); 3489 3483 lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi); 3484 + ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; 3490 3485 } 3491 3486 lpfc_unreg_rpi(vports[i], ndlp); 3492 3487 } ··· 3550 3545 spin_unlock_irq(shost->host_lock); 3551 3546 } 3552 3547 lpfc_destroy_vport_work_array(phba, vports); 3548 + __lpfc_cpuhp_remove(phba); 3553 3549 3554 3550 if (phba->cfg_xri_rebalancing) 3555 3551 lpfc_destroy_multixri_pools(phba); ··· 5289 5283 evt_type = bf_get(lpfc_trailer_type, acqe_sli); 5290 5284 5291 5285 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 5292 - "2901 Async SLI 
event - Event Data1:x%08x Event Data2:" 5293 - "x%08x SLI Event Type:%d\n", 5286 + "2901 Async SLI event - Type:%d, Event Data: x%08x " 5287 + "x%08x x%08x x%08x\n", evt_type, 5294 5288 acqe_sli->event_data1, acqe_sli->event_data2, 5295 - evt_type); 5289 + acqe_sli->reserved, acqe_sli->trailer); 5296 5290 5297 5291 port_name = phba->Port[0]; 5298 5292 if (port_name == 0x00) ··· 5439 5433 "Event Data1:x%08x Event Data2: x%08x\n", 5440 5434 acqe_sli->event_data1, acqe_sli->event_data2); 5441 5435 break; 5436 + case LPFC_SLI_EVENT_TYPE_MISCONF_FAWWN: 5437 + /* Misconfigured WWN. Reports that the SLI Port is configured 5438 + * to use FA-WWN, but the attached device doesn't support it. 5439 + * No driver action is required. 5440 + * Event Data1 - N.A, Event Data2 - N.A 5441 + */ 5442 + lpfc_log_msg(phba, KERN_WARNING, LOG_SLI, 5443 + "2699 Misconfigured FA-WWN - Attached device does " 5444 + "not support FA-WWN\n"); 5445 + break; 5446 + case LPFC_SLI_EVENT_TYPE_EEPROM_FAILURE: 5447 + /* EEPROM failure. No driver action is required */ 5448 + lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 5449 + "2518 EEPROM failure - " 5450 + "Event Data1: x%08x Event Data2: x%08x\n", 5451 + acqe_sli->event_data1, acqe_sli->event_data2); 5452 + break; 5442 5453 default: 5443 5454 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 5444 - "3193 Async SLI event - Event Data1:x%08x Event Data2:" 5445 - "x%08x SLI Event Type:%d\n", 5446 - acqe_sli->event_data1, acqe_sli->event_data2, 5455 + "3193 Unrecognized SLI event, type: 0x%x", 5447 5456 evt_type); 5448 5457 break; 5449 5458 } ··· 5997 5976 } 5998 5977 5999 5978 /** 5979 + * lpfc_cpumask_of_node_init - initializes cpumask of phba's NUMA node 5980 + * @phba: Pointer to HBA context object. 
5981 + * 5982 + **/ 5983 + static void 5984 + lpfc_cpumask_of_node_init(struct lpfc_hba *phba) 5985 + { 5986 + unsigned int cpu, numa_node; 5987 + struct cpumask *numa_mask = &phba->sli4_hba.numa_mask; 5988 + 5989 + cpumask_clear(numa_mask); 5990 + 5991 + /* Check if we're a NUMA architecture */ 5992 + numa_node = dev_to_node(&phba->pcidev->dev); 5993 + if (numa_node == NUMA_NO_NODE) 5994 + return; 5995 + 5996 + for_each_possible_cpu(cpu) 5997 + if (cpu_to_node(cpu) == numa_node) 5998 + cpumask_set_cpu(cpu, numa_mask); 5999 + } 6000 + 6001 + /** 6000 6002 * lpfc_enable_pci_dev - Enable a generic PCI device. 6001 6003 * @phba: pointer to lpfc hba data structure. 6002 6004 * ··· 6462 6418 phba->sli4_hba.num_present_cpu = lpfc_present_cpu; 6463 6419 phba->sli4_hba.num_possible_cpu = num_possible_cpus(); 6464 6420 phba->sli4_hba.curr_disp_cpu = 0; 6421 + lpfc_cpumask_of_node_init(phba); 6465 6422 6466 6423 /* Get all the module params for configuring this host */ 6467 6424 lpfc_get_cfgparam(phba); ··· 6998 6953 phba->sli4_hba.num_possible_cpu = 0; 6999 6954 phba->sli4_hba.num_present_cpu = 0; 7000 6955 phba->sli4_hba.curr_disp_cpu = 0; 6956 + cpumask_clear(&phba->sli4_hba.numa_mask); 7001 6957 7002 6958 /* Free memory allocated for fast-path work queue handles */ 7003 6959 kfree(phba->sli4_hba.hba_eq_hdl); ··· 7172 7126 if (iocbq_entry == NULL) { 7173 7127 printk(KERN_ERR "%s: only allocated %d iocbs of " 7174 7128 "expected %d count. 
Unloading driver.\n", 7175 - __func__, i, LPFC_IOCB_LIST_CNT); 7129 + __func__, i, iocb_count); 7176 7130 goto out_free_iocbq; 7177 7131 } 7178 7132 ··· 7591 7545 7592 7546 if (phba->nvmet_support) { 7593 7547 /* Only 1 vport (pport) will support NVME target */ 7594 - if (phba->txrdy_payload_pool == NULL) { 7595 - phba->txrdy_payload_pool = dma_pool_create( 7596 - "txrdy_pool", &phba->pcidev->dev, 7597 - TXRDY_PAYLOAD_LEN, 16, 0); 7598 - if (phba->txrdy_payload_pool) { 7599 - phba->targetport = NULL; 7600 - phba->cfg_enable_fc4_type = LPFC_ENABLE_NVME; 7601 - lpfc_printf_log(phba, KERN_INFO, 7602 - LOG_INIT | LOG_NVME_DISC, 7603 - "6076 NVME Target Found\n"); 7604 - } 7605 - } 7548 + phba->targetport = NULL; 7549 + phba->cfg_enable_fc4_type = LPFC_ENABLE_NVME; 7550 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT | LOG_NVME_DISC, 7551 + "6076 NVME Target Found\n"); 7606 7552 } 7607 7553 7608 7554 lpfc_debugfs_initialize(vport); ··· 8273 8235 memset(&phba->sli4_hba.bmbx, 0, sizeof(struct lpfc_bmbx)); 8274 8236 } 8275 8237 8238 + static const char * const lpfc_topo_to_str[] = { 8239 + "Loop then P2P", 8240 + "Loopback", 8241 + "P2P Only", 8242 + "Unsupported", 8243 + "Loop Only", 8244 + "Unsupported", 8245 + "P2P then Loop", 8246 + }; 8247 + 8248 + /** 8249 + * lpfc_map_topology - Map the topology read from READ_CONFIG 8250 + * @phba: pointer to lpfc hba data structure. 8251 + * @rdconf: pointer to read config data 8252 + * 8253 + * This routine is invoked to map the topology values as read 8254 + * from the read config mailbox command. 
If the persistent 8255 + * topology feature is supported, the firmware will provide the 8256 + * saved topology information to be used in INIT_LINK 8257 + * 8258 + **/ 8259 + #define LINK_FLAGS_DEF 0x0 8260 + #define LINK_FLAGS_P2P 0x1 8261 + #define LINK_FLAGS_LOOP 0x2 8262 + static void 8263 + lpfc_map_topology(struct lpfc_hba *phba, struct lpfc_mbx_read_config *rd_config) 8264 + { 8265 + u8 ptv, tf, pt; 8266 + 8267 + ptv = bf_get(lpfc_mbx_rd_conf_ptv, rd_config); 8268 + tf = bf_get(lpfc_mbx_rd_conf_tf, rd_config); 8269 + pt = bf_get(lpfc_mbx_rd_conf_pt, rd_config); 8270 + 8271 + lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 8272 + "2027 Read Config Data : ptv:0x%x, tf:0x%x pt:0x%x", 8273 + ptv, tf, pt); 8274 + if (!ptv) { 8275 + lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 8276 + "2019 FW does not support persistent topology " 8277 + "Using driver parameter defined value [%s]", 8278 + lpfc_topo_to_str[phba->cfg_topology]); 8279 + return; 8280 + } 8281 + /* FW supports persistent topology - override module parameter value */ 8282 + phba->hba_flag |= HBA_PERSISTENT_TOPO; 8283 + switch (phba->pcidev->device) { 8284 + case PCI_DEVICE_ID_LANCER_G7_FC: 8285 + if (tf || (pt == LINK_FLAGS_LOOP)) { 8286 + /* Invalid values from FW - use driver params */ 8287 + phba->hba_flag &= ~HBA_PERSISTENT_TOPO; 8288 + } else { 8289 + /* Prism only supports PT2PT topology */ 8290 + phba->cfg_topology = FLAGS_TOPOLOGY_MODE_PT_PT; 8291 + } 8292 + break; 8293 + case PCI_DEVICE_ID_LANCER_G6_FC: 8294 + if (!tf) { 8295 + phba->cfg_topology = ((pt == LINK_FLAGS_LOOP) 8296 + ? FLAGS_TOPOLOGY_MODE_LOOP 8297 + : FLAGS_TOPOLOGY_MODE_PT_PT); 8298 + } else { 8299 + phba->hba_flag &= ~HBA_PERSISTENT_TOPO; 8300 + } 8301 + break; 8302 + default: /* G5 */ 8303 + if (tf) { 8304 + /* If topology failover set - pt is '0' or '1' */ 8305 + phba->cfg_topology = (pt ? FLAGS_TOPOLOGY_MODE_PT_LOOP : 8306 + FLAGS_TOPOLOGY_MODE_LOOP_PT); 8307 + } else { 8308 + phba->cfg_topology = ((pt == LINK_FLAGS_P2P) 8309 + ? 
FLAGS_TOPOLOGY_MODE_PT_PT 8310 + : FLAGS_TOPOLOGY_MODE_LOOP); 8311 + } 8312 + break; 8313 + } 8314 + if (phba->hba_flag & HBA_PERSISTENT_TOPO) { 8315 + lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 8316 + "2020 Using persistent topology value [%s]", 8317 + lpfc_topo_to_str[phba->cfg_topology]); 8318 + } else { 8319 + lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 8320 + "2021 Invalid topology values from FW " 8321 + "Using driver parameter defined value [%s]", 8322 + lpfc_topo_to_str[phba->cfg_topology]); 8323 + } 8324 + } 8325 + 8276 8326 /** 8277 8327 * lpfc_sli4_read_config - Get the config parameters. 8278 8328 * @phba: pointer to lpfc hba data structure. ··· 8472 8346 phba->max_vpi = (phba->sli4_hba.max_cfg_param.max_vpi > 0) ? 8473 8347 (phba->sli4_hba.max_cfg_param.max_vpi - 1) : 0; 8474 8348 phba->max_vports = phba->max_vpi; 8349 + lpfc_map_topology(phba, rd_config); 8475 8350 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 8476 8351 "2003 cfg params Extents? %d " 8477 8352 "XRI(B:%d M:%d), " ··· 8746 8619 */ 8747 8620 8748 8621 if (phba->nvmet_support) { 8749 - if (phba->cfg_irq_chann < phba->cfg_nvmet_mrq) 8750 - phba->cfg_nvmet_mrq = phba->cfg_irq_chann; 8622 + if (phba->cfg_hdw_queue < phba->cfg_nvmet_mrq) 8623 + phba->cfg_nvmet_mrq = phba->cfg_hdw_queue; 8751 8624 if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX) 8752 8625 phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX; 8753 8626 } ··· 9286 9159 spin_lock_irq(&phba->hbalock); 9287 9160 } 9288 9161 spin_unlock_irq(&phba->hbalock); 9162 + 9163 + lpfc_sli4_cleanup_poll_list(phba); 9289 9164 9290 9165 /* Release HBA eqs */ 9291 9166 if (phba->sli4_hba.hdwq) ··· 10710 10581 */ 10711 10582 if ((match == LPFC_FIND_BY_EQ) && 10712 10583 (cpup->flag & LPFC_CPU_FIRST_IRQ) && 10713 - (cpup->irq != LPFC_VECTOR_MAP_EMPTY) && 10714 10584 (cpup->eq == id)) 10715 10585 return cpu; 10716 10586 ··· 10747 10619 } 10748 10620 #endif 10749 10621 10622 + /* 10623 + * lpfc_assign_eq_map_info - Assigns eq for vector_map structure 10624 + * @phba: 
pointer to lpfc hba data structure. 10625 + * @eqidx: index for eq and irq vector 10626 + * @flag: flags to set for vector_map structure 10627 + * @cpu: cpu used to index vector_map structure 10628 + * 10629 + * The routine assigns eq info into vector_map structure 10630 + */ 10631 + static inline void 10632 + lpfc_assign_eq_map_info(struct lpfc_hba *phba, uint16_t eqidx, uint16_t flag, 10633 + unsigned int cpu) 10634 + { 10635 + struct lpfc_vector_map_info *cpup = &phba->sli4_hba.cpu_map[cpu]; 10636 + struct lpfc_hba_eq_hdl *eqhdl = lpfc_get_eq_hdl(eqidx); 10637 + 10638 + cpup->eq = eqidx; 10639 + cpup->flag |= flag; 10640 + 10641 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10642 + "3336 Set Affinity: CPU %d irq %d eq %d flag x%x\n", 10643 + cpu, eqhdl->irq, cpup->eq, cpup->flag); 10644 + } 10645 + 10646 + /** 10647 + * lpfc_cpu_map_array_init - Initialize cpu_map structure 10648 + * @phba: pointer to lpfc hba data structure. 10649 + * 10650 + * The routine initializes the cpu_map array structure 10651 + */ 10652 + static void 10653 + lpfc_cpu_map_array_init(struct lpfc_hba *phba) 10654 + { 10655 + struct lpfc_vector_map_info *cpup; 10656 + struct lpfc_eq_intr_info *eqi; 10657 + int cpu; 10658 + 10659 + for_each_possible_cpu(cpu) { 10660 + cpup = &phba->sli4_hba.cpu_map[cpu]; 10661 + cpup->phys_id = LPFC_VECTOR_MAP_EMPTY; 10662 + cpup->core_id = LPFC_VECTOR_MAP_EMPTY; 10663 + cpup->hdwq = LPFC_VECTOR_MAP_EMPTY; 10664 + cpup->eq = LPFC_VECTOR_MAP_EMPTY; 10665 + cpup->flag = 0; 10666 + eqi = per_cpu_ptr(phba->sli4_hba.eq_info, cpu); 10667 + INIT_LIST_HEAD(&eqi->list); 10668 + eqi->icnt = 0; 10669 + } 10670 + } 10671 + 10672 + /** 10673 + * lpfc_hba_eq_hdl_array_init - Initialize hba_eq_hdl structure 10674 + * @phba: pointer to lpfc hba data structure. 
10675 + * 10676 + * The routine initializes the hba_eq_hdl array structure 10677 + */ 10678 + static void 10679 + lpfc_hba_eq_hdl_array_init(struct lpfc_hba *phba) 10680 + { 10681 + struct lpfc_hba_eq_hdl *eqhdl; 10682 + int i; 10683 + 10684 + for (i = 0; i < phba->cfg_irq_chann; i++) { 10685 + eqhdl = lpfc_get_eq_hdl(i); 10686 + eqhdl->irq = LPFC_VECTOR_MAP_EMPTY; 10687 + eqhdl->phba = phba; 10688 + } 10689 + } 10690 + 10750 10691 /** 10751 10692 * lpfc_cpu_affinity_check - Check vector CPU affinity mappings 10752 10693 * @phba: pointer to lpfc hba data structure. ··· 10834 10637 int max_core_id, min_core_id; 10835 10638 struct lpfc_vector_map_info *cpup; 10836 10639 struct lpfc_vector_map_info *new_cpup; 10837 - const struct cpumask *maskp; 10838 10640 #ifdef CONFIG_X86 10839 10641 struct cpuinfo_x86 *cpuinfo; 10840 10642 #endif 10841 - 10842 - /* Init cpu_map array */ 10843 - for_each_possible_cpu(cpu) { 10844 - cpup = &phba->sli4_hba.cpu_map[cpu]; 10845 - cpup->phys_id = LPFC_VECTOR_MAP_EMPTY; 10846 - cpup->core_id = LPFC_VECTOR_MAP_EMPTY; 10847 - cpup->hdwq = LPFC_VECTOR_MAP_EMPTY; 10848 - cpup->eq = LPFC_VECTOR_MAP_EMPTY; 10849 - cpup->irq = LPFC_VECTOR_MAP_EMPTY; 10850 - cpup->flag = 0; 10851 - } 10852 10643 10853 10644 max_phys_id = 0; 10854 10645 min_phys_id = LPFC_VECTOR_MAP_EMPTY; ··· 10873 10688 min_core_id = cpup->core_id; 10874 10689 } 10875 10690 10876 - for_each_possible_cpu(i) { 10877 - struct lpfc_eq_intr_info *eqi = 10878 - per_cpu_ptr(phba->sli4_hba.eq_info, i); 10879 - 10880 - INIT_LIST_HEAD(&eqi->list); 10881 - eqi->icnt = 0; 10882 - } 10883 - 10884 - /* This loop sets up all CPUs that are affinitized with a 10885 - * irq vector assigned to the driver. All affinitized CPUs 10886 - * will get a link to that vectors IRQ and EQ. 10887 - * 10888 - * NULL affinity mask handling: 10889 - * If irq count is greater than one, log an error message. 
10890 - * If the null mask is received for the first irq, find the 10891 - * first present cpu, and assign the eq index to ensure at 10892 - * least one EQ is assigned. 10893 - */ 10894 - for (idx = 0; idx < phba->cfg_irq_chann; idx++) { 10895 - /* Get a CPU mask for all CPUs affinitized to this vector */ 10896 - maskp = pci_irq_get_affinity(phba->pcidev, idx); 10897 - if (!maskp) { 10898 - if (phba->cfg_irq_chann > 1) 10899 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 10900 - "3329 No affinity mask found " 10901 - "for vector %d (%d)\n", 10902 - idx, phba->cfg_irq_chann); 10903 - if (!idx) { 10904 - cpu = cpumask_first(cpu_present_mask); 10905 - cpup = &phba->sli4_hba.cpu_map[cpu]; 10906 - cpup->eq = idx; 10907 - cpup->irq = pci_irq_vector(phba->pcidev, idx); 10908 - cpup->flag |= LPFC_CPU_FIRST_IRQ; 10909 - } 10910 - break; 10911 - } 10912 - 10913 - i = 0; 10914 - /* Loop through all CPUs associated with vector idx */ 10915 - for_each_cpu_and(cpu, maskp, cpu_present_mask) { 10916 - /* Set the EQ index and IRQ for that vector */ 10917 - cpup = &phba->sli4_hba.cpu_map[cpu]; 10918 - cpup->eq = idx; 10919 - cpup->irq = pci_irq_vector(phba->pcidev, idx); 10920 - 10921 - /* If this is the first CPU thats assigned to this 10922 - * vector, set LPFC_CPU_FIRST_IRQ. 10923 - */ 10924 - if (!i) 10925 - cpup->flag |= LPFC_CPU_FIRST_IRQ; 10926 - i++; 10927 - 10928 - lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10929 - "3336 Set Affinity: CPU %d " 10930 - "irq %d eq %d flag x%x\n", 10931 - cpu, cpup->irq, cpup->eq, cpup->flag); 10932 - } 10933 - } 10934 - 10935 10691 /* After looking at each irq vector assigned to this pcidev, its 10936 10692 * possible to see that not ALL CPUs have been accounted for. 
10937 10693 * Next we will set any unassigned (unaffinitized) cpu map ··· 10898 10772 for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) { 10899 10773 new_cpup = &phba->sli4_hba.cpu_map[new_cpu]; 10900 10774 if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) && 10901 - (new_cpup->irq != LPFC_VECTOR_MAP_EMPTY) && 10775 + (new_cpup->eq != LPFC_VECTOR_MAP_EMPTY) && 10902 10776 (new_cpup->phys_id == cpup->phys_id)) 10903 10777 goto found_same; 10904 10778 new_cpu = cpumask_next( ··· 10911 10785 found_same: 10912 10786 /* We found a matching phys_id, so copy the IRQ info */ 10913 10787 cpup->eq = new_cpup->eq; 10914 - cpup->irq = new_cpup->irq; 10915 10788 10916 10789 /* Bump start_cpu to the next slot to minmize the 10917 10790 * chance of having multiple unassigned CPU entries ··· 10922 10797 10923 10798 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10924 10799 "3337 Set Affinity: CPU %d " 10925 - "irq %d from id %d same " 10800 + "eq %d from peer cpu %d same " 10926 10801 "phys_id (%d)\n", 10927 - cpu, cpup->irq, new_cpu, cpup->phys_id); 10802 + cpu, cpup->eq, new_cpu, 10803 + cpup->phys_id); 10928 10804 } 10929 10805 } 10930 10806 ··· 10949 10823 for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) { 10950 10824 new_cpup = &phba->sli4_hba.cpu_map[new_cpu]; 10951 10825 if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) && 10952 - (new_cpup->irq != LPFC_VECTOR_MAP_EMPTY)) 10826 + (new_cpup->eq != LPFC_VECTOR_MAP_EMPTY)) 10953 10827 goto found_any; 10954 10828 new_cpu = cpumask_next( 10955 10829 new_cpu, cpu_present_mask); ··· 10959 10833 /* We should never leave an entry unassigned */ 10960 10834 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 10961 10835 "3339 Set Affinity: CPU %d " 10962 - "irq %d UNASSIGNED\n", 10963 - cpup->hdwq, cpup->irq); 10836 + "eq %d UNASSIGNED\n", 10837 + cpup->hdwq, cpup->eq); 10964 10838 continue; 10965 10839 found_any: 10966 10840 /* We found an available entry, copy the IRQ info */ 10967 10841 cpup->eq = new_cpup->eq; 10968 - cpup->irq = new_cpup->irq; 
10969 10842 10970 10843 /* Bump start_cpu to the next slot to minmize the 10971 10844 * chance of having multiple unassigned CPU entries ··· 10976 10851 10977 10852 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10978 10853 "3338 Set Affinity: CPU %d " 10979 - "irq %d from id %d (%d/%d)\n", 10980 - cpu, cpup->irq, new_cpu, 10854 + "eq %d from peer cpu %d (%d/%d)\n", 10855 + cpu, cpup->eq, new_cpu, 10981 10856 new_cpup->phys_id, new_cpup->core_id); 10982 10857 } 10983 10858 } ··· 10998 10873 idx++; 10999 10874 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 11000 10875 "3333 Set Affinity: CPU %d (phys %d core %d): " 11001 - "hdwq %d eq %d irq %d flg x%x\n", 10876 + "hdwq %d eq %d flg x%x\n", 11002 10877 cpu, cpup->phys_id, cpup->core_id, 11003 - cpup->hdwq, cpup->eq, cpup->irq, cpup->flag); 10878 + cpup->hdwq, cpup->eq, cpup->flag); 11004 10879 } 11005 - /* Finally we need to associate a hdwq with each cpu_map entry 10880 + /* Associate a hdwq with each cpu_map entry 11006 10881 * This will be 1 to 1 - hdwq to cpu, unless there are less 11007 10882 * hardware queues then CPUs. For that case we will just round-robin 11008 10883 * the available hardware queues as they get assigned to CPUs. ··· 11076 10951 logit: 11077 10952 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 11078 10953 "3335 Set Affinity: CPU %d (phys %d core %d): " 11079 - "hdwq %d eq %d irq %d flg x%x\n", 10954 + "hdwq %d eq %d flg x%x\n", 11080 10955 cpu, cpup->phys_id, cpup->core_id, 11081 - cpup->hdwq, cpup->eq, cpup->irq, cpup->flag); 10956 + cpup->hdwq, cpup->eq, cpup->flag); 10957 + } 10958 + 10959 + /* 10960 + * Initialize the cpu_map slots for not-present cpus in case 10961 + * a cpu is hot-added. Perform a simple hdwq round robin assignment. 
10962 + */ 10963 + idx = 0; 10964 + for_each_possible_cpu(cpu) { 10965 + cpup = &phba->sli4_hba.cpu_map[cpu]; 10966 + if (cpup->hdwq != LPFC_VECTOR_MAP_EMPTY) 10967 + continue; 10968 + 10969 + cpup->hdwq = idx++ % phba->cfg_hdw_queue; 10970 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10971 + "3340 Set Affinity: not present " 10972 + "CPU %d hdwq %d\n", 10973 + cpu, cpup->hdwq); 11082 10974 } 11083 10975 11084 10976 /* The cpu_map array will be used later during initialization ··· 11105 10963 } 11106 10964 11107 10965 /** 10966 + * lpfc_cpuhp_get_eq 10967 + * 10968 + * @phba: pointer to lpfc hba data structure. 10969 + * @cpu: cpu going offline 10970 + * @eqlist: 10971 + */ 10972 + static void 10973 + lpfc_cpuhp_get_eq(struct lpfc_hba *phba, unsigned int cpu, 10974 + struct list_head *eqlist) 10975 + { 10976 + const struct cpumask *maskp; 10977 + struct lpfc_queue *eq; 10978 + cpumask_t tmp; 10979 + u16 idx; 10980 + 10981 + for (idx = 0; idx < phba->cfg_irq_chann; idx++) { 10982 + maskp = pci_irq_get_affinity(phba->pcidev, idx); 10983 + if (!maskp) 10984 + continue; 10985 + /* 10986 + * if irq is not affinitized to the cpu going 10987 + * then we don't need to poll the eq attached 10988 + * to it. 10989 + */ 10990 + if (!cpumask_and(&tmp, maskp, cpumask_of(cpu))) 10991 + continue; 10992 + /* get the cpus that are online and are affini- 10993 + * tized to this irq vector. If the count is 10994 + * more than 1 then cpuhp is not going to shut- 10995 + * down this vector. Since this cpu has not 10996 + * gone offline yet, we need >1. 10997 + */ 10998 + cpumask_and(&tmp, maskp, cpu_online_mask); 10999 + if (cpumask_weight(&tmp) > 1) 11000 + continue; 11001 + 11002 + /* Now that we have an irq to shutdown, get the eq 11003 + * mapped to this irq. 
Note: multiple hdwq's in 11004 + * the software can share an eq, but eventually 11005 + * only eq will be mapped to this vector 11006 + */ 11007 + eq = phba->sli4_hba.hba_eq_hdl[idx].eq; 11008 + list_add(&eq->_poll_list, eqlist); 11009 + } 11010 + } 11011 + 11012 + static void __lpfc_cpuhp_remove(struct lpfc_hba *phba) 11013 + { 11014 + if (phba->sli_rev != LPFC_SLI_REV4) 11015 + return; 11016 + 11017 + cpuhp_state_remove_instance_nocalls(lpfc_cpuhp_state, 11018 + &phba->cpuhp); 11019 + /* 11020 + * unregistering the instance doesn't stop the polling 11021 + * timer. Wait for the poll timer to retire. 11022 + */ 11023 + synchronize_rcu(); 11024 + del_timer_sync(&phba->cpuhp_poll_timer); 11025 + } 11026 + 11027 + static void lpfc_cpuhp_remove(struct lpfc_hba *phba) 11028 + { 11029 + if (phba->pport->fc_flag & FC_OFFLINE_MODE) 11030 + return; 11031 + 11032 + __lpfc_cpuhp_remove(phba); 11033 + } 11034 + 11035 + static void lpfc_cpuhp_add(struct lpfc_hba *phba) 11036 + { 11037 + if (phba->sli_rev != LPFC_SLI_REV4) 11038 + return; 11039 + 11040 + rcu_read_lock(); 11041 + 11042 + if (!list_empty(&phba->poll_list)) { 11043 + timer_setup(&phba->cpuhp_poll_timer, lpfc_sli4_poll_hbtimer, 0); 11044 + mod_timer(&phba->cpuhp_poll_timer, 11045 + jiffies + msecs_to_jiffies(LPFC_POLL_HB)); 11046 + } 11047 + 11048 + rcu_read_unlock(); 11049 + 11050 + cpuhp_state_add_instance_nocalls(lpfc_cpuhp_state, 11051 + &phba->cpuhp); 11052 + } 11053 + 11054 + static int __lpfc_cpuhp_checks(struct lpfc_hba *phba, int *retval) 11055 + { 11056 + if (phba->pport->load_flag & FC_UNLOADING) { 11057 + *retval = -EAGAIN; 11058 + return true; 11059 + } 11060 + 11061 + if (phba->sli_rev != LPFC_SLI_REV4) { 11062 + *retval = 0; 11063 + return true; 11064 + } 11065 + 11066 + /* proceed with the hotplug */ 11067 + return false; 11068 + } 11069 + 11070 + /** 11071 + * lpfc_irq_set_aff - set IRQ affinity 11072 + * @eqhdl: EQ handle 11073 + * @cpu: cpu to set affinity 11074 + * 11075 + **/ 11076 + static 
inline void 11077 + lpfc_irq_set_aff(struct lpfc_hba_eq_hdl *eqhdl, unsigned int cpu) 11078 + { 11079 + cpumask_clear(&eqhdl->aff_mask); 11080 + cpumask_set_cpu(cpu, &eqhdl->aff_mask); 11081 + irq_set_status_flags(eqhdl->irq, IRQ_NO_BALANCING); 11082 + irq_set_affinity_hint(eqhdl->irq, &eqhdl->aff_mask); 11083 + } 11084 + 11085 + /** 11086 + * lpfc_irq_clear_aff - clear IRQ affinity 11087 + * @eqhdl: EQ handle 11088 + * 11089 + **/ 11090 + static inline void 11091 + lpfc_irq_clear_aff(struct lpfc_hba_eq_hdl *eqhdl) 11092 + { 11093 + cpumask_clear(&eqhdl->aff_mask); 11094 + irq_clear_status_flags(eqhdl->irq, IRQ_NO_BALANCING); 11095 + irq_set_affinity_hint(eqhdl->irq, &eqhdl->aff_mask); 11096 + } 11097 + 11098 + /** 11099 + * lpfc_irq_rebalance - rebalances IRQ affinity according to cpuhp event 11100 + * @phba: pointer to HBA context object. 11101 + * @cpu: cpu going offline/online 11102 + * @offline: true, cpu is going offline. false, cpu is coming online. 11103 + * 11104 + * If cpu is going offline, we'll try our best effort to find the next 11105 + * online cpu on the phba's NUMA node and migrate all offlining IRQ affinities. 11106 + * 11107 + * If cpu is coming online, reaffinitize the IRQ back to the onlining cpu. 11108 + * 11109 + * Note: Call only if cfg_irq_numa is enabled, otherwise rely on 11110 + * PCI_IRQ_AFFINITY to auto-manage IRQ affinity. 
11111 + * 11112 + **/ 11113 + static void 11114 + lpfc_irq_rebalance(struct lpfc_hba *phba, unsigned int cpu, bool offline) 11115 + { 11116 + struct lpfc_vector_map_info *cpup; 11117 + struct cpumask *aff_mask; 11118 + unsigned int cpu_select, cpu_next, idx; 11119 + const struct cpumask *numa_mask; 11120 + 11121 + if (!phba->cfg_irq_numa) 11122 + return; 11123 + 11124 + numa_mask = &phba->sli4_hba.numa_mask; 11125 + 11126 + if (!cpumask_test_cpu(cpu, numa_mask)) 11127 + return; 11128 + 11129 + cpup = &phba->sli4_hba.cpu_map[cpu]; 11130 + 11131 + if (!(cpup->flag & LPFC_CPU_FIRST_IRQ)) 11132 + return; 11133 + 11134 + if (offline) { 11135 + /* Find next online CPU on NUMA node */ 11136 + cpu_next = cpumask_next_wrap(cpu, numa_mask, cpu, true); 11137 + cpu_select = lpfc_next_online_numa_cpu(numa_mask, cpu_next); 11138 + 11139 + /* Found a valid CPU */ 11140 + if ((cpu_select < nr_cpu_ids) && (cpu_select != cpu)) { 11141 + /* Go through each eqhdl and ensure offlining 11142 + * cpu aff_mask is migrated 11143 + */ 11144 + for (idx = 0; idx < phba->cfg_irq_chann; idx++) { 11145 + aff_mask = lpfc_get_aff_mask(idx); 11146 + 11147 + /* Migrate affinity */ 11148 + if (cpumask_test_cpu(cpu, aff_mask)) 11149 + lpfc_irq_set_aff(lpfc_get_eq_hdl(idx), 11150 + cpu_select); 11151 + } 11152 + } else { 11153 + /* Rely on irqbalance if no online CPUs left on NUMA */ 11154 + for (idx = 0; idx < phba->cfg_irq_chann; idx++) 11155 + lpfc_irq_clear_aff(lpfc_get_eq_hdl(idx)); 11156 + } 11157 + } else { 11158 + /* Migrate affinity back to this CPU */ 11159 + lpfc_irq_set_aff(lpfc_get_eq_hdl(cpup->eq), cpu); 11160 + } 11161 + } 11162 + 11163 + static int lpfc_cpu_offline(unsigned int cpu, struct hlist_node *node) 11164 + { 11165 + struct lpfc_hba *phba = hlist_entry_safe(node, struct lpfc_hba, cpuhp); 11166 + struct lpfc_queue *eq, *next; 11167 + LIST_HEAD(eqlist); 11168 + int retval; 11169 + 11170 + if (!phba) { 11171 + WARN_ONCE(!phba, "cpu: %u. 
phba:NULL", raw_smp_processor_id()); 11172 + return 0; 11173 + } 11174 + 11175 + if (__lpfc_cpuhp_checks(phba, &retval)) 11176 + return retval; 11177 + 11178 + lpfc_irq_rebalance(phba, cpu, true); 11179 + 11180 + lpfc_cpuhp_get_eq(phba, cpu, &eqlist); 11181 + 11182 + /* start polling on these eq's */ 11183 + list_for_each_entry_safe(eq, next, &eqlist, _poll_list) { 11184 + list_del_init(&eq->_poll_list); 11185 + lpfc_sli4_start_polling(eq); 11186 + } 11187 + 11188 + return 0; 11189 + } 11190 + 11191 + static int lpfc_cpu_online(unsigned int cpu, struct hlist_node *node) 11192 + { 11193 + struct lpfc_hba *phba = hlist_entry_safe(node, struct lpfc_hba, cpuhp); 11194 + struct lpfc_queue *eq, *next; 11195 + unsigned int n; 11196 + int retval; 11197 + 11198 + if (!phba) { 11199 + WARN_ONCE(!phba, "cpu: %u. phba:NULL", raw_smp_processor_id()); 11200 + return 0; 11201 + } 11202 + 11203 + if (__lpfc_cpuhp_checks(phba, &retval)) 11204 + return retval; 11205 + 11206 + lpfc_irq_rebalance(phba, cpu, false); 11207 + 11208 + list_for_each_entry_safe(eq, next, &phba->poll_list, _poll_list) { 11209 + n = lpfc_find_cpu_handle(phba, eq->hdwq, LPFC_FIND_BY_HDWQ); 11210 + if (n == cpu) 11211 + lpfc_sli4_stop_polling(eq); 11212 + } 11213 + 11214 + return 0; 11215 + } 11216 + 11217 + /** 11108 11218 * lpfc_sli4_enable_msix - Enable MSI-X interrupt mode to SLI-4 device 11109 11219 * @phba: pointer to lpfc hba data structure. 11110 11220 * 11111 11221 * This routine is invoked to enable the MSI-X interrupt vectors to device 11112 - * with SLI-4 interface spec. 11222 + * with SLI-4 interface spec. It also allocates MSI-X vectors and maps them 11223 + * to cpus on the system. 11224 + * 11225 + * When cfg_irq_numa is enabled, the adapter will only allocate vectors for 11226 + * the number of cpus on the same numa node as this adapter. The vectors are 11227 + * allocated without requesting OS affinity mapping. A vector will be 11228 + * allocated and assigned to each online and offline cpu. 
If the cpu is 11229 + * online, then affinity will be set to that cpu. If the cpu is offline, then 11230 + * affinity will be set to the nearest peer cpu within the numa node that is 11231 + * online. If there are no online cpus within the numa node, affinity is not 11232 + * assigned and the OS may do as it pleases. Note: cpu vector affinity mapping 11233 + * is consistent with the way cpu online/offline is handled when cfg_irq_numa is 11234 + * configured. 11235 + * 11236 + * If numa mode is not enabled and there is more than 1 vector allocated, then 11237 + * the driver relies on the managed irq interface where the OS assigns vector to 11238 + * cpu affinity. The driver will then use that affinity mapping to setup its 11239 + * cpu mapping table. 11113 11240 * 11114 11241 * Return codes 11115 11242 * 0 - successful ··· 11389 10978 { 11390 10979 int vectors, rc, index; 11391 10980 char *name; 10981 + const struct cpumask *numa_mask = NULL; 10982 + unsigned int cpu = 0, cpu_cnt = 0, cpu_select = nr_cpu_ids; 10983 + struct lpfc_hba_eq_hdl *eqhdl; 10984 + const struct cpumask *maskp; 10985 + bool first; 10986 + unsigned int flags = PCI_IRQ_MSIX; 11392 10987 11393 10988 /* Set up MSI-X multi-message vectors */ 11394 10989 vectors = phba->cfg_irq_chann; 11395 10990 11396 - rc = pci_alloc_irq_vectors(phba->pcidev, 11397 - 1, 11398 - vectors, PCI_IRQ_MSIX | PCI_IRQ_AFFINITY); 10991 + if (phba->cfg_irq_numa) { 10992 + numa_mask = &phba->sli4_hba.numa_mask; 10993 + cpu_cnt = cpumask_weight(numa_mask); 10994 + vectors = min(phba->cfg_irq_chann, cpu_cnt); 10995 + 10996 + /* cpu: iterates over numa_mask including offline or online 10997 + * cpu_select: iterates over online numa_mask to set affinity 10998 + */ 10999 + cpu = cpumask_first(numa_mask); 11000 + cpu_select = lpfc_next_online_numa_cpu(numa_mask, cpu); 11001 + } else { 11002 + flags |= PCI_IRQ_AFFINITY; 11003 + } 11004 + 11005 + rc = pci_alloc_irq_vectors(phba->pcidev, 1, vectors, flags); 11399 11006 if (rc < 0) { 
11400 11007 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 11401 11008 "0484 PCI enable MSI-X failed (%d)\n", rc); ··· 11423 10994 11424 10995 /* Assign MSI-X vectors to interrupt handlers */ 11425 10996 for (index = 0; index < vectors; index++) { 11426 - name = phba->sli4_hba.hba_eq_hdl[index].handler_name; 10997 + eqhdl = lpfc_get_eq_hdl(index); 10998 + name = eqhdl->handler_name; 11427 10999 memset(name, 0, LPFC_SLI4_HANDLER_NAME_SZ); 11428 11000 snprintf(name, LPFC_SLI4_HANDLER_NAME_SZ, 11429 11001 LPFC_DRIVER_HANDLER_NAME"%d", index); 11430 11002 11431 - phba->sli4_hba.hba_eq_hdl[index].idx = index; 11432 - phba->sli4_hba.hba_eq_hdl[index].phba = phba; 11003 + eqhdl->idx = index; 11433 11004 rc = request_irq(pci_irq_vector(phba->pcidev, index), 11434 11005 &lpfc_sli4_hba_intr_handler, 0, 11435 - name, 11436 - &phba->sli4_hba.hba_eq_hdl[index]); 11006 + name, eqhdl); 11437 11007 if (rc) { 11438 11008 lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, 11439 11009 "0486 MSI-X fast-path (%d) " 11440 11010 "request_irq failed (%d)\n", index, rc); 11441 11011 goto cfg_fail_out; 11012 + } 11013 + 11014 + eqhdl->irq = pci_irq_vector(phba->pcidev, index); 11015 + 11016 + if (phba->cfg_irq_numa) { 11017 + /* If found a neighboring online cpu, set affinity */ 11018 + if (cpu_select < nr_cpu_ids) 11019 + lpfc_irq_set_aff(eqhdl, cpu_select); 11020 + 11021 + /* Assign EQ to cpu_map */ 11022 + lpfc_assign_eq_map_info(phba, index, 11023 + LPFC_CPU_FIRST_IRQ, 11024 + cpu); 11025 + 11026 + /* Iterate to next offline or online cpu in numa_mask */ 11027 + cpu = cpumask_next(cpu, numa_mask); 11028 + 11029 + /* Find next online cpu in numa_mask to set affinity */ 11030 + cpu_select = lpfc_next_online_numa_cpu(numa_mask, cpu); 11031 + } else if (vectors == 1) { 11032 + cpu = cpumask_first(cpu_present_mask); 11033 + lpfc_assign_eq_map_info(phba, index, LPFC_CPU_FIRST_IRQ, 11034 + cpu); 11035 + } else { 11036 + maskp = pci_irq_get_affinity(phba->pcidev, index); 11037 + 11038 + first = true; 
11039 + /* Loop through all CPUs associated with vector index */ 11040 + for_each_cpu_and(cpu, maskp, cpu_present_mask) { 11041 + /* If this is the first CPU thats assigned to 11042 + * this vector, set LPFC_CPU_FIRST_IRQ. 11043 + */ 11044 + lpfc_assign_eq_map_info(phba, index, 11045 + first ? 11046 + LPFC_CPU_FIRST_IRQ : 0, 11047 + cpu); 11048 + if (first) 11049 + first = false; 11050 + } 11442 11051 } 11443 11052 } 11444 11053 ··· 11487 11020 phba->cfg_irq_chann, vectors); 11488 11021 if (phba->cfg_irq_chann > vectors) 11489 11022 phba->cfg_irq_chann = vectors; 11490 - if (phba->nvmet_support && (phba->cfg_nvmet_mrq > vectors)) 11491 - phba->cfg_nvmet_mrq = vectors; 11492 11023 } 11493 11024 11494 11025 return rc; 11495 11026 11496 11027 cfg_fail_out: 11497 11028 /* free the irq already requested */ 11498 - for (--index; index >= 0; index--) 11499 - free_irq(pci_irq_vector(phba->pcidev, index), 11500 - &phba->sli4_hba.hba_eq_hdl[index]); 11029 + for (--index; index >= 0; index--) { 11030 + eqhdl = lpfc_get_eq_hdl(index); 11031 + lpfc_irq_clear_aff(eqhdl); 11032 + irq_set_affinity_hint(eqhdl->irq, NULL); 11033 + free_irq(eqhdl->irq, eqhdl); 11034 + } 11501 11035 11502 11036 /* Unconfigure MSI-X capability structure */ 11503 11037 pci_free_irq_vectors(phba->pcidev); ··· 11525 11057 lpfc_sli4_enable_msi(struct lpfc_hba *phba) 11526 11058 { 11527 11059 int rc, index; 11060 + unsigned int cpu; 11061 + struct lpfc_hba_eq_hdl *eqhdl; 11528 11062 11529 11063 rc = pci_alloc_irq_vectors(phba->pcidev, 1, 1, 11530 11064 PCI_IRQ_MSI | PCI_IRQ_AFFINITY); ··· 11548 11078 return rc; 11549 11079 } 11550 11080 11081 + eqhdl = lpfc_get_eq_hdl(0); 11082 + eqhdl->irq = pci_irq_vector(phba->pcidev, 0); 11083 + 11084 + cpu = cpumask_first(cpu_present_mask); 11085 + lpfc_assign_eq_map_info(phba, 0, LPFC_CPU_FIRST_IRQ, cpu); 11086 + 11551 11087 for (index = 0; index < phba->cfg_irq_chann; index++) { 11552 - phba->sli4_hba.hba_eq_hdl[index].idx = index; 11553 - 
phba->sli4_hba.hba_eq_hdl[index].phba = phba; 11088 + eqhdl = lpfc_get_eq_hdl(index); 11089 + eqhdl->idx = index; 11554 11090 } 11555 11091 11556 11092 return 0; ··· 11614 11138 IRQF_SHARED, LPFC_DRIVER_NAME, phba); 11615 11139 if (!retval) { 11616 11140 struct lpfc_hba_eq_hdl *eqhdl; 11141 + unsigned int cpu; 11617 11142 11618 11143 /* Indicate initialization to INTx mode */ 11619 11144 phba->intr_type = INTx; 11620 11145 intr_mode = 0; 11621 11146 11147 + eqhdl = lpfc_get_eq_hdl(0); 11148 + eqhdl->irq = pci_irq_vector(phba->pcidev, 0); 11149 + 11150 + cpu = cpumask_first(cpu_present_mask); 11151 + lpfc_assign_eq_map_info(phba, 0, LPFC_CPU_FIRST_IRQ, 11152 + cpu); 11622 11153 for (idx = 0; idx < phba->cfg_irq_chann; idx++) { 11623 - eqhdl = &phba->sli4_hba.hba_eq_hdl[idx]; 11154 + eqhdl = lpfc_get_eq_hdl(idx); 11624 11155 eqhdl->idx = idx; 11625 - eqhdl->phba = phba; 11626 11156 } 11627 11157 } 11628 11158 } ··· 11650 11168 /* Disable the currently initialized interrupt mode */ 11651 11169 if (phba->intr_type == MSIX) { 11652 11170 int index; 11171 + struct lpfc_hba_eq_hdl *eqhdl; 11653 11172 11654 11173 /* Free up MSI-X multi-message vectors */ 11655 11174 for (index = 0; index < phba->cfg_irq_chann; index++) { 11656 - irq_set_affinity_hint( 11657 - pci_irq_vector(phba->pcidev, index), 11658 - NULL); 11659 - free_irq(pci_irq_vector(phba->pcidev, index), 11660 - &phba->sli4_hba.hba_eq_hdl[index]); 11175 + eqhdl = lpfc_get_eq_hdl(index); 11176 + lpfc_irq_clear_aff(eqhdl); 11177 + irq_set_affinity_hint(eqhdl->irq, NULL); 11178 + free_irq(eqhdl->irq, eqhdl); 11661 11179 } 11662 11180 } else { 11663 11181 free_irq(phba->pcidev->irq, phba); ··· 11849 11367 /* Wait for completion of device XRI exchange busy */ 11850 11368 lpfc_sli4_xri_exchange_busy_wait(phba); 11851 11369 11370 + /* per-phba callback de-registration for hotplug event */ 11371 + lpfc_cpuhp_remove(phba); 11372 + 11852 11373 /* Disable PCI subsystem interrupt */ 11853 11374 lpfc_sli4_disable_intr(phba); 
11854 11375 ··· 12023 11538 sli4_params->cqav = bf_get(cfg_cqav, mbx_sli4_parameters); 12024 11539 sli4_params->wqsize = bf_get(cfg_wqsize, mbx_sli4_parameters); 12025 11540 sli4_params->bv1s = bf_get(cfg_bv1s, mbx_sli4_parameters); 11541 + sli4_params->pls = bf_get(cfg_pvl, mbx_sli4_parameters); 12026 11542 sli4_params->sgl_pages_max = bf_get(cfg_sgl_page_cnt, 12027 11543 mbx_sli4_parameters); 12028 11544 sli4_params->wqpcnt = bf_get(cfg_wqpcnt, mbx_sli4_parameters); ··· 12075 11589 } 12076 11590 12077 11591 /* If the NVME FC4 type is enabled, scale the sg_seg_cnt to 12078 - * accommodate 512K and 1M IOs in a single nvme buf and supply 12079 - * enough NVME LS iocb buffers for larger connectivity counts. 11592 + * accommodate 512K and 1M IOs in a single nvme buf. 12080 11593 */ 12081 - if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) { 11594 + if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) 12082 11595 phba->cfg_sg_seg_cnt = LPFC_MAX_NVME_SEG_CNT; 12083 - phba->cfg_iocb_cnt = 5; 12084 - } 12085 11596 12086 11597 /* Only embed PBDE for if_type 6, PBDE support requires xib be set */ 12087 11598 if ((bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) != ··· 12795 12312 } 12796 12313 12797 12314 12798 - static void 12315 + static int 12799 12316 lpfc_log_write_firmware_error(struct lpfc_hba *phba, uint32_t offset, 12800 12317 uint32_t magic_number, uint32_t ftype, uint32_t fid, uint32_t fsize, 12801 12318 const struct firmware *fw) 12802 12319 { 12803 - if ((offset == ADD_STATUS_FW_NOT_SUPPORTED) || 12804 - (phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC && 12805 - magic_number != MAGIC_NUMER_G6) || 12806 - (phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC && 12807 - magic_number != MAGIC_NUMER_G7)) 12808 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 12809 - "3030 This firmware version is not supported on " 12810 - "this HBA model. 
Device:%x Magic:%x Type:%x " 12811 - "ID:%x Size %d %zd\n", 12812 - phba->pcidev->device, magic_number, ftype, fid, 12813 - fsize, fw->size); 12814 - else 12815 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 12816 - "3022 FW Download failed. Device:%x Magic:%x Type:%x " 12817 - "ID:%x Size %d %zd\n", 12818 - phba->pcidev->device, magic_number, ftype, fid, 12819 - fsize, fw->size); 12820 - } 12320 + int rc; 12821 12321 12322 + /* Three cases: (1) FW was not supported on the detected adapter. 12323 + * (2) FW update has been locked out administratively. 12324 + * (3) Some other error during FW update. 12325 + * In each case, an unmaskable message is written to the console 12326 + * for admin diagnosis. 12327 + */ 12328 + if (offset == ADD_STATUS_FW_NOT_SUPPORTED || 12329 + (phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC && 12330 + magic_number != MAGIC_NUMBER_G6) || 12331 + (phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC && 12332 + magic_number != MAGIC_NUMBER_G7)) { 12333 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 12334 + "3030 This firmware version is not supported on" 12335 + " this HBA model. Device:%x Magic:%x Type:%x " 12336 + "ID:%x Size %d %zd\n", 12337 + phba->pcidev->device, magic_number, ftype, fid, 12338 + fsize, fw->size); 12339 + rc = -EINVAL; 12340 + } else if (offset == ADD_STATUS_FW_DOWNLOAD_HW_DISABLED) { 12341 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 12342 + "3021 Firmware downloads have been prohibited " 12343 + "by a system configuration setting on " 12344 + "Device:%x Magic:%x Type:%x ID:%x Size %d " 12345 + "%zd\n", 12346 + phba->pcidev->device, magic_number, ftype, fid, 12347 + fsize, fw->size); 12348 + rc = -EACCES; 12349 + } else { 12350 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 12351 + "3022 FW Download failed. 
Add Status x%x " 12352 + "Device:%x Magic:%x Type:%x ID:%x Size %d " 12353 + "%zd\n", 12354 + offset, phba->pcidev->device, magic_number, 12355 + ftype, fid, fsize, fw->size); 12356 + rc = -EIO; 12357 + } 12358 + return rc; 12359 + } 12822 12360 12823 12361 /** 12824 12362 * lpfc_write_firmware - attempt to write a firmware image to the port 12825 12363 * @fw: pointer to firmware image returned from request_firmware. 12826 - * @phba: pointer to lpfc hba data structure. 12364 + * @context: pointer to firmware image returned from request_firmware. 12365 + * @ret: return value this routine provides to the caller. 12827 12366 * 12828 12367 **/ 12829 12368 static void ··· 12914 12409 rc = lpfc_wr_object(phba, &dma_buffer_list, 12915 12410 (fw->size - offset), &offset); 12916 12411 if (rc) { 12917 - lpfc_log_write_firmware_error(phba, offset, 12918 - magic_number, ftype, fid, fsize, fw); 12412 + rc = lpfc_log_write_firmware_error(phba, offset, 12413 + magic_number, 12414 + ftype, 12415 + fid, 12416 + fsize, 12417 + fw); 12919 12418 goto release_out; 12920 12419 } 12921 12420 } ··· 12939 12430 } 12940 12431 release_firmware(fw); 12941 12432 out: 12942 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 12943 - "3024 Firmware update done: %d.\n", rc); 12944 - return; 12433 + if (rc < 0) 12434 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 12435 + "3062 Firmware update error, status %d.\n", rc); 12436 + else 12437 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 12438 + "3024 Firmware update success: size %d.\n", rc); 12945 12439 } 12946 12440 12947 12441 /** ··· 13063 12551 phba->pport = NULL; 13064 12552 lpfc_stop_port(phba); 13065 12553 12554 + /* Init cpu_map array */ 12555 + lpfc_cpu_map_array_init(phba); 12556 + 12557 + /* Init hba_eq_hdl array */ 12558 + lpfc_hba_eq_hdl_array_init(phba); 12559 + 13066 12560 /* Configure and enable interrupt */ 13067 12561 intr_mode = lpfc_sli4_enable_intr(phba, cfg_mode); 13068 12562 if (intr_mode == LPFC_INTR_ERROR) { ··· 13149 12631 13150 12632 /* 
Enable RAS FW log support */ 13151 12633 lpfc_sli4_ras_setup(phba); 12634 + 12635 + INIT_LIST_HEAD(&phba->poll_list); 12636 + cpuhp_state_add_instance_nocalls(lpfc_cpuhp_state, &phba->cpuhp); 13152 12637 13153 12638 return 0; 13154 12639 ··· 13865 13344 phba->cfg_fof = 1; 13866 13345 } else { 13867 13346 phba->cfg_fof = 0; 13868 - if (phba->device_data_mem_pool) 13869 - mempool_destroy(phba->device_data_mem_pool); 13347 + mempool_destroy(phba->device_data_mem_pool); 13870 13348 phba->device_data_mem_pool = NULL; 13871 13349 } 13872 13350 ··· 13970 13450 /* Initialize in case vector mapping is needed */ 13971 13451 lpfc_present_cpu = num_present_cpus(); 13972 13452 13453 + error = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, 13454 + "lpfc/sli4:online", 13455 + lpfc_cpu_online, lpfc_cpu_offline); 13456 + if (error < 0) 13457 + goto cpuhp_failure; 13458 + lpfc_cpuhp_state = error; 13459 + 13973 13460 error = pci_register_driver(&lpfc_driver); 13974 - if (error) { 13975 - fc_release_transport(lpfc_transport_template); 13976 - fc_release_transport(lpfc_vport_transport_template); 13977 - } 13461 + if (error) 13462 + goto unwind; 13463 + 13464 + return error; 13465 + 13466 + unwind: 13467 + cpuhp_remove_multi_state(lpfc_cpuhp_state); 13468 + cpuhp_failure: 13469 + fc_release_transport(lpfc_transport_template); 13470 + fc_release_transport(lpfc_vport_transport_template); 13978 13471 13979 13472 return error; 13980 13473 } ··· 14004 13471 { 14005 13472 misc_deregister(&lpfc_mgmt_dev); 14006 13473 pci_unregister_driver(&lpfc_driver); 13474 + cpuhp_remove_multi_state(lpfc_cpuhp_state); 14007 13475 fc_release_transport(lpfc_transport_template); 14008 13476 fc_release_transport(lpfc_vport_transport_template); 14009 13477 idr_destroy(&lpfc_hba_index);
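The `lpfc_init()` changes above register a CPU-hotplug multi-state callback before `pci_register_driver()` and, on failure, unwind in reverse order through `unwind:` and `cpuhp_failure:` labels. The goto-based reverse-teardown idiom can be sketched in isolation; the setup/teardown functions and the `state` tracker below are illustrative stand-ins, not the lpfc API:

```c
#include <assert.h>

/* Illustrative stand-ins for the real setup steps; each records its
 * completion in a state bitmask so the unwind can be observed. */
static int state;

static int setup_hotplug(void)  { state |= 1; return 0; }
static void undo_hotplug(void)  { state &= ~1; }
static int setup_pci(int fail)  { if (fail) return -1; state |= 2; return 0; }

/* Mirrors the goto-based unwind in lpfc_init(): each failure label
 * undoes exactly the steps that already succeeded, in reverse order. */
static int module_init_sketch(int fail_pci)
{
	int rc;

	rc = setup_hotplug();
	if (rc < 0)
		goto hotplug_failure;

	rc = setup_pci(fail_pci);
	if (rc)
		goto unwind;

	return 0;

unwind:
	undo_hotplug();
hotplug_failure:
	return rc;
}
```

The matching `lpfc_exit()` change tears the same steps down in reverse registration order, which is what keeps the labels honest: a label only ever undoes work that is guaranteed to have completed.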
+17
drivers/scsi/lpfc/lpfc_logmsg.h
··· 46 46 #define LOG_NVME_IOERR 0x00800000 /* NVME IO Error events. */ 47 47 #define LOG_ALL_MSG 0xffffffff /* LOG all messages */ 48 48 49 + /* generate message by verbose log setting or severity */ 50 + #define lpfc_vlog_msg(vport, level, mask, fmt, arg...) \ 51 + { if (((mask) & (vport)->cfg_log_verbose) || (level[1] <= '4')) \ 52 + dev_printk(level, &((vport)->phba->pcidev)->dev, "%d:(%d):" \ 53 + fmt, (vport)->phba->brd_no, vport->vpi, ##arg); } 54 + 55 + #define lpfc_log_msg(phba, level, mask, fmt, arg...) \ 56 + do { \ 57 + { uint32_t log_verbose = (phba)->pport ? \ 58 + (phba)->pport->cfg_log_verbose : \ 59 + (phba)->cfg_log_verbose; \ 60 + if (((mask) & log_verbose) || (level[1] <= '4')) \ 61 + dev_printk(level, &((phba)->pcidev)->dev, "%d:" \ 62 + fmt, phba->brd_no, ##arg); \ 63 + } \ 64 + } while (0) 65 + 49 66 #define lpfc_printf_vlog(vport, level, mask, fmt, arg...) \ 50 67 do { \ 51 68 { if (((mask) & (vport)->cfg_log_verbose) || (level[1] <= '3')) \
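The new `lpfc_vlog_msg`/`lpfc_log_msg` macros gate on `level[1] <= '4'`: kernel log-level prefixes such as `KERN_ERR` are a `KERN_SOH` byte (`"\001"`) followed by an ASCII severity digit, so comparing the second character against `'4'` (warning) lets warning-or-worse messages bypass the verbose mask, while info/debug messages still require the matching mask bit. A minimal model of that predicate (the mask values below are arbitrary):

```c
#include <assert.h>

/* Kernel log-level prefixes are a SOH byte followed by an ASCII digit,
 * so level[1] is the severity: '0' (emerg) through '7' (debug). */
#define KERN_SOH   "\001"
#define KERN_ERR   KERN_SOH "3"
#define KERN_WARN  KERN_SOH "4"
#define KERN_INFO  KERN_SOH "6"

/* Same shape as the lpfc_log_msg condition: emit if the message's mask
 * bit is enabled in the verbose setting, or if severity is warning or
 * worse (digit characters compare in severity order). */
static int should_log(const char *level, unsigned int mask,
		      unsigned int verbose)
{
	return (mask & verbose) || level[1] <= '4';
}
```

Note the older `lpfc_printf_vlog` macro further down uses `'3'` as its cutoff (error or worse), so the new macros are deliberately one notch more permissive.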
+1
drivers/scsi/lpfc/lpfc_mbox.c
··· 515 515 516 516 if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC || 517 517 phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) && 518 + !(phba->sli4_hba.pc_sli4_params.pls) && 518 519 mb->un.varInitLnk.link_flags & FLAGS_TOPOLOGY_MODE_LOOP) { 519 520 mb->un.varInitLnk.link_flags = FLAGS_TOPOLOGY_MODE_PT_PT; 520 521 phba->cfg_topology = FLAGS_TOPOLOGY_MODE_PT_PT;
-3
drivers/scsi/lpfc/lpfc_mem.c
··· 230 230 dma_pool_destroy(phba->lpfc_hrb_pool); 231 231 phba->lpfc_hrb_pool = NULL; 232 232 233 - dma_pool_destroy(phba->txrdy_payload_pool); 234 - phba->txrdy_payload_pool = NULL; 235 - 236 233 dma_pool_destroy(phba->lpfc_hbq_pool); 237 234 phba->lpfc_hbq_pool = NULL; 238 235
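Several hunks in this series (here and in the `lpfc_fof` cleanup) drop `if (pool)` guards before destroy calls, because `dma_pool_destroy()` and `mempool_destroy()` — like `kfree()` — treat a NULL argument as a no-op. A destroy routine written in that NULL-safe style, with a counter added purely so the behavior is observable:

```c
#include <assert.h>
#include <stdlib.h>

static int destroyed;

/* NULL-safe destroy in the style of dma_pool_destroy(): NULL is a
 * no-op, so callers never need an "if (pool)" guard. */
static void pool_destroy_sketch(void *pool)
{
	if (!pool)
		return;
	free(pool);
	destroyed++;
}
```

Pairing this with setting the pointer to NULL after the call (as the driver does with `phba->device_data_mem_pool = NULL;`) also makes a repeated teardown path harmlessly idempotent.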
+121 -28
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 279 279 lpfc_cancel_retry_delay_tmo(phba->pport, ndlp); 280 280 } 281 281 282 + /* lpfc_defer_pt2pt_acc - Complete SLI3 pt2pt processing on link up 283 + * @phba: pointer to lpfc hba data structure. 284 + * @link_mbox: pointer to CONFIG_LINK mailbox object 285 + * 286 + * This routine is only called if we are SLI3, direct connect pt2pt 287 + * mode and the remote NPort issues the PLOGI after link up. 288 + */ 289 + static void 290 + lpfc_defer_pt2pt_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *link_mbox) 291 + { 292 + LPFC_MBOXQ_t *login_mbox; 293 + MAILBOX_t *mb = &link_mbox->u.mb; 294 + struct lpfc_iocbq *save_iocb; 295 + struct lpfc_nodelist *ndlp; 296 + int rc; 297 + 298 + ndlp = link_mbox->ctx_ndlp; 299 + login_mbox = link_mbox->context3; 300 + save_iocb = login_mbox->context3; 301 + link_mbox->context3 = NULL; 302 + login_mbox->context3 = NULL; 303 + 304 + /* Check for CONFIG_LINK error */ 305 + if (mb->mbxStatus) { 306 + lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY, 307 + "4575 CONFIG_LINK fails pt2pt discovery: %x\n", 308 + mb->mbxStatus); 309 + mempool_free(login_mbox, phba->mbox_mem_pool); 310 + mempool_free(link_mbox, phba->mbox_mem_pool); 311 + lpfc_sli_release_iocbq(phba, save_iocb); 312 + return; 313 + } 314 + 315 + /* Now that CONFIG_LINK completed, and our SID is configured, 316 + * we can now proceed with sending the PLOGI ACC. 
317 + */ 318 + rc = lpfc_els_rsp_acc(link_mbox->vport, ELS_CMD_PLOGI, 319 + save_iocb, ndlp, login_mbox); 320 + if (rc) { 321 + lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY, 322 + "4576 PLOGI ACC fails pt2pt discovery: %x\n", 323 + rc); 324 + mempool_free(login_mbox, phba->mbox_mem_pool); 325 + } 326 + 327 + mempool_free(link_mbox, phba->mbox_mem_pool); 328 + lpfc_sli_release_iocbq(phba, save_iocb); 329 + } 330 + 282 331 static int 283 332 lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, 284 333 struct lpfc_iocbq *cmdiocb) ··· 340 291 IOCB_t *icmd; 341 292 struct serv_parm *sp; 342 293 uint32_t ed_tov; 343 - LPFC_MBOXQ_t *mbox; 294 + LPFC_MBOXQ_t *link_mbox; 295 + LPFC_MBOXQ_t *login_mbox; 296 + struct lpfc_iocbq *save_iocb; 344 297 struct ls_rjt stat; 345 298 uint32_t vid, flag; 346 - int rc; 299 + int rc, defer_acc; 347 300 348 301 memset(&stat, 0, sizeof (struct ls_rjt)); 349 302 pcmd = (struct lpfc_dmabuf *) cmdiocb->context2; ··· 394 343 else 395 344 ndlp->nlp_fcp_info |= CLASS3; 396 345 346 + defer_acc = 0; 397 347 ndlp->nlp_class_sup = 0; 398 348 if (sp->cls1.classValid) 399 349 ndlp->nlp_class_sup |= FC_COS_CLASS1; ··· 406 354 ndlp->nlp_class_sup |= FC_COS_CLASS4; 407 355 ndlp->nlp_maxframe = 408 356 ((sp->cmn.bbRcvSizeMsb & 0x0F) << 8) | sp->cmn.bbRcvSizeLsb; 409 - 410 357 /* if already logged in, do implicit logout */ 411 358 switch (ndlp->nlp_state) { 412 359 case NLP_STE_NPR_NODE: ··· 447 396 ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; 448 397 ndlp->nlp_flag &= ~NLP_FIRSTBURST; 449 398 399 + login_mbox = NULL; 400 + link_mbox = NULL; 401 + save_iocb = NULL; 402 + 450 403 /* Check for Nport to NPort pt2pt protocol */ 451 404 if ((vport->fc_flag & FC_PT2PT) && 452 405 !(vport->fc_flag & FC_PT2PT_PLOGI)) { ··· 478 423 if (phba->sli_rev == LPFC_SLI_REV4) 479 424 lpfc_issue_reg_vfi(vport); 480 425 else { 481 - mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 482 - if (mbox == NULL) 426 + defer_acc = 1; 427 + link_mbox = 
mempool_alloc(phba->mbox_mem_pool, 428 + GFP_KERNEL); 429 + if (!link_mbox) 483 430 goto out; 484 - lpfc_config_link(phba, mbox); 485 - mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; 486 - mbox->vport = vport; 487 - rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); 488 - if (rc == MBX_NOT_FINISHED) { 489 - mempool_free(mbox, phba->mbox_mem_pool); 431 + lpfc_config_link(phba, link_mbox); 432 + link_mbox->mbox_cmpl = lpfc_defer_pt2pt_acc; 433 + link_mbox->vport = vport; 434 + link_mbox->ctx_ndlp = ndlp; 435 + 436 + save_iocb = lpfc_sli_get_iocbq(phba); 437 + if (!save_iocb) 490 438 goto out; 491 - } 439 + /* Save info from cmd IOCB used in rsp */ 440 + memcpy((uint8_t *)save_iocb, (uint8_t *)cmdiocb, 441 + sizeof(struct lpfc_iocbq)); 492 442 } 493 443 494 444 lpfc_can_disctmo(vport); ··· 508 448 ndlp->nlp_flag |= NLP_SUPPRESS_RSP; 509 449 } 510 450 511 - mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 512 - if (!mbox) 451 + login_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 452 + if (!login_mbox) 513 453 goto out; 514 454 515 455 /* Registering an existing RPI behaves differently for SLI3 vs SLI4 */ ··· 517 457 lpfc_unreg_rpi(vport, ndlp); 518 458 519 459 rc = lpfc_reg_rpi(phba, vport->vpi, icmd->un.rcvels.remoteID, 520 - (uint8_t *) sp, mbox, ndlp->nlp_rpi); 521 - if (rc) { 522 - mempool_free(mbox, phba->mbox_mem_pool); 460 + (uint8_t *)sp, login_mbox, ndlp->nlp_rpi); 461 + if (rc) 523 462 goto out; 524 - } 525 463 526 464 /* ACC PLOGI rsp command needs to execute first, 527 - * queue this mbox command to be processed later. 465 + * queue this login_mbox command to be processed later. 528 466 */ 529 - mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login; 467 + login_mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login; 530 468 /* 531 - * mbox->ctx_ndlp = lpfc_nlp_get(ndlp) deferred until mailbox 469 + * login_mbox->ctx_ndlp = lpfc_nlp_get(ndlp) deferred until mailbox 532 470 * command issued in lpfc_cmpl_els_acc(). 
533 471 */ 534 - mbox->vport = vport; 472 + login_mbox->vport = vport; 535 473 spin_lock_irq(shost->host_lock); 536 474 ndlp->nlp_flag |= (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI); 537 475 spin_unlock_irq(shost->host_lock); ··· 542 484 * single discovery thread, this will cause a huge delay in 543 485 * discovery. Also this will cause multiple state machines 544 486 * running in parallel for this node. 487 + * This only applies to a fabric environment. 545 488 */ 546 - if (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE) { 489 + if ((ndlp->nlp_state == NLP_STE_PLOGI_ISSUE) && 490 + (vport->fc_flag & FC_FABRIC)) { 547 491 /* software abort outstanding PLOGI */ 548 492 lpfc_els_abort(phba, ndlp); 549 493 } ··· 564 504 stat.un.b.lsRjtRsnCode = LSRJT_INVALID_CMD; 565 505 stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE; 566 506 rc = lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, 567 - ndlp, mbox); 507 + ndlp, login_mbox); 568 508 if (rc) 569 - mempool_free(mbox, phba->mbox_mem_pool); 509 + mempool_free(login_mbox, phba->mbox_mem_pool); 570 510 return 1; 571 511 } 572 - rc = lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, mbox); 512 + if (defer_acc) { 513 + /* So the order here should be: 514 + * Issue CONFIG_LINK mbox 515 + * CONFIG_LINK cmpl 516 + * Issue PLOGI ACC 517 + * PLOGI ACC cmpl 518 + * Issue REG_LOGIN mbox 519 + */ 520 + 521 + /* Save the REG_LOGIN mbox for and rcv IOCB copy later */ 522 + link_mbox->context3 = login_mbox; 523 + login_mbox->context3 = save_iocb; 524 + 525 + /* Start the ball rolling by issuing CONFIG_LINK here */ 526 + rc = lpfc_sli_issue_mbox(phba, link_mbox, MBX_NOWAIT); 527 + if (rc == MBX_NOT_FINISHED) 528 + goto out; 529 + return 1; 530 + } 531 + 532 + rc = lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, login_mbox); 573 533 if (rc) 574 - mempool_free(mbox, phba->mbox_mem_pool); 534 + mempool_free(login_mbox, phba->mbox_mem_pool); 575 535 return 1; 576 536 out: 537 + if (defer_acc) 538 + lpfc_printf_log(phba, KERN_ERR, LOG_DISCOVERY, 
539 + "4577 pt2pt discovery failure: %p %p %p\n", 540 + save_iocb, link_mbox, login_mbox); 541 + if (save_iocb) 542 + lpfc_sli_release_iocbq(phba, save_iocb); 543 + if (link_mbox) 544 + mempool_free(link_mbox, phba->mbox_mem_pool); 545 + if (login_mbox) 546 + mempool_free(login_mbox, phba->mbox_mem_pool); 547 + 577 548 stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC; 578 549 stat.un.b.lsRjtRsnCodeExp = LSEXP_OUT_OF_RESOURCE; 579 550 lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp, NULL); ··· 2121 2030 if (bf_get_be32(prli_init, nvpr)) 2122 2031 ndlp->nlp_type |= NLP_NVME_INITIATOR; 2123 2032 2124 - if (phba->nsler && bf_get_be32(prli_nsler, nvpr)) 2033 + if (phba->nsler && bf_get_be32(prli_nsler, nvpr) && 2034 + bf_get_be32(prli_conf, nvpr)) 2035 + 2125 2036 ndlp->nlp_nvme_info |= NLP_NVME_NSLER; 2126 2037 else 2127 2038 ndlp->nlp_nvme_info &= ~NLP_NVME_NSLER;
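The deferred pt2pt ACC above sequences three asynchronous steps — CONFIG_LINK, then the PLOGI ACC, then REG_LOGIN — by stashing the next stage's objects in the previous command's `context3` and firing the follow-on from the completion handler. A heavily reduced model of that context-pointer chaining (the `cmd` struct and handler names are illustrative, not the driver's mailbox types):

```c
#include <assert.h>
#include <stddef.h>

/* Reduced model of the mailbox chaining in lpfc_rcv_plogi(): the first
 * command carries a pointer to the second in its context field, and
 * the completion handler retrieves it and launches stage two. */
struct cmd {
	void (*cmpl)(struct cmd *done);
	void *context;           /* next stage, like link_mbox->context3 */
	int  *log;
};

static void stage2_cmpl(struct cmd *done)
{
	*done->log |= 2;         /* e.g. the PLOGI ACC completing */
}

static void stage1_cmpl(struct cmd *done)
{
	struct cmd *next = done->context;

	*done->log |= 1;         /* e.g. CONFIG_LINK completing */
	done->context = NULL;    /* detach before handing off, as the
				  * driver does with context3 */
	next->cmpl(next);
}

static int run_chain(void)
{
	int log = 0;
	struct cmd second = { stage2_cmpl, NULL, &log };
	struct cmd first  = { stage1_cmpl, &second, &log };

	first.cmpl(&first);      /* models the CONFIG_LINK completion */
	return log;
}
```

The error path in the real code is the part this sketch omits: on any failure, every object still reachable through a context pointer must be freed exactly once, which is why the hunk above NULLs `context3` before handing each object off.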
+46 -39
drivers/scsi/lpfc/lpfc_nvme.c
··· 196 196 } 197 197 198 198 /** 199 + * lpfc_nvme_prep_abort_wqe - set up 'abort' work queue entry. 200 + * @pwqeq: Pointer to command iocb. 201 + * @xritag: Tag that uniqely identifies the local exchange resource. 202 + * @opt: Option bits - 203 + * bit 0 = inhibit sending abts on the link 204 + * 205 + * This function is called with hbalock held. 206 + **/ 207 + void 208 + lpfc_nvme_prep_abort_wqe(struct lpfc_iocbq *pwqeq, u16 xritag, u8 opt) 209 + { 210 + union lpfc_wqe128 *wqe = &pwqeq->wqe; 211 + 212 + /* WQEs are reused. Clear stale data and set key fields to 213 + * zero like ia, iaab, iaar, xri_tag, and ctxt_tag. 214 + */ 215 + memset(wqe, 0, sizeof(*wqe)); 216 + 217 + if (opt & INHIBIT_ABORT) 218 + bf_set(abort_cmd_ia, &wqe->abort_cmd, 1); 219 + /* Abort specified xri tag, with the mask deliberately zeroed */ 220 + bf_set(abort_cmd_criteria, &wqe->abort_cmd, T_XRI_TAG); 221 + 222 + bf_set(wqe_cmnd, &wqe->abort_cmd.wqe_com, CMD_ABORT_XRI_CX); 223 + 224 + /* Abort the IO associated with this outstanding exchange ID. */ 225 + wqe->abort_cmd.wqe_com.abort_tag = xritag; 226 + 227 + /* iotag for the wqe completion. */ 228 + bf_set(wqe_reqtag, &wqe->abort_cmd.wqe_com, pwqeq->iotag); 229 + 230 + bf_set(wqe_qosd, &wqe->abort_cmd.wqe_com, 1); 231 + bf_set(wqe_lenloc, &wqe->abort_cmd.wqe_com, LPFC_WQE_LENLOC_NONE); 232 + 233 + bf_set(wqe_cmd_type, &wqe->abort_cmd.wqe_com, OTHER_COMMAND); 234 + bf_set(wqe_wqec, &wqe->abort_cmd.wqe_com, 1); 235 + bf_set(wqe_cqid, &wqe->abort_cmd.wqe_com, LPFC_WQE_CQ_ID_DEFAULT); 236 + } 237 + 238 + /** 199 239 * lpfc_nvme_create_queue - 200 240 * @lpfc_pnvme: Pointer to the driver's nvme instance data 201 241 * @qidx: An cpu index used to affinitize IO queues and MSIX vectors. 
··· 1831 1791 struct lpfc_iocbq *abts_buf; 1832 1792 struct lpfc_iocbq *nvmereq_wqe; 1833 1793 struct lpfc_nvme_fcpreq_priv *freqpriv; 1834 - union lpfc_wqe128 *abts_wqe; 1835 1794 unsigned long flags; 1836 1795 int ret_val; 1837 1796 ··· 1951 1912 /* Ready - mark outstanding as aborted by driver. */ 1952 1913 nvmereq_wqe->iocb_flag |= LPFC_DRIVER_ABORTED; 1953 1914 1954 - /* Complete prepping the abort wqe and issue to the FW. */ 1955 - abts_wqe = &abts_buf->wqe; 1956 - 1957 - /* WQEs are reused. Clear stale data and set key fields to 1958 - * zero like ia, iaab, iaar, xri_tag, and ctxt_tag. 1959 - */ 1960 - memset(abts_wqe, 0, sizeof(*abts_wqe)); 1961 - bf_set(abort_cmd_criteria, &abts_wqe->abort_cmd, T_XRI_TAG); 1962 - 1963 - /* word 7 */ 1964 - bf_set(wqe_cmnd, &abts_wqe->abort_cmd.wqe_com, CMD_ABORT_XRI_CX); 1965 - bf_set(wqe_class, &abts_wqe->abort_cmd.wqe_com, 1966 - nvmereq_wqe->iocb.ulpClass); 1967 - 1968 - /* word 8 - tell the FW to abort the IO associated with this 1969 - * outstanding exchange ID. 1970 - */ 1971 - abts_wqe->abort_cmd.wqe_com.abort_tag = nvmereq_wqe->sli4_xritag; 1972 - 1973 - /* word 9 - this is the iotag for the abts_wqe completion. 
*/ 1974 - bf_set(wqe_reqtag, &abts_wqe->abort_cmd.wqe_com, 1975 - abts_buf->iotag); 1976 - 1977 - /* word 10 */ 1978 - bf_set(wqe_qosd, &abts_wqe->abort_cmd.wqe_com, 1); 1979 - bf_set(wqe_lenloc, &abts_wqe->abort_cmd.wqe_com, LPFC_WQE_LENLOC_NONE); 1980 - 1981 - /* word 11 */ 1982 - bf_set(wqe_cmd_type, &abts_wqe->abort_cmd.wqe_com, OTHER_COMMAND); 1983 - bf_set(wqe_wqec, &abts_wqe->abort_cmd.wqe_com, 1); 1984 - bf_set(wqe_cqid, &abts_wqe->abort_cmd.wqe_com, LPFC_WQE_CQ_ID_DEFAULT); 1915 + lpfc_nvme_prep_abort_wqe(abts_buf, nvmereq_wqe->sli4_xritag, 0); 1985 1916 1986 1917 /* ABTS WQE must go to the same WQ as the WQE to be aborted */ 1987 1918 abts_buf->iocb_flag |= LPFC_IO_NVME; ··· 2093 2084 lpfc_ncmd->flags &= ~LPFC_SBUF_BUMP_QDEPTH; 2094 2085 2095 2086 qp = lpfc_ncmd->hdwq; 2096 - if (lpfc_ncmd->flags & LPFC_SBUF_XBUSY) { 2087 + if (unlikely(lpfc_ncmd->flags & LPFC_SBUF_XBUSY)) { 2097 2088 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 2098 2089 "6310 XB release deferred for " 2099 2090 "ox_id x%x on reqtag x%x\n", ··· 2148 2139 */ 2149 2140 lpfc_nvme_template.max_sgl_segments = phba->cfg_nvme_seg_cnt + 1; 2150 2141 2151 - /* Advertise how many hw queues we support based on fcp_io_sched */ 2152 - if (phba->cfg_fcp_io_sched == LPFC_FCP_SCHED_BY_HDWQ) 2153 - lpfc_nvme_template.max_hw_queues = phba->cfg_hdw_queue; 2154 - else 2155 - lpfc_nvme_template.max_hw_queues = 2156 - phba->sli4_hba.num_present_cpu; 2142 + /* Advertise how many hw queues we support based on cfg_hdw_queue, 2143 + * which will not exceed cpu count. 2144 + */ 2145 + lpfc_nvme_template.max_hw_queues = phba->cfg_hdw_queue; 2157 2146 2158 2147 if (!IS_ENABLED(CONFIG_NVME_FC)) 2159 2148 return ret;
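The new `lpfc_nvme_prep_abort_wqe()` helper consolidates the duplicated abort-WQE setup (previously inlined here and in `lpfc_nvmet.c`) into one routine built from `bf_set()` calls. Those calls resolve to shift-and-mask updates of 32-bit words; a minimal stand-in with a made-up 4-bit field at bit 8 shows the mechanics:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative field layout, not an lpfc WQE definition: a 4-bit
 * field at bits 8..11 of a 32-bit word. */
#define FIELD_SHIFT 8
#define FIELD_MASK  0xfu

/* bf_set-style update: clear the field, then OR in the new value. */
static inline void bf_set_sketch(uint32_t *word, uint32_t val)
{
	*word = (*word & ~(FIELD_MASK << FIELD_SHIFT)) |
		((val & FIELD_MASK) << FIELD_SHIFT);
}

static inline uint32_t bf_get_sketch(uint32_t word)
{
	return (word >> FIELD_SHIFT) & FIELD_MASK;
}
```

Because every `bf_set()` is read-modify-write on a shared word, the helper's leading `memset()` of the whole WQE matters: it guarantees no stale bits from a previous use of the reused WQE survive into fields the setup does not touch.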
+30 -73
drivers/scsi/lpfc/lpfc_nvmet.c
··· 378 378 int cpu; 379 379 unsigned long iflag; 380 380 381 - if (ctxp->txrdy) { 382 - dma_pool_free(phba->txrdy_payload_pool, ctxp->txrdy, 383 - ctxp->txrdy_phys); 384 - ctxp->txrdy = NULL; 385 - ctxp->txrdy_phys = 0; 386 - } 387 - 388 381 if (ctxp->state == LPFC_NVMET_STE_FREE) { 389 382 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 390 383 "6411 NVMET free, already free IO x%x: %d %d\n", ··· 423 430 424 431 ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context; 425 432 ctxp->wqeq = NULL; 426 - ctxp->txrdy = NULL; 427 433 ctxp->offset = 0; 428 434 ctxp->phba = phba; 429 435 ctxp->size = size; ··· 1950 1958 uint32_t *payload; 1951 1959 uint32_t size, oxid, sid, rc; 1952 1960 1953 - fc_hdr = (struct fc_frame_header *)(nvmebuf->hbuf.virt); 1954 - oxid = be16_to_cpu(fc_hdr->fh_ox_id); 1955 1961 1956 - if (!phba->targetport) { 1962 + if (!nvmebuf || !phba->targetport) { 1957 1963 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 1958 - "6154 LS Drop IO x%x\n", oxid); 1964 + "6154 LS Drop IO\n"); 1959 1965 oxid = 0; 1960 1966 size = 0; 1961 1967 sid = 0; 1962 1968 ctxp = NULL; 1963 1969 goto dropit; 1964 1970 } 1971 + 1972 + fc_hdr = (struct fc_frame_header *)(nvmebuf->hbuf.virt); 1973 + oxid = be16_to_cpu(fc_hdr->fh_ox_id); 1965 1974 1966 1975 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 1967 1976 payload = (uint32_t *)(nvmebuf->dbuf.virt); ··· 2319 2326 ctxp->state, ctxp->entry_cnt, ctxp->oxid); 2320 2327 } 2321 2328 ctxp->wqeq = NULL; 2322 - ctxp->txrdy = NULL; 2323 2329 ctxp->offset = 0; 2324 2330 ctxp->phba = phba; 2325 2331 ctxp->size = size; ··· 2393 2401 d_buf = piocb->context2; 2394 2402 nvmebuf = container_of(d_buf, struct hbq_dmabuf, dbuf); 2395 2403 2404 + if (!nvmebuf) { 2405 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 2406 + "3015 LS Drop IO\n"); 2407 + return; 2408 + } 2396 2409 if (phba->nvmet_support == 0) { 2397 2410 lpfc_in_buf_free(phba, &nvmebuf->dbuf); 2398 2411 return; ··· 2426 2429 uint64_t isr_timestamp, 2427 2430 
uint8_t cqflag) 2428 2431 { 2432 + if (!nvmebuf) { 2433 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 2434 + "3167 NVMET FCP Drop IO\n"); 2435 + return; 2436 + } 2429 2437 if (phba->nvmet_support == 0) { 2430 2438 lpfc_rq_buf_free(phba, &nvmebuf->hbuf); 2431 2439 return; ··· 2597 2595 struct scatterlist *sgel; 2598 2596 union lpfc_wqe128 *wqe; 2599 2597 struct ulp_bde64 *bde; 2600 - uint32_t *txrdy; 2601 2598 dma_addr_t physaddr; 2602 2599 int i, cnt; 2603 2600 int do_pbde; ··· 2758 2757 &lpfc_treceive_cmd_template.words[3], 2759 2758 sizeof(uint32_t) * 9); 2760 2759 2761 - /* Words 0 - 2 : The first sg segment */ 2762 - txrdy = dma_pool_alloc(phba->txrdy_payload_pool, 2763 - GFP_KERNEL, &physaddr); 2764 - if (!txrdy) { 2765 - lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 2766 - "6041 Bad txrdy buffer: oxid x%x\n", 2767 - ctxp->oxid); 2768 - return NULL; 2769 - } 2770 - ctxp->txrdy = txrdy; 2771 - ctxp->txrdy_phys = physaddr; 2772 - wqe->fcp_treceive.bde.tus.f.bdeFlags = BUFF_TYPE_BDE_64; 2773 - wqe->fcp_treceive.bde.tus.f.bdeSize = TXRDY_PAYLOAD_LEN; 2774 - wqe->fcp_treceive.bde.addrLow = 2775 - cpu_to_le32(putPaddrLow(physaddr)); 2776 - wqe->fcp_treceive.bde.addrHigh = 2777 - cpu_to_le32(putPaddrHigh(physaddr)); 2760 + /* Words 0 - 2 : First SGE is skipped, set invalid BDE type */ 2761 + wqe->fcp_treceive.bde.tus.f.bdeFlags = LPFC_SGE_TYPE_SKIP; 2762 + wqe->fcp_treceive.bde.tus.f.bdeSize = 0; 2763 + wqe->fcp_treceive.bde.addrLow = 0; 2764 + wqe->fcp_treceive.bde.addrHigh = 0; 2778 2765 2779 2766 /* Word 4 */ 2780 2767 wqe->fcp_treceive.relative_offset = ctxp->offset; ··· 2797 2808 /* Word 12 */ 2798 2809 wqe->fcp_tsend.fcp_data_len = rsp->transfer_length; 2799 2810 2800 - /* Setup 1 TXRDY and 1 SKIP SGE */ 2801 - txrdy[0] = 0; 2802 - txrdy[1] = cpu_to_be32(rsp->transfer_length); 2803 - txrdy[2] = 0; 2804 - 2805 - sgl->addr_hi = putPaddrHigh(physaddr); 2806 - sgl->addr_lo = putPaddrLow(physaddr); 2811 + /* Setup 2 SKIP SGEs */ 2812 + sgl->addr_hi = 0; 2813 + 
sgl->addr_lo = 0; 2807 2814 sgl->word2 = 0; 2808 - bf_set(lpfc_sli4_sge_type, sgl, LPFC_SGE_TYPE_DATA); 2815 + bf_set(lpfc_sli4_sge_type, sgl, LPFC_SGE_TYPE_SKIP); 2809 2816 sgl->word2 = cpu_to_le32(sgl->word2); 2810 - sgl->sge_len = cpu_to_le32(TXRDY_PAYLOAD_LEN); 2817 + sgl->sge_len = 0; 2811 2818 sgl++; 2812 2819 sgl->addr_hi = 0; 2813 2820 sgl->addr_lo = 0; ··· 3224 3239 { 3225 3240 struct lpfc_nvmet_tgtport *tgtp; 3226 3241 struct lpfc_iocbq *abts_wqeq; 3227 - union lpfc_wqe128 *abts_wqe; 3228 3242 struct lpfc_nodelist *ndlp; 3229 3243 unsigned long flags; 3244 + u8 opt; 3230 3245 int rc; 3231 3246 3232 3247 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; ··· 3265 3280 return 0; 3266 3281 } 3267 3282 abts_wqeq = ctxp->abort_wqeq; 3268 - abts_wqe = &abts_wqeq->wqe; 3269 3283 ctxp->state = LPFC_NVMET_STE_ABORT; 3284 + opt = (ctxp->flag & LPFC_NVMET_ABTS_RCV) ? INHIBIT_ABORT : 0; 3270 3285 spin_unlock_irqrestore(&ctxp->ctxlock, flags); 3271 3286 3272 3287 /* Announce entry to new IO submit field. */ ··· 3312 3327 /* Ready - mark outstanding as aborted by driver. */ 3313 3328 abts_wqeq->iocb_flag |= LPFC_DRIVER_ABORTED; 3314 3329 3315 - /* WQEs are reused. Clear stale data and set key fields to 3316 - * zero like ia, iaab, iaar, xri_tag, and ctxt_tag. 3317 - */ 3318 - memset(abts_wqe, 0, sizeof(*abts_wqe)); 3319 - 3320 - /* word 3 */ 3321 - bf_set(abort_cmd_criteria, &abts_wqe->abort_cmd, T_XRI_TAG); 3322 - 3323 - /* word 7 */ 3324 - bf_set(wqe_ct, &abts_wqe->abort_cmd.wqe_com, 0); 3325 - bf_set(wqe_cmnd, &abts_wqe->abort_cmd.wqe_com, CMD_ABORT_XRI_CX); 3326 - 3327 - /* word 8 - tell the FW to abort the IO associated with this 3328 - * outstanding exchange ID. 3329 - */ 3330 - abts_wqe->abort_cmd.wqe_com.abort_tag = ctxp->wqeq->sli4_xritag; 3331 - 3332 - /* word 9 - this is the iotag for the abts_wqe completion. 
*/ 3333 - bf_set(wqe_reqtag, &abts_wqe->abort_cmd.wqe_com, 3334 - abts_wqeq->iotag); 3335 - 3336 - /* word 10 */ 3337 - bf_set(wqe_qosd, &abts_wqe->abort_cmd.wqe_com, 1); 3338 - bf_set(wqe_lenloc, &abts_wqe->abort_cmd.wqe_com, LPFC_WQE_LENLOC_NONE); 3339 - 3340 - /* word 11 */ 3341 - bf_set(wqe_cmd_type, &abts_wqe->abort_cmd.wqe_com, OTHER_COMMAND); 3342 - bf_set(wqe_wqec, &abts_wqe->abort_cmd.wqe_com, 1); 3343 - bf_set(wqe_cqid, &abts_wqe->abort_cmd.wqe_com, LPFC_WQE_CQ_ID_DEFAULT); 3330 + lpfc_nvme_prep_abort_wqe(abts_wqeq, ctxp->wqeq->sli4_xritag, opt); 3344 3331 3345 3332 /* ABTS WQE must go to the same WQ as the WQE to be aborted */ 3346 3333 abts_wqeq->hba_wqidx = ctxp->wqeq->hba_wqidx; 3347 3334 abts_wqeq->wqe_cmpl = lpfc_nvmet_sol_fcp_abort_cmp; 3348 - abts_wqeq->iocb_cmpl = 0; 3335 + abts_wqeq->iocb_cmpl = NULL; 3349 3336 abts_wqeq->iocb_flag |= LPFC_IO_NVME; 3350 3337 abts_wqeq->context2 = ctxp; 3351 3338 abts_wqeq->vport = phba->pport; ··· 3452 3495 3453 3496 spin_lock_irqsave(&phba->hbalock, flags); 3454 3497 abts_wqeq->wqe_cmpl = lpfc_nvmet_xmt_ls_abort_cmp; 3455 - abts_wqeq->iocb_cmpl = 0; 3498 + abts_wqeq->iocb_cmpl = NULL; 3456 3499 abts_wqeq->iocb_flag |= LPFC_IO_NVME_LS; 3457 3500 rc = lpfc_sli4_issue_wqe(phba, ctxp->hdwq, abts_wqeq); 3458 3501 spin_unlock_irqrestore(&phba->hbalock, flags);
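The `lpfc_nvmet` hunk above also fixes a use-before-check: the old LS-receive path computed `oxid` by dereferencing `nvmebuf` before testing it for NULL, and the fix moves the dereference after the guard. The pattern in miniature (the `buf` struct is a reduced stand-in):

```c
#include <assert.h>
#include <stddef.h>

struct buf {
	unsigned short ox_id;
};

/* Corrected ordering, as in the patched lpfc_nvmet LS receive path:
 * validate the pointer first, dereference only afterwards. */
static int handle_sketch(const struct buf *b, unsigned short *oxid_out)
{
	if (!b)                  /* check before any dereference */
		return -1;
	*oxid_out = b->ox_id;    /* safe: b is known non-NULL here */
	return 0;
}
```

The original code "worked" only because the NULL case was rare; moving the load below the guard makes the drop path (`"6154 LS Drop IO"`) safe rather than a latent oops.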
-2
drivers/scsi/lpfc/lpfc_nvmet.h
··· 112 112 struct lpfc_hba *phba; 113 113 struct lpfc_iocbq *wqeq; 114 114 struct lpfc_iocbq *abort_wqeq; 115 - dma_addr_t txrdy_phys; 116 115 spinlock_t ctxlock; /* protect flag access */ 117 - uint32_t *txrdy; 118 116 uint32_t sid; 119 117 uint32_t offset; 120 118 uint16_t oxid;
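This header hunk drops the `txrdy`/`txrdy_phys` DMA buffer fields, which pairs with the lpfc_nvmet.c change above: the first SGE is now a zero-length `LPFC_SGE_TYPE_SKIP` entry instead of pointing at a TXRDY payload buffer. A hedged sketch of the SKIP-entry idea, with illustrative types:

```c
#include <assert.h>
#include <stdint.h>

enum sge_type { SGE_TYPE_DATA, SGE_TYPE_SKIP };

struct sge {
	enum sge_type type;
	uint32_t len;
	uint64_t addr;
};

/* The hardware walks the SGL in order; a zero-length SKIP entry keeps
 * the list shape stable without needing any DMA buffer behind it. */
static void init_leading_skip(struct sge *sgl)
{
	sgl[0].type = SGE_TYPE_SKIP;
	sgl[0].len = 0;
	sgl[0].addr = 0;
}
```

Once no SGE references a TXRDY buffer, the per-context allocation (and the fields tracking it) can be deleted outright, which is what the `-2` hunk does.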
+24 -19
drivers/scsi/lpfc/lpfc_scsi.c
··· 134 134 135 135 /** 136 136 * lpfc_update_stats - Update statistical data for the command completion 137 - * @phba: Pointer to HBA object. 137 + * @vport: The virtual port on which this call is executing. 138 138 * @lpfc_cmd: lpfc scsi command object pointer. 139 139 * 140 140 * This function is called when there is a command completion and this 141 141 * function updates the statistical data for the command completion. 142 142 **/ 143 143 static void 144 - lpfc_update_stats(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd) 144 + lpfc_update_stats(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd) 145 145 { 146 + struct lpfc_hba *phba = vport->phba; 146 147 struct lpfc_rport_data *rdata; 147 148 struct lpfc_nodelist *pnode; 148 149 struct scsi_cmnd *cmd = lpfc_cmd->pCmd; 149 150 unsigned long flags; 150 - struct Scsi_Host *shost = cmd->device->host; 151 - struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata; 151 + struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 152 152 unsigned long latency; 153 153 int i; 154 154 ··· 526 526 &qp->lpfc_abts_io_buf_list, list) { 527 527 if (psb->cur_iocbq.sli4_xritag == xri) { 528 528 list_del_init(&psb->list); 529 - psb->exch_busy = 0; 529 + psb->flags &= ~LPFC_SBUF_XBUSY; 530 530 psb->status = IOSTAT_SUCCESS; 531 531 if (psb->cur_iocbq.iocb_flag == LPFC_IO_NVME) { 532 532 qp->abts_nvme_io_bufs--; ··· 566 566 if (iocbq->sli4_xritag != xri) 567 567 continue; 568 568 psb = container_of(iocbq, struct lpfc_io_buf, cur_iocbq); 569 - psb->exch_busy = 0; 569 + psb->flags &= ~LPFC_SBUF_XBUSY; 570 570 spin_unlock_irqrestore(&phba->hbalock, iflag); 571 571 if (!list_empty(&pring->txq)) 572 572 lpfc_worker_wake_up(phba); ··· 786 786 psb->prot_seg_cnt = 0; 787 787 788 788 qp = psb->hdwq; 789 - if (psb->exch_busy) { 789 + if (psb->flags & LPFC_SBUF_XBUSY) { 790 790 spin_lock_irqsave(&qp->abts_io_buf_list_lock, iflag); 791 791 psb->pCmd = NULL; 792 792 list_add_tail(&psb->list, &qp->lpfc_abts_io_buf_list); ··· 
3812 3812 3813 3813 /* Sanity check on return of outstanding command */ 3814 3814 cmd = lpfc_cmd->pCmd; 3815 - if (!cmd) { 3815 + if (!cmd || !phba) { 3816 3816 lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 3817 3817 "2621 IO completion: Not an active IO\n"); 3818 3818 spin_unlock(&lpfc_cmd->buf_lock); ··· 3824 3824 phba->sli4_hba.hdwq[idx].scsi_cstat.io_cmpls++; 3825 3825 3826 3826 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 3827 - if (phba->cpucheck_on & LPFC_CHECK_SCSI_IO) { 3827 + if (unlikely(phba->cpucheck_on & LPFC_CHECK_SCSI_IO)) { 3828 3828 cpu = raw_smp_processor_id(); 3829 3829 if (cpu < LPFC_CHECK_CPU_CNT && phba->sli4_hba.hdwq) 3830 3830 phba->sli4_hba.hdwq[idx].cpucheck_cmpl_io[cpu]++; ··· 3835 3835 lpfc_cmd->result = (pIocbOut->iocb.un.ulpWord[4] & IOERR_PARAM_MASK); 3836 3836 lpfc_cmd->status = pIocbOut->iocb.ulpStatus; 3837 3837 /* pick up SLI4 exhange busy status from HBA */ 3838 - lpfc_cmd->exch_busy = pIocbOut->iocb_flag & LPFC_EXCHANGE_BUSY; 3838 + if (pIocbOut->iocb_flag & LPFC_EXCHANGE_BUSY) 3839 + lpfc_cmd->flags |= LPFC_SBUF_XBUSY; 3840 + else 3841 + lpfc_cmd->flags &= ~LPFC_SBUF_XBUSY; 3839 3842 3840 3843 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 3841 3844 if (lpfc_cmd->prot_data_type) { ··· 3872 3869 } 3873 3870 #endif 3874 3871 3875 - if (lpfc_cmd->status) { 3872 + if (unlikely(lpfc_cmd->status)) { 3876 3873 if (lpfc_cmd->status == IOSTAT_LOCAL_REJECT && 3877 3874 (lpfc_cmd->result & IOERR_DRVR_MASK)) 3878 3875 lpfc_cmd->status = IOSTAT_DRIVER_REJECT; ··· 4005 4002 scsi_get_resid(cmd)); 4006 4003 } 4007 4004 4008 - lpfc_update_stats(phba, lpfc_cmd); 4005 + lpfc_update_stats(vport, lpfc_cmd); 4009 4006 if (vport->cfg_max_scsicmpl_time && 4010 4007 time_after(jiffies, lpfc_cmd->start_time + 4011 4008 msecs_to_jiffies(vport->cfg_max_scsicmpl_time))) { ··· 4613 4610 err = lpfc_scsi_prep_dma_buf(phba, lpfc_cmd); 4614 4611 } 4615 4612 4616 - if (err == 2) { 4617 - cmnd->result = DID_ERROR << 16; 4618 - goto out_fail_command_release_buf; 4619 - } else if (err) { 
4613 + if (unlikely(err)) { 4614 + if (err == 2) { 4615 + cmnd->result = DID_ERROR << 16; 4616 + goto out_fail_command_release_buf; 4617 + } 4620 4618 goto out_host_busy_free_buf; 4621 4619 } 4622 4620 4623 4621 lpfc_scsi_prep_cmnd(vport, lpfc_cmd, ndlp); 4624 4622 4625 4623 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 4626 - if (phba->cpucheck_on & LPFC_CHECK_SCSI_IO) { 4624 + if (unlikely(phba->cpucheck_on & LPFC_CHECK_SCSI_IO)) { 4627 4625 cpu = raw_smp_processor_id(); 4628 4626 if (cpu < LPFC_CHECK_CPU_CNT) { 4629 4627 struct lpfc_sli4_hdw_queue *hdwq = ··· 4847 4843 ret_val = __lpfc_sli_issue_iocb(phba, LPFC_FCP_RING, 4848 4844 abtsiocb, 0); 4849 4845 } 4850 - /* no longer need the lock after this point */ 4851 - spin_unlock_irqrestore(&phba->hbalock, flags); 4852 4846 4853 4847 if (ret_val == IOCB_ERROR) { 4854 4848 /* Indicate the IO is not being aborted by the driver. */ 4855 4849 iocb->iocb_flag &= ~LPFC_DRIVER_ABORTED; 4856 4850 lpfc_cmd->waitq = NULL; 4857 4851 spin_unlock(&lpfc_cmd->buf_lock); 4852 + spin_unlock_irqrestore(&phba->hbalock, flags); 4858 4853 lpfc_sli_release_iocbq(phba, abtsiocb); 4859 4854 ret = FAILED; 4860 4855 goto out; 4861 4856 } 4862 4857 4858 + /* no longer need the lock after this point */ 4863 4859 spin_unlock(&lpfc_cmd->buf_lock); 4860 + spin_unlock_irqrestore(&phba->hbalock, flags); 4864 4861 4865 4862 if (phba->cfg_poll & DISABLE_FCP_RING_INT) 4866 4863 lpfc_sli_handle_fast_ring_event(phba,
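A recurring change in the lpfc_scsi.c hunks is retiring the dedicated `exch_busy` field in favor of an `LPFC_SBUF_XBUSY` bit in the buffer's existing `flags` word (the lpfc_sli.h hunk at the end removes the field itself). A small sketch of that set-and-clear pattern, using placeholder bit values rather than the driver's real ones:

```c
#include <assert.h>
#include <stdint.h>

#define EXCHANGE_BUSY 0x400 /* illustrative completion-status bit */
#define SBUF_XBUSY    0x1   /* buffer flag replacing exch_busy */

struct io_buf {
	uint16_t flags;
};

/* Mirror the hardware-reported exchange-busy state into the flag word.
 * Both branches matter: buffers are recycled, so the clear path keeps a
 * reused buffer from carrying a stale XBUSY bit. */
static void update_xbusy(struct io_buf *buf, uint32_t cmpl_flags)
{
	if (cmpl_flags & EXCHANGE_BUSY)
		buf->flags |= SBUF_XBUSY;
	else
		buf->flags &= ~SBUF_XBUSY;
}
```

Folding the state into `flags` also lets release paths test one word (`psb->flags & LPFC_SBUF_XBUSY`) alongside the buffer's other mode bits.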
+325 -66
drivers/scsi/lpfc/lpfc_sli.c
··· 87 87 struct lpfc_eqe *eqe); 88 88 static bool lpfc_sli4_mbox_completions_pending(struct lpfc_hba *phba); 89 89 static bool lpfc_sli4_process_missed_mbox_completions(struct lpfc_hba *phba); 90 + static struct lpfc_cqe *lpfc_sli4_cq_get(struct lpfc_queue *q); 91 + static void __lpfc_sli4_consume_cqe(struct lpfc_hba *phba, 92 + struct lpfc_queue *cq, 93 + struct lpfc_cqe *cqe); 90 94 91 95 static IOCB_t * 92 96 lpfc_get_iocb_from_iocbq(struct lpfc_iocbq *iocbq) ··· 471 467 } 472 468 473 469 static void 474 - lpfc_sli4_eq_flush(struct lpfc_hba *phba, struct lpfc_queue *eq) 470 + lpfc_sli4_eqcq_flush(struct lpfc_hba *phba, struct lpfc_queue *eq) 475 471 { 476 - struct lpfc_eqe *eqe; 477 - uint32_t count = 0; 472 + struct lpfc_eqe *eqe = NULL; 473 + u32 eq_count = 0, cq_count = 0; 474 + struct lpfc_cqe *cqe = NULL; 475 + struct lpfc_queue *cq = NULL, *childq = NULL; 476 + int cqid = 0; 478 477 479 478 /* walk all the EQ entries and drop on the floor */ 480 479 eqe = lpfc_sli4_eq_get(eq); 481 480 while (eqe) { 481 + /* Get the reference to the corresponding CQ */ 482 + cqid = bf_get_le32(lpfc_eqe_resource_id, eqe); 483 + cq = NULL; 484 + 485 + list_for_each_entry(childq, &eq->child_list, list) { 486 + if (childq->queue_id == cqid) { 487 + cq = childq; 488 + break; 489 + } 490 + } 491 + /* If CQ is valid, iterate through it and drop all the CQEs */ 492 + if (cq) { 493 + cqe = lpfc_sli4_cq_get(cq); 494 + while (cqe) { 495 + __lpfc_sli4_consume_cqe(phba, cq, cqe); 496 + cq_count++; 497 + cqe = lpfc_sli4_cq_get(cq); 498 + } 499 + /* Clear and re-arm the CQ */ 500 + phba->sli4_hba.sli4_write_cq_db(phba, cq, cq_count, 501 + LPFC_QUEUE_REARM); 502 + cq_count = 0; 503 + } 482 504 __lpfc_sli4_consume_eqe(phba, eq, eqe); 483 - count++; 505 + eq_count++; 484 506 eqe = lpfc_sli4_eq_get(eq); 485 507 } 486 508 487 509 /* Clear and re-arm the EQ */ 488 - phba->sli4_hba.sli4_write_eq_db(phba, eq, count, LPFC_QUEUE_REARM); 510 + phba->sli4_hba.sli4_write_eq_db(phba, eq, eq_count, 
LPFC_QUEUE_REARM); 489 511 } 490 512 491 513 static int 492 - lpfc_sli4_process_eq(struct lpfc_hba *phba, struct lpfc_queue *eq) 514 + lpfc_sli4_process_eq(struct lpfc_hba *phba, struct lpfc_queue *eq, 515 + uint8_t rearm) 493 516 { 494 517 struct lpfc_eqe *eqe; 495 518 int count = 0, consumed = 0; ··· 550 519 eq->queue_claimed = 0; 551 520 552 521 rearm_and_exit: 553 - /* Always clear and re-arm the EQ */ 554 - phba->sli4_hba.sli4_write_eq_db(phba, eq, consumed, LPFC_QUEUE_REARM); 522 + /* Always clear the EQ. */ 523 + phba->sli4_hba.sli4_write_eq_db(phba, eq, consumed, rearm); 555 524 556 525 return count; 557 526 } ··· 2557 2526 } else { 2558 2527 __lpfc_sli_rpi_release(vport, ndlp); 2559 2528 } 2529 + if (vport->load_flag & FC_UNLOADING) 2530 + lpfc_nlp_put(ndlp); 2560 2531 pmb->ctx_ndlp = NULL; 2561 2532 } 2562 2533 } ··· 2705 2672 lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI, 2706 2673 "(%d):0323 Unknown Mailbox command " 2707 2674 "x%x (x%x/x%x) Cmpl\n", 2708 - pmb->vport ? pmb->vport->vpi : 0, 2675 + pmb->vport ? pmb->vport->vpi : 2676 + LPFC_VPORT_UNKNOWN, 2709 2677 pmbox->mbxCommand, 2710 2678 lpfc_sli_config_mbox_subsys_get(phba, 2711 2679 pmb), ··· 2727 2693 "(%d):0305 Mbox cmd cmpl " 2728 2694 "error - RETRYing Data: x%x " 2729 2695 "(x%x/x%x) x%x x%x x%x\n", 2730 - pmb->vport ? pmb->vport->vpi : 0, 2696 + pmb->vport ? pmb->vport->vpi : 2697 + LPFC_VPORT_UNKNOWN, 2731 2698 pmbox->mbxCommand, 2732 2699 lpfc_sli_config_mbox_subsys_get(phba, 2733 2700 pmb), ··· 2736 2701 pmb), 2737 2702 pmbox->mbxStatus, 2738 2703 pmbox->un.varWords[0], 2739 - pmb->vport->port_state); 2704 + pmb->vport ? 
pmb->vport->port_state : 2705 + LPFC_VPORT_UNKNOWN); 2740 2706 pmbox->mbxStatus = 0; 2741 2707 pmbox->mbxOwner = OWN_HOST; 2742 2708 rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT); ··· 6203 6167 mbox->u.mqe.un.set_feature.feature = LPFC_SET_MDS_DIAGS; 6204 6168 mbox->u.mqe.un.set_feature.param_len = 8; 6205 6169 break; 6170 + case LPFC_SET_DUAL_DUMP: 6171 + bf_set(lpfc_mbx_set_feature_dd, 6172 + &mbox->u.mqe.un.set_feature, LPFC_ENABLE_DUAL_DUMP); 6173 + bf_set(lpfc_mbx_set_feature_ddquery, 6174 + &mbox->u.mqe.un.set_feature, 0); 6175 + mbox->u.mqe.un.set_feature.feature = LPFC_SET_DUAL_DUMP; 6176 + mbox->u.mqe.un.set_feature.param_len = 4; 6177 + break; 6206 6178 } 6207 6179 6208 6180 return; ··· 6228 6184 { 6229 6185 struct lpfc_ras_fwlog *ras_fwlog = &phba->ras_fwlog; 6230 6186 6231 - ras_fwlog->ras_active = false; 6187 + spin_lock_irq(&phba->hbalock); 6188 + ras_fwlog->state = INACTIVE; 6189 + spin_unlock_irq(&phba->hbalock); 6232 6190 6233 6191 /* Disable FW logging to host memory */ 6234 6192 writel(LPFC_CTL_PDEV_CTL_DDL_RAS, 6235 6193 phba->sli4_hba.conf_regs_memmap_p + LPFC_CTL_PDEV_CTL_OFFSET); 6194 + 6195 + /* Wait 10ms for firmware to stop using DMA buffer */ 6196 + usleep_range(10 * 1000, 20 * 1000); 6236 6197 } 6237 6198 6238 6199 /** ··· 6273 6224 ras_fwlog->lwpd.virt = NULL; 6274 6225 } 6275 6226 6276 - ras_fwlog->ras_active = false; 6227 + spin_lock_irq(&phba->hbalock); 6228 + ras_fwlog->state = INACTIVE; 6229 + spin_unlock_irq(&phba->hbalock); 6277 6230 } 6278 6231 6279 6232 /** ··· 6377 6326 goto disable_ras; 6378 6327 } 6379 6328 6380 - ras_fwlog->ras_active = true; 6329 + spin_lock_irq(&phba->hbalock); 6330 + ras_fwlog->state = ACTIVE; 6331 + spin_unlock_irq(&phba->hbalock); 6381 6332 mempool_free(pmb, phba->mbox_mem_pool); 6382 6333 6383 6334 return; ··· 6410 6357 LPFC_MBOXQ_t *mbox; 6411 6358 uint32_t len = 0, fwlog_buffsize, fwlog_entry_count; 6412 6359 int rc = 0; 6360 + 6361 + spin_lock_irq(&phba->hbalock); 6362 + ras_fwlog->state = 
INACTIVE; 6363 + spin_unlock_irq(&phba->hbalock); 6413 6364 6414 6365 fwlog_buffsize = (LPFC_RAS_MIN_BUFF_POST_SIZE * 6415 6366 phba->cfg_ras_fwlog_buffsize); ··· 6474 6417 mbx_fwlog->u.request.lwpd.addr_lo = putPaddrLow(ras_fwlog->lwpd.phys); 6475 6418 mbx_fwlog->u.request.lwpd.addr_hi = putPaddrHigh(ras_fwlog->lwpd.phys); 6476 6419 6420 + spin_lock_irq(&phba->hbalock); 6421 + ras_fwlog->state = REG_INPROGRESS; 6422 + spin_unlock_irq(&phba->hbalock); 6477 6423 mbox->vport = phba->pport; 6478 6424 mbox->mbox_cmpl = lpfc_sli4_ras_mbox_cmpl; 6479 6425 ··· 7208 7148 int 7209 7149 lpfc_sli4_hba_setup(struct lpfc_hba *phba) 7210 7150 { 7211 - int rc, i, cnt, len; 7151 + int rc, i, cnt, len, dd; 7212 7152 LPFC_MBOXQ_t *mboxq; 7213 7153 struct lpfc_mqe *mqe; 7214 7154 uint8_t *vpd; ··· 7459 7399 phba->sli3_options |= (LPFC_SLI3_NPIV_ENABLED | LPFC_SLI3_HBQ_ENABLED); 7460 7400 spin_unlock_irq(&phba->hbalock); 7461 7401 7402 + /* Always try to enable dual dump feature if we can */ 7403 + lpfc_set_features(phba, mboxq, LPFC_SET_DUAL_DUMP); 7404 + rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); 7405 + dd = bf_get(lpfc_mbx_set_feature_dd, &mboxq->u.mqe.un.set_feature); 7406 + if ((rc == MBX_SUCCESS) && (dd == LPFC_ENABLE_DUAL_DUMP)) 7407 + lpfc_printf_log(phba, KERN_ERR, LOG_SLI | LOG_INIT, 7408 + "6448 Dual Dump is enabled\n"); 7409 + else 7410 + lpfc_printf_log(phba, KERN_INFO, LOG_SLI | LOG_INIT, 7411 + "6447 Dual Dump Mailbox x%x (x%x/x%x) failed, " 7412 + "rc:x%x dd:x%x\n", 7413 + bf_get(lpfc_mqe_command, &mboxq->u.mqe), 7414 + lpfc_sli_config_mbox_subsys_get( 7415 + phba, mboxq), 7416 + lpfc_sli_config_mbox_opcode_get( 7417 + phba, mboxq), 7418 + rc, dd); 7462 7419 /* 7463 7420 * Allocate all resources (xri,rpi,vpi,vfi) now. Subsequent 7464 7421 * calls depends on these resources to complete port setup. 
··· 7600 7523 } 7601 7524 phba->sli4_hba.nvmet_xri_cnt = rc; 7602 7525 7603 - cnt = phba->cfg_iocb_cnt * 1024; 7604 - /* We need 1 iocbq for every SGL, for IO processing */ 7605 - cnt += phba->sli4_hba.nvmet_xri_cnt; 7526 + /* We allocate an iocbq for every receive context SGL. 7527 + * The additional allocation is for abort and ls handling. 7528 + */ 7529 + cnt = phba->sli4_hba.nvmet_xri_cnt + 7530 + phba->sli4_hba.max_cfg_param.max_xri; 7606 7531 } else { 7607 7532 /* update host common xri-sgl sizes and mappings */ 7608 7533 rc = lpfc_sli4_io_sgl_update(phba); ··· 7626 7547 rc = -ENODEV; 7627 7548 goto out_destroy_queue; 7628 7549 } 7629 - cnt = phba->cfg_iocb_cnt * 1024; 7550 + /* Each lpfc_io_buf job structure has an iocbq element. 7551 + * This cnt provides for abort, els, ct and ls requests. 7552 + */ 7553 + cnt = phba->sli4_hba.max_cfg_param.max_xri; 7630 7554 } 7631 7555 7632 7556 if (!phba->sli.iocbq_lookup) { 7633 7557 /* Initialize and populate the iocb list per host */ 7634 7558 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 7635 - "2821 initialize iocb list %d total %d\n", 7636 - phba->cfg_iocb_cnt, cnt); 7559 + "2821 initialize iocb list with %d entries\n", 7560 + cnt); 7637 7561 rc = lpfc_init_iocb_list(phba, cnt); 7638 7562 if (rc) { 7639 7563 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, ··· 7974 7892 7975 7893 if (mbox_pending) 7976 7894 /* process and rearm the EQ */ 7977 - lpfc_sli4_process_eq(phba, fpeq); 7895 + lpfc_sli4_process_eq(phba, fpeq, LPFC_QUEUE_REARM); 7978 7896 else 7979 7897 /* Always clear and re-arm the EQ */ 7980 7898 sli4_hba->sli4_write_eq_db(phba, fpeq, 0, LPFC_QUEUE_REARM); ··· 9046 8964 * @pring: Pointer to driver SLI ring object. 9047 8965 * @piocb: Pointer to address of newly added command iocb. 
9048 8966 * 9049 - * This function is called with hbalock held to add a command 8967 + * This function is called with hbalock held for SLI3 ports or 8968 + * the ring lock held for SLI4 ports to add a command 9050 8969 * iocb to the txq when SLI layer cannot submit the command iocb 9051 8970 * to the ring. 9052 8971 **/ ··· 9055 8972 __lpfc_sli_ringtx_put(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, 9056 8973 struct lpfc_iocbq *piocb) 9057 8974 { 9058 - lockdep_assert_held(&phba->hbalock); 8975 + if (phba->sli_rev == LPFC_SLI_REV4) 8976 + lockdep_assert_held(&pring->ring_lock); 8977 + else 8978 + lockdep_assert_held(&phba->hbalock); 9059 8979 /* Insert the caller's iocb in the txq tail for later processing. */ 9060 8980 list_add_tail(&piocb->list, &pring->txq); 9061 8981 } ··· 9949 9863 * __lpfc_sli_issue_iocb_s4 is used by other functions in the driver to issue 9950 9864 * an iocb command to an HBA with SLI-4 interface spec. 9951 9865 * 9952 - * This function is called with hbalock held. The function will return success 9866 + * This function is called with ringlock held. The function will return success 9953 9867 * after it successfully submit the iocb to firmware or after adding to the 9954 9868 * txq. 
9955 9869 **/ ··· 10139 10053 struct lpfc_iocbq *piocb, uint32_t flag) 10140 10054 { 10141 10055 struct lpfc_sli_ring *pring; 10056 + struct lpfc_queue *eq; 10142 10057 unsigned long iflags; 10143 10058 int rc; 10144 10059 10145 10060 if (phba->sli_rev == LPFC_SLI_REV4) { 10061 + eq = phba->sli4_hba.hdwq[piocb->hba_wqidx].hba_eq; 10062 + 10146 10063 pring = lpfc_sli4_calc_ring(phba, piocb); 10147 10064 if (unlikely(pring == NULL)) 10148 10065 return IOCB_ERROR; ··· 10153 10064 spin_lock_irqsave(&pring->ring_lock, iflags); 10154 10065 rc = __lpfc_sli_issue_iocb(phba, ring_number, piocb, flag); 10155 10066 spin_unlock_irqrestore(&pring->ring_lock, iflags); 10067 + 10068 + lpfc_sli4_poll_eq(eq, LPFC_POLL_FASTPATH); 10156 10069 } else { 10157 10070 /* For now, SLI2/3 will still use hbalock */ 10158 10071 spin_lock_irqsave(&phba->hbalock, iflags); ··· 10769 10678 set_bit(LPFC_DATA_READY, &phba->data_flags); 10770 10679 } 10771 10680 prev_pring_flag = pring->flag; 10772 - spin_lock_irq(&pring->ring_lock); 10681 + spin_lock(&pring->ring_lock); 10773 10682 list_for_each_entry_safe(iocb, next_iocb, 10774 10683 &pring->txq, list) { 10775 10684 if (iocb->vport != vport) 10776 10685 continue; 10777 10686 list_move_tail(&iocb->list, &completions); 10778 10687 } 10779 - spin_unlock_irq(&pring->ring_lock); 10688 + spin_unlock(&pring->ring_lock); 10780 10689 list_for_each_entry_safe(iocb, next_iocb, 10781 10690 &pring->txcmplq, list) { 10782 10691 if (iocb->vport != vport) ··· 11141 11050 irsp->ulpStatus, irsp->un.ulpWord[4]); 11142 11051 11143 11052 spin_unlock_irq(&phba->hbalock); 11144 - if (irsp->ulpStatus == IOSTAT_LOCAL_REJECT && 11145 - irsp->un.ulpWord[4] == IOERR_SLI_ABORTED) 11146 - lpfc_sli_release_iocbq(phba, abort_iocb); 11147 11053 } 11148 11054 release_iocb: 11149 11055 lpfc_sli_release_iocbq(phba, cmdiocb); ··· 11824 11736 !(cmdiocbq->iocb_flag & LPFC_IO_LIBDFC)) { 11825 11737 lpfc_cmd = container_of(cmdiocbq, struct lpfc_io_buf, 11826 11738 cur_iocbq); 11827 - 
lpfc_cmd->exch_busy = rspiocbq->iocb_flag & LPFC_EXCHANGE_BUSY; 11739 + if (rspiocbq && (rspiocbq->iocb_flag & LPFC_EXCHANGE_BUSY)) 11740 + lpfc_cmd->flags |= LPFC_SBUF_XBUSY; 11741 + else 11742 + lpfc_cmd->flags &= ~LPFC_SBUF_XBUSY; 11828 11743 } 11829 11744 11830 11745 pdone_q = cmdiocbq->context_un.wait_queue; ··· 13249 13158 phba->sli.sli_flag &= ~LPFC_SLI_MBOX_ACTIVE; 13250 13159 /* Setting active mailbox pointer need to be in sync to flag clear */ 13251 13160 phba->sli.mbox_active = NULL; 13161 + if (bf_get(lpfc_trailer_consumed, mcqe)) 13162 + lpfc_sli4_mq_release(phba->sli4_hba.mbx_wq); 13252 13163 spin_unlock_irqrestore(&phba->hbalock, iflags); 13253 13164 /* Wake up worker thread to post the next pending mailbox command */ 13254 13165 lpfc_worker_wake_up(phba); 13166 + return workposted; 13167 + 13255 13168 out_no_mqe_complete: 13169 + spin_lock_irqsave(&phba->hbalock, iflags); 13256 13170 if (bf_get(lpfc_trailer_consumed, mcqe)) 13257 13171 lpfc_sli4_mq_release(phba->sli4_hba.mbx_wq); 13258 - return workposted; 13172 + spin_unlock_irqrestore(&phba->hbalock, iflags); 13173 + return false; 13259 13174 } 13260 13175 13261 13176 /** ··· 13314 13217 struct lpfc_sli_ring *pring = cq->pring; 13315 13218 int txq_cnt = 0; 13316 13219 int txcmplq_cnt = 0; 13317 - int fcp_txcmplq_cnt = 0; 13318 13220 13319 13221 /* Check for response status */ 13320 13222 if (unlikely(bf_get(lpfc_wcqe_c_status, wcqe))) { ··· 13335 13239 txcmplq_cnt++; 13336 13240 lpfc_printf_log(phba, KERN_ERR, LOG_SLI, 13337 13241 "0387 NO IOCBQ data: txq_cnt=%d iocb_cnt=%d " 13338 - "fcp_txcmplq_cnt=%d, els_txcmplq_cnt=%d\n", 13242 + "els_txcmplq_cnt=%d\n", 13339 13243 txq_cnt, phba->iocb_cnt, 13340 - fcp_txcmplq_cnt, 13341 13244 txcmplq_cnt); 13342 13245 return false; 13343 13246 } ··· 13687 13592 phba->sli4_hba.sli4_write_cq_db(phba, cq, consumed, 13688 13593 LPFC_QUEUE_NOARM); 13689 13594 consumed = 0; 13595 + cq->assoc_qp->q_flag |= HBA_EQ_DELAY_CHK; 13690 13596 } 13691 13597 13692 13598 if 
(count == LPFC_NVMET_CQ_NOTIFY) ··· 14316 14220 spin_lock_irqsave(&phba->hbalock, iflag); 14317 14221 if (phba->link_state < LPFC_LINK_DOWN) 14318 14222 /* Flush, clear interrupt, and rearm the EQ */ 14319 - lpfc_sli4_eq_flush(phba, fpeq); 14223 + lpfc_sli4_eqcq_flush(phba, fpeq); 14320 14224 spin_unlock_irqrestore(&phba->hbalock, iflag); 14321 14225 return IRQ_NONE; 14322 14226 } ··· 14326 14230 fpeq->last_cpu = raw_smp_processor_id(); 14327 14231 14328 14232 if (icnt > LPFC_EQD_ISR_TRIGGER && 14329 - phba->cfg_irq_chann == 1 && 14233 + fpeq->q_flag & HBA_EQ_DELAY_CHK && 14330 14234 phba->cfg_auto_imax && 14331 14235 fpeq->q_mode != LPFC_MAX_AUTO_EQ_DELAY && 14332 14236 phba->sli.sli_flag & LPFC_SLI_USE_EQDR) 14333 14237 lpfc_sli4_mod_hba_eq_delay(phba, fpeq, LPFC_MAX_AUTO_EQ_DELAY); 14334 14238 14335 14239 /* process and rearm the EQ */ 14336 - ecount = lpfc_sli4_process_eq(phba, fpeq); 14240 + ecount = lpfc_sli4_process_eq(phba, fpeq, LPFC_QUEUE_REARM); 14337 14241 14338 14242 if (unlikely(ecount == 0)) { 14339 14243 fpeq->EQ_no_entry++; ··· 14392 14296 14393 14297 return (hba_handled == true) ? 
IRQ_HANDLED : IRQ_NONE; 14394 14298 } /* lpfc_sli4_intr_handler */ 14299 + 14300 + void lpfc_sli4_poll_hbtimer(struct timer_list *t) 14301 + { 14302 + struct lpfc_hba *phba = from_timer(phba, t, cpuhp_poll_timer); 14303 + struct lpfc_queue *eq; 14304 + int i = 0; 14305 + 14306 + rcu_read_lock(); 14307 + 14308 + list_for_each_entry_rcu(eq, &phba->poll_list, _poll_list) 14309 + i += lpfc_sli4_poll_eq(eq, LPFC_POLL_SLOWPATH); 14310 + if (!list_empty(&phba->poll_list)) 14311 + mod_timer(&phba->cpuhp_poll_timer, 14312 + jiffies + msecs_to_jiffies(LPFC_POLL_HB)); 14313 + 14314 + rcu_read_unlock(); 14315 + } 14316 + 14317 + inline int lpfc_sli4_poll_eq(struct lpfc_queue *eq, uint8_t path) 14318 + { 14319 + struct lpfc_hba *phba = eq->phba; 14320 + int i = 0; 14321 + 14322 + /* 14323 + * Unlocking an irq is one of the entry point to check 14324 + * for re-schedule, but we are good for io submission 14325 + * path as midlayer does a get_cpu to glue us in. Flush 14326 + * out the invalidate queue so we can see the updated 14327 + * value for flag. 14328 + */ 14329 + smp_rmb(); 14330 + 14331 + if (READ_ONCE(eq->mode) == LPFC_EQ_POLL) 14332 + /* We will not likely get the completion for the caller 14333 + * during this iteration but i guess that's fine. 14334 + * Future io's coming on this eq should be able to 14335 + * pick it up. As for the case of single io's, they 14336 + * will be handled through a sched from polling timer 14337 + * function which is currently triggered every 1msec. 
14338 + */ 14339 + i = lpfc_sli4_process_eq(phba, eq, LPFC_QUEUE_NOARM); 14340 + 14341 + return i; 14342 + } 14343 + 14344 + static inline void lpfc_sli4_add_to_poll_list(struct lpfc_queue *eq) 14345 + { 14346 + struct lpfc_hba *phba = eq->phba; 14347 + 14348 + if (list_empty(&phba->poll_list)) { 14349 + timer_setup(&phba->cpuhp_poll_timer, lpfc_sli4_poll_hbtimer, 0); 14350 + /* kickstart slowpath processing for this eq */ 14351 + mod_timer(&phba->cpuhp_poll_timer, 14352 + jiffies + msecs_to_jiffies(LPFC_POLL_HB)); 14353 + } 14354 + 14355 + list_add_rcu(&eq->_poll_list, &phba->poll_list); 14356 + synchronize_rcu(); 14357 + } 14358 + 14359 + static inline void lpfc_sli4_remove_from_poll_list(struct lpfc_queue *eq) 14360 + { 14361 + struct lpfc_hba *phba = eq->phba; 14362 + 14363 + /* Disable slowpath processing for this eq. Kick start the eq 14364 + * by RE-ARMING the eq's ASAP 14365 + */ 14366 + list_del_rcu(&eq->_poll_list); 14367 + synchronize_rcu(); 14368 + 14369 + if (list_empty(&phba->poll_list)) 14370 + del_timer_sync(&phba->cpuhp_poll_timer); 14371 + } 14372 + 14373 + void lpfc_sli4_cleanup_poll_list(struct lpfc_hba *phba) 14374 + { 14375 + struct lpfc_queue *eq, *next; 14376 + 14377 + list_for_each_entry_safe(eq, next, &phba->poll_list, _poll_list) 14378 + list_del(&eq->_poll_list); 14379 + 14380 + INIT_LIST_HEAD(&phba->poll_list); 14381 + synchronize_rcu(); 14382 + } 14383 + 14384 + static inline void 14385 + __lpfc_sli4_switch_eqmode(struct lpfc_queue *eq, uint8_t mode) 14386 + { 14387 + if (mode == eq->mode) 14388 + return; 14389 + /* 14390 + * currently this function is only called during a hotplug 14391 + * event and the cpu on which this function is executing 14392 + * is going offline. By now the hotplug has instructed 14393 + * the scheduler to remove this cpu from cpu active mask. 14394 + * So we don't need to work about being put aside by the 14395 + * scheduler for a high priority process. 
Yes, the inte- 14396 + * rrupts could come but they are known to retire ASAP. 14397 + */ 14398 + 14399 + /* Disable polling in the fastpath */ 14400 + WRITE_ONCE(eq->mode, mode); 14401 + /* flush out the store buffer */ 14402 + smp_wmb(); 14403 + 14404 + /* 14405 + * Add this eq to the polling list and start polling. For 14406 + * a grace period both interrupt handler and poller will 14407 + * try to process the eq _but_ that's fine. We have a 14408 + * synchronization mechanism in place (queue_claimed) to 14409 + * deal with it. This is just a draining phase for int- 14410 + * errupt handler (not eq's) as we have guranteed through 14411 + * barrier that all the CPUs have seen the new CQ_POLLED 14412 + * state. which will effectively disable the REARMING of 14413 + * the EQ. The whole idea is eq's die off eventually as 14414 + * we are not rearming EQ's anymore. 14415 + */ 14416 + mode ? lpfc_sli4_add_to_poll_list(eq) : 14417 + lpfc_sli4_remove_from_poll_list(eq); 14418 + } 14419 + 14420 + void lpfc_sli4_start_polling(struct lpfc_queue *eq) 14421 + { 14422 + __lpfc_sli4_switch_eqmode(eq, LPFC_EQ_POLL); 14423 + } 14424 + 14425 + void lpfc_sli4_stop_polling(struct lpfc_queue *eq) 14426 + { 14427 + struct lpfc_hba *phba = eq->phba; 14428 + 14429 + __lpfc_sli4_switch_eqmode(eq, LPFC_EQ_INTERRUPT); 14430 + 14431 + /* Kick start for the pending io's in h/w. 14432 + * Once we switch back to interrupt processing on a eq 14433 + * the io path completion will only arm eq's when it 14434 + * receives a completion. But since eq's are in disa- 14435 + * rmed state it doesn't receive a completion. This 14436 + * creates a deadlock scenaro. 
14437 + */ 14438 + phba->sli4_hba.sli4_write_eq_db(phba, eq, 0, LPFC_QUEUE_REARM); 14439 + } 14395 14440 14396 14441 /** 14397 14442 * lpfc_sli4_queue_free - free a queue structure and associated memory ··· 14608 14371 return NULL; 14609 14372 14610 14373 INIT_LIST_HEAD(&queue->list); 14374 + INIT_LIST_HEAD(&queue->_poll_list); 14611 14375 INIT_LIST_HEAD(&queue->wq_list); 14612 14376 INIT_LIST_HEAD(&queue->wqfull_list); 14613 14377 INIT_LIST_HEAD(&queue->page_list); ··· 18362 18124 phba->sli4_hba.max_cfg_param.rpi_used++; 18363 18125 phba->sli4_hba.rpi_count++; 18364 18126 } 18365 - lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 18366 - "0001 rpi:%x max:%x lim:%x\n", 18127 + lpfc_printf_log(phba, KERN_INFO, 18128 + LOG_NODE | LOG_DISCOVERY, 18129 + "0001 Allocated rpi:x%x max:x%x lim:x%x\n", 18367 18130 (int) rpi, max_rpi, rpi_limit); 18368 18131 18369 18132 /* ··· 18420 18181 static void 18421 18182 __lpfc_sli4_free_rpi(struct lpfc_hba *phba, int rpi) 18422 18183 { 18184 + /* 18185 + * if the rpi value indicates a prior unreg has already 18186 + * been done, skip the unreg. 
18187 + */ 18188 + if (rpi == LPFC_RPI_ALLOC_ERROR) 18189 + return; 18190 + 18423 18191 if (test_and_clear_bit(rpi, phba->sli4_hba.rpi_bmask)) { 18424 18192 phba->sli4_hba.rpi_count--; 18425 18193 phba->sli4_hba.max_cfg_param.rpi_used--; 18426 18194 } else { 18427 - lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 18195 + lpfc_printf_log(phba, KERN_INFO, 18196 + LOG_NODE | LOG_DISCOVERY, 18428 18197 "2016 rpi %x not inuse\n", 18429 18198 rpi); 18430 18199 } ··· 19930 19683 19931 19684 lpfc_sli_ringtxcmpl_put(phba, pring, pwqe); 19932 19685 spin_unlock_irqrestore(&pring->ring_lock, iflags); 19686 + 19687 + lpfc_sli4_poll_eq(qp->hba_eq, LPFC_POLL_FASTPATH); 19933 19688 return 0; 19934 19689 } 19935 19690 ··· 19952 19703 } 19953 19704 lpfc_sli_ringtxcmpl_put(phba, pring, pwqe); 19954 19705 spin_unlock_irqrestore(&pring->ring_lock, iflags); 19706 + 19707 + lpfc_sli4_poll_eq(qp->hba_eq, LPFC_POLL_FASTPATH); 19955 19708 return 0; 19956 19709 } 19957 19710 ··· 19982 19731 } 19983 19732 lpfc_sli_ringtxcmpl_put(phba, pring, pwqe); 19984 19733 spin_unlock_irqrestore(&pring->ring_lock, iflags); 19734 + 19735 + lpfc_sli4_poll_eq(qp->hba_eq, LPFC_POLL_FASTPATH); 19985 19736 return 0; 19986 19737 } 19987 19738 return WQE_ERROR; ··· 20346 20093 lpfc_ncmd->cur_iocbq.wqe_cmpl = NULL; 20347 20094 lpfc_ncmd->cur_iocbq.iocb_cmpl = NULL; 20348 20095 20096 + if (phba->cfg_xpsgl && !phba->nvmet_support && 20097 + !list_empty(&lpfc_ncmd->dma_sgl_xtra_list)) 20098 + lpfc_put_sgl_per_hdwq(phba, lpfc_ncmd); 20099 + 20100 + if (!list_empty(&lpfc_ncmd->dma_cmd_rsp_list)) 20101 + lpfc_put_cmd_rsp_buf_per_hdwq(phba, lpfc_ncmd); 20102 + 20349 20103 if (phba->cfg_xri_rebalancing) { 20350 20104 if (lpfc_ncmd->expedite) { 20351 20105 /* Return to expedite pool */ ··· 20417 20157 spin_unlock_irqrestore(&qp->io_buf_list_put_lock, 20418 20158 iflag); 20419 20159 } 20420 - 20421 - if (phba->cfg_xpsgl && !phba->nvmet_support && 20422 - !list_empty(&lpfc_ncmd->dma_sgl_xtra_list)) 20423 - 
lpfc_put_sgl_per_hdwq(phba, lpfc_ncmd); 20424 - 20425 - if (!list_empty(&lpfc_ncmd->dma_cmd_rsp_list)) 20426 - lpfc_put_cmd_rsp_buf_per_hdwq(phba, lpfc_ncmd); 20427 20160 } 20428 20161 20429 20162 /** ··· 20652 20399 struct sli4_hybrid_sgl *allocated_sgl = NULL; 20653 20400 struct lpfc_sli4_hdw_queue *hdwq = lpfc_buf->hdwq; 20654 20401 struct list_head *buf_list = &hdwq->sgl_list; 20402 + unsigned long iflags; 20655 20403 20656 - spin_lock_irq(&hdwq->hdwq_lock); 20404 + spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 20657 20405 20658 20406 if (likely(!list_empty(buf_list))) { 20659 20407 /* break off 1 chunk from the sgl_list */ ··· 20666 20412 } 20667 20413 } else { 20668 20414 /* allocate more */ 20669 - spin_unlock_irq(&hdwq->hdwq_lock); 20415 + spin_unlock_irqrestore(&hdwq->hdwq_lock, iflags); 20670 20416 tmp = kmalloc_node(sizeof(*tmp), GFP_ATOMIC, 20671 - cpu_to_node(smp_processor_id())); 20417 + cpu_to_node(hdwq->io_wq->chann)); 20672 20418 if (!tmp) { 20673 20419 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 20674 20420 "8353 error kmalloc memory for HDWQ " ··· 20688 20434 return NULL; 20689 20435 } 20690 20436 20691 - spin_lock_irq(&hdwq->hdwq_lock); 20437 + spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 20692 20438 list_add_tail(&tmp->list_node, &lpfc_buf->dma_sgl_xtra_list); 20693 20439 } 20694 20440 ··· 20696 20442 struct sli4_hybrid_sgl, 20697 20443 list_node); 20698 20444 20699 - spin_unlock_irq(&hdwq->hdwq_lock); 20445 + spin_unlock_irqrestore(&hdwq->hdwq_lock, iflags); 20700 20446 20701 20447 return allocated_sgl; 20702 20448 } ··· 20720 20466 struct sli4_hybrid_sgl *tmp = NULL; 20721 20467 struct lpfc_sli4_hdw_queue *hdwq = lpfc_buf->hdwq; 20722 20468 struct list_head *buf_list = &hdwq->sgl_list; 20469 + unsigned long iflags; 20723 20470 20724 - spin_lock_irq(&hdwq->hdwq_lock); 20471 + spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 20725 20472 20726 20473 if (likely(!list_empty(&lpfc_buf->dma_sgl_xtra_list))) { 20727 20474 
list_for_each_entry_safe(list_entry, tmp, ··· 20735 20480 rc = -EINVAL; 20736 20481 } 20737 20482 20738 - spin_unlock_irq(&hdwq->hdwq_lock); 20483 + spin_unlock_irqrestore(&hdwq->hdwq_lock, iflags); 20739 20484 return rc; 20740 20485 } 20741 20486 ··· 20756 20501 struct list_head *buf_list = &hdwq->sgl_list; 20757 20502 struct sli4_hybrid_sgl *list_entry = NULL; 20758 20503 struct sli4_hybrid_sgl *tmp = NULL; 20504 + unsigned long iflags; 20759 20505 20760 - spin_lock_irq(&hdwq->hdwq_lock); 20506 + spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 20761 20507 20762 20508 /* Free sgl pool */ 20763 20509 list_for_each_entry_safe(list_entry, tmp, ··· 20770 20514 kfree(list_entry); 20771 20515 } 20772 20516 20773 - spin_unlock_irq(&hdwq->hdwq_lock); 20517 + spin_unlock_irqrestore(&hdwq->hdwq_lock, iflags); 20774 20518 } 20775 20519 20776 20520 /** ··· 20794 20538 struct fcp_cmd_rsp_buf *allocated_buf = NULL; 20795 20539 struct lpfc_sli4_hdw_queue *hdwq = lpfc_buf->hdwq; 20796 20540 struct list_head *buf_list = &hdwq->cmd_rsp_buf_list; 20541 + unsigned long iflags; 20797 20542 20798 - spin_lock_irq(&hdwq->hdwq_lock); 20543 + spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 20799 20544 20800 20545 if (likely(!list_empty(buf_list))) { 20801 20546 /* break off 1 chunk from the list */ ··· 20809 20552 } 20810 20553 } else { 20811 20554 /* allocate more */ 20812 - spin_unlock_irq(&hdwq->hdwq_lock); 20555 + spin_unlock_irqrestore(&hdwq->hdwq_lock, iflags); 20813 20556 tmp = kmalloc_node(sizeof(*tmp), GFP_ATOMIC, 20814 - cpu_to_node(smp_processor_id())); 20557 + cpu_to_node(hdwq->io_wq->chann)); 20815 20558 if (!tmp) { 20816 20559 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 20817 20560 "8355 error kmalloc memory for HDWQ " ··· 20836 20579 tmp->fcp_rsp = (struct fcp_rsp *)((uint8_t *)tmp->fcp_cmnd + 20837 20580 sizeof(struct fcp_cmnd)); 20838 20581 20839 - spin_lock_irq(&hdwq->hdwq_lock); 20582 + spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 20840 20583 list_add_tail(&tmp->list_node, 
&lpfc_buf->dma_cmd_rsp_list); 20841 20584 } 20842 20585 ··· 20844 20587 struct fcp_cmd_rsp_buf, 20845 20588 list_node); 20846 20589 20847 - spin_unlock_irq(&hdwq->hdwq_lock); 20590 + spin_unlock_irqrestore(&hdwq->hdwq_lock, iflags); 20848 20591 20849 20592 return allocated_buf; 20850 20593 } ··· 20869 20612 struct fcp_cmd_rsp_buf *tmp = NULL; 20870 20613 struct lpfc_sli4_hdw_queue *hdwq = lpfc_buf->hdwq; 20871 20614 struct list_head *buf_list = &hdwq->cmd_rsp_buf_list; 20615 + unsigned long iflags; 20872 20616 20873 - spin_lock_irq(&hdwq->hdwq_lock); 20617 + spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 20874 20618 20875 20619 if (likely(!list_empty(&lpfc_buf->dma_cmd_rsp_list))) { 20876 20620 list_for_each_entry_safe(list_entry, tmp, ··· 20884 20626 rc = -EINVAL; 20885 20627 } 20886 20628 20887 - spin_unlock_irq(&hdwq->hdwq_lock); 20629 + spin_unlock_irqrestore(&hdwq->hdwq_lock, iflags); 20888 20630 return rc; 20889 20631 } 20890 20632 ··· 20905 20647 struct list_head *buf_list = &hdwq->cmd_rsp_buf_list; 20906 20648 struct fcp_cmd_rsp_buf *list_entry = NULL; 20907 20649 struct fcp_cmd_rsp_buf *tmp = NULL; 20650 + unsigned long iflags; 20908 20651 20909 - spin_lock_irq(&hdwq->hdwq_lock); 20652 + spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 20910 20653 20911 20654 /* Free cmd_rsp buf pool */ 20912 20655 list_for_each_entry_safe(list_entry, tmp, ··· 20920 20661 kfree(list_entry); 20921 20662 } 20922 20663 20923 - spin_unlock_irq(&hdwq->hdwq_lock); 20664 + spin_unlock_irqrestore(&hdwq->hdwq_lock, iflags); 20924 20665 }
+1 -2
drivers/scsi/lpfc/lpfc_sli.h
··· 384 384 385 385 struct lpfc_nodelist *ndlp; 386 386 uint32_t timeout; 387 - uint16_t flags; /* TBD convert exch_busy to flags */ 387 + uint16_t flags; 388 388 #define LPFC_SBUF_XBUSY 0x1 /* SLI4 hba reported XB on WCQE cmpl */ 389 389 #define LPFC_SBUF_BUMP_QDEPTH 0x2 /* bumped queue depth counter */ 390 390 /* External DIF device IO conversions */ 391 391 #define LPFC_SBUF_NORMAL_DIF 0x4 /* normal mode to insert/strip */ 392 392 #define LPFC_SBUF_PASS_DIF 0x8 /* insert/strip mode to passthru */ 393 393 #define LPFC_SBUF_NOT_POSTED 0x10 /* SGL failed post to FW. */ 394 - uint16_t exch_busy; /* SLI4 hba reported XB on complete WCQE */ 395 394 uint16_t status; /* From IOCB Word 7- ulpStatus */ 396 395 uint32_t result; /* From IOCB Word 4. */ 397 396
+35 -7
drivers/scsi/lpfc/lpfc_sli4.h
··· 41 41 42 42 /* Multi-queue arrangement for FCP EQ/CQ/WQ tuples */ 43 43 #define LPFC_HBA_HDWQ_MIN 0 44 - #define LPFC_HBA_HDWQ_MAX 128 45 - #define LPFC_HBA_HDWQ_DEF 0 44 + #define LPFC_HBA_HDWQ_MAX 256 45 + #define LPFC_HBA_HDWQ_DEF LPFC_HBA_HDWQ_MIN 46 + 47 + /* irq_chann range, values */ 48 + #define LPFC_IRQ_CHANN_MIN 0 49 + #define LPFC_IRQ_CHANN_MAX 256 50 + #define LPFC_IRQ_CHANN_DEF LPFC_IRQ_CHANN_MIN 46 51 47 52 /* FCP MQ queue count limiting */ 48 53 #define LPFC_FCP_MQ_THRESHOLD_MIN 0 ··· 138 133 struct lpfc_queue { 139 134 struct list_head list; 140 135 struct list_head wq_list; 136 + 137 + /* 138 + * If interrupts are in effect on _all_ the eq's the footprint 139 + * of polling code is zero (except mode). This memory is chec- 140 + * ked for every io to see if the io needs to be polled and 141 + * while completion to check if the eq's needs to be rearmed. 142 + * Keep in same cacheline as the queue ptr to avoid cpu fetch 143 + * stalls. Using 1B memory will leave us with 7B hole. Fill 144 + * it with other frequently used members. 
145 + */ 146 + uint16_t last_cpu; /* most recent cpu */ 147 + uint16_t hdwq; 148 + uint8_t qe_valid; 149 + uint8_t mode; /* interrupt or polling */ 150 + #define LPFC_EQ_INTERRUPT 0 151 + #define LPFC_EQ_POLL 1 152 + 141 153 struct list_head wqfull_list; 142 154 enum lpfc_sli4_queue_type type; 143 155 enum lpfc_sli4_queue_subtype subtype; ··· 221 199 uint8_t q_flag; 222 200 #define HBA_NVMET_WQFULL 0x1 /* We hit WQ Full condition for NVMET */ 223 201 #define HBA_NVMET_CQ_NOTIFY 0x1 /* LPFC_NVMET_CQ_NOTIFY CQEs this EQE */ 202 + #define HBA_EQ_DELAY_CHK 0x2 /* EQ is a candidate for coalescing */ 224 203 #define LPFC_NVMET_CQ_NOTIFY 4 225 204 void __iomem *db_regaddr; 226 205 uint16_t dpp_enable; ··· 262 239 struct delayed_work sched_spwork; 263 240 264 241 uint64_t isr_timestamp; 265 - uint16_t hdwq; 266 - uint16_t last_cpu; /* most recent cpu */ 267 - uint8_t qe_valid; 268 242 struct lpfc_queue *assoc_qp; 243 + struct list_head _poll_list; 269 244 void **q_pgs; /* array to index entries per page */ 270 245 }; 271 246 ··· 472 451 #define LPFC_SLI4_HANDLER_NAME_SZ 16 473 452 struct lpfc_hba_eq_hdl { 474 453 uint32_t idx; 454 + uint16_t irq; 475 455 char handler_name[LPFC_SLI4_HANDLER_NAME_SZ]; 476 456 struct lpfc_hba *phba; 477 457 struct lpfc_queue *eq; 458 + struct cpumask aff_mask; 478 459 }; 460 + 461 + #define lpfc_get_eq_hdl(eqidx) (&phba->sli4_hba.hba_eq_hdl[eqidx]) 462 + #define lpfc_get_aff_mask(eqidx) (&phba->sli4_hba.hba_eq_hdl[eqidx].aff_mask) 463 + #define lpfc_get_irq(eqidx) (phba->sli4_hba.hba_eq_hdl[eqidx].irq) 479 464 480 465 /*BB Credit recovery value*/ 481 466 struct lpfc_bbscn_params { ··· 540 513 uint8_t cqav; 541 514 uint8_t wqsize; 542 515 uint8_t bv1s; 516 + uint8_t pls; 543 517 #define LPFC_WQ_SZ64_SUPPORT 1 544 518 #define LPFC_WQ_SZ128_SUPPORT 2 545 519 uint8_t wqpcnt; ··· 572 544 #define LPFC_SLI4_HANDLER_CNT (LPFC_HBA_IO_CHAN_MAX+ \ 573 545 LPFC_FOF_IO_CHAN_NUM) 574 546 575 - /* Used for IRQ vector to CPU mapping */ 547 + /* Used for 
tracking CPU mapping attributes */ 576 548 struct lpfc_vector_map_info { 577 549 uint16_t phys_id; 578 550 uint16_t core_id; 579 - uint16_t irq; 580 551 uint16_t eq; 581 552 uint16_t hdwq; 582 553 uint16_t flag; ··· 918 891 struct lpfc_vector_map_info *cpu_map; 919 892 uint16_t num_possible_cpu; 920 893 uint16_t num_present_cpu; 894 + struct cpumask numa_mask; 921 895 uint16_t curr_disp_cpu; 922 896 struct lpfc_eq_intr_info __percpu *eq_info; 923 897 uint32_t conf_trunk;
+1 -1
drivers/scsi/lpfc/lpfc_version.h
··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "12.4.0.0" 23 + #define LPFC_DRIVER_VERSION "12.6.0.2" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */
+1 -1
drivers/scsi/mac_scsi.c
··· 464 464 mac_scsi_template.can_queue = setup_can_queue; 465 465 if (setup_cmd_per_lun > 0) 466 466 mac_scsi_template.cmd_per_lun = setup_cmd_per_lun; 467 - if (setup_sg_tablesize >= 0) 467 + if (setup_sg_tablesize > 0) 468 468 mac_scsi_template.sg_tablesize = setup_sg_tablesize; 469 469 if (setup_hostid >= 0) 470 470 mac_scsi_template.this_id = setup_hostid & 7;
+3
drivers/scsi/megaraid/megaraid_sas.h
··· 24 24 #define MEGASAS_VERSION "07.710.50.00-rc1" 25 25 #define MEGASAS_RELDATE "June 28, 2019" 26 26 27 + #define MEGASAS_MSIX_NAME_LEN 32 28 + 27 29 /* 28 30 * Device IDs 29 31 */ ··· 2205 2203 }; 2206 2204 2207 2205 struct megasas_irq_context { 2206 + char name[MEGASAS_MSIX_NAME_LEN]; 2208 2207 struct megasas_instance *instance; 2209 2208 u32 MSIxIndex; 2210 2209 u32 os_irq;
+6 -2
drivers/scsi/megaraid/megaraid_sas_base.c
··· 5546 5546 pdev = instance->pdev; 5547 5547 instance->irq_context[0].instance = instance; 5548 5548 instance->irq_context[0].MSIxIndex = 0; 5549 + snprintf(instance->irq_context->name, MEGASAS_MSIX_NAME_LEN, "%s%u", 5550 + "megasas", instance->host->host_no); 5549 5551 if (request_irq(pci_irq_vector(pdev, 0), 5550 5552 instance->instancet->service_isr, IRQF_SHARED, 5551 - "megasas", &instance->irq_context[0])) { 5553 + instance->irq_context->name, &instance->irq_context[0])) { 5552 5554 dev_err(&instance->pdev->dev, 5553 5555 "Failed to register IRQ from %s %d\n", 5554 5556 __func__, __LINE__); ··· 5582 5580 for (i = 0; i < instance->msix_vectors; i++) { 5583 5581 instance->irq_context[i].instance = instance; 5584 5582 instance->irq_context[i].MSIxIndex = i; 5583 + snprintf(instance->irq_context[i].name, MEGASAS_MSIX_NAME_LEN, "%s%u-msix%u", 5584 + "megasas", instance->host->host_no, i); 5585 5585 if (request_irq(pci_irq_vector(pdev, i), 5586 - instance->instancet->service_isr, 0, "megasas", 5586 + instance->instancet->service_isr, 0, instance->irq_context[i].name, 5587 5587 &instance->irq_context[i])) { 5588 5588 dev_err(&instance->pdev->dev, 5589 5589 "Failed to register IRQ for vector %d.\n", i);
+1 -6
drivers/scsi/megaraid/megaraid_sas_fp.c
··· 386 386 le64_to_cpu(quad->logEnd) && (mega_mod64(row - le64_to_cpu(quad->logStart), 387 387 le32_to_cpu(quad->diff))) == 0) { 388 388 if (span_blk != NULL) { 389 - u64 blk, debugBlk; 389 + u64 blk; 390 390 blk = mega_div64_32((row-le64_to_cpu(quad->logStart)), le32_to_cpu(quad->diff)); 391 - debugBlk = blk; 392 391 393 392 blk = (blk + le64_to_cpu(quad->offsetInSpan)) << raid->stripeShift; 394 393 *span_blk = blk; ··· 698 699 __le16 *pDevHandle = &io_info->devHandle; 699 700 u8 *pPdInterface = &io_info->pd_interface; 700 701 u32 logArm, rowMod, armQ, arm; 701 - struct fusion_context *fusion; 702 702 703 - fusion = instance->ctrl_context; 704 703 *pDevHandle = cpu_to_le16(MR_DEVHANDLE_INVALID); 705 704 706 705 /*Get row and span from io_info for Uneven Span IO.*/ ··· 798 801 u64 *pdBlock = &io_info->pdBlock; 799 802 __le16 *pDevHandle = &io_info->devHandle; 800 803 u8 *pPdInterface = &io_info->pd_interface; 801 - struct fusion_context *fusion; 802 804 803 - fusion = instance->ctrl_context; 804 805 *pDevHandle = cpu_to_le16(MR_DEVHANDLE_INVALID); 805 806 806 807 row = mega_div64_32(stripRow, raid->rowDataSize);
+25 -11
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 3044 3044 descp = NULL; 3045 3045 3046 3046 ioc_info(ioc, " %d %d\n", ioc->high_iops_queues, 3047 - ioc->msix_vector_count); 3047 + ioc->reply_queue_count); 3048 3048 3049 3049 i = pci_alloc_irq_vectors_affinity(ioc->pdev, 3050 3050 ioc->high_iops_queues, 3051 - ioc->msix_vector_count, irq_flags, descp); 3051 + ioc->reply_queue_count, irq_flags, descp); 3052 3052 3053 3053 return i; 3054 3054 } ··· 4242 4242 static int 4243 4243 _base_display_fwpkg_version(struct MPT3SAS_ADAPTER *ioc) 4244 4244 { 4245 - Mpi2FWImageHeader_t *FWImgHdr; 4245 + Mpi2FWImageHeader_t *fw_img_hdr; 4246 + Mpi26ComponentImageHeader_t *cmp_img_hdr; 4246 4247 Mpi25FWUploadRequest_t *mpi_request; 4247 4248 Mpi2FWUploadReply_t mpi_reply; 4248 4249 int r = 0; 4250 + u32 package_version = 0; 4249 4251 void *fwpkg_data = NULL; 4250 4252 dma_addr_t fwpkg_data_dma; 4251 4253 u16 smid, ioc_status; ··· 4304 4302 ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & 4305 4303 MPI2_IOCSTATUS_MASK; 4306 4304 if (ioc_status == MPI2_IOCSTATUS_SUCCESS) { 4307 - FWImgHdr = (Mpi2FWImageHeader_t *)fwpkg_data; 4308 - if (FWImgHdr->PackageVersion.Word) { 4309 - ioc_info(ioc, "FW Package Version (%02d.%02d.%02d.%02d)\n", 4310 - FWImgHdr->PackageVersion.Struct.Major, 4311 - FWImgHdr->PackageVersion.Struct.Minor, 4312 - FWImgHdr->PackageVersion.Struct.Unit, 4313 - FWImgHdr->PackageVersion.Struct.Dev); 4314 - } 4305 + fw_img_hdr = (Mpi2FWImageHeader_t *)fwpkg_data; 4306 + if (le32_to_cpu(fw_img_hdr->Signature) == 4307 + MPI26_IMAGE_HEADER_SIGNATURE0_MPI26) { 4308 + cmp_img_hdr = 4309 + (Mpi26ComponentImageHeader_t *) 4310 + (fwpkg_data); 4311 + package_version = 4312 + le32_to_cpu( 4313 + cmp_img_hdr->ApplicationSpecific); 4314 + } else 4315 + package_version = 4316 + le32_to_cpu( 4317 + fw_img_hdr->PackageVersion.Word); 4318 + if (package_version) 4319 + ioc_info(ioc, 4320 + "FW Package Ver(%02d.%02d.%02d.%02d)\n", 4321 + ((package_version) & 0xFF000000) >> 24, 4322 + ((package_version) & 0x00FF0000) >> 16, 4323 + 
((package_version) & 0x0000FF00) >> 8, 4324 + (package_version) & 0x000000FF); 4315 4325 } else { 4316 4326 _debug_dump_mf(&mpi_reply, 4317 4327 sizeof(Mpi2FWUploadReply_t)/4);
+10 -5
drivers/scsi/mpt3sas/mpt3sas_base.h
··· 76 76 #define MPT3SAS_DRIVER_NAME "mpt3sas" 77 77 #define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>" 78 78 #define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver" 79 - #define MPT3SAS_DRIVER_VERSION "31.100.00.00" 80 - #define MPT3SAS_MAJOR_VERSION 31 79 + #define MPT3SAS_DRIVER_VERSION "32.100.00.00" 80 + #define MPT3SAS_MAJOR_VERSION 32 81 81 #define MPT3SAS_MINOR_VERSION 100 82 82 #define MPT3SAS_BUILD_VERSION 0 83 83 #define MPT3SAS_RELEASE_VERSION 00 ··· 303 303 #define MPT3_DIAG_BUFFER_IS_REGISTERED (0x01) 304 304 #define MPT3_DIAG_BUFFER_IS_RELEASED (0x02) 305 305 #define MPT3_DIAG_BUFFER_IS_DIAG_RESET (0x04) 306 + #define MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED (0x08) 307 + #define MPT3_DIAG_BUFFER_IS_APP_OWNED (0x10) 306 308 307 309 /* 308 310 * HP HBA branding ··· 393 391 u8 Reserved6; /* 2Fh */ 394 392 __le32 Reserved7[7]; /* 30h - 4Bh */ 395 393 u8 NVMeAbortTO; /* 4Ch */ 396 - u8 Reserved8; /* 4Dh */ 397 - u16 Reserved9; /* 4Eh */ 398 - __le32 Reserved10[4]; /* 50h - 60h */ 394 + u8 NumPerDevEvents; /* 4Dh */ 395 + u8 HostTraceBufferDecrementSizeKB; /* 4Eh */ 396 + u8 HostTraceBufferFlags; /* 4Fh */ 397 + u16 HostTraceBufferMaxSizeKB; /* 50h */ 398 + u16 HostTraceBufferMinSizeKB; /* 52h */ 399 + __le32 Reserved10[2]; /* 54h - 5Bh */ 399 400 }; 400 401 401 402 /**
+303 -41
drivers/scsi/mpt3sas/mpt3sas_ctl.c
··· 466 466 if ((ioc->diag_buffer_status[i] & 467 467 MPT3_DIAG_BUFFER_IS_RELEASED)) 468 468 continue; 469 + 470 + /* 471 + * add a log message to indicate the release 472 + */ 473 + ioc_info(ioc, 474 + "%s: Releasing the trace buffer due to adapter reset.", 475 + __func__); 469 476 mpt3sas_send_diag_release(ioc, i, &issue_reset); 470 477 } 471 478 } ··· 785 778 case MPI2_FUNCTION_NVME_ENCAPSULATED: 786 779 { 787 780 nvme_encap_request = (Mpi26NVMeEncapsulatedRequest_t *)request; 781 + if (!ioc->pcie_sg_lookup) { 782 + dtmprintk(ioc, ioc_info(ioc, 783 + "HBA doesn't support NVMe. Rejecting NVMe Encapsulated request.\n" 784 + )); 785 + 786 + if (ioc->logging_level & MPT_DEBUG_TM) 787 + _debug_dump_mf(nvme_encap_request, 788 + ioc->request_sz/4); 789 + mpt3sas_base_free_smid(ioc, smid); 790 + ret = -EINVAL; 791 + goto out; 792 + } 788 793 /* 789 794 * Get the Physical Address of the sense buffer. 790 795 * Use Error Response buffer address field to hold the sense ··· 1503 1484 return rc; 1504 1485 } 1505 1486 1487 + /** 1488 + * _ctl_diag_get_bufftype - return diag buffer type 1489 + * either TRACE, SNAPSHOT, or EXTENDED 1490 + * @ioc: per adapter object 1491 + * @unique_id: specifies the unique_id for the buffer 1492 + * 1493 + * returns MPT3_DIAG_UID_NOT_FOUND if the id not found 1494 + */ 1495 + static u8 1496 + _ctl_diag_get_bufftype(struct MPT3SAS_ADAPTER *ioc, u32 unique_id) 1497 + { 1498 + u8 index; 1499 + 1500 + for (index = 0; index < MPI2_DIAG_BUF_TYPE_COUNT; index++) { 1501 + if (ioc->unique_id[index] == unique_id) 1502 + return index; 1503 + } 1504 + 1505 + return MPT3_DIAG_UID_NOT_FOUND; 1506 + } 1506 1507 1507 1508 /** 1508 1509 * _ctl_diag_register_2 - wrapper for registering diag buffer support ··· 1570 1531 return -EPERM; 1571 1532 } 1572 1533 1534 + if (diag_register->unique_id == 0) { 1535 + ioc_err(ioc, 1536 + "%s: Invalid UID(0x%08x), buffer_type(0x%02x)\n", __func__, 1537 + diag_register->unique_id, buffer_type); 1538 + return -EINVAL; 1539 + } 
1540 + 1541 + if ((ioc->diag_buffer_status[buffer_type] & 1542 + MPT3_DIAG_BUFFER_IS_APP_OWNED) && 1543 + !(ioc->diag_buffer_status[buffer_type] & 1544 + MPT3_DIAG_BUFFER_IS_RELEASED)) { 1545 + ioc_err(ioc, 1546 + "%s: buffer_type(0x%02x) is already registered by application with UID(0x%08x)\n", 1547 + __func__, buffer_type, ioc->unique_id[buffer_type]); 1548 + return -EINVAL; 1549 + } 1550 + 1573 1551 if (ioc->diag_buffer_status[buffer_type] & 1574 1552 MPT3_DIAG_BUFFER_IS_REGISTERED) { 1575 - ioc_err(ioc, "%s: already has a registered buffer for buffer_type(0x%02x)\n", 1576 - __func__, buffer_type); 1577 - return -EINVAL; 1553 + /* 1554 + * If driver posts buffer initially, then an application wants 1555 + * to Register that buffer (own it) without Releasing first, 1556 + * the application Register command MUST have the same buffer 1557 + * type and size in the Register command (obtained from the 1558 + * Query command). Otherwise that Register command will be 1559 + * failed. If the application has released the buffer but wants 1560 + * to re-register it, it should be allowed as long as the 1561 + * Unique-Id/Size match. 1562 + */ 1563 + 1564 + if (ioc->unique_id[buffer_type] == MPT3DIAGBUFFUNIQUEID && 1565 + ioc->diag_buffer_sz[buffer_type] == 1566 + diag_register->requested_buffer_size) { 1567 + 1568 + if (!(ioc->diag_buffer_status[buffer_type] & 1569 + MPT3_DIAG_BUFFER_IS_RELEASED)) { 1570 + dctlprintk(ioc, ioc_info(ioc, 1571 + "%s: diag_buffer (%d) ownership changed. old-ID(0x%08x), new-ID(0x%08x)\n", 1572 + __func__, buffer_type, 1573 + ioc->unique_id[buffer_type], 1574 + diag_register->unique_id)); 1575 + 1576 + /* 1577 + * Application wants to own the buffer with 1578 + * the same size. 
1579 + */ 1580 + ioc->unique_id[buffer_type] = 1581 + diag_register->unique_id; 1582 + rc = 0; /* success */ 1583 + goto out; 1584 + } 1585 + } else if (ioc->unique_id[buffer_type] != 1586 + MPT3DIAGBUFFUNIQUEID) { 1587 + if (ioc->unique_id[buffer_type] != 1588 + diag_register->unique_id || 1589 + ioc->diag_buffer_sz[buffer_type] != 1590 + diag_register->requested_buffer_size || 1591 + !(ioc->diag_buffer_status[buffer_type] & 1592 + MPT3_DIAG_BUFFER_IS_RELEASED)) { 1593 + ioc_err(ioc, 1594 + "%s: already has a registered buffer for buffer_type(0x%02x)\n", 1595 + __func__, buffer_type); 1596 + return -EINVAL; 1597 + } 1598 + } else { 1599 + ioc_err(ioc, "%s: already has a registered buffer for buffer_type(0x%02x)\n", 1600 + __func__, buffer_type); 1601 + return -EINVAL; 1602 + } 1603 + } else if (ioc->diag_buffer_status[buffer_type] & 1604 + MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED) { 1605 + 1606 + if (ioc->unique_id[buffer_type] != MPT3DIAGBUFFUNIQUEID || 1607 + ioc->diag_buffer_sz[buffer_type] != 1608 + diag_register->requested_buffer_size) { 1609 + 1610 + ioc_err(ioc, 1611 + "%s: already a buffer is allocated for buffer_type(0x%02x) of size %d bytes, so please try registering again with same size\n", 1612 + __func__, buffer_type, 1613 + ioc->diag_buffer_sz[buffer_type]); 1614 + return -EINVAL; 1615 + } 1578 1616 } 1579 1617 1580 1618 if (diag_register->requested_buffer_size % 4) { ··· 1676 1560 request_data = ioc->diag_buffer[buffer_type]; 1677 1561 request_data_sz = diag_register->requested_buffer_size; 1678 1562 ioc->unique_id[buffer_type] = diag_register->unique_id; 1679 - ioc->diag_buffer_status[buffer_type] = 0; 1563 + ioc->diag_buffer_status[buffer_type] &= 1564 + MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED; 1680 1565 memcpy(ioc->product_specific[buffer_type], 1681 1566 diag_register->product_specific, MPT3_PRODUCT_SPECIFIC_DWORDS); 1682 1567 ioc->diagnostic_flags[buffer_type] = diag_register->diagnostic_flags; ··· 1701 1584 ioc_err(ioc, "%s: failed allocating memory 
for diag buffers, requested size(%d)\n", 1702 1585 __func__, request_data_sz); 1703 1586 mpt3sas_base_free_smid(ioc, smid); 1704 - return -ENOMEM; 1587 + rc = -ENOMEM; 1588 + goto out; 1705 1589 } 1706 1590 ioc->diag_buffer[buffer_type] = request_data; 1707 1591 ioc->diag_buffer_sz[buffer_type] = request_data_sz; ··· 1767 1649 1768 1650 out: 1769 1651 1770 - if (rc && request_data) 1652 + if (rc && request_data) { 1771 1653 dma_free_coherent(&ioc->pdev->dev, request_data_sz, 1772 1654 request_data, request_data_dma); 1655 + ioc->diag_buffer_status[buffer_type] &= 1656 + ~MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED; 1657 + } 1773 1658 1774 1659 ioc->ctl_cmds.status = MPT3_CMD_NOT_USED; 1775 1660 return rc; ··· 1790 1669 mpt3sas_enable_diag_buffer(struct MPT3SAS_ADAPTER *ioc, u8 bits_to_register) 1791 1670 { 1792 1671 struct mpt3_diag_register diag_register; 1672 + u32 ret_val; 1673 + u32 trace_buff_size = ioc->manu_pg11.HostTraceBufferMaxSizeKB<<10; 1674 + u32 min_trace_buff_size = 0; 1675 + u32 decr_trace_buff_size = 0; 1793 1676 1794 1677 memset(&diag_register, 0, sizeof(struct mpt3_diag_register)); 1795 1678 ··· 1802 1677 ioc->diag_trigger_master.MasterData = 1803 1678 (MASTER_TRIGGER_FW_FAULT + MASTER_TRIGGER_ADAPTER_RESET); 1804 1679 diag_register.buffer_type = MPI2_DIAG_BUF_TYPE_TRACE; 1805 - /* register for 2MB buffers */ 1806 - diag_register.requested_buffer_size = 2 * (1024 * 1024); 1807 - diag_register.unique_id = 0x7075900; 1808 - _ctl_diag_register_2(ioc, &diag_register); 1680 + diag_register.unique_id = 1681 + (ioc->hba_mpi_version_belonged == MPI2_VERSION) ? 
1682 + (MPT2DIAGBUFFUNIQUEID):(MPT3DIAGBUFFUNIQUEID); 1683 + 1684 + if (trace_buff_size != 0) { 1685 + diag_register.requested_buffer_size = trace_buff_size; 1686 + min_trace_buff_size = 1687 + ioc->manu_pg11.HostTraceBufferMinSizeKB<<10; 1688 + decr_trace_buff_size = 1689 + ioc->manu_pg11.HostTraceBufferDecrementSizeKB<<10; 1690 + 1691 + if (min_trace_buff_size > trace_buff_size) { 1692 + /* The buff size is not set correctly */ 1693 + ioc_err(ioc, 1694 + "Min Trace Buff size (%d KB) greater than Max Trace Buff size (%d KB)\n", 1695 + min_trace_buff_size>>10, 1696 + trace_buff_size>>10); 1697 + ioc_err(ioc, 1698 + "Using zero Min Trace Buff Size\n"); 1699 + min_trace_buff_size = 0; 1700 + } 1701 + 1702 + if (decr_trace_buff_size == 0) { 1703 + /* 1704 + * retry the min size if decrement 1705 + * is not available. 1706 + */ 1707 + decr_trace_buff_size = 1708 + trace_buff_size - min_trace_buff_size; 1709 + } 1710 + } else { 1711 + /* register for 2MB buffers */ 1712 + diag_register.requested_buffer_size = 2 * (1024 * 1024); 1713 + } 1714 + 1715 + do { 1716 + ret_val = _ctl_diag_register_2(ioc, &diag_register); 1717 + 1718 + if (ret_val == -ENOMEM && min_trace_buff_size && 1719 + (trace_buff_size - decr_trace_buff_size) >= 1720 + min_trace_buff_size) { 1721 + /* adjust the buffer size */ 1722 + trace_buff_size -= decr_trace_buff_size; 1723 + diag_register.requested_buffer_size = 1724 + trace_buff_size; 1725 + } else 1726 + break; 1727 + } while (true); 1728 + 1729 + if (ret_val == -ENOMEM) 1730 + ioc_err(ioc, 1731 + "Cannot allocate trace buffer memory. 
Last memory tried = %d KB\n", 1732 + diag_register.requested_buffer_size>>10); 1733 + else if (ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] 1734 + & MPT3_DIAG_BUFFER_IS_REGISTERED) { 1735 + ioc_err(ioc, "Trace buffer memory %d KB allocated\n", 1736 + diag_register.requested_buffer_size>>10); 1737 + if (ioc->hba_mpi_version_belonged != MPI2_VERSION) 1738 + ioc->diag_buffer_status[ 1739 + MPI2_DIAG_BUF_TYPE_TRACE] |= 1740 + MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED; 1741 + } 1809 1742 } 1810 1743 1811 1744 if (bits_to_register & 2) { ··· 1906 1723 } 1907 1724 1908 1725 rc = _ctl_diag_register_2(ioc, &karg); 1726 + 1727 + if (!rc && (ioc->diag_buffer_status[karg.buffer_type] & 1728 + MPT3_DIAG_BUFFER_IS_REGISTERED)) 1729 + ioc->diag_buffer_status[karg.buffer_type] |= 1730 + MPT3_DIAG_BUFFER_IS_APP_OWNED; 1731 + 1909 1732 return rc; 1910 1733 } 1911 1734 ··· 1941 1752 dctlprintk(ioc, ioc_info(ioc, "%s\n", 1942 1753 __func__)); 1943 1754 1944 - buffer_type = karg.unique_id & 0x000000ff; 1755 + buffer_type = _ctl_diag_get_bufftype(ioc, karg.unique_id); 1756 + if (buffer_type == MPT3_DIAG_UID_NOT_FOUND) { 1757 + ioc_err(ioc, "%s: buffer with unique_id(0x%08x) not found\n", 1758 + __func__, karg.unique_id); 1759 + return -EINVAL; 1760 + } 1761 + 1945 1762 if (!_ctl_diag_capability(ioc, buffer_type)) { 1946 1763 ioc_err(ioc, "%s: doesn't have capability for buffer_type(0x%02x)\n", 1947 1764 __func__, buffer_type); ··· 1980 1785 return -ENOMEM; 1981 1786 } 1982 1787 1983 - request_data_sz = ioc->diag_buffer_sz[buffer_type]; 1984 - request_data_dma = ioc->diag_buffer_dma[buffer_type]; 1985 - dma_free_coherent(&ioc->pdev->dev, request_data_sz, 1986 - request_data, request_data_dma); 1987 - ioc->diag_buffer[buffer_type] = NULL; 1988 - ioc->diag_buffer_status[buffer_type] = 0; 1788 + if (ioc->diag_buffer_status[buffer_type] & 1789 + MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED) { 1790 + ioc->unique_id[buffer_type] = MPT3DIAGBUFFUNIQUEID; 1791 + ioc->diag_buffer_status[buffer_type] &= 
1792 + ~MPT3_DIAG_BUFFER_IS_APP_OWNED; 1793 + ioc->diag_buffer_status[buffer_type] &= 1794 + ~MPT3_DIAG_BUFFER_IS_REGISTERED; 1795 + } else { 1796 + request_data_sz = ioc->diag_buffer_sz[buffer_type]; 1797 + request_data_dma = ioc->diag_buffer_dma[buffer_type]; 1798 + dma_free_coherent(&ioc->pdev->dev, request_data_sz, 1799 + request_data, request_data_dma); 1800 + ioc->diag_buffer[buffer_type] = NULL; 1801 + ioc->diag_buffer_status[buffer_type] = 0; 1802 + } 1989 1803 return 0; 1990 1804 } 1991 1805 ··· 2033 1829 return -EPERM; 2034 1830 } 2035 1831 2036 - if ((ioc->diag_buffer_status[buffer_type] & 2037 - MPT3_DIAG_BUFFER_IS_REGISTERED) == 0) { 2038 - ioc_err(ioc, "%s: buffer_type(0x%02x) is not registered\n", 2039 - __func__, buffer_type); 2040 - return -EINVAL; 1832 + if (!(ioc->diag_buffer_status[buffer_type] & 1833 + MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED)) { 1834 + if ((ioc->diag_buffer_status[buffer_type] & 1835 + MPT3_DIAG_BUFFER_IS_REGISTERED) == 0) { 1836 + ioc_err(ioc, "%s: buffer_type(0x%02x) is not registered\n", 1837 + __func__, buffer_type); 1838 + return -EINVAL; 1839 + } 2041 1840 } 2042 1841 2043 - if (karg.unique_id & 0xffffff00) { 1842 + if (karg.unique_id) { 2044 1843 if (karg.unique_id != ioc->unique_id[buffer_type]) { 2045 1844 ioc_err(ioc, "%s: unique_id(0x%08x) is not registered\n", 2046 1845 __func__, karg.unique_id); ··· 2058 1851 return -ENOMEM; 2059 1852 } 2060 1853 2061 - if (ioc->diag_buffer_status[buffer_type] & MPT3_DIAG_BUFFER_IS_RELEASED) 2062 - karg.application_flags = (MPT3_APP_FLAGS_APP_OWNED | 2063 - MPT3_APP_FLAGS_BUFFER_VALID); 2064 - else 2065 - karg.application_flags = (MPT3_APP_FLAGS_APP_OWNED | 2066 - MPT3_APP_FLAGS_BUFFER_VALID | 2067 - MPT3_APP_FLAGS_FW_BUFFER_ACCESS); 1854 + if ((ioc->diag_buffer_status[buffer_type] & 1855 + MPT3_DIAG_BUFFER_IS_REGISTERED)) 1856 + karg.application_flags |= MPT3_APP_FLAGS_BUFFER_VALID; 1857 + 1858 + if (!(ioc->diag_buffer_status[buffer_type] & 1859 + MPT3_DIAG_BUFFER_IS_RELEASED)) 1860 
+ karg.application_flags |= MPT3_APP_FLAGS_FW_BUFFER_ACCESS; 1861 + 1862 + if (!(ioc->diag_buffer_status[buffer_type] & 1863 + MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED)) 1864 + karg.application_flags |= MPT3_APP_FLAGS_DYNAMIC_BUFFER_ALLOC; 1865 + 1866 + if ((ioc->diag_buffer_status[buffer_type] & 1867 + MPT3_DIAG_BUFFER_IS_APP_OWNED)) 1868 + karg.application_flags |= MPT3_APP_FLAGS_APP_OWNED; 2068 1869 2069 1870 for (i = 0; i < MPT3_PRODUCT_SPECIFIC_DWORDS; i++) 2070 1871 karg.product_specific[i] = ··· 2217 2002 dctlprintk(ioc, ioc_info(ioc, "%s\n", 2218 2003 __func__)); 2219 2004 2220 - buffer_type = karg.unique_id & 0x000000ff; 2005 + buffer_type = _ctl_diag_get_bufftype(ioc, karg.unique_id); 2006 + if (buffer_type == MPT3_DIAG_UID_NOT_FOUND) { 2007 + ioc_err(ioc, "%s: buffer with unique_id(0x%08x) not found\n", 2008 + __func__, karg.unique_id); 2009 + return -EINVAL; 2010 + } 2011 + 2221 2012 if (!_ctl_diag_capability(ioc, buffer_type)) { 2222 2013 ioc_err(ioc, "%s: doesn't have capability for buffer_type(0x%02x)\n", 2223 2014 __func__, buffer_type); ··· 2247 2026 MPT3_DIAG_BUFFER_IS_RELEASED) { 2248 2027 ioc_err(ioc, "%s: buffer_type(0x%02x) is already released\n", 2249 2028 __func__, buffer_type); 2250 - return 0; 2029 + return -EINVAL; 2251 2030 } 2252 2031 2253 2032 request_data = ioc->diag_buffer[buffer_type]; ··· 2307 2086 dctlprintk(ioc, ioc_info(ioc, "%s\n", 2308 2087 __func__)); 2309 2088 2310 - buffer_type = karg.unique_id & 0x000000ff; 2089 + buffer_type = _ctl_diag_get_bufftype(ioc, karg.unique_id); 2090 + if (buffer_type == MPT3_DIAG_UID_NOT_FOUND) { 2091 + ioc_err(ioc, "%s: buffer with unique_id(0x%08x) not found\n", 2092 + __func__, karg.unique_id); 2093 + return -EINVAL; 2094 + } 2095 + 2311 2096 if (!_ctl_diag_capability(ioc, buffer_type)) { 2312 2097 ioc_err(ioc, "%s: doesn't have capability for buffer_type(0x%02x)\n", 2313 2098 __func__, buffer_type); ··· 2437 2210 if (ioc_status == MPI2_IOCSTATUS_SUCCESS) { 2438 2211 
ioc->diag_buffer_status[buffer_type] |= 2439 2212 MPT3_DIAG_BUFFER_IS_REGISTERED; 2213 + ioc->diag_buffer_status[buffer_type] &= 2214 + ~MPT3_DIAG_BUFFER_IS_RELEASED; 2440 2215 dctlprintk(ioc, ioc_info(ioc, "%s: success\n", __func__)); 2441 2216 } else { 2442 2217 ioc_info(ioc, "%s: ioc_status(0x%04x) log_info(0x%08x)\n", ··· 3359 3130 memset(&diag_register, 0, sizeof(struct mpt3_diag_register)); 3360 3131 ioc_info(ioc, "posting host trace buffers\n"); 3361 3132 diag_register.buffer_type = MPI2_DIAG_BUF_TYPE_TRACE; 3362 - diag_register.requested_buffer_size = (1024 * 1024); 3363 - diag_register.unique_id = 0x7075900; 3133 + 3134 + if (ioc->manu_pg11.HostTraceBufferMaxSizeKB != 0 && 3135 + ioc->diag_buffer_sz[MPI2_DIAG_BUF_TYPE_TRACE] != 0) { 3136 + /* post the same buffer allocated previously */ 3137 + diag_register.requested_buffer_size = 3138 + ioc->diag_buffer_sz[MPI2_DIAG_BUF_TYPE_TRACE]; 3139 + } else { 3140 + /* 3141 + * Free the diag buffer memory which was previously 3142 + * allocated by an application. 3143 + */ 3144 + if ((ioc->diag_buffer_sz[MPI2_DIAG_BUF_TYPE_TRACE] != 0) 3145 + && 3146 + (ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] & 3147 + MPT3_DIAG_BUFFER_IS_APP_OWNED)) { 3148 + pci_free_consistent(ioc->pdev, 3149 + ioc->diag_buffer_sz[ 3150 + MPI2_DIAG_BUF_TYPE_TRACE], 3151 + ioc->diag_buffer[MPI2_DIAG_BUF_TYPE_TRACE], 3152 + ioc->diag_buffer_dma[ 3153 + MPI2_DIAG_BUF_TYPE_TRACE]); 3154 + ioc->diag_buffer[MPI2_DIAG_BUF_TYPE_TRACE] = 3155 + NULL; 3156 + } 3157 + 3158 + diag_register.requested_buffer_size = (1024 * 1024); 3159 + } 3160 + 3161 + diag_register.unique_id = 3162 + (ioc->hba_mpi_version_belonged == MPI2_VERSION) ? 
3163 + (MPT2DIAGBUFFUNIQUEID):(MPT3DIAGBUFFUNIQUEID); 3364 3164 ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] = 0; 3365 3165 _ctl_diag_register_2(ioc, &diag_register); 3166 + if (ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] & 3167 + MPT3_DIAG_BUFFER_IS_REGISTERED) { 3168 + ioc_info(ioc, 3169 + "Trace buffer %d KB allocated through sysfs\n", 3170 + diag_register.requested_buffer_size>>10); 3171 + if (ioc->hba_mpi_version_belonged != MPI2_VERSION) 3172 + ioc->diag_buffer_status[ 3173 + MPI2_DIAG_BUF_TYPE_TRACE] |= 3174 + MPT3_DIAG_BUFFER_IS_DRIVER_ALLOCATED; 3175 + } 3366 3176 } else if (!strcmp(str, "release")) { 3367 3177 /* exit out if host buffers are already released */ 3368 3178 if (!ioc->diag_buffer[MPI2_DIAG_BUF_TYPE_TRACE]) ··· 3969 3701 /* free memory associated to diag buffers */ 3970 3702 for (i = 0; i < MPI2_DIAG_BUF_TYPE_COUNT; i++) { 3971 3703 if (!ioc->diag_buffer[i]) 3972 - continue; 3973 - if (!(ioc->diag_buffer_status[i] & 3974 - MPT3_DIAG_BUFFER_IS_REGISTERED)) 3975 - continue; 3976 - if ((ioc->diag_buffer_status[i] & 3977 - MPT3_DIAG_BUFFER_IS_RELEASED)) 3978 3704 continue; 3979 3705 dma_free_coherent(&ioc->pdev->dev, 3980 3706 ioc->diag_buffer_sz[i],
+9
drivers/scsi/mpt3sas/mpt3sas_ctl.h
··· 95 95 #define MPT3DIAGREADBUFFER _IOWR(MPT3_MAGIC_NUMBER, 30, \ 96 96 struct mpt3_diag_read_buffer) 97 97 98 + /* Trace Buffer default UniqueId */ 99 + #define MPT2DIAGBUFFUNIQUEID (0x07075900) 100 + #define MPT3DIAGBUFFUNIQUEID (0x4252434D) 101 + 102 + /* UID not found */ 103 + #define MPT3_DIAG_UID_NOT_FOUND (0xFF) 104 + 105 + 98 106 /** 99 107 * struct mpt3_ioctl_header - main header structure 100 108 * @ioc_number - IOC unit number ··· 318 310 #define MPT3_APP_FLAGS_APP_OWNED (0x0001) 319 311 #define MPT3_APP_FLAGS_BUFFER_VALID (0x0002) 320 312 #define MPT3_APP_FLAGS_FW_BUFFER_ACCESS (0x0004) 313 + #define MPT3_APP_FLAGS_DYNAMIC_BUFFER_ALLOC (0x0008) 321 314 322 315 /* flags for mpt3_diag_read_buffer */ 323 316 #define MPT3_FLAGS_REREGISTER (0x0001)
+3 -1
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 5161 5161 /* insert into event log */ 5162 5162 sz = offsetof(Mpi2EventNotificationReply_t, EventData) + 5163 5163 sizeof(Mpi2EventDataSasDeviceStatusChange_t); 5164 - event_reply = kzalloc(sz, GFP_KERNEL); 5164 + event_reply = kzalloc(sz, GFP_ATOMIC); 5165 5165 if (!event_reply) { 5166 5166 ioc_err(ioc, "failure at %s:%d/%s()!\n", 5167 5167 __FILE__, __LINE__, __func__); ··· 10193 10193 int rc; 10194 10194 if (diag_buffer_enable != -1 && diag_buffer_enable != 0) 10195 10195 mpt3sas_enable_diag_buffer(ioc, diag_buffer_enable); 10196 + else if (ioc->manu_pg11.HostTraceBufferMaxSizeKB != 0) 10197 + mpt3sas_enable_diag_buffer(ioc, 1); 10196 10198 10197 10199 if (disable_discovery > 0) 10198 10200 return;
+9 -3
drivers/scsi/mpt3sas/mpt3sas_trigger_diag.c
··· 113 113 struct SL_WH_TRIGGERS_EVENT_DATA_T *event_data) 114 114 { 115 115 u8 issue_reset = 0; 116 + u32 *trig_data = (u32 *)&event_data->u.master; 116 117 117 118 dTriggerDiagPrintk(ioc, ioc_info(ioc, "%s: enter\n", __func__)); 118 119 119 120 /* release the diag buffer trace */ 120 121 if ((ioc->diag_buffer_status[MPI2_DIAG_BUF_TYPE_TRACE] & 121 122 MPT3_DIAG_BUFFER_IS_RELEASED) == 0) { 122 - dTriggerDiagPrintk(ioc, 123 - ioc_info(ioc, "%s: release trace diag buffer\n", 124 - __func__)); 123 + /* 124 + * add a log message so that user knows which event caused 125 + * the release 126 + */ 127 + ioc_info(ioc, 128 + "%s: Releasing the trace buffer. Trigger_Type 0x%08x, Data[0] 0x%08x, Data[1] 0x%08x\n", 129 + __func__, event_data->trigger_type, 130 + trig_data[0], trig_data[1]); 125 131 mpt3sas_send_diag_release(ioc, MPI2_DIAG_BUF_TYPE_TRACE, 126 132 &issue_reset); 127 133 }
+1 -1
drivers/scsi/mvsas/mv_sas.c
··· 1541 1541 1542 1542 int mvs_abort_task_set(struct domain_device *dev, u8 *lun) 1543 1543 { 1544 - int rc = TMF_RESP_FUNC_FAILED; 1544 + int rc; 1545 1545 struct mvs_tmf_task tmf_task; 1546 1546 1547 1547 tmf_task.tmf = TMF_ABORT_TASK_SET;
+1 -1
drivers/scsi/ncr53c8xx.c
··· 1722 1722 ** Miscellaneous configuration and status parameters. 1723 1723 **---------------------------------------------------------------- 1724 1724 */ 1725 - u_char disc; /* Diconnection allowed */ 1725 + u_char disc; /* Disconnection allowed */ 1726 1726 u_char scsi_mode; /* Current SCSI BUS mode */ 1727 1727 u_char order; /* Tag order to use */ 1728 1728 u_char verbose; /* Verbosity for this controller*/
+1 -1
drivers/scsi/nsp32.c
··· 1542 1542 * with ACK reply when below condition is matched: 1543 1543 * MsgIn 00: Command Complete. 1544 1544 * MsgIn 02: Save Data Pointer. 1545 - * MsgIn 04: Diconnect. 1545 + * MsgIn 04: Disconnect. 1546 1546 * In other case, unexpected BUSFREE is detected. 1547 1547 */ 1548 1548 static int nsp32_busfree_occur(struct scsi_cmnd *SCpnt, unsigned short execph)
+1 -1
drivers/scsi/pcmcia/Kconfig
··· 32 32 33 33 config PCMCIA_NINJA_SCSI 34 34 tristate "NinjaSCSI-3 / NinjaSCSI-32Bi (16bit) PCMCIA support" 35 - depends on !64BIT 35 + depends on !64BIT || COMPILE_TEST 36 36 help 37 37 If you intend to attach this type of PCMCIA SCSI host adapter to 38 38 your computer, say Y here and read
-2
drivers/scsi/pcmcia/nsp_cs.c
··· 56 56 MODULE_AUTHOR("YOKOTA Hiroshi <yokota@netlab.is.tsukuba.ac.jp>"); 57 57 MODULE_DESCRIPTION("WorkBit NinjaSCSI-3 / NinjaSCSI-32Bi(16bit) PCMCIA SCSI host adapter module"); 58 58 MODULE_SUPPORTED_DEVICE("sd,sr,sg,st"); 59 - #ifdef MODULE_LICENSE 60 59 MODULE_LICENSE("GPL"); 61 - #endif 62 60 63 61 #include "nsp_io.h" 64 62
+20
drivers/scsi/pm8001/pm8001_ctl.c
··· 70 70 DEVICE_ATTR(interface_rev, S_IRUGO, pm8001_ctl_mpi_interface_rev_show, NULL); 71 71 72 72 /** 73 + * controller_fatal_error_show - check controller is under fatal err 74 + * @cdev: pointer to embedded class device 75 + * @buf: the buffer returned 76 + * 77 + * A sysfs 'read only' shost attribute. 78 + */ 79 + static ssize_t controller_fatal_error_show(struct device *cdev, 80 + struct device_attribute *attr, char *buf) 81 + { 82 + struct Scsi_Host *shost = class_to_shost(cdev); 83 + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); 84 + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; 85 + 86 + return snprintf(buf, PAGE_SIZE, "%d\n", 87 + pm8001_ha->controller_fatal_error); 88 + } 89 + static DEVICE_ATTR_RO(controller_fatal_error); 90 + 91 + /** 73 92 * pm8001_ctl_fw_version_show - firmware version 74 93 * @cdev: pointer to embedded class device 75 94 * @buf: the buffer returned ··· 823 804 pm8001_show_update_fw, pm8001_store_update_fw); 824 805 struct device_attribute *pm8001_host_attrs[] = { 825 806 &dev_attr_interface_rev, 807 + &dev_attr_controller_fatal_error, 826 808 &dev_attr_fw_version, 827 809 &dev_attr_update_fw, 828 810 &dev_attr_aap_log,
+90 -41
drivers/scsi/pm8001/pm8001_hwi.c
··· 1336 1336 * @circularQ: the inbound queue we want to transfer to HBA. 1337 1337 * @opCode: the operation code represents commands which LLDD and fw recognized. 1338 1338 * @payload: the command payload of each operation command. 1339 + * @nb: size in bytes of the command payload 1340 + * @responseQueue: queue to interrupt on w/ command response (if any) 1339 1341 */ 1340 1342 int pm8001_mpi_build_cmd(struct pm8001_hba_info *pm8001_ha, 1341 1343 struct inbound_queue_table *circularQ, 1342 - u32 opCode, void *payload, u32 responseQueue) 1344 + u32 opCode, void *payload, size_t nb, 1345 + u32 responseQueue) 1343 1346 { 1344 1347 u32 Header = 0, hpriority = 0, bc = 1, category = 0x02; 1345 1348 void *pMessage; ··· 1353 1350 pm8001_printk("No free mpi buffer\n")); 1354 1351 return -ENOMEM; 1355 1352 } 1356 - BUG_ON(!payload); 1357 - /*Copy to the payload*/ 1358 - memcpy(pMessage, payload, (pm8001_ha->iomb_size - 1359 - sizeof(struct mpi_msg_hdr))); 1353 + 1354 + if (nb > (pm8001_ha->iomb_size - sizeof(struct mpi_msg_hdr))) 1355 + nb = pm8001_ha->iomb_size - sizeof(struct mpi_msg_hdr); 1356 + memcpy(pMessage, payload, nb); 1357 + if (nb + sizeof(struct mpi_msg_hdr) < pm8001_ha->iomb_size) 1358 + memset(pMessage + nb, 0, pm8001_ha->iomb_size - 1359 + (nb + sizeof(struct mpi_msg_hdr))); 1360 1360 1361 1361 /*Build the header*/ 1362 1362 Header = ((1 << 31) | (hpriority << 30) | ((bc & 0x1f) << 24) ··· 1370 1364 /*Update the PI to the firmware*/ 1371 1365 pm8001_cw32(pm8001_ha, circularQ->pi_pci_bar, 1372 1366 circularQ->pi_offset, circularQ->producer_idx); 1373 - PM8001_IO_DBG(pm8001_ha, 1367 + PM8001_DEVIO_DBG(pm8001_ha, 1374 1368 pm8001_printk("INB Q %x OPCODE:%x , UPDATED PI=%d CI=%d\n", 1375 1369 responseQueue, opCode, circularQ->producer_idx, 1376 1370 circularQ->consumer_index)); ··· 1442 1436 /* read header */ 1443 1437 header_tmp = pm8001_read_32(msgHeader); 1444 1438 msgHeader_tmp = cpu_to_le32(header_tmp); 1439 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk(
1440 + "outbound opcode msgheader:%x ci=%d pi=%d\n", 1441 + msgHeader_tmp, circularQ->consumer_idx, 1442 + circularQ->producer_index)); 1445 1443 if (0 != (le32_to_cpu(msgHeader_tmp) & 0x80000000)) { 1446 1444 if (OPC_OUB_SKIP_ENTRY != 1447 1445 (le32_to_cpu(msgHeader_tmp) & 0xfff)) { ··· 1614 1604 break; 1615 1605 1616 1606 default: 1617 - pm8001_printk("...query task failed!!!\n"); 1607 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk( 1608 + "...query task failed!!!\n")); 1618 1609 break; 1619 1610 }); 1620 1611 ··· 1769 1758 task_abort.device_id = cpu_to_le32(pm8001_ha_dev->device_id); 1770 1759 task_abort.tag = cpu_to_le32(ccb_tag); 1771 1760 1772 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort, 0); 1761 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort, 1762 + sizeof(task_abort), 0); 1773 1763 if (ret) 1774 1764 pm8001_tag_free(pm8001_ha, ccb_tag); 1775 1765 ··· 1843 1831 sata_cmd.ncqtag_atap_dir_m |= ((0x1 << 7) | (0x5 << 9)); 1844 1832 memcpy(&sata_cmd.sata_fis, &fis, sizeof(struct host_to_dev_fis)); 1845 1833 1846 - res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd, 0); 1834 + res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd, 1835 + sizeof(sata_cmd), 0); 1847 1836 if (res) { 1848 1837 sas_free_task(task); 1849 1838 pm8001_tag_free(pm8001_ha, ccb_tag); ··· 1902 1889 PM8001_FAIL_DBG(pm8001_ha, 1903 1890 pm8001_printk("SAS Address of IO Failure Drive:" 1904 1891 "%016llx", SAS_ADDR(t->dev->sas_addr))); 1892 + 1893 + if (status) 1894 + PM8001_IOERR_DBG(pm8001_ha, pm8001_printk( 1895 + "status:0x%x, tag:0x%x, task:0x%p\n", 1896 + status, tag, t)); 1905 1897 1906 1898 switch (status) { 1907 1899 case IO_SUCCESS: ··· 2090 2072 ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; 2091 2073 break; 2092 2074 default: 2093 - PM8001_IO_DBG(pm8001_ha, 2075 + PM8001_DEVIO_DBG(pm8001_ha, 2094 2076 pm8001_printk("Unknown status 0x%x\n", status)); 2095 2077 /* not allowed case.
Therefore, return failed status */ 2096 2078 ts->resp = SAS_TASK_COMPLETE; ··· 2143 2125 if (unlikely(!t || !t->lldd_task || !t->dev)) 2144 2126 return; 2145 2127 ts = &t->task_status; 2146 - PM8001_IO_DBG(pm8001_ha, 2128 + PM8001_DEVIO_DBG(pm8001_ha, 2147 2129 pm8001_printk("port_id = %x,device_id = %x\n", 2148 2130 port_id, dev_id)); 2149 2131 switch (event) { ··· 2281 2263 pm8001_printk(" IO_XFER_CMD_FRAME_ISSUED\n")); 2282 2264 return; 2283 2265 default: 2284 - PM8001_IO_DBG(pm8001_ha, 2266 + PM8001_DEVIO_DBG(pm8001_ha, 2285 2267 pm8001_printk("Unknown status 0x%x\n", event)); 2286 2268 /* not allowed case. Therefore, return failed status */ 2287 2269 ts->resp = SAS_TASK_COMPLETE; ··· 2370 2352 pm8001_printk("ts null\n")); 2371 2353 return; 2372 2354 } 2355 + 2356 + if (status) 2357 + PM8001_IOERR_DBG(pm8001_ha, pm8001_printk( 2358 + "status:0x%x, tag:0x%x, task::0x%p\n", 2359 + status, tag, t)); 2360 + 2373 2361 /* Print sas address of IO failed device */ 2374 2362 if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) && 2375 2363 (status != IO_UNDERFLOW)) { ··· 2676 2652 ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; 2677 2653 break; 2678 2654 default: 2679 - PM8001_IO_DBG(pm8001_ha, 2655 + PM8001_DEVIO_DBG(pm8001_ha, 2680 2656 pm8001_printk("Unknown status 0x%x\n", status)); 2681 2657 /* not allowed case. Therefore, return failed status */ 2682 2658 ts->resp = SAS_TASK_COMPLETE; ··· 2747 2723 if (unlikely(!t || !t->lldd_task || !t->dev)) 2748 2724 return; 2749 2725 ts = &t->task_status; 2750 - PM8001_IO_DBG(pm8001_ha, pm8001_printk( 2726 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk( 2751 2727 "port_id:0x%x, device_id:0x%x, tag:0x%x, event:0x%x\n", 2752 2728 port_id, dev_id, tag, event)); 2753 2729 switch (event) { ··· 2896 2872 ts->stat = SAS_OPEN_TO; 2897 2873 break; 2898 2874 default: 2899 - PM8001_IO_DBG(pm8001_ha, 2875 + PM8001_DEVIO_DBG(pm8001_ha, 2900 2876 pm8001_printk("Unknown status 0x%x\n", event)); 2901 2877 /* not allowed case.
Therefore, return failed status */ 2902 2878 ts->resp = SAS_TASK_COMPLETE; ··· 2941 2917 t = ccb->task; 2942 2918 ts = &t->task_status; 2943 2919 pm8001_dev = ccb->device; 2944 - if (status) 2920 + if (status) { 2945 2921 PM8001_FAIL_DBG(pm8001_ha, 2946 2922 pm8001_printk("smp IO status 0x%x\n", status)); 2923 + PM8001_IOERR_DBG(pm8001_ha, 2924 + pm8001_printk("status:0x%x, tag:0x%x, task:0x%p\n", 2925 + status, tag, t)); 2926 + } 2947 2927 if (unlikely(!t || !t->lldd_task || !t->dev)) 2948 2928 return; 2949 2929 ··· 3098 3070 ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; 3099 3071 break; 3100 3072 default: 3101 - PM8001_IO_DBG(pm8001_ha, 3073 + PM8001_DEVIO_DBG(pm8001_ha, 3102 3074 pm8001_printk("Unknown status 0x%x\n", status)); 3103 3075 ts->resp = SAS_TASK_COMPLETE; 3104 3076 ts->stat = SAS_DEV_NO_RESPONSE; ··· 3383 3355 ((phyId & 0x0F) << 4) | (port_id & 0x0F)); 3384 3356 payload.param0 = cpu_to_le32(param0); 3385 3357 payload.param1 = cpu_to_le32(param1); 3386 - pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 3358 + pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 3359 + sizeof(payload), 0); 3387 3360 } 3388 3361 3389 3362 static int pm8001_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha, ··· 3445 3416 pm8001_get_lrate_mode(phy, link_rate); 3446 3417 break; 3447 3418 default: 3448 - PM8001_MSG_DBG(pm8001_ha, 3419 + PM8001_DEVIO_DBG(pm8001_ha, 3449 3420 pm8001_printk("unknown device type(%x)\n", deviceType)); 3450 3421 break; 3451 3422 } ··· 3492 3463 struct sas_ha_struct *sas_ha = pm8001_ha->sas; 3493 3464 struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; 3494 3465 unsigned long flags; 3495 - PM8001_MSG_DBG(pm8001_ha, 3466 + PM8001_DEVIO_DBG(pm8001_ha, 3496 3467 pm8001_printk("HW_EVENT_SATA_PHY_UP port id = %d," 3497 3468 " phy id = %d\n", port_id, phy_id)); 3498 3469 port->port_state = portstate; ··· 3570 3541 break; 3571 3542 default: 3572 3543 port->port_attached = 0; 3573 - PM8001_MSG_DBG(pm8001_ha, 3544 +
PM8001_DEVIO_DBG(pm8001_ha, 3574 3545 pm8001_printk(" phy Down and(default) = %x\n", 3575 3546 portstate)); 3576 3547 break; ··· 3718 3689 pm8001_printk(": FLASH_UPDATE_DISABLED\n")); 3719 3690 break; 3720 3691 default: 3721 - PM8001_MSG_DBG(pm8001_ha, 3692 + PM8001_DEVIO_DBG(pm8001_ha, 3722 3693 pm8001_printk("No matched status = %d\n", status)); 3723 3694 break; 3724 3695 } ··· 3834 3805 struct sas_ha_struct *sas_ha = pm8001_ha->sas; 3835 3806 struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; 3836 3807 struct asd_sas_phy *sas_phy = sas_ha->sas_phy[phy_id]; 3837 - PM8001_MSG_DBG(pm8001_ha, 3838 - pm8001_printk("outbound queue HW event & event type : ")); 3808 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk( 3809 + "SPC HW event for portid:%d, phyid:%d, event:%x, status:%x\n", 3810 + port_id, phy_id, eventType, status)); 3839 3811 switch (eventType) { 3840 3812 case HW_EVENT_PHY_START_STATUS: 3841 3813 PM8001_MSG_DBG(pm8001_ha, ··· 4020 3990 pm8001_printk("EVENT_BROADCAST_ASYNCH_EVENT\n")); 4021 3991 break; 4022 3992 default: 4023 - PM8001_MSG_DBG(pm8001_ha, 3993 + PM8001_DEVIO_DBG(pm8001_ha, 4024 3994 pm8001_printk("Unknown event type = %x\n", eventType)); 4025 3995 break; 4026 3996 } ··· 4191 4161 pm8001_printk("OPC_OUB_SAS_RE_INITIALIZE\n")); 4192 4162 break; 4193 4163 default: 4194 - PM8001_MSG_DBG(pm8001_ha, 4164 + PM8001_DEVIO_DBG(pm8001_ha, 4195 4165 pm8001_printk("Unknown outbound Queue IOMB OPC = %x\n", 4196 4166 opc)); 4197 4167 break; ··· 4314 4284 cpu_to_le32((u32)sg_dma_len(&task->smp_task.smp_resp)-4); 4315 4285 build_smp_cmd(pm8001_dev->device_id, smp_cmd.tag, &smp_cmd); 4316 4286 rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, 4317 - (u32 *)&smp_cmd, 0); 4287 + &smp_cmd, sizeof(smp_cmd), 0); 4318 4288 if (rc) 4319 4289 goto err_out_2; 4320 4290 ··· 4382 4352 ssp_cmd.len = cpu_to_le32(task->total_xfer_len); 4383 4353 ssp_cmd.esgl = 0; 4384 4354 } 4385 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &ssp_cmd, 0); 4355 + ret =
pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &ssp_cmd, 4356 + sizeof(ssp_cmd), 0); 4386 4357 return ret; 4387 4358 } 4388 4359 ··· 4492 4461 } 4493 4462 } 4494 4463 4495 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd, 0); 4464 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd, 4465 + sizeof(sata_cmd), 0); 4496 4466 return ret; 4497 4467 } 4498 4468 ··· 4528 4496 memcpy(payload.sas_identify.sas_addr, 4529 4497 pm8001_ha->sas_addr, SAS_ADDR_SIZE); 4530 4498 payload.sas_identify.phy_id = phy_id; 4531 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload, 0); 4499 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload, 4500 + sizeof(payload), 0); 4532 4501 return ret; 4533 4502 } ··· 4551 4518 memset(&payload, 0, sizeof(payload)); 4552 4519 payload.tag = cpu_to_le32(tag); 4553 4520 payload.phy_id = cpu_to_le32(phy_id); 4554 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload, 0); 4521 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload, 4522 + sizeof(payload), 0); 4555 4523 return ret; 4556 4524 } ··· 4611 4577 cpu_to_le32(ITNT | (firstBurstSize * 0x10000)); 4612 4578 memcpy(payload.sas_addr, pm8001_dev->sas_device->sas_addr, 4613 4579 SAS_ADDR_SIZE); 4614 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 4580 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 4581 + sizeof(payload), 0); 4615 4582 return rc; 4616 4583 } ··· 4633 4598 payload.device_id = cpu_to_le32(device_id); 4634 4599 PM8001_MSG_DBG(pm8001_ha, 4635 4600 pm8001_printk("unregister device device_id = %d\n", device_id)); 4636 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 4601 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 4602 + sizeof(payload), 0); 4637 4603 return ret; 4638 4604 } ··· 4657 4621 payload.tag = cpu_to_le32(1); 4658 4622 payload.phyop_phyid = 4659 4623 cpu_to_le32(((phy_op & 0xff)
<< 8) | (phyId & 0x0F)); 4660 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 4624 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 4625 + sizeof(payload), 0); 4661 4626 return ret; 4662 4627 } ··· 4686 4649 pm8001_chip_isr(struct pm8001_hba_info *pm8001_ha, u8 vec) 4687 4650 { 4688 4651 pm8001_chip_interrupt_disable(pm8001_ha, vec); 4652 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk( 4653 + "irq vec %d, ODMR:0x%x\n", 4654 + vec, pm8001_cr32(pm8001_ha, 0, 0x30))); 4689 4655 process_oq(pm8001_ha, vec); 4690 4656 pm8001_chip_interrupt_enable(pm8001_ha, vec); 4691 4657 return IRQ_HANDLED; ··· 4712 4672 task_abort.device_id = cpu_to_le32(dev_id); 4713 4673 task_abort.tag = cpu_to_le32(cmd_tag); 4714 4674 } 4715 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort, 0); 4675 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort, 4676 + sizeof(task_abort), 0); 4716 4677 return ret; 4717 4678 } ··· 4770 4729 if (pm8001_ha->chip_id != chip_8001) 4771 4730 sspTMCmd.ds_ads_m = 0x08; 4772 4731 circularQ = &pm8001_ha->inbnd_q_tbl[0]; 4773 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sspTMCmd, 0); 4732 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sspTMCmd, 4733 + sizeof(sspTMCmd), 0); 4774 4734 return ret; 4775 4735 } ··· 4861 4819 default: 4862 4820 break; 4863 4821 } 4864 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &nvmd_req, 0); 4822 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &nvmd_req, 4823 + sizeof(nvmd_req), 0); 4865 4824 if (rc) { 4866 4825 kfree(fw_control_context); 4867 4826 pm8001_tag_free(pm8001_ha, tag); ··· 4946 4903 default: 4947 4904 break; 4948 4905 } 4949 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &nvmd_req, 0); 4906 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &nvmd_req, 4907 + sizeof(nvmd_req), 0); 4950 4908 if (rc) { 4951 4909 kfree(fw_control_context); 4952 4910 pm8001_tag_free(pm8001_ha, tag);
··· 4982 4938 cpu_to_le32(lower_32_bits(le64_to_cpu(info->sgl.addr))); 4983 4939 payload.sgl_addr_hi = 4984 4940 cpu_to_le32(upper_32_bits(le64_to_cpu(info->sgl.addr))); 4985 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 4941 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 4942 + sizeof(payload), 0); 4986 4943 return ret; 4987 4944 } 4988 4945 ··· 5005 4960 if (!fw_control_context) 5006 4961 return -ENOMEM; 5007 4962 fw_control = (struct fw_control_info *)&ioctl_payload->func_specific; 4963 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk( 4964 + "dma fw_control context input length :%x\n", fw_control->len)); 5008 4965 memcpy(buffer, fw_control->buffer, fw_control->len); 5009 4966 flash_update_info.sgl.addr = cpu_to_le64(phys_addr); 5010 4967 flash_update_info.sgl.im_len.len = cpu_to_le32(fw_control->len); ··· 5130 5083 payload.tag = cpu_to_le32(tag); 5131 5084 payload.device_id = cpu_to_le32(pm8001_dev->device_id); 5132 5085 payload.nds = cpu_to_le32(state); 5133 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 5086 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 5087 + sizeof(payload), 0); 5134 5088 return rc; 5135 5089 5136 5090 } ··· 5156 5108 payload.SSAHOLT = cpu_to_le32(0xd << 25); 5157 5109 payload.sata_hol_tmo = cpu_to_le32(80); 5158 5110 payload.open_reject_cmdretries_data_retries = cpu_to_le32(0xff00ff); 5159 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 5111 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 5112 + sizeof(payload), 0); 5160 5113 if (rc) 5161 5114 pm8001_tag_free(pm8001_ha, tag); 5162 5115 return rc;
+29 -7
drivers/scsi/pm8001/pm8001_init.c
··· 41 41 #include <linux/slab.h> 42 42 #include "pm8001_sas.h" 43 43 #include "pm8001_chips.h" 44 + #include "pm80xx_hwi.h" 45 + 46 + static ulong logging_level = PM8001_FAIL_LOGGING | PM8001_IOERR_LOGGING; 47 + module_param(logging_level, ulong, 0644); 48 + MODULE_PARM_DESC(logging_level, " bits for enabling logging info."); 49 + 50 + static ulong link_rate = LINKRATE_15 | LINKRATE_30 | LINKRATE_60 | LINKRATE_120; 51 + module_param(link_rate, ulong, 0644); 52 + MODULE_PARM_DESC(link_rate, "Enable link rate.\n" 53 + " 1: Link rate 1.5G\n" 54 + " 2: Link rate 3.0G\n" 55 + " 4: Link rate 6.0G\n" 56 + " 8: Link rate 12.0G\n"); 44 57 45 58 static struct scsi_transport_template *pm8001_stt; 46 59 ··· 445 432 } else { 446 433 pm8001_ha->io_mem[logicalBar].membase = 0; 447 434 pm8001_ha->io_mem[logicalBar].memsize = 0; 448 - pm8001_ha->io_mem[logicalBar].memvirtaddr = 0; 435 + pm8001_ha->io_mem[logicalBar].memvirtaddr = NULL; 449 436 } 450 437 logicalBar++; 451 438 } ··· 479 466 pm8001_ha->sas = sha; 480 467 pm8001_ha->shost = shost; 481 468 pm8001_ha->id = pm8001_id++; 482 - pm8001_ha->logging_level = 0x01; 469 + pm8001_ha->logging_level = logging_level; 470 + if (link_rate >= 1 && link_rate <= 15) 471 + pm8001_ha->link_rate = (link_rate << 8); 472 + else { 473 + pm8001_ha->link_rate = LINKRATE_15 | LINKRATE_30 | 474 + LINKRATE_60 | LINKRATE_120; 475 + PM8001_FAIL_DBG(pm8001_ha, pm8001_printk( 476 + "Setting link rate to default value\n")); 477 + } 483 478 sprintf(pm8001_ha->name, "%s%d", DRV_NAME, pm8001_ha->id); 484 479 /* IOMB size is 128 for 8088/89 controllers */ 485 480 if (pm8001_ha->chip_id != chip_8001) ··· 894 873 u32 number_of_intr; 895 874 int flag = 0; 896 875 int rc; 897 - static char intr_drvname[PM8001_MAX_MSIX_VEC][sizeof(DRV_NAME)+3]; 898 876 899 877 /* SPCv controllers supports 64 msi-x */ 900 878 if (pm8001_ha->chip_id == chip_8001) { ··· 914 894 rc, pm8001_ha->number_of_intr)); 915 895 916 896 for (i = 0; i < number_of_intr; i++) { 917 -
snprintf(intr_drvname[i], sizeof(intr_drvname[0]), 918 - DRV_NAME"%d", i); 897 + snprintf(pm8001_ha->intr_drvname[i], 898 + sizeof(pm8001_ha->intr_drvname[0]), 899 + "%s-%d", pm8001_ha->name, i); 919 900 pm8001_ha->irq_vector[i].irq_id = i; 920 901 pm8001_ha->irq_vector[i].drv_inst = pm8001_ha; 921 902 922 903 rc = request_irq(pci_irq_vector(pm8001_ha->pdev, i), 923 904 pm8001_interrupt_handler_msix, flag, 924 - intr_drvname[i], &(pm8001_ha->irq_vector[i])); 905 + pm8001_ha->intr_drvname[i], 906 + &(pm8001_ha->irq_vector[i])); 925 907 if (rc) { 926 908 for (j = 0; j < i; j++) { 927 909 free_irq(pci_irq_vector(pm8001_ha->pdev, i), ··· 964 942 pm8001_ha->irq_vector[0].irq_id = 0; 965 943 pm8001_ha->irq_vector[0].drv_inst = pm8001_ha; 966 944 rc = request_irq(pdev->irq, pm8001_interrupt_handler_intx, IRQF_SHARED, 967 - DRV_NAME, SHOST_TO_SAS_HA(pm8001_ha->shost)); 945 + pm8001_ha->name, SHOST_TO_SAS_HA(pm8001_ha->shost)); 968 946 return rc; 969 947 } 970 948
+48 -22
drivers/scsi/pm8001/pm8001_sas.c
··· 119 119 mem_virt_alloc = dma_alloc_coherent(&pdev->dev, mem_size + align, 120 120 &mem_dma_handle, GFP_KERNEL); 121 121 if (!mem_virt_alloc) { 122 - pm8001_printk("memory allocation error\n"); 122 + pr_err("pm80xx: memory allocation error\n"); 123 123 return -1; 124 124 } 125 125 *pphys_addr = mem_dma_handle; ··· 249 249 spin_unlock_irqrestore(&pm8001_ha->lock, flags); 250 250 return 0; 251 251 default: 252 + PM8001_DEVIO_DBG(pm8001_ha, 253 + pm8001_printk("func 0x%x\n", func)); 252 254 rc = -EOPNOTSUPP; 253 255 } 254 256 msleep(300); ··· 386 384 struct pm8001_port *port = NULL; 387 385 struct sas_task *t = task; 388 386 struct pm8001_ccb_info *ccb; 389 - u32 tag = 0xdeadbeef, rc, n_elem = 0; 387 + u32 tag = 0xdeadbeef, rc = 0, n_elem = 0; 390 388 unsigned long flags = 0; 389 + enum sas_protocol task_proto = t->task_proto; 391 390 392 391 if (!dev->port) { 393 392 struct task_status_struct *tsm = &t->task_status; ··· 413 410 pm8001_dev = dev->lldd_dev; 414 411 port = &pm8001_ha->port[sas_find_local_port_id(dev)]; 415 412 if (DEV_IS_GONE(pm8001_dev) || !port->port_attached) { 416 - if (sas_protocol_ata(t->task_proto)) { 413 + if (sas_protocol_ata(task_proto)) { 417 414 struct task_status_struct *ts = &t->task_status; 418 415 ts->resp = SAS_TASK_UNDELIVERED; 419 416 ts->stat = SAS_PHY_DOWN; ··· 435 432 goto err_out; 436 433 ccb = &pm8001_ha->ccb_info[tag]; 437 434 438 - if (!sas_protocol_ata(t->task_proto)) { 435 + if (!sas_protocol_ata(task_proto)) { 439 436 if (t->num_scatter) { 440 437 n_elem = dma_map_sg(pm8001_ha->dev, 441 438 t->scatter, ··· 455 452 ccb->ccb_tag = tag; 456 453 ccb->task = t; 457 454 ccb->device = pm8001_dev; 458 - switch (t->task_proto) { 455 + switch (task_proto) { 459 456 case SAS_PROTOCOL_SMP: 460 457 rc = pm8001_task_prep_smp(pm8001_ha, ccb); 461 458 break; ··· 472 469 break; 473 470 default: 474 471 dev_printk(KERN_ERR, pm8001_ha->dev, 475 - "unknown sas_task proto: 0x%x\n", 476 - t->task_proto); 472 + "unknown sas_task proto: 0x%x\n",
task_proto); 477 473 rc = -EINVAL; 478 474 break; 479 475 } ··· 495 493 pm8001_tag_free(pm8001_ha, tag); 496 494 err_out: 497 495 dev_printk(KERN_ERR, pm8001_ha->dev, "pm8001 exec failed[%d]!\n", rc); 498 - if (!sas_protocol_ata(t->task_proto)) 496 + if (!sas_protocol_ata(task_proto)) 499 497 if (n_elem) 500 498 dma_unmap_sg(pm8001_ha->dev, t->scatter, t->num_scatter, 501 499 t->data_dir); ··· 1181 1179 break; 1182 1180 } 1183 1181 } 1184 - pm8001_printk(":rc= %d\n", rc); 1182 + pr_err("pm80xx: rc= %d\n", rc); 1185 1183 return rc; 1186 1184 } ··· 1204 1202 pm8001_dev = dev->lldd_dev; 1205 1203 pm8001_ha = pm8001_find_ha_by_dev(dev); 1206 1204 phy_id = pm8001_dev->attached_phy; 1207 - rc = pm8001_find_tag(task, &tag); 1208 - if (rc == 0) { 1205 + ret = pm8001_find_tag(task, &tag); 1206 + if (ret == 0) { 1209 1207 pm8001_printk("no tag for task:%p\n", task); 1210 1208 return TMF_RESP_FUNC_FAILED; 1211 1209 } ··· 1243 1241 1244 1242 /* 2. Send Phy Control Hard Reset */ 1245 1243 reinit_completion(&completion); 1244 + phy->port_reset_status = PORT_RESET_TMO; 1246 1245 phy->reset_success = false; 1247 1246 phy->enable_completion = &completion; 1248 1247 phy->reset_completion = &completion_reset; 1249 1248 ret = PM8001_CHIP_DISP->phy_ctl_req(pm8001_ha, phy_id, 1250 1249 PHY_HARD_RESET); 1251 - if (ret) 1250 + if (ret) { 1251 + phy->enable_completion = NULL; 1252 + phy->reset_completion = NULL; 1252 1253 goto out; 1254 + } 1255 + 1256 + /* In the case of the reset timeout/fail we still 1257 + * abort the command at the firmware. The assumption 1258 + * here is that the drive is off doing something so 1259 + * that it's not processing requests, and we want to 1260 + * avoid getting a completion for this and either 1261 + * leaking the task in libsas or losing the race and 1262 + * getting a double free.
1263 + */ 1253 1264 PM8001_MSG_DBG(pm8001_ha, 1254 1265 pm8001_printk("Waiting for local phy ctl\n")); 1255 - wait_for_completion(&completion); 1256 - if (!phy->reset_success) 1257 - goto out; 1258 - 1259 - /* 3. Wait for Port Reset complete / Port reset TMO */ 1260 - PM8001_MSG_DBG(pm8001_ha, 1266 + ret = wait_for_completion_timeout(&completion, 1267 + PM8001_TASK_TIMEOUT * HZ); 1268 + if (!ret || !phy->reset_success) { 1269 + phy->enable_completion = NULL; 1270 + phy->reset_completion = NULL; 1271 + } else { 1272 + /* 3. Wait for Port Reset complete or 1273 + * Port reset TMO 1274 + */ 1275 + PM8001_MSG_DBG(pm8001_ha, 1261 1276 pm8001_printk("Waiting for Port reset\n")); 1262 - wait_for_completion(&completion_reset); 1263 - if (phy->port_reset_status) { 1264 - pm8001_dev_gone_notify(dev); 1265 - goto out; 1277 + ret = wait_for_completion_timeout( 1278 + &completion_reset, 1279 + PM8001_TASK_TIMEOUT * HZ); 1280 + if (!ret) 1281 + phy->reset_completion = NULL; 1282 + WARN_ON(phy->port_reset_status == 1283 + PORT_RESET_TMO); 1284 + if (phy->port_reset_status == PORT_RESET_TMO) { 1285 + pm8001_dev_gone_notify(dev); 1286 + goto out; 1287 + } 1266 1288 } 1267 1289 1268 1290 /*
+21 -3
drivers/scsi/pm8001/pm8001_sas.h
··· 66 66 #define PM8001_EH_LOGGING 0x10 /* libsas EH function logging*/ 67 67 #define PM8001_IOCTL_LOGGING 0x20 /* IOCTL message logging */ 68 68 #define PM8001_MSG_LOGGING 0x40 /* misc message logging */ 69 - #define pm8001_printk(format, arg...) printk(KERN_INFO "pm80xx %s %d:" \ 70 - format, __func__, __LINE__, ## arg) 69 + #define PM8001_DEV_LOGGING 0x80 /* development message logging */ 70 + #define PM8001_DEVIO_LOGGING 0x100 /* development io message logging */ 71 + #define PM8001_IOERR_LOGGING 0x200 /* development io err message logging */ 72 + #define pm8001_printk(format, arg...) pr_info("%s:: %s %d:" \ 73 + format, pm8001_ha->name, __func__, __LINE__, ## arg) 71 74 #define PM8001_CHECK_LOGGING(HBA, LEVEL, CMD) \ 72 75 do { \ 73 76 if (unlikely(HBA->logging_level & LEVEL)) \ ··· 100 97 #define PM8001_MSG_DBG(HBA, CMD) \ 101 98 PM8001_CHECK_LOGGING(HBA, PM8001_MSG_LOGGING, CMD) 102 99 100 + #define PM8001_DEV_DBG(HBA, CMD) \ 101 + PM8001_CHECK_LOGGING(HBA, PM8001_DEV_LOGGING, CMD) 102 + 103 + #define PM8001_DEVIO_DBG(HBA, CMD) \ 104 + PM8001_CHECK_LOGGING(HBA, PM8001_DEVIO_LOGGING, CMD) 105 + 106 + #define PM8001_IOERR_DBG(HBA, CMD) \ 107 + PM8001_CHECK_LOGGING(HBA, PM8001_IOERR_LOGGING, CMD) 103 108 104 109 #define PM8001_USE_TASKLET 105 110 #define PM8001_USE_MSIX ··· 152 141 #define MPI_FATAL_EDUMP_TABLE_HANDSHAKE 0x0C /* FDDHSHK */ 153 142 #define MPI_FATAL_EDUMP_TABLE_STATUS 0x10 /* FDDTSTAT */ 154 143 #define MPI_FATAL_EDUMP_TABLE_ACCUM_LEN 0x14 /* ACCDDLEN */ 144 + #define MPI_FATAL_EDUMP_TABLE_TOTAL_LEN 0x18 /* TOTALLEN */ 145 + #define MPI_FATAL_EDUMP_TABLE_SIGNATURE 0x1C /* SIGNITURE */ 155 146 #define MPI_FATAL_EDUMP_HANDSHAKE_RDY 0x1 156 147 #define MPI_FATAL_EDUMP_HANDSHAKE_BUSY 0x0 157 148 #define MPI_FATAL_EDUMP_TABLE_STAT_RSVD 0x0 ··· 509 496 u32 forensic_last_offset; 510 497 u32 fatal_forensic_shift_offset; 511 498 u32 forensic_fatal_step; 499 + u32 forensic_preserved_accumulated_transfer; 512 500 u32 evtlog_ib_offset; 513 501 u32
evtlog_ob_offset; 514 502 void __iomem *msg_unit_tbl_addr;/*Message Unit Table Addr*/ ··· 544 530 struct pm8001_ccb_info *ccb_info; 545 531 #ifdef PM8001_USE_MSIX 546 532 int number_of_intr;/*will be used in remove()*/ 533 + char intr_drvname[PM8001_MAX_MSIX_VEC] 534 + [PM8001_NAME_LENGTH+1+3+1]; 547 535 #endif 548 536 #ifdef PM8001_USE_TASKLET 549 537 struct tasklet_struct tasklet[PM8001_MAX_MSIX_VEC]; 550 538 #endif 551 539 u32 logging_level; 540 + u32 link_rate; 552 541 u32 fw_status; 553 542 u32 smp_exp_mode; 554 543 bool controller_fatal_error; ··· 680 663 void pm8001_chip_iounmap(struct pm8001_hba_info *pm8001_ha); 681 664 int pm8001_mpi_build_cmd(struct pm8001_hba_info *pm8001_ha, 682 665 struct inbound_queue_table *circularQ, 683 - u32 opCode, void *payload, u32 responseQueue); 666 + u32 opCode, void *payload, size_t nb, 667 + u32 responseQueue); 684 668 int pm8001_mpi_msg_free_get(struct inbound_queue_table *circularQ, 685 669 u16 messageSize, void **messagePtr); 686 670 u32 pm8001_mpi_msg_free_set(struct pm8001_hba_info *pm8001_ha, void *pMsg,
+339 -110
drivers/scsi/pm8001/pm80xx_hwi.c
··· 37 37 * POSSIBILITY OF SUCH DAMAGES. 38 38 * 39 39 */ 40 + #include <linux/version.h> 40 41 #include <linux/slab.h> 41 42 #include "pm8001_sas.h" 42 43 #include "pm80xx_hwi.h" ··· 76 75 destination1 = (u32 *)destination; 77 76 78 77 for (index = 0; index < dw_count; index += 4, destination1++) { 79 - offset = (soffset + index / 4); 78 + offset = (soffset + index); 80 79 if (offset < (64 * 1024)) { 81 80 value = pm8001_cr32(pm8001_ha, bus_base_number, offset); 82 81 *destination1 = cpu_to_le32(value); ··· 93 92 struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; 94 93 void __iomem *fatal_table_address = pm8001_ha->fatal_tbl_addr; 95 94 u32 accum_len , reg_val, index, *temp; 95 + u32 status = 1; 96 96 unsigned long start; 97 97 u8 *direct_data; 98 98 char *fatal_error_data = buf; 99 + u32 length_to_read; 100 + u32 offset; 99 101 100 102 pm8001_ha->forensic_info.data_buf.direct_data = buf; 101 103 if (pm8001_ha->chip_id == chip_8001) { ··· 108 104 return (char *)pm8001_ha->forensic_info.data_buf.direct_data - 109 105 (char *)buf; 110 106 } 107 + /* initialize variables for very first call from host application */ 111 108 if (pm8001_ha->forensic_info.data_buf.direct_offset == 0) { 112 109 PM8001_IO_DBG(pm8001_ha, 113 110 pm8001_printk("forensic_info TYPE_NON_FATAL..............\n")); 114 111 direct_data = (u8 *)fatal_error_data; 115 112 pm8001_ha->forensic_info.data_type = TYPE_NON_FATAL; 116 113 pm8001_ha->forensic_info.data_buf.direct_len = SYSFS_OFFSET; 114 + pm8001_ha->forensic_info.data_buf.direct_offset = 0; 117 115 pm8001_ha->forensic_info.data_buf.read_len = 0; 116 + pm8001_ha->forensic_preserved_accumulated_transfer = 0; 117 + 118 + /* Write signature to fatal dump table */ 119 + pm8001_mw32(fatal_table_address, 120 + MPI_FATAL_EDUMP_TABLE_SIGNATURE, 0x1234abcd); 118 121 119 122 pm8001_ha->forensic_info.data_buf.direct_data = direct_data; 120 - 123 + PM8001_IO_DBG(pm8001_ha, 124 + pm8001_printk("ossaHwCB: status1 %d\n", status)); 125 +
PM8001_IO_DBG(pm8001_ha, 126 + pm8001_printk("ossaHwCB: read_len 0x%x\n", 127 + pm8001_ha->forensic_info.data_buf.read_len)); 128 + PM8001_IO_DBG(pm8001_ha, 129 + pm8001_printk("ossaHwCB: direct_len 0x%x\n", 130 + pm8001_ha->forensic_info.data_buf.direct_len)); 131 + PM8001_IO_DBG(pm8001_ha, 132 + pm8001_printk("ossaHwCB: direct_offset 0x%x\n", 133 + pm8001_ha->forensic_info.data_buf.direct_offset)); 134 + } 135 + if (pm8001_ha->forensic_info.data_buf.direct_offset == 0) { 121 136 /* start to get data */ 122 137 /* Program the MEMBASE II Shifting Register with 0x00.*/ 123 138 pm8001_cw32(pm8001_ha, 0, MEMBASE_II_SHIFT_REGISTER, ··· 149 126 /* Read until accum_len is retrived */ 150 127 accum_len = pm8001_mr32(fatal_table_address, 151 128 MPI_FATAL_EDUMP_TABLE_ACCUM_LEN); 152 - PM8001_IO_DBG(pm8001_ha, pm8001_printk("accum_len 0x%x\n", 153 - accum_len)); 129 + /* Determine length of data between previously stored transfer length 130 + * and current accumulated transfer length 131 + */ 132 + length_to_read = 133 + accum_len - pm8001_ha->forensic_preserved_accumulated_transfer; 134 + PM8001_IO_DBG(pm8001_ha, 135 + pm8001_printk("get_fatal_spcv: accum_len 0x%x\n", accum_len)); 136 + PM8001_IO_DBG(pm8001_ha, 137 + pm8001_printk("get_fatal_spcv: length_to_read 0x%x\n", 138 + length_to_read)); 139 + PM8001_IO_DBG(pm8001_ha, 140 + pm8001_printk("get_fatal_spcv: last_offset 0x%x\n", 141 + pm8001_ha->forensic_last_offset)); 142 + PM8001_IO_DBG(pm8001_ha, 143 + pm8001_printk("get_fatal_spcv: read_len 0x%x\n", 144 + pm8001_ha->forensic_info.data_buf.read_len)); 145 + PM8001_IO_DBG(pm8001_ha, 146 + pm8001_printk("get_fatal_spcv:: direct_len 0x%x\n", 147 + pm8001_ha->forensic_info.data_buf.direct_len)); 148 + PM8001_IO_DBG(pm8001_ha, 149 + pm8001_printk("get_fatal_spcv:: direct_offset 0x%x\n", 150 + pm8001_ha->forensic_info.data_buf.direct_offset)); 151 + 152 + /* If accumulated length failed to read correctly fail the attempt.*/ 154 153 if (accum_len == 0xFFFFFFFF) { 155 154 
PM8001_IO_DBG(pm8001_ha, 156 155 pm8001_printk("Possible PCI issue 0x%x not expected\n", 157 - accum_len)); 158 - return -EIO; 156 + accum_len)); 157 + return status; 159 158 } 160 - if (accum_len == 0 || accum_len >= 0x100000) { 159 + /* If accumulated length is zero fail the attempt */ 160 + if (accum_len == 0) { 161 161 pm8001_ha->forensic_info.data_buf.direct_data += 162 162 sprintf(pm8001_ha->forensic_info.data_buf.direct_data, 163 - "%08x ", 0xFFFFFFFF); 163 + "%08x ", 0xFFFFFFFF); 164 164 return (char *)pm8001_ha->forensic_info.data_buf.direct_data - 165 165 (char *)buf; 166 166 } 167 + /* Accumulated length is good so start capturing the first data */ 167 168 temp = (u32 *)pm8001_ha->memoryMap.region[FORENSIC_MEM].virt_ptr; 168 169 if (pm8001_ha->forensic_fatal_step == 0) { 169 170 moreData: 171 + /* If data to read is less than SYSFS_OFFSET then reduce the 172 + * length of dataLen 173 + */ 174 + if (pm8001_ha->forensic_last_offset + SYSFS_OFFSET 175 + > length_to_read) { 176 + pm8001_ha->forensic_info.data_buf.direct_len = 177 + length_to_read - 178 + pm8001_ha->forensic_last_offset; 179 + } else { 180 + pm8001_ha->forensic_info.data_buf.direct_len = 181 + SYSFS_OFFSET; 182 + } 170 183 if (pm8001_ha->forensic_info.data_buf.direct_data) { 171 184 /* Data is in bar, copy to host memory */ 172 - pm80xx_pci_mem_copy(pm8001_ha, pm8001_ha->fatal_bar_loc, 173 - pm8001_ha->memoryMap.region[FORENSIC_MEM].virt_ptr, 174 - pm8001_ha->forensic_info.data_buf.direct_len , 175 - 1); 185 + pm80xx_pci_mem_copy(pm8001_ha, 186 + pm8001_ha->fatal_bar_loc, 187 + pm8001_ha->memoryMap.region[FORENSIC_MEM].virt_ptr, 188 + pm8001_ha->forensic_info.data_buf.direct_len, 1); 176 189 } 177 190 pm8001_ha->fatal_bar_loc += 178 191 pm8001_ha->forensic_info.data_buf.direct_len; ··· 219 160 pm8001_ha->forensic_info.data_buf.read_len = 220 161 pm8001_ha->forensic_info.data_buf.direct_len; 221 162 222 - if (pm8001_ha->forensic_last_offset >= accum_len) { 163 + if 
(pm8001_ha->forensic_last_offset >= length_to_read) { 223 164 pm8001_ha->forensic_info.data_buf.direct_data += 224 165 sprintf(pm8001_ha->forensic_info.data_buf.direct_data, 225 166 "%08x ", 3); 226 - for (index = 0; index < (SYSFS_OFFSET / 4); index++) { 167 + for (index = 0; index < 168 + (pm8001_ha->forensic_info.data_buf.direct_len 169 + / 4); index++) { 227 170 pm8001_ha->forensic_info.data_buf.direct_data += 228 - sprintf(pm8001_ha-> 229 - forensic_info.data_buf.direct_data, 230 - "%08x ", *(temp + index)); 171 + sprintf( 172 + pm8001_ha->forensic_info.data_buf.direct_data, 173 + "%08x ", *(temp + index)); 231 174 } 232 175 233 176 pm8001_ha->fatal_bar_loc = 0; 234 177 pm8001_ha->forensic_fatal_step = 1; 235 178 pm8001_ha->fatal_forensic_shift_offset = 0; 236 179 pm8001_ha->forensic_last_offset = 0; 180 + status = 0; 181 + offset = (int) 182 + ((char *)pm8001_ha->forensic_info.data_buf.direct_data 183 + - (char *)buf); 184 + PM8001_IO_DBG(pm8001_ha, 185 + pm8001_printk("get_fatal_spcv:return1 0x%x\n", offset)); 237 186 return (char *)pm8001_ha-> 238 187 forensic_info.data_buf.direct_data - 239 188 (char *)buf; ··· 251 184 sprintf(pm8001_ha-> 252 185 forensic_info.data_buf.direct_data, 253 186 "%08x ", 2); 254 - for (index = 0; index < (SYSFS_OFFSET / 4); index++) { 255 - pm8001_ha->forensic_info.data_buf.direct_data += 256 - sprintf(pm8001_ha-> 187 + for (index = 0; index < 188 + (pm8001_ha->forensic_info.data_buf.direct_len 189 + / 4); index++) { 190 + pm8001_ha->forensic_info.data_buf.direct_data 191 + += sprintf(pm8001_ha-> 257 192 forensic_info.data_buf.direct_data, 258 193 "%08x ", *(temp + index)); 259 194 } 195 + status = 0; 196 + offset = (int) 197 + ((char *)pm8001_ha->forensic_info.data_buf.direct_data 198 + - (char *)buf); 199 + PM8001_IO_DBG(pm8001_ha, 200 + pm8001_printk("get_fatal_spcv:return2 0x%x\n", offset)); 260 201 return (char *)pm8001_ha-> 261 202 forensic_info.data_buf.direct_data - 262 203 (char *)buf; ··· 274 199 
pm8001_ha->forensic_info.data_buf.direct_data += 275 200 sprintf(pm8001_ha->forensic_info.data_buf.direct_data, 276 201 "%08x ", 2); 277 - for (index = 0; index < 256; index++) { 202 + for (index = 0; index < 203 + (pm8001_ha->forensic_info.data_buf.direct_len 204 + / 4) ; index++) { 278 205 pm8001_ha->forensic_info.data_buf.direct_data += 279 206 sprintf(pm8001_ha-> 280 - forensic_info.data_buf.direct_data, 281 - "%08x ", *(temp + index)); 207 + forensic_info.data_buf.direct_data, 208 + "%08x ", *(temp + index)); 282 209 } 283 210 pm8001_ha->fatal_forensic_shift_offset += 0x100; 284 211 pm8001_cw32(pm8001_ha, 0, MEMBASE_II_SHIFT_REGISTER, 285 212 pm8001_ha->fatal_forensic_shift_offset); 286 213 pm8001_ha->fatal_bar_loc = 0; 214 + status = 0; 215 + offset = (int) 216 + ((char *)pm8001_ha->forensic_info.data_buf.direct_data 217 + - (char *)buf); 218 + PM8001_IO_DBG(pm8001_ha, 219 + pm8001_printk("get_fatal_spcv: return3 0x%x\n", offset)); 287 220 return (char *)pm8001_ha->forensic_info.data_buf.direct_data - 288 221 (char *)buf; 289 222 } 290 223 if (pm8001_ha->forensic_fatal_step == 1) { 291 - pm8001_ha->fatal_forensic_shift_offset = 0; 292 - /* Read 64K of the debug data. 
*/ 293 - pm8001_cw32(pm8001_ha, 0, MEMBASE_II_SHIFT_REGISTER, 294 - pm8001_ha->fatal_forensic_shift_offset); 295 - pm8001_mw32(fatal_table_address, 296 - MPI_FATAL_EDUMP_TABLE_HANDSHAKE, 224 + /* store previous accumulated length before triggering next 225 + * accumulated length update 226 + */ 227 + pm8001_ha->forensic_preserved_accumulated_transfer = 228 + pm8001_mr32(fatal_table_address, 229 + MPI_FATAL_EDUMP_TABLE_ACCUM_LEN); 230 + 231 + /* continue capturing the fatal log until Dump status is 0x3 */ 232 + if (pm8001_mr32(fatal_table_address, 233 + MPI_FATAL_EDUMP_TABLE_STATUS) < 234 + MPI_FATAL_EDUMP_TABLE_STAT_NF_SUCCESS_DONE) { 235 + 236 + /* reset fddstat bit by writing to zero*/ 237 + pm8001_mw32(fatal_table_address, 238 + MPI_FATAL_EDUMP_TABLE_STATUS, 0x0); 239 + 240 + /* set dump control value to '1' so that new data will 241 + * be transferred to shared memory 242 + */ 243 + pm8001_mw32(fatal_table_address, 244 + MPI_FATAL_EDUMP_TABLE_HANDSHAKE, 297 245 MPI_FATAL_EDUMP_HANDSHAKE_RDY); 298 246 299 - /* Poll FDDHSHK until clear */ 300 - start = jiffies + (2 * HZ); /* 2 sec */ 247 + /*Poll FDDHSHK until clear */ 248 + start = jiffies + (2 * HZ); /* 2 sec */ 301 249 302 - do { 303 - reg_val = pm8001_mr32(fatal_table_address, 250 + do { 251 + reg_val = pm8001_mr32(fatal_table_address, 304 252 MPI_FATAL_EDUMP_TABLE_HANDSHAKE); 305 - } while ((reg_val) && time_before(jiffies, start)); 253 + } while ((reg_val) && time_before(jiffies, start)); 306 254 307 - if (reg_val != 0) { 308 - PM8001_FAIL_DBG(pm8001_ha, 309 - pm8001_printk("TIMEOUT:MEMBASE_II_SHIFT_REGISTER" 310 - " = 0x%x\n", reg_val)); 311 - return -EIO; 312 - } 255 + if (reg_val != 0) { 256 + PM8001_FAIL_DBG(pm8001_ha, pm8001_printk( 257 + "TIMEOUT:MPI_FATAL_EDUMP_TABLE_HDSHAKE 0x%x\n", 258 + reg_val)); 259 + /* Fail the dump if a timeout occurs */ 260 + pm8001_ha->forensic_info.data_buf.direct_data += 261 + sprintf( 262 + pm8001_ha->forensic_info.data_buf.direct_data, 263 + "%08x ", 0xFFFFFFFF); 264 + 
return((char *) 265 + pm8001_ha->forensic_info.data_buf.direct_data 266 + - (char *)buf); 267 + } 268 + /* Poll status register until set to 2 or 269 + * 3 for up to 2 seconds 270 + */ 271 + start = jiffies + (2 * HZ); /* 2 sec */ 313 272 314 - /* Read the next 64K of the debug data. */ 315 - pm8001_ha->forensic_fatal_step = 0; 316 - if (pm8001_mr32(fatal_table_address, 317 - MPI_FATAL_EDUMP_TABLE_STATUS) != 318 - MPI_FATAL_EDUMP_TABLE_STAT_NF_SUCCESS_DONE) { 319 - pm8001_mw32(fatal_table_address, 320 - MPI_FATAL_EDUMP_TABLE_HANDSHAKE, 0); 321 - goto moreData; 322 - } else { 323 - pm8001_ha->forensic_info.data_buf.direct_data += 324 - sprintf(pm8001_ha-> 325 - forensic_info.data_buf.direct_data, 326 - "%08x ", 4); 327 - pm8001_ha->forensic_info.data_buf.read_len = 0xFFFFFFFF; 328 - pm8001_ha->forensic_info.data_buf.direct_len = 0; 329 - pm8001_ha->forensic_info.data_buf.direct_offset = 0; 330 - pm8001_ha->forensic_info.data_buf.read_len = 0; 273 + do { 274 + reg_val = pm8001_mr32(fatal_table_address, 275 + MPI_FATAL_EDUMP_TABLE_STATUS); 276 + } while (((reg_val != 2) || (reg_val != 3)) && 277 + time_before(jiffies, start)); 278 + 279 + if (reg_val < 2) { 280 + PM8001_FAIL_DBG(pm8001_ha, pm8001_printk( 281 + "TIMEOUT:MPI_FATAL_EDUMP_TABLE_STATUS = 0x%x\n", 282 + reg_val)); 283 + /* Fail the dump if a timeout occurs */ 284 + pm8001_ha->forensic_info.data_buf.direct_data += 285 + sprintf( 286 + pm8001_ha->forensic_info.data_buf.direct_data, 287 + "%08x ", 0xFFFFFFFF); 288 + pm8001_cw32(pm8001_ha, 0, 289 + MEMBASE_II_SHIFT_REGISTER, 290 + pm8001_ha->fatal_forensic_shift_offset); 291 + } 292 + /* Read the next block of the debug data.*/ 293 + length_to_read = pm8001_mr32(fatal_table_address, 294 + MPI_FATAL_EDUMP_TABLE_ACCUM_LEN) - 295 + pm8001_ha->forensic_preserved_accumulated_transfer; 296 + if (length_to_read != 0x0) { 297 + pm8001_ha->forensic_fatal_step = 0; 298 + goto moreData; 299 + } else { 300 + pm8001_ha->forensic_info.data_buf.direct_data += 301 + sprintf( 
302 + pm8001_ha->forensic_info.data_buf.direct_data, 303 + "%08x ", 4); 304 + pm8001_ha->forensic_info.data_buf.read_len 305 + = 0xFFFFFFFF; 306 + pm8001_ha->forensic_info.data_buf.direct_len 307 + = 0; 308 + pm8001_ha->forensic_info.data_buf.direct_offset 309 + = 0; 310 + pm8001_ha->forensic_info.data_buf.read_len = 0; 311 + } 331 312 } 332 313 } 333 - 314 + offset = (int)((char *)pm8001_ha->forensic_info.data_buf.direct_data 315 + - (char *)buf); 316 + PM8001_IO_DBG(pm8001_ha, 317 + pm8001_printk("get_fatal_spcv: return4 0x%x\n", offset)); 334 318 return (char *)pm8001_ha->forensic_info.data_buf.direct_data - 335 319 (char *)buf; 336 320 } ··· 451 317 pm8001_mr32(address, MAIN_MPI_ILA_RELEASE_TYPE); 452 318 pm8001_ha->main_cfg_tbl.pm80xx_tbl.inc_fw_version = 453 319 pm8001_mr32(address, MAIN_MPI_INACTIVE_FW_VERSION); 320 + 321 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 322 + "Main cfg table: sign:%x interface rev:%x fw_rev:%x\n", 323 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.signature, 324 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.interface_rev, 325 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.firmware_rev)); 326 + 327 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 328 + "table offset: gst:%x iq:%x oq:%x int vec:%x phy attr:%x\n", 329 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.gst_offset, 330 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.inbound_queue_offset, 331 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.outbound_queue_offset, 332 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.int_vec_table_offset, 333 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.phy_attr_table_offset)); 334 + 335 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 336 + "Main cfg table; ila rev:%x Inactive fw rev:%x\n", 337 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.ila_version, 338 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.inc_fw_version)); 454 339 } 455 340 456 341 /** ··· 674 521 pm8001_mr32(addressib, (offsetib + 0x18)); 675 522 pm8001_ha->inbnd_q_tbl[i].producer_idx = 0; 676 523 pm8001_ha->inbnd_q_tbl[i].consumer_index = 0; 524 + 525 + PM8001_DEV_DBG(pm8001_ha, 
pm8001_printk( 526 + "IQ %d pi_bar 0x%x pi_offset 0x%x\n", i, 527 + pm8001_ha->inbnd_q_tbl[i].pi_pci_bar, 528 + pm8001_ha->inbnd_q_tbl[i].pi_offset)); 677 529 } 678 530 for (i = 0; i < PM8001_MAX_SPCV_OUTB_NUM; i++) { 679 531 pm8001_ha->outbnd_q_tbl[i].element_size_cnt = ··· 707 549 pm8001_mr32(addressob, (offsetob + 0x18)); 708 550 pm8001_ha->outbnd_q_tbl[i].consumer_idx = 0; 709 551 pm8001_ha->outbnd_q_tbl[i].producer_index = 0; 552 + 553 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 554 + "OQ %d ci_bar 0x%x ci_offset 0x%x\n", i, 555 + pm8001_ha->outbnd_q_tbl[i].ci_pci_bar, 556 + pm8001_ha->outbnd_q_tbl[i].ci_offset)); 710 557 } 711 558 } 712 559 ··· 745 582 ((pm8001_ha->number_of_intr - 1) << 8); 746 583 pm8001_mw32(address, MAIN_FATAL_ERROR_INTERRUPT, 747 584 pm8001_ha->main_cfg_tbl.pm80xx_tbl.fatal_err_interrupt); 585 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 586 + "Updated Fatal error interrupt vector 0x%x\n", 587 + pm8001_mr32(address, MAIN_FATAL_ERROR_INTERRUPT))); 588 + 748 589 pm8001_mw32(address, MAIN_EVENT_CRC_CHECK, 749 590 pm8001_ha->main_cfg_tbl.pm80xx_tbl.crc_core_dump); 750 591 ··· 758 591 pm8001_ha->main_cfg_tbl.pm80xx_tbl.gpio_led_mapping |= 0x20000000; 759 592 pm8001_mw32(address, MAIN_GPIO_LED_FLAGS_OFFSET, 760 593 pm8001_ha->main_cfg_tbl.pm80xx_tbl.gpio_led_mapping); 594 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 595 + "Programming DW 0x21 in main cfg table with 0x%x\n", 596 + pm8001_mr32(address, MAIN_GPIO_LED_FLAGS_OFFSET))); 761 597 762 598 pm8001_mw32(address, MAIN_PORT_RECOVERY_TIMER, 763 599 pm8001_ha->main_cfg_tbl.pm80xx_tbl.port_recovery_timer); ··· 799 629 pm8001_ha->inbnd_q_tbl[number].ci_upper_base_addr); 800 630 pm8001_mw32(address, offset + IB_CI_BASE_ADDR_LO_OFFSET, 801 631 pm8001_ha->inbnd_q_tbl[number].ci_lower_base_addr); 632 + 633 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 634 + "IQ %d: Element pri size 0x%x\n", 635 + number, 636 + pm8001_ha->inbnd_q_tbl[number].element_pri_size_cnt)); 637 + 638 + PM8001_DEV_DBG(pm8001_ha, 
pm8001_printk( 639 + "IQ upr base addr 0x%x IQ lwr base addr 0x%x\n", 640 + pm8001_ha->inbnd_q_tbl[number].upper_base_addr, 641 + pm8001_ha->inbnd_q_tbl[number].lower_base_addr)); 642 + 643 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 644 + "CI upper base addr 0x%x CI lower base addr 0x%x\n", 645 + pm8001_ha->inbnd_q_tbl[number].ci_upper_base_addr, 646 + pm8001_ha->inbnd_q_tbl[number].ci_lower_base_addr)); 802 647 } 803 648 804 649 /** ··· 837 652 pm8001_ha->outbnd_q_tbl[number].pi_lower_base_addr); 838 653 pm8001_mw32(address, offset + OB_INTERRUPT_COALES_OFFSET, 839 654 pm8001_ha->outbnd_q_tbl[number].interrup_vec_cnt_delay); 655 + 656 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 657 + "OQ %d: Element pri size 0x%x\n", 658 + number, 659 + pm8001_ha->outbnd_q_tbl[number].element_size_cnt)); 660 + 661 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 662 + "OQ upr base addr 0x%x OQ lwr base addr 0x%x\n", 663 + pm8001_ha->outbnd_q_tbl[number].upper_base_addr, 664 + pm8001_ha->outbnd_q_tbl[number].lower_base_addr)); 665 + 666 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 667 + "PI upper base addr 0x%x PI lower base addr 0x%x\n", 668 + pm8001_ha->outbnd_q_tbl[number].pi_upper_base_addr, 669 + pm8001_ha->outbnd_q_tbl[number].pi_lower_base_addr)); 840 670 } 841 671 842 672 /** ··· 869 669 pm8001_cw32(pm8001_ha, 0, MSGU_IBDB_SET, SPCv_MSGU_CFG_TABLE_UPDATE); 870 670 /* wait until Inbound DoorBell Clear Register toggled */ 871 671 if (IS_SPCV_12G(pm8001_ha->pdev)) { 872 - max_wait_count = 4 * 1000 * 1000;/* 4 sec */ 672 + max_wait_count = SPCV_DOORBELL_CLEAR_TIMEOUT; 873 673 } else { 874 - max_wait_count = 2 * 1000 * 1000;/* 2 sec */ 674 + max_wait_count = SPC_DOORBELL_CLEAR_TIMEOUT; 875 675 } 876 676 do { 877 677 udelay(1); ··· 997 797 value = pm8001_cr32(pm8001_ha, 0, MSGU_SCRATCH_PAD_0); 998 798 offset = value & 0x03FFFFFF; /* scratch pad 0 TBL address */ 999 799 1000 - PM8001_INIT_DBG(pm8001_ha, 800 + PM8001_DEV_DBG(pm8001_ha, 1001 801 pm8001_printk("Scratchpad 0 Offset: 0x%x value 
0x%x\n", 1002 802 offset, value)); 1003 803 pcilogic = (value & 0xFC000000) >> 26; ··· 1085 885 (THERMAL_ENABLE << 8) | page_code; 1086 886 payload.cfg_pg[1] = (LTEMPHIL << 24) | (RTEMPHIL << 8); 1087 887 1088 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 888 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 889 + "Setting up thermal config. cfg_pg 0 0x%x cfg_pg 1 0x%x\n", 890 + payload.cfg_pg[0], payload.cfg_pg[1])); 891 + 892 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 893 + sizeof(payload), 0); 1089 894 if (rc) 1090 895 pm8001_tag_free(pm8001_ha, tag); 1091 896 return rc; ··· 1172 967 memcpy(&payload.cfg_pg, &SASConfigPage, 1173 968 sizeof(SASProtocolTimerConfig_t)); 1174 969 1175 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 970 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 971 + sizeof(payload), 0); 1176 972 if (rc) 1177 973 pm8001_tag_free(pm8001_ha, tag); 1178 974 ··· 1296 1090 payload.new_curidx_ksop = ((1 << 24) | (1 << 16) | (1 << 8) | 1297 1091 KEK_MGMT_SUBOP_KEYCARDUPDATE); 1298 1092 1299 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 1093 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 1094 + "Saving Encryption info to flash. 
payload 0x%x\n", 1095 + payload.new_curidx_ksop)); 1096 + 1097 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 1098 + sizeof(payload), 0); 1300 1099 if (rc) 1301 1100 pm8001_tag_free(pm8001_ha, tag); 1302 1101 ··· 1452 1241 pm8001_printk("reset register before write : 0x%x\n", regval)); 1453 1242 1454 1243 pm8001_cw32(pm8001_ha, 0, SPC_REG_SOFT_RESET, SPCv_NORMAL_RESET_VALUE); 1455 - mdelay(500); 1244 + msleep(500); 1456 1245 1457 1246 regval = pm8001_cr32(pm8001_ha, 0, SPC_REG_SOFT_RESET); 1458 1247 PM8001_INIT_DBG(pm8001_ha, ··· 1654 1443 task_abort.device_id = cpu_to_le32(pm8001_ha_dev->device_id); 1655 1444 task_abort.tag = cpu_to_le32(ccb_tag); 1656 1445 1657 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort, 0); 1446 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort, 1447 + sizeof(task_abort), 0); 1448 + PM8001_FAIL_DBG(pm8001_ha, 1449 + pm8001_printk("Executing abort task end\n")); 1658 1450 if (ret) { 1659 1451 sas_free_task(task); 1660 1452 pm8001_tag_free(pm8001_ha, ccb_tag); ··· 1733 1519 sata_cmd.ncqtag_atap_dir_m_dad |= ((0x1 << 7) | (0x5 << 9)); 1734 1520 memcpy(&sata_cmd.sata_fis, &fis, sizeof(struct host_to_dev_fis)); 1735 1521 1736 - res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd, 0); 1522 + res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd, 1523 + sizeof(sata_cmd), 0); 1524 + PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("Executing read log end\n")); 1737 1525 if (res) { 1738 1526 sas_free_task(task); 1739 1527 pm8001_tag_free(pm8001_ha, ccb_tag); ··· 1786 1570 if (unlikely(!t || !t->lldd_task || !t->dev)) 1787 1571 return; 1788 1572 ts = &t->task_status; 1573 + 1574 + PM8001_DEV_DBG(pm8001_ha, pm8001_printk( 1575 + "tag::0x%x, status::0x%x task::0x%p\n", tag, status, t)); 1576 + 1789 1577 /* Print sas address of IO failed device */ 1790 1578 if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) && 1791 1579 (status != IO_UNDERFLOW)) ··· 1992 1772 ts->open_rej_reason = 
SAS_OREJ_RSVD_RETRY; 1993 1773 break; 1994 1774 default: 1995 - PM8001_IO_DBG(pm8001_ha, 1775 + PM8001_DEVIO_DBG(pm8001_ha, 1996 1776 pm8001_printk("Unknown status 0x%x\n", status)); 1997 1777 /* not allowed case. Therefore, return failed status */ 1998 1778 ts->resp = SAS_TASK_COMPLETE; ··· 2046 1826 if (unlikely(!t || !t->lldd_task || !t->dev)) 2047 1827 return; 2048 1828 ts = &t->task_status; 2049 - PM8001_IO_DBG(pm8001_ha, 1829 + PM8001_IOERR_DBG(pm8001_ha, 2050 1830 pm8001_printk("port_id:0x%x, tag:0x%x, event:0x%x\n", 2051 1831 port_id, tag, event)); 2052 1832 switch (event) { ··· 2183 1963 ts->stat = SAS_DATA_OVERRUN; 2184 1964 break; 2185 1965 case IO_XFER_ERROR_INTERNAL_CRC_ERROR: 2186 - PM8001_IO_DBG(pm8001_ha, 1966 + PM8001_IOERR_DBG(pm8001_ha, 2187 1967 pm8001_printk("IO_XFR_ERROR_INTERNAL_CRC_ERROR\n")); 2188 1968 /* TBC: used default set values */ 2189 1969 ts->resp = SAS_TASK_COMPLETE; ··· 2194 1974 pm8001_printk("IO_XFER_CMD_FRAME_ISSUED\n")); 2195 1975 return; 2196 1976 default: 2197 - PM8001_IO_DBG(pm8001_ha, 1977 + PM8001_DEVIO_DBG(pm8001_ha, 2198 1978 pm8001_printk("Unknown status 0x%x\n", event)); 2199 1979 /* not allowed case. Therefore, return failed status */ 2200 1980 ts->resp = SAS_TASK_COMPLETE; ··· 2282 2062 pm8001_printk("ts null\n")); 2283 2063 return; 2284 2064 } 2065 + 2066 + if (unlikely(status)) 2067 + PM8001_IOERR_DBG(pm8001_ha, pm8001_printk( 2068 + "status:0x%x, tag:0x%x, task::0x%p\n", 2069 + status, tag, t)); 2070 + 2285 2071 /* Print sas address of IO failed device */ 2286 2072 if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) && 2287 2073 (status != IO_UNDERFLOW)) { ··· 2591 2365 ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; 2592 2366 break; 2593 2367 default: 2594 - PM8001_IO_DBG(pm8001_ha, 2368 + PM8001_DEVIO_DBG(pm8001_ha, 2595 2369 pm8001_printk("Unknown status 0x%x\n", status)); 2596 2370 /* not allowed case. 
Therefore, return failed status */ 2597 2371 ts->resp = SAS_TASK_COMPLETE; ··· 2608 2382 pm8001_printk("task 0x%p done with io_status 0x%x" 2609 2383 " resp 0x%x stat 0x%x but aborted by upper layer!\n", 2610 2384 t, status, ts->resp, ts->stat)); 2385 + if (t->slow_task) 2386 + complete(&t->slow_task->completion); 2611 2387 pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2612 2388 } else { 2613 2389 spin_unlock_irqrestore(&t->task_state_lock, flags); ··· 2663 2435 } 2664 2436 2665 2437 ts = &t->task_status; 2666 - PM8001_IO_DBG(pm8001_ha, 2438 + PM8001_IOERR_DBG(pm8001_ha, 2667 2439 pm8001_printk("port_id:0x%x, tag:0x%x, event:0x%x\n", 2668 2440 port_id, tag, event)); 2669 2441 switch (event) { ··· 2883 2655 if (unlikely(!t || !t->lldd_task || !t->dev)) 2884 2656 return; 2885 2657 2658 + PM8001_DEV_DBG(pm8001_ha, 2659 + pm8001_printk("tag::0x%x status::0x%x\n", tag, status)); 2660 + 2886 2661 switch (status) { 2887 2662 2888 2663 case IO_SUCCESS: ··· 3053 2822 ts->open_rej_reason = SAS_OREJ_RSVD_RETRY; 3054 2823 break; 3055 2824 default: 3056 - PM8001_IO_DBG(pm8001_ha, 2825 + PM8001_DEVIO_DBG(pm8001_ha, 3057 2826 pm8001_printk("Unknown status 0x%x\n", status)); 3058 2827 ts->resp = SAS_TASK_COMPLETE; 3059 2828 ts->stat = SAS_DEV_NO_RESPONSE; ··· 3104 2873 ((phyId & 0xFF) << 24) | (port_id & 0xFF)); 3105 2874 payload.param0 = cpu_to_le32(param0); 3106 2875 payload.param1 = cpu_to_le32(param1); 3107 - pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 2876 + pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 2877 + sizeof(payload), 0); 3108 2878 } 3109 2879 3110 2880 static int pm80xx_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha, ··· 3196 2964 pm8001_get_lrate_mode(phy, link_rate); 3197 2965 break; 3198 2966 default: 3199 - PM8001_MSG_DBG(pm8001_ha, 2967 + PM8001_DEVIO_DBG(pm8001_ha, 3200 2968 pm8001_printk("unknown device type(%x)\n", deviceType)); 3201 2969 break; 3202 2970 } ··· 3216 2984 pm8001_get_attached_sas_addr(phy, 
phy->sas_phy.attached_sas_addr); 3217 2985 spin_unlock_irqrestore(&phy->sas_phy.frame_rcvd_lock, flags); 3218 2986 if (pm8001_ha->flags == PM8001F_RUN_TIME) 3219 - mdelay(200);/*delay a moment to wait disk to spinup*/ 2987 + msleep(200);/*delay a moment to wait disk to spinup*/ 3220 2988 pm8001_bytes_dmaed(pm8001_ha, phy_id); 3221 2989 } 3222 2990 ··· 3245 3013 struct sas_ha_struct *sas_ha = pm8001_ha->sas; 3246 3014 struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; 3247 3015 unsigned long flags; 3248 - PM8001_MSG_DBG(pm8001_ha, pm8001_printk( 3016 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk( 3249 3017 "port id %d, phy id %d link_rate %d portstate 0x%x\n", 3250 3018 port_id, phy_id, link_rate, portstate)); 3251 3019 ··· 3333 3101 break; 3334 3102 default: 3335 3103 port->port_attached = 0; 3336 - PM8001_MSG_DBG(pm8001_ha, 3104 + PM8001_DEVIO_DBG(pm8001_ha, 3337 3105 pm8001_printk(" Phy Down and(default) = 0x%x\n", 3338 3106 portstate)); 3339 3107 break; ··· 3362 3130 if (status == 0) { 3363 3131 phy->phy_state = PHY_LINK_DOWN; 3364 3132 if (pm8001_ha->flags == PM8001F_RUN_TIME && 3365 - phy->enable_completion != NULL) 3133 + phy->enable_completion != NULL) { 3366 3134 complete(phy->enable_completion); 3135 + phy->enable_completion = NULL; 3136 + } 3367 3137 } 3368 3138 return 0; 3369 3139 ··· 3425 3191 struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; 3426 3192 struct pm8001_port *port = &pm8001_ha->port[port_id]; 3427 3193 struct asd_sas_phy *sas_phy = sas_ha->sas_phy[phy_id]; 3428 - PM8001_MSG_DBG(pm8001_ha, 3194 + PM8001_DEV_DBG(pm8001_ha, 3429 3195 pm8001_printk("portid:%d phyid:%d event:0x%x status:0x%x\n", 3430 3196 port_id, phy_id, eventType, status)); 3431 3197 ··· 3610 3376 pm8001_printk("EVENT_BROADCAST_ASYNCH_EVENT\n")); 3611 3377 break; 3612 3378 default: 3613 - PM8001_MSG_DBG(pm8001_ha, 3379 + PM8001_DEVIO_DBG(pm8001_ha, 3614 3380 pm8001_printk("Unknown event type 0x%x\n", eventType)); 3615 3381 break; 3616 3382 } ··· 3992 3758 
ssp_coalesced_comp_resp(pm8001_ha, piomb); 3993 3759 break; 3994 3760 default: 3995 - PM8001_MSG_DBG(pm8001_ha, pm8001_printk( 3761 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk( 3996 3762 "Unknown outbound Queue IOMB OPC = 0x%x\n", opc)); 3997 3763 break; 3998 3764 } ··· 4225 3991 4226 3992 build_smp_cmd(pm8001_dev->device_id, smp_cmd.tag, 4227 3993 &smp_cmd, pm8001_ha->smp_exp_mode, length); 4228 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, 4229 - (u32 *)&smp_cmd, 0); 3994 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &smp_cmd, 3995 + sizeof(smp_cmd), 0); 4230 3996 if (rc) 4231 3997 goto err_out_2; 4232 3998 return 0; ··· 4434 4200 } 4435 4201 q_index = (u32) (pm8001_dev->id & 0x00ffffff) % PM8001_MAX_OUTB_NUM; 4436 4202 ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, 4437 - &ssp_cmd, q_index); 4203 + &ssp_cmd, sizeof(ssp_cmd), q_index); 4438 4204 return ret; 4439 4205 } 4440 4206 ··· 4675 4441 } 4676 4442 q_index = (u32) (pm8001_ha_dev->id & 0x00ffffff) % PM8001_MAX_OUTB_NUM; 4677 4443 ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, 4678 - &sata_cmd, q_index); 4444 + &sata_cmd, sizeof(sata_cmd), q_index); 4679 4445 return ret; 4680 4446 } 4681 4447 ··· 4699 4465 4700 4466 PM8001_INIT_DBG(pm8001_ha, 4701 4467 pm8001_printk("PHY START REQ for phy_id %d\n", phy_id)); 4702 - /* 4703 - ** [0:7] PHY Identifier 4704 - ** [8:11] link rate 1.5G, 3G, 6G 4705 - ** [12:13] link mode 01b SAS mode; 10b SATA mode; 11b Auto mode 4706 - ** [14] 0b disable spin up hold; 1b enable spin up hold 4707 - ** [15] ob no change in current PHY analig setup 1b enable using SPAST 4708 - */ 4709 - if (!IS_SPCV_12G(pm8001_ha->pdev)) 4710 - payload.ase_sh_lm_slr_phyid = cpu_to_le32(SPINHOLD_DISABLE | 4711 - LINKMODE_AUTO | LINKRATE_15 | 4712 - LINKRATE_30 | LINKRATE_60 | phy_id); 4713 - else 4714 - payload.ase_sh_lm_slr_phyid = cpu_to_le32(SPINHOLD_DISABLE | 4715 - LINKMODE_AUTO | LINKRATE_15 | 4716 - LINKRATE_30 | LINKRATE_60 | LINKRATE_120 | 4717 - phy_id); 4718 
4468 4469 + payload.ase_sh_lm_slr_phyid = cpu_to_le32(SPINHOLD_DISABLE | 4470 + LINKMODE_AUTO | pm8001_ha->link_rate | phy_id); 4719 4471 /* SSC Disable and SAS Analog ST configuration */ 4720 4472 /** 4721 4473 payload.ase_sh_lm_slr_phyid = ··· 4714 4494 payload.sas_identify.dev_type = SAS_END_DEVICE; 4715 4495 payload.sas_identify.initiator_bits = SAS_PROTOCOL_ALL; 4716 4496 memcpy(payload.sas_identify.sas_addr, 4717 - &pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE); 4497 + &pm8001_ha->sas_addr, SAS_ADDR_SIZE); 4718 4498 payload.sas_identify.phy_id = phy_id; 4719 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload, 0); 4499 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload, 4500 + sizeof(payload), 0); 4720 4501 return ret; 4721 4502 } 4722 4503 ··· 4739 4518 memset(&payload, 0, sizeof(payload)); 4740 4519 payload.tag = cpu_to_le32(tag); 4741 4520 payload.phy_id = cpu_to_le32(phy_id); 4742 - ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload, 0); 4521 + ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opcode, &payload, 4522 + sizeof(payload), 0); 4743 4523 return ret; 4744 4524 } 4745 4525 ··· 4806 4584 memcpy(payload.sas_addr, pm8001_dev->sas_device->sas_addr, 4807 4585 SAS_ADDR_SIZE); 4808 4586 4809 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 4587 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 4588 + sizeof(payload), 0); 4810 4589 if (rc) 4811 4590 pm8001_tag_free(pm8001_ha, tag); 4812 4591 ··· 4837 4614 payload.tag = cpu_to_le32(tag); 4838 4615 payload.phyop_phyid = 4839 4616 cpu_to_le32(((phy_op & 0xFF) << 8) | (phyId & 0xFF)); 4840 - return pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 4617 + return pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 4618 + sizeof(payload), 0); 4841 4619 } 4842 4620 4843 4621 static u32 pm80xx_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha) ··· 4865 4641 pm80xx_chip_isr(struct pm8001_hba_info 
*pm8001_ha, u8 vec) 4866 4642 { 4867 4643 pm80xx_chip_interrupt_disable(pm8001_ha, vec); 4644 + PM8001_DEVIO_DBG(pm8001_ha, pm8001_printk( 4645 + "irq vec %d, ODMR:0x%x\n", 4646 + vec, pm8001_cr32(pm8001_ha, 0, 0x30))); 4868 4647 process_oq(pm8001_ha, vec); 4869 4648 pm80xx_chip_interrupt_enable(pm8001_ha, vec); 4870 4649 return IRQ_HANDLED; ··· 4896 4669 payload.reserved[j] = cpu_to_le32(*((u32 *)buf + i)); 4897 4670 j++; 4898 4671 } 4899 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 4672 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 4673 + sizeof(payload), 0); 4900 4674 if (rc) 4901 4675 pm8001_tag_free(pm8001_ha, tag); 4902 4676 } ··· 4939 4711 for (i = 0; i < length; i++) 4940 4712 payload.reserved[i] = cpu_to_le32(*(buf + i)); 4941 4713 4942 - rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 0); 4714 + rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload, 4715 + sizeof(payload), 0); 4943 4716 if (rc) 4944 4717 pm8001_tag_free(pm8001_ha, tag); 4945 4718
+3
drivers/scsi/pm8001/pm80xx_hwi.h
··· 220 220 #define SAS_DOPNRJT_RTRY_TMO 128 221 221 #define SAS_COPNRJT_RTRY_TMO 128 222 222 223 + #define SPCV_DOORBELL_CLEAR_TIMEOUT (30 * 1000 * 1000) /* 30 sec */ 224 + #define SPC_DOORBELL_CLEAR_TIMEOUT (15 * 1000 * 1000) /* 15 sec */ 225 + 223 226 /* 224 227 Making ORR bigger than IT NEXUS LOSS which is 2000000us = 2 second. 225 228 Assuming a bigger value 3 second, 3000000/128 = 23437.5 where 128
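These new constants replace the bare `4 * 1000 * 1000` / `2 * 1000 * 1000` literals in the configuration-table doorbell-clear wait in `pm80xx_hwi.c` (raising the budgets to 30 s and 15 s). Each unit is one iteration of a `udelay(1)` loop, i.e. one microsecond. A hedged userspace sketch of that polling shape, with the register reads modeled as an array of successive values (`wait_doorbell_clear` and the bit mask are illustrative names, not driver code):

```c
/* Values from the patch: iteration counts of a udelay(1) loop,
 * i.e. microseconds (30 s for 12G SPCv parts, 15 s for SPC). */
#define SPCV_DOORBELL_CLEAR_TIMEOUT (30 * 1000 * 1000) /* 30 sec */
#define SPC_DOORBELL_CLEAR_TIMEOUT  (15 * 1000 * 1000) /* 15 sec */

#define DOORBELL_BIT 0x1 /* illustrative stand-in for the update bit */

/* Simulated poll: regs[i] is the i-th read of the doorbell register.
 * Returns the 1-based read count on which the bit cleared, or 0 if
 * the microsecond budget ran out first. */
static unsigned long wait_doorbell_clear(unsigned long max_wait_count,
                                         const unsigned int *regs)
{
    unsigned long i = 0;

    do {
        /* the driver does udelay(1) then pm8001_cr32() here */
        if ((regs[i++] & DOORBELL_BIT) == 0)
            return i;
    } while (--max_wait_count);

    return 0; /* timed out */
}
```

The driver-side loop is identical in shape: read, test the bit, decrement the budget, and report a failure (`-EBUSY`-style path) when the count reaches zero.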
+1 -1
drivers/scsi/qedf/qedf_dbg.h
··· 42 42 #define QEDF_LOG_LPORT 0x4000 /* lport logs */ 43 43 #define QEDF_LOG_ELS 0x8000 /* ELS logs */ 44 44 #define QEDF_LOG_NPIV 0x10000 /* NPIV logs */ 45 - #define QEDF_LOG_SESS 0x20000 /* Conection setup, cleanup */ 45 + #define QEDF_LOG_SESS 0x20000 /* Connection setup, cleanup */ 46 46 #define QEDF_LOG_TID 0x80000 /* 47 47 * FW TID context acquire 48 48 * free
+8
drivers/scsi/qedf/qedf_main.c
···
1926 1926 	return 0;
1927 1927 }
1928 1928
1929 + static void qedf_get_host_port_id(struct Scsi_Host *shost)
1930 + {
1931 + 	struct fc_lport *lport = shost_priv(shost);
1932 +
1933 + 	fc_host_port_id(shost) = lport->port_id;
1934 + }
1935 +
1929 1936 static struct fc_host_statistics *qedf_fc_get_host_stats(struct Scsi_Host
1930 1937 	*shost)
1931 1938 {
···
2003 1996 	.show_host_active_fc4s = 1,
2004 1997 	.show_host_maxframe_size = 1,
2005 1998
1999 + 	.get_host_port_id = qedf_get_host_port_id,
2006 2000 	.show_host_port_id = 1,
2007 2001 	.show_host_supported_speeds = 1,
2008 2002 	.get_host_speed = fc_get_host_speed,
+1 -1
drivers/scsi/qedi/qedi_dbg.h
···
44 44 #define QEDI_LOG_LPORT	0x4000	/* lport logs */
45 45 #define QEDI_LOG_ELS	0x8000	/* ELS logs */
46 46 #define QEDI_LOG_NPIV	0x10000	/* NPIV logs */
47 - #define QEDI_LOG_SESS	0x20000	/* Conection setup, cleanup */
47 + #define QEDI_LOG_SESS	0x20000	/* Connection setup, cleanup */
48 48 #define QEDI_LOG_UIO	0x40000	/* iSCSI UIO logs */
49 49 #define QEDI_LOG_TID	0x80000 /* FW TID context acquire,
50 50 				 * free
+3 -1
drivers/scsi/qla2xxx/qla_attr.c
···
102 102 		qla8044_idc_lock(ha);
103 103 		qla82xx_set_reset_owner(vha);
104 104 		qla8044_idc_unlock(ha);
105 - 	} else
105 + 	} else {
106 + 		ha->fw_dump_mpi = 1;
106 107 		qla2x00_system_error(vha);
108 + 	}
107 109 	break;
108 110 case 4:
109 111 	if (IS_P3P_TYPE(ha)) {
+31 -3
drivers/scsi/qla2xxx/qla_def.h
···
591 591 	 */
592 592 	uint8_t cmd_type;
593 593 	uint8_t pad[3];
594 - 	atomic_t ref_count;
595 594 	struct kref cmd_kref;	/* need to migrate ref_count over to this */
596 595 	void *priv;
597 596 	wait_queue_head_t nvme_ls_waitq;
598 597 	struct fc_port *fcport;
599 598 	struct scsi_qla_host *vha;
600 599 	unsigned int start_timer:1;
600 + 	unsigned int abort:1;
601 + 	unsigned int aborted:1;
602 + 	unsigned int completed:1;
603 +
601 604 	uint32_t handle;
602 605 	uint16_t flags;
603 606 	uint16_t type;
604 607 	const char *name;
605 608 	int iocbs;
606 609 	struct qla_qpair *qpair;
610 + 	struct srb *cmd_sp;
607 611 	struct list_head elem;
608 612 	u32 gen1;	/* scratch */
609 613 	u32 gen2;	/* scratch */
···
2281 2277 	uint8_t fabric_port_name[WWN_SIZE];
2282 2278 	uint16_t fp_speed;
2283 2279 	uint8_t fc4_type;
2284 - 	uint8_t fc4f_nvme;	/* nvme fc4 feature bits */
2280 + 	uint8_t fc4_features;
2285 2281 } sw_info_t;
2286 2282
2287 2283 /* FCP-4 types */
···
2449 2445 	u32 supported_classes;
2450 2446
2451 2447 	uint8_t fc4_type;
2452 - 	uint8_t fc4f_nvme;
2448 + 	uint8_t fc4_features;
2453 2449 	uint8_t scan_state;
2454 2450
2455 2451 	unsigned long last_queue_full;
···
2479 2475 	u16 n2n_link_reset_cnt;
2480 2476 	u16 n2n_chip_reset;
2481 2477 } fc_port_t;
2478 +
2479 + enum {
2480 + 	FC4_PRIORITY_NVME = 1,
2481 + 	FC4_PRIORITY_FCP = 2,
2482 + };
2482 2483
2483 2484 #define QLA_FCPORT_SCAN		1
2484 2485 #define QLA_FCPORT_FOUND	2
···
4300 4291 	atomic_t nvme_active_aen_cnt;
4301 4292 	uint16_t nvme_last_rptd_aen;	/* Last recorded aen count */
4302 4293
4294 + 	uint8_t fc4_type_priority;
4295 +
4303 4296 	atomic_t zio_threshold;
4304 4297 	uint16_t last_zio_threshold;
4305 4298
···
4826 4815 	((ha->prev_topology == ISP_CFG_N && !ha->current_topology) || \
4827 4816 	 ha->current_topology == ISP_CFG_N || \
4828 4817 	 !ha->current_topology)
4818 +
4819 + #define NVME_TYPE(fcport) \
4820 + 	(fcport->fc4_type & FS_FC4TYPE_NVME) \
4821 +
4822 + #define FCP_TYPE(fcport) \
4823 + 	(fcport->fc4_type & FS_FC4TYPE_FCP) \
4824 +
4825 + #define NVME_ONLY_TARGET(fcport) \
4826 + 	(NVME_TYPE(fcport) && !FCP_TYPE(fcport)) \
4827 +
4828 + #define NVME_FCP_TARGET(fcport) \
4829 + 	(FCP_TYPE(fcport) && NVME_TYPE(fcport)) \
4830 +
4831 + #define NVME_TARGET(ha, fcport) \
4832 + 	((NVME_FCP_TARGET(fcport) && \
4833 + 	 (ha->fc4_type_priority == FC4_PRIORITY_NVME)) || \
4834 + 	 NVME_ONLY_TARGET(fcport)) \
4829 4835
4830 4836 #include "qla_target.h"
4831 4837 #include "qla_gbl.h"
+2
drivers/scsi/qla2xxx/qla_fw.h
···
2101 2101 #define FA_FLASH_LAYOUT_ADDR_83	(0x3F1000/4)
2102 2102 #define FA_FLASH_LAYOUT_ADDR_28	(0x11000/4)
2103 2103
2104 + #define NVRAM_DUAL_FCP_NVME_FLAG_OFFSET	0x196
2105 +
2104 2106 #endif
+1
drivers/scsi/qla2xxx/qla_gbl.h
···
917 917
918 918 /* nvme.c */
919 919 void qla_nvme_unregister_remote_port(struct fc_port *fcport);
920 + void qla_handle_els_plogi_done(scsi_qla_host_t *vha, struct event_arg *ea);
920 921 #endif /* _QLA_GBL_H */
+38 -28
drivers/scsi/qla2xxx/qla_gs.c
···
248 248 		WWN_SIZE);
249 249
250 250 	fcport->fc4_type = (ct_rsp->rsp.ga_nxt.fc4_types[2] & BIT_0) ?
251 - 		FC4_TYPE_FCP_SCSI : FC4_TYPE_OTHER;
251 + 		FS_FC4TYPE_FCP : FC4_TYPE_OTHER;
252 252
253 253 	if (ct_rsp->rsp.ga_nxt.port_type != NS_N_PORT_TYPE &&
254 254 	    ct_rsp->rsp.ga_nxt.port_type != NS_NL_PORT_TYPE)
···
2887 2887 	struct ct_sns_req *ct_req;
2888 2888 	struct ct_sns_rsp *ct_rsp;
2889 2889 	struct qla_hw_data *ha = vha->hw;
2890 - 	uint8_t fcp_scsi_features = 0;
2890 + 	uint8_t fcp_scsi_features = 0, nvme_features = 0;
2891 2891 	struct ct_arg arg;
2892 2892
2893 2893 	for (i = 0; i < ha->max_fibre_devices; i++) {
···
2933 2933 			ct_rsp->rsp.gff_id.fc4_features[GFF_FCP_SCSI_OFFSET];
2934 2934 		fcp_scsi_features &= 0x0f;
2935 2935
2936 - 		if (fcp_scsi_features)
2937 - 			list[i].fc4_type = FC4_TYPE_FCP_SCSI;
2938 - 		else
2939 - 			list[i].fc4_type = FC4_TYPE_OTHER;
2936 + 		if (fcp_scsi_features) {
2937 + 			list[i].fc4_type = FS_FC4TYPE_FCP;
2938 + 			list[i].fc4_features = fcp_scsi_features;
2939 + 		}
2940 2940
2941 - 		list[i].fc4f_nvme =
2941 + 		nvme_features =
2942 2942 			ct_rsp->rsp.gff_id.fc4_features[GFF_NVME_OFFSET];
2943 - 		list[i].fc4f_nvme &= 0xf;
2943 + 		nvme_features &= 0xf;
2944 +
2945 + 		if (nvme_features) {
2946 + 			list[i].fc4_type |= FS_FC4TYPE_NVME;
2947 + 			list[i].fc4_features = nvme_features;
2948 + 		}
2944 2949 	}
2945 2950
2946 2951 	/* Last device exit. */
···
3010 3005 	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
3011 3006
3012 3007 	if (res == QLA_FUNCTION_TIMEOUT)
3013 - 		return;
3008 + 		goto done;
3014 3009
3015 3010 	if (res == (DID_ERROR << 16)) {
3016 3011 		/* entry status error */
···
3440 3435 	fc_port_t *fcport = sp->fcport;
3441 3436 	struct ct_sns_rsp *ct_rsp;
3442 3437 	struct event_arg ea;
3438 + 	uint8_t fc4_scsi_feat;
3439 + 	uint8_t fc4_nvme_feat;
3443 3440
3444 3441 	ql_dbg(ql_dbg_disc, vha, 0x2133,
3445 3442 	    "Async done-%s res %x ID %x. %8phC\n",
···
3449 3442
3450 3443 	fcport->flags &= ~FCF_ASYNC_SENT;
3451 3444 	ct_rsp = &fcport->ct_desc.ct_sns->p.rsp;
3445 + 	fc4_scsi_feat = ct_rsp->rsp.gff_id.fc4_features[GFF_FCP_SCSI_OFFSET];
3446 + 	fc4_nvme_feat = ct_rsp->rsp.gff_id.fc4_features[GFF_NVME_OFFSET];
3447 +
3452 3448 	/*
3453 3449 	 * FC-GS-7, 5.2.3.12 FC-4 Features - format
3454 3450 	 * The format of the FC-4 Features object, as defined by the FC-4,
3455 3451 	 * Shall be an array of 4-bit values, one for each type code value
3456 3452 	 */
3457 3453 	if (!res) {
3458 - 		if (ct_rsp->rsp.gff_id.fc4_features[GFF_FCP_SCSI_OFFSET] & 0xf) {
3454 + 		if (fc4_scsi_feat & 0xf) {
3459 3455 			/* w1 b00:03 */
3460 - 			fcport->fc4_type =
3461 - 			    ct_rsp->rsp.gff_id.fc4_features[GFF_FCP_SCSI_OFFSET];
3462 - 			fcport->fc4_type &= 0xf;
3463 - 		}
3456 + 			fcport->fc4_type = FS_FC4TYPE_FCP;
3457 + 			fcport->fc4_features = fc4_scsi_feat & 0xf;
3458 + 		}
3464 3459
3465 - 		if (ct_rsp->rsp.gff_id.fc4_features[GFF_NVME_OFFSET] & 0xf) {
3460 + 		if (fc4_nvme_feat & 0xf) {
3466 3461 			/* w5 [00:03]/28h */
3467 - 			fcport->fc4f_nvme =
3468 - 			    ct_rsp->rsp.gff_id.fc4_features[GFF_NVME_OFFSET];
3469 - 			fcport->fc4f_nvme &= 0xf;
3462 + 			fcport->fc4_type |= FS_FC4TYPE_NVME;
3463 + 			fcport->fc4_features = fc4_nvme_feat & 0xf;
3470 3464 		}
3471 3465 	}
3472 3466
···
3571 3563 	u8 recheck = 0;
3572 3564 	u16 dup = 0, dup_cnt = 0;
3573 3565
3574 - 	ql_dbg(ql_dbg_disc, vha, 0xffff,
3566 + 	ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
3575 3567 	    "%s enter\n", __func__);
3576 3568
3577 3569 	if (sp->gen1 != vha->hw->base_qpair->chip_reset) {
···
3588 3580 		set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
3589 3581 		set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
3590 3582 	} else {
3591 - 		ql_dbg(ql_dbg_disc, vha, 0xffff,
3592 - 		    "Fabric scan failed on all retries.\n");
3583 + 		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
3584 + 		    "%s: Fabric scan failed for %d retries.\n",
3585 + 		    __func__, vha->scan.scan_retry);
3593 3586 	}
3594 3587 	goto out;
3595 3588 }
···
4056 4047
4057 4048 void qla24xx_async_gpnft_done(scsi_qla_host_t *vha, srb_t *sp)
4058 4049 {
4059 - 	ql_dbg(ql_dbg_disc, vha, 0xffff,
4050 + 	ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
4060 4051 	    "%s enter\n", __func__);
4061 4052 	qla24xx_async_gnnft(vha, sp, sp->gen2);
4062 4053 }
···
4070 4061 	u32 rspsz;
4071 4062 	unsigned long flags;
4072 4063
4073 - 	ql_dbg(ql_dbg_disc, vha, 0xffff,
4064 + 	ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
4074 4065 	    "%s enter\n", __func__);
4075 4066
4076 4067 	if (!vha->flags.online)
···
4079 4070 	spin_lock_irqsave(&vha->work_lock, flags);
4080 4071 	if (vha->scan.scan_flags & SF_SCANNING) {
4081 4072 		spin_unlock_irqrestore(&vha->work_lock, flags);
4082 - 		ql_dbg(ql_dbg_disc, vha, 0xffff, "scan active\n");
4073 + 		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
4074 + 		    "%s: scan active\n", __func__);
4083 4075 		return rval;
4084 4076 	}
4085 4077 	vha->scan.scan_flags |= SF_SCANNING;
4086 4078 	spin_unlock_irqrestore(&vha->work_lock, flags);
4087 4079
4088 4080 	if (fc4_type == FC4_TYPE_FCP_SCSI) {
4089 - 		ql_dbg(ql_dbg_disc, vha, 0xffff,
4081 + 		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
4090 4082 		    "%s: Performing FCP Scan\n", __func__);
4091 4083
4092 4084 		if (sp)
···
4142 4132 	}
4143 4133 	sp->u.iocb_cmd.u.ctarg.rsp_size = rspsz;
4144 4134
4145 - 	ql_dbg(ql_dbg_disc, vha, 0xffff,
4135 + 	ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
4146 4136 	    "%s scan list size %d\n", __func__, vha->scan.size);
4147 4137
4148 4138 	memset(vha->scan.l, 0, vha->scan.size);
···
4207 4197 	spin_lock_irqsave(&vha->work_lock, flags);
4208 4198 	vha->scan.scan_flags &= ~SF_SCANNING;
4209 4199 	if (vha->scan.scan_flags == 0) {
4210 - 		ql_dbg(ql_dbg_disc, vha, 0xffff,
4211 - 		    "%s: schedule\n", __func__);
4200 + 		ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
4201 + 		    "%s: Scan scheduled.\n", __func__);
4212 4202 		vha->scan.scan_flags |= SF_QUEUED;
4213 4203 		schedule_delayed_work(&vha->scan.scan_work, 5);
4214 4204 	}
+80 -60
drivers/scsi/qla2xxx/qla_init.c
···
17 17 #include <asm/prom.h>
18 18 #endif
19 19
20 - #include <target/target_core_base.h>
21 20 #include "qla_target.h"
22 21
23 22 /*
···
100 101 	u32 handle;
101 102 	unsigned long flags;
102 103
104 + 	if (sp->cmd_sp)
105 + 		ql_dbg(ql_dbg_async, sp->vha, 0x507c,
106 + 		    "Abort timeout - cmd hdl=%x, cmd type=%x hdl=%x, type=%x\n",
107 + 		    sp->cmd_sp->handle, sp->cmd_sp->type,
108 + 		    sp->handle, sp->type);
109 + 	else
110 + 		ql_dbg(ql_dbg_async, sp->vha, 0x507c,
111 + 		    "Abort timeout 2 - hdl=%x, type=%x\n",
112 + 		    sp->handle, sp->type);
113 +
103 114 	spin_lock_irqsave(qpair->qp_lock_ptr, flags);
104 115 	for (handle = 1; handle < qpair->req->num_outstanding_cmds; handle++) {
116 + 		if (sp->cmd_sp && (qpair->req->outstanding_cmds[handle] ==
117 + 		    sp->cmd_sp))
118 + 			qpair->req->outstanding_cmds[handle] = NULL;
119 +
105 120 		/* removing the abort */
106 121 		if (qpair->req->outstanding_cmds[handle] == sp) {
107 122 			qpair->req->outstanding_cmds[handle] = NULL;
···
123 110 		}
124 111 	}
125 112 	spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
113 +
114 + 	if (sp->cmd_sp)
115 + 		sp->cmd_sp->done(sp->cmd_sp, QLA_OS_TIMER_EXPIRED);
126 116
127 117 	abt->u.abt.comp_status = CS_TIMEOUT;
128 118 	sp->done(sp, QLA_OS_TIMER_EXPIRED);
···
158 142 	sp->type = SRB_ABT_CMD;
159 143 	sp->name = "abort";
160 144 	sp->qpair = cmd_sp->qpair;
145 + 	sp->cmd_sp = cmd_sp;
161 146 	if (wait)
162 147 		sp->flags = SRB_WAKEUP_ON_COMP;
···
345 328 	else
346 329 		lio->u.logio.flags |= SRB_LOGIN_COND_PLOGI;
347 330
348 - 	if (fcport->fc4f_nvme)
331 + 	if (NVME_TARGET(vha->hw, fcport))
349 332 		lio->u.logio.flags |= SRB_LOGIN_SKIP_PRLI;
350 333
351 334 	ql_dbg(ql_dbg_disc, vha, 0x2072,
···
743 726
744 727 	loop_id = le16_to_cpu(e->nport_handle);
745 728 	loop_id = (loop_id & 0x7fff);
746 - 	if (fcport->fc4f_nvme)
729 + 	if (NVME_TARGET(vha->hw, fcport))
747 730 		current_login_state = e->current_login_state >> 4;
748 731 	else
749 732 		current_login_state = e->current_login_state & 0xf;
750 733
751 -
752 734 	ql_dbg(ql_dbg_disc, vha, 0x20e2,
753 - 	    "%s found %8phC CLS [%x|%x] nvme %d ID[%02x%02x%02x|%02x%02x%02x] lid[%d|%d]\n",
735 + 	    "%s found %8phC CLS [%x|%x] fc4_type %d ID[%06x|%06x] lid[%d|%d]\n",
754 736 	    __func__, fcport->port_name,
755 737 	    e->current_login_state, fcport->fw_login_state,
756 - 	    fcport->fc4f_nvme, id.b.domain, id.b.area, id.b.al_pa,
757 - 	    fcport->d_id.b.domain, fcport->d_id.b.area,
758 - 	    fcport->d_id.b.al_pa, loop_id, fcport->loop_id);
738 + 	    fcport->fc4_type, id.b24, fcport->d_id.b24,
739 + 	    loop_id, fcport->loop_id);
759 740
760 741 	switch (fcport->disc_state) {
761 742 	case DSC_DELETE_PEND:
···
1150 1135 	    "Async done-%s res %x, WWPN %8phC mb[1]=%x mb[2]=%x \n",
1151 1136 	    sp->name, res, fcport->port_name, mb[1], mb[2]);
1152 1137
1153 - 	if (res == QLA_FUNCTION_TIMEOUT) {
1154 - 		dma_pool_free(sp->vha->hw->s_dma_pool, sp->u.iocb_cmd.u.mbx.in,
1155 - 		    sp->u.iocb_cmd.u.mbx.in_dma);
1156 - 		return;
1157 - 	}
1158 -
1159 1138 	fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
1139 +
1140 + 	if (res == QLA_FUNCTION_TIMEOUT)
1141 + 		goto done;
1142 +
1160 1143 	memset(&ea, 0, sizeof(ea));
1161 1144 	ea.fcport = fcport;
1162 1145 	ea.sp = sp;
1163 1146
1164 1147 	qla24xx_handle_gpdb_event(vha, &ea);
1165 1148
1149 + done:
1166 1150 	dma_pool_free(ha->s_dma_pool, sp->u.iocb_cmd.u.mbx.in,
1167 1151 	    sp->u.iocb_cmd.u.mbx.in_dma);
1168 1152
···
1239 1225 	sp->done = qla2x00_async_prli_sp_done;
1240 1226 	lio->u.logio.flags = 0;
1241 1227
1242 - 	if (fcport->fc4f_nvme)
1228 + 	if (NVME_TARGET(vha->hw, fcport))
1243 1229 		lio->u.logio.flags |= SRB_LOGIN_NVME_PRLI;
1244 1230
1245 1231 	ql_dbg(ql_dbg_disc, vha, 0x211b,
1246 1232 	    "Async-prli - %8phC hdl=%x, loopid=%x portid=%06x retries=%d %s.\n",
1247 1233 	    fcport->port_name, sp->handle, fcport->loop_id, fcport->d_id.b24,
1248 - 	    fcport->login_retry, fcport->fc4f_nvme ? "nvme" : "fc");
1234 + 	    fcport->login_retry, NVME_TARGET(vha->hw, fcport) ? "nvme" : "fc");
1249 1235
1250 1236 	rval = qla2x00_start_sp(sp);
1251 1237 	if (rval != QLA_SUCCESS) {
···
1396 1382 	fcport->flags &= ~FCF_ASYNC_SENT;
1397 1383
1398 1384 	ql_dbg(ql_dbg_disc, vha, 0x20d2,
1399 - 	    "%s %8phC DS %d LS %d nvme %x rc %d\n", __func__, fcport->port_name,
1400 - 	    fcport->disc_state, pd->current_login_state, fcport->fc4f_nvme,
1401 - 	    ea->rc);
1385 + 	    "%s %8phC DS %d LS %d fc4_type %x rc %d\n", __func__,
1386 + 	    fcport->port_name, fcport->disc_state, pd->current_login_state,
1387 + 	    fcport->fc4_type, ea->rc);
1402 1388
1403 1389 	if (fcport->disc_state == DSC_DELETE_PEND)
1404 1390 		return;
1405 1391
1406 - 	if (fcport->fc4f_nvme)
1392 + 	if (NVME_TARGET(vha->hw, fcport))
1407 1393 		ls = pd->current_login_state >> 4;
1408 1394 	else
1409 1395 		ls = pd->current_login_state & 0xf;
···
1592 1578 			ql_dbg(ql_dbg_disc, vha, 0x2118,
1593 1579 			    "%s %d %8phC post %s PRLI\n",
1594 1580 			    __func__, __LINE__, fcport->port_name,
1595 - 			    fcport->fc4f_nvme ? "NVME" : "FC");
1581 + 			    NVME_TARGET(vha->hw, fcport) ? "NVME" :
1582 + 			    "FC");
1596 1583 			qla24xx_post_prli_work(vha, fcport);
1597 1584 		}
1598 1585 		break;
···
1714 1699 	}
1715 1700
1716 1701 	qla24xx_fcport_handle_login(vha, fcport);
1702 + }
1703 +
1704 + void qla_handle_els_plogi_done(scsi_qla_host_t *vha,
1705 + 				struct event_arg *ea)
1706 + {
1707 + 	ql_dbg(ql_dbg_disc, vha, 0x2118,
1708 + 	    "%s %d %8phC post PRLI\n",
1709 + 	    __func__, __LINE__, ea->fcport->port_name);
1710 + 	qla24xx_post_prli_work(vha, ea->fcport);
1717 1711 }
1718 1712
1719 1713 /*
···
1884 1860 		break;
1885 1861 	}
1886 1862
1887 - 	if (ea->fcport->fc4f_nvme) {
1863 + 	/*
1864 + 	 * Retry PRLI with other FC-4 type if failure occurred on dual
1865 + 	 * FCP/NVMe port
1866 + 	 */
1867 + 	if (NVME_FCP_TARGET(ea->fcport)) {
1888 1868 		ql_dbg(ql_dbg_disc, vha, 0x2118,
1889 - 		    "%s %d %8phC post fc4 prli\n",
1890 - 		    __func__, __LINE__, ea->fcport->port_name);
1891 - 		ea->fcport->fc4f_nvme = 0;
1892 - 		qla24xx_post_prli_work(vha, ea->fcport);
1893 - 		return;
1869 + 		    "%s %d %8phC post %s prli\n",
1870 + 		    __func__, __LINE__, ea->fcport->port_name,
1871 + 		    (ea->fcport->fc4_type & FS_FC4TYPE_NVME) ?
1872 + 		    "NVMe" : "FCP");
1873 + 		if (vha->hw->fc4_type_priority == FC4_PRIORITY_NVME)
1874 + 			ea->fcport->fc4_type &= ~FS_FC4TYPE_NVME;
1875 + 		else
1876 + 			ea->fcport->fc4_type &= ~FS_FC4TYPE_FCP;
1894 1877 	}
1895 1878
1896 - 	/* at this point both PRLI NVME & PRLI FCP failed */
1897 - 	if (N2N_TOPO(vha->hw)) {
1898 - 		if (ea->fcport->n2n_link_reset_cnt < 3) {
1899 - 			ea->fcport->n2n_link_reset_cnt++;
1900 - 			/*
1901 - 			 * remote port is not sending Plogi. Reset
1902 - 			 * link to kick start his state machine
1903 - 			 */
1904 - 			set_bit(N2N_LINK_RESET, &vha->dpc_flags);
1905 - 		} else {
1906 - 			ql_log(ql_log_warn, vha, 0x2119,
1907 - 			    "%s %d %8phC Unable to reconnect\n",
1908 - 			    __func__, __LINE__, ea->fcport->port_name);
1909 - 		}
1910 - 	} else {
1911 - 		/*
1912 - 		 * switch connect. login failed. Take connection
1913 - 		 * down and allow relogin to retrigger
1914 - 		 */
1915 - 		ea->fcport->flags &= ~FCF_ASYNC_SENT;
1916 - 		ea->fcport->keep_nport_handle = 0;
1917 - 		qlt_schedule_sess_for_deletion(ea->fcport);
1918 - 	}
1879 + 	ea->fcport->flags &= ~FCF_ASYNC_SENT;
1880 + 	ea->fcport->keep_nport_handle = 0;
1881 + 	ea->fcport->logout_on_delete = 1;
1882 + 	qlt_schedule_sess_for_deletion(ea->fcport);
1919 1883 	break;
1920 1884 }
1921 1885 }
···
1964 1952 	 * force a relogin attempt via implicit LOGO, PLOGI, and PRLI
1965 1953 	 * requests.
1966 1954 	 */
1967 - 	if (ea->fcport->fc4f_nvme) {
1955 + 	if (NVME_TARGET(vha->hw, ea->fcport)) {
1968 1956 		ql_dbg(ql_dbg_disc, vha, 0x2117,
1969 1957 		    "%s %d %8phC post prli\n",
1970 1958 		    __func__, __LINE__, ea->fcport->port_name);
···
2218 2206 	ql_dbg(ql_dbg_init, vha, 0x0061,
2219 2207 	    "Configure NVRAM parameters...\n");
2220 2208
2209 + 	/* Let priority default to FCP, can be overridden by nvram_config */
2210 + 	ha->fc4_type_priority = FC4_PRIORITY_FCP;
2211 +
2221 2212 	ha->isp_ops->nvram_config(vha);
2213 +
2214 + 	if (ha->fc4_type_priority != FC4_PRIORITY_FCP &&
2215 + 	    ha->fc4_type_priority != FC4_PRIORITY_NVME)
2216 + 		ha->fc4_type_priority = FC4_PRIORITY_FCP;
2217 +
2218 + 	ql_log(ql_log_info, vha, 0xffff, "FC4 priority set to %s\n",
2219 + 	    ha->fc4_type_priority == FC4_PRIORITY_FCP ? "FCP" : "NVMe");
2222 2220
2223 2221 	if (ha->flags.disable_serdes) {
2224 2222 		/* Mask HBA via NVRAM settings? */
···
5404 5382
5405 5383 	qla2x00_iidma_fcport(vha, fcport);
5406 5384
5407 - 	if (fcport->fc4f_nvme) {
5385 + 	if (NVME_TARGET(vha->hw, fcport)) {
5408 5386 		qla_nvme_register_remote(vha, fcport);
5409 5387 		fcport->disc_state = DSC_LOGIN_COMPLETE;
5410 5388 		qla2x00_set_fcport_state(fcport, FCS_ONLINE);
···
5732 5710 				new_fcport->fc4_type = swl[swl_idx].fc4_type;
5733 5711
5734 5712 				new_fcport->nvme_flag = 0;
5735 - 				new_fcport->fc4f_nvme = 0;
5736 5713 				if (vha->flags.nvme_enabled &&
5737 - 				    swl[swl_idx].fc4f_nvme) {
5738 - 					new_fcport->fc4f_nvme =
5739 - 					    swl[swl_idx].fc4f_nvme;
5714 + 				    swl[swl_idx].fc4_type & FS_FC4TYPE_NVME) {
5740 5715 					ql_log(ql_log_info, vha, 0x2131,
5741 5716 					    "FOUND: NVME port %8phC as FC Type 28h\n",
5742 5717 					    new_fcport->port_name);
···
5789 5770
5790 5771 		/* Bypass ports whose FCP-4 type is not FCP_SCSI */
5791 5772 		if (ql2xgffidenable &&
5792 - 		    (new_fcport->fc4_type != FC4_TYPE_FCP_SCSI &&
5773 + 		    (!(new_fcport->fc4_type & FS_FC4TYPE_FCP) &&
5793 5774 		    new_fcport->fc4_type != FC4_TYPE_UNKNOWN))
5794 5775 			continue;
···
5858 5839 		break;
5859 5840 	}
5860 5841
5861 - 	if (fcport->fc4f_nvme) {
5842 + 	if (NVME_TARGET(vha->hw, fcport)) {
5862 5843 		if (fcport->disc_state == DSC_DELETE_PEND) {
5863 5844 			fcport->disc_state = DSC_GNL;
5864 5845 			vha->fcport_count--;
···
8533 8514 	/* N2N: driver will initiate Login instead of FW */
8534 8515 	icb->firmware_options_3 |= BIT_8;
8535 8516
8517 + 	/* Determine NVMe/FCP priority for target ports */
8518 + 	ha->fc4_type_priority = qla2xxx_get_fc4_priority(vha);
8519 +
8536 8520 	if (rval) {
8537 8521 		ql_log(ql_log_warn, vha, 0x0076,
8538 8522 		    "NVRAM configuration failed.\n");
···
9025 9003 	struct qla_hw_data *ha = qpair->hw;
9026 9004
9027 9005 	qpair->delete_in_progress = 1;
9028 - 	while (atomic_read(&qpair->ref_count))
9029 - 		msleep(500);
9030 9006
9031 9007 	ret = qla25xx_delete_req_que(vha, qpair->req);
9032 9008 	if (ret != QLA_SUCCESS)
+12
drivers/scsi/qla2xxx/qla_inline.h
···
307 307
308 308 	WRT_REG_DWORD(req->req_q_in, req->ring_index);
309 309 }
310 +
311 + static inline int
312 + qla2xxx_get_fc4_priority(struct scsi_qla_host *vha)
313 + {
314 + 	uint32_t data;
315 +
316 + 	data =
317 + 	    ((uint8_t *)vha->hw->nvram)[NVRAM_DUAL_FCP_NVME_FLAG_OFFSET];
318 +
319 +
320 + 	return (data >> 6) & BIT_0 ? FC4_PRIORITY_FCP : FC4_PRIORITY_NVME;
321 + }
+99 -7
drivers/scsi/qla2xxx/qla_iocb.c
···
2740 2740 	struct scsi_qla_host *vha = sp->vha;
2741 2741 	struct event_arg ea;
2742 2742 	struct qla_work_evt *e;
2743 + 	struct fc_port *conflict_fcport;
2744 + 	port_id_t cid;	/* conflict Nport id */
2745 + 	u32 *fw_status = sp->u.iocb_cmd.u.els_plogi.fw_status;
2746 + 	u16 lid;
2743 2747
2744 2748 	ql_dbg(ql_dbg_disc, vha, 0x3072,
2745 2749 	    "%s ELS done rc %d hdl=%x, portid=%06x %8phC\n",
···
2755 2751 	if (sp->flags & SRB_WAKEUP_ON_COMP)
2756 2752 		complete(&lio->u.els_plogi.comp);
2757 2753 	else {
2758 - 		if (res) {
2759 - 			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
2760 - 		} else {
2754 + 		switch (fw_status[0]) {
2755 + 		case CS_DATA_UNDERRUN:
2756 + 		case CS_COMPLETE:
2761 2757 			memset(&ea, 0, sizeof(ea));
2762 2758 			ea.fcport = fcport;
2763 - 			ea.data[0] = MBS_COMMAND_COMPLETE;
2764 - 			ea.sp = sp;
2765 - 			qla24xx_handle_plogi_done_event(vha, &ea);
2759 + 			ea.rc = res;
2760 + 			qla_handle_els_plogi_done(vha, &ea);
2761 + 			break;
2762 +
2763 + 		case CS_IOCB_ERROR:
2764 + 			switch (fw_status[1]) {
2765 + 			case LSC_SCODE_PORTID_USED:
2766 + 				lid = fw_status[2] & 0xffff;
2767 + 				qlt_find_sess_invalidate_other(vha,
2768 + 				    wwn_to_u64(fcport->port_name),
2769 + 				    fcport->d_id, lid, &conflict_fcport);
2770 + 				if (conflict_fcport) {
2771 + 					/*
2772 + 					 * Another fcport shares the same
2773 + 					 * loop_id & nport id; conflict
2774 + 					 * fcport needs to finish cleanup
2775 + 					 * before this fcport can proceed
2776 + 					 * to login.
2777 + 					 */
2778 + 					conflict_fcport->conflict = fcport;
2779 + 					fcport->login_pause = 1;
2780 + 					ql_dbg(ql_dbg_disc, vha, 0x20ed,
2781 + 					    "%s %d %8phC pid %06x inuse with lid %#x post gidpn\n",
2782 + 					    __func__, __LINE__,
2783 + 					    fcport->port_name,
2784 + 					    fcport->d_id.b24, lid);
2785 + 				} else {
2786 + 					ql_dbg(ql_dbg_disc, vha, 0x20ed,
2787 + 					    "%s %d %8phC pid %06x inuse with lid %#x sched del\n",
2788 + 					    __func__, __LINE__,
2789 + 					    fcport->port_name,
2790 + 					    fcport->d_id.b24, lid);
2791 + 					qla2x00_clear_loop_id(fcport);
2792 + 					set_bit(lid, vha->hw->loop_id_map);
2793 + 					fcport->loop_id = lid;
2794 + 					fcport->keep_nport_handle = 0;
2795 + 					qlt_schedule_sess_for_deletion(fcport);
2796 + 				}
2797 + 				break;
2798 +
2799 + 			case LSC_SCODE_NPORT_USED:
2800 + 				cid.b.domain = (fw_status[2] >> 16) & 0xff;
2801 + 				cid.b.area = (fw_status[2] >> 8) & 0xff;
2802 + 				cid.b.al_pa = fw_status[2] & 0xff;
2803 + 				cid.b.rsvd_1 = 0;
2804 +
2805 + 				ql_dbg(ql_dbg_disc, vha, 0x20ec,
2806 + 				    "%s %d %8phC lid %#x in use with pid %06x post gnl\n",
2807 + 				    __func__, __LINE__, fcport->port_name,
2808 + 				    fcport->loop_id, cid.b24);
2809 + 				set_bit(fcport->loop_id,
2810 + 				    vha->hw->loop_id_map);
2811 + 				fcport->loop_id = FC_NO_LOOP_ID;
2812 + 				qla24xx_post_gnl_work(vha, fcport);
2813 + 				break;
2814 +
2815 + 			case LSC_SCODE_NOXCB:
2816 + 				vha->hw->exch_starvation++;
2817 + 				if (vha->hw->exch_starvation > 5) {
2818 + 					ql_log(ql_log_warn, vha, 0xd046,
2819 + 					    "Exchange starvation. Resetting RISC\n");
2820 + 					vha->hw->exch_starvation = 0;
2821 + 					set_bit(ISP_ABORT_NEEDED,
2822 + 					    &vha->dpc_flags);
2823 + 					qla2xxx_wake_dpc(vha);
2824 + 				}
2825 + 				/* fall through */
2826 + 			default:
2827 + 				ql_dbg(ql_dbg_disc, vha, 0x20eb,
2828 + 				    "%s %8phC cmd error fw_status 0x%x 0x%x 0x%x\n",
2829 + 				    __func__, sp->fcport->port_name,
2830 + 				    fw_status[0], fw_status[1], fw_status[2]);
2831 +
2832 + 				fcport->flags &= ~FCF_ASYNC_SENT;
2833 + 				fcport->disc_state = DSC_LOGIN_FAILED;
2834 + 				set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
2835 + 				break;
2836 + 			}
2837 + 			break;
2838 +
2839 + 		default:
2840 + 			ql_dbg(ql_dbg_disc, vha, 0x20eb,
2841 + 			    "%s %8phC cmd error 2 fw_status 0x%x 0x%x 0x%x\n",
2842 + 			    __func__, sp->fcport->port_name,
2843 + 			    fw_status[0], fw_status[1], fw_status[2]);
2844 +
2845 + 			sp->fcport->flags &= ~FCF_ASYNC_SENT;
2846 + 			sp->fcport->disc_state = DSC_LOGIN_FAILED;
2847 + 			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
2848 + 			break;
2766 2849 		}
2767 2850
2768 2851 		e = qla2x00_alloc_work(vha, QLA_EVT_UNMAP);
···
2883 2792 		return -ENOMEM;
2884 2793 	}
2885 2794
2795 + 	fcport->flags |= FCF_ASYNC_SENT;
2796 + 	fcport->disc_state = DSC_LOGIN_PEND;
2886 2797 	elsio = &sp->u.iocb_cmd;
2887 2798 	ql_dbg(ql_dbg_io, vha, 0x3073,
2888 2799 	    "Enter: PLOGI portid=%06x\n", fcport->d_id.b24);
2889 2800
2890 - 	fcport->flags |= FCF_ASYNC_SENT;
2891 2801 	sp->type = SRB_ELS_DCMD;
2892 2802 	sp->name = "ELS_DCMD";
2893 2803 	sp->fcport = fcport;
+31 -5
drivers/scsi/qla2xxx/qla_isr.c
···
1227 1227 		break;
1228 1228
1229 1229 	case MBA_IDC_AEN:
1230 - 		mb[4] = RD_REG_WORD(&reg24->mailbox4);
1231 - 		mb[5] = RD_REG_WORD(&reg24->mailbox5);
1232 - 		mb[6] = RD_REG_WORD(&reg24->mailbox6);
1233 - 		mb[7] = RD_REG_WORD(&reg24->mailbox7);
1234 - 		qla83xx_handle_8200_aen(vha, mb);
1230 + 		if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
1231 + 			ha->flags.fw_init_done = 0;
1232 + 			ql_log(ql_log_warn, vha, 0xffff,
1233 + 			    "MPI Heartbeat stop. Chip reset needed. MB0[%xh] MB1[%xh] MB2[%xh] MB3[%xh]\n",
1234 + 			    mb[0], mb[1], mb[2], mb[3]);
1235 +
1236 + 			if ((mb[1] & BIT_8) ||
1237 + 			    (mb[2] & BIT_8)) {
1238 + 				ql_log(ql_log_warn, vha, 0xd013,
1239 + 				    "MPI Heartbeat stop. FW dump needed\n");
1240 + 				ha->fw_dump_mpi = 1;
1241 + 				ha->isp_ops->fw_dump(vha, 1);
1242 + 			}
1243 + 			set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
1244 + 			qla2xxx_wake_dpc(vha);
1245 + 		} else if (IS_QLA83XX(ha)) {
1246 + 			mb[4] = RD_REG_WORD(&reg24->mailbox4);
1247 + 			mb[5] = RD_REG_WORD(&reg24->mailbox5);
1248 + 			mb[6] = RD_REG_WORD(&reg24->mailbox6);
1249 + 			mb[7] = RD_REG_WORD(&reg24->mailbox7);
1250 + 			qla83xx_handle_8200_aen(vha, mb);
1251 + 		} else {
1252 + 			ql_dbg(ql_dbg_async, vha, 0x5052,
1253 + 			    "skip Heartbeat processing mb0-3=[0x%04x] [0x%04x] [0x%04x] [0x%04x]\n",
1254 + 			    mb[0], mb[1], mb[2], mb[3]);
1255 + 		}
1235 1256 		break;
1236 1257
1237 1258 	case MBA_DPORT_DIAGNOSTICS:
···
2486 2465 		}
2487 2466 		return;
2488 2467 	}
2468 +
2469 + 	if (sp->abort)
2470 + 		sp->aborted = 1;
2471 + 	else
2472 + 		sp->completed = 1;
2489 2473
2490 2474 	if (sp->cmd_type != TYPE_SRB) {
2491 2475 		req->outstanding_cmds[handle] = NULL;
+6 -9
drivers/scsi/qla2xxx/qla_mbx.c
···
1932 1932 	pd24 = (struct port_database_24xx *) pd;
1933 1933
1934 1934 	/* Check for logged in state. */
1935 - 	if (fcport->fc4f_nvme) {
1935 + 	if (NVME_TARGET(ha, fcport)) {
1936 1936 		current_login_state = pd24->current_login_state >> 4;
1937 1937 		last_login_state = pd24->last_login_state >> 4;
1938 1938 	} else {
···
3899 3899 		fcport->scan_state = QLA_FCPORT_FOUND;
3900 3900 		fcport->n2n_flag = 1;
3901 3901 		fcport->keep_nport_handle = 1;
3902 + 		fcport->fc4_type = FS_FC4TYPE_FCP;
3902 3903 		if (vha->flags.nvme_enabled)
3903 - 			fcport->fc4f_nvme = 1;
3904 + 			fcport->fc4_type |= FS_FC4TYPE_NVME;
3904 3905
3905 3906 		switch (fcport->disc_state) {
3906 3907 		case DSC_DELETED:
···
6288 6287 	case QLA_SUCCESS:
6289 6288 		ql_dbg(ql_dbg_mbx, vha, 0x119d, "%s: %s done.\n",
6290 6289 		    __func__, sp->name);
6291 - 		sp->free(sp);
6292 6290 		break;
6293 6291 	default:
6294 6292 		ql_dbg(ql_dbg_mbx, vha, 0x119e, "%s: %s Failed. %x.\n",
6295 6293 		    __func__, sp->name, rval);
6296 - 		sp->free(sp);
6297 6294 		break;
6298 6295 	}
6299 -
6300 - 	return rval;
6301 6296
6302 6297 done_free_sp:
6303 6298 	sp->free(sp);
···
6359 6362 	uint64_t zero = 0;
6360 6363 	u8 current_login_state, last_login_state;
6361 6364
6362 - 	if (fcport->fc4f_nvme) {
6365 + 	if (NVME_TARGET(vha->hw, fcport)) {
6363 6366 		current_login_state = pd->current_login_state >> 4;
6364 6367 		last_login_state = pd->last_login_state >> 4;
6365 6368 	} else {
···
6394 6397 	fcport->d_id.b.al_pa = pd->port_id[2];
6395 6398 	fcport->d_id.b.rsvd_1 = 0;
6396 6399
6397 - 	if (fcport->fc4f_nvme) {
6398 - 		fcport->port_type = 0;
6400 + 	if (NVME_TARGET(vha->hw, fcport)) {
6401 + 		fcport->port_type = FCT_NVME;
6399 6402 		if ((pd->prli_svc_param_word_3[0] & BIT_5) == 0)
6400 6403 			fcport->port_type |= FCT_NVME_INITIATOR;
6401 6404 		if ((pd->prli_svc_param_word_3[0] & BIT_4) == 0)
+4 -7
drivers/scsi/qla2xxx/qla_mid.c
···
946 946
947 947 	sp = qla2x00_get_sp(base_vha, NULL, GFP_KERNEL);
948 948 	if (!sp)
949 - 		goto done;
949 + 		return rval;
950 950
951 951 	sp->type = SRB_CTRL_VP;
952 952 	sp->name = "ctrl_vp";
···
962 962 		ql_dbg(ql_dbg_async, vha, 0xffff,
963 963 		    "%s: %s Failed submission. %x.\n",
964 964 		    __func__, sp->name, rval);
965 - 		goto done_free_sp;
965 + 		goto done;
966 966 	}
967 967
968 968 	ql_dbg(ql_dbg_vport, vha, 0x113f, "%s hndl %x submitted\n",
···
980 980 	case QLA_SUCCESS:
981 981 		ql_dbg(ql_dbg_vport, vha, 0xffff, "%s: %s done.\n",
982 982 		    __func__, sp->name);
983 - 		goto done_free_sp;
983 + 		break;
984 984 	default:
985 985 		ql_dbg(ql_dbg_vport, vha, 0xffff, "%s: %s Failed. %x.\n",
986 986 		    __func__, sp->name, rval);
987 - 		goto done_free_sp;
987 + 		break;
988 988 	}
989 989 done:
990 - 	return rval;
991 -
992 - done_free_sp:
993 990 	sp->free(sp);
994 991 	return rval;
995 992 }
+2 -2
drivers/scsi/qla2xxx/qla_nvme.c
···
224 224
225 225 	if (ha->flags.host_shutting_down) {
226 226 		ql_log(ql_log_info, sp->fcport->vha, 0xffff,
227 - 		    "%s Calling done on sp: %p, type: 0x%x, sp->ref_count: 0x%x\n",
228 - 		    __func__, sp, sp->type, atomic_read(&sp->ref_count));
227 + 		    "%s Calling done on sp: %p, type: 0x%x\n",
228 + 		    __func__, sp, sp->type);
229 229 		sp->done(sp, 0);
230 230 		goto out;
231 231 	}
+104 -74
drivers/scsi/qla2xxx/qla_os.c
···
698 698 	struct scsi_cmnd *cmd = GET_CMD_SP(sp);
699 699 	struct completion *comp = sp->comp;
700 700
701 - 	if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
702 - 		return;
703 -
704 - 	atomic_dec(&sp->ref_count);
705 -
706 701 	sp->free(sp);
707 702 	cmd->result = res;
708 703 	CMD_SP(cmd) = NULL;
···
788 793 {
789 794 	struct scsi_cmnd *cmd = GET_CMD_SP(sp);
790 795 	struct completion *comp = sp->comp;
791 -
792 - 	if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0))
793 - 		return;
794 -
795 - 	atomic_dec(&sp->ref_count);
796 796
797 797 	sp->free(sp);
798 798 	cmd->result = res;
···
893 903
894 904 	sp->u.scmd.cmd = cmd;
895 905 	sp->type = SRB_SCSI_CMD;
896 - 	atomic_set(&sp->ref_count, 1);
906 +
897 907 	CMD_SP(cmd) = (void *)sp;
898 908 	sp->free = qla2x00_sp_free_dma;
899 909 	sp->done = qla2x00_sp_compl;
···
975 985
976 986 	sp->u.scmd.cmd = cmd;
977 987 	sp->type = SRB_SCSI_CMD;
978 - 	atomic_set(&sp->ref_count, 1);
979 988 	CMD_SP(cmd) = (void *)sp;
980 989 	sp->free = qla2xxx_qpair_sp_free_dma;
981 990 	sp->done = qla2xxx_qpair_sp_compl;
982 - 	sp->qpair = qpair;
983 991
984 992 	rval = ha->isp_ops->start_scsi_mq(sp);
985 993 	if (rval != QLA_SUCCESS) {
986 994 		ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3078,
987 995 		    "Start scsi failed rval=%d for cmd=%p.\n", rval, cmd);
988 996 		if (rval == QLA_INTERFACE_ERROR)
989 - 			goto qc24_fail_command;
997 + 			goto qc24_free_sp_fail_command;
990 998 		goto qc24_host_busy_free_sp;
991 999 	}
992 1000
···
995 1007
996 1008 qc24_target_busy:
997 1009 	return SCSI_MLQUEUE_TARGET_BUSY;
1010 +
1011 + qc24_free_sp_fail_command:
1012 + 	sp->free(sp);
1013 + 	CMD_SP(cmd) = NULL;
1014 + 	qla2xxx_rel_qpair_sp(sp->qpair, sp);
998 1015
999 1016 qc24_fail_command:
1000 1017 	cmd->scsi_done(cmd);
···
1177 1184 	return return_status;
1178 1185 }
1179 1186
1180 - static int
1181 - sp_get(struct srb *sp)
1182 - {
1183 - 	if (!refcount_inc_not_zero((refcount_t *)&sp->ref_count))
1184 - 		/* kref get fail */
1185 - 		return ENXIO;
1186 - 	else
1187 - 		return 0;
1188 - }
1189 -
1190 1187 #define ISP_REG_DISCONNECT 0xffffffffU
1191 1188 /**************************************************************************
1192 1189 * qla2x00_isp_reg_stat
···
1232 1249 	uint64_t lun;
1233 1250 	int rval;
1234 1251 	struct qla_hw_data *ha = vha->hw;
1252 + 	uint32_t ratov_j;
1253 + 	struct qla_qpair *qpair;
1254 + 	unsigned long flags;
1235 1255
1236 1256 	if (qla2x00_isp_reg_stat(ha)) {
1237 1257 		ql_log(ql_log_info, vha, 0x8042,
···
1247 1261 		return ret;
1248 1262
1249 1263 	sp = scsi_cmd_priv(cmd);
1264 + 	qpair = sp->qpair;
1250 1265
1251 - 	if (sp->fcport && sp->fcport->deleted)
1266 + 	if ((sp->fcport && sp->fcport->deleted) || !qpair)
1252 1267 		return SUCCESS;
1253 1268
1254 - 	/* Return if the command has already finished. */
1255 - 	if (sp_get(sp))
1269 + 	spin_lock_irqsave(qpair->qp_lock_ptr, flags);
1270 + 	if (sp->completed) {
1271 + 		spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
1256 1272 		return SUCCESS;
1273 + 	}
1274 +
1275 + 	if (sp->abort || sp->aborted) {
1276 + 		spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
1277 + 		return FAILED;
1278 + 	}
1279 +
1280 + 	sp->abort = 1;
1281 + 	sp->comp = &comp;
1282 + 	spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
1283 +
1257 1284
1258 1285 	id = cmd->device->id;
1259 1286 	lun = cmd->device->lun;
···
1275 1276 	    "Aborting from RISC nexus=%ld:%d:%llu sp=%p cmd=%p handle=%x\n",
1276 1277 	    vha->host_no, id, lun, sp, cmd, sp->handle);
1277 1278
1279 + 	/*
1280 + 	 * Abort will release the original Command/sp from FW. Let the
1281 + 	 * original command call scsi_done. In return, he will wakeup
1282 + 	 * this sleeping thread.
1283 + 	 */
1278 1284 	rval = ha->isp_ops->abort_command(sp);
1285 +
1279 1286 	ql_dbg(ql_dbg_taskm, vha, 0x8003,
1280 1287 	    "Abort command mbx cmd=%p, rval=%x.\n", cmd, rval);
1281 1288
1289 + 	/* Wait for the command completion. */
1290 + 	ratov_j = ha->r_a_tov/10 * 4 * 1000;
1291 + 	ratov_j = msecs_to_jiffies(ratov_j);
1282 1292 	switch (rval) {
1283 1293 	case QLA_SUCCESS:
1284 - 		/*
1285 - 		 * The command has been aborted. That means that the firmware
1286 - 		 * won't report a completion.
1287 - 		 */
1288 - 		sp->done(sp, DID_ABORT << 16);
1289 - 		ret = SUCCESS;
1290 - 		break;
1291 - 	case QLA_FUNCTION_PARAMETER_ERROR: {
1292 - 		/* Wait for the command completion. */
1293 - 		uint32_t ratov = ha->r_a_tov/10;
1294 - 		uint32_t ratov_j = msecs_to_jiffies(4 * ratov * 1000);
1295 -
1296 - 		WARN_ON_ONCE(sp->comp);
1297 - 		sp->comp = &comp;
1298 1294 		if (!wait_for_completion_timeout(&comp, ratov_j)) {
1299 1295 			ql_dbg(ql_dbg_taskm, vha, 0xffff,
1300 1296 			    "%s: Abort wait timer (4 * R_A_TOV[%d]) expired\n",
1301 - 			    __func__, ha->r_a_tov);
1297 + 			    __func__, ha->r_a_tov/10);
1302 1298 			ret = FAILED;
1303 1299 		} else {
1304 1300 			ret = SUCCESS;
1305 1301 		}
1306 1302 		break;
1307 - 	}
1308 1303 	default:
1309 - 		/*
1310 - 		 * Either abort failed or abort and completion raced. Let
1312 - */ 1313 1304 ret = FAILED; 1314 1305 break; 1315 1306 } 1316 1307 1317 1308 sp->comp = NULL; 1318 - atomic_dec(&sp->ref_count); 1309 + 1319 1310 ql_log(ql_log_info, vha, 0x801c, 1320 1311 "Abort command issued nexus=%ld:%d:%llu -- %x.\n", 1321 1312 vha->host_no, id, lun, ret); ··· 1697 1708 scsi_qla_host_t *vha = qp->vha; 1698 1709 struct qla_hw_data *ha = vha->hw; 1699 1710 int rval; 1711 + bool ret_cmd; 1712 + uint32_t ratov_j; 1700 1713 1701 - if (sp_get(sp)) 1714 + if (qla2x00_chip_is_down(vha)) { 1715 + sp->done(sp, res); 1702 1716 return; 1717 + } 1703 1718 1704 1719 if (sp->type == SRB_NVME_CMD || sp->type == SRB_NVME_LS || 1705 1720 (sp->type == SRB_SCSI_CMD && !ha->flags.eeh_busy && 1706 1721 !test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags) && 1707 1722 !qla2x00_isp_reg_stat(ha))) { 1708 - sp->comp = &comp; 1709 - spin_unlock_irqrestore(qp->qp_lock_ptr, *flags); 1710 - rval = ha->isp_ops->abort_command(sp); 1723 + if (sp->comp) { 1724 + sp->done(sp, res); 1725 + return; 1726 + } 1711 1727 1728 + sp->comp = &comp; 1729 + sp->abort = 1; 1730 + spin_unlock_irqrestore(qp->qp_lock_ptr, *flags); 1731 + 1732 + rval = ha->isp_ops->abort_command(sp); 1733 + /* Wait for command completion. 
*/ 1734 + ret_cmd = false; 1735 + ratov_j = ha->r_a_tov/10 * 4 * 1000; 1736 + ratov_j = msecs_to_jiffies(ratov_j); 1712 1737 switch (rval) { 1713 1738 case QLA_SUCCESS: 1714 - sp->done(sp, res); 1739 + if (wait_for_completion_timeout(&comp, ratov_j)) { 1740 + ql_dbg(ql_dbg_taskm, vha, 0xffff, 1741 + "%s: Abort wait timer (4 * R_A_TOV[%d]) expired\n", 1742 + __func__, ha->r_a_tov/10); 1743 + ret_cmd = true; 1744 + } 1745 + /* else FW return SP to driver */ 1715 1746 break; 1716 - case QLA_FUNCTION_PARAMETER_ERROR: 1717 - wait_for_completion(&comp); 1747 + default: 1748 + ret_cmd = true; 1718 1749 break; 1719 1750 } 1720 1751 1721 1752 spin_lock_irqsave(qp->qp_lock_ptr, *flags); 1722 - sp->comp = NULL; 1753 + if (ret_cmd && (!sp->completed || !sp->aborted)) 1754 + sp->done(sp, res); 1755 + } else { 1756 + sp->done(sp, res); 1723 1757 } 1724 - 1725 - atomic_dec(&sp->ref_count); 1726 1758 } 1727 1759 1728 1760 static void ··· 1765 1755 for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) { 1766 1756 sp = req->outstanding_cmds[cnt]; 1767 1757 if (sp) { 1768 - req->outstanding_cmds[cnt] = NULL; 1769 1758 switch (sp->cmd_type) { 1770 1759 case TYPE_SRB: 1771 1760 qla2x00_abort_srb(qp, sp, res, &flags); ··· 1786 1777 default: 1787 1778 break; 1788 1779 } 1780 + req->outstanding_cmds[cnt] = NULL; 1789 1781 } 1790 1782 } 1791 1783 spin_unlock_irqrestore(qp->qp_lock_ptr, flags); ··· 3502 3492 return ret; 3503 3493 } 3504 3494 3495 + static void __qla_set_remove_flag(scsi_qla_host_t *base_vha) 3496 + { 3497 + scsi_qla_host_t *vp; 3498 + unsigned long flags; 3499 + struct qla_hw_data *ha; 3500 + 3501 + if (!base_vha) 3502 + return; 3503 + 3504 + ha = base_vha->hw; 3505 + 3506 + spin_lock_irqsave(&ha->vport_slock, flags); 3507 + list_for_each_entry(vp, &ha->vp_list, list) 3508 + set_bit(PFLG_DRIVER_REMOVING, &vp->pci_flags); 3509 + 3510 + /* 3511 + * Indicate device removal to prevent future board_disable 3512 + * and wait until any pending board_disable has completed. 
3513 + */ 3514 + set_bit(PFLG_DRIVER_REMOVING, &base_vha->pci_flags); 3515 + spin_unlock_irqrestore(&ha->vport_slock, flags); 3516 + } 3517 + 3505 3518 static void 3506 3519 qla2x00_shutdown(struct pci_dev *pdev) 3507 3520 { ··· 3541 3508 * Prevent future board_disable and wait 3542 3509 * until any pending board_disable has completed. 3543 3510 */ 3544 - set_bit(PFLG_DRIVER_REMOVING, &vha->pci_flags); 3511 + __qla_set_remove_flag(vha); 3545 3512 cancel_work_sync(&ha->board_disable); 3546 3513 3547 3514 if (!atomic_read(&pdev->enable_cnt)) ··· 3701 3668 ha = base_vha->hw; 3702 3669 ql_log(ql_log_info, base_vha, 0xb079, 3703 3670 "Removing driver\n"); 3704 - 3705 - /* Indicate device removal to prevent future board_disable and wait 3706 - * until any pending board_disable has completed. */ 3707 - set_bit(PFLG_DRIVER_REMOVING, &base_vha->pci_flags); 3671 + __qla_set_remove_flag(base_vha); 3708 3672 cancel_work_sync(&ha->board_disable); 3709 3673 3710 3674 /* ··· 4696 4666 ha->sfp_data = NULL; 4697 4667 4698 4668 if (ha->flt) 4699 - dma_free_coherent(&ha->pdev->dev, SFP_DEV_SIZE, 4669 + dma_free_coherent(&ha->pdev->dev, 4670 + sizeof(struct qla_flt_header) + FLT_REGIONS_SIZE, 4700 4671 ha->flt, ha->flt_dma); 4701 4672 ha->flt = NULL; 4702 4673 ha->flt_dma = 0; ··· 5073 5042 fcport->d_id = e->u.new_sess.id; 5074 5043 fcport->flags |= FCF_FABRIC_DEVICE; 5075 5044 fcport->fw_login_state = DSC_LS_PLOGI_PEND; 5076 - if (e->u.new_sess.fc4_type == FS_FC4TYPE_FCP) 5077 - fcport->fc4_type = FC4_TYPE_FCP_SCSI; 5078 - 5079 - if (e->u.new_sess.fc4_type == FS_FC4TYPE_NVME) { 5080 - fcport->fc4_type = FC4_TYPE_OTHER; 5081 - fcport->fc4f_nvme = FC4_TYPE_NVME; 5082 - } 5083 5045 5084 5046 memcpy(fcport->port_name, e->u.new_sess.port_name, 5085 5047 WWN_SIZE); 5086 5048 5087 - if (e->u.new_sess.fc4_type & FS_FCP_IS_N2N) 5049 + fcport->fc4_type = e->u.new_sess.fc4_type; 5050 + if (e->u.new_sess.fc4_type & FS_FCP_IS_N2N) { 5051 + fcport->fc4_type = FS_FC4TYPE_FCP; 5088 5052 
fcport->n2n_flag = 1; 5053 + if (vha->flags.nvme_enabled) 5054 + fcport->fc4_type |= FS_FC4TYPE_NVME; 5055 + } 5089 5056 5090 5057 } else { 5091 5058 ql_dbg(ql_dbg_disc, vha, 0xffff, ··· 5187 5158 fcport->flags &= ~FCF_FABRIC_DEVICE; 5188 5159 fcport->keep_nport_handle = 1; 5189 5160 if (vha->flags.nvme_enabled) { 5190 - fcport->fc4f_nvme = 1; 5161 + fcport->fc4_type = 5162 + (FS_FC4TYPE_NVME | FS_FC4TYPE_FCP); 5191 5163 fcport->n2n_flag = 1; 5192 5164 } 5193 5165 fcport->fw_login_state = 0;
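The qla_os.c changes above drop the driver's private `sp->ref_count` and instead have the abort path wait on a completion for up to 4 × R_A_TOV. A minimal sketch of the timeout arithmetic only, assuming (as the driver does) that `r_a_tov` is stored in units of 100 ms; the helper name `abort_wait_ms` is illustrative, and the kernel side additionally converts the result with `msecs_to_jiffies()`:

```c
/* Illustrative sketch: mirrors ratov_j = ha->r_a_tov/10 * 4 * 1000 from
 * the hunk above. r_a_tov is in units of 100 ms, so r_a_tov/10 is whole
 * seconds; the abort handler waits 4 * R_A_TOV for the firmware to
 * return the command before declaring the abort FAILED. */
static unsigned int abort_wait_ms(unsigned int r_a_tov)
{
	return r_a_tov / 10 * 4 * 1000;
}
```

So an R_A_TOV of 10 seconds (`r_a_tov == 100`) gives a 40 000 ms wait budget for `wait_for_completion_timeout()`.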
+1 -1
drivers/scsi/qla2xxx/qla_target.c
··· 463 463 464 464 case IMMED_NOTIFY_TYPE: 465 465 { 466 - struct scsi_qla_host *host = vha; 466 + struct scsi_qla_host *host; 467 467 struct imm_ntfy_from_isp *entry = 468 468 (struct imm_ntfy_from_isp *)pkt; 469 469
+26 -3
drivers/scsi/qla2xxx/qla_tmpl.c
··· 10 10 #define ISPREG(vha) (&(vha)->hw->iobase->isp24) 11 11 #define IOBAR(reg) offsetof(typeof(*(reg)), iobase_addr) 12 12 #define IOBASE(vha) IOBAR(ISPREG(vha)) 13 + #define INVALID_ENTRY ((struct qla27xx_fwdt_entry *)0xffffffffffffffffUL) 13 14 14 15 static inline void 15 16 qla27xx_insert16(uint16_t value, void *buf, ulong *len) ··· 262 261 ulong start = le32_to_cpu(ent->t262.start_addr); 263 262 ulong end = le32_to_cpu(ent->t262.end_addr); 264 263 ulong dwords; 264 + int rc; 265 265 266 266 ql_dbg(ql_dbg_misc, vha, 0xd206, 267 267 "%s: rdram(%x) [%lx]\n", __func__, ent->t262.ram_area, *len); ··· 310 308 dwords = end - start + 1; 311 309 if (buf) { 312 310 buf += *len; 313 - qla24xx_dump_ram(vha->hw, start, buf, dwords, &buf); 311 + rc = qla24xx_dump_ram(vha->hw, start, buf, dwords, &buf); 312 + if (rc != QLA_SUCCESS) { 313 + ql_dbg(ql_dbg_async, vha, 0xffff, 314 + "%s: dump ram MB failed. Area %xh start %lxh end %lxh\n", 315 + __func__, area, start, end); 316 + return INVALID_ENTRY; 317 + } 314 318 } 315 319 *len += dwords * sizeof(uint32_t); 316 320 done: ··· 846 838 ent = qla27xx_find_entry(type)(vha, ent, buf, len); 847 839 if (!ent) 848 840 break; 841 + 842 + if (ent == INVALID_ENTRY) { 843 + *len = 0; 844 + ql_dbg(ql_dbg_async, vha, 0xffff, 845 + "Unable to capture FW dump"); 846 + goto bailout; 847 + } 849 848 } 850 849 851 850 if (tmp->count) ··· 862 847 if (ent) 863 848 ql_dbg(ql_dbg_misc, vha, 0xd019, 864 849 "%s: missing end entry\n", __func__); 850 + 851 + bailout: 852 + cpu_to_le32s(&tmp->count); /* endianize residual count */ 865 853 } 866 854 867 855 static void ··· 1017 999 uint j; 1018 1000 ulong len; 1019 1001 void *buf = vha->hw->fw_dump; 1002 + uint count = vha->hw->fw_dump_mpi ? 
2 : 1; 1020 1003 1021 - for (j = 0; j < 2; j++, fwdt++, buf += len) { 1004 + for (j = 0; j < count; j++, fwdt++, buf += len) { 1022 1005 ql_log(ql_log_warn, vha, 0xd011, 1023 1006 "-> fwdt%u running...\n", j); 1024 1007 if (!fwdt->template) { ··· 1029 1010 } 1030 1011 len = qla27xx_execute_fwdt_template(vha, 1031 1012 fwdt->template, buf); 1032 - if (len != fwdt->dump_size) { 1013 + if (len == 0) { 1014 + goto bailout; 1015 + } else if (len != fwdt->dump_size) { 1033 1016 ql_log(ql_log_warn, vha, 0xd013, 1034 1017 "-> fwdt%u fwdump residual=%+ld\n", 1035 1018 j, fwdt->dump_size - len); ··· 1046 1025 qla2x00_post_uevent_work(vha, QLA_UEVENT_CODE_FW_DUMP); 1047 1026 } 1048 1027 1028 + bailout: 1029 + vha->hw->fw_dump_mpi = 0; 1049 1030 #ifndef __CHECKER__ 1050 1031 if (!hardware_locked) 1051 1032 spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
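The qla_tmpl.c hunk above introduces `INVALID_ENTRY`, an all-ones poison pointer distinct from NULL, so the template walker can report a failed RAM dump (bail out and zero the captured length) separately from a clean end of template (NULL). A small userspace sketch of the same two-sentinel convention; the entry layout and the failure condition here are made up purely for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch of the sentinel-pointer convention: NULL means
 * "end of template", while a poison pointer that can never be a real
 * entry address reports a capture failure mid-walk. */
#define INVALID_ENTRY ((const int *)UINTPTR_MAX)

/* Walk an array of dump "entries"; a negative value simulates a read
 * failure (a stand-in for qla24xx_dump_ram() returning an error). */
static const int *next_entry(const int *ent, const int *end)
{
	if (ent == end)
		return NULL;          /* clean end of template */
	if (*ent < 0)
		return INVALID_ENTRY; /* abort; caller zeroes the length */
	return ent + 1;
}
```

The caller then checks both sentinels, as the patched `qla27xx_walk_template()` does: `!ent` ends the walk normally, `ent == INVALID_ENTRY` sets `*len = 0` and bails out.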
+1 -1
drivers/scsi/qla2xxx/qla_version.h
··· 7 7 /* 8 8 * Driver version 9 9 */ 10 - #define QLA2XXX_VERSION "10.01.00.19-k" 10 + #define QLA2XXX_VERSION "10.01.00.21-k" 11 11 12 12 #define QLA_DRIVER_MAJOR_VER 10 13 13 #define QLA_DRIVER_MINOR_VER 1
-3
drivers/scsi/qla4xxx/ql4_mbx.c
··· 640 640 641 641 if (qla4xxx_get_ifcb(ha, &mbox_cmd[0], &mbox_sts[0], init_fw_cb_dma) != 642 642 QLA_SUCCESS) { 643 - dma_free_coherent(&ha->pdev->dev, 644 - sizeof(struct addr_ctrl_blk), 645 - init_fw_cb, init_fw_cb_dma); 646 643 goto exit_init_fw_cb; 647 644 } 648 645
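The ql4_mbx.c hunk above is the "fix double free bug" commit from the shortlog: the deleted `dma_free_coherent()` ran on the error path and then the buffer was freed again at the function's common exit label. A minimal sketch of the corrected shape, with plain `malloc`/`free` standing in for the DMA API and a made-up `do_init()` as the function being fixed:

```c
#include <stdlib.h>
#include <stddef.h>

struct ctx { void *buf; };

/* Illustrative: when an error path jumps to a common exit label that
 * already frees the buffer, freeing before the jump frees it twice.
 * The fix centralizes cleanup at the label. */
static int do_init(struct ctx *c, int fail)
{
	int ret = -1;

	c->buf = malloc(64);
	if (!c->buf)
		return -1;
	if (fail)
		goto out;	/* do NOT free here; "out" owns the cleanup */
	ret = 0;
out:
	free(c->buf);
	c->buf = NULL;
	return ret;
}
```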
+5 -1
drivers/scsi/scsi.c
··· 186 186 struct scsi_driver *drv; 187 187 unsigned int good_bytes; 188 188 189 - scsi_device_unbusy(sdev); 189 + scsi_device_unbusy(sdev, cmd); 190 190 191 191 /* 192 192 * Clear the flags that say that the device/target/host is no longer ··· 465 465 return; 466 466 467 467 for (i = 4; i < vpd_buf->len; i++) { 468 + if (vpd_buf->data[i] == 0x0) 469 + scsi_update_vpd_page(sdev, 0x0, &sdev->vpd_pg0); 468 470 if (vpd_buf->data[i] == 0x80) 469 471 scsi_update_vpd_page(sdev, 0x80, &sdev->vpd_pg80); 470 472 if (vpd_buf->data[i] == 0x83) 471 473 scsi_update_vpd_page(sdev, 0x83, &sdev->vpd_pg83); 474 + if (vpd_buf->data[i] == 0x89) 475 + scsi_update_vpd_page(sdev, 0x89, &sdev->vpd_pg89); 472 476 } 473 477 kfree(vpd_buf); 474 478 }
+7 -2
drivers/scsi/scsi_debug.c
··· 1025 1025 static int p_fill_from_dev_buffer(struct scsi_cmnd *scp, const void *arr, 1026 1026 int arr_len, unsigned int off_dst) 1027 1027 { 1028 - int act_len, n; 1028 + unsigned int act_len, n; 1029 1029 struct scsi_data_buffer *sdb = &scp->sdb; 1030 1030 off_t skip = off_dst; 1031 1031 ··· 1039 1039 pr_debug("%s: off_dst=%u, scsi_bufflen=%u, act_len=%u, resid=%d\n", 1040 1040 __func__, off_dst, scsi_bufflen(scp), act_len, 1041 1041 scsi_get_resid(scp)); 1042 - n = (int)scsi_bufflen(scp) - ((int)off_dst + act_len); 1042 + n = scsi_bufflen(scp) - (off_dst + act_len); 1043 1043 scsi_set_resid(scp, min(scsi_get_resid(scp), n)); 1044 1044 return 0; 1045 1045 } ··· 5260 5260 5261 5261 default: 5262 5262 pr_err("dif must be 0, 1, 2 or 3\n"); 5263 + return -EINVAL; 5264 + } 5265 + 5266 + if (sdebug_num_tgts < 0) { 5267 + pr_err("num_tgts must be >= 0\n"); 5263 5268 return -EINVAL; 5264 5269 } 5265 5270
+22 -23
drivers/scsi/scsi_lib.c
··· 189 189 * active on the host/device. 190 190 */ 191 191 if (unbusy) 192 - scsi_device_unbusy(device); 192 + scsi_device_unbusy(device, cmd); 193 193 194 194 /* 195 195 * Requeue this command. It will go before all other commands ··· 321 321 } 322 322 323 323 /* 324 - * Decrement the host_busy counter and wake up the error handler if necessary. 325 - * Avoid as follows that the error handler is not woken up if shost->host_busy 326 - * == shost->host_failed: use call_rcu() in scsi_eh_scmd_add() in combination 327 - * with an RCU read lock in this function to ensure that this function in its 328 - * entirety either finishes before scsi_eh_scmd_add() increases the 324 + * Wake up the error handler if necessary. Avoid as follows that the error 325 + * handler is not woken up if host in-flight requests number == 326 + * shost->host_failed: use call_rcu() in scsi_eh_scmd_add() in combination 327 + * with an RCU read lock in this function to ensure that this function in 328 + * its entirety either finishes before scsi_eh_scmd_add() increases the 329 329 * host_failed counter or that it notices the shost state change made by 330 330 * scsi_eh_scmd_add(). 
331 331 */ 332 - static void scsi_dec_host_busy(struct Scsi_Host *shost) 332 + static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd) 333 333 { 334 334 unsigned long flags; 335 335 336 336 rcu_read_lock(); 337 - atomic_dec(&shost->host_busy); 337 + __clear_bit(SCMD_STATE_INFLIGHT, &cmd->state); 338 338 if (unlikely(scsi_host_in_recovery(shost))) { 339 339 spin_lock_irqsave(shost->host_lock, flags); 340 340 if (shost->host_failed || shost->host_eh_scheduled) ··· 344 344 rcu_read_unlock(); 345 345 } 346 346 347 - void scsi_device_unbusy(struct scsi_device *sdev) 347 + void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd) 348 348 { 349 349 struct Scsi_Host *shost = sdev->host; 350 350 struct scsi_target *starget = scsi_target(sdev); 351 351 352 - scsi_dec_host_busy(shost); 352 + scsi_dec_host_busy(shost, cmd); 353 353 354 354 if (starget->can_queue > 0) 355 355 atomic_dec(&starget->target_busy); ··· 430 430 431 431 static inline bool scsi_host_is_busy(struct Scsi_Host *shost) 432 432 { 433 - if (shost->can_queue > 0 && 434 - atomic_read(&shost->host_busy) >= shost->can_queue) 435 - return true; 436 433 if (atomic_read(&shost->host_blocked) > 0) 437 434 return true; 438 435 if (shost->host_self_blocked) ··· 1136 1139 unsigned int flags = cmd->flags & SCMD_PRESERVED_FLAGS; 1137 1140 unsigned long jiffies_at_alloc; 1138 1141 int retries; 1142 + bool in_flight; 1139 1143 1140 1144 if (!blk_rq_is_scsi(rq) && !(flags & SCMD_INITIALIZED)) { 1141 1145 flags |= SCMD_INITIALIZED; ··· 1145 1147 1146 1148 jiffies_at_alloc = cmd->jiffies_at_alloc; 1147 1149 retries = cmd->retries; 1150 + in_flight = test_bit(SCMD_STATE_INFLIGHT, &cmd->state); 1148 1151 /* zero out the cmd, except for the embedded scsi_request */ 1149 1152 memset((char *)cmd + sizeof(cmd->req), 0, 1150 1153 sizeof(*cmd) - sizeof(cmd->req) + dev->host->hostt->cmd_size); ··· 1157 1158 INIT_DELAYED_WORK(&cmd->abort_work, scmd_eh_abort_handler); 1158 1159 cmd->jiffies_at_alloc 
= jiffies_at_alloc; 1159 1160 cmd->retries = retries; 1161 + if (in_flight) 1162 + __set_bit(SCMD_STATE_INFLIGHT, &cmd->state); 1160 1163 1161 1164 scsi_add_cmd_to_list(cmd); 1162 1165 } ··· 1368 1367 */ 1369 1368 static inline int scsi_host_queue_ready(struct request_queue *q, 1370 1369 struct Scsi_Host *shost, 1371 - struct scsi_device *sdev) 1370 + struct scsi_device *sdev, 1371 + struct scsi_cmnd *cmd) 1372 1372 { 1373 - unsigned int busy; 1374 - 1375 1373 if (scsi_host_in_recovery(shost)) 1376 1374 return 0; 1377 1375 1378 - busy = atomic_inc_return(&shost->host_busy) - 1; 1379 1376 if (atomic_read(&shost->host_blocked) > 0) { 1380 - if (busy) 1377 + if (scsi_host_busy(shost) > 0) 1381 1378 goto starved; 1382 1379 1383 1380 /* ··· 1389 1390 "unblocking host at zero depth\n")); 1390 1391 } 1391 1392 1392 - if (shost->can_queue > 0 && busy >= shost->can_queue) 1393 - goto starved; 1394 1393 if (shost->host_self_blocked) 1395 1394 goto starved; 1396 1395 ··· 1400 1403 spin_unlock_irq(shost->host_lock); 1401 1404 } 1402 1405 1406 + __set_bit(SCMD_STATE_INFLIGHT, &cmd->state); 1407 + 1403 1408 return 1; 1404 1409 1405 1410 starved: ··· 1410 1411 list_add_tail(&sdev->starved_entry, &shost->starved_list); 1411 1412 spin_unlock_irq(shost->host_lock); 1412 1413 out_dec: 1413 - scsi_dec_host_busy(shost); 1414 + scsi_dec_host_busy(shost, cmd); 1414 1415 return 0; 1415 1416 } 1416 1417 ··· 1664 1665 ret = BLK_STS_RESOURCE; 1665 1666 if (!scsi_target_queue_ready(shost, sdev)) 1666 1667 goto out_put_budget; 1667 - if (!scsi_host_queue_ready(q, shost, sdev)) 1668 + if (!scsi_host_queue_ready(q, shost, sdev, cmd)) 1668 1669 goto out_dec_target_busy; 1669 1670 1670 1671 if (!(req->rq_flags & RQF_DONTPREP)) { ··· 1696 1697 return BLK_STS_OK; 1697 1698 1698 1699 out_dec_host_busy: 1699 - scsi_dec_host_busy(shost); 1700 + scsi_dec_host_busy(shost, cmd); 1700 1701 out_dec_target_busy: 1701 1702 if (scsi_target(sdev)->can_queue > 0) 1702 1703 
atomic_dec(&scsi_target(sdev)->target_busy);
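The scsi_lib.c changes above implement the host busy counter removal called out in the merge message: instead of bumping an atomic `shost->host_busy` on every dispatch (a shared cacheline contended across all hardware queues), each command carries an `SCMD_STATE_INFLIGHT` bit, and the count is derived only when actually needed (the real `scsi_host_busy()` iterates the blk-mq tag set). A toy single-threaded userspace model of that trade-off; the struct and helper names are illustrative:

```c
#include <stddef.h>

/* Illustrative model: per-command state bit instead of a hot shared
 * counter. Dispatch/complete touch only the command's own state word. */
enum { STATE_INFLIGHT = 1u << 0 };

struct cmd { unsigned int state; };

static void cmd_dispatch(struct cmd *c) { c->state |= STATE_INFLIGHT; }
static void cmd_complete(struct cmd *c) { c->state &= ~STATE_INFLIGHT; }

/* Analog of scsi_host_busy(): count by iterating the commands, which is
 * acceptable because the busy count is only needed on slow paths
 * (error handling, host_blocked) rather than per I/O. */
static size_t host_busy(const struct cmd *cmds, size_t n)
{
	size_t busy = 0;

	for (size_t i = 0; i < n; i++)
		if (cmds[i].state & STATE_INFLIGHT)
			busy++;
	return busy;
}
```

This is also why `scsi_init_command()` above preserves the in-flight bit across the memset, and why the requeue/error paths clear it via `scsi_dec_host_busy(shost, cmd)` instead of decrementing a counter.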
+8 -2
drivers/scsi/scsi_logging.c
··· 390 390 const char *mlret_string = scsi_mlreturn_string(disposition); 391 391 const char *hb_string = scsi_hostbyte_string(cmd->result); 392 392 const char *db_string = scsi_driverbyte_string(cmd->result); 393 + unsigned long cmd_age = (jiffies - cmd->jiffies_at_alloc) / HZ; 393 394 394 395 logbuf = scsi_log_reserve_buffer(&logbuf_len); 395 396 if (!logbuf) ··· 432 431 433 432 if (db_string) 434 433 off += scnprintf(logbuf + off, logbuf_len - off, 435 - "driverbyte=%s", db_string); 434 + "driverbyte=%s ", db_string); 436 435 else 437 436 off += scnprintf(logbuf + off, logbuf_len - off, 438 - "driverbyte=0x%02x", driver_byte(cmd->result)); 437 + "driverbyte=0x%02x ", 438 + driver_byte(cmd->result)); 439 + 440 + off += scnprintf(logbuf + off, logbuf_len - off, 441 + "cmd_age=%lus", cmd_age); 442 + 439 443 out_printk: 440 444 dev_printk(KERN_INFO, &cmd->device->sdev_gendev, "%s", logbuf); 441 445 scsi_log_release_buffer(logbuf);
+1 -1
drivers/scsi/scsi_priv.h
··· 87 87 extern void scsi_add_cmd_to_list(struct scsi_cmnd *cmd); 88 88 extern void scsi_del_cmd_from_list(struct scsi_cmnd *cmd); 89 89 extern int scsi_maybe_unblock_host(struct scsi_device *sdev); 90 - extern void scsi_device_unbusy(struct scsi_device *sdev); 90 + extern void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd); 91 91 extern void scsi_queue_insert(struct scsi_cmnd *cmd, int reason); 92 92 extern void scsi_io_completion(struct scsi_cmnd *, unsigned int); 93 93 extern void scsi_run_host_queues(struct Scsi_Host *shost);
+21 -1
drivers/scsi/scsi_sysfs.c
··· 437 437 struct device *parent; 438 438 struct list_head *this, *tmp; 439 439 struct scsi_vpd *vpd_pg80 = NULL, *vpd_pg83 = NULL; 440 + struct scsi_vpd *vpd_pg0 = NULL, *vpd_pg89 = NULL; 440 441 unsigned long flags; 441 442 442 443 sdev = container_of(work, struct scsi_device, ew.work); ··· 467 466 sdev->request_queue = NULL; 468 467 469 468 mutex_lock(&sdev->inquiry_mutex); 469 + vpd_pg0 = rcu_replace_pointer(sdev->vpd_pg0, vpd_pg0, 470 + lockdep_is_held(&sdev->inquiry_mutex)); 470 471 vpd_pg80 = rcu_replace_pointer(sdev->vpd_pg80, vpd_pg80, 471 472 lockdep_is_held(&sdev->inquiry_mutex)); 472 473 vpd_pg83 = rcu_replace_pointer(sdev->vpd_pg83, vpd_pg83, 473 474 lockdep_is_held(&sdev->inquiry_mutex)); 475 + vpd_pg89 = rcu_replace_pointer(sdev->vpd_pg89, vpd_pg89, 476 + lockdep_is_held(&sdev->inquiry_mutex)); 474 477 mutex_unlock(&sdev->inquiry_mutex); 475 478 479 + if (vpd_pg0) 480 + kfree_rcu(vpd_pg0, rcu); 476 481 if (vpd_pg83) 477 482 kfree_rcu(vpd_pg83, rcu); 478 483 if (vpd_pg80) 479 484 kfree_rcu(vpd_pg80, rcu); 485 + if (vpd_pg89) 486 + kfree_rcu(vpd_pg89, rcu); 480 487 kfree(sdev->inquiry); 481 488 kfree(sdev); 482 489 ··· 877 868 878 869 sdev_vpd_pg_attr(pg83); 879 870 sdev_vpd_pg_attr(pg80); 871 + sdev_vpd_pg_attr(pg89); 872 + sdev_vpd_pg_attr(pg0); 880 873 881 874 static ssize_t show_inquiry(struct file *filep, struct kobject *kobj, 882 875 struct bin_attribute *bin_attr, ··· 1211 1200 struct scsi_device *sdev = to_scsi_device(dev); 1212 1201 1213 1202 1203 + if (attr == &dev_attr_vpd_pg0 && !sdev->vpd_pg0) 1204 + return 0; 1205 + 1214 1206 if (attr == &dev_attr_vpd_pg80 && !sdev->vpd_pg80) 1215 1207 return 0; 1216 1208 1217 1209 if (attr == &dev_attr_vpd_pg83 && !sdev->vpd_pg83) 1210 + return 0; 1211 + 1212 + if (attr == &dev_attr_vpd_pg89 && !sdev->vpd_pg89) 1218 1213 return 0; 1219 1214 1220 1215 return S_IRUGO; ··· 1265 1248 }; 1266 1249 1267 1250 static struct bin_attribute *scsi_sdev_bin_attrs[] = { 1251 + &dev_attr_vpd_pg0, 1268 1252 
&dev_attr_vpd_pg83, 1269 1253 &dev_attr_vpd_pg80, 1254 + &dev_attr_vpd_pg89, 1270 1255 &dev_attr_inquiry, 1271 1256 NULL 1272 1257 }; ··· 1328 1309 device_enable_async_suspend(&sdev->sdev_gendev); 1329 1310 scsi_autopm_get_target(starget); 1330 1311 pm_runtime_set_active(&sdev->sdev_gendev); 1331 - pm_runtime_forbid(&sdev->sdev_gendev); 1312 + if (!sdev->rpm_autosuspend) 1313 + pm_runtime_forbid(&sdev->sdev_gendev); 1332 1314 pm_runtime_enable(&sdev->sdev_gendev); 1333 1315 scsi_autopm_put_target(starget); 1334 1316
+34 -78
drivers/scsi/scsi_trace.c
··· 9 9 #include <trace/events/scsi.h> 10 10 11 11 #define SERVICE_ACTION16(cdb) (cdb[1] & 0x1f) 12 - #define SERVICE_ACTION32(cdb) ((cdb[8] << 8) | cdb[9]) 12 + #define SERVICE_ACTION32(cdb) (get_unaligned_be16(&cdb[8])) 13 13 14 14 static const char * 15 15 scsi_trace_misc(struct trace_seq *, unsigned char *, int); ··· 18 18 scsi_trace_rw6(struct trace_seq *p, unsigned char *cdb, int len) 19 19 { 20 20 const char *ret = trace_seq_buffer_ptr(p); 21 - sector_t lba = 0, txlen = 0; 21 + u32 lba = 0, txlen; 22 22 23 23 lba |= ((cdb[1] & 0x1F) << 16); 24 24 lba |= (cdb[2] << 8); 25 25 lba |= cdb[3]; 26 - txlen = cdb[4]; 26 + /* 27 + * From SBC-2: a TRANSFER LENGTH field set to zero specifies that 256 28 + * logical blocks shall be read (READ(6)) or written (WRITE(6)). 29 + */ 30 + txlen = cdb[4] ? cdb[4] : 256; 27 31 28 - trace_seq_printf(p, "lba=%llu txlen=%llu", 29 - (unsigned long long)lba, (unsigned long long)txlen); 32 + trace_seq_printf(p, "lba=%u txlen=%u", lba, txlen); 30 33 trace_seq_putc(p, 0); 31 34 32 35 return ret; ··· 39 36 scsi_trace_rw10(struct trace_seq *p, unsigned char *cdb, int len) 40 37 { 41 38 const char *ret = trace_seq_buffer_ptr(p); 42 - sector_t lba = 0, txlen = 0; 39 + u32 lba, txlen; 43 40 44 - lba |= (cdb[2] << 24); 45 - lba |= (cdb[3] << 16); 46 - lba |= (cdb[4] << 8); 47 - lba |= cdb[5]; 48 - txlen |= (cdb[7] << 8); 49 - txlen |= cdb[8]; 41 + lba = get_unaligned_be32(&cdb[2]); 42 + txlen = get_unaligned_be16(&cdb[7]); 50 43 51 - trace_seq_printf(p, "lba=%llu txlen=%llu protect=%u", 52 - (unsigned long long)lba, (unsigned long long)txlen, 44 + trace_seq_printf(p, "lba=%u txlen=%u protect=%u", lba, txlen, 53 45 cdb[1] >> 5); 54 46 55 47 if (cdb[0] == WRITE_SAME) ··· 59 61 scsi_trace_rw12(struct trace_seq *p, unsigned char *cdb, int len) 60 62 { 61 63 const char *ret = trace_seq_buffer_ptr(p); 62 - sector_t lba = 0, txlen = 0; 64 + u32 lba, txlen; 63 65 64 - lba |= (cdb[2] << 24); 65 - lba |= (cdb[3] << 16); 66 - lba |= (cdb[4] << 8); 67 - 
lba |= cdb[5]; 68 - txlen |= (cdb[6] << 24); 69 - txlen |= (cdb[7] << 16); 70 - txlen |= (cdb[8] << 8); 71 - txlen |= cdb[9]; 66 + lba = get_unaligned_be32(&cdb[2]); 67 + txlen = get_unaligned_be32(&cdb[6]); 72 68 73 - trace_seq_printf(p, "lba=%llu txlen=%llu protect=%u", 74 - (unsigned long long)lba, (unsigned long long)txlen, 69 + trace_seq_printf(p, "lba=%u txlen=%u protect=%u", lba, txlen, 75 70 cdb[1] >> 5); 76 71 trace_seq_putc(p, 0); 77 72 ··· 75 84 scsi_trace_rw16(struct trace_seq *p, unsigned char *cdb, int len) 76 85 { 77 86 const char *ret = trace_seq_buffer_ptr(p); 78 - sector_t lba = 0, txlen = 0; 87 + u64 lba; 88 + u32 txlen; 79 89 80 - lba |= ((u64)cdb[2] << 56); 81 - lba |= ((u64)cdb[3] << 48); 82 - lba |= ((u64)cdb[4] << 40); 83 - lba |= ((u64)cdb[5] << 32); 84 - lba |= (cdb[6] << 24); 85 - lba |= (cdb[7] << 16); 86 - lba |= (cdb[8] << 8); 87 - lba |= cdb[9]; 88 - txlen |= (cdb[10] << 24); 89 - txlen |= (cdb[11] << 16); 90 - txlen |= (cdb[12] << 8); 91 - txlen |= cdb[13]; 90 + lba = get_unaligned_be64(&cdb[2]); 91 + txlen = get_unaligned_be32(&cdb[10]); 92 92 93 - trace_seq_printf(p, "lba=%llu txlen=%llu protect=%u", 94 - (unsigned long long)lba, (unsigned long long)txlen, 93 + trace_seq_printf(p, "lba=%llu txlen=%u protect=%u", lba, txlen, 95 94 cdb[1] >> 5); 96 95 97 96 if (cdb[0] == WRITE_SAME_16) ··· 96 115 scsi_trace_rw32(struct trace_seq *p, unsigned char *cdb, int len) 97 116 { 98 117 const char *ret = trace_seq_buffer_ptr(p), *cmd; 99 - sector_t lba = 0, txlen = 0; 100 - u32 ei_lbrt = 0; 118 + u64 lba; 119 + u32 ei_lbrt, txlen; 101 120 102 121 switch (SERVICE_ACTION32(cdb)) { 103 122 case READ_32: ··· 117 136 goto out; 118 137 } 119 138 120 - lba |= ((u64)cdb[12] << 56); 121 - lba |= ((u64)cdb[13] << 48); 122 - lba |= ((u64)cdb[14] << 40); 123 - lba |= ((u64)cdb[15] << 32); 124 - lba |= (cdb[16] << 24); 125 - lba |= (cdb[17] << 16); 126 - lba |= (cdb[18] << 8); 127 - lba |= cdb[19]; 128 - ei_lbrt |= (cdb[20] << 24); 129 - ei_lbrt |= 
(cdb[21] << 16); 130 - ei_lbrt |= (cdb[22] << 8); 131 - ei_lbrt |= cdb[23]; 132 - txlen |= (cdb[28] << 24); 133 - txlen |= (cdb[29] << 16); 134 - txlen |= (cdb[30] << 8); 135 - txlen |= cdb[31]; 139 + lba = get_unaligned_be64(&cdb[12]); 140 + ei_lbrt = get_unaligned_be32(&cdb[20]); 141 + txlen = get_unaligned_be32(&cdb[28]); 136 142 137 - trace_seq_printf(p, "%s_32 lba=%llu txlen=%llu protect=%u ei_lbrt=%u", 138 - cmd, (unsigned long long)lba, 139 - (unsigned long long)txlen, cdb[10] >> 5, ei_lbrt); 143 + trace_seq_printf(p, "%s_32 lba=%llu txlen=%u protect=%u ei_lbrt=%u", 144 + cmd, lba, txlen, cdb[10] >> 5, ei_lbrt); 140 145 141 146 if (SERVICE_ACTION32(cdb) == WRITE_SAME_32) 142 147 trace_seq_printf(p, " unmap=%u", cdb[10] >> 3 & 1); ··· 137 170 scsi_trace_unmap(struct trace_seq *p, unsigned char *cdb, int len) 138 171 { 139 172 const char *ret = trace_seq_buffer_ptr(p); 140 - unsigned int regions = cdb[7] << 8 | cdb[8]; 173 + unsigned int regions = get_unaligned_be16(&cdb[7]); 141 174 142 175 trace_seq_printf(p, "regions=%u", (regions - 8) / 16); 143 176 trace_seq_putc(p, 0); ··· 149 182 scsi_trace_service_action_in(struct trace_seq *p, unsigned char *cdb, int len) 150 183 { 151 184 const char *ret = trace_seq_buffer_ptr(p), *cmd; 152 - sector_t lba = 0; 153 - u32 alloc_len = 0; 185 + u64 lba; 186 + u32 alloc_len; 154 187 155 188 switch (SERVICE_ACTION16(cdb)) { 156 189 case SAI_READ_CAPACITY_16: ··· 164 197 goto out; 165 198 } 166 199 167 - lba |= ((u64)cdb[2] << 56); 168 - lba |= ((u64)cdb[3] << 48); 169 - lba |= ((u64)cdb[4] << 40); 170 - lba |= ((u64)cdb[5] << 32); 171 - lba |= (cdb[6] << 24); 172 - lba |= (cdb[7] << 16); 173 - lba |= (cdb[8] << 8); 174 - lba |= cdb[9]; 175 - alloc_len |= (cdb[10] << 24); 176 - alloc_len |= (cdb[11] << 16); 177 - alloc_len |= (cdb[12] << 8); 178 - alloc_len |= cdb[13]; 200 + lba = get_unaligned_be64(&cdb[2]); 201 + alloc_len = get_unaligned_be32(&cdb[10]); 179 202 180 - trace_seq_printf(p, "%s lba=%llu alloc_len=%u", cmd, 
181 - (unsigned long long)lba, alloc_len); 203 + trace_seq_printf(p, "%s lba=%llu alloc_len=%u", cmd, lba, alloc_len); 182 204 183 205 out: 184 206 trace_seq_putc(p, 0);
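The scsi_trace.c rewrite above replaces open-coded shift chains with the kernel's `get_unaligned_be16/32/64` helpers and fixes READ(6)/WRITE(6) decoding, where SBC defines a TRANSFER LENGTH of zero to mean 256 logical blocks. Userspace equivalents, with `be16`/`be32` as illustrative stand-ins for the kernel helpers (which additionally handle unaligned loads on strict-alignment architectures):

```c
#include <stdint.h>

/* Illustrative big-endian field extraction, equivalent to the shift
 * chains the patch removes. */
static uint16_t be16(const unsigned char *p)
{
	return (uint16_t)(p[0] << 8 | p[1]);
}

static uint32_t be32(const unsigned char *p)
{
	return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
	       (uint32_t)p[2] << 8 | p[3];
}

/* READ(10): 32-bit LBA at CDB byte 2, 16-bit transfer length at byte 7. */
static void decode_read10(const unsigned char *cdb,
			  uint32_t *lba, uint32_t *txlen)
{
	*lba = be32(&cdb[2]);
	*txlen = be16(&cdb[7]);
}

/* READ(6)/WRITE(6): per SBC, TRANSFER LENGTH == 0 means 256 blocks,
 * which is the decoding bug the hunk above fixes. */
static uint32_t read6_txlen(const unsigned char *cdb)
{
	return cdb[4] ? cdb[4] : 256;
}
```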
+4
drivers/scsi/sd.c
··· 3390 3390 } 3391 3391 3392 3392 blk_pm_runtime_init(sdp->request_queue, dev); 3393 + if (sdp->rpm_autosuspend) { 3394 + pm_runtime_set_autosuspend_delay(dev, 3395 + sdp->host->hostt->rpm_autosuspend_delay); 3396 + } 3393 3397 device_add_disk(dev, gd, NULL); 3394 3398 if (sdkp->capacity) 3395 3399 sd_dif_config_host(sdkp);
+41 -50
drivers/scsi/sg.c
··· 429 429 SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, 430 430 "sg_read: count=%d\n", (int) count)); 431 431 432 - if (!access_ok(buf, count)) 433 - return -EFAULT; 434 432 if (sfp->force_packid && (count >= SZ_SG_HEADER)) { 435 - old_hdr = kmalloc(SZ_SG_HEADER, GFP_KERNEL); 436 - if (!old_hdr) 437 - return -ENOMEM; 438 - if (__copy_from_user(old_hdr, buf, SZ_SG_HEADER)) { 439 - retval = -EFAULT; 440 - goto free_old_hdr; 441 - } 433 + old_hdr = memdup_user(buf, SZ_SG_HEADER); 434 + if (IS_ERR(old_hdr)) 435 + return PTR_ERR(old_hdr); 442 436 if (old_hdr->reply_len < 0) { 443 437 if (count >= SZ_SG_IO_HDR) { 438 + /* 439 + * This is stupid. 440 + * 441 + * We're copying the whole sg_io_hdr_t from user 442 + * space just to get the 'pack_id' field. But the 443 + * field is at different offsets for the compat 444 + * case, so we'll use "get_sg_io_hdr()" to copy 445 + * the whole thing and convert it. 446 + * 447 + * We could do something like just calculating the 448 + * offset based of 'in_compat_syscall()', but the 449 + * 'compat_sg_io_hdr' definition is in the wrong 450 + * place for that. 451 + */ 444 452 sg_io_hdr_t *new_hdr; 445 453 new_hdr = kmalloc(SZ_SG_IO_HDR, GFP_KERNEL); 446 454 if (!new_hdr) { ··· 545 537 546 538 /* Now copy the result back to the user buffer. 
*/ 547 539 if (count >= SZ_SG_HEADER) { 548 - if (__copy_to_user(buf, old_hdr, SZ_SG_HEADER)) { 540 + if (copy_to_user(buf, old_hdr, SZ_SG_HEADER)) { 549 541 retval = -EFAULT; 550 542 goto free_old_hdr; 551 543 } ··· 631 623 scsi_block_when_processing_errors(sdp->device))) 632 624 return -ENXIO; 633 625 634 - if (!access_ok(buf, count)) 635 - return -EFAULT; /* protects following copy_from_user()s + get_user()s */ 636 626 if (count < SZ_SG_HEADER) 637 627 return -EIO; 638 - if (__copy_from_user(&old_hdr, buf, SZ_SG_HEADER)) 628 + if (copy_from_user(&old_hdr, buf, SZ_SG_HEADER)) 639 629 return -EFAULT; 640 630 blocking = !(filp->f_flags & O_NONBLOCK); 641 631 if (old_hdr.reply_len < 0) ··· 642 636 if (count < (SZ_SG_HEADER + 6)) 643 637 return -EIO; /* The minimum scsi command length is 6 bytes. */ 644 638 639 + buf += SZ_SG_HEADER; 640 + if (get_user(opcode, buf)) 641 + return -EFAULT; 642 + 645 643 if (!(srp = sg_add_request(sfp))) { 646 644 SCSI_LOG_TIMEOUT(1, sg_printk(KERN_INFO, sdp, 647 645 "sg_write: queue full\n")); 648 646 return -EDOM; 649 647 } 650 - buf += SZ_SG_HEADER; 651 - __get_user(opcode, buf); 652 648 mutex_lock(&sfp->f_mutex); 653 649 if (sfp->next_cmd_len > 0) { 654 650 cmd_size = sfp->next_cmd_len; ··· 693 685 hp->flags = input_size; /* structure abuse ... 
*/ 694 686 hp->pack_id = old_hdr.pack_id; 695 687 hp->usr_ptr = NULL; 696 - if (__copy_from_user(cmnd, buf, cmd_size)) 688 + if (copy_from_user(cmnd, buf, cmd_size)) 697 689 return -EFAULT; 698 690 /* 699 691 * SG_DXFER_TO_FROM_DEV is functionally equivalent to SG_DXFER_FROM_DEV, ··· 728 720 729 721 if (count < SZ_SG_IO_HDR) 730 722 return -EINVAL; 731 - if (!access_ok(buf, count)) 732 - return -EFAULT; /* protects following copy_from_user()s + get_user()s */ 733 723 734 724 sfp->cmd_q = 1; /* when sg_io_hdr seen, set command queuing on */ 735 725 if (!(srp = sg_add_request(sfp))) { ··· 765 759 sg_remove_request(sfp, srp); 766 760 return -EMSGSIZE; 767 761 } 768 - if (!access_ok(hp->cmdp, hp->cmd_len)) { 769 - sg_remove_request(sfp, srp); 770 - return -EFAULT; /* protects following copy_from_user()s + get_user()s */ 771 - } 772 - if (__copy_from_user(cmnd, hp->cmdp, hp->cmd_len)) { 762 + if (copy_from_user(cmnd, hp->cmdp, hp->cmd_len)) { 773 763 sg_remove_request(sfp, srp); 774 764 return -EFAULT; 775 765 } ··· 942 940 return -ENODEV; 943 941 if (!scsi_block_when_processing_errors(sdp->device)) 944 942 return -ENXIO; 945 - if (!access_ok(p, SZ_SG_IO_HDR)) 946 - return -EFAULT; 947 943 result = sg_new_write(sfp, filp, p, SZ_SG_IO_HDR, 948 944 1, read_only, 1, &srp); 949 945 if (result < 0) ··· 986 986 case SG_GET_LOW_DMA: 987 987 return put_user((int) sdp->device->host->unchecked_isa_dma, ip); 988 988 case SG_GET_SCSI_ID: 989 - if (!access_ok(p, sizeof (sg_scsi_id_t))) 990 - return -EFAULT; 991 - else { 992 - sg_scsi_id_t __user *sg_idp = p; 989 + { 990 + sg_scsi_id_t v; 993 991 994 992 if (atomic_read(&sdp->detaching)) 995 993 return -ENODEV; 996 - __put_user((int) sdp->device->host->host_no, 997 - &sg_idp->host_no); 998 - __put_user((int) sdp->device->channel, 999 - &sg_idp->channel); 1000 - __put_user((int) sdp->device->id, &sg_idp->scsi_id); 1001 - __put_user((int) sdp->device->lun, &sg_idp->lun); 1002 - __put_user((int) sdp->device->type, &sg_idp->scsi_type); 
1003 - __put_user((short) sdp->device->host->cmd_per_lun, 1004 - &sg_idp->h_cmd_per_lun); 1005 - __put_user((short) sdp->device->queue_depth, 1006 - &sg_idp->d_queue_depth); 1007 - __put_user(0, &sg_idp->unused[0]); 1008 - __put_user(0, &sg_idp->unused[1]); 994 + memset(&v, 0, sizeof(v)); 995 + v.host_no = sdp->device->host->host_no; 996 + v.channel = sdp->device->channel; 997 + v.scsi_id = sdp->device->id; 998 + v.lun = sdp->device->lun; 999 + v.scsi_type = sdp->device->type; 1000 + v.h_cmd_per_lun = sdp->device->host->cmd_per_lun; 1001 + v.d_queue_depth = sdp->device->queue_depth; 1002 + if (copy_to_user(p, &v, sizeof(sg_scsi_id_t))) 1003 + return -EFAULT; 1009 1004 return 0; 1010 1005 } 1011 1006 case SG_SET_FORCE_PACK_ID: ··· 1010 1015 sfp->force_packid = val ? 1 : 0; 1011 1016 return 0; 1012 1017 case SG_GET_PACK_ID: 1013 - if (!access_ok(ip, sizeof (int))) 1014 - return -EFAULT; 1015 1018 read_lock_irqsave(&sfp->rq_list_lock, iflags); 1016 1019 list_for_each_entry(srp, &sfp->rq_list, entry) { 1017 1020 if ((1 == srp->done) && (!srp->sg_io_owned)) { 1018 1021 read_unlock_irqrestore(&sfp->rq_list_lock, 1019 1022 iflags); 1020 - __put_user(srp->header.pack_id, ip); 1021 - return 0; 1023 + return put_user(srp->header.pack_id, ip); 1022 1024 } 1023 1025 } 1024 1026 read_unlock_irqrestore(&sfp->rq_list_lock, iflags); 1025 - __put_user(-1, ip); 1026 - return 0; 1027 + return put_user(-1, ip); 1027 1028 case SG_GET_NUM_WAITING: 1028 1029 read_lock_irqsave(&sfp->rq_list_lock, iflags); 1029 1030 val = 0; ··· 2008 2017 num = 1 << (PAGE_SHIFT + schp->page_order); 2009 2018 for (k = 0; k < schp->k_use_sg && schp->pages[k]; k++) { 2010 2019 if (num > num_read_xfer) { 2011 - if (__copy_to_user(outp, page_address(schp->pages[k]), 2020 + if (copy_to_user(outp, page_address(schp->pages[k]), 2012 2021 num_read_xfer)) 2013 2022 return -EFAULT; 2014 2023 break; 2015 2024 } else { 2016 - if (__copy_to_user(outp, page_address(schp->pages[k]), 2025 + if (copy_to_user(outp, 
page_address(schp->pages[k]), 2017 2026 num)) 2018 2027 return -EFAULT; 2019 2028 num_read_xfer -= num;
+38 -39
drivers/scsi/smartpqi/smartpqi.h
··· 276 276 u8 reserved4 : 2; 277 277 u8 additional_cdb_bytes_usage : 3; 278 278 u8 reserved5 : 3; 279 - u8 cdb[32]; 279 + u8 cdb[16]; 280 + u8 reserved6[12]; 281 + __le32 timeout; 280 282 struct pqi_sg_descriptor 281 283 sg_descriptors[PQI_MAX_EMBEDDED_SG_DESCRIPTORS]; 282 284 }; ··· 387 385 struct pqi_iu_header header; 388 386 __le16 request_id; 389 387 __le16 nexus_id; 390 - u8 reserved[4]; 388 + u8 reserved[2]; 389 + __le16 timeout; 391 390 u8 lun_number[8]; 392 391 __le16 protocol_specific; 393 392 __le16 outbound_queue_id_to_manage; ··· 448 445 449 446 struct pqi_ofa_memory { 450 447 __le64 signature; /* "OFA_QRM" */ 451 - __le16 version; /* version of this struct(1 = 1st version) */ 448 + __le16 version; /* version of this struct (1 = 1st version) */ 452 449 u8 reserved[62]; 453 450 __le32 bytes_allocated; /* total allocated memory in bytes */ 454 451 __le16 num_memory_descriptors; ··· 764 761 #define PQI_FIRMWARE_FEATURE_OFA 0 765 762 #define PQI_FIRMWARE_FEATURE_SMP 1 766 763 #define PQI_FIRMWARE_FEATURE_SOFT_RESET_HANDSHAKE 11 764 + #define PQI_FIRMWARE_FEATURE_RAID_IU_TIMEOUT 13 765 + #define PQI_FIRMWARE_FEATURE_TMF_IU_TIMEOUT 14 767 766 768 767 struct pqi_config_table_debug { 769 768 struct pqi_config_table_section_header header; ··· 831 826 832 827 struct report_lun_header { 833 828 __be32 list_length; 834 - u8 extended_response; 829 + u8 flags; 835 830 u8 reserved[3]; 836 831 }; 832 + 833 + /* for flags field of struct report_lun_header */ 834 + #define CISS_REPORT_LOG_FLAG_UNIQUE_LUN_ID (1 << 0) 835 + #define CISS_REPORT_LOG_FLAG_QUEUE_DEPTH (1 << 5) 836 + #define CISS_REPORT_LOG_FLAG_DRIVE_TYPE_MIX (1 << 6) 837 + 838 + #define CISS_REPORT_PHYS_FLAG_OTHER (1 << 1) 837 839 838 840 struct report_log_lun_extended_entry { 839 841 u8 lunid[8]; ··· 863 851 }; 864 852 865 853 /* for device_flags field of struct report_phys_lun_extended_entry */ 866 - #define REPORT_PHYS_LUN_DEV_FLAG_AIO_ENABLED 0x8 854 + #define CISS_REPORT_PHYS_DEV_FLAG_AIO_ENABLED 0x8 
867 855 868 856 struct report_phys_lun_extended { 869 857 struct report_lun_header header; ··· 876 864 u8 reserved[2]; 877 865 }; 878 866 879 - /* constants for flags field of RAID map */ 867 + /* for flags field of RAID map */ 880 868 #define RAID_MAP_ENCRYPTION_ENABLED 0x1 881 869 882 870 struct raid_map { ··· 919 907 u8 scsi3addr[8]; 920 908 __be64 wwid; 921 909 u8 volume_id[16]; 922 - u8 unique_id[16]; 923 910 u8 is_physical_device : 1; 924 911 u8 is_external_raid_device : 1; 925 912 u8 is_expander_smp_device : 1; ··· 965 954 }; 966 955 967 956 /* VPD inquiry pages */ 968 - #define SCSI_VPD_SUPPORTED_PAGES 0x0 /* standard page */ 969 - #define SCSI_VPD_DEVICE_ID 0x83 /* standard page */ 970 957 #define CISS_VPD_LV_DEVICE_GEOMETRY 0xc1 /* vendor-specific page */ 971 958 #define CISS_VPD_LV_BYPASS_STATUS 0xc2 /* vendor-specific page */ 972 959 #define CISS_VPD_LV_STATUS 0xc3 /* vendor-specific page */ 973 - #define SCSI_VPD_HEADER_SZ 4 974 - #define SCSI_VPD_DEVICE_ID_IDX 8 /* Index of page id in page */ 975 960 976 961 #define VPD_PAGE (1 << 8) 977 962 ··· 1137 1130 struct mutex ofa_mutex; /* serialize ofa */ 1138 1131 bool controller_online; 1139 1132 bool block_requests; 1140 - bool in_shutdown; 1133 + bool block_device_reset; 1141 1134 bool in_ofa; 1135 + bool in_shutdown; 1142 1136 u8 inbound_spanning_supported : 1; 1143 1137 u8 outbound_spanning_supported : 1; 1144 1138 u8 pqi_mode_enabled : 1; 1145 1139 u8 pqi_reset_quiesce_supported : 1; 1146 1140 u8 soft_reset_handshake_supported : 1; 1141 + u8 raid_iu_timeout_supported: 1; 1142 + u8 tmf_iu_timeout_supported: 1; 1147 1143 1148 1144 struct list_head scsi_device_list; 1149 1145 spinlock_t scsi_device_list_lock; ··· 1180 1170 spinlock_t raid_bypass_retry_list_lock; 1181 1171 struct work_struct raid_bypass_retry_work; 1182 1172 1183 - struct pqi_ofa_memory *pqi_ofa_mem_virt_addr; 1184 - dma_addr_t pqi_ofa_mem_dma_handle; 1185 - void **pqi_ofa_chunk_virt_addr; 1173 + struct pqi_ofa_memory 
*pqi_ofa_mem_virt_addr; 1174 + dma_addr_t pqi_ofa_mem_dma_handle; 1175 + void **pqi_ofa_chunk_virt_addr; 1176 + atomic_t sync_cmds_outstanding; 1186 1177 }; 1187 1178 1188 1179 enum pqi_ctrl_mode { ··· 1202 1191 #define CISS_REPORT_PHYS 0xc3 /* Report Physical LUNs */ 1203 1192 #define CISS_GET_RAID_MAP 0xc8 1204 1193 1205 - /* constants for CISS_REPORT_LOG/CISS_REPORT_PHYS commands */ 1206 - #define CISS_REPORT_LOG_EXTENDED 0x1 1207 - #define CISS_REPORT_PHYS_EXTENDED 0x2 1208 - 1209 1194 /* BMIC commands */ 1210 1195 #define BMIC_IDENTIFY_CONTROLLER 0x11 1211 1196 #define BMIC_IDENTIFY_PHYSICAL_DEVICE 0x15 ··· 1215 1208 #define BMIC_SET_DIAG_OPTIONS 0xf4 1216 1209 #define BMIC_SENSE_DIAG_OPTIONS 0xf5 1217 1210 1218 - #define CSMI_CC_SAS_SMP_PASSTHRU 0X17 1211 + #define CSMI_CC_SAS_SMP_PASSTHRU 0x17 1219 1212 1220 1213 #define SA_FLUSH_CACHE 0x1 1221 1214 ··· 1251 1244 u8 ctrl_serial_number[16]; 1252 1245 }; 1253 1246 1254 - #define SA_EXPANDER_SMP_DEVICE 0x05 1255 - #define SA_CONTROLLER_DEVICE 0x07 1256 - /*SCSI Invalid Device Type for SAS devices*/ 1257 - #define PQI_SAS_SCSI_INVALID_DEVTYPE 0xff 1247 + /* constants for device_type field */ 1248 + #define SA_DEVICE_TYPE_SATA 0x1 1249 + #define SA_DEVICE_TYPE_SAS 0x2 1250 + #define SA_DEVICE_TYPE_EXPANDER_SMP 0x5 1251 + #define SA_DEVICE_TYPE_CONTROLLER 0x7 1252 + #define SA_DEVICE_TYPE_NVME 0x9 1258 1253 1259 1254 struct bmic_identify_physical_device { 1260 1255 u8 scsi_bus; /* SCSI Bus number on controller */ ··· 1282 1273 __le32 rpm; /* drive rotational speed in RPM */ 1283 1274 u8 device_type; /* type of drive */ 1284 1275 u8 sata_version; /* only valid when device_type = */ 1285 - /* BMIC_DEVICE_TYPE_SATA */ 1276 + /* SA_DEVICE_TYPE_SATA */ 1286 1277 __le64 big_total_block_count; 1287 1278 __le64 ris_starting_lba; 1288 1279 __le32 ris_size; ··· 1405 1396 1406 1397 #pragma pack() 1407 1398 1408 - static inline struct pqi_ctrl_info *shost_to_hba(struct Scsi_Host *shost) 1409 - { 1410 - void *hostdata = 
shost_priv(shost); 1411 - 1412 - return *((struct pqi_ctrl_info **)hostdata); 1413 - } 1414 - 1415 - static inline bool pqi_ctrl_offline(struct pqi_ctrl_info *ctrl_info) 1416 - { 1417 - return !ctrl_info->controller_online; 1418 - } 1419 - 1420 1399 static inline void pqi_ctrl_busy(struct pqi_ctrl_info *ctrl_info) 1421 1400 { 1422 1401 atomic_inc(&ctrl_info->num_busy_threads); ··· 1415 1418 atomic_dec(&ctrl_info->num_busy_threads); 1416 1419 } 1417 1420 1418 - static inline bool pqi_ctrl_blocked(struct pqi_ctrl_info *ctrl_info) 1421 + static inline struct pqi_ctrl_info *shost_to_hba(struct Scsi_Host *shost) 1419 1422 { 1420 - return ctrl_info->block_requests; 1423 + void *hostdata = shost_priv(shost); 1424 + 1425 + return *((struct pqi_ctrl_info **)hostdata); 1421 1426 } 1422 1427 1423 1428 void pqi_sas_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+249 -190
drivers/scsi/smartpqi/smartpqi_init.c
··· 33 33 #define BUILD_TIMESTAMP 34 34 #endif 35 35 36 - #define DRIVER_VERSION "1.2.8-026" 36 + #define DRIVER_VERSION "1.2.10-025" 37 37 #define DRIVER_MAJOR 1 38 38 #define DRIVER_MINOR 2 39 - #define DRIVER_RELEASE 8 40 - #define DRIVER_REVISION 26 39 + #define DRIVER_RELEASE 10 40 + #define DRIVER_REVISION 25 41 41 42 42 #define DRIVER_NAME "Microsemi PQI Driver (v" \ 43 43 DRIVER_VERSION BUILD_TIMESTAMP ")" ··· 211 211 return scsi3addr[2] != 0; 212 212 } 213 213 214 + static inline bool pqi_ctrl_offline(struct pqi_ctrl_info *ctrl_info) 215 + { 216 + return !ctrl_info->controller_online; 217 + } 218 + 214 219 static inline void pqi_check_ctrl_health(struct pqi_ctrl_info *ctrl_info) 215 220 { 216 221 if (ctrl_info->controller_online) ··· 238 233 enum pqi_ctrl_mode mode) 239 234 { 240 235 sis_write_driver_scratch(ctrl_info, mode); 236 + } 237 + 238 + static inline void pqi_ctrl_block_device_reset(struct pqi_ctrl_info *ctrl_info) 239 + { 240 + ctrl_info->block_device_reset = true; 241 + } 242 + 243 + static inline bool pqi_device_reset_blocked(struct pqi_ctrl_info *ctrl_info) 244 + { 245 + return ctrl_info->block_device_reset; 246 + } 247 + 248 + static inline bool pqi_ctrl_blocked(struct pqi_ctrl_info *ctrl_info) 249 + { 250 + return ctrl_info->block_requests; 241 251 } 242 252 243 253 static inline void pqi_ctrl_block_requests(struct pqi_ctrl_info *ctrl_info) ··· 351 331 return device->in_remove && !ctrl_info->in_shutdown; 352 332 } 353 333 334 + static inline void pqi_ctrl_shutdown_start(struct pqi_ctrl_info *ctrl_info) 335 + { 336 + ctrl_info->in_shutdown = true; 337 + } 338 + 339 + static inline bool pqi_ctrl_in_shutdown(struct pqi_ctrl_info *ctrl_info) 340 + { 341 + return ctrl_info->in_shutdown; 342 + } 343 + 354 344 static inline void pqi_schedule_rescan_worker_with_delay( 355 345 struct pqi_ctrl_info *ctrl_info, unsigned long delay) 356 346 { ··· 390 360 cancel_delayed_work_sync(&ctrl_info->rescan_work); 391 361 } 392 362 363 + static inline void 
pqi_cancel_event_worker(struct pqi_ctrl_info *ctrl_info) 364 + { 365 + cancel_work_sync(&ctrl_info->event_work); 366 + } 367 + 393 368 static inline u32 pqi_read_heartbeat_counter(struct pqi_ctrl_info *ctrl_info) 394 369 { 395 370 if (!ctrl_info->heartbeat_counter) ··· 412 377 } 413 378 414 379 static inline void pqi_clear_soft_reset_status(struct pqi_ctrl_info *ctrl_info, 415 - u8 clear) 380 + u8 clear) 416 381 { 417 382 u8 status; 418 383 ··· 497 462 request->data_direction = SOP_READ_FLAG; 498 463 cdb[0] = cmd; 499 464 if (cmd == CISS_REPORT_PHYS) 500 - cdb[1] = CISS_REPORT_PHYS_EXTENDED; 465 + cdb[1] = CISS_REPORT_PHYS_FLAG_OTHER; 501 466 else 502 - cdb[1] = CISS_REPORT_LOG_EXTENDED; 467 + cdb[1] = CISS_REPORT_LOG_FLAG_UNIQUE_LUN_ID; 503 468 put_unaligned_be32(cdb_length, &cdb[6]); 504 469 break; 505 470 case CISS_GET_RAID_MAP: ··· 602 567 } 603 568 604 569 static int pqi_send_scsi_raid_request(struct pqi_ctrl_info *ctrl_info, u8 cmd, 605 - u8 *scsi3addr, void *buffer, size_t buffer_length, u16 vpd_page, 606 - struct pqi_raid_error_info *error_info, 607 - unsigned long timeout_msecs) 570 + u8 *scsi3addr, void *buffer, size_t buffer_length, u16 vpd_page, 571 + struct pqi_raid_error_info *error_info, unsigned long timeout_msecs) 608 572 { 609 573 int rc; 610 - enum dma_data_direction dir; 611 574 struct pqi_raid_path_request request; 575 + enum dma_data_direction dir; 612 576 613 577 rc = pqi_build_raid_path_request(ctrl_info, &request, 614 578 cmd, scsi3addr, buffer, ··· 615 581 if (rc) 616 582 return rc; 617 583 618 - rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 619 - 0, error_info, timeout_msecs); 584 + rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 0, 585 + error_info, timeout_msecs); 620 586 621 587 pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); 588 + 622 589 return rc; 623 590 } 624 591 625 - /* Helper functions for pqi_send_scsi_raid_request */ 592 + /* helper functions for 
pqi_send_scsi_raid_request */ 626 593 627 594 static inline int pqi_send_ctrl_raid_request(struct pqi_ctrl_info *ctrl_info, 628 - u8 cmd, void *buffer, size_t buffer_length) 595 + u8 cmd, void *buffer, size_t buffer_length) 629 596 { 630 597 return pqi_send_scsi_raid_request(ctrl_info, cmd, RAID_CTLR_LUNID, 631 - buffer, buffer_length, 0, NULL, NO_TIMEOUT); 598 + buffer, buffer_length, 0, NULL, NO_TIMEOUT); 632 599 } 633 600 634 601 static inline int pqi_send_ctrl_raid_with_error(struct pqi_ctrl_info *ctrl_info, 635 - u8 cmd, void *buffer, size_t buffer_length, 636 - struct pqi_raid_error_info *error_info) 602 + u8 cmd, void *buffer, size_t buffer_length, 603 + struct pqi_raid_error_info *error_info) 637 604 { 638 605 return pqi_send_scsi_raid_request(ctrl_info, cmd, RAID_CTLR_LUNID, 639 - buffer, buffer_length, 0, error_info, NO_TIMEOUT); 606 + buffer, buffer_length, 0, error_info, NO_TIMEOUT); 640 607 } 641 608 642 - 643 609 static inline int pqi_identify_controller(struct pqi_ctrl_info *ctrl_info, 644 - struct bmic_identify_controller *buffer) 610 + struct bmic_identify_controller *buffer) 645 611 { 646 612 return pqi_send_ctrl_raid_request(ctrl_info, BMIC_IDENTIFY_CONTROLLER, 647 - buffer, sizeof(*buffer)); 613 + buffer, sizeof(*buffer)); 648 614 } 649 615 650 616 static inline int pqi_sense_subsystem_info(struct pqi_ctrl_info *ctrl_info, 651 - struct bmic_sense_subsystem_info *sense_info) 617 + struct bmic_sense_subsystem_info *sense_info) 652 618 { 653 619 return pqi_send_ctrl_raid_request(ctrl_info, 654 - BMIC_SENSE_SUBSYSTEM_INFORMATION, 655 - sense_info, sizeof(*sense_info)); 620 + BMIC_SENSE_SUBSYSTEM_INFORMATION, sense_info, 621 + sizeof(*sense_info)); 656 622 } 657 623 658 624 static inline int pqi_scsi_inquiry(struct pqi_ctrl_info *ctrl_info, ··· 662 628 buffer, buffer_length, vpd_page, NULL, NO_TIMEOUT); 663 629 } 664 630 665 - static bool pqi_vpd_page_supported(struct pqi_ctrl_info *ctrl_info, 666 - u8 *scsi3addr, u16 vpd_page) 667 - { 668 - int rc; 
669 - int i; 670 - int pages; 671 - unsigned char *buf, bufsize; 672 - 673 - buf = kzalloc(256, GFP_KERNEL); 674 - if (!buf) 675 - return false; 676 - 677 - /* Get the size of the page list first */ 678 - rc = pqi_scsi_inquiry(ctrl_info, scsi3addr, 679 - VPD_PAGE | SCSI_VPD_SUPPORTED_PAGES, 680 - buf, SCSI_VPD_HEADER_SZ); 681 - if (rc != 0) 682 - goto exit_unsupported; 683 - 684 - pages = buf[3]; 685 - if ((pages + SCSI_VPD_HEADER_SZ) <= 255) 686 - bufsize = pages + SCSI_VPD_HEADER_SZ; 687 - else 688 - bufsize = 255; 689 - 690 - /* Get the whole VPD page list */ 691 - rc = pqi_scsi_inquiry(ctrl_info, scsi3addr, 692 - VPD_PAGE | SCSI_VPD_SUPPORTED_PAGES, 693 - buf, bufsize); 694 - if (rc != 0) 695 - goto exit_unsupported; 696 - 697 - pages = buf[3]; 698 - for (i = 1; i <= pages; i++) 699 - if (buf[3 + i] == vpd_page) 700 - goto exit_supported; 701 - 702 - exit_unsupported: 703 - kfree(buf); 704 - return false; 705 - 706 - exit_supported: 707 - kfree(buf); 708 - return true; 709 - } 710 - 711 - static int pqi_get_device_id(struct pqi_ctrl_info *ctrl_info, 712 - u8 *scsi3addr, u8 *device_id, int buflen) 713 - { 714 - int rc; 715 - unsigned char *buf; 716 - 717 - if (!pqi_vpd_page_supported(ctrl_info, scsi3addr, SCSI_VPD_DEVICE_ID)) 718 - return 1; /* function not supported */ 719 - 720 - buf = kzalloc(64, GFP_KERNEL); 721 - if (!buf) 722 - return -ENOMEM; 723 - 724 - rc = pqi_scsi_inquiry(ctrl_info, scsi3addr, 725 - VPD_PAGE | SCSI_VPD_DEVICE_ID, 726 - buf, 64); 727 - if (rc == 0) { 728 - if (buflen > 16) 729 - buflen = 16; 730 - memcpy(device_id, &buf[SCSI_VPD_DEVICE_ID_IDX], buflen); 731 - } 732 - 733 - kfree(buf); 734 - 735 - return rc; 736 - } 737 - 738 631 static int pqi_identify_physical_device(struct pqi_ctrl_info *ctrl_info, 739 632 struct pqi_scsi_dev *device, 740 - struct bmic_identify_physical_device *buffer, 741 - size_t buffer_length) 633 + struct bmic_identify_physical_device *buffer, size_t buffer_length) 742 634 { 743 635 int rc; 744 636 enum 
dma_data_direction dir; ··· 685 725 0, NULL, NO_TIMEOUT); 686 726 687 727 pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); 728 + 688 729 return rc; 689 730 } 690 731 ··· 724 763 buffer, buffer_length, error_info); 725 764 } 726 765 727 - #define PQI_FETCH_PTRAID_DATA (1UL<<31) 766 + #define PQI_FETCH_PTRAID_DATA (1 << 31) 728 767 729 768 static int pqi_set_diag_rescan(struct pqi_ctrl_info *ctrl_info) 730 769 { ··· 736 775 return -ENOMEM; 737 776 738 777 rc = pqi_send_ctrl_raid_request(ctrl_info, BMIC_SENSE_DIAG_OPTIONS, 739 - diag, sizeof(*diag)); 778 + diag, sizeof(*diag)); 740 779 if (rc) 741 780 goto out; 742 781 743 782 diag->options |= cpu_to_le32(PQI_FETCH_PTRAID_DATA); 744 783 745 - rc = pqi_send_ctrl_raid_request(ctrl_info, BMIC_SET_DIAG_OPTIONS, 746 - diag, sizeof(*diag)); 784 + rc = pqi_send_ctrl_raid_request(ctrl_info, BMIC_SET_DIAG_OPTIONS, diag, 785 + sizeof(*diag)); 786 + 747 787 out: 748 788 kfree(diag); 749 789 ··· 755 793 void *buffer, size_t buffer_length) 756 794 { 757 795 return pqi_send_ctrl_raid_request(ctrl_info, BMIC_WRITE_HOST_WELLNESS, 758 - buffer, buffer_length); 796 + buffer, buffer_length); 759 797 } 760 798 761 799 #pragma pack(1) ··· 908 946 void *buffer, size_t buffer_length) 909 947 { 910 948 return pqi_send_ctrl_raid_request(ctrl_info, cmd, buffer, 911 - buffer_length); 949 + buffer_length); 912 950 } 913 951 914 952 static int pqi_report_phys_logical_luns(struct pqi_ctrl_info *ctrl_info, u8 cmd, ··· 1242 1280 if (rc) 1243 1281 goto out; 1244 1282 1245 - #define RAID_BYPASS_STATUS 4 1246 - #define RAID_BYPASS_CONFIGURED 0x1 1247 - #define RAID_BYPASS_ENABLED 0x2 1283 + #define RAID_BYPASS_STATUS 4 1284 + #define RAID_BYPASS_CONFIGURED 0x1 1285 + #define RAID_BYPASS_ENABLED 0x2 1248 1286 1249 1287 bypass_status = buffer[RAID_BYPASS_STATUS]; 1250 1288 device->raid_bypass_configured = ··· 1347 1385 } 1348 1386 } 1349 1387 1350 - if (pqi_get_device_id(ctrl_info, device->scsi3addr, 1351 - device->unique_id, 
sizeof(device->unique_id)) < 0) 1352 - dev_warn(&ctrl_info->pci_dev->dev, 1353 - "Can't get device id for scsi %d:%d:%d:%d\n", 1354 - ctrl_info->scsi_host->host_no, 1355 - device->bus, device->target, 1356 - device->lun); 1357 - 1358 1388 out: 1359 1389 kfree(buffer); 1360 1390 ··· 1367 1413 device->queue_depth = PQI_PHYSICAL_DISK_DEFAULT_MAX_QUEUE_DEPTH; 1368 1414 return; 1369 1415 } 1416 + 1370 1417 device->box_index = id_phys->box_index; 1371 1418 device->phys_box_on_bus = id_phys->phys_box_on_bus; 1372 1419 device->phy_connected_dev_type = id_phys->phy_connected_dev_type[0]; ··· 1783 1828 device = new_device_list[i]; 1784 1829 1785 1830 find_result = pqi_scsi_find_entry(ctrl_info, device, 1786 - &matching_device); 1831 + &matching_device); 1787 1832 1788 1833 switch (find_result) { 1789 1834 case DEVICE_SAME: ··· 2012 2057 rc = -ENOMEM; 2013 2058 goto out; 2014 2059 } 2015 - if (pqi_hide_vsep) { 2016 - int i; 2017 2060 2061 + if (pqi_hide_vsep) { 2018 2062 for (i = num_physicals - 1; i >= 0; i--) { 2019 2063 phys_lun_ext_entry = 2020 2064 &physdev_list->lun_entries[i]; ··· 2086 2132 device->is_physical_device = is_physical_device; 2087 2133 if (is_physical_device) { 2088 2134 if (phys_lun_ext_entry->device_type == 2089 - SA_EXPANDER_SMP_DEVICE) 2135 + SA_DEVICE_TYPE_EXPANDER_SMP) 2090 2136 device->is_expander_smp_device = true; 2091 2137 } else { 2092 2138 device->is_external_raid_device = ··· 2123 2169 if (device->is_physical_device) { 2124 2170 device->wwid = phys_lun_ext_entry->wwid; 2125 2171 if ((phys_lun_ext_entry->device_flags & 2126 - REPORT_PHYS_LUN_DEV_FLAG_AIO_ENABLED) && 2172 + CISS_REPORT_PHYS_DEV_FLAG_AIO_ENABLED) && 2127 2173 phys_lun_ext_entry->aio_handle) { 2128 2174 device->aio_enabled = true; 2129 - device->aio_handle = 2130 - phys_lun_ext_entry->aio_handle; 2175 + device->aio_handle = 2176 + phys_lun_ext_entry->aio_handle; 2131 2177 } 2132 - 2133 - pqi_get_physical_disk_info(ctrl_info, 2134 - device, id_phys); 2135 - 2178 + 
pqi_get_physical_disk_info(ctrl_info, device, id_phys); 2136 2179 } else { 2137 2180 memcpy(device->volume_id, log_lun_ext_entry->volume_id, 2138 2181 sizeof(device->volume_id)); ··· 3109 3158 } 3110 3159 3111 3160 static void pqi_process_soft_reset(struct pqi_ctrl_info *ctrl_info, 3112 - enum pqi_soft_reset_status reset_status) 3161 + enum pqi_soft_reset_status reset_status) 3113 3162 { 3114 3163 int rc; 3115 3164 ··· 3153 3202 3154 3203 if (event_id == PQI_EVENT_OFA_QUIESCE) { 3155 3204 dev_info(&ctrl_info->pci_dev->dev, 3156 - "Received Online Firmware Activation quiesce event for controller %u\n", 3157 - ctrl_info->ctrl_id); 3205 + "Received Online Firmware Activation quiesce event for controller %u\n", 3206 + ctrl_info->ctrl_id); 3158 3207 pqi_ofa_ctrl_quiesce(ctrl_info); 3159 3208 pqi_acknowledge_event(ctrl_info, event); 3160 3209 if (ctrl_info->soft_reset_handshake_supported) { ··· 3174 3223 pqi_ofa_free_host_buffer(ctrl_info); 3175 3224 pqi_acknowledge_event(ctrl_info, event); 3176 3225 dev_info(&ctrl_info->pci_dev->dev, 3177 - "Online Firmware Activation(%u) cancel reason : %u\n", 3178 - ctrl_info->ctrl_id, event->ofa_cancel_reason); 3226 + "Online Firmware Activation(%u) cancel reason : %u\n", 3227 + ctrl_info->ctrl_id, event->ofa_cancel_reason); 3179 3228 } 3180 3229 3181 3230 mutex_unlock(&ctrl_info->ofa_mutex); ··· 3354 3403 #define PQI_LEGACY_INTX_MASK 0x1 3355 3404 3356 3405 static inline void pqi_configure_legacy_intx(struct pqi_ctrl_info *ctrl_info, 3357 - bool enable_intx) 3406 + bool enable_intx) 3358 3407 { 3359 3408 u32 intx_mask; 3360 3409 struct pqi_device_registers __iomem *pqi_registers; ··· 3792 3841 &pqi_registers->admin_oq_pi_addr); 3793 3842 3794 3843 reg = PQI_ADMIN_IQ_NUM_ELEMENTS | 3795 - (PQI_ADMIN_OQ_NUM_ELEMENTS) << 8 | 3844 + (PQI_ADMIN_OQ_NUM_ELEMENTS << 8) | 3796 3845 (admin_queues->int_msg_num << 16); 3797 3846 writel(reg, &pqi_registers->admin_iq_num_elements); 3798 3847 writel(PQI_CREATE_ADMIN_QUEUE_PAIR, ··· 3999 4048 
complete(waiting); 4000 4049 } 4001 4050 4002 - static int pqi_process_raid_io_error_synchronous(struct pqi_raid_error_info 4003 - *error_info) 4051 + static int pqi_process_raid_io_error_synchronous( 4052 + struct pqi_raid_error_info *error_info) 4004 4053 { 4005 4054 int rc = -EIO; 4006 4055 ··· 4073 4122 goto out; 4074 4123 } 4075 4124 4125 + atomic_inc(&ctrl_info->sync_cmds_outstanding); 4126 + 4076 4127 io_request = pqi_alloc_io_request(ctrl_info); 4077 4128 4078 4129 put_unaligned_le16(io_request->index, ··· 4121 4168 4122 4169 pqi_free_io_request(io_request); 4123 4170 4171 + atomic_dec(&ctrl_info->sync_cmds_outstanding); 4124 4172 out: 4125 4173 up(&ctrl_info->sync_request_sem); 4126 4174 ··· 4619 4665 4620 4666 static inline int pqi_alloc_error_buffer(struct pqi_ctrl_info *ctrl_info) 4621 4667 { 4622 - ctrl_info->error_buffer = dma_alloc_coherent(&ctrl_info->pci_dev->dev, 4623 - ctrl_info->error_buffer_length, 4624 - &ctrl_info->error_buffer_dma_handle, 4625 - GFP_KERNEL); 4626 4668 4669 + ctrl_info->error_buffer = dma_alloc_coherent(&ctrl_info->pci_dev->dev, 4670 + ctrl_info->error_buffer_length, 4671 + &ctrl_info->error_buffer_dma_handle, 4672 + GFP_KERNEL); 4627 4673 if (!ctrl_info->error_buffer) 4628 4674 return -ENOMEM; 4629 4675 ··· 5356 5402 5357 5403 pqi_ctrl_busy(ctrl_info); 5358 5404 if (pqi_ctrl_blocked(ctrl_info) || pqi_device_in_reset(device) || 5359 - pqi_ctrl_in_ofa(ctrl_info)) { 5405 + pqi_ctrl_in_ofa(ctrl_info) || pqi_ctrl_in_shutdown(ctrl_info)) { 5360 5406 rc = SCSI_MLQUEUE_HOST_BUSY; 5361 5407 goto out; 5362 5408 } ··· 5373 5419 if (pqi_is_logical_device(device)) { 5374 5420 raid_bypassed = false; 5375 5421 if (device->raid_bypass_enabled && 5376 - !blk_rq_is_passthrough(scmd->request)) { 5422 + !blk_rq_is_passthrough(scmd->request)) { 5377 5423 rc = pqi_raid_bypass_submit_scsi_cmd(ctrl_info, device, 5378 5424 scmd, queue_group); 5379 5425 if (rc == 0 || rc == SCSI_MLQUEUE_HOST_BUSY) ··· 5604 5650 return 0; 5605 5651 } 5606 5652 5653 + 
static int pqi_ctrl_wait_for_pending_sync_cmds(struct pqi_ctrl_info *ctrl_info) 5654 + { 5655 + while (atomic_read(&ctrl_info->sync_cmds_outstanding)) { 5656 + pqi_check_ctrl_health(ctrl_info); 5657 + if (pqi_ctrl_offline(ctrl_info)) 5658 + return -ENXIO; 5659 + usleep_range(1000, 2000); 5660 + } 5661 + 5662 + return 0; 5663 + } 5664 + 5607 5665 static void pqi_lun_reset_complete(struct pqi_io_request *io_request, 5608 5666 void *context) 5609 5667 { ··· 5624 5658 complete(waiting); 5625 5659 } 5626 5660 5627 - #define PQI_LUN_RESET_TIMEOUT_SECS 10 5661 + #define PQI_LUN_RESET_TIMEOUT_SECS 30 5662 + #define PQI_LUN_RESET_POLL_COMPLETION_SECS 10 5628 5663 5629 5664 static int pqi_wait_for_lun_reset_completion(struct pqi_ctrl_info *ctrl_info, 5630 5665 struct pqi_scsi_dev *device, struct completion *wait) ··· 5634 5667 5635 5668 while (1) { 5636 5669 if (wait_for_completion_io_timeout(wait, 5637 - PQI_LUN_RESET_TIMEOUT_SECS * PQI_HZ)) { 5670 + PQI_LUN_RESET_POLL_COMPLETION_SECS * PQI_HZ)) { 5638 5671 rc = 0; 5639 5672 break; 5640 5673 } ··· 5671 5704 memcpy(request->lun_number, device->scsi3addr, 5672 5705 sizeof(request->lun_number)); 5673 5706 request->task_management_function = SOP_TASK_MANAGEMENT_LUN_RESET; 5707 + if (ctrl_info->tmf_iu_timeout_supported) 5708 + put_unaligned_le16(PQI_LUN_RESET_TIMEOUT_SECS, 5709 + &request->timeout); 5674 5710 5675 5711 pqi_start_io(ctrl_info, 5676 5712 &ctrl_info->queue_groups[PQI_DEFAULT_QUEUE_GROUP], RAID_PATH, ··· 5703 5733 5704 5734 for (retries = 0;;) { 5705 5735 rc = pqi_lun_reset(ctrl_info, device); 5706 - if (rc != -EAGAIN || ++retries > PQI_LUN_RESET_RETRIES) 5736 + if (rc == 0 || ++retries > PQI_LUN_RESET_RETRIES) 5707 5737 break; 5708 5738 msleep(PQI_LUN_RESET_RETRY_INTERVAL_MSECS); 5709 5739 } ··· 5757 5787 shost->host_no, device->bus, device->target, device->lun); 5758 5788 5759 5789 pqi_check_ctrl_health(ctrl_info); 5760 - if (pqi_ctrl_offline(ctrl_info)) { 5761 - dev_err(&ctrl_info->pci_dev->dev, 5762 - 
"controller %u offlined - cannot send device reset\n", 5763 - ctrl_info->ctrl_id); 5790 + if (pqi_ctrl_offline(ctrl_info) || 5791 + pqi_device_reset_blocked(ctrl_info)) { 5764 5792 rc = FAILED; 5765 5793 goto out; 5766 5794 } 5767 5795 5768 5796 pqi_wait_until_ofa_finished(ctrl_info); 5769 5797 5798 + atomic_inc(&ctrl_info->sync_cmds_outstanding); 5770 5799 rc = pqi_device_reset(ctrl_info, device); 5800 + atomic_dec(&ctrl_info->sync_cmds_outstanding); 5771 5801 5772 5802 out: 5773 5803 dev_err(&ctrl_info->pci_dev->dev, ··· 6036 6066 6037 6067 put_unaligned_le16(iu_length, &request.header.iu_length); 6038 6068 6069 + if (ctrl_info->raid_iu_timeout_supported) 6070 + put_unaligned_le32(iocommand.Request.Timeout, &request.timeout); 6071 + 6039 6072 rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 6040 6073 PQI_SYNC_FLAGS_INTERRUPTABLE, &pqi_error_info, NO_TIMEOUT); 6041 6074 ··· 6092 6119 6093 6120 ctrl_info = shost_to_hba(sdev->host); 6094 6121 6095 - if (pqi_ctrl_in_ofa(ctrl_info)) 6122 + if (pqi_ctrl_in_ofa(ctrl_info) || pqi_ctrl_in_shutdown(ctrl_info)) 6096 6123 return -EBUSY; 6097 6124 6098 6125 switch (cmd) { ··· 6133 6160 static ssize_t pqi_driver_version_show(struct device *dev, 6134 6161 struct device_attribute *attr, char *buffer) 6135 6162 { 6136 - struct Scsi_Host *shost; 6137 - struct pqi_ctrl_info *ctrl_info; 6138 - 6139 - shost = class_to_shost(dev); 6140 - ctrl_info = shost_to_hba(shost); 6141 - 6142 - return snprintf(buffer, PAGE_SIZE, 6143 - "%s\n", DRIVER_VERSION BUILD_TIMESTAMP); 6163 + return snprintf(buffer, PAGE_SIZE, "%s\n", 6164 + DRIVER_VERSION BUILD_TIMESTAMP); 6144 6165 } 6145 6166 6146 6167 static ssize_t pqi_serial_number_show(struct device *dev, ··· 6250 6283 struct scsi_device *sdev; 6251 6284 struct pqi_scsi_dev *device; 6252 6285 unsigned long flags; 6253 - unsigned char uid[16]; 6286 + u8 unique_id[16]; 6254 6287 6255 6288 sdev = to_scsi_device(dev); 6256 6289 ctrl_info = shost_to_hba(sdev->host); ··· 6263 6296 
flags); 6264 6297 return -ENODEV; 6265 6298 } 6266 - memcpy(uid, device->unique_id, sizeof(uid)); 6299 + 6300 + if (device->is_physical_device) { 6301 + memset(unique_id, 0, 8); 6302 + memcpy(unique_id + 8, &device->wwid, sizeof(device->wwid)); 6303 + } else { 6304 + memcpy(unique_id, device->volume_id, sizeof(device->volume_id)); 6305 + } 6267 6306 6268 6307 spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); 6269 6308 6270 6309 return snprintf(buffer, PAGE_SIZE, 6271 6310 "%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X\n", 6272 - uid[0], uid[1], uid[2], uid[3], 6273 - uid[4], uid[5], uid[6], uid[7], 6274 - uid[8], uid[9], uid[10], uid[11], 6275 - uid[12], uid[13], uid[14], uid[15]); 6311 + unique_id[0], unique_id[1], unique_id[2], unique_id[3], 6312 + unique_id[4], unique_id[5], unique_id[6], unique_id[7], 6313 + unique_id[8], unique_id[9], unique_id[10], unique_id[11], 6314 + unique_id[12], unique_id[13], unique_id[14], unique_id[15]); 6276 6315 } 6277 6316 6278 6317 static ssize_t pqi_lunid_show(struct device *dev, ··· 6301 6328 flags); 6302 6329 return -ENODEV; 6303 6330 } 6331 + 6304 6332 memcpy(lunid, device->scsi3addr, sizeof(lunid)); 6305 6333 6306 6334 spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); ··· 6309 6335 return snprintf(buffer, PAGE_SIZE, "0x%8phN\n", lunid); 6310 6336 } 6311 6337 6312 - #define MAX_PATHS 8 6338 + #define MAX_PATHS 8 6339 + 6313 6340 static ssize_t pqi_path_info_show(struct device *dev, 6314 6341 struct device_attribute *attr, char *buf) 6315 6342 { ··· 6322 6347 int output_len = 0; 6323 6348 u8 box; 6324 6349 u8 bay; 6325 - u8 path_map_index = 0; 6350 + u8 path_map_index; 6326 6351 char *active; 6327 - unsigned char phys_connector[2]; 6352 + u8 phys_connector[2]; 6328 6353 6329 6354 sdev = to_scsi_device(dev); 6330 6355 ctrl_info = shost_to_hba(sdev->host); ··· 6340 6365 6341 6366 bay = device->bay; 6342 6367 for (i = 0; i < MAX_PATHS; i++) { 6343 - path_map_index = 1<<i; 6368 + 
path_map_index = 1 << i; 6344 6369 if (i == device->active_path_index) 6345 6370 active = "Active"; 6346 6371 else if (device->path_map & path_map_index) ··· 6391 6416 } 6392 6417 6393 6418 spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); 6419 + 6394 6420 return output_len; 6395 6421 } 6396 - 6397 6422 6398 6423 static ssize_t pqi_sas_address_show(struct device *dev, 6399 6424 struct device_attribute *attr, char *buffer) ··· 6415 6440 flags); 6416 6441 return -ENODEV; 6417 6442 } 6443 + 6418 6444 sas_address = device->sas_address; 6419 6445 6420 6446 spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); ··· 6820 6844 firmware_feature->feature_name); 6821 6845 } 6822 6846 6847 + static void pqi_ctrl_update_feature_flags(struct pqi_ctrl_info *ctrl_info, 6848 + struct pqi_firmware_feature *firmware_feature) 6849 + { 6850 + switch (firmware_feature->feature_bit) { 6851 + case PQI_FIRMWARE_FEATURE_SOFT_RESET_HANDSHAKE: 6852 + ctrl_info->soft_reset_handshake_supported = 6853 + firmware_feature->enabled; 6854 + break; 6855 + case PQI_FIRMWARE_FEATURE_RAID_IU_TIMEOUT: 6856 + ctrl_info->raid_iu_timeout_supported = 6857 + firmware_feature->enabled; 6858 + break; 6859 + case PQI_FIRMWARE_FEATURE_TMF_IU_TIMEOUT: 6860 + ctrl_info->tmf_iu_timeout_supported = 6861 + firmware_feature->enabled; 6862 + break; 6863 + } 6864 + 6865 + pqi_firmware_feature_status(ctrl_info, firmware_feature); 6866 + } 6867 + 6823 6868 static inline void pqi_firmware_feature_update(struct pqi_ctrl_info *ctrl_info, 6824 6869 struct pqi_firmware_feature *firmware_feature) 6825 6870 { ··· 6864 6867 { 6865 6868 .feature_name = "New Soft Reset Handshake", 6866 6869 .feature_bit = PQI_FIRMWARE_FEATURE_SOFT_RESET_HANDSHAKE, 6867 - .feature_status = pqi_firmware_feature_status, 6870 + .feature_status = pqi_ctrl_update_feature_flags, 6871 + }, 6872 + { 6873 + .feature_name = "RAID IU Timeout", 6874 + .feature_bit = PQI_FIRMWARE_FEATURE_RAID_IU_TIMEOUT, 6875 + .feature_status = 
pqi_ctrl_update_feature_flags, 6876 + }, 6877 + { 6878 + .feature_name = "TMF IU Timeout", 6879 + .feature_bit = PQI_FIRMWARE_FEATURE_TMF_IU_TIMEOUT, 6880 + .feature_status = pqi_ctrl_update_feature_flags, 6868 6881 }, 6869 6882 }; 6870 6883 ··· 6928 6921 return; 6929 6922 } 6930 6923 6931 - ctrl_info->soft_reset_handshake_supported = false; 6932 6924 for (i = 0; i < ARRAY_SIZE(pqi_firmware_features); i++) { 6933 6925 if (!pqi_firmware_features[i].supported) 6934 6926 continue; ··· 6935 6929 firmware_features_iomem_addr, 6936 6930 pqi_firmware_features[i].feature_bit)) { 6937 6931 pqi_firmware_features[i].enabled = true; 6938 - if (pqi_firmware_features[i].feature_bit == 6939 - PQI_FIRMWARE_FEATURE_SOFT_RESET_HANDSHAKE) 6940 - ctrl_info->soft_reset_handshake_supported = 6941 - true; 6942 6932 } 6943 6933 pqi_firmware_feature_update(ctrl_info, 6944 6934 &pqi_firmware_features[i]); ··· 7076 7074 return pqi_revert_to_sis_mode(ctrl_info); 7077 7075 } 7078 7076 7077 + #define PQI_POST_RESET_DELAY_B4_MSGU_READY 5000 7078 + 7079 7079 static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info) 7080 7080 { 7081 7081 int rc; 7082 7082 7083 - rc = pqi_force_sis_mode(ctrl_info); 7084 - if (rc) 7085 - return rc; 7083 + if (reset_devices) { 7084 + sis_soft_reset(ctrl_info); 7085 + msleep(PQI_POST_RESET_DELAY_B4_MSGU_READY); 7086 + } else { 7087 + rc = pqi_force_sis_mode(ctrl_info); 7088 + if (rc) 7089 + return rc; 7090 + } 7086 7091 7087 7092 /* 7088 7093 * Wait until the controller is ready to start accepting SIS ··· 7395 7386 rc = pqi_get_ctrl_product_details(ctrl_info); 7396 7387 if (rc) { 7397 7388 dev_err(&ctrl_info->pci_dev->dev, 7398 - "error obtaining product detail\n"); 7389 + "error obtaining product details\n"); 7399 7390 return rc; 7400 7391 } 7401 7392 ··· 7523 7514 7524 7515 INIT_WORK(&ctrl_info->event_work, pqi_event_worker); 7525 7516 atomic_set(&ctrl_info->num_interrupts, 0); 7517 + atomic_set(&ctrl_info->sync_cmds_outstanding, 0); 7526 7518 7527 7519 
INIT_DELAYED_WORK(&ctrl_info->rescan_work, pqi_rescan_worker); 7528 7520 INIT_DELAYED_WORK(&ctrl_info->update_time_work, pqi_update_time_worker); ··· 7731 7721 dev_err(dev, "Failed to allocate host buffer of size = %u", 7732 7722 bytes_requested); 7733 7723 } 7724 + 7725 + return; 7734 7726 } 7735 7727 7736 7728 static void pqi_ofa_free_host_buffer(struct pqi_ctrl_info *ctrl_info) ··· 7798 7786 return pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 7799 7787 0, NULL, NO_TIMEOUT); 7800 7788 } 7801 - 7802 - #define PQI_POST_RESET_DELAY_B4_MSGU_READY 5000 7803 7789 7804 7790 static int pqi_ofa_ctrl_restart(struct pqi_ctrl_info *ctrl_info) 7805 7791 { ··· 7966 7956 pqi_remove_ctrl(ctrl_info); 7967 7957 } 7968 7958 7959 + static void pqi_crash_if_pending_command(struct pqi_ctrl_info *ctrl_info) 7960 + { 7961 + unsigned int i; 7962 + struct pqi_io_request *io_request; 7963 + struct scsi_cmnd *scmd; 7964 + 7965 + for (i = 0; i < ctrl_info->max_io_slots; i++) { 7966 + io_request = &ctrl_info->io_request_pool[i]; 7967 + if (atomic_read(&io_request->refcount) == 0) 7968 + continue; 7969 + scmd = io_request->scmd; 7970 + WARN_ON(scmd != NULL); /* IO command from SML */ 7971 + WARN_ON(scmd == NULL); /* Non-IO cmd or driver initiated*/ 7972 + } 7973 + } 7974 + 7969 7975 static void pqi_shutdown(struct pci_dev *pci_dev) 7970 7976 { 7971 7977 int rc; 7972 7978 struct pqi_ctrl_info *ctrl_info; 7973 7979 7974 7980 ctrl_info = pci_get_drvdata(pci_dev); 7975 - if (!ctrl_info) 7976 - goto error; 7981 + if (!ctrl_info) { 7982 + dev_err(&pci_dev->dev, 7983 + "cache could not be flushed\n"); 7984 + return; 7985 + } 7986 + 7987 + pqi_disable_events(ctrl_info); 7988 + pqi_wait_until_ofa_finished(ctrl_info); 7989 + pqi_cancel_update_time_worker(ctrl_info); 7990 + pqi_cancel_rescan_worker(ctrl_info); 7991 + pqi_cancel_event_worker(ctrl_info); 7992 + 7993 + pqi_ctrl_shutdown_start(ctrl_info); 7994 + pqi_ctrl_wait_until_quiesced(ctrl_info); 7995 + 7996 + rc = 
pqi_ctrl_wait_for_pending_io(ctrl_info, NO_TIMEOUT); 7997 + if (rc) { 7998 + dev_err(&pci_dev->dev, 7999 + "wait for pending I/O failed\n"); 8000 + return; 8001 + } 8002 + 8003 + pqi_ctrl_block_device_reset(ctrl_info); 8004 + pqi_wait_until_lun_reset_finished(ctrl_info); 7977 8005 7978 8006 /* 7979 8007 * Write all data in the controller's battery-backed cache to 7980 8008 * storage. 7981 8009 */ 7982 8010 rc = pqi_flush_cache(ctrl_info, SHUTDOWN); 7983 - pqi_free_interrupts(ctrl_info); 7984 - pqi_reset(ctrl_info); 7985 - if (rc == 0) 7986 - return; 8011 + if (rc) 8012 + dev_err(&pci_dev->dev, 8013 + "unable to flush controller cache\n"); 7987 8014 7988 - error: 7989 - dev_warn(&pci_dev->dev, 7990 - "unable to flush controller cache\n"); 8015 + pqi_ctrl_block_requests(ctrl_info); 8016 + 8017 + rc = pqi_ctrl_wait_for_pending_sync_cmds(ctrl_info); 8018 + if (rc) { 8019 + dev_err(&pci_dev->dev, 8020 + "wait for pending sync cmds failed\n"); 8021 + return; 8022 + } 8023 + 8024 + pqi_crash_if_pending_command(ctrl_info); 8025 + pqi_reset(ctrl_info); 7991 8026 } 7992 8027 7993 8028 static void pqi_process_lockup_action_param(void) ··· 8741 8686 BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, 8742 8687 cdb) != 32); 8743 8688 BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, 8689 + timeout) != 60); 8690 + BUILD_BUG_ON(offsetof(struct pqi_raid_path_request, 8744 8691 sg_descriptors) != 64); 8745 8692 BUILD_BUG_ON(sizeof(struct pqi_raid_path_request) != 8746 8693 PQI_OPERATIONAL_IQ_ELEMENT_LENGTH); ··· 8896 8839 request_id) != 8); 8897 8840 BUILD_BUG_ON(offsetof(struct pqi_task_management_request, 8898 8841 nexus_id) != 10); 8842 + BUILD_BUG_ON(offsetof(struct pqi_task_management_request, 8843 + timeout) != 14); 8899 8844 BUILD_BUG_ON(offsetof(struct pqi_task_management_request, 8900 8845 lun_number) != 16); 8901 8846 BUILD_BUG_ON(offsetof(struct pqi_task_management_request,
+4 -18
drivers/scsi/smartpqi/smartpqi_sas_transport.c
··· 45 45 struct sas_phy *phy = pqi_sas_phy->phy; 46 46 47 47 sas_port_delete_phy(pqi_sas_phy->parent_port->port, phy); 48 - sas_phy_free(phy); 49 48 if (pqi_sas_phy->added_to_port) 50 49 list_del(&pqi_sas_phy->phy_list_entry); 50 + sas_phy_delete(phy); 51 51 kfree(pqi_sas_phy); 52 52 } 53 53 ··· 312 312 static int pqi_sas_get_enclosure_identifier(struct sas_rphy *rphy, 313 313 u64 *identifier) 314 314 { 315 - 316 315 int rc; 317 316 unsigned long flags; 318 317 struct Scsi_Host *shost; ··· 360 361 } 361 362 } 362 363 363 - if (found_device->phy_connected_dev_type != SA_CONTROLLER_DEVICE) { 364 + if (found_device->phy_connected_dev_type != SA_DEVICE_TYPE_CONTROLLER) { 364 365 rc = -EINVAL; 365 366 goto out; 366 367 } ··· 381 382 spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); 382 383 383 384 return rc; 384 - 385 385 } 386 386 387 387 static int pqi_sas_get_bay_identifier(struct sas_rphy *rphy) 388 388 { 389 - 390 389 int rc; 391 390 unsigned long flags; 392 391 struct pqi_ctrl_info *ctrl_info; ··· 479 482 req_size -= SMP_CRC_FIELD_LENGTH; 480 483 481 484 put_unaligned_le32(req_size, &parameters->request_length); 482 - 483 485 put_unaligned_le32(resp_size, &parameters->response_length); 484 486 485 487 sg_copy_to_buffer(job->request_payload.sg_list, ··· 508 512 struct sas_rphy *rphy) 509 513 { 510 514 int rc; 511 - struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost); 515 + struct pqi_ctrl_info *ctrl_info; 512 516 struct bmic_csmi_smp_passthru_buffer *smp_buf; 513 517 struct pqi_raid_error_info error_info; 514 518 unsigned int reslen = 0; 515 519 516 - pqi_ctrl_busy(ctrl_info); 520 + ctrl_info = shost_to_hba(shost); 517 521 518 522 if (job->reply_payload.payload_len == 0) { 519 523 rc = -ENOMEM; ··· 532 536 533 537 if (job->request_payload.sg_cnt > 1 || job->reply_payload.sg_cnt > 1) { 534 538 rc = -EINVAL; 535 - goto out; 536 - } 537 - 538 - if (pqi_ctrl_offline(ctrl_info)) { 539 - rc = -ENXIO; 540 - goto out; 541 - } 542 - 543 - if 
(pqi_ctrl_blocked(ctrl_info)) { 544 - rc = -EBUSY; 545 539 goto out; 546 540 } 547 541
+2 -2
drivers/scsi/sun3_scsi.c
··· 501 501 .eh_host_reset_handler = sun3scsi_host_reset, 502 502 .can_queue = 16, 503 503 .this_id = 7, 504 - .sg_tablesize = SG_NONE, 504 + .sg_tablesize = 1, 505 505 .cmd_per_lun = 2, 506 506 .dma_boundary = PAGE_SIZE - 1, 507 507 .cmd_size = NCR5380_CMD_SIZE, ··· 523 523 sun3_scsi_template.can_queue = setup_can_queue; 524 524 if (setup_cmd_per_lun > 0) 525 525 sun3_scsi_template.cmd_per_lun = setup_cmd_per_lun; 526 - if (setup_sg_tablesize >= 0) 526 + if (setup_sg_tablesize > 0) 527 527 sun3_scsi_template.sg_tablesize = setup_sg_tablesize; 528 528 if (setup_hostid >= 0) 529 529 sun3_scsi_template.this_id = setup_hostid & 7;
+10
drivers/scsi/ufs/Kconfig
··· 132 132 Select this if you have UFS controller on Hisilicon chipset. 133 133 If unsure, say N. 134 134 135 + config SCSI_UFS_TI_J721E 136 + tristate "TI glue layer for Cadence UFS Controller" 137 + depends on OF && HAS_IOMEM && (ARCH_K3 || COMPILE_TEST) 138 + help 139 + This selects the driver for the TI glue layer for the Cadence UFS Host 140 + Controller IP. 141 + 142 + Select this if you have a TI platform with a UFS controller. 143 + If unsure, say N. 144 + 135 145 config SCSI_UFS_BSG 136 146 bool "Universal Flash Storage BSG device node" 137 147 depends on SCSI_UFSHCD
+1
drivers/scsi/ufs/Makefile
··· 11 11 obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o 12 12 obj-$(CONFIG_SCSI_UFS_HISI) += ufs-hisi.o 13 13 obj-$(CONFIG_SCSI_UFS_MEDIATEK) += ufs-mediatek.o 14 + obj-$(CONFIG_SCSI_UFS_TI_J721E) += ti-j721e-ufs.o
+90
drivers/scsi/ufs/ti-j721e-ufs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // 3 + // Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com/ 4 + // 5 + 6 + #include <linux/clk.h> 7 + #include <linux/io.h> 8 + #include <linux/kernel.h> 9 + #include <linux/module.h> 10 + #include <linux/of_platform.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/pm_runtime.h> 13 + 14 + #define TI_UFS_SS_CTRL 0x4 15 + #define TI_UFS_SS_RST_N_PCS BIT(0) 16 + #define TI_UFS_SS_CLK_26MHZ BIT(4) 17 + 18 + static int ti_j721e_ufs_probe(struct platform_device *pdev) 19 + { 20 + struct device *dev = &pdev->dev; 21 + unsigned long clk_rate; 22 + void __iomem *regbase; 23 + struct clk *clk; 24 + u32 reg = 0; 25 + int ret; 26 + 27 + regbase = devm_platform_ioremap_resource(pdev, 0); 28 + if (IS_ERR(regbase)) 29 + return PTR_ERR(regbase); 30 + 31 + pm_runtime_enable(dev); 32 + ret = pm_runtime_get_sync(dev); 33 + if (ret < 0) { 34 + pm_runtime_put_noidle(dev); 35 + return ret; 36 + } 37 + 38 + /* Select MPHY refclk frequency */ 39 + clk = devm_clk_get(dev, NULL); 40 + if (IS_ERR(clk)) { 41 + dev_err(dev, "Cannot claim MPHY clock.\n"); 42 + return PTR_ERR(clk); 43 + } 44 + clk_rate = clk_get_rate(clk); 45 + if (clk_rate == 26000000) 46 + reg |= TI_UFS_SS_CLK_26MHZ; 47 + devm_clk_put(dev, clk); 48 + 49 + /* Take UFS slave device out of reset */ 50 + reg |= TI_UFS_SS_RST_N_PCS; 51 + writel(reg, regbase + TI_UFS_SS_CTRL); 52 + 53 + ret = of_platform_populate(pdev->dev.of_node, NULL, NULL, 54 + dev); 55 + if (ret) { 56 + dev_err(dev, "failed to populate child nodes %d\n", ret); 57 + pm_runtime_put_sync(dev); 58 + } 59 + 60 + return ret; 61 + } 62 + 63 + static int ti_j721e_ufs_remove(struct platform_device *pdev) 64 + { 65 + of_platform_depopulate(&pdev->dev); 66 + pm_runtime_put_sync(&pdev->dev); 67 + 68 + return 0; 69 + } 70 + 71 + static const struct of_device_id ti_j721e_ufs_of_match[] = { 72 + { 73 + .compatible = "ti,j721e-ufs", 74 + }, 75 + { }, 76 + }; 77 + 78 + static struct 
platform_driver ti_j721e_ufs_driver = { 79 + .probe = ti_j721e_ufs_probe, 80 + .remove = ti_j721e_ufs_remove, 81 + .driver = { 82 + .name = "ti-j721e-ufs", 83 + .of_match_table = ti_j721e_ufs_of_match, 84 + }, 85 + }; 86 + module_platform_driver(ti_j721e_ufs_driver); 87 + 88 + MODULE_AUTHOR("Vignesh Raghavendra <vigneshr@ti.com>"); 89 + MODULE_DESCRIPTION("TI UFS host controller glue driver"); 90 + MODULE_LICENSE("GPL v2");
+1 -4
drivers/scsi/ufs/ufs-hisi.c
··· 452 452 453 453 /* get resource of ufs sys ctrl */ 454 454 host->ufs_sys_ctrl = devm_platform_ioremap_resource(pdev, 1); 455 - if (IS_ERR(host->ufs_sys_ctrl)) 456 - return PTR_ERR(host->ufs_sys_ctrl); 457 - 458 - return 0; 455 + return PTR_ERR_OR_ZERO(host->ufs_sys_ctrl); 459 456 } 460 457 461 458 static void ufs_hisi_set_pm_lvl(struct ufs_hba *hba)
+3
drivers/scsi/ufs/ufs-mediatek.c
··· 147 147 if (err) 148 148 goto out_variant_clear; 149 149 150 + /* Enable runtime autosuspend */ 151 + hba->caps |= UFSHCD_CAP_RPM_AUTOSUSPEND; 152 + 150 153 /* 151 154 * ufshcd_vops_init() is invoked after 152 155 * ufshcd_setup_clock(true) in ufshcd_hba_init() thus
+53
drivers/scsi/ufs/ufs-qcom.c
··· 246 246 mb(); 247 247 } 248 248 249 + /** 250 + * ufs_qcom_host_reset - reset host controller and PHY 251 + */ 252 + static int ufs_qcom_host_reset(struct ufs_hba *hba) 253 + { 254 + int ret = 0; 255 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 256 + 257 + if (!host->core_reset) { 258 + dev_warn(hba->dev, "%s: reset control not set\n", __func__); 259 + goto out; 260 + } 261 + 262 + ret = reset_control_assert(host->core_reset); 263 + if (ret) { 264 + dev_err(hba->dev, "%s: core_reset assert failed, err = %d\n", 265 + __func__, ret); 266 + goto out; 267 + } 268 + 269 + /* 270 + * The hardware requirement for delay between assert/deassert 271 + * is at least 3-4 sleep clock (32.7KHz) cycles, which comes to 272 + * ~125us (4/32768). To be on the safe side add 200us delay. 273 + */ 274 + usleep_range(200, 210); 275 + 276 + ret = reset_control_deassert(host->core_reset); 277 + if (ret) 278 + dev_err(hba->dev, "%s: core_reset deassert failed, err = %d\n", 279 + __func__, ret); 280 + 281 + usleep_range(1000, 1100); 282 + 283 + out: 284 + return ret; 285 + } 286 + 249 287 static int ufs_qcom_power_up_sequence(struct ufs_hba *hba) 250 288 { 251 289 struct ufs_qcom_host *host = ufshcd_get_variant(hba); ··· 291 253 int ret = 0; 292 254 bool is_rate_B = (UFS_QCOM_LIMIT_HS_RATE == PA_HS_MODE_B) 293 255 ? 
true : false; 256 + 257 + /* Reset UFS Host Controller and PHY */ 258 + ret = ufs_qcom_host_reset(hba); 259 + if (ret) 260 + dev_warn(hba->dev, "%s: host reset returned %d\n", 261 + __func__, ret); 294 262 295 263 if (is_rate_B) 296 264 phy_set_mode(phy, PHY_MODE_UFS_HS_B); ··· 1144 1100 /* Make a two way bind between the qcom host and the hba */ 1145 1101 host->hba = hba; 1146 1102 ufshcd_set_variant(hba, host); 1103 + 1104 + /* Setup the reset control of HCI */ 1105 + host->core_reset = devm_reset_control_get(hba->dev, "rst"); 1106 + if (IS_ERR(host->core_reset)) { 1107 + err = PTR_ERR(host->core_reset); 1108 + dev_warn(dev, "Failed to get reset control %d\n", err); 1109 + host->core_reset = NULL; 1110 + err = 0; 1111 + } 1147 1112 1148 1113 /* Fire up the reset controller. Failure here is non-fatal. */ 1149 1114 host->rcdev.of_node = dev->of_node;
+3
drivers/scsi/ufs/ufs-qcom.h
··· 6 6 #define UFS_QCOM_H_ 7 7 8 8 #include <linux/reset-controller.h> 9 + #include <linux/reset.h> 9 10 10 11 #define MAX_UFS_QCOM_HOSTS 1 11 12 #define MAX_U32 (~(u32)0) ··· 234 233 u32 dbg_print_en; 235 234 struct ufs_qcom_testbus testbus; 236 235 236 + /* Reset control of HCI */ 237 + struct reset_control *core_reset; 237 238 struct reset_controller_dev rcdev; 238 239 239 240 struct gpio_desc *device_reset;
+9 -6
drivers/scsi/ufs/ufs-sysfs.c
··· 126 126 return; 127 127 128 128 spin_lock_irqsave(hba->host->host_lock, flags); 129 - if (hba->ahit == ahit) 130 - goto out_unlock; 131 - hba->ahit = ahit; 132 - if (!pm_runtime_suspended(hba->dev)) 133 - ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); 134 - out_unlock: 129 + if (hba->ahit != ahit) 130 + hba->ahit = ahit; 135 131 spin_unlock_irqrestore(hba->host->host_lock, flags); 132 + if (!pm_runtime_suspended(hba->dev)) { 133 + pm_runtime_get_sync(hba->dev); 134 + ufshcd_hold(hba, false); 135 + ufshcd_auto_hibern8_enable(hba); 136 + ufshcd_release(hba); 137 + pm_runtime_put(hba->dev); 138 + } 136 139 } 137 140 138 141 /* Convert Auto-Hibernate Idle Timer register value to microseconds */
+1
drivers/scsi/ufs/ufs_bsg.c
··· 162 162 163 163 /** 164 164 * ufs_bsg_remove - detach and remove the added ufs-bsg node 165 + * @hba: per adapter object 165 166 * 166 167 * Should be called when unloading the driver. 167 168 */
+1 -1
drivers/scsi/ufs/ufshcd-dwc.c
··· 80 80 */ 81 81 static int ufshcd_dwc_connection_setup(struct ufs_hba *hba) 82 82 { 83 - const struct ufshcd_dme_attr_val setup_attrs[] = { 83 + static const struct ufshcd_dme_attr_val setup_attrs[] = { 84 84 { UIC_ARG_MIB(T_CONNECTIONSTATE), 0, DME_LOCAL }, 85 85 { UIC_ARG_MIB(N_DEVICEID), 0, DME_LOCAL }, 86 86 { UIC_ARG_MIB(N_DEVICEID_VALID), 0, DME_LOCAL },
-1
drivers/scsi/ufs/ufshcd-pltfrm.c
··· 402 402 403 403 irq = platform_get_irq(pdev, 0); 404 404 if (irq < 0) { 405 - dev_err(dev, "IRQ resource not available\n"); 406 405 err = -ENODEV; 407 406 goto out; 408 407 }
+144 -70
drivers/scsi/ufs/ufshcd.c
··· 88 88 /* Interrupt aggregation default timeout, unit: 40us */ 89 89 #define INT_AGGR_DEF_TO 0x02 90 90 91 + /* default delay of autosuspend: 2000 ms */ 92 + #define RPM_AUTOSUSPEND_DELAY_MS 2000 93 + 91 94 #define ufshcd_toggle_vreg(_dev, _vreg, _on) \ 92 95 ({ \ 93 96 int _ret; \ ··· 117 114 if (offset % 4 != 0 || len % 4 != 0) /* keep readl happy */ 118 115 return -EINVAL; 119 116 120 - regs = kzalloc(len, GFP_KERNEL); 117 + regs = kzalloc(len, GFP_ATOMIC); 121 118 if (!regs) 122 119 return -ENOMEM; 123 120 ··· 240 237 END_FIX 241 238 }; 242 239 243 - static void ufshcd_tmc_handler(struct ufs_hba *hba); 240 + static irqreturn_t ufshcd_tmc_handler(struct ufs_hba *hba); 244 241 static void ufshcd_async_scan(void *data, async_cookie_t cookie); 245 242 static int ufshcd_reset_and_restore(struct ufs_hba *hba); 246 243 static int ufshcd_eh_host_reset_handler(struct scsi_cmnd *cmd); ··· 1610 1607 * state to CLKS_ON. 1611 1608 */ 1612 1609 if (hba->clk_gating.is_suspended || 1613 - (hba->clk_gating.state == REQ_CLKS_ON)) { 1610 + (hba->clk_gating.state != REQ_CLKS_OFF)) { 1614 1611 hba->clk_gating.state = CLKS_ON; 1615 1612 trace_ufshcd_clk_gating(dev_name(hba->dev), 1616 1613 hba->clk_gating.state); ··· 1938 1935 memcpy(hba->dev_cmd.query.descriptor, descp, resp_len); 1939 1936 } else { 1940 1937 dev_warn(hba->dev, 1941 - "%s: Response size is bigger than buffer", 1942 - __func__); 1938 + "%s: rsp size %d is bigger than buffer size %d", 1939 + __func__, resp_len, buf_len); 1943 1940 return -EINVAL; 1944 1941 } 1945 1942 } ··· 2989 2986 goto out_unlock; 2990 2987 } 2991 2988 2992 - hba->dev_cmd.query.descriptor = NULL; 2993 2989 *buf_len = be16_to_cpu(response->upiu_res.length); 2994 2990 2995 2991 out_unlock: 2992 + hba->dev_cmd.query.descriptor = NULL; 2996 2993 mutex_unlock(&hba->dev_cmd.lock); 2997 2994 out: 2998 2995 ufshcd_release(hba); ··· 3859 3856 ufshcd_set_eh_in_progress(hba); 3860 3857 spin_unlock_irqrestore(hba->host->host_lock, flags); 3861 3858 3859 + 
/* Reset the attached device */ 3860 + ufshcd_vops_device_reset(hba); 3861 + 3862 3862 ret = ufshcd_host_reset_and_restore(hba); 3863 3863 3864 3864 spin_lock_irqsave(hba->host->host_lock, flags); ··· 3891 3885 ktime_to_us(ktime_sub(ktime_get(), start)), ret); 3892 3886 3893 3887 if (ret) { 3888 + int err; 3889 + 3894 3890 dev_err(hba->dev, "%s: hibern8 enter failed. ret = %d\n", 3895 3891 __func__, ret); 3896 3892 3897 3893 /* 3898 - * If link recovery fails then return error so that caller 3899 - * don't retry the hibern8 enter again. 3894 + * If link recovery fails then return error code returned from 3895 + * ufshcd_link_recovery(). 3896 + * If link recovery succeeds then return -EAGAIN to attempt 3897 + * hibern8 enter retry again. 3900 3898 */ 3901 - if (ufshcd_link_recovery(hba)) 3902 - ret = -ENOLINK; 3899 + err = ufshcd_link_recovery(hba); 3900 + if (err) { 3901 + dev_err(hba->dev, "%s: link recovery failed", __func__); 3902 + ret = err; 3903 + } else { 3904 + ret = -EAGAIN; 3905 + } 3903 3906 } else 3904 3907 ufshcd_vops_hibern8_notify(hba, UIC_CMD_DME_HIBER_ENTER, 3905 3908 POST_CHANGE); ··· 3922 3907 3923 3908 for (retries = UIC_HIBERN8_ENTER_RETRIES; retries > 0; retries--) { 3924 3909 ret = __ufshcd_uic_hibern8_enter(hba); 3925 - if (!ret || ret == -ENOLINK) 3910 + if (!ret) 3926 3911 goto out; 3927 3912 } 3928 3913 out: ··· 3956 3941 return ret; 3957 3942 } 3958 3943 3959 - static void ufshcd_auto_hibern8_enable(struct ufs_hba *hba) 3944 + void ufshcd_auto_hibern8_enable(struct ufs_hba *hba) 3960 3945 { 3961 3946 unsigned long flags; 3962 3947 ··· 4646 4631 */ 4647 4632 static int ufshcd_slave_configure(struct scsi_device *sdev) 4648 4633 { 4634 + struct ufs_hba *hba = shost_priv(sdev->host); 4649 4635 struct request_queue *q = sdev->request_queue; 4650 4636 4651 4637 blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1); 4638 + 4639 + if (ufshcd_is_rpm_autosuspend_allowed(hba)) 4640 + sdev->rpm_autosuspend = 1; 4641 + 4652 4642 return 0; 4653 
4643 } 4654 4644 ··· 4808 4788 * ufshcd_uic_cmd_compl - handle completion of uic command 4809 4789 * @hba: per adapter instance 4810 4790 * @intr_status: interrupt status generated by the controller 4791 + * 4792 + * Returns 4793 + * IRQ_HANDLED - If interrupt is valid 4794 + * IRQ_NONE - If invalid interrupt 4811 4795 */ 4812 - static void ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status) 4796 + static irqreturn_t ufshcd_uic_cmd_compl(struct ufs_hba *hba, u32 intr_status) 4813 4797 { 4798 + irqreturn_t retval = IRQ_NONE; 4799 + 4814 4800 if ((intr_status & UIC_COMMAND_COMPL) && hba->active_uic_cmd) { 4815 4801 hba->active_uic_cmd->argument2 |= 4816 4802 ufshcd_get_uic_cmd_result(hba); 4817 4803 hba->active_uic_cmd->argument3 = 4818 4804 ufshcd_get_dme_attr_val(hba); 4819 4805 complete(&hba->active_uic_cmd->done); 4806 + retval = IRQ_HANDLED; 4820 4807 } 4821 4808 4822 - if ((intr_status & UFSHCD_UIC_PWR_MASK) && hba->uic_async_done) 4809 + if ((intr_status & UFSHCD_UIC_PWR_MASK) && hba->uic_async_done) { 4823 4810 complete(hba->uic_async_done); 4811 + retval = IRQ_HANDLED; 4812 + } 4813 + return retval; 4824 4814 } 4825 4815 4826 4816 /** ··· 4886 4856 /** 4887 4857 * ufshcd_transfer_req_compl - handle SCSI and query command completion 4888 4858 * @hba: per adapter instance 4859 + * 4860 + * Returns 4861 + * IRQ_HANDLED - If interrupt is valid 4862 + * IRQ_NONE - If invalid interrupt 4889 4863 */ 4890 - static void ufshcd_transfer_req_compl(struct ufs_hba *hba) 4864 + static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba) 4891 4865 { 4892 4866 unsigned long completed_reqs; 4893 4867 u32 tr_doorbell; ··· 4910 4876 tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); 4911 4877 completed_reqs = tr_doorbell ^ hba->outstanding_reqs; 4912 4878 4913 - __ufshcd_transfer_req_compl(hba, completed_reqs); 4879 + if (completed_reqs) { 4880 + __ufshcd_transfer_req_compl(hba, completed_reqs); 4881 + return IRQ_HANDLED; 4882 + } else { 4883 + 
return IRQ_NONE; 4884 + } 4914 4885 } 4915 4886 4916 4887 /** ··· 5434 5395 /** 5435 5396 * ufshcd_update_uic_error - check and set fatal UIC error flags. 5436 5397 * @hba: per-adapter instance 5398 + * 5399 + * Returns 5400 + * IRQ_HANDLED - If interrupt is valid 5401 + * IRQ_NONE - If invalid interrupt 5437 5402 */ 5438 - static void ufshcd_update_uic_error(struct ufs_hba *hba) 5403 + static irqreturn_t ufshcd_update_uic_error(struct ufs_hba *hba) 5439 5404 { 5440 5405 u32 reg; 5406 + irqreturn_t retval = IRQ_NONE; 5441 5407 5442 5408 /* PHY layer lane error */ 5443 5409 reg = ufshcd_readl(hba, REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER); 5444 5410 /* Ignore LINERESET indication, as this is not an error */ 5445 5411 if ((reg & UIC_PHY_ADAPTER_LAYER_ERROR) && 5446 - (reg & UIC_PHY_ADAPTER_LAYER_LANE_ERR_MASK)) { 5412 + (reg & UIC_PHY_ADAPTER_LAYER_LANE_ERR_MASK)) { 5447 5413 /* 5448 5414 * To know whether this error is fatal or not, DB timeout 5449 5415 * must be checked but this error is handled separately. 
5450 5416 */ 5451 5417 dev_dbg(hba->dev, "%s: UIC Lane error reported\n", __func__); 5452 5418 ufshcd_update_reg_hist(&hba->ufs_stats.pa_err, reg); 5419 + retval |= IRQ_HANDLED; 5453 5420 } 5454 5421 5455 5422 /* PA_INIT_ERROR is fatal and needs UIC reset */ 5456 5423 reg = ufshcd_readl(hba, REG_UIC_ERROR_CODE_DATA_LINK_LAYER); 5457 - if (reg) 5424 + if ((reg & UIC_DATA_LINK_LAYER_ERROR) && 5425 + (reg & UIC_DATA_LINK_LAYER_ERROR_CODE_MASK)) { 5458 5426 ufshcd_update_reg_hist(&hba->ufs_stats.dl_err, reg); 5459 5427 5460 - if (reg & UIC_DATA_LINK_LAYER_ERROR_PA_INIT) 5461 - hba->uic_error |= UFSHCD_UIC_DL_PA_INIT_ERROR; 5462 - else if (hba->dev_quirks & 5463 - UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS) { 5464 - if (reg & UIC_DATA_LINK_LAYER_ERROR_NAC_RECEIVED) 5465 - hba->uic_error |= 5466 - UFSHCD_UIC_DL_NAC_RECEIVED_ERROR; 5467 - else if (reg & UIC_DATA_LINK_LAYER_ERROR_TCx_REPLAY_TIMEOUT) 5468 - hba->uic_error |= UFSHCD_UIC_DL_TCx_REPLAY_ERROR; 5428 + if (reg & UIC_DATA_LINK_LAYER_ERROR_PA_INIT) 5429 + hba->uic_error |= UFSHCD_UIC_DL_PA_INIT_ERROR; 5430 + else if (hba->dev_quirks & 5431 + UFS_DEVICE_QUIRK_RECOVERY_FROM_DL_NAC_ERRORS) { 5432 + if (reg & UIC_DATA_LINK_LAYER_ERROR_NAC_RECEIVED) 5433 + hba->uic_error |= 5434 + UFSHCD_UIC_DL_NAC_RECEIVED_ERROR; 5435 + else if (reg & UIC_DATA_LINK_LAYER_ERROR_TCx_REPLAY_TIMEOUT) 5436 + hba->uic_error |= UFSHCD_UIC_DL_TCx_REPLAY_ERROR; 5437 + } 5438 + retval |= IRQ_HANDLED; 5469 5439 } 5470 5440 5471 5441 /* UIC NL/TL/DME errors needs software retry */ 5472 5442 reg = ufshcd_readl(hba, REG_UIC_ERROR_CODE_NETWORK_LAYER); 5473 - if (reg) { 5443 + if ((reg & UIC_NETWORK_LAYER_ERROR) && 5444 + (reg & UIC_NETWORK_LAYER_ERROR_CODE_MASK)) { 5474 5445 ufshcd_update_reg_hist(&hba->ufs_stats.nl_err, reg); 5475 5446 hba->uic_error |= UFSHCD_UIC_NL_ERROR; 5447 + retval |= IRQ_HANDLED; 5476 5448 } 5477 5449 5478 5450 reg = ufshcd_readl(hba, REG_UIC_ERROR_CODE_TRANSPORT_LAYER); 5479 - if (reg) { 5451 + if ((reg & 
UIC_TRANSPORT_LAYER_ERROR) && 5452 + (reg & UIC_TRANSPORT_LAYER_ERROR_CODE_MASK)) { 5480 5453 ufshcd_update_reg_hist(&hba->ufs_stats.tl_err, reg); 5481 5454 hba->uic_error |= UFSHCD_UIC_TL_ERROR; 5455 + retval |= IRQ_HANDLED; 5482 5456 } 5483 5457 5484 5458 reg = ufshcd_readl(hba, REG_UIC_ERROR_CODE_DME); 5485 - if (reg) { 5459 + if ((reg & UIC_DME_ERROR) && 5460 + (reg & UIC_DME_ERROR_CODE_MASK)) { 5486 5461 ufshcd_update_reg_hist(&hba->ufs_stats.dme_err, reg); 5487 5462 hba->uic_error |= UFSHCD_UIC_DME_ERROR; 5463 + retval |= IRQ_HANDLED; 5488 5464 } 5489 5465 5490 5466 dev_dbg(hba->dev, "%s: UIC error flags = 0x%08x\n", 5491 5467 __func__, hba->uic_error); 5468 + return retval; 5492 5469 } 5493 5470 5494 5471 static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba, ··· 5527 5472 /** 5528 5473 * ufshcd_check_errors - Check for errors that need s/w attention 5529 5474 * @hba: per-adapter instance 5475 + * 5476 + * Returns 5477 + * IRQ_HANDLED - If interrupt is valid 5478 + * IRQ_NONE - If invalid interrupt 5530 5479 */ 5531 - static void ufshcd_check_errors(struct ufs_hba *hba) 5480 + static irqreturn_t ufshcd_check_errors(struct ufs_hba *hba) 5532 5481 { 5533 5482 bool queue_eh_work = false; 5483 + irqreturn_t retval = IRQ_NONE; 5534 5484 5535 5485 if (hba->errors & INT_FATAL_ERRORS) { 5536 5486 ufshcd_update_reg_hist(&hba->ufs_stats.fatal_err, hba->errors); ··· 5544 5484 5545 5485 if (hba->errors & UIC_ERROR) { 5546 5486 hba->uic_error = 0; 5547 - ufshcd_update_uic_error(hba); 5487 + retval = ufshcd_update_uic_error(hba); 5548 5488 if (hba->uic_error) 5549 5489 queue_eh_work = true; 5550 5490 } ··· 5592 5532 } 5593 5533 schedule_work(&hba->eh_work); 5594 5534 } 5535 + retval |= IRQ_HANDLED; 5595 5536 } 5596 5537 /* 5597 5538 * if (!queue_eh_work) - ··· 5600 5539 * itself without s/w intervention or errors that will be 5601 5540 * handled by the SCSI core layer. 
 	 */
+	return retval;
 }
 
 /**
  * ufshcd_tmc_handler - handle task management function completion
  * @hba: per adapter instance
+ *
+ * Returns
+ *  IRQ_HANDLED - If interrupt is valid
+ *  IRQ_NONE - If invalid interrupt
  */
-static void ufshcd_tmc_handler(struct ufs_hba *hba)
+static irqreturn_t ufshcd_tmc_handler(struct ufs_hba *hba)
 {
 	u32 tm_doorbell;
 
 	tm_doorbell = ufshcd_readl(hba, REG_UTP_TASK_REQ_DOOR_BELL);
 	hba->tm_condition = tm_doorbell ^ hba->outstanding_tasks;
-	wake_up(&hba->tm_wq);
+	if (hba->tm_condition) {
+		wake_up(&hba->tm_wq);
+		return IRQ_HANDLED;
+	} else {
+		return IRQ_NONE;
+	}
 }
 
 /**
  * ufshcd_sl_intr - Interrupt service routine
  * @hba: per adapter instance
  * @intr_status: contains interrupts generated by the controller
+ *
+ * Returns
+ *  IRQ_HANDLED - If interrupt is valid
+ *  IRQ_NONE - If invalid interrupt
  */
-static void ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
+static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
 {
+	irqreturn_t retval = IRQ_NONE;
+
 	hba->errors = UFSHCD_ERROR_MASK & intr_status;
 
 	if (ufshcd_is_auto_hibern8_error(hba, intr_status))
 		hba->errors |= (UFSHCD_UIC_HIBERN8_MASK & intr_status);
 
 	if (hba->errors)
-		ufshcd_check_errors(hba);
+		retval |= ufshcd_check_errors(hba);
 
 	if (intr_status & UFSHCD_UIC_MASK)
-		ufshcd_uic_cmd_compl(hba, intr_status);
+		retval |= ufshcd_uic_cmd_compl(hba, intr_status);
 
 	if (intr_status & UTP_TASK_REQ_COMPL)
-		ufshcd_tmc_handler(hba);
+		retval |= ufshcd_tmc_handler(hba);
 
 	if (intr_status & UTP_TRANSFER_REQ_COMPL)
-		ufshcd_transfer_req_compl(hba);
+		retval |= ufshcd_transfer_req_compl(hba);
+
+	return retval;
 }
 
 /**
···
  * @irq: irq number
  * @__hba: pointer to adapter instance
  *
- * Returns IRQ_HANDLED - If interrupt is valid
- *		IRQ_NONE - If invalid interrupt
+ * Returns
+ *  IRQ_HANDLED - If interrupt is valid
+ *  IRQ_NONE - If invalid interrupt
  */
 static irqreturn_t ufshcd_intr(int irq, void *__hba)
 {
···
 			intr_status & ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
 		if (intr_status)
 			ufshcd_writel(hba, intr_status, REG_INTERRUPT_STATUS);
-		if (enabled_intr_status) {
-			ufshcd_sl_intr(hba, enabled_intr_status);
-			retval = IRQ_HANDLED;
-		}
+		if (enabled_intr_status)
+			retval |= ufshcd_sl_intr(hba, enabled_intr_status);
 
 		intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
 	} while (intr_status && --retries);
+
+	if (retval == IRQ_NONE) {
+		dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x\n",
+					__func__, intr_status);
+		ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
+	}
 
 	spin_unlock(hba->host->host_lock);
 	return retval;
···
  * @hba: per-adapter instance
  * @req_upiu: upiu request
  * @rsp_upiu: upiu reply
- * @msgcode: message code, one of UPIU Transaction Codes Initiator to Target
  * @desc_buff: pointer to descriptor buffer, NULL if NA
  * @buff_len: descriptor size, 0 if NA
+ * @cmd_type: specifies the type (NOP, Query...)
  * @desc_op: descriptor operation
  *
  * Those type of requests uses UTP Transfer Request Descriptor - utrd.
···
 					struct utp_upiu_req *req_upiu,
 					struct utp_upiu_req *rsp_upiu,
 					u8 *desc_buff, int *buff_len,
-					int cmd_type,
+					enum dev_cmd_type cmd_type,
 					enum query_opcode desc_op)
 {
 	struct ufshcd_lrb *lrbp;
···
 			memcpy(desc_buff, descp, resp_len);
 			*buff_len = resp_len;
 		} else {
-			dev_warn(hba->dev, "rsp size is bigger than buffer");
+			dev_warn(hba->dev,
+				 "%s: rsp size %d is bigger than buffer size %d",
+				 __func__, resp_len, *buff_len);
 			*buff_len = 0;
 			err = -EINVAL;
 		}
···
 			     enum query_opcode desc_op)
 {
 	int err;
-	int cmd_type = DEV_CMD_TYPE_QUERY;
+	enum dev_cmd_type cmd_type = DEV_CMD_TYPE_QUERY;
 	struct utp_task_req_desc treq = { { 0 }, };
 	int ocs_value;
 	u8 tm_f = be32_to_cpu(req_upiu->header.dword_1) >> 16 & MASK_TM_FUNC;
···
 			&hba->desc_size.geom_desc);
 	if (err)
 		hba->desc_size.geom_desc = QUERY_DESC_GEOMETRY_DEF_SIZE;
+
 	err = ufshcd_read_desc_length(hba, QUERY_DESC_IDN_HEALTH, 0,
 			&hba->desc_size.hlth_desc);
 	if (err)
 		hba->desc_size.hlth_desc = QUERY_DESC_HEALTH_DEF_SIZE;
-}
-
-static void ufshcd_def_desc_sizes(struct ufs_hba *hba)
-{
-	hba->desc_size.dev_desc = QUERY_DESC_DEVICE_DEF_SIZE;
-	hba->desc_size.pwr_desc = QUERY_DESC_POWER_DEF_SIZE;
-	hba->desc_size.interc_desc = QUERY_DESC_INTERCONNECT_DEF_SIZE;
-	hba->desc_size.conf_desc = QUERY_DESC_CONFIGURATION_DEF_SIZE;
-	hba->desc_size.unit_desc = QUERY_DESC_UNIT_DEF_SIZE;
-	hba->desc_size.geom_desc = QUERY_DESC_GEOMETRY_DEF_SIZE;
-	hba->desc_size.hlth_desc = QUERY_DESC_HEALTH_DEF_SIZE;
 }
 
 static struct ufs_ref_clk ufs_ref_clk_freqs[] = {
···
 	/* UniPro link is active now */
 	ufshcd_set_link_active(hba);
 
-	/* Enable Auto-Hibernate if configured */
-	ufshcd_auto_hibern8_enable(hba);
-
 	ret = ufshcd_verify_dev_init(hba);
 	if (ret)
 		goto out;
···
 
 	/* set the state as operational after switching to desired gear */
 	hba->ufshcd_state = UFSHCD_STATE_OPERATIONAL;
+
+	/* Enable Auto-Hibernate if configured */
+	ufshcd_auto_hibern8_enable(hba);
 
 	/*
 	 * If we are in error handling context or in power management callbacks
···
 	.track_queue_depth	= 1,
 	.sdev_groups		= ufshcd_driver_groups,
 	.dma_boundary		= PAGE_SIZE - 1,
+	.rpm_autosuspend_delay	= RPM_AUTOSUSPEND_DELAY_MS,
 };
 
 static int ufshcd_config_vreg_load(struct device *dev, struct ufs_vreg *vreg,
···
 	if (hba->clk_scaling.is_allowed)
 		ufshcd_resume_clkscaling(hba);
 
-	/* Schedule clock gating in case of no access to UFS device yet */
-	ufshcd_release(hba);
-
 	/* Enable Auto-Hibernate if configured */
 	ufshcd_auto_hibern8_enable(hba);
+
+	/* Schedule clock gating in case of no access to UFS device yet */
+	ufshcd_release(hba);
 
 	goto out;
 
···
 
 	hba->mmio_base = mmio_base;
 	hba->irq = irq;
-
-	/* Set descriptor lengths to specification defaults */
-	ufshcd_def_desc_sizes(hba);
 
 	err = ufshcd_hba_init(hba);
 	if (err)
+12
drivers/scsi/ufs/ufshcd.h
···
  * the performance of ongoing read/write operations.
  */
 #define UFSHCD_CAP_KEEP_AUTO_BKOPS_ENABLED_EXCEPT_SUSPEND (1 << 5)
+	/*
+	 * This capability allows host controller driver to automatically
+	 * enable runtime power management by itself instead of waiting
+	 * for userspace to control the power management.
+	 */
+#define UFSHCD_CAP_RPM_AUTOSUSPEND (1 << 6)
 
 	struct devfreq *devfreq;
 	struct ufs_clk_scaling clk_scaling;
···
 static inline bool ufshcd_can_autobkops_during_suspend(struct ufs_hba *hba)
 {
 	return hba->caps & UFSHCD_CAP_AUTO_BKOPS_SUSPEND;
+}
+static inline bool ufshcd_is_rpm_autosuspend_allowed(struct ufs_hba *hba)
+{
+	return hba->caps & UFSHCD_CAP_RPM_AUTOSUSPEND;
 }
 
 static inline bool ufshcd_is_intr_aggr_allowed(struct ufs_hba *hba)
···
 	enum attr_idn idn, u8 index, u8 selector, u32 *attr_val);
 int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,
 	enum flag_idn idn, bool *flag_res);
+
+void ufshcd_auto_hibern8_enable(struct ufs_hba *hba);
 
 #define SD_ASCII_STD true
 #define SD_RAW false
+1 -1
drivers/scsi/ufs/ufshci.h
···
 
 /* UECDL - Host UIC Error Code Data Link Layer 3Ch */
 #define UIC_DATA_LINK_LAYER_ERROR		0x80000000
-#define UIC_DATA_LINK_LAYER_ERROR_CODE_MASK	0x7FFF
+#define UIC_DATA_LINK_LAYER_ERROR_CODE_MASK	0xFFFF
 #define UIC_DATA_LINK_LAYER_ERROR_TCX_REP_TIMER_EXP	0x2
 #define UIC_DATA_LINK_LAYER_ERROR_AFCX_REQ_TIMER_EXP	0x4
 #define UIC_DATA_LINK_LAYER_ERROR_FCX_PRO_TIMER_EXP	0x8
+9 -2
drivers/scsi/zorro_esp.c
···
 static u32 zorro_esp_dma_length_limit(struct esp *esp, u32 dma_addr,
 					u32 dma_len)
 {
-	return dma_len > 0xFFFF ? 0xFFFF : dma_len;
+	return dma_len > (1U << 16) ? (1U << 16) : dma_len;
+}
+
+static u32 fastlane_esp_dma_length_limit(struct esp *esp, u32 dma_addr,
+					u32 dma_len)
+{
+	/* The old driver used 0xfffc as limit, so do that here too */
+	return dma_len > 0xfffc ? 0xfffc : dma_len;
 }
 
 static void zorro_esp_reset_dma(struct esp *esp)
···
 	.esp_write8		= zorro_esp_write8,
 	.esp_read8		= zorro_esp_read8,
 	.irq_pending		= fastlane_esp_irq_pending,
-	.dma_length_limit	= zorro_esp_dma_length_limit,
+	.dma_length_limit	= fastlane_esp_dma_length_limit,
 	.reset_dma		= zorro_esp_reset_dma,
 	.dma_drain		= zorro_esp_dma_drain,
 	.dma_invalidate		= fastlane_esp_dma_invalidate,
-3
drivers/target/iscsi/cxgbit/cxgbit_ddp.c
···
 	struct cxgb4_lld_info *lldi = &cdev->lldi;
 	struct net_device *ndev = cdev->lldi.ports[0];
 	struct cxgbi_tag_format tformat;
-	unsigned int ppmax;
 	int ret, i;
 
 	if (!lldi->vr->iscsi.size) {
 		pr_warn("%s, iscsi NOT enabled, check config!\n", ndev->name);
 		return -EACCES;
 	}
-
-	ppmax = lldi->vr->iscsi.size >> PPOD_SIZE_SHIFT;
 
 	memset(&tformat, 0, sizeof(struct cxgbi_tag_format));
 	for (i = 0; i < 4; i++)
+14 -10
drivers/target/iscsi/iscsi_target.c
···
 		hdr->cmdsn, be32_to_cpu(hdr->data_length), payload_length,
 		conn->cid);
 
-	target_get_sess_cmd(&cmd->se_cmd, true);
+	if (target_get_sess_cmd(&cmd->se_cmd, true) < 0)
+		return iscsit_add_reject_cmd(cmd,
+				ISCSI_REASON_WAITING_FOR_LOGOUT, buf);
 
 	cmd->sense_reason = transport_lookup_cmd_lun(&cmd->se_cmd,
 						     scsilun_to_int(&hdr->lun));
···
 			conn->sess->se_sess, 0, DMA_NONE,
 			TCM_SIMPLE_TAG, cmd->sense_buffer + 2);
 
-	target_get_sess_cmd(&cmd->se_cmd, true);
+	if (target_get_sess_cmd(&cmd->se_cmd, true) < 0)
+		return iscsit_add_reject_cmd(cmd,
+				ISCSI_REASON_WAITING_FOR_LOGOUT, buf);
 
 	/*
 	 * TASK_REASSIGN for ERL=2 / connection stays inside of
···
 		}
 		goto empty_sendtargets;
 	}
-	if (strncmp("SendTargets", text_in, 11) != 0) {
+	if (strncmp("SendTargets=", text_in, 12) != 0) {
 		pr_err("Received Text Data that is not"
 			" SendTargets, cannot continue.\n");
 		goto reject;
 	}
+	/* '=' confirmed in strncmp */
 	text_ptr = strchr(text_in, '=');
-	if (!text_ptr) {
-		pr_err("No \"=\" separator found in Text Data,"
-			" cannot continue.\n");
-		goto reject;
-	}
-	if (!strncmp("=All", text_ptr, 4)) {
+	BUG_ON(!text_ptr);
+	if (!strncmp("=All", text_ptr, 5)) {
 		cmd->cmd_flags |= ICF_SENDTARGETS_ALL;
 	} else if (!strncmp("=iqn.", text_ptr, 5) ||
 		   !strncmp("=eui.", text_ptr, 5)) {
 		cmd->cmd_flags |= ICF_SENDTARGETS_SINGLE;
 	} else {
-		pr_err("Unable to locate valid SendTargets=%s value\n", text_ptr);
+		pr_err("Unable to locate valid SendTargets%s value\n",
+		       text_ptr);
 		goto reject;
 	}
···
 	 * must wait until they have completed.
 	 */
 	iscsit_check_conn_usage_count(conn);
+	target_sess_cmd_list_set_waiting(sess->se_sess);
+	target_wait_for_sess_cmds(sess->se_sess);
 
 	ahash_request_free(conn->conn_tx_hash);
 	if (conn->conn_rx_hash) {
+149 -83
drivers/target/iscsi/iscsi_target_auth.c
···
 #include "iscsi_target_nego.h"
 #include "iscsi_target_auth.h"
 
+static char *chap_get_digest_name(const int digest_type)
+{
+	switch (digest_type) {
+	case CHAP_DIGEST_MD5:
+		return "md5";
+	case CHAP_DIGEST_SHA1:
+		return "sha1";
+	case CHAP_DIGEST_SHA256:
+		return "sha256";
+	case CHAP_DIGEST_SHA3_256:
+		return "sha3-256";
+	default:
+		return NULL;
+	}
+}
+
 static int chap_gen_challenge(
 	struct iscsi_conn *conn,
 	int caller,
···
 	unsigned int *c_len)
 {
 	int ret;
-	unsigned char challenge_asciihex[CHAP_CHALLENGE_LENGTH * 2 + 1];
+	unsigned char *challenge_asciihex;
 	struct iscsi_chap *chap = conn->auth_protocol;
 
-	memset(challenge_asciihex, 0, CHAP_CHALLENGE_LENGTH * 2 + 1);
+	challenge_asciihex = kzalloc(chap->challenge_len * 2 + 1, GFP_KERNEL);
+	if (!challenge_asciihex)
+		return -ENOMEM;
 
-	ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+	memset(chap->challenge, 0, MAX_CHAP_CHALLENGE_LEN);
+
+	ret = get_random_bytes_wait(chap->challenge, chap->challenge_len);
 	if (unlikely(ret))
-		return ret;
+		goto out;
+
 	bin2hex(challenge_asciihex, chap->challenge,
-				CHAP_CHALLENGE_LENGTH);
+				chap->challenge_len);
 	/*
 	 * Set CHAP_C, and copy the generated challenge into c_str.
 	 */
···
 
 	pr_debug("[%s] Sending CHAP_C=0x%s\n\n", (caller) ? "server" : "client",
 			challenge_asciihex);
+
+out:
+	kfree(challenge_asciihex);
+	return ret;
+}
+
+static int chap_test_algorithm(const char *name)
+{
+	struct crypto_shash *tfm;
+
+	tfm = crypto_alloc_shash(name, 0, 0);
+	if (IS_ERR(tfm))
+		return -1;
+
+	crypto_free_shash(tfm);
 	return 0;
 }
 
 static int chap_check_algorithm(const char *a_str)
 {
-	char *tmp, *orig, *token;
+	char *tmp, *orig, *token, *digest_name;
+	long digest_type;
+	int r = CHAP_DIGEST_UNKNOWN;
 
 	tmp = kstrdup(a_str, GFP_KERNEL);
 	if (!tmp) {
···
 		if (!token)
 			goto out;
 
-		if (!strncmp(token, "5", 1)) {
-			pr_debug("Selected MD5 Algorithm\n");
-			kfree(orig);
-			return CHAP_DIGEST_MD5;
+		if (kstrtol(token, 10, &digest_type))
+			continue;
+
+		digest_name = chap_get_digest_name(digest_type);
+		if (!digest_name)
+			continue;
+
+		pr_debug("Selected %s Algorithm\n", digest_name);
+		if (chap_test_algorithm(digest_name) < 0) {
+			pr_err("failed to allocate %s algo\n", digest_name);
+		} else {
+			r = digest_type;
+			goto out;
 		}
 	}
 out:
 	kfree(orig);
-	return CHAP_DIGEST_UNKNOWN;
+	return r;
 }
 
 static void chap_close(struct iscsi_conn *conn)
···
 	char *aic_str,
 	unsigned int *aic_len)
 {
-	int ret;
+	int digest_type;
 	struct iscsi_chap *chap;
 
 	if (!(auth->naf_flags & NAF_USERID_SET) ||
···
 		return NULL;
 
 	chap = conn->auth_protocol;
-	ret = chap_check_algorithm(a_str);
-	switch (ret) {
+	digest_type = chap_check_algorithm(a_str);
+	switch (digest_type) {
 	case CHAP_DIGEST_MD5:
-		pr_debug("[server] Got CHAP_A=5\n");
-		/*
-		 * Send back CHAP_A set to MD5.
-		 */
-		*aic_len = sprintf(aic_str, "CHAP_A=5");
-		*aic_len += 1;
-		chap->digest_type = CHAP_DIGEST_MD5;
-		pr_debug("[server] Sending CHAP_A=%d\n", chap->digest_type);
+		chap->digest_size = MD5_SIGNATURE_SIZE;
+		break;
+	case CHAP_DIGEST_SHA1:
+		chap->digest_size = SHA1_SIGNATURE_SIZE;
+		break;
+	case CHAP_DIGEST_SHA256:
+		chap->digest_size = SHA256_SIGNATURE_SIZE;
+		break;
+	case CHAP_DIGEST_SHA3_256:
+		chap->digest_size = SHA3_256_SIGNATURE_SIZE;
 		break;
 	case CHAP_DIGEST_UNKNOWN:
 	default:
···
 		chap_close(conn);
 		return NULL;
 	}
+
+	chap->digest_name = chap_get_digest_name(digest_type);
+
+	/* Tie the challenge length to the digest size */
+	chap->challenge_len = chap->digest_size;
+
+	pr_debug("[server] Got CHAP_A=%d\n", digest_type);
+	*aic_len = sprintf(aic_str, "CHAP_A=%d", digest_type);
+	*aic_len += 1;
+	pr_debug("[server] Sending CHAP_A=%d\n", digest_type);
 
 	/*
 	 * Set Identifier.
···
 	return chap;
 }
 
-static int chap_server_compute_md5(
+static int chap_server_compute_hash(
 	struct iscsi_conn *conn,
 	struct iscsi_node_auth *auth,
 	char *nr_in_ptr,
···
 {
 	unsigned long id;
 	unsigned char id_as_uchar;
-	unsigned char digest[MD5_SIGNATURE_SIZE];
-	unsigned char type, response[MD5_SIGNATURE_SIZE * 2 + 2];
-	unsigned char identifier[10], *challenge = NULL;
-	unsigned char *challenge_binhex = NULL;
-	unsigned char client_digest[MD5_SIGNATURE_SIZE];
-	unsigned char server_digest[MD5_SIGNATURE_SIZE];
+	unsigned char type;
+	unsigned char identifier[10], *initiatorchg = NULL;
+	unsigned char *initiatorchg_binhex = NULL;
+	unsigned char *digest = NULL;
+	unsigned char *response = NULL;
+	unsigned char *client_digest = NULL;
+	unsigned char *server_digest = NULL;
 	unsigned char chap_n[MAX_CHAP_N_SIZE], chap_r[MAX_RESPONSE_LENGTH];
 	size_t compare_len;
 	struct iscsi_chap *chap = conn->auth_protocol;
 	struct crypto_shash *tfm = NULL;
 	struct shash_desc *desc = NULL;
-	int auth_ret = -1, ret, challenge_len;
+	int auth_ret = -1, ret, initiatorchg_len;
+
+	digest = kzalloc(chap->digest_size, GFP_KERNEL);
+	if (!digest) {
+		pr_err("Unable to allocate the digest buffer\n");
+		goto out;
+	}
+
+	response = kzalloc(chap->digest_size * 2 + 2, GFP_KERNEL);
+	if (!response) {
+		pr_err("Unable to allocate the response buffer\n");
+		goto out;
+	}
+
+	client_digest = kzalloc(chap->digest_size, GFP_KERNEL);
+	if (!client_digest) {
+		pr_err("Unable to allocate the client_digest buffer\n");
+		goto out;
+	}
+
+	server_digest = kzalloc(chap->digest_size, GFP_KERNEL);
+	if (!server_digest) {
+		pr_err("Unable to allocate the server_digest buffer\n");
+		goto out;
+	}
 
 	memset(identifier, 0, 10);
 	memset(chap_n, 0, MAX_CHAP_N_SIZE);
 	memset(chap_r, 0, MAX_RESPONSE_LENGTH);
-	memset(digest, 0, MD5_SIGNATURE_SIZE);
-	memset(response, 0, MD5_SIGNATURE_SIZE * 2 + 2);
-	memset(client_digest, 0, MD5_SIGNATURE_SIZE);
-	memset(server_digest, 0, MD5_SIGNATURE_SIZE);
 
-	challenge = kzalloc(CHAP_CHALLENGE_STR_LEN, GFP_KERNEL);
-	if (!challenge) {
+	initiatorchg = kzalloc(CHAP_CHALLENGE_STR_LEN, GFP_KERNEL);
+	if (!initiatorchg) {
 		pr_err("Unable to allocate challenge buffer\n");
 		goto out;
 	}
 
-	challenge_binhex = kzalloc(CHAP_CHALLENGE_STR_LEN, GFP_KERNEL);
-	if (!challenge_binhex) {
-		pr_err("Unable to allocate challenge_binhex buffer\n");
+	initiatorchg_binhex = kzalloc(CHAP_CHALLENGE_STR_LEN, GFP_KERNEL);
+	if (!initiatorchg_binhex) {
+		pr_err("Unable to allocate initiatorchg_binhex buffer\n");
 		goto out;
 	}
 	/*
···
 		pr_err("Could not find CHAP_R.\n");
 		goto out;
 	}
-	if (strlen(chap_r) != MD5_SIGNATURE_SIZE * 2) {
+	if (strlen(chap_r) != chap->digest_size * 2) {
 		pr_err("Malformed CHAP_R\n");
 		goto out;
 	}
-	if (hex2bin(client_digest, chap_r, MD5_SIGNATURE_SIZE) < 0) {
+	if (hex2bin(client_digest, chap_r, chap->digest_size) < 0) {
 		pr_err("Malformed CHAP_R\n");
 		goto out;
 	}
 
 	pr_debug("[server] Got CHAP_R=%s\n", chap_r);
 
-	tfm = crypto_alloc_shash("md5", 0, 0);
+	tfm = crypto_alloc_shash(chap->digest_name, 0, 0);
 	if (IS_ERR(tfm)) {
 		tfm = NULL;
 		pr_err("Unable to allocate struct crypto_shash\n");
···
 	}
 
 	ret = crypto_shash_finup(desc, chap->challenge,
-				 CHAP_CHALLENGE_LENGTH, server_digest);
+				 chap->challenge_len, server_digest);
 	if (ret < 0) {
 		pr_err("crypto_shash_finup() failed for challenge\n");
 		goto out;
 	}
 
-	bin2hex(response, server_digest, MD5_SIGNATURE_SIZE);
-	pr_debug("[server] MD5 Server Digest: %s\n", response);
+	bin2hex(response, server_digest, chap->digest_size);
+	pr_debug("[server] %s Server Digest: %s\n",
+		 chap->digest_name, response);
 
-	if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) {
-		pr_debug("[server] MD5 Digests do not match!\n\n");
+	if (memcmp(server_digest, client_digest, chap->digest_size) != 0) {
+		pr_debug("[server] %s Digests do not match!\n\n",
+			 chap->digest_name);
 		goto out;
 	} else
-		pr_debug("[server] MD5 Digests match, CHAP connection"
-				" successful.\n\n");
+		pr_debug("[server] %s Digests match, CHAP connection"
+				" successful.\n\n", chap->digest_name);
 	/*
 	 * One way authentication has succeeded, return now if mutual
 	 * authentication is not enabled.
···
 	 * Get CHAP_C.
 	 */
 	if (extract_param(nr_in_ptr, "CHAP_C", CHAP_CHALLENGE_STR_LEN,
-			challenge, &type) < 0) {
+			initiatorchg, &type) < 0) {
 		pr_err("Could not find CHAP_C.\n");
 		goto out;
 	}
···
 		pr_err("Could not find CHAP_C.\n");
 		goto out;
 	}
-	challenge_len = DIV_ROUND_UP(strlen(challenge), 2);
-	if (!challenge_len) {
+	initiatorchg_len = DIV_ROUND_UP(strlen(initiatorchg), 2);
+	if (!initiatorchg_len) {
 		pr_err("Unable to convert incoming challenge\n");
 		goto out;
 	}
-	if (challenge_len > 1024) {
+	if (initiatorchg_len > 1024) {
 		pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n");
 		goto out;
 	}
-	if (hex2bin(challenge_binhex, challenge, challenge_len) < 0) {
+	if (hex2bin(initiatorchg_binhex, initiatorchg, initiatorchg_len) < 0) {
 		pr_err("Malformed CHAP_C\n");
 		goto out;
 	}
-	pr_debug("[server] Got CHAP_C=%s\n", challenge);
+	pr_debug("[server] Got CHAP_C=%s\n", initiatorchg);
 	/*
 	 * During mutual authentication, the CHAP_C generated by the
 	 * initiator must not match the original CHAP_C generated by
 	 * the target.
 	 */
-	if (!memcmp(challenge_binhex, chap->challenge, CHAP_CHALLENGE_LENGTH)) {
+	if (initiatorchg_len == chap->challenge_len &&
+	    !memcmp(initiatorchg_binhex, chap->challenge,
+		    initiatorchg_len)) {
 		pr_err("initiator CHAP_C matches target CHAP_C, failing"
 				" login attempt\n");
 		goto out;
···
 	/*
 	 * Convert received challenge to binary hex.
 	 */
-	ret = crypto_shash_finup(desc, challenge_binhex, challenge_len,
+	ret = crypto_shash_finup(desc, initiatorchg_binhex, initiatorchg_len,
 				 digest);
 	if (ret < 0) {
 		pr_err("crypto_shash_finup() failed for ma challenge\n");
···
 	/*
 	 * Convert response from binary hex to ascii hext.
 	 */
-	bin2hex(response, digest, MD5_SIGNATURE_SIZE);
+	bin2hex(response, digest, chap->digest_size);
 	*nr_out_len += sprintf(nr_out_ptr + *nr_out_len, "CHAP_R=0x%s",
 			response);
 	*nr_out_len += 1;
···
 	kzfree(desc);
 	if (tfm)
 		crypto_free_shash(tfm);
-	kfree(challenge);
-	kfree(challenge_binhex);
+	kfree(initiatorchg);
+	kfree(initiatorchg_binhex);
+	kfree(digest);
+	kfree(response);
+	kfree(server_digest);
+	kfree(client_digest);
 	return auth_ret;
-}
-
-static int chap_got_response(
-	struct iscsi_conn *conn,
-	struct iscsi_node_auth *auth,
-	char *nr_in_ptr,
-	char *nr_out_ptr,
-	unsigned int *nr_out_len)
-{
-	struct iscsi_chap *chap = conn->auth_protocol;
-
-	switch (chap->digest_type) {
-	case CHAP_DIGEST_MD5:
-		if (chap_server_compute_md5(conn, auth, nr_in_ptr,
-				nr_out_ptr, nr_out_len) < 0)
-			return -1;
-		return 0;
-	default:
-		pr_err("Unknown CHAP digest type %d!\n",
-				chap->digest_type);
-		return -1;
-	}
 }
 
 u32 chap_main_loop(
···
 		return 0;
 	} else if (chap->chap_state == CHAP_STAGE_SERVER_AIC) {
 		convert_null_to_semi(in_text, *in_len);
-		if (chap_got_response(conn, auth, in_text, out_text,
+		if (chap_server_compute_hash(conn, auth, in_text, out_text,
 				out_len) < 0) {
 			chap_close(conn);
 			return 2;
+12 -5
drivers/target/iscsi/iscsi_target_auth.h
···
 
 #define CHAP_DIGEST_UNKNOWN	0
 #define CHAP_DIGEST_MD5		5
-#define CHAP_DIGEST_SHA		6
+#define CHAP_DIGEST_SHA1	6
+#define CHAP_DIGEST_SHA256	7
+#define CHAP_DIGEST_SHA3_256	8
 
-#define CHAP_CHALLENGE_LENGTH	16
+#define MAX_CHAP_CHALLENGE_LEN	32
 #define CHAP_CHALLENGE_STR_LEN	4096
-#define MAX_RESPONSE_LENGTH	64	/* sufficient for MD5 */
+#define MAX_RESPONSE_LENGTH	128	/* sufficient for SHA3 256 */
 #define MAX_CHAP_N_SIZE		512
 
 #define MD5_SIGNATURE_SIZE	16	/* 16 bytes in a MD5 message digest */
+#define SHA1_SIGNATURE_SIZE	20	/* 20 bytes in a SHA1 message digest */
+#define SHA256_SIGNATURE_SIZE	32	/* 32 bytes in a SHA256 message digest */
+#define SHA3_256_SIGNATURE_SIZE	32	/* 32 bytes in a SHA3 256 message digest */
 
 #define CHAP_STAGE_CLIENT_A	1
 #define CHAP_STAGE_SERVER_AIC	2
···
 			int *, int *);
 
 struct iscsi_chap {
-	unsigned char	digest_type;
 	unsigned char	id;
-	unsigned char	challenge[CHAP_CHALLENGE_LENGTH];
+	unsigned char	challenge[MAX_CHAP_CHALLENGE_LEN];
+	unsigned int	challenge_len;
+	unsigned char	*digest_name;
+	unsigned int	digest_size;
 	unsigned int	authenticate_target;
 	unsigned int	chap_state;
 } ____cacheline_aligned;
-3
drivers/target/iscsi/iscsi_target_parameters.h
···
 #define OFMARKER			"OFMarker"
 #define IFMARKINT			"IFMarkInt"
 #define OFMARKINT			"OFMarkInt"
-#define X_EXTENSIONKEY			"X-com.sbei.version"
-#define X_EXTENSIONKEY_CISCO_NEW	"X-com.cisco.protocol"
-#define X_EXTENSIONKEY_CISCO_OLD	"X-com.cisco.iscsi.draft"
 
 /*
  * Parameter names of iSCSI Extentions for RDMA (iSER). See RFC-5046
+1 -1
drivers/target/target_core_fabric_lib.c
···
 	memset(buf + 8, 0, leading_zero_bytes);
 	rc = hex2bin(buf + 8 + leading_zero_bytes, p, count);
 	if (rc < 0) {
-		pr_debug("hex2bin failed for %s: %d\n", __func__, rc);
+		pr_debug("hex2bin failed for %s: %d\n", p, rc);
 		return rc;
 	}
 
-12
drivers/target/target_core_tpg.c
···
 
 extern struct se_device *g_lun0_dev;
 
-static DEFINE_SPINLOCK(tpg_lock);
-static LIST_HEAD(tpg_list);
-
 /* __core_tpg_get_initiator_node_acl():
  *
  * mutex_lock(&tpg->acl_node_mutex); must be held when calling
···
 	se_tpg->se_tpg_wwn = se_wwn;
 	atomic_set(&se_tpg->tpg_pr_ref_count, 0);
 	INIT_LIST_HEAD(&se_tpg->acl_node_list);
-	INIT_LIST_HEAD(&se_tpg->se_tpg_node);
 	INIT_LIST_HEAD(&se_tpg->tpg_sess_list);
 	spin_lock_init(&se_tpg->session_lock);
 	mutex_init(&se_tpg->tpg_lun_mutex);
···
 			return ret;
 		}
 	}
-
-	spin_lock_bh(&tpg_lock);
-	list_add_tail(&se_tpg->se_tpg_node, &tpg_list);
-	spin_unlock_bh(&tpg_lock);
 
 	pr_debug("TARGET_CORE[%s]: Allocated portal_group for endpoint: %s, "
 		"Proto: %d, Portal Tag: %u\n", se_tpg->se_tpg_tfo->fabric_name,
···
 		"Proto: %d, Portal Tag: %u\n", tfo->fabric_name,
 		tfo->tpg_get_wwn(se_tpg) ? tfo->tpg_get_wwn(se_tpg) : NULL,
 		se_tpg->proto_id, tfo->tpg_get_tag(se_tpg));
-
-	spin_lock_bh(&tpg_lock);
-	list_del(&se_tpg->se_tpg_node);
-	spin_unlock_bh(&tpg_lock);
 
 	while (atomic_read(&se_tpg->tpg_pr_ref_count) != 0)
 		cpu_relax();
+28
drivers/target/target_core_transport.c
···
 }
 EXPORT_SYMBOL(transport_free_session);
 
+static int target_release_res(struct se_device *dev, void *data)
+{
+	struct se_session *sess = data;
+
+	if (dev->reservation_holder == sess)
+		target_release_reservation(dev);
+	return 0;
+}
+
 void transport_deregister_session(struct se_session *se_sess)
 {
 	struct se_portal_group *se_tpg = se_sess->se_tpg;
···
 	se_sess->se_tpg = NULL;
 	se_sess->fabric_sess_ptr = NULL;
 	spin_unlock_irqrestore(&se_tpg->session_lock, flags);
+
+	/*
+	 * Since the session is being removed, release SPC-2
+	 * reservations held by the session that is disappearing.
+	 */
+	target_for_each_device(target_release_res, se_sess);
 
 	pr_debug("TARGET_CORE[%s]: Deregistered fabric_sess\n",
 		se_tpg->se_tpg_tfo->fabric_name);
···
 	return TCM_NO_SENSE;
 }
 
+/**
+ * target_cmd_size_check - Check whether there will be a residual.
+ * @cmd: SCSI command.
+ * @size: Data buffer size derived from CDB. The data buffer size provided by
+ *   the SCSI transport driver is available in @cmd->data_length.
+ *
+ * Compare the data buffer size from the CDB with the data buffer limit from the transport
+ * header. Set @cmd->residual_count and SCF_OVERFLOW_BIT or SCF_UNDERFLOW_BIT if necessary.
+ *
+ * Note: target drivers set @cmd->data_length by calling transport_init_se_cmd().
+ *
+ * Return: TCM_NO_SENSE
+ */
 sense_reason_t
 target_cmd_size_check(struct se_cmd *cmd, unsigned int size)
 {
+3 -3
drivers/target/target_core_user.c
···
 		schedule_delayed_work(&tcmu_unmap_work, 0);
 
 	/* try to get new page from the mm */
-	page = alloc_page(GFP_KERNEL);
+	page = alloc_page(GFP_NOIO);
 	if (!page)
 		goto err_alloc;
 
···
 	struct tcmu_dev *udev = TCMU_DEV(se_dev);
 	struct tcmu_cmd *tcmu_cmd;
 
-	tcmu_cmd = kmem_cache_zalloc(tcmu_cmd_cache, GFP_KERNEL);
+	tcmu_cmd = kmem_cache_zalloc(tcmu_cmd_cache, GFP_NOIO);
 	if (!tcmu_cmd)
 		return NULL;
 
···
 	tcmu_cmd_reset_dbi_cur(tcmu_cmd);
 	tcmu_cmd->dbi_cnt = tcmu_cmd_get_block_cnt(tcmu_cmd);
 	tcmu_cmd->dbi = kcalloc(tcmu_cmd->dbi_cnt, sizeof(uint32_t),
-				GFP_KERNEL);
+				GFP_NOIO);
 	if (!tcmu_cmd->dbi) {
 		kmem_cache_free(tcmu_cmd_cache, tcmu_cmd);
 		return NULL;
-1
drivers/target/target_core_xcopy.c
···
 	}
 
 	memset(&xcopy_pt_tpg, 0, sizeof(struct se_portal_group));
-	INIT_LIST_HEAD(&xcopy_pt_tpg.se_tpg_node);
 	INIT_LIST_HEAD(&xcopy_pt_tpg.acl_node_list);
 	INIT_LIST_HEAD(&xcopy_pt_tpg.tpg_sess_list);
 
+1 -1
drivers/usb/storage/ene_ub6250.c
···
 		residue = min(residue, transfer_length);
 		if (us->srb != NULL)
 			scsi_set_resid(us->srb, max(scsi_get_resid(us->srb),
-							       (int)residue));
+							       residue));
 	}
 
 	if (bcs->Status != US_BULK_STAT_OK)
+1 -2
drivers/usb/storage/transport.c
···
 
 	} else {
 		residue = min(residue, transfer_length);
-		scsi_set_resid(srb, max(scsi_get_resid(srb),
-							(int) residue));
+		scsi_set_resid(srb, max(scsi_get_resid(srb), residue));
 	}
 }
 
-1
drivers/usb/storage/uas.c
···
 	.eh_abort_handler = uas_eh_abort_handler,
 	.eh_device_reset_handler = uas_eh_device_reset_handler,
 	.this_id = -1,
-	.sg_tablesize = SG_NONE,
 	.skip_settle_delay = 1,
 	.dma_boundary = PAGE_SIZE - 1,
 };
+1
include/scsi/iscsi_proto.h
···
 #define ISCSI_REASON_BOOKMARK_INVALID	9
 #define ISCSI_REASON_BOOKMARK_NO_RESOURCES	10
 #define ISCSI_REASON_NEGOTIATION_RESET	11
+#define ISCSI_REASON_WAITING_FOR_LOGOUT	12
 
 /* Max. number of Key=Value pairs in a text message */
 #define MAX_KEY_VALUE_PAIRS	8192
+3 -2
include/scsi/scsi_cmnd.h
···
 
 /* for scmd->state */
 #define SCMD_STATE_COMPLETE	0
+#define SCMD_STATE_INFLIGHT	1
 
 struct scsi_cmnd {
 	struct scsi_request req;
···
 	return cmd->sdb.length;
 }
 
-static inline void scsi_set_resid(struct scsi_cmnd *cmd, int resid)
+static inline void scsi_set_resid(struct scsi_cmnd *cmd, unsigned int resid)
 {
 	cmd->req.resid_len = resid;
 }
 
-static inline int scsi_get_resid(struct scsi_cmnd *cmd)
+static inline unsigned int scsi_get_resid(struct scsi_cmnd *cmd)
 {
 	return cmd->req.resid_len;
 }
+4 -1
include/scsi/scsi_device.h
···
 	const char * rev;		/* ... "nullnullnullnull" before scan */
 
 #define SCSI_VPD_PG_LEN		255
+	struct scsi_vpd __rcu *vpd_pg0;
 	struct scsi_vpd __rcu *vpd_pg83;
 	struct scsi_vpd __rcu *vpd_pg80;
+	struct scsi_vpd __rcu *vpd_pg89;
 	unsigned char current_tag;	/* current tag */
 	struct scsi_target *sdev_target;   /* used only for single_lun */
 
···
 	unsigned broken_fua:1;		/* Don't set FUA bit */
 	unsigned lun_in_cdb:1;		/* Store LUN bits in CDB[1] */
 	unsigned unmap_limit_for_ws:1;	/* Use the UNMAP limit for WRITE SAME */
-
+	unsigned rpm_autosuspend:1;	/* Enable runtime autosuspend at device
+					 * creation time */
 	atomic_t disk_events_disable_depth; /* disable depth for disk events */
 
 	DECLARE_BITMAP(supported_events, SDEV_EVT_MAXBITS); /* supported events */
+4 -15
include/scsi/scsi_host.h
···
 struct scsi_transport_template;
 
 
-/*
- * The various choices mean:
- * NONE: Self evident.  Host adapter is not capable of scatter-gather.
- * ALL:  Means that the host adapter module can do scatter-gather,
- *	 and that there is no limit to the size of the table to which
- *	 we scatter/gather data.  The value we set here is the maximum
- *	 single element sglist.  To use chained sglists, the adapter
- *	 has to set a value beyond ALL (and correctly use the chain
- *	 handling API.
- * Anything else:  Indicates the maximum number of chains that can be
- *	 used in one scatter-gather request.
- */
-#define SG_NONE 0
 #define SG_ALL	SG_CHUNK_SIZE
 
 #define MODE_UNKNOWN 0x00
···
 	/*
 	 * This determines if we will use a non-interrupt driven
 	 * or an interrupt driven scheme.  It is set to the maximum number
-	 * of simultaneous commands a given host adapter will accept.
+	 * of simultaneous commands a single hw queue in HBA will accept.
 	 */
 	int can_queue;
···
 	 */
 	unsigned int cmd_size;
 	struct scsi_host_cmd_pool *cmd_pool;
+
+	/* Delay for runtime autosuspend */
+	int rpm_autosuspend_delay;
 };
 
 /*
···
 	/* Area to keep a shared tag map */
 	struct blk_mq_tag_set tag_set;
 
-	atomic_t host_busy;		   /* commands actually active on low-level */
 	atomic_t host_blocked;
 
 	unsigned int host_failed;	   /* commands that failed.
-1
include/target/target_core_base.h
···
 	/* Spinlock for adding/removing sessions */
 	spinlock_t session_lock;
 	struct mutex tpg_lun_mutex;
-	struct list_head se_tpg_node;
 	/* linked list for initiator ACL list */
 	struct list_head acl_node_list;
 	struct hlist_head tpg_lun_hlist;
+4 -7
include/uapi/linux/chio.h
···
  * ioctl interface for the scsi media changer driver
  */
 
+#ifndef _UAPI_LINUX_CHIO_H
+#define _UAPI_LINUX_CHIO_H
+
 /* changer element types */
 #define CHET_MT       0	/* media transport element (robot) */
 #define CHET_ST       1	/* storage element (media slots) */
···
 #define CHIOSVOLTAG    _IOW('c',18,struct changer_set_voltag)
 #define CHIOGVPARAMS   _IOR('c',19,struct changer_vendor_params)
 
-/* ---------------------------------------------------------------------- */
-
-/*
- * Local variables:
- * c-basic-offset: 8
- * End:
- */
+#endif /* _UAPI_LINUX_CHIO_H */