Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6

Pull SCSI updates from James Bottomley:
"This is primarily another round of driver updates (lpfc, bfa, fcoe,
ipr) plus a new ufshcd driver. There shouldn't be anything
controversial in here (The final deletion of scsi proc_ops which
caused some build breakage has been held over until the next merge
window to give us more time to stabilise it).

I'm afraid, with me moving continents at exactly the wrong time,
anything submitted after the merge window opened has been held over to
the next merge window."

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (63 commits)
[SCSI] ipr: Driver version 2.5.3
[SCSI] ipr: Increase alignment boundary of command blocks
[SCSI] ipr: Increase max concurrent outstanding commands
[SCSI] ipr: Remove unnecessary memory barriers
[SCSI] ipr: Remove unnecessary interrupt clearing on new adapters
[SCSI] ipr: Fix target id allocation re-use problem
[SCSI] atp870u, mpt2sas, qla4xxx use pci_dev->revision
[SCSI] fcoe: Drop the rtnl_mutex before calling fcoe_ctlr_link_up
[SCSI] bfa: Update the driver version to 3.0.23.0
[SCSI] bfa: BSG and User interface fixes.
[SCSI] bfa: Fix to avoid vport delete hang on request queue full scenario.
[SCSI] bfa: Move service parameter programming logic into firmware.
[SCSI] bfa: Revised Fabric Assigned Address(FAA) feature implementation.
[SCSI] bfa: Flash controller IOC pll init fixes.
[SCSI] bfa: Serialize the IOC hw semaphore unlock logic.
[SCSI] bfa: Modify ISR to process pending completions
[SCSI] bfa: Add fc host issue lip support
[SCSI] mpt2sas: remove extraneous sas_log_info messages
[SCSI] libfc: fcoe_transport_create fails in single-CPU environment
[SCSI] fcoe: reduce contention for fcoe_rx_list lock [v2]
...

+4417 -951
+2
Documentation/scsi/00-INDEX
···
 94  94 	- info on second generation driver for sym53c8xx based adapters
 95  95 tmscsim.txt
 96  96 	- info on driver for AM53c974 based adapters
     97 +	ufs.txt
     98 +	- info on Universal Flash Storage(UFS) and UFS host controller driver.
+4
Documentation/scsi/st.txt
···
390 390 	MT_ST_SYSV sets the SYSV semantics (mode)
391 391 	MT_ST_NOWAIT enables immediate mode (i.e., don't wait for
392 392 	    the command to finish) for some commands (e.g., rewind)
    393 +	MT_ST_NOWAIT_EOF enables immediate filemark mode (i.e. when
    394 +	    writing a filemark, don't wait for it to complete). Please
    395 +	    see the BASICS note about MTWEOFI with respect to the
    396 +	    possible dangers of writing immediate filemarks.
393 397 	MT_ST_SILI enables setting the SILI bit in SCSI commands when
394 398 	    reading in variable block mode to enhance performance when
395 399 	    reading blocks shorter than the byte count; set this only
+133
Documentation/scsi/ufs.txt
···
  1 +	Universal Flash Storage
  2 +	=======================
  3 +
  4 +
  5 +	Contents
  6 +	--------
  7 +
  8 +	1. Overview
  9 +	2. UFS Architecture Overview
 10 +	  2.1 Application Layer
 11 +	  2.2 UFS Transport Protocol(UTP) layer
 12 +	  2.3 UFS Interconnect(UIC) Layer
 13 +	3. UFSHCD Overview
 14 +	  3.1 UFS controller initialization
 15 +	  3.2 UTP Transfer requests
 16 +	  3.3 UFS error handling
 17 +	  3.4 SCSI Error handling
 18 +
 19 +
 20 +	1. Overview
 21 +	-----------
 22 +
 23 +	Universal Flash Storage(UFS) is a storage specification for flash devices.
 24 +	It aims to provide a universal storage interface for both
 25 +	embedded and removable flash memory based storage in mobile
 26 +	devices such as smart phones and tablet computers. The specification
 27 +	is defined by the JEDEC Solid State Technology Association. UFS is based
 28 +	on the MIPI M-PHY physical layer standard. UFS uses MIPI M-PHY as the
 29 +	physical layer and MIPI Unipro as the link layer.
 30 +
 31 +	The main goals of UFS are to provide:
 32 +	* Optimized performance:
 33 +	  For UFS versions 1.0 and 1.1 the target performance is as follows:
 34 +	  Support for Gear1 is mandatory (rate A: 1248Mbps, rate B: 1457.6Mbps)
 35 +	  Support for Gear2 is optional (rate A: 2496Mbps, rate B: 2915.2Mbps)
 36 +	  Future versions of the standard:
 37 +	  Gear3 (rate A: 4992Mbps, rate B: 5830.4Mbps)
 38 +	* Low power consumption
 39 +	* High random IOPs and low latency
 40 +
 41 +
 42 +	2. UFS Architecture Overview
 43 +	----------------------------
 44 +
 45 +	UFS has a layered communication architecture which is based on the SCSI
 46 +	SAM-5 architectural model.
 47 +
 48 +	The UFS communication architecture consists of the following layers:
 49 +
 50 +	2.1 Application Layer
 51 +
 52 +	The Application layer is composed of the UFS command set layer(UCS),
 53 +	Task Manager and Device manager. The UFS interface is designed to be
 54 +	protocol agnostic; however, SCSI has been selected as the baseline
 55 +	protocol for versions 1.0 and 1.1 of the UFS protocol layer.
 56 +	UFS supports a subset of SCSI commands defined by SPC-4 and SBC-3.
 57 +	* UCS: It handles SCSI commands supported by the UFS specification.
 58 +	* Task manager: It handles task management functions defined by
 59 +	  UFS which are meant for command queue control.
 60 +	* Device manager: It handles device level operations and device
 61 +	  configuration operations. Device level operations mainly involve
 62 +	  device power management operations and commands to Interconnect
 63 +	  layers. Device level configurations involve handling of query
 64 +	  requests which are used to modify and retrieve configuration
 65 +	  information of the device.
 66 +
 67 +	2.2 UFS Transport Protocol(UTP) layer
 68 +
 69 +	The UTP layer provides services for
 70 +	the higher layers through Service Access Points. UTP defines 3
 71 +	service access points for the higher layers:
 72 +	* UDM_SAP: Device manager service access point is exposed to the device
 73 +	  manager for device level operations. These device level operations
 74 +	  are done through query requests.
 75 +	* UTP_CMD_SAP: Command service access point is exposed to the UFS command
 76 +	  set layer(UCS) to transport commands.
 77 +	* UTP_TM_SAP: Task management service access point is exposed to the task
 78 +	  manager to transport task management functions.
 79 +	UTP transports messages through the UFS protocol information unit(UPIU).
 80 +
 81 +	2.3 UFS Interconnect(UIC) Layer
 82 +
 83 +	UIC is the lowest layer of the UFS layered architecture. It handles
 84 +	the connection between the UFS host and the UFS device. UIC consists of
 85 +	MIPI UniPro and MIPI M-PHY. UIC provides 2 service access points
 86 +	to the upper layer:
 87 +	* UIC_SAP: To transport a UPIU between the UFS host and the UFS device.
 88 +	* UIO_SAP: To issue commands to the UniPro layers.
 89 +
 90 +
 91 +	3. UFSHCD Overview
 92 +	------------------
 93 +
 94 +	The UFS host controller driver is based on the Linux SCSI Framework.
 95 +	UFSHCD is a low level device driver which acts as an interface between
 96 +	the SCSI Midlayer and PCIe based UFS host controllers.
 97 +
 98 +	The current UFSHCD implementation supports the following functionality:
 99 +
100 +	3.1 UFS controller initialization
101 +
102 +	The initialization module brings the UFS host controller to the active
103 +	state and prepares the controller to transfer commands/responses between
104 +	UFSHCD and the UFS device.
105 +
106 +	3.2 UTP Transfer requests
107 +
108 +	The transfer request handling module of UFSHCD receives SCSI commands
109 +	from the SCSI Midlayer, forms UPIUs and issues the UPIUs to the UFS host
110 +	controller. The module also decodes responses received from the UFS
111 +	host controller in the form of UPIUs and informs the SCSI Midlayer
112 +	of the status of the command.
113 +
114 +	3.3 UFS error handling
115 +
116 +	The error handling module handles Host controller fatal errors,
117 +	Device fatal errors and UIC interconnect layer related errors.
118 +
119 +	3.4 SCSI Error handling
120 +
121 +	This is done through UFSHCD SCSI error handling routines registered
122 +	with the SCSI Midlayer. Examples of some of the error handling commands
123 +	issued by the SCSI Midlayer are Abort task, LUN reset and host reset.
124 +	UFSHCD routines to perform these tasks are registered with the
125 +	SCSI Midlayer through .eh_abort_handler, .eh_device_reset_handler and
126 +	.eh_host_reset_handler.
127 +
128 +	In this version of UFSHCD, Query requests and power management
129 +	functionality are not implemented.
130 +
131 +	The UFS specifications can be found at:
132 +	UFS - http://www.jedec.org/sites/default/files/docs/JESD220.pdf
133 +	UFSHCI - http://www.jedec.org/sites/default/files/docs/JESD223.pdf
+1
drivers/scsi/Kconfig
···
619 619
620 620	source "drivers/scsi/megaraid/Kconfig.megaraid"
621 621	source "drivers/scsi/mpt2sas/Kconfig"
    622 +	source "drivers/scsi/ufs/Kconfig"
622 623
623 624	config SCSI_HPTIOP
624 625		tristate "HighPoint RocketRAID 3xxx/4xxx Controller support"
+1
drivers/scsi/Makefile
···
108 108	obj-$(CONFIG_MEGARAID_NEWGEN)	+= megaraid/
109 109	obj-$(CONFIG_MEGARAID_SAS)	+= megaraid/
110 110	obj-$(CONFIG_SCSI_MPT2SAS)	+= mpt2sas/
    111 +	obj-$(CONFIG_SCSI_UFSHCD)	+= ufs/
111 112	obj-$(CONFIG_SCSI_ACARD)	+= atp870u.o
112 113	obj-$(CONFIG_SCSI_SUNESP)	+= esp_scsi.o sun_esp.o
113 114	obj-$(CONFIG_SCSI_GDTH)		+= gdth.o
+2 -2
drivers/scsi/atp870u.c
···
2582 2582	 * this than via the PCI device table
2583 2583	 */
2584 2584	if (ent->device == PCI_DEVICE_ID_ARTOP_AEC7610) {
2585      -		error = pci_read_config_byte(pdev, PCI_CLASS_REVISION, &atpdev->chip_ver);
     2585 +		atpdev->chip_ver = pdev->revision;
2586 2586		if (atpdev->chip_ver < 2)
2587 2587			goto err_eio;
2588 2588	}
···
2601 2601	base_io &= 0xfffffff8;
2602 2602
2603 2603	if ((ent->device == ATP880_DEVID1)||(ent->device == ATP880_DEVID2)) {
2604      -		error = pci_read_config_byte(pdev, PCI_CLASS_REVISION, &atpdev->chip_ver);
     2604 +		atpdev->chip_ver = pdev->revision;
2605 2605		pci_write_config_byte(pdev, PCI_LATENCY_TIMER, 0x80);//JCC082803
2606 2606
2607 2607		host_id = inb(base_io + 0x39);
+4 -5
drivers/scsi/bfa/bfa.h
···
225 225	};
226 226
227 227	struct bfa_iocfc_s {
    228 +		bfa_fsm_t		fsm;
228 229		struct bfa_s		*bfa;
229 230		struct bfa_iocfc_cfg_s	cfg;
230      -		int			action;
231 231		u32		req_cq_pi[BFI_IOC_MAX_CQS];
232 232		u32		rsp_cq_ci[BFI_IOC_MAX_CQS];
233 233		u8		hw_qid[BFI_IOC_MAX_CQS];
···
236 236		struct bfa_cb_qe_s	dis_hcb_qe;
237 237		struct bfa_cb_qe_s	en_hcb_qe;
238 238		struct bfa_cb_qe_s	stats_hcb_qe;
239      -		bfa_boolean_t		cfgdone;
    239 +		bfa_boolean_t		submod_enabled;
    240 +		bfa_boolean_t		cb_reqd;	/* Driver call back reqd */
    241 +		bfa_status_t		op_status;	/* Status of bfa iocfc op */
240 242
241 243		struct bfa_dma_s	cfg_info;
242 244		struct bfi_iocfc_cfg_s	*cfginfo;
···
343 341	void bfa_hwct_msix_get_rme_range(struct bfa_s *bfa, u32 *start,
344 342			u32 *end);
345 343	void bfa_iocfc_get_bootwwns(struct bfa_s *bfa, u8 *nwwns, wwn_t *wwns);
346      -	wwn_t bfa_iocfc_get_pwwn(struct bfa_s *bfa);
347      -	wwn_t bfa_iocfc_get_nwwn(struct bfa_s *bfa);
348 344	int bfa_iocfc_get_pbc_vports(struct bfa_s *bfa,
349 345			struct bfi_pbc_vport_s *pbc_vport);
···
428 428
429 429	void bfa_iocfc_enable(struct bfa_s *bfa);
430 430	void bfa_iocfc_disable(struct bfa_s *bfa);
431      -	void bfa_iocfc_cb_dconf_modinit(struct bfa_s *bfa, bfa_status_t status);
432 431	#define bfa_timer_start(_bfa, _timer, _timercb, _arg, _timeout)	\
433 432		bfa_timer_begin(&(_bfa)->timer_mod, _timer, _timercb, _arg, _timeout)
434 433
+484 -209
drivers/scsi/bfa/bfa_core.c
···
200 200	#define DEF_CFG_NUM_SBOOT_LUNS	16
201 201
202 202	/*
203 +	 * IOCFC state machine definitions/declarations
204 +	 */
205 +	bfa_fsm_state_decl(bfa_iocfc, stopped, struct bfa_iocfc_s, enum iocfc_event);
206 +	bfa_fsm_state_decl(bfa_iocfc, initing, struct bfa_iocfc_s, enum iocfc_event);
207 +	bfa_fsm_state_decl(bfa_iocfc, dconf_read, struct bfa_iocfc_s, enum iocfc_event);
208 +	bfa_fsm_state_decl(bfa_iocfc, init_cfg_wait,
209 +			struct bfa_iocfc_s, enum iocfc_event);
210 +	bfa_fsm_state_decl(bfa_iocfc, init_cfg_done,
211 +			struct bfa_iocfc_s, enum iocfc_event);
212 +	bfa_fsm_state_decl(bfa_iocfc, operational,
213 +			struct bfa_iocfc_s, enum iocfc_event);
214 +	bfa_fsm_state_decl(bfa_iocfc, dconf_write,
215 +			struct bfa_iocfc_s, enum iocfc_event);
216 +	bfa_fsm_state_decl(bfa_iocfc, stopping, struct bfa_iocfc_s, enum iocfc_event);
217 +	bfa_fsm_state_decl(bfa_iocfc, enabling, struct bfa_iocfc_s, enum iocfc_event);
218 +	bfa_fsm_state_decl(bfa_iocfc, cfg_wait, struct bfa_iocfc_s, enum iocfc_event);
219 +	bfa_fsm_state_decl(bfa_iocfc, disabling, struct bfa_iocfc_s, enum iocfc_event);
220 +	bfa_fsm_state_decl(bfa_iocfc, disabled, struct bfa_iocfc_s, enum iocfc_event);
221 +	bfa_fsm_state_decl(bfa_iocfc, failed, struct bfa_iocfc_s, enum iocfc_event);
222 +	bfa_fsm_state_decl(bfa_iocfc, init_failed,
223 +			struct bfa_iocfc_s, enum iocfc_event);
224 +
225 +	/*
203 226	 * forward declaration for IOC FC functions
204 227	 */
228 +	static void bfa_iocfc_start_submod(struct bfa_s *bfa);
229 +	static void bfa_iocfc_disable_submod(struct bfa_s *bfa);
230 +	static void bfa_iocfc_send_cfg(void *bfa_arg);
205 231	static void bfa_iocfc_enable_cbfn(void *bfa_arg, enum bfa_status status);
206 232	static void bfa_iocfc_disable_cbfn(void *bfa_arg);
207 233	static void bfa_iocfc_hbfail_cbfn(void *bfa_arg);
208 234	static void bfa_iocfc_reset_cbfn(void *bfa_arg);
209 235	static struct bfa_ioc_cbfn_s bfa_iocfc_cbfn;
236 +	static void bfa_iocfc_init_cb(void *bfa_arg, bfa_boolean_t complete);
237 +	static void bfa_iocfc_stop_cb(void *bfa_arg, bfa_boolean_t compl);
238 +	static void bfa_iocfc_enable_cb(void *bfa_arg, bfa_boolean_t compl);
239 +	static void bfa_iocfc_disable_cb(void *bfa_arg, bfa_boolean_t compl);
240 +
241 +	static void
242 +	bfa_iocfc_sm_stopped_entry(struct bfa_iocfc_s *iocfc)
243 +	{
244 +	}
245 +
246 +	static void
247 +	bfa_iocfc_sm_stopped(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
248 +	{
249 +		bfa_trc(iocfc->bfa, event);
250 +
251 +		switch (event) {
252 +		case IOCFC_E_INIT:
253 +		case IOCFC_E_ENABLE:
254 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_initing);
255 +			break;
256 +		default:
257 +			bfa_sm_fault(iocfc->bfa, event);
258 +			break;
259 +		}
260 +	}
261 +
262 +	static void
263 +	bfa_iocfc_sm_initing_entry(struct bfa_iocfc_s *iocfc)
264 +	{
265 +		bfa_ioc_enable(&iocfc->bfa->ioc);
266 +	}
267 +
268 +	static void
269 +	bfa_iocfc_sm_initing(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
270 +	{
271 +		bfa_trc(iocfc->bfa, event);
272 +
273 +		switch (event) {
274 +		case IOCFC_E_IOC_ENABLED:
275 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_dconf_read);
276 +			break;
277 +		case IOCFC_E_IOC_FAILED:
278 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_init_failed);
279 +			break;
280 +		default:
281 +			bfa_sm_fault(iocfc->bfa, event);
282 +			break;
283 +		}
284 +	}
285 +
286 +	static void
287 +	bfa_iocfc_sm_dconf_read_entry(struct bfa_iocfc_s *iocfc)
288 +	{
289 +		bfa_dconf_modinit(iocfc->bfa);
290 +	}
291 +
292 +	static void
293 +	bfa_iocfc_sm_dconf_read(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
294 +	{
295 +		bfa_trc(iocfc->bfa, event);
296 +
297 +		switch (event) {
298 +		case IOCFC_E_DCONF_DONE:
299 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_init_cfg_wait);
300 +			break;
301 +		case IOCFC_E_IOC_FAILED:
302 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_init_failed);
303 +			break;
304 +		default:
305 +			bfa_sm_fault(iocfc->bfa, event);
306 +			break;
307 +		}
308 +	}
309 +
310 +	static void
311 +	bfa_iocfc_sm_init_cfg_wait_entry(struct bfa_iocfc_s *iocfc)
312 +	{
313 +		bfa_iocfc_send_cfg(iocfc->bfa);
314 +	}
315 +
316 +	static void
317 +	bfa_iocfc_sm_init_cfg_wait(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
318 +	{
319 +		bfa_trc(iocfc->bfa, event);
320 +
321 +		switch (event) {
322 +		case IOCFC_E_CFG_DONE:
323 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_init_cfg_done);
324 +			break;
325 +		case IOCFC_E_IOC_FAILED:
326 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_init_failed);
327 +			break;
328 +		default:
329 +			bfa_sm_fault(iocfc->bfa, event);
330 +			break;
331 +		}
332 +	}
333 +
334 +	static void
335 +	bfa_iocfc_sm_init_cfg_done_entry(struct bfa_iocfc_s *iocfc)
336 +	{
337 +		iocfc->bfa->iocfc.op_status = BFA_STATUS_OK;
338 +		bfa_cb_queue(iocfc->bfa, &iocfc->bfa->iocfc.init_hcb_qe,
339 +				bfa_iocfc_init_cb, iocfc->bfa);
340 +	}
341 +
342 +	static void
343 +	bfa_iocfc_sm_init_cfg_done(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
344 +	{
345 +		bfa_trc(iocfc->bfa, event);
346 +
347 +		switch (event) {
348 +		case IOCFC_E_START:
349 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_operational);
350 +			break;
351 +		case IOCFC_E_STOP:
352 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_stopping);
353 +			break;
354 +		case IOCFC_E_DISABLE:
355 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_disabling);
356 +			break;
357 +		case IOCFC_E_IOC_FAILED:
358 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_failed);
359 +			break;
360 +		default:
361 +			bfa_sm_fault(iocfc->bfa, event);
362 +			break;
363 +		}
364 +	}
365 +
366 +	static void
367 +	bfa_iocfc_sm_operational_entry(struct bfa_iocfc_s *iocfc)
368 +	{
369 +		bfa_fcport_init(iocfc->bfa);
370 +		bfa_iocfc_start_submod(iocfc->bfa);
371 +	}
372 +
373 +	static void
374 +	bfa_iocfc_sm_operational(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
375 +	{
376 +		bfa_trc(iocfc->bfa, event);
377 +
378 +		switch (event) {
379 +		case IOCFC_E_STOP:
380 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_dconf_write);
381 +			break;
382 +		case IOCFC_E_DISABLE:
383 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_disabling);
384 +			break;
385 +		case IOCFC_E_IOC_FAILED:
386 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_failed);
387 +			break;
388 +		default:
389 +			bfa_sm_fault(iocfc->bfa, event);
390 +			break;
391 +		}
392 +	}
393 +
394 +	static void
395 +	bfa_iocfc_sm_dconf_write_entry(struct bfa_iocfc_s *iocfc)
396 +	{
397 +		bfa_dconf_modexit(iocfc->bfa);
398 +	}
399 +
400 +	static void
401 +	bfa_iocfc_sm_dconf_write(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
402 +	{
403 +		bfa_trc(iocfc->bfa, event);
404 +
405 +		switch (event) {
406 +		case IOCFC_E_DCONF_DONE:
407 +		case IOCFC_E_IOC_FAILED:
408 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_stopping);
409 +			break;
410 +		default:
411 +			bfa_sm_fault(iocfc->bfa, event);
412 +			break;
413 +		}
414 +	}
415 +
416 +	static void
417 +	bfa_iocfc_sm_stopping_entry(struct bfa_iocfc_s *iocfc)
418 +	{
419 +		bfa_ioc_disable(&iocfc->bfa->ioc);
420 +	}
421 +
422 +	static void
423 +	bfa_iocfc_sm_stopping(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
424 +	{
425 +		bfa_trc(iocfc->bfa, event);
426 +
427 +		switch (event) {
428 +		case IOCFC_E_IOC_DISABLED:
429 +			bfa_isr_disable(iocfc->bfa);
430 +			bfa_iocfc_disable_submod(iocfc->bfa);
431 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_stopped);
432 +			iocfc->bfa->iocfc.op_status = BFA_STATUS_OK;
433 +			bfa_cb_queue(iocfc->bfa, &iocfc->bfa->iocfc.stop_hcb_qe,
434 +					bfa_iocfc_stop_cb, iocfc->bfa);
435 +			break;
436 +		default:
437 +			bfa_sm_fault(iocfc->bfa, event);
438 +			break;
439 +		}
440 +	}
441 +
442 +	static void
443 +	bfa_iocfc_sm_enabling_entry(struct bfa_iocfc_s *iocfc)
444 +	{
445 +		bfa_ioc_enable(&iocfc->bfa->ioc);
446 +	}
447 +
448 +	static void
449 +	bfa_iocfc_sm_enabling(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
450 +	{
451 +		bfa_trc(iocfc->bfa, event);
452 +
453 +		switch (event) {
454 +		case IOCFC_E_IOC_ENABLED:
455 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_cfg_wait);
456 +			break;
457 +		case IOCFC_E_IOC_FAILED:
458 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_failed);
459 +
460 +			if (iocfc->bfa->iocfc.cb_reqd == BFA_FALSE)
461 +				break;
462 +
463 +			iocfc->bfa->iocfc.op_status = BFA_STATUS_FAILED;
464 +			bfa_cb_queue(iocfc->bfa, &iocfc->bfa->iocfc.en_hcb_qe,
465 +					bfa_iocfc_enable_cb, iocfc->bfa);
466 +			iocfc->bfa->iocfc.cb_reqd = BFA_FALSE;
467 +			break;
468 +		default:
469 +			bfa_sm_fault(iocfc->bfa, event);
470 +			break;
471 +		}
472 +	}
473 +
474 +	static void
475 +	bfa_iocfc_sm_cfg_wait_entry(struct bfa_iocfc_s *iocfc)
476 +	{
477 +		bfa_iocfc_send_cfg(iocfc->bfa);
478 +	}
479 +
480 +	static void
481 +	bfa_iocfc_sm_cfg_wait(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
482 +	{
483 +		bfa_trc(iocfc->bfa, event);
484 +
485 +		switch (event) {
486 +		case IOCFC_E_CFG_DONE:
487 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_operational);
488 +			if (iocfc->bfa->iocfc.cb_reqd == BFA_FALSE)
489 +				break;
490 +
491 +			iocfc->bfa->iocfc.op_status = BFA_STATUS_OK;
492 +			bfa_cb_queue(iocfc->bfa, &iocfc->bfa->iocfc.en_hcb_qe,
493 +					bfa_iocfc_enable_cb, iocfc->bfa);
494 +			iocfc->bfa->iocfc.cb_reqd = BFA_FALSE;
495 +			break;
496 +		case IOCFC_E_IOC_FAILED:
497 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_failed);
498 +			if (iocfc->bfa->iocfc.cb_reqd == BFA_FALSE)
499 +				break;
500 +
501 +			iocfc->bfa->iocfc.op_status = BFA_STATUS_FAILED;
502 +			bfa_cb_queue(iocfc->bfa, &iocfc->bfa->iocfc.en_hcb_qe,
503 +					bfa_iocfc_enable_cb, iocfc->bfa);
504 +			iocfc->bfa->iocfc.cb_reqd = BFA_FALSE;
505 +			break;
506 +		default:
507 +			bfa_sm_fault(iocfc->bfa, event);
508 +			break;
509 +		}
510 +	}
511 +
512 +	static void
513 +	bfa_iocfc_sm_disabling_entry(struct bfa_iocfc_s *iocfc)
514 +	{
515 +		bfa_ioc_disable(&iocfc->bfa->ioc);
516 +	}
517 +
518 +	static void
519 +	bfa_iocfc_sm_disabling(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
520 +	{
521 +		bfa_trc(iocfc->bfa, event);
522 +
523 +		switch (event) {
524 +		case IOCFC_E_IOC_DISABLED:
525 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_disabled);
526 +			break;
527 +		default:
528 +			bfa_sm_fault(iocfc->bfa, event);
529 +			break;
530 +		}
531 +	}
532 +
533 +	static void
534 +	bfa_iocfc_sm_disabled_entry(struct bfa_iocfc_s *iocfc)
535 +	{
536 +		bfa_isr_disable(iocfc->bfa);
537 +		bfa_iocfc_disable_submod(iocfc->bfa);
538 +		iocfc->bfa->iocfc.op_status = BFA_STATUS_OK;
539 +		bfa_cb_queue(iocfc->bfa, &iocfc->bfa->iocfc.dis_hcb_qe,
540 +				bfa_iocfc_disable_cb, iocfc->bfa);
541 +	}
542 +
543 +	static void
544 +	bfa_iocfc_sm_disabled(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
545 +	{
546 +		bfa_trc(iocfc->bfa, event);
547 +
548 +		switch (event) {
549 +		case IOCFC_E_STOP:
550 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_dconf_write);
551 +			break;
552 +		case IOCFC_E_ENABLE:
553 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_enabling);
554 +			break;
555 +		default:
556 +			bfa_sm_fault(iocfc->bfa, event);
557 +			break;
558 +		}
559 +	}
560 +
561 +	static void
562 +	bfa_iocfc_sm_failed_entry(struct bfa_iocfc_s *iocfc)
563 +	{
564 +		bfa_isr_disable(iocfc->bfa);
565 +		bfa_iocfc_disable_submod(iocfc->bfa);
566 +	}
567 +
568 +	static void
569 +	bfa_iocfc_sm_failed(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
570 +	{
571 +		bfa_trc(iocfc->bfa, event);
572 +
573 +		switch (event) {
574 +		case IOCFC_E_STOP:
575 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_dconf_write);
576 +			break;
577 +		case IOCFC_E_DISABLE:
578 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_disabling);
579 +			break;
580 +		case IOCFC_E_IOC_ENABLED:
581 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_cfg_wait);
582 +			break;
583 +		case IOCFC_E_IOC_FAILED:
584 +			break;
585 +		default:
586 +			bfa_sm_fault(iocfc->bfa, event);
587 +			break;
588 +		}
589 +	}
590 +
591 +	static void
592 +	bfa_iocfc_sm_init_failed_entry(struct bfa_iocfc_s *iocfc)
593 +	{
594 +		bfa_isr_disable(iocfc->bfa);
595 +		iocfc->bfa->iocfc.op_status = BFA_STATUS_FAILED;
596 +		bfa_cb_queue(iocfc->bfa, &iocfc->bfa->iocfc.init_hcb_qe,
597 +				bfa_iocfc_init_cb, iocfc->bfa);
598 +	}
599 +
600 +	static void
601 +	bfa_iocfc_sm_init_failed(struct bfa_iocfc_s *iocfc, enum iocfc_event event)
602 +	{
603 +		bfa_trc(iocfc->bfa, event);
604 +
605 +		switch (event) {
606 +		case IOCFC_E_STOP:
607 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_stopping);
608 +			break;
609 +		case IOCFC_E_DISABLE:
610 +			bfa_ioc_disable(&iocfc->bfa->ioc);
611 +			break;
612 +		case IOCFC_E_IOC_ENABLED:
613 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_dconf_read);
614 +			break;
615 +		case IOCFC_E_IOC_DISABLED:
616 +			bfa_fsm_set_state(iocfc, bfa_iocfc_sm_stopped);
617 +			iocfc->bfa->iocfc.op_status = BFA_STATUS_OK;
618 +			bfa_cb_queue(iocfc->bfa, &iocfc->bfa->iocfc.dis_hcb_qe,
619 +					bfa_iocfc_disable_cb, iocfc->bfa);
620 +			break;
621 +		case IOCFC_E_IOC_FAILED:
622 +			break;
623 +		default:
624 +			bfa_sm_fault(iocfc->bfa, event);
625 +			break;
626 +		}
627 +	}
210 628
211 629	/*
212 630	 * BFA Interrupt handling functions
···
649 231	}
650 232	}
651 233
652      -	static inline void
234 +	bfa_boolean_t
653 235	bfa_isr_rspq(struct bfa_s *bfa, int qid)
654 236	{
655 237		struct bfi_msg_s *m;
656 238		u32 pi, ci;
657 239		struct list_head *waitq;
240 +		bfa_boolean_t ret;
658 241
659 242		ci = bfa_rspq_ci(bfa, qid);
660 243		pi = bfa_rspq_pi(bfa, qid);
244 +
245 +		ret = (ci != pi);
661 246
662 247		while (ci != pi) {
663 248			m = bfa_rspq_elem(bfa, qid, ci);
···
681 260		waitq = bfa_reqq(bfa, qid);
682 261		if (!list_empty(waitq))
683 262			bfa_reqq_resume(bfa, qid);
263 +
264 +		return ret;
684 265	}
685 266
686 267	static inline void
···
743 320	{
744 321		u32 intr, qintr;
745 322		int queue;
323 +		bfa_boolean_t rspq_comp = BFA_FALSE;
746 324
747 325		intr = readl(bfa->iocfc.bfa_regs.intr_status);
···
756 332		 */
757 333		if (bfa->queue_process) {
758 334			for (queue = 0; queue < BFI_IOC_MAX_CQS; queue++)
759      -				bfa_isr_rspq(bfa, queue);
335 +				if (bfa_isr_rspq(bfa, queue))
336 +					rspq_comp = BFA_TRUE;
760 337		}
761 338
762 339		if (!intr)
763      -			return BFA_TRUE;
340 +			return (qintr | rspq_comp) ? BFA_TRUE : BFA_FALSE;
764 341
765 342		/*
766 343		 * CPE completion queue interrupt
···
950 525		 * Enable interrupt coalescing if it is driver init path
951 526		 * and not ioc disable/enable path.
952 527		 */
953      -		if (!iocfc->cfgdone)
528 +		if (bfa_fsm_cmp_state(iocfc, bfa_iocfc_sm_init_cfg_wait))
954 529			cfg_info->intr_attr.coalesce = BFA_TRUE;
955      -
956      -		iocfc->cfgdone = BFA_FALSE;
957 530
958 531		/*
959 532		 * dma map IOC configuration itself
···
972 549
973 550		bfa->bfad = bfad;
974 551		iocfc->bfa = bfa;
975      -		iocfc->action = BFA_IOCFC_ACT_NONE;
976      -
977 552		iocfc->cfg = *cfg;
978 553
979 554		/*
···
1104 683
1105 684		for (i = 0; hal_mods[i]; i++)
1106 685			hal_mods[i]->start(bfa);
686 +
687 +		bfa->iocfc.submod_enabled = BFA_TRUE;
1107 688	}
1108 689
1109 690	/*
···
1116 693	{
1117 694		int i;
1118 695
696 +		if (bfa->iocfc.submod_enabled == BFA_FALSE)
697 +			return;
698 +
1119 699		for (i = 0; hal_mods[i]; i++)
1120 700			hal_mods[i]->iocdisable(bfa);
701 +
702 +		bfa->iocfc.submod_enabled = BFA_FALSE;
1121 703	}
1122 704
1123 705	static void
···
1130 702	{
1131 703		struct bfa_s *bfa = bfa_arg;
1132 704
1133      -		if (complete) {
1134      -			if (bfa->iocfc.cfgdone && BFA_DCONF_MOD(bfa)->flashdone)
1135      -				bfa_cb_init(bfa->bfad, BFA_STATUS_OK);
1136      -			else
1137      -				bfa_cb_init(bfa->bfad, BFA_STATUS_FAILED);
1138      -		} else {
1139      -			if (bfa->iocfc.cfgdone)
1140      -				bfa->iocfc.action = BFA_IOCFC_ACT_NONE;
1141      -		}
705 +		if (complete)
706 +			bfa_cb_init(bfa->bfad, bfa->iocfc.op_status);
1142 707	}
1143 708
1144 709	static void
···
1142 721
1143 722		if (compl)
1144 723			complete(&bfad->comp);
1145      -		else
1146      -			bfa->iocfc.action = BFA_IOCFC_ACT_NONE;
1147 724	}
1148 725
1149 726	static void
···
1213 794		fwcfg->num_uf_bufs = be16_to_cpu(fwcfg->num_uf_bufs);
1214 795		fwcfg->num_rports = be16_to_cpu(fwcfg->num_rports);
1215 796
1216      -		iocfc->cfgdone = BFA_TRUE;
1217      -
1218 797		/*
1219 798		 * configure queue register offsets as learnt from firmware
1220 799		 */
···
1228 811		 */
1229 812		bfa_msix_queue_install(bfa);
1230 813
1231      -		/*
1232      -		 * Configuration is complete - initialize/start submodules
1233      -		 */
1234      -		bfa_fcport_init(bfa);
1235      -
1236      -		if (iocfc->action == BFA_IOCFC_ACT_INIT) {
1237      -			if (BFA_DCONF_MOD(bfa)->flashdone == BFA_TRUE)
1238      -				bfa_cb_queue(bfa, &iocfc->init_hcb_qe,
1239      -					bfa_iocfc_init_cb, bfa);
1240      -		} else {
1241      -			if (bfa->iocfc.action == BFA_IOCFC_ACT_ENABLE)
1242      -				bfa_cb_queue(bfa, &bfa->iocfc.en_hcb_qe,
1243      -					bfa_iocfc_enable_cb, bfa);
1244      -			bfa_iocfc_start_submod(bfa);
814 +		if (bfa->iocfc.cfgrsp->pbc_cfg.pbc_pwwn != 0) {
815 +			bfa->ioc.attr->pwwn = bfa->iocfc.cfgrsp->pbc_cfg.pbc_pwwn;
816 +			bfa->ioc.attr->nwwn = bfa->iocfc.cfgrsp->pbc_cfg.pbc_nwwn;
817 +			bfa_fsm_send_event(iocfc, IOCFC_E_CFG_DONE);
1245 818		}
1246 819	}
820 +
1247 821	void
1248 822	bfa_iocfc_reset_queues(struct bfa_s *bfa)
1249 823	{
···
1246 838		bfa_rspq_ci(bfa, q) = 0;
1247 839		bfa_rspq_pi(bfa, q) = 0;
1248 840		}
841 +	}
842 +
843 +	/*
844 +	 * Process FAA pwwn msg from fw.
845 +	 */
846 +	static void
847 +	bfa_iocfc_process_faa_addr(struct bfa_s *bfa, struct bfi_faa_addr_msg_s *msg)
848 +	{
849 +		struct bfa_iocfc_s *iocfc = &bfa->iocfc;
850 +		struct bfi_iocfc_cfgrsp_s *cfgrsp = iocfc->cfgrsp;
851 +
852 +		cfgrsp->pbc_cfg.pbc_pwwn = msg->pwwn;
853 +		cfgrsp->pbc_cfg.pbc_nwwn = msg->nwwn;
854 +
855 +		bfa->ioc.attr->pwwn = msg->pwwn;
856 +		bfa->ioc.attr->nwwn = msg->nwwn;
857 +		bfa_fsm_send_event(iocfc, IOCFC_E_CFG_DONE);
1249 858	}
1250 859
1251 860	/* Fabric Assigned Address specific functions */
···
1280 855		if ((ioc_type != BFA_IOC_TYPE_FC) || bfa_mfg_is_mezz(card_type))
1281 856			return BFA_STATUS_FEATURE_NOT_SUPPORTED;
1282 857		} else {
1283      -			if (!bfa_ioc_is_acq_addr(&bfa->ioc))
1284      -				return BFA_STATUS_IOC_NON_OP;
858 +			return BFA_STATUS_IOC_NON_OP;
1285 859		}
1286      -
1287      -		return BFA_STATUS_OK;
1288      -	}
1289      -
1290      -	bfa_status_t
1291      -	bfa_faa_enable(struct bfa_s *bfa, bfa_cb_iocfc_t cbfn, void *cbarg)
1292      -	{
1293      -		struct bfi_faa_en_dis_s faa_enable_req;
1294      -		struct bfa_iocfc_s *iocfc = &bfa->iocfc;
1295      -		bfa_status_t status;
1296      -
1297      -		iocfc->faa_args.faa_cb.faa_cbfn = cbfn;
1298      -		iocfc->faa_args.faa_cb.faa_cbarg = cbarg;
1299      -
1300      -		status = bfa_faa_validate_request(bfa);
1301      -		if (status != BFA_STATUS_OK)
1302      -			return status;
1303      -
1304      -		if (iocfc->faa_args.busy == BFA_TRUE)
1305      -			return BFA_STATUS_DEVBUSY;
1306      -
1307      -		if (iocfc->faa_args.faa_state == BFA_FAA_ENABLED)
1308      -			return BFA_STATUS_FAA_ENABLED;
1309      -
1310      -		if (bfa_fcport_is_trunk_enabled(bfa))
1311      -			return BFA_STATUS_ERROR_TRUNK_ENABLED;
1312      -
1313      -		bfa_fcport_cfg_faa(bfa, BFA_FAA_ENABLED);
1314      -		iocfc->faa_args.busy = BFA_TRUE;
1315      -
1316      -		memset(&faa_enable_req, 0, sizeof(struct bfi_faa_en_dis_s));
1317      -		bfi_h2i_set(faa_enable_req.mh, BFI_MC_IOCFC,
1318      -			BFI_IOCFC_H2I_FAA_ENABLE_REQ, bfa_fn_lpu(bfa));
1319      -
1320      -		bfa_ioc_mbox_send(&bfa->ioc, &faa_enable_req,
1321      -			sizeof(struct bfi_faa_en_dis_s));
1322      -
1323      -		return BFA_STATUS_OK;
1324      -	}
1325      -
1326      -	bfa_status_t
1327      -	bfa_faa_disable(struct bfa_s *bfa, bfa_cb_iocfc_t cbfn,
1328      -		void *cbarg)
1329      -	{
1330      -		struct bfi_faa_en_dis_s faa_disable_req;
1331      -		struct bfa_iocfc_s *iocfc = &bfa->iocfc;
1332      -		bfa_status_t status;
1333      -
1334      -		iocfc->faa_args.faa_cb.faa_cbfn = cbfn;
1335      -		iocfc->faa_args.faa_cb.faa_cbarg = cbarg;
1336      -
1337      -		status = bfa_faa_validate_request(bfa);
1338      -		if (status != BFA_STATUS_OK)
1339      -			return status;
1340      -
1341      -		if (iocfc->faa_args.busy == BFA_TRUE)
1342      -			return BFA_STATUS_DEVBUSY;
1343      -
1344      -		if (iocfc->faa_args.faa_state == BFA_FAA_DISABLED)
1345      -			return BFA_STATUS_FAA_DISABLED;
1346      -
1347      -		bfa_fcport_cfg_faa(bfa, BFA_FAA_DISABLED);
1348      -		iocfc->faa_args.busy = BFA_TRUE;
1349      -
1350      -		memset(&faa_disable_req, 0, sizeof(struct bfi_faa_en_dis_s));
1351      -		bfi_h2i_set(faa_disable_req.mh, BFI_MC_IOCFC,
1352      -			BFI_IOCFC_H2I_FAA_DISABLE_REQ, bfa_fn_lpu(bfa));
1353      -
1354      -		bfa_ioc_mbox_send(&bfa->ioc, &faa_disable_req,
1355      -			sizeof(struct bfi_faa_en_dis_s));
1356 860
1357 861		return BFA_STATUS_OK;
1358 862	}
···
1317 963	}
1318 964
1319 965	/*
1320      -	 * FAA enable response
1321      -	 */
1322      -	static void
1323      -	bfa_faa_enable_reply(struct bfa_iocfc_s *iocfc,
1324      -		struct bfi_faa_en_dis_rsp_s *rsp)
1325      -	{
1326      -		void *cbarg = iocfc->faa_args.faa_cb.faa_cbarg;
1327      -		bfa_status_t status = rsp->status;
1328      -
1329      -		WARN_ON(!iocfc->faa_args.faa_cb.faa_cbfn);
1330      -
1331      -		iocfc->faa_args.faa_cb.faa_cbfn(cbarg, status);
1332      -		iocfc->faa_args.busy = BFA_FALSE;
1333      -	}
1334      -
1335      -	/*
1336      -	 * FAA disable response
1337      -	 */
1338      -	static void
1339      -	bfa_faa_disable_reply(struct bfa_iocfc_s *iocfc,
1340      -		struct bfi_faa_en_dis_rsp_s *rsp)
1341      -	{
1342      -		void *cbarg = iocfc->faa_args.faa_cb.faa_cbarg;
1343      -		bfa_status_t status = rsp->status;
1344      -
1345      -		WARN_ON(!iocfc->faa_args.faa_cb.faa_cbfn);
1346      -
1347      -		iocfc->faa_args.faa_cb.faa_cbfn(cbarg, status);
1348      -		iocfc->faa_args.busy = BFA_FALSE;
1349      -	}
1350      -
1351      -	/*
1352 966	 * FAA query response
1353 967	 */
1354 968	static void
···
1345 1023	{
1346 1024		struct bfa_s *bfa = bfa_arg;
1347 1025
1348      -		if (status == BFA_STATUS_FAA_ACQ_ADDR) {
1349      -			bfa_cb_queue(bfa, &bfa->iocfc.init_hcb_qe,
1350      -				bfa_iocfc_init_cb, bfa);
1351      -			return;
1352      -		}
1353      -
1354      -		if (status != BFA_STATUS_OK) {
1355      -			bfa_isr_disable(bfa);
1356      -			if (bfa->iocfc.action == BFA_IOCFC_ACT_INIT)
1357      -				bfa_cb_queue(bfa, &bfa->iocfc.init_hcb_qe,
1358      -					bfa_iocfc_init_cb, bfa);
1359      -			else if (bfa->iocfc.action == BFA_IOCFC_ACT_ENABLE)
1360      -				bfa_cb_queue(bfa, &bfa->iocfc.en_hcb_qe,
1361      -					bfa_iocfc_enable_cb, bfa);
1362      -			return;
1363      -		}
1364      -
1365      -		bfa_iocfc_send_cfg(bfa);
1366      -		bfa_dconf_modinit(bfa);
1026 +		if (status == BFA_STATUS_OK)
1027 +			bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_IOC_ENABLED);
1028 +		else
1029 +			bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_IOC_FAILED);
1367 1030	}
1368 1031
1369 1032	/*
···
1359 1052	{
1360 1053		struct bfa_s *bfa = bfa_arg;
1361 1054
1362      -		bfa_isr_disable(bfa);
1363      -		bfa_iocfc_disable_submod(bfa);
1364      -
1365      -		if (bfa->iocfc.action == BFA_IOCFC_ACT_STOP)
1366      -			bfa_cb_queue(bfa, &bfa->iocfc.stop_hcb_qe, bfa_iocfc_stop_cb,
1367      -				bfa);
1368      -		else {
1369      -			WARN_ON(bfa->iocfc.action != BFA_IOCFC_ACT_DISABLE);
1370      -			bfa_cb_queue(bfa, &bfa->iocfc.dis_hcb_qe, bfa_iocfc_disable_cb,
1371      -				bfa);
1372      -		}
1055 +		bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_IOC_DISABLED);
1373 1056	}
1374 1057
1375 1058	/*
···
1371 1074		struct bfa_s *bfa = bfa_arg;
1372 1075
1373 1076		bfa->queue_process = BFA_FALSE;
1374      -
1375      -		bfa_isr_disable(bfa);
1376      -		bfa_iocfc_disable_submod(bfa);
1377      -
1378      -		if (bfa->iocfc.action == BFA_IOCFC_ACT_INIT)
1379      -			bfa_cb_queue(bfa, &bfa->iocfc.init_hcb_qe, bfa_iocfc_init_cb,
1380      -				bfa);
1077 +		bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_IOC_FAILED);
1381 1078	}
1382 1079
1383 1080	/*
···
1385 1094		bfa_iocfc_reset_queues(bfa);
1386 1095		bfa_isr_enable(bfa);
1387 1096	}
1388      -
1389 1097
1390 1098	/*
1391 1099	 * Query IOC memory requirement information.
···
1461 1171		INIT_LIST_HEAD(&bfa->comp_q);
1462 1172		for (i = 0; i < BFI_IOC_MAX_CQS; i++)
1463 1173			INIT_LIST_HEAD(&bfa->reqq_waitq[i]);
1174 +
1175 +		bfa->iocfc.cb_reqd = BFA_FALSE;
1176 +		bfa->iocfc.op_status = BFA_STATUS_OK;
1177 +		bfa->iocfc.submod_enabled = BFA_FALSE;
1178 +
1179 +		bfa_fsm_set_state(&bfa->iocfc, bfa_iocfc_sm_stopped);
1464 1180	}
1465 1181
1466 1182	/*
···
1475 1179	void
1476 1180	bfa_iocfc_init(struct bfa_s *bfa)
1477 1181	{
1478      -		bfa->iocfc.action = BFA_IOCFC_ACT_INIT;
1479      -		bfa_ioc_enable(&bfa->ioc);
1182 +		bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_INIT);
1480 1183	}
1481 1184
1482 1185	/*
···
1485 1190	void
1486 1191	bfa_iocfc_start(struct bfa_s *bfa)
1487 1192	{
1488      -		if (bfa->iocfc.cfgdone)
1489      -			bfa_iocfc_start_submod(bfa);
1193 +		bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_START);
1490 1194	}
1491 1195
1492 1196	/*
···
1495 1201	void
1496 1202	bfa_iocfc_stop(struct bfa_s *bfa)
1497 1203	{
1498      -		bfa->iocfc.action = BFA_IOCFC_ACT_STOP;
1499      -
1500 1204		bfa->queue_process = BFA_FALSE;
1501      -		bfa_dconf_modexit(bfa);
1502      -		if (BFA_DCONF_MOD(bfa)->flashdone == BFA_TRUE)
1503      -			bfa_ioc_disable(&bfa->ioc);
1205 +		bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_STOP);
1504 1206	}
1505 1207
1506 1208	void
···
1516 1226		case BFI_IOCFC_I2H_UPDATEQ_RSP:
1517 1227			iocfc->updateq_cbfn(iocfc->updateq_cbarg, BFA_STATUS_OK);
1518 1228			break;
1519      -		case BFI_IOCFC_I2H_FAA_ENABLE_RSP:
1520      -			bfa_faa_enable_reply(iocfc,
1521      -				(struct bfi_faa_en_dis_rsp_s *)msg);
1522      -			break;
1523      -		case BFI_IOCFC_I2H_FAA_DISABLE_RSP:
1524      -			bfa_faa_disable_reply(iocfc,
1525      -				(struct bfi_faa_en_dis_rsp_s *)msg);
1229 +		case BFI_IOCFC_I2H_ADDR_MSG:
1230 +			bfa_iocfc_process_faa_addr(bfa,
1231 +				(struct bfi_faa_addr_msg_s *)msg);
1526 1232			break;
1527 1233		case BFI_IOCFC_I2H_FAA_QUERY_RSP:
1528 1234			bfa_faa_query_reply(iocfc, (bfi_faa_query_rsp_t *)msg);
···
1592 1306	{
1593 1307		bfa_plog_str(bfa->plog, BFA_PL_MID_HAL, BFA_PL_EID_MISC, 0,
1594 1308			"IOC Enable");
1595      -		bfa->iocfc.action = BFA_IOCFC_ACT_ENABLE;
1596      -		bfa_ioc_enable(&bfa->ioc);
1309 +		bfa->iocfc.cb_reqd = BFA_TRUE;
1310 +		bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_ENABLE);
1597 1311	}
1598 1312
1599 1313	void
···
1601 1315	{
1602 1316		bfa_plog_str(bfa->plog, BFA_PL_MID_HAL, BFA_PL_EID_MISC, 0,
1603 1317			"IOC Disable");
1604      -		bfa->iocfc.action = BFA_IOCFC_ACT_DISABLE;
1605 1318
1606 1319		bfa->queue_process = BFA_FALSE;
1607      -		bfa_ioc_disable(&bfa->ioc);
1320 +		bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_DISABLE);
1608 1321	}
1609      -
1610 1322
1611 1323	bfa_boolean_t
1612 1324	bfa_iocfc_is_operational(struct bfa_s *bfa)
1613 1325	{
1614      -		return bfa_ioc_is_operational(&bfa->ioc) && bfa->iocfc.cfgdone;
1326 +		return bfa_ioc_is_operational(&bfa->ioc) &&
1327 +			bfa_fsm_cmp_state(&bfa->iocfc, bfa_iocfc_sm_operational);
1615 1328	}
1616 1329
1617 1330	/*
···
1849 1564		hcb_qe = (struct bfa_cb_qe_s *) qe;
1850 1565		WARN_ON(hcb_qe->pre_rmv);
1851 1566		hcb_qe->cbfn(hcb_qe->cbarg, BFA_FALSE);
1852      -	}
1853      -	}
1854      -
1855      -	void
1856      -	bfa_iocfc_cb_dconf_modinit(struct bfa_s *bfa, bfa_status_t status)
1857      -	{
1858      -		if
(bfa->iocfc.action == BFA_IOCFC_ACT_INIT) { 1859 - if (bfa->iocfc.cfgdone == BFA_TRUE) 1860 - bfa_cb_queue(bfa, &bfa->iocfc.init_hcb_qe, 1861 - bfa_iocfc_init_cb, bfa); 1862 1567 } 1863 1568 } 1864 1569
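The bfa_core.c hunks above replace the open-coded `iocfc.action` branching with events posted to a new IOCFC state machine: the enable/disable/failure completion handlers shrink to a single `bfa_fsm_send_event()` call. A standalone sketch of that function-pointer FSM style (illustrative names and states, not the driver's `bfa_fsm_*` macros):

```c
#include <assert.h>

/* Each state is a handler function; callers post events instead of
 * branching on an action flag.  States/events here are illustrative. */
enum iocfc_ev { EV_INIT = 1, EV_IOC_ENABLED, EV_IOC_FAILED, EV_STOP };

struct iocfc;
typedef void (*iocfc_state_t)(struct iocfc *fsm, enum iocfc_ev ev);

struct iocfc {
	iocfc_state_t state;	/* current state == current handler */
};

static void sm_stopped(struct iocfc *fsm, enum iocfc_ev ev);
static void sm_initing(struct iocfc *fsm, enum iocfc_ev ev);
static void sm_operational(struct iocfc *fsm, enum iocfc_ev ev);
static void sm_failed(struct iocfc *fsm, enum iocfc_ev ev);

void iocfc_fsm_init(struct iocfc *fsm)
{
	fsm->state = sm_stopped;
}

void fsm_send_event(struct iocfc *fsm, enum iocfc_ev ev)
{
	fsm->state(fsm, ev);	/* dispatch to whatever state we are in */
}

/* Mirrors the bfa_fsm_cmp_state() idea: "is operational" becomes a
 * state comparison rather than a cfgdone flag check. */
int iocfc_is_operational(struct iocfc *fsm)
{
	return fsm->state == sm_operational;
}

static void sm_stopped(struct iocfc *fsm, enum iocfc_ev ev)
{
	if (ev == EV_INIT)
		fsm->state = sm_initing;
}

static void sm_initing(struct iocfc *fsm, enum iocfc_ev ev)
{
	if (ev == EV_IOC_ENABLED)
		fsm->state = sm_operational;
	else if (ev == EV_IOC_FAILED)
		fsm->state = sm_failed;
}

static void sm_operational(struct iocfc *fsm, enum iocfc_ev ev)
{
	if (ev == EV_STOP)
		fsm->state = sm_stopped;
	else if (ev == EV_IOC_FAILED)
		fsm->state = sm_failed;
}

static void sm_failed(struct iocfc *fsm, enum iocfc_ev ev)
{
	if (ev == EV_STOP)
		fsm->state = sm_stopped;
}
```

With this shape, success/failure routing lives in one place (the state handlers) instead of being duplicated across every IOC callback.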
+1 -1
drivers/scsi/bfa/bfa_defs_svc.h
··· 52 52 u16 num_uf_bufs; /* unsolicited recv buffers */ 53 53 u8 num_cqs; 54 54 u8 fw_tick_res; /* FW clock resolution in ms */ 55 - u8 rsvd[2]; 55 + u8 rsvd[6]; 56 56 }; 57 57 #pragma pack() 58 58
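The one-liner above resizes the reserved tail of the packed `bfi_iocfc_cfg_s` host/firmware message. Reserved bytes in these structures exist to pin the wire layout to the size the firmware expects; a sketch of the idea with an illustrative struct and a compile-time size guard (the field set and the 12-byte target are made up for the example):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* A packed message with a reserved tail: the pad fixes the struct at an
 * agreed wire size, so layout drift is caught at compile time rather
 * than as silent host/firmware disagreement. */
#pragma pack(1)
struct fw_cfg_sketch {
	uint16_t num_uf_bufs;	/* unsolicited recv buffers */
	uint8_t  num_cqs;
	uint8_t  fw_tick_res;	/* FW clock resolution in ms */
	uint8_t  rsvd[8];	/* pad to the fixed wire size */
};
#pragma pack()

/* Negative-array-size trick: fails to compile if the size ever drifts. */
typedef char cfg_size_check[sizeof(struct fw_cfg_sketch) == 12 ? 1 : -1];

size_t fw_cfg_wire_size(void)
{
	return sizeof(struct fw_cfg_sketch);
}
```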
+2
drivers/scsi/bfa/bfa_fcs_lport.c
··· 5717 5717 5718 5718 if (vport_drv->comp_del) 5719 5719 complete(vport_drv->comp_del); 5720 + else 5721 + kfree(vport_drv); 5720 5722 5721 5723 bfa_lps_delete(vport->lps); 5722 5724 }
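The bfa_fcs_lport.c fix above frees the vport driver structure when nobody is waiting on `comp_del`, instead of leaking it. A small sketch of that complete-or-free ownership rule (illustrative types, with test doubles in place of `complete()` and `kfree()`):

```c
#include <assert.h>

/* If a deleter is blocked on the completion, it owns the memory and will
 * free it after waking; with no waiter, the callback must free it here. */
struct vport_drv_sketch {
	int *comp_del;		/* non-NULL when a deleter is waiting */
	int  freed;		/* set by the kfree() stand-in below */
};

static void fake_kfree(struct vport_drv_sketch *v)
{
	v->freed = 1;		/* stand-in for kfree() so tests can observe it */
}

/* Returns 1 if ownership passed to the waiter, 0 if we freed it here. */
int vport_delete_done(struct vport_drv_sketch *v)
{
	if (v->comp_del) {
		*v->comp_del = 1;	/* stand-in for complete(comp_del) */
		return 1;
	}
	fake_kfree(v);
	return 0;
}
```

Exactly one of the two paths releases the object, which is the invariant the added `else kfree()` restores.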
+4 -1
drivers/scsi/bfa/bfa_fcs_rport.c
··· 2169 2169 * - MAX receive frame size 2170 2170 */ 2171 2171 rport->cisc = plogi->csp.cisc; 2172 - rport->maxfrsize = be16_to_cpu(plogi->class3.rxsz); 2172 + if (be16_to_cpu(plogi->class3.rxsz) < be16_to_cpu(plogi->csp.rxsz)) 2173 + rport->maxfrsize = be16_to_cpu(plogi->class3.rxsz); 2174 + else 2175 + rport->maxfrsize = be16_to_cpu(plogi->csp.rxsz); 2173 2176 2174 2177 bfa_trc(port->fcs, be16_to_cpu(plogi->csp.bbcred)); 2175 2178 bfa_trc(port->fcs, port->fabric->bb_credit);
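The bfa_fcs_rport.c hunk above stops trusting the class-3 receive size alone and clamps the rport max frame size to the smaller of the class-3 and common service parameter values from the PLOGI payload. A standalone sketch of that selection, with `be16_to_cpu()` modeled by hand and illustrative parameter names:

```c
#include <assert.h>
#include <stdint.h>

/* PLOGI service parameter fields arrive big-endian on the wire. */
static uint16_t be16_to_host(const uint8_t b[2])
{
	return (uint16_t)((b[0] << 8) | b[1]);
}

/* Use whichever advertised receive size is smaller, so frames never
 * exceed either the class-3 or the common service parameter limit. */
uint16_t rport_max_frame_size(const uint8_t class3_rxsz[2],
			      const uint8_t csp_rxsz[2])
{
	uint16_t c3  = be16_to_host(class3_rxsz);
	uint16_t csp = be16_to_host(csp_rxsz);

	return (c3 < csp) ? c3 : csp;
}
```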
+74 -120
drivers/scsi/bfa/bfa_ioc.c
··· 88 88 static void bfa_ioc_mbox_poll(struct bfa_ioc_s *ioc); 89 89 static void bfa_ioc_mbox_flush(struct bfa_ioc_s *ioc); 90 90 static void bfa_ioc_recover(struct bfa_ioc_s *ioc); 91 - static void bfa_ioc_check_attr_wwns(struct bfa_ioc_s *ioc); 92 91 static void bfa_ioc_event_notify(struct bfa_ioc_s *ioc , 93 92 enum bfa_ioc_event_e event); 94 93 static void bfa_ioc_disable_comp(struct bfa_ioc_s *ioc); ··· 95 96 static void bfa_ioc_debug_save_ftrc(struct bfa_ioc_s *ioc); 96 97 static void bfa_ioc_fail_notify(struct bfa_ioc_s *ioc); 97 98 static void bfa_ioc_pf_fwmismatch(struct bfa_ioc_s *ioc); 98 - 99 99 100 100 /* 101 101 * IOC state machine definitions/declarations ··· 112 114 IOC_E_HWERROR = 10, /* hardware error interrupt */ 113 115 IOC_E_TIMEOUT = 11, /* timeout */ 114 116 IOC_E_HWFAILED = 12, /* PCI mapping failure notice */ 115 - IOC_E_FWRSP_ACQ_ADDR = 13, /* Acquiring address */ 116 117 }; 117 118 118 119 bfa_fsm_state_decl(bfa_ioc, uninit, struct bfa_ioc_s, enum ioc_event); ··· 124 127 bfa_fsm_state_decl(bfa_ioc, disabling, struct bfa_ioc_s, enum ioc_event); 125 128 bfa_fsm_state_decl(bfa_ioc, disabled, struct bfa_ioc_s, enum ioc_event); 126 129 bfa_fsm_state_decl(bfa_ioc, hwfail, struct bfa_ioc_s, enum ioc_event); 127 - bfa_fsm_state_decl(bfa_ioc, acq_addr, struct bfa_ioc_s, enum ioc_event); 128 130 129 131 static struct bfa_sm_table_s ioc_sm_table[] = { 130 132 {BFA_SM(bfa_ioc_sm_uninit), BFA_IOC_UNINIT}, ··· 136 140 {BFA_SM(bfa_ioc_sm_disabling), BFA_IOC_DISABLING}, 137 141 {BFA_SM(bfa_ioc_sm_disabled), BFA_IOC_DISABLED}, 138 142 {BFA_SM(bfa_ioc_sm_hwfail), BFA_IOC_HWFAIL}, 139 - {BFA_SM(bfa_ioc_sm_acq_addr), BFA_IOC_ACQ_ADDR}, 140 143 }; 141 144 142 145 /* ··· 366 371 switch (event) { 367 372 case IOC_E_FWRSP_GETATTR: 368 373 bfa_ioc_timer_stop(ioc); 369 - bfa_ioc_check_attr_wwns(ioc); 370 - bfa_ioc_hb_monitor(ioc); 371 374 bfa_fsm_set_state(ioc, bfa_ioc_sm_op); 372 - break; 373 - 374 - case IOC_E_FWRSP_ACQ_ADDR: 375 - bfa_ioc_timer_stop(ioc); 376 
- bfa_ioc_hb_monitor(ioc); 377 - bfa_fsm_set_state(ioc, bfa_ioc_sm_acq_addr); 378 375 break; 379 376 380 377 case IOC_E_PFFAILED: ··· 393 406 } 394 407 } 395 408 396 - /* 397 - * Acquiring address from fabric (entry function) 398 - */ 399 - static void 400 - bfa_ioc_sm_acq_addr_entry(struct bfa_ioc_s *ioc) 401 - { 402 - } 403 - 404 - /* 405 - * Acquiring address from the fabric 406 - */ 407 - static void 408 - bfa_ioc_sm_acq_addr(struct bfa_ioc_s *ioc, enum ioc_event event) 409 - { 410 - bfa_trc(ioc, event); 411 - 412 - switch (event) { 413 - case IOC_E_FWRSP_GETATTR: 414 - bfa_ioc_check_attr_wwns(ioc); 415 - bfa_fsm_set_state(ioc, bfa_ioc_sm_op); 416 - break; 417 - 418 - case IOC_E_PFFAILED: 419 - case IOC_E_HWERROR: 420 - bfa_hb_timer_stop(ioc); 421 - case IOC_E_HBFAIL: 422 - ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_IOC_FAILURE); 423 - bfa_fsm_set_state(ioc, bfa_ioc_sm_fail); 424 - if (event != IOC_E_PFFAILED) 425 - bfa_fsm_send_event(&ioc->iocpf, IOCPF_E_GETATTRFAIL); 426 - break; 427 - 428 - case IOC_E_DISABLE: 429 - bfa_hb_timer_stop(ioc); 430 - bfa_fsm_set_state(ioc, bfa_ioc_sm_disabling); 431 - break; 432 - 433 - case IOC_E_ENABLE: 434 - break; 435 - 436 - default: 437 - bfa_sm_fault(ioc, event); 438 - } 439 - } 440 - 441 409 static void 442 410 bfa_ioc_sm_op_entry(struct bfa_ioc_s *ioc) 443 411 { ··· 400 458 401 459 ioc->cbfn->enable_cbfn(ioc->bfa, BFA_STATUS_OK); 402 460 bfa_ioc_event_notify(ioc, BFA_IOC_E_ENABLED); 461 + bfa_ioc_hb_monitor(ioc); 403 462 BFA_LOG(KERN_INFO, bfad, bfa_log_level, "IOC enabled\n"); 404 463 bfa_ioc_aen_post(ioc, BFA_IOC_AEN_ENABLE); 405 464 } ··· 681 738 bfa_iocpf_sm_fwcheck_entry(struct bfa_iocpf_s *iocpf) 682 739 { 683 740 struct bfi_ioc_image_hdr_s fwhdr; 684 - u32 fwstate = readl(iocpf->ioc->ioc_regs.ioc_fwstate); 741 + u32 r32, fwstate, pgnum, pgoff, loff = 0; 742 + int i; 743 + 744 + /* 745 + * Spin on init semaphore to serialize. 
746 + */ 747 + r32 = readl(iocpf->ioc->ioc_regs.ioc_init_sem_reg); 748 + while (r32 & 0x1) { 749 + udelay(20); 750 + r32 = readl(iocpf->ioc->ioc_regs.ioc_init_sem_reg); 751 + } 685 752 686 753 /* h/w sem init */ 687 - if (fwstate == BFI_IOC_UNINIT) 754 + fwstate = readl(iocpf->ioc->ioc_regs.ioc_fwstate); 755 + if (fwstate == BFI_IOC_UNINIT) { 756 + writel(1, iocpf->ioc->ioc_regs.ioc_init_sem_reg); 688 757 goto sem_get; 758 + } 689 759 690 760 bfa_ioc_fwver_get(iocpf->ioc, &fwhdr); 691 761 692 - if (swab32(fwhdr.exec) == BFI_FWBOOT_TYPE_NORMAL) 762 + if (swab32(fwhdr.exec) == BFI_FWBOOT_TYPE_NORMAL) { 763 + writel(1, iocpf->ioc->ioc_regs.ioc_init_sem_reg); 693 764 goto sem_get; 694 - 695 - bfa_trc(iocpf->ioc, fwstate); 696 - bfa_trc(iocpf->ioc, fwhdr.exec); 697 - writel(BFI_IOC_UNINIT, iocpf->ioc->ioc_regs.ioc_fwstate); 765 + } 698 766 699 767 /* 700 - * Try to lock and then unlock the semaphore. 768 + * Clear fwver hdr 769 + */ 770 + pgnum = PSS_SMEM_PGNUM(iocpf->ioc->ioc_regs.smem_pg0, loff); 771 + pgoff = PSS_SMEM_PGOFF(loff); 772 + writel(pgnum, iocpf->ioc->ioc_regs.host_page_num_fn); 773 + 774 + for (i = 0; i < sizeof(struct bfi_ioc_image_hdr_s) / sizeof(u32); i++) { 775 + bfa_mem_write(iocpf->ioc->ioc_regs.smem_page_start, loff, 0); 776 + loff += sizeof(u32); 777 + } 778 + 779 + bfa_trc(iocpf->ioc, fwstate); 780 + bfa_trc(iocpf->ioc, swab32(fwhdr.exec)); 781 + writel(BFI_IOC_UNINIT, iocpf->ioc->ioc_regs.ioc_fwstate); 782 + writel(BFI_IOC_UNINIT, iocpf->ioc->ioc_regs.alt_ioc_fwstate); 783 + 784 + /* 785 + * Unlock the hw semaphore. Should be here only once per boot. 701 786 */ 702 787 readl(iocpf->ioc->ioc_regs.ioc_sem_reg); 703 788 writel(1, iocpf->ioc->ioc_regs.ioc_sem_reg); 789 + 790 + /* 791 + * unlock init semaphore. 
792 + */ 793 + writel(1, iocpf->ioc->ioc_regs.ioc_init_sem_reg); 794 + 704 795 sem_get: 705 796 bfa_ioc_hw_sem_get(iocpf->ioc); 706 797 } ··· 1684 1707 u32 i; 1685 1708 u32 asicmode; 1686 1709 1687 - /* 1688 - * Initialize LMEM first before code download 1689 - */ 1690 - bfa_ioc_lmem_init(ioc); 1691 - 1692 1710 bfa_trc(ioc, bfa_cb_image_get_size(bfa_ioc_asic_gen(ioc))); 1693 1711 fwimg = bfa_cb_image_get_chunk(bfa_ioc_asic_gen(ioc), chunkno); 1694 1712 ··· 1971 1999 bfa_ioc_pll_init_asic(ioc); 1972 2000 1973 2001 ioc->pllinit = BFA_TRUE; 2002 + 2003 + /* 2004 + * Initialize LMEM 2005 + */ 2006 + bfa_ioc_lmem_init(ioc); 2007 + 1974 2008 /* 1975 2009 * release semaphore. 1976 2010 */ ··· 2098 2120 2099 2121 case BFI_IOC_I2H_GETATTR_REPLY: 2100 2122 bfa_ioc_getattr_reply(ioc); 2101 - break; 2102 - 2103 - case BFI_IOC_I2H_ACQ_ADDR_REPLY: 2104 - bfa_fsm_send_event(ioc, IOC_E_FWRSP_ACQ_ADDR); 2105 2123 break; 2106 2124 2107 2125 default: ··· 2387 2413 { 2388 2414 return bfa_fsm_cmp_state(ioc, bfa_ioc_sm_disabling) || 2389 2415 bfa_fsm_cmp_state(ioc, bfa_ioc_sm_disabled); 2390 - } 2391 - 2392 - /* 2393 - * Return TRUE if IOC is in acquiring address state 2394 - */ 2395 - bfa_boolean_t 2396 - bfa_ioc_is_acq_addr(struct bfa_ioc_s *ioc) 2397 - { 2398 - return bfa_fsm_cmp_state(ioc, bfa_ioc_sm_acq_addr); 2399 2416 } 2400 2417 2401 2418 /* ··· 2879 2914 bfa_ioc_stats(ioc, ioc_hbfails); 2880 2915 ioc->stats.hb_count = ioc->hb_count; 2881 2916 bfa_fsm_send_event(ioc, IOC_E_HBFAIL); 2882 - } 2883 - 2884 - static void 2885 - bfa_ioc_check_attr_wwns(struct bfa_ioc_s *ioc) 2886 - { 2887 - if (bfa_ioc_get_type(ioc) == BFA_IOC_TYPE_LL) 2888 - return; 2889 - if (ioc->attr->nwwn == 0) 2890 - bfa_ioc_aen_post(ioc, BFA_IOC_AEN_INVALID_NWWN); 2891 - if (ioc->attr->pwwn == 0) 2892 - bfa_ioc_aen_post(ioc, BFA_IOC_AEN_INVALID_PWWN); 2893 2917 } 2894 2918 2895 2919 /* ··· 4449 4495 */ 4450 4496 4451 4497 #define BFA_DIAG_MEMTEST_TOV 50000 /* memtest timeout in msec */ 4452 - #define 
BFA_DIAG_FWPING_TOV 1000 /* msec */ 4498 + #define CT2_BFA_DIAG_MEMTEST_TOV (9*30*1000) /* 4.5 min */ 4453 4499 4454 4500 /* IOC event handler */ 4455 4501 static void ··· 4726 4772 } 4727 4773 4728 4774 static void 4729 - diag_ledtest_comp(struct bfa_diag_s *diag, struct bfi_diag_ledtest_rsp_s * msg) 4775 + diag_ledtest_comp(struct bfa_diag_s *diag, struct bfi_diag_ledtest_rsp_s *msg) 4730 4776 { 4731 4777 bfa_trc(diag, diag->ledtest.lock); 4732 4778 diag->ledtest.lock = BFA_FALSE; ··· 4804 4850 u32 pattern, struct bfa_diag_memtest_result *result, 4805 4851 bfa_cb_diag_t cbfn, void *cbarg) 4806 4852 { 4853 + u32 memtest_tov; 4854 + 4807 4855 bfa_trc(diag, pattern); 4808 4856 4809 4857 if (!bfa_ioc_adapter_is_disabled(diag->ioc)) ··· 4825 4869 /* download memtest code and take LPU0 out of reset */ 4826 4870 bfa_ioc_boot(diag->ioc, BFI_FWBOOT_TYPE_MEMTEST, BFI_FWBOOT_ENV_OS); 4827 4871 4872 + memtest_tov = (bfa_ioc_asic_gen(diag->ioc) == BFI_ASIC_GEN_CT2) ? 4873 + CT2_BFA_DIAG_MEMTEST_TOV : BFA_DIAG_MEMTEST_TOV; 4828 4874 bfa_timer_begin(diag->ioc->timer_mod, &diag->timer, 4829 - bfa_diag_memtest_done, diag, BFA_DIAG_MEMTEST_TOV); 4875 + bfa_diag_memtest_done, diag, memtest_tov); 4830 4876 diag->timer_active = 1; 4831 4877 return BFA_STATUS_OK; 4832 4878 } ··· 5599 5641 case BFA_DCONF_SM_INIT: 5600 5642 if (dconf->min_cfg) { 5601 5643 bfa_trc(dconf->bfa, dconf->min_cfg); 5644 + bfa_fsm_send_event(&dconf->bfa->iocfc, 5645 + IOCFC_E_DCONF_DONE); 5602 5646 return; 5603 5647 } 5604 5648 bfa_sm_set_state(dconf, bfa_dconf_sm_flash_read); 5605 - dconf->flashdone = BFA_FALSE; 5606 - bfa_trc(dconf->bfa, dconf->flashdone); 5649 + bfa_timer_start(dconf->bfa, &dconf->timer, 5650 + bfa_dconf_timer, dconf, BFA_DCONF_UPDATE_TOV); 5607 5651 bfa_status = bfa_flash_read_part(BFA_FLASH(dconf->bfa), 5608 5652 BFA_FLASH_PART_DRV, dconf->instance, 5609 5653 dconf->dconf, 5610 5654 sizeof(struct bfa_dconf_s), 0, 5611 5655 bfa_dconf_init_cb, dconf->bfa); 5612 5656 if (bfa_status != 
BFA_STATUS_OK) { 5657 + bfa_timer_stop(&dconf->timer); 5613 5658 bfa_dconf_init_cb(dconf->bfa, BFA_STATUS_FAILED); 5614 5659 bfa_sm_set_state(dconf, bfa_dconf_sm_uninit); 5615 5660 return; 5616 5661 } 5617 5662 break; 5618 5663 case BFA_DCONF_SM_EXIT: 5619 - dconf->flashdone = BFA_TRUE; 5664 + bfa_fsm_send_event(&dconf->bfa->iocfc, IOCFC_E_DCONF_DONE); 5620 5665 case BFA_DCONF_SM_IOCDISABLE: 5621 5666 case BFA_DCONF_SM_WR: 5622 5667 case BFA_DCONF_SM_FLASH_COMP: ··· 5640 5679 5641 5680 switch (event) { 5642 5681 case BFA_DCONF_SM_FLASH_COMP: 5682 + bfa_timer_stop(&dconf->timer); 5643 5683 bfa_sm_set_state(dconf, bfa_dconf_sm_ready); 5644 5684 break; 5645 5685 case BFA_DCONF_SM_TIMEOUT: 5646 5686 bfa_sm_set_state(dconf, bfa_dconf_sm_ready); 5687 + bfa_fsm_send_event(&dconf->bfa->iocfc, IOCFC_E_IOC_FAILED); 5647 5688 break; 5648 5689 case BFA_DCONF_SM_EXIT: 5649 - dconf->flashdone = BFA_TRUE; 5650 - bfa_trc(dconf->bfa, dconf->flashdone); 5690 + bfa_timer_stop(&dconf->timer); 5691 + bfa_sm_set_state(dconf, bfa_dconf_sm_uninit); 5692 + bfa_fsm_send_event(&dconf->bfa->iocfc, IOCFC_E_DCONF_DONE); 5693 + break; 5651 5694 case BFA_DCONF_SM_IOCDISABLE: 5695 + bfa_timer_stop(&dconf->timer); 5652 5696 bfa_sm_set_state(dconf, bfa_dconf_sm_uninit); 5653 5697 break; 5654 5698 default: ··· 5676 5710 bfa_sm_set_state(dconf, bfa_dconf_sm_dirty); 5677 5711 break; 5678 5712 case BFA_DCONF_SM_EXIT: 5679 - dconf->flashdone = BFA_TRUE; 5680 - bfa_trc(dconf->bfa, dconf->flashdone); 5681 5713 bfa_sm_set_state(dconf, bfa_dconf_sm_uninit); 5714 + bfa_fsm_send_event(&dconf->bfa->iocfc, IOCFC_E_DCONF_DONE); 5682 5715 break; 5683 5716 case BFA_DCONF_SM_INIT: 5684 5717 case BFA_DCONF_SM_IOCDISABLE: ··· 5739 5774 bfa_timer_stop(&dconf->timer); 5740 5775 case BFA_DCONF_SM_TIMEOUT: 5741 5776 bfa_sm_set_state(dconf, bfa_dconf_sm_uninit); 5742 - dconf->flashdone = BFA_TRUE; 5743 - bfa_trc(dconf->bfa, dconf->flashdone); 5744 - bfa_ioc_disable(&dconf->bfa->ioc); 5777 + 
bfa_fsm_send_event(&dconf->bfa->iocfc, IOCFC_E_DCONF_DONE); 5745 5778 break; 5746 5779 default: 5747 5780 bfa_sm_fault(dconf->bfa, event); ··· 5786 5823 bfa_sm_set_state(dconf, bfa_dconf_sm_dirty); 5787 5824 break; 5788 5825 case BFA_DCONF_SM_EXIT: 5789 - dconf->flashdone = BFA_TRUE; 5790 5826 bfa_sm_set_state(dconf, bfa_dconf_sm_uninit); 5827 + bfa_fsm_send_event(&dconf->bfa->iocfc, IOCFC_E_DCONF_DONE); 5791 5828 break; 5792 5829 case BFA_DCONF_SM_IOCDISABLE: 5793 5830 break; ··· 5828 5865 if (cfg->drvcfg.min_cfg) { 5829 5866 bfa_mem_kva_curp(dconf) += sizeof(struct bfa_dconf_hdr_s); 5830 5867 dconf->min_cfg = BFA_TRUE; 5831 - /* 5832 - * Set the flashdone flag to TRUE explicitly as no flash 5833 - * write will happen in min_cfg mode. 5834 - */ 5835 - dconf->flashdone = BFA_TRUE; 5836 5868 } else { 5837 5869 dconf->min_cfg = BFA_FALSE; 5838 5870 bfa_mem_kva_curp(dconf) += sizeof(struct bfa_dconf_s); ··· 5843 5885 struct bfa_s *bfa = arg; 5844 5886 struct bfa_dconf_mod_s *dconf = BFA_DCONF_MOD(bfa); 5845 5887 5846 - dconf->flashdone = BFA_TRUE; 5847 - bfa_trc(bfa, dconf->flashdone); 5848 - bfa_iocfc_cb_dconf_modinit(bfa, status); 5888 + bfa_sm_send_event(dconf, BFA_DCONF_SM_FLASH_COMP); 5849 5889 if (status == BFA_STATUS_OK) { 5850 5890 bfa_dconf_read_data_valid(bfa) = BFA_TRUE; 5851 5891 if (dconf->dconf->hdr.signature != BFI_DCONF_SIGNATURE) ··· 5851 5895 if (dconf->dconf->hdr.version != BFI_DCONF_VERSION) 5852 5896 dconf->dconf->hdr.version = BFI_DCONF_VERSION; 5853 5897 } 5854 - bfa_sm_send_event(dconf, BFA_DCONF_SM_FLASH_COMP); 5898 + bfa_fsm_send_event(&bfa->iocfc, IOCFC_E_DCONF_DONE); 5855 5899 } 5856 5900 5857 5901 void ··· 5933 5977 bfa_dconf_modexit(struct bfa_s *bfa) 5934 5978 { 5935 5979 struct bfa_dconf_mod_s *dconf = BFA_DCONF_MOD(bfa); 5936 - BFA_DCONF_MOD(bfa)->flashdone = BFA_FALSE; 5937 - bfa_trc(bfa, BFA_DCONF_MOD(bfa)->flashdone); 5938 5980 bfa_sm_send_event(dconf, BFA_DCONF_SM_EXIT); 5939 5981 }
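The dconf hunks above bracket the asynchronous flash read with a `BFA_DCONF_UPDATE_TOV` timer: the flash completion stops the timer, and a timeout fails the IOCFC instead of hanging. A standalone sketch of the first-event-wins handshake that makes this race-safe (states and names are stand-ins, not the `bfa_dconf_sm_*` machine):

```c
#include <assert.h>

/* Whichever of {completion, timeout} arrives first decides the outcome;
 * the later event must observe the decided state and do nothing. */
enum dconf_st { DCONF_READING, DCONF_READY, DCONF_FAILED };

struct dconf_sketch {
	enum dconf_st st;
	int timer_active;
};

void dconf_start_read(struct dconf_sketch *d)
{
	d->st = DCONF_READING;
	d->timer_active = 1;	/* stand-in for bfa_timer_start() */
}

void dconf_flash_comp(struct dconf_sketch *d)
{
	if (d->st != DCONF_READING)
		return;		/* timeout already decided; ignore */
	d->timer_active = 0;	/* stand-in for bfa_timer_stop() */
	d->st = DCONF_READY;
}

void dconf_timeout(struct dconf_sketch *d)
{
	if (d->st != DCONF_READING)
		return;		/* completion already decided; ignore */
	d->timer_active = 0;
	d->st = DCONF_FAILED;
}
```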
+16 -1
drivers/scsi/bfa/bfa_ioc.h
··· 373 373 }; 374 374 375 375 /* 376 + * IOCFC state machine definitions/declarations 377 + */ 378 + enum iocfc_event { 379 + IOCFC_E_INIT = 1, /* IOCFC init request */ 380 + IOCFC_E_START = 2, /* IOCFC mod start request */ 381 + IOCFC_E_STOP = 3, /* IOCFC stop request */ 382 + IOCFC_E_ENABLE = 4, /* IOCFC enable request */ 383 + IOCFC_E_DISABLE = 5, /* IOCFC disable request */ 384 + IOCFC_E_IOC_ENABLED = 6, /* IOC enabled message */ 385 + IOCFC_E_IOC_DISABLED = 7, /* IOC disabled message */ 386 + IOCFC_E_IOC_FAILED = 8, /* failure notice by IOC sm */ 387 + IOCFC_E_DCONF_DONE = 9, /* dconf read/write done */ 388 + IOCFC_E_CFG_DONE = 10, /* IOCFC config complete */ 389 + }; 390 + 391 + /* 376 392 * ASIC block configurtion related 377 393 */ 378 394 ··· 722 706 struct bfa_dconf_mod_s { 723 707 bfa_sm_t sm; 724 708 u8 instance; 725 - bfa_boolean_t flashdone; 726 709 bfa_boolean_t read_data_valid; 727 710 bfa_boolean_t min_cfg; 728 711 struct bfa_timer_s timer;
+106 -45
drivers/scsi/bfa/bfa_ioc_ct.c
··· 786 786 } 787 787 788 788 #define CT2_NFC_MAX_DELAY 1000 789 + #define CT2_NFC_VER_VALID 0x143 790 + #define BFA_IOC_PLL_POLL 1000000 791 + 792 + static bfa_boolean_t 793 + bfa_ioc_ct2_nfc_halted(void __iomem *rb) 794 + { 795 + u32 r32; 796 + 797 + r32 = readl(rb + CT2_NFC_CSR_SET_REG); 798 + if (r32 & __NFC_CONTROLLER_HALTED) 799 + return BFA_TRUE; 800 + 801 + return BFA_FALSE; 802 + } 803 + 804 + static void 805 + bfa_ioc_ct2_nfc_resume(void __iomem *rb) 806 + { 807 + u32 r32; 808 + int i; 809 + 810 + writel(__HALT_NFC_CONTROLLER, rb + CT2_NFC_CSR_CLR_REG); 811 + for (i = 0; i < CT2_NFC_MAX_DELAY; i++) { 812 + r32 = readl(rb + CT2_NFC_CSR_SET_REG); 813 + if (!(r32 & __NFC_CONTROLLER_HALTED)) 814 + return; 815 + udelay(1000); 816 + } 817 + WARN_ON(1); 818 + } 819 + 789 820 bfa_status_t 790 821 bfa_ioc_ct2_pll_init(void __iomem *rb, enum bfi_asic_mode mode) 791 822 { 792 - u32 wgn, r32; 793 - int i; 823 + u32 wgn, r32, nfc_ver, i; 794 824 795 - /* 796 - * Initialize PLL if not already done by NFC 797 - */ 798 825 wgn = readl(rb + CT2_WGN_STATUS); 799 - if (!(wgn & __GLBL_PF_VF_CFG_RDY)) { 826 + nfc_ver = readl(rb + CT2_RSC_GPR15_REG); 827 + 828 + if ((wgn == (__A2T_AHB_LOAD | __WGN_READY)) && 829 + (nfc_ver >= CT2_NFC_VER_VALID)) { 830 + if (bfa_ioc_ct2_nfc_halted(rb)) 831 + bfa_ioc_ct2_nfc_resume(rb); 832 + 833 + writel(__RESET_AND_START_SCLK_LCLK_PLLS, 834 + rb + CT2_CSI_FW_CTL_SET_REG); 835 + 836 + for (i = 0; i < BFA_IOC_PLL_POLL; i++) { 837 + r32 = readl(rb + CT2_APP_PLL_LCLK_CTL_REG); 838 + if (r32 & __RESET_AND_START_SCLK_LCLK_PLLS) 839 + break; 840 + } 841 + 842 + WARN_ON(!(r32 & __RESET_AND_START_SCLK_LCLK_PLLS)); 843 + 844 + for (i = 0; i < BFA_IOC_PLL_POLL; i++) { 845 + r32 = readl(rb + CT2_APP_PLL_LCLK_CTL_REG); 846 + if (!(r32 & __RESET_AND_START_SCLK_LCLK_PLLS)) 847 + break; 848 + } 849 + 850 + WARN_ON(r32 & __RESET_AND_START_SCLK_LCLK_PLLS); 851 + udelay(1000); 852 + 853 + r32 = readl(rb + CT2_CSI_FW_CTL_REG); 854 + WARN_ON(r32 & 
__RESET_AND_START_SCLK_LCLK_PLLS); 855 + } else { 800 856 writel(__HALT_NFC_CONTROLLER, rb + CT2_NFC_CSR_SET_REG); 801 857 for (i = 0; i < CT2_NFC_MAX_DELAY; i++) { 802 858 r32 = readl(rb + CT2_NFC_CSR_SET_REG); ··· 860 804 break; 861 805 udelay(1000); 862 806 } 807 + 808 + bfa_ioc_ct2_mac_reset(rb); 809 + bfa_ioc_ct2_sclk_init(rb); 810 + bfa_ioc_ct2_lclk_init(rb); 811 + 812 + /* 813 + * release soft reset on s_clk & l_clk 814 + */ 815 + r32 = readl(rb + CT2_APP_PLL_SCLK_CTL_REG); 816 + writel(r32 & ~__APP_PLL_SCLK_LOGIC_SOFT_RESET, 817 + (rb + CT2_APP_PLL_SCLK_CTL_REG)); 818 + 819 + /* 820 + * release soft reset on s_clk & l_clk 821 + */ 822 + r32 = readl(rb + CT2_APP_PLL_LCLK_CTL_REG); 823 + writel(r32 & ~__APP_PLL_LCLK_LOGIC_SOFT_RESET, 824 + (rb + CT2_APP_PLL_LCLK_CTL_REG)); 825 + } 826 + 827 + /* 828 + * Announce flash device presence, if flash was corrupted. 829 + */ 830 + if (wgn == (__WGN_READY | __GLBL_PF_VF_CFG_RDY)) { 831 + r32 = readl(rb + PSS_GPIO_OUT_REG); 832 + writel(r32 & ~1, (rb + PSS_GPIO_OUT_REG)); 833 + r32 = readl(rb + PSS_GPIO_OE_REG); 834 + writel(r32 | 1, (rb + PSS_GPIO_OE_REG)); 863 835 } 864 836 865 837 /* ··· 897 813 writel(1, (rb + CT2_LPU0_HOSTFN_MBOX0_MSK)); 898 814 writel(1, (rb + CT2_LPU1_HOSTFN_MBOX0_MSK)); 899 815 900 - r32 = readl((rb + CT2_LPU0_HOSTFN_CMD_STAT)); 901 - if (r32 == 1) { 902 - writel(1, (rb + CT2_LPU0_HOSTFN_CMD_STAT)); 903 - readl((rb + CT2_LPU0_HOSTFN_CMD_STAT)); 904 - } 905 - r32 = readl((rb + CT2_LPU1_HOSTFN_CMD_STAT)); 906 - if (r32 == 1) { 907 - writel(1, (rb + CT2_LPU1_HOSTFN_CMD_STAT)); 908 - readl((rb + CT2_LPU1_HOSTFN_CMD_STAT)); 909 - } 910 - 911 - bfa_ioc_ct2_mac_reset(rb); 912 - bfa_ioc_ct2_sclk_init(rb); 913 - bfa_ioc_ct2_lclk_init(rb); 914 - 915 - /* 916 - * release soft reset on s_clk & l_clk 917 - */ 918 - r32 = readl((rb + CT2_APP_PLL_SCLK_CTL_REG)); 919 - writel(r32 & ~__APP_PLL_SCLK_LOGIC_SOFT_RESET, 920 - (rb + CT2_APP_PLL_SCLK_CTL_REG)); 921 - 922 - /* 923 - * release soft reset on s_clk & 
l_clk 924 - */ 925 - r32 = readl((rb + CT2_APP_PLL_LCLK_CTL_REG)); 926 - writel(r32 & ~__APP_PLL_LCLK_LOGIC_SOFT_RESET, 927 - (rb + CT2_APP_PLL_LCLK_CTL_REG)); 928 - 929 - /* 930 - * Announce flash device presence, if flash was corrupted. 931 - */ 932 - if (wgn == (__WGN_READY | __GLBL_PF_VF_CFG_RDY)) { 933 - r32 = readl((rb + PSS_GPIO_OUT_REG)); 934 - writel(r32 & ~1, (rb + PSS_GPIO_OUT_REG)); 935 - r32 = readl((rb + PSS_GPIO_OE_REG)); 936 - writel(r32 | 1, (rb + PSS_GPIO_OE_REG)); 816 + /* For first time initialization, no need to clear interrupts */ 817 + r32 = readl(rb + HOST_SEM5_REG); 818 + if (r32 & 0x1) { 819 + r32 = readl(rb + CT2_LPU0_HOSTFN_CMD_STAT); 820 + if (r32 == 1) { 821 + writel(1, rb + CT2_LPU0_HOSTFN_CMD_STAT); 822 + readl((rb + CT2_LPU0_HOSTFN_CMD_STAT)); 823 + } 824 + r32 = readl(rb + CT2_LPU1_HOSTFN_CMD_STAT); 825 + if (r32 == 1) { 826 + writel(1, rb + CT2_LPU1_HOSTFN_CMD_STAT); 827 + readl(rb + CT2_LPU1_HOSTFN_CMD_STAT); 828 + } 937 829 } 938 830 939 831 bfa_ioc_ct2_mem_init(rb); 940 832 941 - writel(BFI_IOC_UNINIT, (rb + CT2_BFA_IOC0_STATE_REG)); 942 - writel(BFI_IOC_UNINIT, (rb + CT2_BFA_IOC1_STATE_REG)); 833 + writel(BFI_IOC_UNINIT, rb + CT2_BFA_IOC0_STATE_REG); 834 + writel(BFI_IOC_UNINIT, rb + CT2_BFA_IOC1_STATE_REG); 835 + 943 836 return BFA_STATUS_OK; 944 837 }
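Several hunks above (the SCLK/LCLK PLL start in `bfa_ioc_ct2_pll_init()` and the NFC resume loop) wait on a status bit with a fixed iteration budget (`BFA_IOC_PLL_POLL`, `CT2_NFC_MAX_DELAY`) rather than spinning open-endedly, then `WARN_ON()` if the bit never settles. A generic sketch of that bounded-poll pattern, with a callback standing in for `readl()` and an illustrative budget:

```c
#include <assert.h>
#include <stdint.h>

#define POLL_BUDGET 1000000

/* Poll until (reg & mask) reaches the wanted state or the budget runs
 * out.  Returns 1 on success, 0 on timeout (caller WARNs or fails). */
int poll_reg_bit(uint32_t (*read_reg)(void *ctx), void *ctx,
		 uint32_t mask, int want)
{
	int i;

	for (i = 0; i < POLL_BUDGET; i++) {
		uint32_t r32 = read_reg(ctx);
		if (!!(r32 & mask) == want)
			return 1;	/* bit reached the desired state */
	}
	return 0;			/* budget exhausted */
}

/* Test double: a fake register the caller can set. */
uint32_t fake_reg_value;
uint32_t fake_read(void *ctx)
{
	(void)ctx;
	return fake_reg_value;
}
```

The budget turns a wedged controller into a diagnosable warning instead of a hung boot.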
+19 -50
drivers/scsi/bfa/bfa_svc.c
··· 1280 1280 switch (event) { 1281 1281 case BFA_LPS_SM_RESUME: 1282 1282 bfa_sm_set_state(lps, bfa_lps_sm_login); 1283 + bfa_lps_send_login(lps); 1283 1284 break; 1284 1285 1285 1286 case BFA_LPS_SM_OFFLINE: ··· 1579 1578 break; 1580 1579 1581 1580 case BFA_STATUS_VPORT_MAX: 1582 - if (!rsp->ext_status) 1581 + if (rsp->ext_status) 1583 1582 bfa_lps_no_res(lps, rsp->ext_status); 1584 1583 break; 1585 1584 ··· 3085 3084 } 3086 3085 3087 3086 static void 3088 - bfa_fcport_send_txcredit(void *port_cbarg) 3089 - { 3090 - 3091 - struct bfa_fcport_s *fcport = port_cbarg; 3092 - struct bfi_fcport_set_svc_params_req_s *m; 3093 - 3094 - /* 3095 - * check for room in queue to send request now 3096 - */ 3097 - m = bfa_reqq_next(fcport->bfa, BFA_REQQ_PORT); 3098 - if (!m) { 3099 - bfa_trc(fcport->bfa, fcport->cfg.tx_bbcredit); 3100 - return; 3101 - } 3102 - 3103 - bfi_h2i_set(m->mh, BFI_MC_FCPORT, BFI_FCPORT_H2I_SET_SVC_PARAMS_REQ, 3104 - bfa_fn_lpu(fcport->bfa)); 3105 - m->tx_bbcredit = cpu_to_be16((u16)fcport->cfg.tx_bbcredit); 3106 - m->bb_scn = fcport->cfg.bb_scn; 3107 - 3108 - /* 3109 - * queue I/O message to firmware 3110 - */ 3111 - bfa_reqq_produce(fcport->bfa, BFA_REQQ_PORT, m->mh); 3112 - } 3113 - 3114 - static void 3115 3087 bfa_fcport_qos_stats_swap(struct bfa_qos_stats_s *d, 3116 3088 struct bfa_qos_stats_s *s) 3117 3089 { ··· 3576 3602 return BFA_STATUS_UNSUPP_SPEED; 3577 3603 } 3578 3604 3579 - /* For Mezz card, port speed entered needs to be checked */ 3580 - if (bfa_mfg_is_mezz(fcport->bfa->ioc.attr->card_type)) { 3581 - if (bfa_ioc_get_type(&fcport->bfa->ioc) == BFA_IOC_TYPE_FC) { 3582 - /* For CT2, 1G is not supported */ 3583 - if ((speed == BFA_PORT_SPEED_1GBPS) && 3584 - (bfa_asic_id_ct2(bfa->ioc.pcidev.device_id))) 3585 - return BFA_STATUS_UNSUPP_SPEED; 3605 + /* Port speed entered needs to be checked */ 3606 + if (bfa_ioc_get_type(&fcport->bfa->ioc) == BFA_IOC_TYPE_FC) { 3607 + /* For CT2, 1G is not supported */ 3608 + if ((speed == 
BFA_PORT_SPEED_1GBPS) && 3609 + (bfa_asic_id_ct2(bfa->ioc.pcidev.device_id))) 3610 + return BFA_STATUS_UNSUPP_SPEED; 3586 3611 3587 - /* Already checked for Auto Speed and Max Speed supp */ 3588 - if (!(speed == BFA_PORT_SPEED_1GBPS || 3589 - speed == BFA_PORT_SPEED_2GBPS || 3590 - speed == BFA_PORT_SPEED_4GBPS || 3591 - speed == BFA_PORT_SPEED_8GBPS || 3592 - speed == BFA_PORT_SPEED_16GBPS || 3593 - speed == BFA_PORT_SPEED_AUTO)) 3594 - return BFA_STATUS_UNSUPP_SPEED; 3595 - } else { 3596 - if (speed != BFA_PORT_SPEED_10GBPS) 3597 - return BFA_STATUS_UNSUPP_SPEED; 3598 - } 3612 + /* Already checked for Auto Speed and Max Speed supp */ 3613 + if (!(speed == BFA_PORT_SPEED_1GBPS || 3614 + speed == BFA_PORT_SPEED_2GBPS || 3615 + speed == BFA_PORT_SPEED_4GBPS || 3616 + speed == BFA_PORT_SPEED_8GBPS || 3617 + speed == BFA_PORT_SPEED_16GBPS || 3618 + speed == BFA_PORT_SPEED_AUTO)) 3619 + return BFA_STATUS_UNSUPP_SPEED; 3620 + } else { 3621 + if (speed != BFA_PORT_SPEED_10GBPS) 3622 + return BFA_STATUS_UNSUPP_SPEED; 3599 3623 } 3600 3624 3601 3625 fcport->cfg.speed = speed; ··· 3737 3765 fcport->cfg.bb_scn = bb_scn; 3738 3766 if (bb_scn) 3739 3767 fcport->bbsc_op_state = BFA_TRUE; 3740 - bfa_fcport_send_txcredit(fcport); 3741 3768 } 3742 3769 3743 3770 /* ··· 3796 3825 attr->port_state = BFA_PORT_ST_IOCDIS; 3797 3826 else if (bfa_ioc_fw_mismatch(&fcport->bfa->ioc)) 3798 3827 attr->port_state = BFA_PORT_ST_FWMISMATCH; 3799 - else if (bfa_ioc_is_acq_addr(&fcport->bfa->ioc)) 3800 - attr->port_state = BFA_PORT_ST_ACQ_ADDR; 3801 3828 } 3802 3829 3803 3830 /* FCoE vlan */
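The reshuffled check above validates a requested speed for FC ports (1/2/4/8/16G or auto, with 1G refused on CT2 ASICs) versus FCoE ports (10G only), no longer gated on Mezz cards. A condensed sketch of that decision; the enum mirrors the `BFA_PORT_SPEED_*` idea but is a local stand-in:

```c
#include <assert.h>

enum speed { SPD_AUTO, SPD_1G, SPD_2G, SPD_4G, SPD_8G, SPD_10G, SPD_16G };

/* Returns 1 if the speed is acceptable, 0 for the
 * BFA_STATUS_UNSUPP_SPEED case. */
int speed_supported(int is_fc_port, int is_ct2_asic, enum speed s)
{
	if (!is_fc_port)
		return s == SPD_10G;	/* FCoE: 10G only */

	if (s == SPD_1G && is_ct2_asic)
		return 0;		/* CT2 has no 1G support */

	switch (s) {
	case SPD_AUTO: case SPD_1G: case SPD_2G:
	case SPD_4G: case SPD_8G: case SPD_16G:
		return 1;
	default:
		return 0;		/* e.g. 10G requested on an FC port */
	}
}
```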
-4
drivers/scsi/bfa/bfa_svc.h
··· 663 663 void bfa_cb_lps_cvl_event(void *bfad, void *uarg); 664 664 665 665 /* FAA specific APIs */ 666 - bfa_status_t bfa_faa_enable(struct bfa_s *bfa, 667 - bfa_cb_iocfc_t cbfn, void *cbarg); 668 - bfa_status_t bfa_faa_disable(struct bfa_s *bfa, 669 - bfa_cb_iocfc_t cbfn, void *cbarg); 670 666 bfa_status_t bfa_faa_query(struct bfa_s *bfa, struct bfa_faa_attr_s *attr, 671 667 bfa_cb_iocfc_t cbfn, void *cbarg); 672 668
+43 -4
drivers/scsi/bfa/bfad_attr.c
··· 442 442 return status; 443 443 } 444 444 445 + int 446 + bfad_im_issue_fc_host_lip(struct Scsi_Host *shost) 447 + { 448 + struct bfad_im_port_s *im_port = 449 + (struct bfad_im_port_s *) shost->hostdata[0]; 450 + struct bfad_s *bfad = im_port->bfad; 451 + struct bfad_hal_comp fcomp; 452 + unsigned long flags; 453 + uint32_t status; 454 + 455 + init_completion(&fcomp.comp); 456 + spin_lock_irqsave(&bfad->bfad_lock, flags); 457 + status = bfa_port_disable(&bfad->bfa.modules.port, 458 + bfad_hcb_comp, &fcomp); 459 + spin_unlock_irqrestore(&bfad->bfad_lock, flags); 460 + 461 + if (status != BFA_STATUS_OK) 462 + return -EIO; 463 + 464 + wait_for_completion(&fcomp.comp); 465 + if (fcomp.status != BFA_STATUS_OK) 466 + return -EIO; 467 + 468 + spin_lock_irqsave(&bfad->bfad_lock, flags); 469 + status = bfa_port_enable(&bfad->bfa.modules.port, 470 + bfad_hcb_comp, &fcomp); 471 + spin_unlock_irqrestore(&bfad->bfad_lock, flags); 472 + if (status != BFA_STATUS_OK) 473 + return -EIO; 474 + 475 + wait_for_completion(&fcomp.comp); 476 + if (fcomp.status != BFA_STATUS_OK) 477 + return -EIO; 478 + 479 + return 0; 480 + } 481 + 445 482 static int 446 483 bfad_im_vport_delete(struct fc_vport *fc_vport) 447 484 { ··· 494 457 unsigned long flags; 495 458 struct completion fcomp; 496 459 497 - if (im_port->flags & BFAD_PORT_DELETE) 498 - goto free_scsi_host; 460 + if (im_port->flags & BFAD_PORT_DELETE) { 461 + bfad_scsi_host_free(bfad, im_port); 462 + list_del(&vport->list_entry); 463 + return 0; 464 + } 499 465 500 466 port = im_port->port; 501 467 ··· 529 489 530 490 wait_for_completion(vport->comp_del); 531 491 532 - free_scsi_host: 533 492 bfad_scsi_host_free(bfad, im_port); 534 493 list_del(&vport->list_entry); 535 494 kfree(vport); ··· 618 579 .show_rport_dev_loss_tmo = 1, 619 580 .get_rport_dev_loss_tmo = bfad_im_get_rport_loss_tmo, 620 581 .set_rport_dev_loss_tmo = bfad_im_set_rport_loss_tmo, 621 - 582 + .issue_fc_host_lip = bfad_im_issue_fc_host_lip, 622 583 .vport_create = 
bfad_im_vport_create, 623 584 .vport_delete = bfad_im_vport_delete, 624 585 .vport_disable = bfad_im_vport_disable,
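The new `bfad_im_issue_fc_host_lip()` above implements the fc_host LIP hook as a synchronous disable-then-enable of the base port, waiting on a completion after each step and returning `-EIO` on the first failure. A condensed, single-threaded sketch of that control flow (the stand-ins model only the status plumbing, not the real completions or locking):

```c
#include <assert.h>

#define SK_OK   0
#define SK_EIO  (-5)		/* -EIO stand-in */

struct port_sketch {
	int up;
	int fail_disable;	/* test knobs to force either step to fail */
	int fail_enable;
};

static int port_disable(struct port_sketch *p)
{
	if (p->fail_disable)
		return SK_EIO;
	p->up = 0;
	return SK_OK;
}

static int port_enable(struct port_sketch *p)
{
	if (p->fail_enable)
		return SK_EIO;
	p->up = 1;
	return SK_OK;
}

/* Loop initialization == bounce the link; stop at the first failed step
 * so a dead disable never cascades into a bogus enable attempt. */
int issue_lip(struct port_sketch *p)
{
	int rc = port_disable(p);

	if (rc != SK_OK)
		return rc;
	return port_enable(p);
}
```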
+11 -51
drivers/scsi/bfa/bfad_bsg.c
··· 1288 1288 } 1289 1289 1290 1290 int 1291 - bfad_iocmd_faa_enable(struct bfad_s *bfad, void *cmd) 1292 - { 1293 - struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd; 1294 - unsigned long flags; 1295 - struct bfad_hal_comp fcomp; 1296 - 1297 - init_completion(&fcomp.comp); 1298 - iocmd->status = BFA_STATUS_OK; 1299 - spin_lock_irqsave(&bfad->bfad_lock, flags); 1300 - iocmd->status = bfa_faa_enable(&bfad->bfa, bfad_hcb_comp, &fcomp); 1301 - spin_unlock_irqrestore(&bfad->bfad_lock, flags); 1302 - 1303 - if (iocmd->status != BFA_STATUS_OK) 1304 - goto out; 1305 - 1306 - wait_for_completion(&fcomp.comp); 1307 - iocmd->status = fcomp.status; 1308 - out: 1309 - return 0; 1310 - } 1311 - 1312 - int 1313 - bfad_iocmd_faa_disable(struct bfad_s *bfad, void *cmd) 1314 - { 1315 - struct bfa_bsg_gen_s *iocmd = (struct bfa_bsg_gen_s *)cmd; 1316 - unsigned long flags; 1317 - struct bfad_hal_comp fcomp; 1318 - 1319 - init_completion(&fcomp.comp); 1320 - iocmd->status = BFA_STATUS_OK; 1321 - spin_lock_irqsave(&bfad->bfad_lock, flags); 1322 - iocmd->status = bfa_faa_disable(&bfad->bfa, bfad_hcb_comp, &fcomp); 1323 - spin_unlock_irqrestore(&bfad->bfad_lock, flags); 1324 - 1325 - if (iocmd->status != BFA_STATUS_OK) 1326 - goto out; 1327 - 1328 - wait_for_completion(&fcomp.comp); 1329 - iocmd->status = fcomp.status; 1330 - out: 1331 - return 0; 1332 - } 1333 - 1334 - int 1335 1291 bfad_iocmd_faa_query(struct bfad_s *bfad, void *cmd) 1336 1292 { 1337 1293 struct bfa_bsg_faa_attr_s *iocmd = (struct bfa_bsg_faa_attr_s *)cmd; ··· 1874 1918 struct bfa_bsg_debug_s *iocmd = (struct bfa_bsg_debug_s *)cmd; 1875 1919 void *iocmd_bufptr; 1876 1920 unsigned long flags; 1921 + u32 offset; 1877 1922 1878 1923 if (bfad_chk_iocmd_sz(payload_len, sizeof(struct bfa_bsg_debug_s), 1879 1924 BFA_DEBUG_FW_CORE_CHUNK_SZ) != BFA_STATUS_OK) { ··· 1892 1935 1893 1936 iocmd_bufptr = (char *)iocmd + sizeof(struct bfa_bsg_debug_s); 1894 1937 spin_lock_irqsave(&bfad->bfad_lock, flags); 1938 + offset = 
iocmd->offset; 1895 1939 iocmd->status = bfa_ioc_debug_fwcore(&bfad->bfa.ioc, iocmd_bufptr, 1896 - (u32 *)&iocmd->offset, &iocmd->bufsz); 1940 + &offset, &iocmd->bufsz); 1941 + iocmd->offset = offset; 1897 1942 spin_unlock_irqrestore(&bfad->bfad_lock, flags); 1898 1943 out: 1899 1944 return 0; ··· 2592 2633 case IOCMD_FLASH_DISABLE_OPTROM: 2593 2634 rc = bfad_iocmd_ablk_optrom(bfad, cmd, iocmd); 2594 2635 break; 2595 - case IOCMD_FAA_ENABLE: 2596 - rc = bfad_iocmd_faa_enable(bfad, iocmd); 2597 - break; 2598 - case IOCMD_FAA_DISABLE: 2599 - rc = bfad_iocmd_faa_disable(bfad, iocmd); 2600 - break; 2601 2636 case IOCMD_FAA_QUERY: 2602 2637 rc = bfad_iocmd_faa_query(bfad, iocmd); 2603 2638 break; ··· 2762 2809 struct bfad_im_port_s *im_port = 2763 2810 (struct bfad_im_port_s *) job->shost->hostdata[0]; 2764 2811 struct bfad_s *bfad = im_port->bfad; 2812 + struct request_queue *request_q = job->req->q; 2765 2813 void *payload_kbuf; 2766 2814 int rc = -EINVAL; 2815 + 2816 + /* 2817 + * Set the BSG device request_queue size to 256 to support 2818 + * payloads larger than 512*1024K bytes. 2819 + */ 2820 + blk_queue_max_segments(request_q, 256); 2767 2821 2768 2822 /* Allocate a temp buffer to hold the passed in user space command */ 2769 2823 payload_kbuf = kzalloc(job->request_payload.payload_len, GFP_KERNEL);
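The bfad_bsg.c hunk above replaces a `(u32 *)&iocmd->offset` cast with a copy through a properly typed `u32` local before and after the `bfa_ioc_debug_fwcore()` call. A minimal userspace sketch of that copy-in/copy-out pattern, assuming (as the diff suggests) that the structure field is wider than the `u32` the callee expects; the function names here are hypothetical stand-ins, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for bfa_ioc_debug_fwcore(): advances a 32-bit
 * read offset by the chunk size it consumed. */
static void read_fwcore_chunk(uint32_t *offset, uint32_t bufsz)
{
    *offset += bufsz;
}

/* Copy in/copy out through a correctly typed local instead of casting
 * the address of a wider structure field to (u32 *). On a big-endian
 * machine such a cast would hand the callee the wrong half of the
 * 64-bit field; the copy is correct on either endianness. */
static uint64_t debug_fwcore(uint64_t ioctl_offset, uint32_t bufsz)
{
    uint32_t offset = (uint32_t)ioctl_offset;   /* copy in */
    read_fwcore_chunk(&offset, bufsz);
    return offset;                              /* copy out */
}
```

The driver does the same dance under `bfad_lock`, writing the local back to `iocmd->offset` before unlocking.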
-2
drivers/scsi/bfa/bfad_bsg.h
··· 83 83 IOCMD_PORT_CFG_MODE, 84 84 IOCMD_FLASH_ENABLE_OPTROM, 85 85 IOCMD_FLASH_DISABLE_OPTROM, 86 - IOCMD_FAA_ENABLE, 87 - IOCMD_FAA_DISABLE, 88 86 IOCMD_FAA_QUERY, 89 87 IOCMD_CEE_GET_ATTR, 90 88 IOCMD_CEE_GET_STATS,
+1 -1
drivers/scsi/bfa/bfad_drv.h
··· 56 56 #ifdef BFA_DRIVER_VERSION 57 57 #define BFAD_DRIVER_VERSION BFA_DRIVER_VERSION 58 58 #else 59 - #define BFAD_DRIVER_VERSION "3.0.2.2" 59 + #define BFAD_DRIVER_VERSION "3.0.23.0" 60 60 #endif 61 61 62 62 #define BFAD_PROTO_NAME FCPI_NAME
+11 -6
drivers/scsi/bfa/bfi_ms.h
··· 28 28 BFI_IOCFC_H2I_CFG_REQ = 1, 29 29 BFI_IOCFC_H2I_SET_INTR_REQ = 2, 30 30 BFI_IOCFC_H2I_UPDATEQ_REQ = 3, 31 - BFI_IOCFC_H2I_FAA_ENABLE_REQ = 4, 32 - BFI_IOCFC_H2I_FAA_DISABLE_REQ = 5, 33 - BFI_IOCFC_H2I_FAA_QUERY_REQ = 6, 31 + BFI_IOCFC_H2I_FAA_QUERY_REQ = 4, 32 + BFI_IOCFC_H2I_ADDR_REQ = 5, 34 33 }; 35 34 36 35 enum bfi_iocfc_i2h_msgs { 37 36 BFI_IOCFC_I2H_CFG_REPLY = BFA_I2HM(1), 38 37 BFI_IOCFC_I2H_UPDATEQ_RSP = BFA_I2HM(3), 39 - BFI_IOCFC_I2H_FAA_ENABLE_RSP = BFA_I2HM(4), 40 - BFI_IOCFC_I2H_FAA_DISABLE_RSP = BFA_I2HM(5), 41 - BFI_IOCFC_I2H_FAA_QUERY_RSP = BFA_I2HM(6), 38 + BFI_IOCFC_I2H_FAA_QUERY_RSP = BFA_I2HM(4), 39 + BFI_IOCFC_I2H_ADDR_MSG = BFA_I2HM(5), 42 40 }; 43 41 44 42 struct bfi_iocfc_cfg_s { ··· 180 182 */ 181 183 struct bfi_faa_en_dis_s { 182 184 struct bfi_mhdr_s mh; /* common msg header */ 185 + }; 186 + 187 + struct bfi_faa_addr_msg_s { 188 + struct bfi_mhdr_s mh; /* common msg header */ 189 + u8 rsvd[4]; 190 + wwn_t pwwn; /* Fabric acquired PWWN */ 191 + wwn_t nwwn; /* Fabric acquired PWWN */ 183 192 }; 184 193 185 194 /*
+6
drivers/scsi/bfa/bfi_reg.h
··· 335 335 #define __PMM_1T_PNDB_P 0x00000002 336 336 #define CT2_PMM_1T_CONTROL_REG_P1 0x00023c1c 337 337 #define CT2_WGN_STATUS 0x00014990 338 + #define __A2T_AHB_LOAD 0x00000800 338 339 #define __WGN_READY 0x00000400 339 340 #define __GLBL_PF_VF_CFG_RDY 0x00000200 341 + #define CT2_NFC_CSR_CLR_REG 0x00027420 340 342 #define CT2_NFC_CSR_SET_REG 0x00027424 341 343 #define __HALT_NFC_CONTROLLER 0x00000002 342 344 #define __NFC_CONTROLLER_HALTED 0x00001000 345 + #define CT2_RSC_GPR15_REG 0x0002765c 346 + #define CT2_CSI_FW_CTL_REG 0x00027080 347 + #define CT2_CSI_FW_CTL_SET_REG 0x00027088 348 + #define __RESET_AND_START_SCLK_LCLK_PLLS 0x00010000 343 349 344 350 #define CT2_CSI_MAC0_CONTROL_REG 0x000270d0 345 351 #define __CSI_MAC_RESET 0x00000010
+2 -2
drivers/scsi/bnx2fc/bnx2fc_fcoe.c
··· 439 439 fr->fr_dev = lport; 440 440 441 441 bg = &bnx2fc_global; 442 - spin_lock_bh(&bg->fcoe_rx_list.lock); 442 + spin_lock(&bg->fcoe_rx_list.lock); 443 443 444 444 __skb_queue_tail(&bg->fcoe_rx_list, skb); 445 445 if (bg->fcoe_rx_list.qlen == 1) 446 446 wake_up_process(bg->thread); 447 447 448 - spin_unlock_bh(&bg->fcoe_rx_list.lock); 448 + spin_unlock(&bg->fcoe_rx_list.lock); 449 449 450 450 return 0; 451 451 err:
+35 -48
drivers/scsi/fcoe/fcoe.c
··· 1436 1436 goto err; 1437 1437 1438 1438 fps = &per_cpu(fcoe_percpu, cpu); 1439 - spin_lock_bh(&fps->fcoe_rx_list.lock); 1439 + spin_lock(&fps->fcoe_rx_list.lock); 1440 1440 if (unlikely(!fps->thread)) { 1441 1441 /* 1442 1442 * The targeted CPU is not ready, let's target ··· 1447 1447 "ready for incoming skb- using first online " 1448 1448 "CPU.\n"); 1449 1449 1450 - spin_unlock_bh(&fps->fcoe_rx_list.lock); 1450 + spin_unlock(&fps->fcoe_rx_list.lock); 1451 1451 cpu = cpumask_first(cpu_online_mask); 1452 1452 fps = &per_cpu(fcoe_percpu, cpu); 1453 - spin_lock_bh(&fps->fcoe_rx_list.lock); 1453 + spin_lock(&fps->fcoe_rx_list.lock); 1454 1454 if (!fps->thread) { 1455 - spin_unlock_bh(&fps->fcoe_rx_list.lock); 1455 + spin_unlock(&fps->fcoe_rx_list.lock); 1456 1456 goto err; 1457 1457 } 1458 1458 } ··· 1463 1463 * so we're free to queue skbs into it's queue. 1464 1464 */ 1465 1465 1466 - /* If this is a SCSI-FCP frame, and this is already executing on the 1467 - * correct CPU, and the queue for this CPU is empty, then go ahead 1468 - * and process the frame directly in the softirq context. 1469 - * This lets us process completions without context switching from the 1470 - * NET_RX softirq, to our receive processing thread, and then back to 1471 - * BLOCK softirq context. 1466 + /* 1467 + * Note: We used to have a set of conditions under which we would 1468 + * call fcoe_recv_frame directly, rather than queuing to the rx list 1469 + * as it could save a few cycles, but doing so is prohibited, as 1470 + * fcoe_recv_frame has several paths that may sleep, which is forbidden 1471 + * in softirq context. 
1472 1472 */ 1473 - if (fh->fh_type == FC_TYPE_FCP && 1474 - cpu == smp_processor_id() && 1475 - skb_queue_empty(&fps->fcoe_rx_list)) { 1476 - spin_unlock_bh(&fps->fcoe_rx_list.lock); 1477 - fcoe_recv_frame(skb); 1478 - } else { 1479 - __skb_queue_tail(&fps->fcoe_rx_list, skb); 1480 - if (fps->fcoe_rx_list.qlen == 1) 1481 - wake_up_process(fps->thread); 1482 - spin_unlock_bh(&fps->fcoe_rx_list.lock); 1483 - } 1473 + __skb_queue_tail(&fps->fcoe_rx_list, skb); 1474 + if (fps->thread->state == TASK_INTERRUPTIBLE) 1475 + wake_up_process(fps->thread); 1476 + spin_unlock(&fps->fcoe_rx_list.lock); 1484 1477 1485 1478 return 0; 1486 1479 err: ··· 1790 1797 { 1791 1798 struct fcoe_percpu_s *p = arg; 1792 1799 struct sk_buff *skb; 1800 + struct sk_buff_head tmp; 1801 + 1802 + skb_queue_head_init(&tmp); 1793 1803 1794 1804 set_user_nice(current, -20); 1795 1805 1796 1806 while (!kthread_should_stop()) { 1797 1807 1798 1808 spin_lock_bh(&p->fcoe_rx_list.lock); 1799 - while ((skb = __skb_dequeue(&p->fcoe_rx_list)) == NULL) { 1809 + skb_queue_splice_init(&p->fcoe_rx_list, &tmp); 1810 + spin_unlock_bh(&p->fcoe_rx_list.lock); 1811 + 1812 + while ((skb = __skb_dequeue(&tmp)) != NULL) 1813 + fcoe_recv_frame(skb); 1814 + 1815 + spin_lock_bh(&p->fcoe_rx_list.lock); 1816 + if (!skb_queue_len(&p->fcoe_rx_list)) { 1800 1817 set_current_state(TASK_INTERRUPTIBLE); 1801 1818 spin_unlock_bh(&p->fcoe_rx_list.lock); 1802 1819 schedule(); 1803 1820 set_current_state(TASK_RUNNING); 1804 - if (kthread_should_stop()) 1805 - return 0; 1806 - spin_lock_bh(&p->fcoe_rx_list.lock); 1807 - } 1808 - spin_unlock_bh(&p->fcoe_rx_list.lock); 1809 - fcoe_recv_frame(skb); 1821 + } else 1822 + spin_unlock_bh(&p->fcoe_rx_list.lock); 1810 1823 } 1811 1824 return 0; 1812 1825 } ··· 2186 2187 /* start FIP Discovery and FLOGI */ 2187 2188 lport->boot_time = jiffies; 2188 2189 fc_fabric_login(lport); 2189 - if (!fcoe_link_ok(lport)) 2190 + if (!fcoe_link_ok(lport)) { 2191 + rtnl_unlock(); 2190 2192 
fcoe_ctlr_link_up(&fcoe->ctlr); 2193 + mutex_unlock(&fcoe_config_mutex); 2194 + return rc; 2195 + } 2191 2196 2192 2197 out_nodev: 2193 2198 rtnl_unlock(); ··· 2264 2261 static void fcoe_percpu_clean(struct fc_lport *lport) 2265 2262 { 2266 2263 struct fcoe_percpu_s *pp; 2267 - struct fcoe_rcv_info *fr; 2268 - struct sk_buff_head *list; 2269 - struct sk_buff *skb, *next; 2270 - struct sk_buff *head; 2264 + struct sk_buff *skb; 2271 2265 unsigned int cpu; 2272 2266 2273 2267 for_each_possible_cpu(cpu) { 2274 2268 pp = &per_cpu(fcoe_percpu, cpu); 2275 - spin_lock_bh(&pp->fcoe_rx_list.lock); 2276 - list = &pp->fcoe_rx_list; 2277 - head = list->next; 2278 - for (skb = head; skb != (struct sk_buff *)list; 2279 - skb = next) { 2280 - next = skb->next; 2281 - fr = fcoe_dev_from_skb(skb); 2282 - if (fr->fr_dev == lport) { 2283 - __skb_unlink(skb, list); 2284 - kfree_skb(skb); 2285 - } 2286 - } 2287 2269 2288 - if (!pp->thread || !cpu_online(cpu)) { 2289 - spin_unlock_bh(&pp->fcoe_rx_list.lock); 2270 + if (!pp->thread || !cpu_online(cpu)) 2290 2271 continue; 2291 - } 2292 2272 2293 2273 skb = dev_alloc_skb(0); 2294 2274 if (!skb) { ··· 2280 2294 } 2281 2295 skb->destructor = fcoe_percpu_flush_done; 2282 2296 2297 + spin_lock_bh(&pp->fcoe_rx_list.lock); 2283 2298 __skb_queue_tail(&pp->fcoe_rx_list, skb); 2284 2299 if (pp->fcoe_rx_list.qlen == 1) 2285 2300 wake_up_process(pp->thread);
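The fcoe.c receive-thread rework above switches from dequeuing one skb per lock acquisition to splicing the whole pending list onto a private list and processing it with the lock dropped. A userspace sketch of that batch-drain pattern, using a plain singly linked list in place of `sk_buff_head` (all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; int frame_id; };

/* O(1) analogue of skb_queue_splice_init(): move the entire shared
 * queue onto a private list while the lock is held, leaving the shared
 * queue empty for producers. */
static struct node *splice_init(struct node **shared_head)
{
    struct node *batch = *shared_head;
    *shared_head = NULL;
    return batch;
}

static int drain(struct node **shared_head)
{
    /* spin_lock_bh(&p->fcoe_rx_list.lock) would bracket this call */
    struct node *batch = splice_init(shared_head);
    /* spin_unlock_bh(&p->fcoe_rx_list.lock) */

    int processed = 0;
    for (struct node *n = batch; n; n = n->next)
        processed++;            /* stands in for fcoe_recv_frame() */
    return processed;
}
```

Producers contend for the lock once per batch instead of once per frame, which is the contention reduction the `[v2]` commit title refers to.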
+28 -10
drivers/scsi/fcoe/fcoe_ctlr.c
··· 242 242 printk(KERN_INFO "libfcoe: host%d: FIP selected " 243 243 "Fibre-Channel Forwarder MAC %pM\n", 244 244 fip->lp->host->host_no, sel->fcf_mac); 245 - memcpy(fip->dest_addr, sel->fcf_mac, ETH_ALEN); 245 + memcpy(fip->dest_addr, sel->fcoe_mac, ETH_ALEN); 246 246 fip->map_dest = 0; 247 247 } 248 248 unlock: ··· 824 824 memcpy(fcf->fcf_mac, 825 825 ((struct fip_mac_desc *)desc)->fd_mac, 826 826 ETH_ALEN); 827 + memcpy(fcf->fcoe_mac, fcf->fcf_mac, ETH_ALEN); 827 828 if (!is_valid_ether_addr(fcf->fcf_mac)) { 828 829 LIBFCOE_FIP_DBG(fip, 829 830 "Invalid MAC addr %pM in FIP adv\n", ··· 1014 1013 struct fip_desc *desc; 1015 1014 struct fip_encaps *els; 1016 1015 struct fcoe_dev_stats *stats; 1016 + struct fcoe_fcf *sel; 1017 1017 enum fip_desc_type els_dtype = 0; 1018 1018 u8 els_op; 1019 1019 u8 sub; ··· 1042 1040 goto drop; 1043 1041 /* Drop ELS if there are duplicate critical descriptors */ 1044 1042 if (desc->fip_dtype < 32) { 1045 - if (desc_mask & 1U << desc->fip_dtype) { 1043 + if ((desc->fip_dtype != FIP_DT_MAC) && 1044 + (desc_mask & 1U << desc->fip_dtype)) { 1046 1045 LIBFCOE_FIP_DBG(fip, "Duplicate Critical " 1047 1046 "Descriptors in FIP ELS\n"); 1048 1047 goto drop; ··· 1052 1049 } 1053 1050 switch (desc->fip_dtype) { 1054 1051 case FIP_DT_MAC: 1052 + sel = fip->sel_fcf; 1055 1053 if (desc_cnt == 1) { 1056 1054 LIBFCOE_FIP_DBG(fip, "FIP descriptors " 1057 1055 "received out of order\n"); 1058 1056 goto drop; 1059 1057 } 1058 + /* 1059 + * Some switch implementations send two MAC descriptors, 1060 + * with first MAC(granted_mac) being the FPMA, and the 1061 + * second one(fcoe_mac) is used as destination address 1062 + * for sending/receiving FCoE packets. FIP traffic is 1063 + * sent using fip_mac. For regular switches, both 1064 + * fip_mac and fcoe_mac would be the same. 
1065 + */ 1066 + if (desc_cnt == 2) 1067 + memcpy(granted_mac, 1068 + ((struct fip_mac_desc *)desc)->fd_mac, 1069 + ETH_ALEN); 1060 1070 1061 1071 if (dlen != sizeof(struct fip_mac_desc)) 1062 1072 goto len_err; 1063 - memcpy(granted_mac, 1064 - ((struct fip_mac_desc *)desc)->fd_mac, 1065 - ETH_ALEN); 1073 + 1074 + if ((desc_cnt == 3) && (sel)) 1075 + memcpy(sel->fcoe_mac, 1076 + ((struct fip_mac_desc *)desc)->fd_mac, 1077 + ETH_ALEN); 1066 1078 break; 1067 1079 case FIP_DT_FLOGI: 1068 1080 case FIP_DT_FDISC: ··· 1291 1273 * No Vx_Port description. Clear all NPIV ports, 1292 1274 * followed by physical port 1293 1275 */ 1294 - mutex_lock(&lport->lp_mutex); 1295 - list_for_each_entry(vn_port, &lport->vports, list) 1296 - fc_lport_reset(vn_port); 1297 - mutex_unlock(&lport->lp_mutex); 1298 - 1299 1276 mutex_lock(&fip->ctlr_mutex); 1300 1277 per_cpu_ptr(lport->dev_stats, 1301 1278 get_cpu())->VLinkFailureCount++; 1302 1279 put_cpu(); 1303 1280 fcoe_ctlr_reset(fip); 1304 1281 mutex_unlock(&fip->ctlr_mutex); 1282 + 1283 + mutex_lock(&lport->lp_mutex); 1284 + list_for_each_entry(vn_port, &lport->vports, list) 1285 + fc_lport_reset(vn_port); 1286 + mutex_unlock(&lport->lp_mutex); 1305 1287 1306 1288 fc_lport_reset(fip->lp); 1307 1289 fcoe_ctlr_solicit(fip, NULL);
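The fcoe_ctlr.c comment above describes switches that send two MAC descriptors: the first (granted FPMA) for FIP traffic, the second for FCoE frames, with both roles falling to one MAC on ordinary switches. A simplified sketch of that selection, abstracting away the descriptor-count bookkeeping in the real parser (identifiers are hypothetical):

```c
#include <assert.h>
#include <string.h>

/* Pick the FIP-granted MAC and the FCoE data-path MAC from the MAC
 * descriptors of a FLOGI response. With one descriptor, both roles
 * use the same address; with two, the second is the FCoE MAC. */
static void pick_macs(const unsigned char (*desc_macs)[6], int nmacs,
                      unsigned char *granted_mac, unsigned char *fcoe_mac)
{
    memcpy(granted_mac, desc_macs[0], 6);
    memcpy(fcoe_mac, desc_macs[nmacs > 1 ? 1 : 0], 6);
}
```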
+46 -27
drivers/scsi/ipr.c
··· 104 104 static const struct ipr_chip_cfg_t ipr_chip_cfg[] = { 105 105 { /* Gemstone, Citrine, Obsidian, and Obsidian-E */ 106 106 .mailbox = 0x0042C, 107 + .max_cmds = 100, 107 108 .cache_line_size = 0x20, 109 + .clear_isr = 1, 108 110 { 109 111 .set_interrupt_mask_reg = 0x0022C, 110 112 .clr_interrupt_mask_reg = 0x00230, ··· 128 126 }, 129 127 { /* Snipe and Scamp */ 130 128 .mailbox = 0x0052C, 129 + .max_cmds = 100, 131 130 .cache_line_size = 0x20, 131 + .clear_isr = 1, 132 132 { 133 133 .set_interrupt_mask_reg = 0x00288, 134 134 .clr_interrupt_mask_reg = 0x0028C, ··· 152 148 }, 153 149 { /* CRoC */ 154 150 .mailbox = 0x00044, 151 + .max_cmds = 1000, 155 152 .cache_line_size = 0x20, 153 + .clear_isr = 0, 156 154 { 157 155 .set_interrupt_mask_reg = 0x00010, 158 156 .clr_interrupt_mask_reg = 0x00018, ··· 853 847 854 848 ipr_trc_hook(ipr_cmd, IPR_TRACE_START, 0); 855 849 856 - mb(); 857 - 858 850 ipr_send_command(ipr_cmd); 859 851 } 860 852 ··· 985 981 ipr_cmd->done = ipr_process_error; 986 982 987 983 ipr_trc_hook(ipr_cmd, IPR_TRACE_START, IPR_IOA_RES_ADDR); 988 - 989 - mb(); 990 984 991 985 ipr_send_command(ipr_cmd); 992 986 } else { ··· 4341 4339 4342 4340 list_for_each_entry(res, &ioa_cfg->used_res_q, queue) { 4343 4341 if ((res->bus == starget->channel) && 4344 - (res->target == starget->id) && 4345 - (res->lun == 0)) { 4342 + (res->target == starget->id)) { 4346 4343 return res; 4347 4344 } 4348 4345 } ··· 4415 4414 struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) shost->hostdata; 4416 4415 4417 4416 if (ioa_cfg->sis64) { 4418 - if (starget->channel == IPR_ARRAY_VIRTUAL_BUS) 4419 - clear_bit(starget->id, ioa_cfg->array_ids); 4420 - else if (starget->channel == IPR_VSET_VIRTUAL_BUS) 4421 - clear_bit(starget->id, ioa_cfg->vset_ids); 4422 - else if (starget->channel == 0) 4423 - clear_bit(starget->id, ioa_cfg->target_ids); 4417 + if (!ipr_find_starget(starget)) { 4418 + if (starget->channel == IPR_ARRAY_VIRTUAL_BUS) 4419 + clear_bit(starget->id, 
ioa_cfg->array_ids); 4420 + else if (starget->channel == IPR_VSET_VIRTUAL_BUS) 4421 + clear_bit(starget->id, ioa_cfg->vset_ids); 4422 + else if (starget->channel == 0) 4423 + clear_bit(starget->id, ioa_cfg->target_ids); 4424 + } 4424 4425 } 4425 4426 4426 4427 if (sata_port) { ··· 5051 5048 del_timer(&ioa_cfg->reset_cmd->timer); 5052 5049 ipr_reset_ioa_job(ioa_cfg->reset_cmd); 5053 5050 } else if ((int_reg & IPR_PCII_HRRQ_UPDATED) == int_reg) { 5054 - if (ipr_debug && printk_ratelimit()) 5055 - dev_err(&ioa_cfg->pdev->dev, 5056 - "Spurious interrupt detected. 0x%08X\n", int_reg); 5057 - writel(IPR_PCII_HRRQ_UPDATED, ioa_cfg->regs.clr_interrupt_reg32); 5058 - int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32); 5059 - return IRQ_NONE; 5051 + if (ioa_cfg->clear_isr) { 5052 + if (ipr_debug && printk_ratelimit()) 5053 + dev_err(&ioa_cfg->pdev->dev, 5054 + "Spurious interrupt detected. 0x%08X\n", int_reg); 5055 + writel(IPR_PCII_HRRQ_UPDATED, ioa_cfg->regs.clr_interrupt_reg32); 5056 + int_reg = readl(ioa_cfg->regs.sense_interrupt_reg32); 5057 + return IRQ_NONE; 5058 + } 5060 5059 } else { 5061 5060 if (int_reg & IPR_PCII_IOA_UNIT_CHECKED) 5062 5061 ioa_cfg->ioa_unit_checked = 1; ··· 5157 5152 ioa_cfg->toggle_bit ^= 1u; 5158 5153 } 5159 5154 } 5155 + 5156 + if (ipr_cmd && !ioa_cfg->clear_isr) 5157 + break; 5160 5158 5161 5159 if (ipr_cmd != NULL) { 5162 5160 /* Clear the PCI interrupt */ ··· 5862 5854 rc = ipr_build_ioadl(ioa_cfg, ipr_cmd); 5863 5855 } 5864 5856 5865 - if (likely(rc == 0)) { 5866 - mb(); 5867 - ipr_send_command(ipr_cmd); 5868 - } else { 5869 - list_move_tail(&ipr_cmd->queue, &ioa_cfg->free_q); 5870 - return SCSI_MLQUEUE_HOST_BUSY; 5857 + if (unlikely(rc != 0)) { 5858 + list_move_tail(&ipr_cmd->queue, &ioa_cfg->free_q); 5859 + return SCSI_MLQUEUE_HOST_BUSY; 5871 5860 } 5872 5861 5862 + ipr_send_command(ipr_cmd); 5873 5863 return 0; 5874 5864 } 5875 5865 ··· 6244 6238 WARN_ON(1); 6245 6239 return AC_ERR_INVALID; 6246 6240 } 6247 - 6248 - mb(); 6249 6241 
6250 6242 ipr_send_command(ipr_cmd); 6251 6243 ··· 8281 8277 if (ioa_cfg->ipr_cmd_pool) 8282 8278 pci_pool_destroy (ioa_cfg->ipr_cmd_pool); 8283 8279 8280 + kfree(ioa_cfg->ipr_cmnd_list); 8281 + kfree(ioa_cfg->ipr_cmnd_list_dma); 8282 + ioa_cfg->ipr_cmnd_list = NULL; 8283 + ioa_cfg->ipr_cmnd_list_dma = NULL; 8284 8284 ioa_cfg->ipr_cmd_pool = NULL; 8285 8285 } 8286 8286 ··· 8360 8352 int i; 8361 8353 8362 8354 ioa_cfg->ipr_cmd_pool = pci_pool_create (IPR_NAME, ioa_cfg->pdev, 8363 - sizeof(struct ipr_cmnd), 16, 0); 8355 + sizeof(struct ipr_cmnd), 512, 0); 8364 8356 8365 8357 if (!ioa_cfg->ipr_cmd_pool) 8366 8358 return -ENOMEM; 8359 + 8360 + ioa_cfg->ipr_cmnd_list = kcalloc(IPR_NUM_CMD_BLKS, sizeof(struct ipr_cmnd *), GFP_KERNEL); 8361 + ioa_cfg->ipr_cmnd_list_dma = kcalloc(IPR_NUM_CMD_BLKS, sizeof(dma_addr_t), GFP_KERNEL); 8362 + 8363 + if (!ioa_cfg->ipr_cmnd_list || !ioa_cfg->ipr_cmnd_list_dma) { 8364 + ipr_free_cmd_blks(ioa_cfg); 8365 + return -ENOMEM; 8366 + } 8367 8367 8368 8368 for (i = 0; i < IPR_NUM_CMD_BLKS; i++) { 8369 8369 ipr_cmd = pci_pool_alloc (ioa_cfg->ipr_cmd_pool, GFP_KERNEL, &dma_addr); ··· 8600 8584 host->max_channel = IPR_MAX_BUS_TO_SCAN; 8601 8585 host->unique_id = host->host_no; 8602 8586 host->max_cmd_len = IPR_MAX_CDB_LEN; 8587 + host->can_queue = ioa_cfg->max_cmds; 8603 8588 pci_set_drvdata(pdev, ioa_cfg); 8604 8589 8605 8590 p = &ioa_cfg->chip_cfg->regs; ··· 8785 8768 /* set SIS 32 or SIS 64 */ 8786 8769 ioa_cfg->sis64 = ioa_cfg->ipr_chip->sis_type == IPR_SIS64 ? 1 : 0; 8787 8770 ioa_cfg->chip_cfg = ioa_cfg->ipr_chip->cfg; 8771 + ioa_cfg->clear_isr = ioa_cfg->chip_cfg->clear_isr; 8772 + ioa_cfg->max_cmds = ioa_cfg->chip_cfg->max_cmds; 8788 8773 8789 8774 if (ipr_transop_timeout) 8790 8775 ioa_cfg->transop_timeout = ipr_transop_timeout;
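Among the ipr.c changes above, the fixed-size `ipr_cmnd_list`/`ipr_cmnd_list_dma` arrays become `kcalloc()`ed so `can_queue` can scale with the chip's `max_cmds`. A userspace sketch of the allocate-both, free-both-on-partial-failure pattern the hunk uses (the driver calls `ipr_free_cmd_blks()`, which relies on `kfree(NULL)` being a no-op, just as `free(NULL)` is here; names below are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate both command-block lookup tables; on any failure, free
 * whatever did get allocated and reset the pointers so a later
 * cleanup pass sees a consistent state. */
static int alloc_cmd_tables(void ***list, unsigned long **dma, size_t n)
{
    *list = calloc(n, sizeof(**list));
    *dma  = calloc(n, sizeof(**dma));
    if (!*list || !*dma) {
        free(*list);            /* free(NULL) is a no-op */
        free(*dma);
        *list = NULL;
        *dma  = NULL;
        return -1;              /* -ENOMEM in the driver */
    }
    return 0;
}
```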
+10 -6
drivers/scsi/ipr.h
··· 38 38 /* 39 39 * Literals 40 40 */ 41 - #define IPR_DRIVER_VERSION "2.5.2" 42 - #define IPR_DRIVER_DATE "(April 27, 2011)" 41 + #define IPR_DRIVER_VERSION "2.5.3" 42 + #define IPR_DRIVER_DATE "(March 10, 2012)" 43 43 44 44 /* 45 45 * IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding ··· 53 53 * IPR_NUM_BASE_CMD_BLKS: This defines the maximum number of 54 54 * ops the mid-layer can send to the adapter. 55 55 */ 56 - #define IPR_NUM_BASE_CMD_BLKS 100 56 + #define IPR_NUM_BASE_CMD_BLKS (ioa_cfg->max_cmds) 57 57 58 58 #define PCI_DEVICE_ID_IBM_OBSIDIAN_E 0x0339 59 59 ··· 153 153 #define IPR_NUM_INTERNAL_CMD_BLKS (IPR_NUM_HCAMS + \ 154 154 ((IPR_NUM_RESET_RELOAD_RETRIES + 1) * 2) + 4) 155 155 156 - #define IPR_MAX_COMMANDS IPR_NUM_BASE_CMD_BLKS 156 + #define IPR_MAX_COMMANDS 100 157 157 #define IPR_NUM_CMD_BLKS (IPR_NUM_BASE_CMD_BLKS + \ 158 158 IPR_NUM_INTERNAL_CMD_BLKS) 159 159 ··· 1305 1305 1306 1306 struct ipr_chip_cfg_t { 1307 1307 u32 mailbox; 1308 + u16 max_cmds; 1308 1309 u8 cache_line_size; 1310 + u8 clear_isr; 1309 1311 struct ipr_interrupt_offsets regs; 1310 1312 }; 1311 1313 ··· 1390 1388 u8 sis64:1; 1391 1389 u8 dump_timeout:1; 1392 1390 u8 cfg_locked:1; 1391 + u8 clear_isr:1; 1393 1392 1394 1393 u8 revid; 1395 1394 ··· 1504 1501 struct ata_host ata_host; 1505 1502 char ipr_cmd_label[8]; 1506 1503 #define IPR_CMD_LABEL "ipr_cmd" 1507 - struct ipr_cmnd *ipr_cmnd_list[IPR_NUM_CMD_BLKS]; 1508 - dma_addr_t ipr_cmnd_list_dma[IPR_NUM_CMD_BLKS]; 1504 + u32 max_cmds; 1505 + struct ipr_cmnd **ipr_cmnd_list; 1506 + dma_addr_t *ipr_cmnd_list_dma; 1509 1507 }; /* struct ipr_ioa_cfg */ 1510 1508 1511 1509 struct ipr_cmnd {
+12 -2
drivers/scsi/libfc/fc_exch.c
··· 2263 2263 mp->class = class; 2264 2264 /* adjust em exch xid range for offload */ 2265 2265 mp->min_xid = min_xid; 2266 - mp->max_xid = max_xid; 2266 + 2267 + /* reduce range so per cpu pool fits into PCPU_MIN_UNIT_SIZE pool */ 2268 + pool_exch_range = (PCPU_MIN_UNIT_SIZE - sizeof(*pool)) / 2269 + sizeof(struct fc_exch *); 2270 + if ((max_xid - min_xid + 1) / (fc_cpu_mask + 1) > pool_exch_range) { 2271 + mp->max_xid = pool_exch_range * (fc_cpu_mask + 1) + 2272 + min_xid - 1; 2273 + } else { 2274 + mp->max_xid = max_xid; 2275 + pool_exch_range = (mp->max_xid - mp->min_xid + 1) / 2276 + (fc_cpu_mask + 1); 2277 + } 2267 2278 2268 2279 mp->ep_pool = mempool_create_slab_pool(2, fc_em_cachep); 2269 2280 if (!mp->ep_pool) ··· 2285 2274 * divided across all cpus. The exch pointers array memory is 2286 2275 * allocated for exch range per pool. 2287 2276 */ 2288 - pool_exch_range = (mp->max_xid - mp->min_xid + 1) / (fc_cpu_mask + 1); 2289 2277 mp->pool_max_index = pool_exch_range - 1; 2290 2278 2291 2279 /*
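The fc_exch.c hunk above shrinks the exchange-ID range when the per-CPU pool's pointer array would not fit in a single `PCPU_MIN_UNIT_SIZE` percpu allocation. The arithmetic can be sketched as follows; the header size, pointer size, and unit size below are illustrative, not the kernel's actual values:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PCPU_MIN_UNIT_SIZE (32 * 1024)  /* illustrative unit size */

/* Each pool stores one exchange pointer per XID it owns, after a small
 * pool header. If dividing the requested XID range across the pools
 * would overflow one percpu unit, clamp max_xid so it fits exactly. */
static uint32_t clamp_max_xid(uint32_t min_xid, uint32_t max_xid,
                              uint32_t nr_pools, size_t pool_header,
                              size_t ptr_size)
{
    uint32_t per_pool = (uint32_t)((PCPU_MIN_UNIT_SIZE - pool_header) /
                                   ptr_size);
    if ((max_xid - min_xid + 1) / nr_pools > per_pool)
        return per_pool * nr_pools + min_xid - 1;
    return max_xid;
}
```

With a 64-byte header and 8-byte pointers, each pool holds 4088 exchanges, so a full 16-bit XID space split over 4 pools gets clamped while a 32-pool split does not.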
+9 -1
drivers/scsi/libfc/fc_lport.c
··· 1743 1743 mfs = ntohs(flp->fl_csp.sp_bb_data) & 1744 1744 FC_SP_BB_DATA_MASK; 1745 1745 if (mfs >= FC_SP_MIN_MAX_PAYLOAD && 1746 - mfs < lport->mfs) 1746 + mfs <= lport->mfs) { 1747 1747 lport->mfs = mfs; 1748 + fc_host_maxframe_size(lport->host) = mfs; 1749 + } else { 1750 + FC_LPORT_DBG(lport, "FLOGI bad mfs:%hu response, " 1751 + "lport->mfs:%hu\n", mfs, lport->mfs); 1752 + fc_lport_error(lport, fp); 1753 + goto err; 1754 + } 1755 + 1748 1756 csp_flags = ntohs(flp->fl_csp.sp_features); 1749 1757 r_a_tov = ntohl(flp->fl_csp.sp_r_a_tov); 1750 1758 e_d_tov = ntohl(flp->fl_csp.sp_e_d_tov);
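The fc_lport.c change above turns a bad max-frame-size in the FLOGI response into a hard error (via `fc_lport_error()`) instead of silently keeping the old value, and note the comparison flips from `<` to `<=` so an mfs equal to the lport's own is accepted. The accept/reject rule reduces to a range check, sketched here with the limits passed as parameters since the exact `FC_SP_MIN_MAX_PAYLOAD` value is not shown in the hunk:

```c
#include <assert.h>

/* A FLOGI-granted mfs is usable only if it is at least the FC-SP
 * minimum payload and no larger than what the lport advertised. */
static int flogi_mfs_ok(unsigned int mfs, unsigned int sp_min_max_payload,
                        unsigned int lport_mfs)
{
    return mfs >= sp_min_max_payload && mfs <= lport_mfs;
}
```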
+3 -1
drivers/scsi/lpfc/Makefile
··· 1 1 #/******************************************************************* 2 2 # * This file is part of the Emulex Linux Device Driver for * 3 3 # * Fibre Channel Host Bus Adapters. * 4 - # * Copyright (C) 2004-2011 Emulex. All rights reserved. * 4 + # * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 # * EMULEX and SLI are trademarks of Emulex. * 6 6 # * www.emulex.com * 7 7 # * * ··· 21 21 22 22 ccflags-$(GCOV) := -fprofile-arcs -ftest-coverage 23 23 ccflags-$(GCOV) += -O0 24 + 25 + ccflags-y += -Werror 24 26 25 27 obj-$(CONFIG_SCSI_LPFC) := lpfc.o 26 28
+7 -1
drivers/scsi/lpfc/lpfc.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2011 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 840 840 struct dentry *debug_dumpData; /* BlockGuard BPL */ 841 841 struct dentry *debug_dumpDif; /* BlockGuard BPL */ 842 842 struct dentry *debug_InjErrLBA; /* LBA to inject errors at */ 843 + struct dentry *debug_InjErrNPortID; /* NPortID to inject errors at */ 844 + struct dentry *debug_InjErrWWPN; /* WWPN to inject errors at */ 843 845 struct dentry *debug_writeGuard; /* inject write guard_tag errors */ 844 846 struct dentry *debug_writeApp; /* inject write app_tag errors */ 845 847 struct dentry *debug_writeRef; /* inject write ref_tag errors */ ··· 856 854 uint32_t lpfc_injerr_rgrd_cnt; 857 855 uint32_t lpfc_injerr_rapp_cnt; 858 856 uint32_t lpfc_injerr_rref_cnt; 857 + uint32_t lpfc_injerr_nportid; 858 + struct lpfc_name lpfc_injerr_wwpn; 859 859 sector_t lpfc_injerr_lba; 860 860 #define LPFC_INJERR_LBA_OFF (sector_t)(-1) 861 861 ··· 912 908 atomic_t fast_event_count; 913 909 uint32_t fcoe_eventtag; 914 910 uint32_t fcoe_eventtag_at_fcf_scan; 911 + uint32_t fcoe_cvl_eventtag; 912 + uint32_t fcoe_cvl_eventtag_attn; 915 913 struct lpfc_fcf fcf; 916 914 uint8_t fc_map[3]; 917 915 uint8_t valid_vlan;
+2 -2
drivers/scsi/lpfc/lpfc_attr.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2011 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 2575 2575 # lpfc_enable_da_id: This turns on the DA_ID CT command that deregisters 2576 2576 # objects that have been registered with the nameserver after login. 2577 2577 */ 2578 - LPFC_VPORT_ATTR_R(enable_da_id, 0, 0, 1, 2578 + LPFC_VPORT_ATTR_R(enable_da_id, 1, 0, 1, 2579 2579 "Deregister nameserver objects before LOGO"); 2580 2580 2581 2581 /*
+65 -15
drivers/scsi/lpfc/lpfc_debugfs.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2007-2011 Emulex. All rights reserved. * 4 + * Copyright (C) 2007-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * * ··· 1010 1010 { 1011 1011 struct dentry *dent = file->f_dentry; 1012 1012 struct lpfc_hba *phba = file->private_data; 1013 - char cbuf[16]; 1013 + char cbuf[32]; 1014 + uint64_t tmp = 0; 1014 1015 int cnt = 0; 1015 1016 1016 1017 if (dent == phba->debug_writeGuard) 1017 - cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_wgrd_cnt); 1018 + cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wgrd_cnt); 1018 1019 else if (dent == phba->debug_writeApp) 1019 - cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_wapp_cnt); 1020 + cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wapp_cnt); 1020 1021 else if (dent == phba->debug_writeRef) 1021 - cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_wref_cnt); 1022 + cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wref_cnt); 1022 1023 else if (dent == phba->debug_readGuard) 1023 - cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_rgrd_cnt); 1024 + cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rgrd_cnt); 1024 1025 else if (dent == phba->debug_readApp) 1025 - cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_rapp_cnt); 1026 + cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rapp_cnt); 1026 1027 else if (dent == phba->debug_readRef) 1027 - cnt = snprintf(cbuf, 16, "%u\n", phba->lpfc_injerr_rref_cnt); 1028 - else if (dent == phba->debug_InjErrLBA) 1029 - cnt = snprintf(cbuf, 16, "0x%lx\n", 1030 - (unsigned long) phba->lpfc_injerr_lba); 1031 - else 1028 + cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rref_cnt); 1029 + else if (dent == phba->debug_InjErrNPortID) 1030 + cnt = snprintf(cbuf, 32, "0x%06x\n", phba->lpfc_injerr_nportid); 1031 
+ else if (dent == phba->debug_InjErrWWPN) { 1032 + memcpy(&tmp, &phba->lpfc_injerr_wwpn, sizeof(struct lpfc_name)); 1033 + tmp = cpu_to_be64(tmp); 1034 + cnt = snprintf(cbuf, 32, "0x%016llx\n", tmp); 1035 + } else if (dent == phba->debug_InjErrLBA) { 1036 + if (phba->lpfc_injerr_lba == (sector_t)(-1)) 1037 + cnt = snprintf(cbuf, 32, "off\n"); 1038 + else 1039 + cnt = snprintf(cbuf, 32, "0x%llx\n", 1040 + (uint64_t) phba->lpfc_injerr_lba); 1041 + } else 1032 1042 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 1033 1043 "0547 Unknown debugfs error injection entry\n"); 1034 1044 ··· 1052 1042 struct dentry *dent = file->f_dentry; 1053 1043 struct lpfc_hba *phba = file->private_data; 1054 1044 char dstbuf[32]; 1055 - unsigned long tmp; 1045 + uint64_t tmp = 0; 1056 1046 int size; 1057 1047 1058 1048 memset(dstbuf, 0, 32); ··· 1060 1050 if (copy_from_user(dstbuf, buf, size)) 1061 1051 return 0; 1062 1052 1063 - if (strict_strtoul(dstbuf, 0, &tmp)) 1053 + if (dent == phba->debug_InjErrLBA) { 1054 + if ((buf[0] == 'o') && (buf[1] == 'f') && (buf[2] == 'f')) 1055 + tmp = (uint64_t)(-1); 1056 + } 1057 + 1058 + if ((tmp == 0) && (kstrtoull(dstbuf, 0, &tmp))) 1064 1059 return 0; 1065 1060 1066 1061 if (dent == phba->debug_writeGuard) ··· 1082 1067 phba->lpfc_injerr_rref_cnt = (uint32_t)tmp; 1083 1068 else if (dent == phba->debug_InjErrLBA) 1084 1069 phba->lpfc_injerr_lba = (sector_t)tmp; 1085 - else 1070 + else if (dent == phba->debug_InjErrNPortID) 1071 + phba->lpfc_injerr_nportid = (uint32_t)(tmp & Mask_DID); 1072 + else if (dent == phba->debug_InjErrWWPN) { 1073 + tmp = cpu_to_be64(tmp); 1074 + memcpy(&phba->lpfc_injerr_wwpn, &tmp, sizeof(struct lpfc_name)); 1075 + } else 1086 1076 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 1087 1077 "0548 Unknown debugfs error injection entry\n"); 1088 1078 ··· 3969 3949 } 3970 3950 phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 3971 3951 3952 + snprintf(name, sizeof(name), "InjErrNPortID"); 3953 + phba->debug_InjErrNPortID = 3954 + 
debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR, 3955 + phba->hba_debugfs_root, 3956 + phba, &lpfc_debugfs_op_dif_err); 3957 + if (!phba->debug_InjErrNPortID) { 3958 + lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 3959 + "0809 Cannot create debugfs InjErrNPortID\n"); 3960 + goto debug_failed; 3961 + } 3962 + 3963 + snprintf(name, sizeof(name), "InjErrWWPN"); 3964 + phba->debug_InjErrWWPN = 3965 + debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR, 3966 + phba->hba_debugfs_root, 3967 + phba, &lpfc_debugfs_op_dif_err); 3968 + if (!phba->debug_InjErrWWPN) { 3969 + lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT, 3970 + "0810 Cannot create debugfs InjErrWWPN\n"); 3971 + goto debug_failed; 3972 + } 3973 + 3972 3974 snprintf(name, sizeof(name), "writeGuardInjErr"); 3973 3975 phba->debug_writeGuard = 3974 3976 debugfs_create_file(name, S_IFREG|S_IRUGO|S_IWUSR, ··· 4362 4320 if (phba->debug_InjErrLBA) { 4363 4321 debugfs_remove(phba->debug_InjErrLBA); /* InjErrLBA */ 4364 4322 phba->debug_InjErrLBA = NULL; 4323 + } 4324 + if (phba->debug_InjErrNPortID) { /* InjErrNPortID */ 4325 + debugfs_remove(phba->debug_InjErrNPortID); 4326 + phba->debug_InjErrNPortID = NULL; 4327 + } 4328 + if (phba->debug_InjErrWWPN) { 4329 + debugfs_remove(phba->debug_InjErrWWPN); /* InjErrWWPN */ 4330 + phba->debug_InjErrWWPN = NULL; 4365 4331 } 4366 4332 if (phba->debug_writeGuard) { 4367 4333 debugfs_remove(phba->debug_writeGuard); /* writeGuard */
+13 -4
drivers/scsi/lpfc/lpfc_els.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2011 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 925 925 * due to new FCF discovery 926 926 */ 927 927 if ((phba->hba_flag & HBA_FIP_SUPPORT) && 928 - (phba->fcf.fcf_flag & FCF_DISCOVERY) && 929 - !((irsp->ulpStatus == IOSTAT_LOCAL_REJECT) && 930 - (irsp->un.ulpWord[4] == IOERR_SLI_ABORTED))) { 928 + (phba->fcf.fcf_flag & FCF_DISCOVERY)) { 929 + if (phba->link_state < LPFC_LINK_UP) 930 + goto stop_rr_fcf_flogi; 931 + if ((phba->fcoe_cvl_eventtag_attn == 932 + phba->fcoe_cvl_eventtag) && 933 + (irsp->ulpStatus == IOSTAT_LOCAL_REJECT) && 934 + (irsp->un.ulpWord[4] == IOERR_SLI_ABORTED)) 935 + goto stop_rr_fcf_flogi; 936 + else 937 + phba->fcoe_cvl_eventtag_attn = 938 + phba->fcoe_cvl_eventtag; 931 939 lpfc_printf_log(phba, KERN_WARNING, LOG_FIP | LOG_ELS, 932 940 "2611 FLOGI failed on FCF (x%x), " 933 941 "status:x%x/x%x, tmo:x%x, perform " ··· 951 943 goto out; 952 944 } 953 945 946 + stop_rr_fcf_flogi: 954 947 /* FLOGI failure */ 955 948 lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS, 956 949 "2858 FLOGI failure Status:x%x/x%x TMO:x%x\n",
+15 -9
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2011 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 2843 2843 struct lpfc_vport *vport = mboxq->vport; 2844 2844 struct Scsi_Host *shost = lpfc_shost_from_vport(vport); 2845 2845 2846 - if (mboxq->u.mb.mbxStatus) { 2846 + /* 2847 + * VFI not supported for interface type 0, so ignore any mailbox 2848 + * error (except VFI in use) and continue with the discovery. 2849 + */ 2850 + if (mboxq->u.mb.mbxStatus && 2851 + (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) != 2852 + LPFC_SLI_INTF_IF_TYPE_0) && 2853 + mboxq->u.mb.mbxStatus != MBX_VFI_IN_USE) { 2847 2854 lpfc_printf_vlog(vport, KERN_ERR, LOG_MBOX, 2848 2855 "2018 REG_VFI mbxStatus error x%x " 2849 2856 "HBA state x%x\n", ··· 5680 5673 ret = 1; 5681 5674 spin_unlock_irq(shost->host_lock); 5682 5675 goto out; 5683 - } else { 5676 + } else if (ndlp->nlp_flag & NLP_RPI_REGISTERED) { 5677 + ret = 1; 5684 5678 lpfc_printf_log(phba, KERN_INFO, LOG_ELS, 5685 - "2624 RPI %x DID %x flg %x still " 5686 - "logged in\n", 5687 - ndlp->nlp_rpi, ndlp->nlp_DID, 5688 - ndlp->nlp_flag); 5689 - if (ndlp->nlp_flag & NLP_RPI_REGISTERED) 5690 - ret = 1; 5679 + "2624 RPI %x DID %x flag %x " 5680 + "still logged in\n", 5681 + ndlp->nlp_rpi, ndlp->nlp_DID, 5682 + ndlp->nlp_flag); 5691 5683 } 5692 5684 } 5693 5685 spin_unlock_irq(shost->host_lock);
+7 -1
drivers/scsi/lpfc/lpfc_hw4.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2009 Emulex. All rights reserved. * 4 + * Copyright (C) 2009-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * * ··· 337 337 #define CQE_CODE_RECEIVE 0x4 338 338 #define CQE_CODE_XRI_ABORTED 0x5 339 339 #define CQE_CODE_RECEIVE_V1 0x9 340 + 341 + /* 342 + * Define mask value for xri_aborted and wcqe completed CQE extended status. 343 + * Currently, extended status is limited to 9 bits (0x0 -> 0x103) . 344 + */ 345 + #define WCQE_PARAM_MASK 0x1FF; 340 346 341 347 /* completion queue entry for wqe completions */ 342 348 struct lpfc_wcqe_complete {
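The new `WCQE_PARAM_MASK` confines the WCQE extended status to its architected 9 bits (the stated range 0x0–0x103 fits in 0x1FF). Note that as committed the `#define` carries a trailing semicolon, which would break use inside an expression; the intended usage is a plain mask, as in this standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

#define WCQE_PARAM_MASK 0x1FF   /* 9-bit extended status, no semicolon */

/* Extract the extended status field from a completion parameter word. */
static uint32_t wcqe_ext_status(uint32_t param)
{
    return param & WCQE_PARAM_MASK;
}
```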
+22 -19
drivers/scsi/lpfc/lpfc_init.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2011 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 2704 2704 } 2705 2705 spin_lock_irq(shost->host_lock); 2706 2706 ndlp->nlp_flag &= ~NLP_NPR_ADISC; 2707 - 2707 + spin_unlock_irq(shost->host_lock); 2708 2708 /* 2709 2709 * Whenever an SLI4 port goes offline, free the 2710 - * RPI. A new RPI when the adapter port comes 2711 - * back online. 2710 + * RPI. Get a new RPI when the adapter port 2711 + * comes back online. 2712 2712 */ 2713 2713 if (phba->sli_rev == LPFC_SLI_REV4) 2714 2714 lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi); 2715 - 2716 - spin_unlock_irq(shost->host_lock); 2717 2715 lpfc_unreg_rpi(vports[i], ndlp); 2718 2716 } 2719 2717 } ··· 2784 2786 2785 2787 spin_lock_irq(&phba->hbalock); 2786 2788 spin_lock(&phba->scsi_buf_list_lock); 2787 - list_for_each_entry_safe(sb, sb_next, &phba->lpfc_scsi_buf_list, list) 2789 + list_for_each_entry_safe(sb, sb_next, &phba->lpfc_scsi_buf_list, list) { 2788 2790 sb->cur_iocbq.sli4_xritag = 2789 2791 phba->sli4_hba.xri_ids[sb->cur_iocbq.sli4_lxritag]; 2792 + set_bit(sb->cur_iocbq.sli4_lxritag, phba->sli4_hba.xri_bmask); 2793 + phba->sli4_hba.max_cfg_param.xri_used++; 2794 + phba->sli4_hba.xri_count++; 2795 + } 2790 2796 spin_unlock(&phba->scsi_buf_list_lock); 2791 2797 spin_unlock_irq(&phba->hbalock); 2792 2798 return 0; ··· 3725 3723 break; 3726 3724 3727 3725 case LPFC_FIP_EVENT_TYPE_FCF_DEAD: 3726 + phba->fcoe_cvl_eventtag = acqe_fip->event_tag; 3728 3727 lpfc_printf_log(phba, KERN_ERR, LOG_FIP | LOG_DISCOVERY, 3729 3728 "2549 FCF (x%x) disconnected from network, " 3730 3729 "tag:x%x\n", acqe_fip->index, acqe_fip->event_tag); ··· 3787 3784 
} 3788 3785 break; 3789 3786 case LPFC_FIP_EVENT_TYPE_CVL: 3787 + phba->fcoe_cvl_eventtag = acqe_fip->event_tag; 3790 3788 lpfc_printf_log(phba, KERN_ERR, LOG_FIP | LOG_DISCOVERY, 3791 3789 "2718 Clear Virtual Link Received for VPI 0x%x" 3792 3790 " tag 0x%x\n", acqe_fip->index, acqe_fip->event_tag); ··· 5230 5226 * rpi is normalized to a zero base because the physical rpi is 5231 5227 * port based. 5232 5228 */ 5233 - curr_rpi_range = phba->sli4_hba.next_rpi - 5234 - phba->sli4_hba.max_cfg_param.rpi_base; 5229 + curr_rpi_range = phba->sli4_hba.next_rpi; 5235 5230 spin_unlock_irq(&phba->hbalock); 5236 5231 5237 5232 /* ··· 5821 5818 readl(phba->sli4_hba.u.if_type2. 5822 5819 ERR2regaddr); 5823 5820 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 5824 - "2888 Port Error Detected " 5825 - "during POST: " 5826 - "port status reg 0x%x, " 5827 - "port_smphr reg 0x%x, " 5821 + "2888 Unrecoverable port error " 5822 + "following POST: port status reg " 5823 + "0x%x, port_smphr reg 0x%x, " 5828 5824 "error 1=0x%x, error 2=0x%x\n", 5829 5825 reg_data.word0, 5830 5826 portsmphr_reg.word0, ··· 6144 6142 phba->sli4_hba.next_xri = phba->sli4_hba.max_cfg_param.xri_base; 6145 6143 phba->vpi_base = phba->sli4_hba.max_cfg_param.vpi_base; 6146 6144 phba->vfi_base = phba->sli4_hba.max_cfg_param.vfi_base; 6147 - phba->sli4_hba.next_rpi = phba->sli4_hba.max_cfg_param.rpi_base; 6148 6145 phba->max_vpi = (phba->sli4_hba.max_cfg_param.max_vpi > 0) ? 6149 6146 (phba->sli4_hba.max_cfg_param.max_vpi - 1) : 0; 6150 6147 phba->max_vports = phba->max_vpi; ··· 7232 7231 uint32_t rdy_chk, num_resets = 0, reset_again = 0; 7233 7232 union lpfc_sli4_cfg_shdr *shdr; 7234 7233 struct lpfc_register reg_data; 7234 + uint16_t devid; 7235 7235 7236 7236 if_type = bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf); 7237 7237 switch (if_type) { ··· 7279 7277 LPFC_SLIPORT_INIT_PORT); 7280 7278 writel(reg_data.word0, phba->sli4_hba.u.if_type2. 
7281 7279 CTRLregaddr); 7282 - 7280 + /* flush */ 7281 + pci_read_config_word(phba->pcidev, 7282 + PCI_DEVICE_ID, &devid); 7283 7283 /* 7284 7284 * Poll the Port Status Register and wait for RDY for 7285 7285 * up to 10 seconds. If the port doesn't respond, treat ··· 7319 7315 phba->work_status[1] = readl( 7320 7316 phba->sli4_hba.u.if_type2.ERR2regaddr); 7321 7317 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 7322 - "2890 Port Error Detected " 7323 - "during Port Reset: " 7324 - "port status reg 0x%x, " 7318 + "2890 Port error detected during port " 7319 + "reset(%d): port status reg 0x%x, " 7325 7320 "error 1=0x%x, error 2=0x%x\n", 7326 - reg_data.word0, 7321 + num_resets, reg_data.word0, 7327 7322 phba->work_status[0], 7328 7323 phba->work_status[1]); 7329 7324 rc = -ENODEV;
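The lpfc_init.c hunk re-marks each SCSI buffer's XRI as in-use (`set_bit` on `xri_bmask`, plus the `xri_used`/`xri_count` counters) when rebuilding the logical-to-physical mapping, so the allocator's accounting matches reality after reset. A hedged userspace sketch of bitmap-backed resource accounting, assuming a plain `unsigned long` array and no locking:

```c
#include <assert.h>
#include <limits.h>

#define MAX_XRI 128
#define ULONG_BITS (sizeof(unsigned long) * CHAR_BIT)

static unsigned long xri_bmask[MAX_XRI / ULONG_BITS];
static int xri_used;

/* Mark one logical XRI busy; returns 0, or -1 if it was already set. */
static int xri_mark_used(unsigned int lxri)
{
    unsigned long bit = 1UL << (lxri % ULONG_BITS);

    if (xri_bmask[lxri / ULONG_BITS] & bit)
        return -1;              /* double accounting would corrupt state */
    xri_bmask[lxri / ULONG_BITS] |= bit;
    xri_used++;
    return 0;
}
```

In the kernel proper this would use the atomic `set_bit()`/`test_and_set_bit()` helpers under the appropriate lock, as the driver does.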
+7 -3
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2009 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 440 440 spin_unlock_irq(shost->host_lock); 441 441 stat.un.b.lsRjtRsnCode = LSRJT_INVALID_CMD; 442 442 stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE; 443 - lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, 443 + rc = lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, 444 444 ndlp, mbox); 445 + if (rc) 446 + mempool_free(mbox, phba->mbox_mem_pool); 445 447 return 1; 446 448 } 447 - lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, mbox); 449 + rc = lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, ndlp, mbox); 450 + if (rc) 451 + mempool_free(mbox, phba->mbox_mem_pool); 448 452 return 1; 449 453 out: 450 454 stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
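The PLOGI hunk above frees the mailbox back to its pool when `lpfc_els_rsp_reject`/`lpfc_els_rsp_acc` returns nonzero, because on failure the callee never consumed it. The underlying idiom is ownership transfer on success only; a sketch with `malloc`/`free` standing in for the mempool (the function names here are hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the ELS response call: consumes mbox only on success. */
static int send_rsp(void *mbox, int should_fail)
{
    if (should_fail)
        return 1;       /* failure: caller keeps ownership of mbox */
    free(mbox);         /* success: callee consumed (freed) the mbox */
    return 0;
}

static int issue_plogi_rsp(int should_fail)
{
    void *mbox = malloc(64);
    int rc = send_rsp(mbox, should_fail);

    if (rc)
        free(mbox);     /* failure path: release it ourselves, no leak */
    return rc;
}
```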
+302 -200
drivers/scsi/lpfc/lpfc_scsi.c
··· 39 39 #include "lpfc_sli4.h" 40 40 #include "lpfc_nl.h" 41 41 #include "lpfc_disc.h" 42 - #include "lpfc_scsi.h" 43 42 #include "lpfc.h" 43 + #include "lpfc_scsi.h" 44 44 #include "lpfc_logmsg.h" 45 45 #include "lpfc_crtn.h" 46 46 #include "lpfc_vport.h" ··· 51 51 int _dump_buf_done; 52 52 53 53 static char *dif_op_str[] = { 54 - "SCSI_PROT_NORMAL", 55 - "SCSI_PROT_READ_INSERT", 56 - "SCSI_PROT_WRITE_STRIP", 57 - "SCSI_PROT_READ_STRIP", 58 - "SCSI_PROT_WRITE_INSERT", 59 - "SCSI_PROT_READ_PASS", 60 - "SCSI_PROT_WRITE_PASS", 54 + "PROT_NORMAL", 55 + "PROT_READ_INSERT", 56 + "PROT_WRITE_STRIP", 57 + "PROT_READ_STRIP", 58 + "PROT_WRITE_INSERT", 59 + "PROT_READ_PASS", 60 + "PROT_WRITE_PASS", 61 + }; 62 + 63 + static char *dif_grd_str[] = { 64 + "NO_GUARD", 65 + "DIF_CRC", 66 + "DIX_IP", 61 67 }; 62 68 63 69 struct scsi_dif_tuple { ··· 1287 1281 1288 1282 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 1289 1283 1290 - #define BG_ERR_INIT 1 1291 - #define BG_ERR_TGT 2 1292 - #define BG_ERR_SWAP 3 1293 - #define BG_ERR_CHECK 4 1284 + /* Return if if error injection is detected by Initiator */ 1285 + #define BG_ERR_INIT 0x1 1286 + /* Return if if error injection is detected by Target */ 1287 + #define BG_ERR_TGT 0x2 1288 + /* Return if if swapping CSUM<-->CRC is required for error injection */ 1289 + #define BG_ERR_SWAP 0x10 1290 + /* Return if disabling Guard/Ref/App checking is required for error injection */ 1291 + #define BG_ERR_CHECK 0x20 1294 1292 1295 1293 /** 1296 1294 * lpfc_bg_err_inject - Determine if we should inject an error ··· 1304 1294 * @apptag: (out) BlockGuard application tag for transmitted data 1305 1295 * @new_guard (in) Value to replace CRC with if needed 1306 1296 * 1307 - * Returns (1) if error injection is detected by Initiator 1308 - * Returns (2) if error injection is detected by Target 1309 - * Returns (3) if swapping CSUM->CRC is required for error injection 1310 - * Returns (4) disabling Guard/Ref/App checking is required for error injection 1297 + * 
Returns BG_ERR_* bit mask or 0 if request ignored 1311 1298 **/ 1312 1299 static int 1313 1300 lpfc_bg_err_inject(struct lpfc_hba *phba, struct scsi_cmnd *sc, ··· 1312 1305 { 1313 1306 struct scatterlist *sgpe; /* s/g prot entry */ 1314 1307 struct scatterlist *sgde; /* s/g data entry */ 1308 + struct lpfc_scsi_buf *lpfc_cmd = NULL; 1315 1309 struct scsi_dif_tuple *src = NULL; 1310 + struct lpfc_nodelist *ndlp; 1311 + struct lpfc_rport_data *rdata; 1316 1312 uint32_t op = scsi_get_prot_op(sc); 1317 1313 uint32_t blksize; 1318 1314 uint32_t numblks; ··· 1328 1318 1329 1319 sgpe = scsi_prot_sglist(sc); 1330 1320 sgde = scsi_sglist(sc); 1331 - 1332 1321 lba = scsi_get_lba(sc); 1322 + 1323 + /* First check if we need to match the LBA */ 1333 1324 if (phba->lpfc_injerr_lba != LPFC_INJERR_LBA_OFF) { 1334 1325 blksize = lpfc_cmd_blksize(sc); 1335 1326 numblks = (scsi_bufflen(sc) + blksize - 1) / blksize; ··· 1345 1334 sizeof(struct scsi_dif_tuple); 1346 1335 if (numblks < blockoff) 1347 1336 blockoff = numblks; 1348 - src = (struct scsi_dif_tuple *)sg_virt(sgpe); 1349 - src += blockoff; 1350 1337 } 1338 + } 1339 + 1340 + /* Next check if we need to match the remote NPortID or WWPN */ 1341 + rdata = sc->device->hostdata; 1342 + if (rdata && rdata->pnode) { 1343 + ndlp = rdata->pnode; 1344 + 1345 + /* Make sure we have the right NPortID if one is specified */ 1346 + if (phba->lpfc_injerr_nportid && 1347 + (phba->lpfc_injerr_nportid != ndlp->nlp_DID)) 1348 + return 0; 1349 + 1350 + /* 1351 + * Make sure we have the right WWPN if one is specified. 1352 + * wwn[0] should be a non-zero NAA in a good WWPN. 
1353 + */ 1354 + if (phba->lpfc_injerr_wwpn.u.wwn[0] && 1355 + (memcmp(&ndlp->nlp_portname, &phba->lpfc_injerr_wwpn, 1356 + sizeof(struct lpfc_name)) != 0)) 1357 + return 0; 1358 + } 1359 + 1360 + /* Setup a ptr to the protection data if the SCSI host provides it */ 1361 + if (sgpe) { 1362 + src = (struct scsi_dif_tuple *)sg_virt(sgpe); 1363 + src += blockoff; 1364 + lpfc_cmd = (struct lpfc_scsi_buf *)sc->host_scribble; 1351 1365 } 1352 1366 1353 1367 /* Should we change the Reference Tag */ ··· 1380 1344 if (phba->lpfc_injerr_wref_cnt) { 1381 1345 switch (op) { 1382 1346 case SCSI_PROT_WRITE_PASS: 1383 - if (blockoff && src) { 1384 - /* Insert error in middle of the IO */ 1347 + if (src) { 1348 + /* 1349 + * For WRITE_PASS, force the error 1350 + * to be sent on the wire. It should 1351 + * be detected by the Target. 1352 + * If blockoff != 0 error will be 1353 + * inserted in middle of the IO. 1354 + */ 1385 1355 1386 1356 lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1387 1357 "9076 BLKGRD: Injecting reftag error: " 1388 1358 "write lba x%lx + x%x oldrefTag x%x\n", 1389 1359 (unsigned long)lba, blockoff, 1390 - src->ref_tag); 1360 + be32_to_cpu(src->ref_tag)); 1391 1361 1392 1362 /* 1393 - * NOTE, this will change ref tag in 1394 - * the memory location forever! 1363 + * Save the old ref_tag so we can 1364 + * restore it on completion. 
1395 1365 */ 1396 - src->ref_tag = 0xDEADBEEF; 1366 + if (lpfc_cmd) { 1367 + lpfc_cmd->prot_data_type = 1368 + LPFC_INJERR_REFTAG; 1369 + lpfc_cmd->prot_data_segment = 1370 + src; 1371 + lpfc_cmd->prot_data = 1372 + src->ref_tag; 1373 + } 1374 + src->ref_tag = cpu_to_be32(0xDEADBEEF); 1397 1375 phba->lpfc_injerr_wref_cnt--; 1398 - phba->lpfc_injerr_lba = 1399 - LPFC_INJERR_LBA_OFF; 1400 - rc = BG_ERR_CHECK; 1376 + if (phba->lpfc_injerr_wref_cnt == 0) { 1377 + phba->lpfc_injerr_nportid = 0; 1378 + phba->lpfc_injerr_lba = 1379 + LPFC_INJERR_LBA_OFF; 1380 + memset(&phba->lpfc_injerr_wwpn, 1381 + 0, sizeof(struct lpfc_name)); 1382 + } 1383 + rc = BG_ERR_TGT | BG_ERR_CHECK; 1384 + 1401 1385 break; 1402 1386 } 1403 1387 /* Drop thru */ 1388 + case SCSI_PROT_WRITE_INSERT: 1389 + /* 1390 + * For WRITE_INSERT, force the error 1391 + * to be sent on the wire. It should be 1392 + * detected by the Target. 1393 + */ 1394 + /* DEADBEEF will be the reftag on the wire */ 1395 + *reftag = 0xDEADBEEF; 1396 + phba->lpfc_injerr_wref_cnt--; 1397 + if (phba->lpfc_injerr_wref_cnt == 0) { 1398 + phba->lpfc_injerr_nportid = 0; 1399 + phba->lpfc_injerr_lba = 1400 + LPFC_INJERR_LBA_OFF; 1401 + memset(&phba->lpfc_injerr_wwpn, 1402 + 0, sizeof(struct lpfc_name)); 1403 + } 1404 + rc = BG_ERR_TGT | BG_ERR_CHECK; 1405 + 1406 + lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1407 + "9078 BLKGRD: Injecting reftag error: " 1408 + "write lba x%lx\n", (unsigned long)lba); 1409 + break; 1404 1410 case SCSI_PROT_WRITE_STRIP: 1405 1411 /* 1406 1412 * For WRITE_STRIP and WRITE_PASS, ··· 1451 1373 */ 1452 1374 *reftag = 0xDEADBEEF; 1453 1375 phba->lpfc_injerr_wref_cnt--; 1454 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1376 + if (phba->lpfc_injerr_wref_cnt == 0) { 1377 + phba->lpfc_injerr_nportid = 0; 1378 + phba->lpfc_injerr_lba = 1379 + LPFC_INJERR_LBA_OFF; 1380 + memset(&phba->lpfc_injerr_wwpn, 1381 + 0, sizeof(struct lpfc_name)); 1382 + } 1455 1383 rc = BG_ERR_INIT; 1456 1384 1457 1385 
lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1458 1386 "9077 BLKGRD: Injecting reftag error: " 1459 - "write lba x%lx\n", (unsigned long)lba); 1460 - break; 1461 - case SCSI_PROT_WRITE_INSERT: 1462 - /* 1463 - * For WRITE_INSERT, force the 1464 - * error to be sent on the wire. It should be 1465 - * detected by the Target. 1466 - */ 1467 - /* DEADBEEF will be the reftag on the wire */ 1468 - *reftag = 0xDEADBEEF; 1469 - phba->lpfc_injerr_wref_cnt--; 1470 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1471 - rc = BG_ERR_TGT; 1472 - 1473 - lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1474 - "9078 BLKGRD: Injecting reftag error: " 1475 1387 "write lba x%lx\n", (unsigned long)lba); 1476 1388 break; 1477 1389 } ··· 1469 1401 if (phba->lpfc_injerr_rref_cnt) { 1470 1402 switch (op) { 1471 1403 case SCSI_PROT_READ_INSERT: 1472 - /* 1473 - * For READ_INSERT, it doesn't make sense 1474 - * to change the reftag. 1475 - */ 1476 - break; 1477 1404 case SCSI_PROT_READ_STRIP: 1478 1405 case SCSI_PROT_READ_PASS: 1479 1406 /* ··· 1478 1415 */ 1479 1416 *reftag = 0xDEADBEEF; 1480 1417 phba->lpfc_injerr_rref_cnt--; 1481 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1418 + if (phba->lpfc_injerr_rref_cnt == 0) { 1419 + phba->lpfc_injerr_nportid = 0; 1420 + phba->lpfc_injerr_lba = 1421 + LPFC_INJERR_LBA_OFF; 1422 + memset(&phba->lpfc_injerr_wwpn, 1423 + 0, sizeof(struct lpfc_name)); 1424 + } 1482 1425 rc = BG_ERR_INIT; 1483 1426 1484 1427 lpfc_printf_log(phba, KERN_ERR, LOG_BG, ··· 1500 1431 if (phba->lpfc_injerr_wapp_cnt) { 1501 1432 switch (op) { 1502 1433 case SCSI_PROT_WRITE_PASS: 1503 - if (blockoff && src) { 1504 - /* Insert error in middle of the IO */ 1434 + if (src) { 1435 + /* 1436 + * For WRITE_PASS, force the error 1437 + * to be sent on the wire. It should 1438 + * be detected by the Target. 1439 + * If blockoff != 0 error will be 1440 + * inserted in middle of the IO. 
1441 + */ 1505 1442 1506 1443 lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1507 1444 "9080 BLKGRD: Injecting apptag error: " 1508 1445 "write lba x%lx + x%x oldappTag x%x\n", 1509 1446 (unsigned long)lba, blockoff, 1510 - src->app_tag); 1447 + be16_to_cpu(src->app_tag)); 1511 1448 1512 1449 /* 1513 - * NOTE, this will change app tag in 1514 - * the memory location forever! 1450 + * Save the old app_tag so we can 1451 + * restore it on completion. 1515 1452 */ 1516 - src->app_tag = 0xDEAD; 1453 + if (lpfc_cmd) { 1454 + lpfc_cmd->prot_data_type = 1455 + LPFC_INJERR_APPTAG; 1456 + lpfc_cmd->prot_data_segment = 1457 + src; 1458 + lpfc_cmd->prot_data = 1459 + src->app_tag; 1460 + } 1461 + src->app_tag = cpu_to_be16(0xDEAD); 1517 1462 phba->lpfc_injerr_wapp_cnt--; 1518 - phba->lpfc_injerr_lba = 1519 - LPFC_INJERR_LBA_OFF; 1520 - rc = BG_ERR_CHECK; 1463 + if (phba->lpfc_injerr_wapp_cnt == 0) { 1464 + phba->lpfc_injerr_nportid = 0; 1465 + phba->lpfc_injerr_lba = 1466 + LPFC_INJERR_LBA_OFF; 1467 + memset(&phba->lpfc_injerr_wwpn, 1468 + 0, sizeof(struct lpfc_name)); 1469 + } 1470 + rc = BG_ERR_TGT | BG_ERR_CHECK; 1521 1471 break; 1522 1472 } 1523 1473 /* Drop thru */ 1524 - case SCSI_PROT_WRITE_STRIP: 1525 - /* 1526 - * For WRITE_STRIP and WRITE_PASS, 1527 - * force the error on data 1528 - * being copied from SLI-Host to SLI-Port. 
1529 - */ 1530 - *apptag = 0xDEAD; 1531 - phba->lpfc_injerr_wapp_cnt--; 1532 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1533 - rc = BG_ERR_INIT; 1534 - 1535 - lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1536 - "0812 BLKGRD: Injecting apptag error: " 1537 - "write lba x%lx\n", (unsigned long)lba); 1538 - break; 1539 1474 case SCSI_PROT_WRITE_INSERT: 1540 1475 /* 1541 1476 * For WRITE_INSERT, force the ··· 1549 1476 /* DEAD will be the apptag on the wire */ 1550 1477 *apptag = 0xDEAD; 1551 1478 phba->lpfc_injerr_wapp_cnt--; 1552 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1553 - rc = BG_ERR_TGT; 1479 + if (phba->lpfc_injerr_wapp_cnt == 0) { 1480 + phba->lpfc_injerr_nportid = 0; 1481 + phba->lpfc_injerr_lba = 1482 + LPFC_INJERR_LBA_OFF; 1483 + memset(&phba->lpfc_injerr_wwpn, 1484 + 0, sizeof(struct lpfc_name)); 1485 + } 1486 + rc = BG_ERR_TGT | BG_ERR_CHECK; 1554 1487 1555 1488 lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1556 1489 "0813 BLKGRD: Injecting apptag error: " 1490 + "write lba x%lx\n", (unsigned long)lba); 1491 + break; 1492 + case SCSI_PROT_WRITE_STRIP: 1493 + /* 1494 + * For WRITE_STRIP and WRITE_PASS, 1495 + * force the error on data 1496 + * being copied from SLI-Host to SLI-Port. 1497 + */ 1498 + *apptag = 0xDEAD; 1499 + phba->lpfc_injerr_wapp_cnt--; 1500 + if (phba->lpfc_injerr_wapp_cnt == 0) { 1501 + phba->lpfc_injerr_nportid = 0; 1502 + phba->lpfc_injerr_lba = 1503 + LPFC_INJERR_LBA_OFF; 1504 + memset(&phba->lpfc_injerr_wwpn, 1505 + 0, sizeof(struct lpfc_name)); 1506 + } 1507 + rc = BG_ERR_INIT; 1508 + 1509 + lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1510 + "0812 BLKGRD: Injecting apptag error: " 1557 1511 "write lba x%lx\n", (unsigned long)lba); 1558 1512 break; 1559 1513 } ··· 1588 1488 if (phba->lpfc_injerr_rapp_cnt) { 1589 1489 switch (op) { 1590 1490 case SCSI_PROT_READ_INSERT: 1591 - /* 1592 - * For READ_INSERT, it doesn't make sense 1593 - * to change the apptag. 
1594 - */ 1595 - break; 1596 1491 case SCSI_PROT_READ_STRIP: 1597 1492 case SCSI_PROT_READ_PASS: 1598 1493 /* ··· 1597 1502 */ 1598 1503 *apptag = 0xDEAD; 1599 1504 phba->lpfc_injerr_rapp_cnt--; 1600 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1505 + if (phba->lpfc_injerr_rapp_cnt == 0) { 1506 + phba->lpfc_injerr_nportid = 0; 1507 + phba->lpfc_injerr_lba = 1508 + LPFC_INJERR_LBA_OFF; 1509 + memset(&phba->lpfc_injerr_wwpn, 1510 + 0, sizeof(struct lpfc_name)); 1511 + } 1601 1512 rc = BG_ERR_INIT; 1602 1513 1603 1514 lpfc_printf_log(phba, KERN_ERR, LOG_BG, ··· 1620 1519 if (phba->lpfc_injerr_wgrd_cnt) { 1621 1520 switch (op) { 1622 1521 case SCSI_PROT_WRITE_PASS: 1623 - if (blockoff && src) { 1624 - /* Insert error in middle of the IO */ 1625 - 1626 - lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1627 - "0815 BLKGRD: Injecting guard error: " 1628 - "write lba x%lx + x%x oldgrdTag x%x\n", 1629 - (unsigned long)lba, blockoff, 1630 - src->guard_tag); 1631 - 1632 - /* 1633 - * NOTE, this will change guard tag in 1634 - * the memory location forever! 1635 - */ 1636 - src->guard_tag = 0xDEAD; 1637 - phba->lpfc_injerr_wgrd_cnt--; 1638 - phba->lpfc_injerr_lba = 1639 - LPFC_INJERR_LBA_OFF; 1640 - rc = BG_ERR_CHECK; 1641 - break; 1642 - } 1522 + rc = BG_ERR_CHECK; 1643 1523 /* Drop thru */ 1644 - case SCSI_PROT_WRITE_STRIP: 1645 - /* 1646 - * For WRITE_STRIP and WRITE_PASS, 1647 - * force the error on data 1648 - * being copied from SLI-Host to SLI-Port. 1649 - */ 1650 - phba->lpfc_injerr_wgrd_cnt--; 1651 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1652 1524 1653 - rc = BG_ERR_SWAP; 1654 - /* Signals the caller to swap CRC->CSUM */ 1655 - 1656 - lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1657 - "0816 BLKGRD: Injecting guard error: " 1658 - "write lba x%lx\n", (unsigned long)lba); 1659 - break; 1660 1525 case SCSI_PROT_WRITE_INSERT: 1661 1526 /* 1662 1527 * For WRITE_INSERT, force the ··· 1630 1563 * detected by the Target. 
1631 1564 */ 1632 1565 phba->lpfc_injerr_wgrd_cnt--; 1633 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1566 + if (phba->lpfc_injerr_wgrd_cnt == 0) { 1567 + phba->lpfc_injerr_nportid = 0; 1568 + phba->lpfc_injerr_lba = 1569 + LPFC_INJERR_LBA_OFF; 1570 + memset(&phba->lpfc_injerr_wwpn, 1571 + 0, sizeof(struct lpfc_name)); 1572 + } 1634 1573 1635 - rc = BG_ERR_SWAP; 1574 + rc |= BG_ERR_TGT | BG_ERR_SWAP; 1636 1575 /* Signals the caller to swap CRC->CSUM */ 1637 1576 1638 1577 lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1639 1578 "0817 BLKGRD: Injecting guard error: " 1579 + "write lba x%lx\n", (unsigned long)lba); 1580 + break; 1581 + case SCSI_PROT_WRITE_STRIP: 1582 + /* 1583 + * For WRITE_STRIP and WRITE_PASS, 1584 + * force the error on data 1585 + * being copied from SLI-Host to SLI-Port. 1586 + */ 1587 + phba->lpfc_injerr_wgrd_cnt--; 1588 + if (phba->lpfc_injerr_wgrd_cnt == 0) { 1589 + phba->lpfc_injerr_nportid = 0; 1590 + phba->lpfc_injerr_lba = 1591 + LPFC_INJERR_LBA_OFF; 1592 + memset(&phba->lpfc_injerr_wwpn, 1593 + 0, sizeof(struct lpfc_name)); 1594 + } 1595 + 1596 + rc = BG_ERR_INIT | BG_ERR_SWAP; 1597 + /* Signals the caller to swap CRC->CSUM */ 1598 + 1599 + lpfc_printf_log(phba, KERN_ERR, LOG_BG, 1600 + "0816 BLKGRD: Injecting guard error: " 1640 1601 "write lba x%lx\n", (unsigned long)lba); 1641 1602 break; 1642 1603 } ··· 1672 1577 if (phba->lpfc_injerr_rgrd_cnt) { 1673 1578 switch (op) { 1674 1579 case SCSI_PROT_READ_INSERT: 1675 - /* 1676 - * For READ_INSERT, it doesn't make sense 1677 - * to change the guard tag. 1678 - */ 1679 - break; 1680 1580 case SCSI_PROT_READ_STRIP: 1681 1581 case SCSI_PROT_READ_PASS: 1682 1582 /* ··· 1679 1589 * error on data being read off the wire. It 1680 1590 * should force an IO error to the driver. 
1681 1591 */ 1682 - *apptag = 0xDEAD; 1683 1592 phba->lpfc_injerr_rgrd_cnt--; 1684 - phba->lpfc_injerr_lba = LPFC_INJERR_LBA_OFF; 1593 + if (phba->lpfc_injerr_rgrd_cnt == 0) { 1594 + phba->lpfc_injerr_nportid = 0; 1595 + phba->lpfc_injerr_lba = 1596 + LPFC_INJERR_LBA_OFF; 1597 + memset(&phba->lpfc_injerr_wwpn, 1598 + 0, sizeof(struct lpfc_name)); 1599 + } 1685 1600 1686 - rc = BG_ERR_SWAP; 1601 + rc = BG_ERR_INIT | BG_ERR_SWAP; 1687 1602 /* Signals the caller to swap CRC->CSUM */ 1688 1603 1689 1604 lpfc_printf_log(phba, KERN_ERR, LOG_BG, ··· 1724 1629 switch (scsi_get_prot_op(sc)) { 1725 1630 case SCSI_PROT_READ_INSERT: 1726 1631 case SCSI_PROT_WRITE_STRIP: 1727 - *txop = BG_OP_IN_CSUM_OUT_NODIF; 1728 1632 *rxop = BG_OP_IN_NODIF_OUT_CSUM; 1633 + *txop = BG_OP_IN_CSUM_OUT_NODIF; 1729 1634 break; 1730 1635 1731 1636 case SCSI_PROT_READ_STRIP: 1732 1637 case SCSI_PROT_WRITE_INSERT: 1733 - *txop = BG_OP_IN_NODIF_OUT_CRC; 1734 1638 *rxop = BG_OP_IN_CRC_OUT_NODIF; 1639 + *txop = BG_OP_IN_NODIF_OUT_CRC; 1735 1640 break; 1736 1641 1737 1642 case SCSI_PROT_READ_PASS: 1738 1643 case SCSI_PROT_WRITE_PASS: 1739 - *txop = BG_OP_IN_CSUM_OUT_CRC; 1740 1644 *rxop = BG_OP_IN_CRC_OUT_CSUM; 1645 + *txop = BG_OP_IN_CSUM_OUT_CRC; 1741 1646 break; 1742 1647 1743 1648 case SCSI_PROT_NORMAL: ··· 1753 1658 switch (scsi_get_prot_op(sc)) { 1754 1659 case SCSI_PROT_READ_STRIP: 1755 1660 case SCSI_PROT_WRITE_INSERT: 1756 - *txop = BG_OP_IN_NODIF_OUT_CRC; 1757 1661 *rxop = BG_OP_IN_CRC_OUT_NODIF; 1662 + *txop = BG_OP_IN_NODIF_OUT_CRC; 1758 1663 break; 1759 1664 1760 1665 case SCSI_PROT_READ_PASS: 1761 1666 case SCSI_PROT_WRITE_PASS: 1762 - *txop = BG_OP_IN_CRC_OUT_CRC; 1763 1667 *rxop = BG_OP_IN_CRC_OUT_CRC; 1668 + *txop = BG_OP_IN_CRC_OUT_CRC; 1764 1669 break; 1765 1670 1766 1671 case SCSI_PROT_READ_INSERT: 1767 1672 case SCSI_PROT_WRITE_STRIP: 1768 - *txop = BG_OP_IN_CRC_OUT_NODIF; 1769 1673 *rxop = BG_OP_IN_NODIF_OUT_CRC; 1674 + *txop = BG_OP_IN_CRC_OUT_NODIF; 1770 1675 break; 1771 1676 
1772 1677 case SCSI_PROT_NORMAL: ··· 1805 1710 switch (scsi_get_prot_op(sc)) { 1806 1711 case SCSI_PROT_READ_INSERT: 1807 1712 case SCSI_PROT_WRITE_STRIP: 1808 - *txop = BG_OP_IN_CRC_OUT_NODIF; 1809 1713 *rxop = BG_OP_IN_NODIF_OUT_CRC; 1714 + *txop = BG_OP_IN_CRC_OUT_NODIF; 1810 1715 break; 1811 1716 1812 1717 case SCSI_PROT_READ_STRIP: 1813 1718 case SCSI_PROT_WRITE_INSERT: 1814 - *txop = BG_OP_IN_NODIF_OUT_CSUM; 1815 1719 *rxop = BG_OP_IN_CSUM_OUT_NODIF; 1720 + *txop = BG_OP_IN_NODIF_OUT_CSUM; 1816 1721 break; 1817 1722 1818 1723 case SCSI_PROT_READ_PASS: 1819 1724 case SCSI_PROT_WRITE_PASS: 1820 - *txop = BG_OP_IN_CRC_OUT_CRC; 1821 - *rxop = BG_OP_IN_CRC_OUT_CRC; 1725 + *rxop = BG_OP_IN_CSUM_OUT_CRC; 1726 + *txop = BG_OP_IN_CRC_OUT_CSUM; 1822 1727 break; 1823 1728 1824 1729 case SCSI_PROT_NORMAL: ··· 1830 1735 switch (scsi_get_prot_op(sc)) { 1831 1736 case SCSI_PROT_READ_STRIP: 1832 1737 case SCSI_PROT_WRITE_INSERT: 1833 - *txop = BG_OP_IN_NODIF_OUT_CSUM; 1834 1738 *rxop = BG_OP_IN_CSUM_OUT_NODIF; 1739 + *txop = BG_OP_IN_NODIF_OUT_CSUM; 1835 1740 break; 1836 1741 1837 1742 case SCSI_PROT_READ_PASS: 1838 1743 case SCSI_PROT_WRITE_PASS: 1839 - *txop = BG_OP_IN_CSUM_OUT_CRC; 1840 - *rxop = BG_OP_IN_CRC_OUT_CSUM; 1744 + *rxop = BG_OP_IN_CSUM_OUT_CSUM; 1745 + *txop = BG_OP_IN_CSUM_OUT_CSUM; 1841 1746 break; 1842 1747 1843 1748 case SCSI_PROT_READ_INSERT: 1844 1749 case SCSI_PROT_WRITE_STRIP: 1845 - *txop = BG_OP_IN_CSUM_OUT_NODIF; 1846 1750 *rxop = BG_OP_IN_NODIF_OUT_CSUM; 1751 + *txop = BG_OP_IN_CSUM_OUT_NODIF; 1847 1752 break; 1848 1753 1849 1754 case SCSI_PROT_NORMAL: ··· 1912 1817 reftag = (uint32_t)scsi_get_lba(sc); /* Truncate LBA */ 1913 1818 1914 1819 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 1915 - rc = lpfc_bg_err_inject(phba, sc, &reftag, 0, 1); 1820 + rc = lpfc_bg_err_inject(phba, sc, &reftag, NULL, 1); 1916 1821 if (rc) { 1917 - if (rc == BG_ERR_SWAP) 1822 + if (rc & BG_ERR_SWAP) 1918 1823 lpfc_bg_err_opcodes(phba, sc, &txop, &rxop); 1919 - if (rc == 
BG_ERR_CHECK) 1824 + if (rc & BG_ERR_CHECK) 1920 1825 checking = 0; 1921 1826 } 1922 1827 #endif ··· 2059 1964 reftag = (uint32_t)scsi_get_lba(sc); /* Truncate LBA */ 2060 1965 2061 1966 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 2062 - rc = lpfc_bg_err_inject(phba, sc, &reftag, 0, 1); 1967 + rc = lpfc_bg_err_inject(phba, sc, &reftag, NULL, 1); 2063 1968 if (rc) { 2064 - if (rc == BG_ERR_SWAP) 1969 + if (rc & BG_ERR_SWAP) 2065 1970 lpfc_bg_err_opcodes(phba, sc, &txop, &rxop); 2066 - if (rc == BG_ERR_CHECK) 1971 + if (rc & BG_ERR_CHECK) 2067 1972 checking = 0; 2068 1973 } 2069 1974 #endif ··· 2267 2172 reftag = (uint32_t)scsi_get_lba(sc); /* Truncate LBA */ 2268 2173 2269 2174 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 2270 - rc = lpfc_bg_err_inject(phba, sc, &reftag, 0, 1); 2175 + rc = lpfc_bg_err_inject(phba, sc, &reftag, NULL, 1); 2271 2176 if (rc) { 2272 - if (rc == BG_ERR_SWAP) 2177 + if (rc & BG_ERR_SWAP) 2273 2178 lpfc_bg_err_opcodes(phba, sc, &txop, &rxop); 2274 - if (rc == BG_ERR_CHECK) 2179 + if (rc & BG_ERR_CHECK) 2275 2180 checking = 0; 2276 2181 } 2277 2182 #endif ··· 2407 2312 reftag = (uint32_t)scsi_get_lba(sc); /* Truncate LBA */ 2408 2313 2409 2314 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 2410 - rc = lpfc_bg_err_inject(phba, sc, &reftag, 0, 1); 2315 + rc = lpfc_bg_err_inject(phba, sc, &reftag, NULL, 1); 2411 2316 if (rc) { 2412 - if (rc == BG_ERR_SWAP) 2317 + if (rc & BG_ERR_SWAP) 2413 2318 lpfc_bg_err_opcodes(phba, sc, &txop, &rxop); 2414 - if (rc == BG_ERR_CHECK) 2319 + if (rc & BG_ERR_CHECK) 2415 2320 checking = 0; 2416 2321 } 2417 2322 #endif ··· 2883 2788 /* No error was reported - problem in FW? 
*/ 2884 2789 cmd->result = ScsiResult(DID_ERROR, 0); 2885 2790 lpfc_printf_log(phba, KERN_ERR, LOG_BG, 2886 - "9057 BLKGRD: no errors reported!\n"); 2791 + "9057 BLKGRD: Unknown error reported!\n"); 2887 2792 } 2888 2793 2889 2794 out: ··· 3555 3460 /* pick up SLI4 exhange busy status from HBA */ 3556 3461 lpfc_cmd->exch_busy = pIocbOut->iocb_flag & LPFC_EXCHANGE_BUSY; 3557 3462 3463 + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 3464 + if (lpfc_cmd->prot_data_type) { 3465 + struct scsi_dif_tuple *src = NULL; 3466 + 3467 + src = (struct scsi_dif_tuple *)lpfc_cmd->prot_data_segment; 3468 + /* 3469 + * Used to restore any changes to protection 3470 + * data for error injection. 3471 + */ 3472 + switch (lpfc_cmd->prot_data_type) { 3473 + case LPFC_INJERR_REFTAG: 3474 + src->ref_tag = 3475 + lpfc_cmd->prot_data; 3476 + break; 3477 + case LPFC_INJERR_APPTAG: 3478 + src->app_tag = 3479 + (uint16_t)lpfc_cmd->prot_data; 3480 + break; 3481 + case LPFC_INJERR_GUARD: 3482 + src->guard_tag = 3483 + (uint16_t)lpfc_cmd->prot_data; 3484 + break; 3485 + default: 3486 + break; 3487 + } 3488 + 3489 + lpfc_cmd->prot_data = 0; 3490 + lpfc_cmd->prot_data_type = 0; 3491 + lpfc_cmd->prot_data_segment = NULL; 3492 + } 3493 + #endif 3558 3494 if (pnode && NLP_CHK_NODE_ACT(pnode)) 3559 3495 atomic_dec(&pnode->cmd_pending); 3560 3496 ··· 4187 4061 cmnd->result = err; 4188 4062 goto out_fail_command; 4189 4063 } 4190 - /* 4191 - * Do not let the mid-layer retry I/O too fast. If an I/O is retried 4192 - * without waiting a bit then indicate that the device is busy. 
4193 - */ 4194 - if (cmnd->retries && 4195 - time_before(jiffies, (cmnd->jiffies_at_alloc + 4196 - msecs_to_jiffies(LPFC_RETRY_PAUSE * 4197 - cmnd->retries)))) 4198 - return SCSI_MLQUEUE_DEVICE_BUSY; 4199 4064 ndlp = rdata->pnode; 4200 4065 4201 4066 if ((scsi_get_prot_op(cmnd) != SCSI_PROT_NORMAL) && ··· 4236 4119 if (scsi_get_prot_op(cmnd) != SCSI_PROT_NORMAL) { 4237 4120 if (vport->phba->cfg_enable_bg) { 4238 4121 lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4239 - "9033 BLKGRD: rcvd protected cmd:%02x op:%02x " 4240 - "str=%s\n", 4241 - cmnd->cmnd[0], scsi_get_prot_op(cmnd), 4242 - dif_op_str[scsi_get_prot_op(cmnd)]); 4243 - lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4244 - "9034 BLKGRD: CDB: %02x %02x %02x %02x %02x " 4245 - "%02x %02x %02x %02x %02x\n", 4246 - cmnd->cmnd[0], cmnd->cmnd[1], cmnd->cmnd[2], 4247 - cmnd->cmnd[3], cmnd->cmnd[4], cmnd->cmnd[5], 4248 - cmnd->cmnd[6], cmnd->cmnd[7], cmnd->cmnd[8], 4249 - cmnd->cmnd[9]); 4122 + "9033 BLKGRD: rcvd protected cmd:%02x op=%s " 4123 + "guard=%s\n", cmnd->cmnd[0], 4124 + dif_op_str[scsi_get_prot_op(cmnd)], 4125 + dif_grd_str[scsi_host_get_guard(shost)]); 4250 4126 if (cmnd->cmnd[0] == READ_10) 4251 4127 lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4252 4128 "9035 BLKGRD: READ @ sector %llu, " 4253 - "count %u\n", 4129 + "cnt %u, rpt %d\n", 4254 4130 (unsigned long long)scsi_get_lba(cmnd), 4255 - blk_rq_sectors(cmnd->request)); 4131 + blk_rq_sectors(cmnd->request), 4132 + (cmnd->cmnd[1]>>5)); 4256 4133 else if (cmnd->cmnd[0] == WRITE_10) 4257 4134 lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4258 4135 "9036 BLKGRD: WRITE @ sector %llu, " 4259 - "count %u cmd=%p\n", 4136 + "cnt %u, wpt %d\n", 4260 4137 (unsigned long long)scsi_get_lba(cmnd), 4261 4138 blk_rq_sectors(cmnd->request), 4262 - cmnd); 4139 + (cmnd->cmnd[1]>>5)); 4263 4140 } 4264 4141 4265 4142 err = lpfc_bg_scsi_prep_dma_buf(phba, lpfc_cmd); 4266 4143 } else { 4267 4144 if (vport->phba->cfg_enable_bg) { 4268 4145 lpfc_printf_vlog(vport, 
KERN_WARNING, LOG_BG, 4269 - "9038 BLKGRD: rcvd unprotected cmd:" 4270 - "%02x op:%02x str=%s\n", 4271 - cmnd->cmnd[0], scsi_get_prot_op(cmnd), 4272 - dif_op_str[scsi_get_prot_op(cmnd)]); 4273 - lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4274 - "9039 BLKGRD: CDB: %02x %02x %02x " 4275 - "%02x %02x %02x %02x %02x %02x %02x\n", 4276 - cmnd->cmnd[0], cmnd->cmnd[1], 4277 - cmnd->cmnd[2], cmnd->cmnd[3], 4278 - cmnd->cmnd[4], cmnd->cmnd[5], 4279 - cmnd->cmnd[6], cmnd->cmnd[7], 4280 - cmnd->cmnd[8], cmnd->cmnd[9]); 4146 + "9038 BLKGRD: rcvd unprotected cmd:" 4147 + "%02x op=%s guard=%s\n", cmnd->cmnd[0], 4148 + dif_op_str[scsi_get_prot_op(cmnd)], 4149 + dif_grd_str[scsi_host_get_guard(shost)]); 4281 4150 if (cmnd->cmnd[0] == READ_10) 4282 4151 lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4283 4152 "9040 dbg: READ @ sector %llu, " 4284 - "count %u\n", 4153 + "cnt %u, rpt %d\n", 4285 4154 (unsigned long long)scsi_get_lba(cmnd), 4286 - blk_rq_sectors(cmnd->request)); 4155 + blk_rq_sectors(cmnd->request), 4156 + (cmnd->cmnd[1]>>5)); 4287 4157 else if (cmnd->cmnd[0] == WRITE_10) 4288 4158 lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4289 - "9041 dbg: WRITE @ sector %llu, " 4290 - "count %u cmd=%p\n", 4291 - (unsigned long long)scsi_get_lba(cmnd), 4292 - blk_rq_sectors(cmnd->request), cmnd); 4293 - else 4294 - lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4295 - "9042 dbg: parser not implemented\n"); 4159 + "9041 dbg: WRITE @ sector %llu, " 4160 + "cnt %u, wpt %d\n", 4161 + (unsigned long long)scsi_get_lba(cmnd), 4162 + blk_rq_sectors(cmnd->request), 4163 + (cmnd->cmnd[1]>>5)); 4296 4164 } 4297 4165 err = lpfc_scsi_prep_dma_buf(phba, lpfc_cmd); 4298 4166 }
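The lpfc_scsi.c rework above turns the `BG_ERR_*` return values into OR-able bit flags (0x1/0x2/0x10/0x20), so callers change from `rc == BG_ERR_SWAP` to `rc & BG_ERR_SWAP` and one injection can request both a CRC/CSUM swap and disabled checking (e.g. `BG_ERR_TGT | BG_ERR_CHECK`). A standalone sketch of the caller-side handling:

```c
#include <assert.h>

#define BG_ERR_INIT  0x1   /* error detected by initiator */
#define BG_ERR_TGT   0x2   /* error detected by target */
#define BG_ERR_SWAP  0x10  /* swap CSUM <-> CRC opcodes */
#define BG_ERR_CHECK 0x20  /* disable guard/ref/app checking */

/* Apply the injection result; returns who should detect the error. */
static int handle_injection(int rc, int *checking, int *swapped)
{
    if (rc & BG_ERR_SWAP)
        *swapped = 1;           /* caller swaps tx/rx opcodes */
    if (rc & BG_ERR_CHECK)
        *checking = 0;          /* caller turns protection checks off */
    return rc & (BG_ERR_INIT | BG_ERR_TGT);
}
```

With exclusive values, `rc == BG_ERR_CHECK` could never fire once a second condition was folded in; disjoint bits make the combinations composable.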
+11 -2
drivers/scsi/lpfc/lpfc_scsi.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2006 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * * ··· 150 150 struct lpfc_iocbq cur_iocbq; 151 151 wait_queue_head_t *waitq; 152 152 unsigned long start_time; 153 + 154 + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 155 + /* Used to restore any changes to protection data for error injection */ 156 + void *prot_data_segment; 157 + uint32_t prot_data; 158 + uint32_t prot_data_type; 159 + #define LPFC_INJERR_REFTAG 1 160 + #define LPFC_INJERR_APPTAG 2 161 + #define LPFC_INJERR_GUARD 3 162 + #endif 153 163 }; 154 164 155 165 #define LPFC_SCSI_DMA_EXT_SIZE 264 156 166 #define LPFC_BPL_SIZE 1024 157 - #define LPFC_RETRY_PAUSE 300 158 167 #define MDAC_DIRECT_CMD 0x22
+39 -23
drivers/scsi/lpfc/lpfc_sli.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2011 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * Portions Copyright (C) 2004-2005 Christoph Hellwig * ··· 5578 5578 for (i = 0; i < count; i++) 5579 5579 phba->sli4_hba.rpi_ids[i] = base + i; 5580 5580 5581 - lpfc_sli4_node_prep(phba); 5582 - 5583 5581 /* VPIs. */ 5584 5582 count = phba->sli4_hba.max_cfg_param.max_vpi; 5585 5583 base = phba->sli4_hba.max_cfg_param.vpi_base; ··· 5611 5613 rc = -ENOMEM; 5612 5614 goto free_vpi_ids; 5613 5615 } 5616 + phba->sli4_hba.max_cfg_param.xri_used = 0; 5617 + phba->sli4_hba.xri_count = 0; 5614 5618 phba->sli4_hba.xri_ids = kzalloc(count * 5615 5619 sizeof(uint16_t), 5616 5620 GFP_KERNEL); ··· 6147 6147 rc = -ENODEV; 6148 6148 goto out_free_mbox; 6149 6149 } 6150 + lpfc_sli4_node_prep(phba); 6150 6151 6151 6152 /* Create all the SLI4 queues */ 6152 6153 rc = lpfc_sli4_queue_create(phba); ··· 7252 7251 7253 7252 out_not_finished: 7254 7253 spin_lock_irqsave(&phba->hbalock, iflags); 7255 - mboxq->u.mb.mbxStatus = MBX_NOT_FINISHED; 7256 - __lpfc_mbox_cmpl_put(phba, mboxq); 7257 - /* Release the token */ 7258 - psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE; 7259 - phba->sli.mbox_active = NULL; 7254 + if (phba->sli.mbox_active) { 7255 + mboxq->u.mb.mbxStatus = MBX_NOT_FINISHED; 7256 + __lpfc_mbox_cmpl_put(phba, mboxq); 7257 + /* Release the token */ 7258 + psli->sli_flag &= ~LPFC_SLI_MBOX_ACTIVE; 7259 + phba->sli.mbox_active = NULL; 7260 + } 7260 7261 spin_unlock_irqrestore(&phba->hbalock, iflags); 7261 7262 7262 7263 return MBX_NOT_FINISHED; ··· 7746 7743 if (pcmd && (*pcmd == ELS_CMD_FLOGI || 7747 7744 *pcmd == ELS_CMD_SCR || 7748 7745 *pcmd == ELS_CMD_FDISC || 7746 + *pcmd == ELS_CMD_LOGO || 7749 7747 *pcmd 
== ELS_CMD_PLOGI)) { 7750 7748 bf_set(els_req64_sp, &wqe->els_req, 1); 7751 7749 bf_set(els_req64_sid, &wqe->els_req, ··· 8389 8385 struct sli4_wcqe_xri_aborted *axri) 8390 8386 { 8391 8387 struct lpfc_vport *vport; 8388 + uint32_t ext_status = 0; 8392 8389 8393 8390 if (!ndlp || !NLP_CHK_NODE_ACT(ndlp)) { 8394 8391 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, ··· 8401 8396 vport = ndlp->vport; 8402 8397 lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 8403 8398 "3116 Port generated FCP XRI ABORT event on " 8404 - "vpi %d rpi %d xri x%x status 0x%x\n", 8399 + "vpi %d rpi %d xri x%x status 0x%x parameter x%x\n", 8405 8400 ndlp->vport->vpi, ndlp->nlp_rpi, 8406 8401 bf_get(lpfc_wcqe_xa_xri, axri), 8407 - bf_get(lpfc_wcqe_xa_status, axri)); 8402 + bf_get(lpfc_wcqe_xa_status, axri), 8403 + axri->parameter); 8408 8404 8409 - if (bf_get(lpfc_wcqe_xa_status, axri) == IOSTAT_LOCAL_REJECT) 8405 + /* 8406 + * Catch the ABTS protocol failure case. Older OCe FW releases returned 8407 + * LOCAL_REJECT and 0 for a failed ABTS exchange and later OCe and 8408 + * LPe FW releases returned LOCAL_REJECT and SEQUENCE_TIMEOUT. 8409 + */ 8410 + ext_status = axri->parameter & WCQE_PARAM_MASK; 8411 + if ((bf_get(lpfc_wcqe_xa_status, axri) == IOSTAT_LOCAL_REJECT) && 8412 + ((ext_status == IOERR_SEQUENCE_TIMEOUT) || (ext_status == 0))) 8410 8413 lpfc_sli_abts_recover_port(vport, ndlp); 8411 8414 } 8412 8415 ··· 9820 9807 unsigned long timeout; 9821 9808 9822 9809 timeout = msecs_to_jiffies(LPFC_MBOX_TMO * 1000) + jiffies; 9810 + 9823 9811 spin_lock_irq(&phba->hbalock); 9824 9812 psli->sli_flag |= LPFC_SLI_ASYNC_MBX_BLK; 9825 - spin_unlock_irq(&phba->hbalock); 9826 9813 9827 9814 if (psli->sli_flag & LPFC_SLI_ACTIVE) { 9828 - spin_lock_irq(&phba->hbalock); 9829 9815 /* Determine how long we might wait for the active mailbox 9830 9816 * command to be gracefully completed by firmware. 
9831 9817 */ ··· 9843 9831 */ 9844 9832 break; 9845 9833 } 9846 - } 9834 + } else 9835 + spin_unlock_irq(&phba->hbalock); 9836 + 9847 9837 lpfc_sli_mbox_sys_flush(phba); 9848 9838 } 9849 9839 ··· 13286 13272 LPFC_MBOXQ_t *mbox; 13287 13273 uint32_t reqlen, alloclen, index; 13288 13274 uint32_t mbox_tmo; 13289 - uint16_t rsrc_start, rsrc_size, els_xri_cnt; 13275 + uint16_t rsrc_start, rsrc_size, els_xri_cnt, post_els_xri_cnt; 13290 13276 uint16_t xritag_start = 0, lxri = 0; 13291 13277 struct lpfc_rsrc_blks *rsrc_blk; 13292 13278 int cnt, ttl_cnt, rc = 0; ··· 13308 13294 13309 13295 cnt = 0; 13310 13296 ttl_cnt = 0; 13297 + post_els_xri_cnt = els_xri_cnt; 13311 13298 list_for_each_entry(rsrc_blk, &phba->sli4_hba.lpfc_xri_blk_list, 13312 13299 list) { 13313 13300 rsrc_start = rsrc_blk->rsrc_start; ··· 13318 13303 "3014 Working ELS Extent start %d, cnt %d\n", 13319 13304 rsrc_start, rsrc_size); 13320 13305 13321 - loop_cnt = min(els_xri_cnt, rsrc_size); 13322 - if (ttl_cnt + loop_cnt >= els_xri_cnt) { 13323 - loop_cnt = els_xri_cnt - ttl_cnt; 13324 - ttl_cnt = els_xri_cnt; 13325 - } 13306 + loop_cnt = min(post_els_xri_cnt, rsrc_size); 13307 + if (loop_cnt < post_els_xri_cnt) { 13308 + post_els_xri_cnt -= loop_cnt; 13309 + ttl_cnt += loop_cnt; 13310 + } else 13311 + ttl_cnt += post_els_xri_cnt; 13326 13312 13327 13313 mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 13328 13314 if (!mbox) ··· 14219 14203 * field and RX_ID from ABTS for RX_ID field. 14220 14204 */ 14221 14205 bf_set(lpfc_abts_orig, &icmd->un.bls_rsp, LPFC_ABTS_UNSOL_RSP); 14222 - bf_set(lpfc_abts_rxid, &icmd->un.bls_rsp, rxid); 14223 14206 } else { 14224 14207 /* ABTS sent by initiator to CT exchange, construction 14225 14208 * of BA_ACC will need to allocate a new XRI as for the 14226 - * XRI_TAG and RX_ID fields. 14209 + * XRI_TAG field. 
14227 14210 */ 14228 14211 bf_set(lpfc_abts_orig, &icmd->un.bls_rsp, LPFC_ABTS_UNSOL_INT); 14229 - bf_set(lpfc_abts_rxid, &icmd->un.bls_rsp, NO_XRI); 14230 14212 } 14213 + bf_set(lpfc_abts_rxid, &icmd->un.bls_rsp, rxid); 14231 14214 bf_set(lpfc_abts_oxid, &icmd->un.bls_rsp, oxid); 14232 14215 14233 14216 /* Xmit CT abts response on exchange <xid> */ ··· 15057 15042 LPFC_MBOXQ_t *mboxq; 15058 15043 15059 15044 phba->fcoe_eventtag_at_fcf_scan = phba->fcoe_eventtag; 15045 + phba->fcoe_cvl_eventtag_attn = phba->fcoe_cvl_eventtag; 15060 15046 mboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 15061 15047 if (!mboxq) { 15062 15048 lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+2 -2
drivers/scsi/lpfc/lpfc_version.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2004-2011 Emulex. All rights reserved. * 4 + * Copyright (C) 2004-2012 Emulex. All rights reserved. * 5 5 * EMULEX and SLI are trademarks of Emulex. * 6 6 * www.emulex.com * 7 7 * * ··· 18 18 * included with this package. * 19 19 *******************************************************************/ 20 20 21 - #define LPFC_DRIVER_VERSION "8.3.29" 21 + #define LPFC_DRIVER_VERSION "8.3.30" 22 22 #define LPFC_DRIVER_NAME "lpfc" 23 23 #define LPFC_SP_DRIVER_HANDLER_NAME "lpfc:sp" 24 24 #define LPFC_FP_DRIVER_HANDLER_NAME "lpfc:fp"
+2 -4
drivers/scsi/mpt2sas/mpt2sas_base.c
··· 657 657 return; 658 658 659 659 /* eat the loginfos associated with task aborts */ 660 - if (ioc->ignore_loginfos && (log_info == 30050000 || log_info == 660 + if (ioc->ignore_loginfos && (log_info == 0x30050000 || log_info == 661 661 0x31140000 || log_info == 0x31130000)) 662 662 return; 663 663 ··· 2060 2060 { 2061 2061 int i = 0; 2062 2062 char desc[16]; 2063 - u8 revision; 2064 2063 u32 iounit_pg1_flags; 2065 2064 u32 bios_version; 2066 2065 2067 2066 bios_version = le32_to_cpu(ioc->bios_pg3.BiosVersion); 2068 - pci_read_config_byte(ioc->pdev, PCI_CLASS_REVISION, &revision); 2069 2067 strncpy(desc, ioc->manu_pg0.ChipName, 16); 2070 2068 printk(MPT2SAS_INFO_FMT "%s: FWVersion(%02d.%02d.%02d.%02d), " 2071 2069 "ChipRevision(0x%02x), BiosVersion(%02d.%02d.%02d.%02d)\n", ··· 2072 2074 (ioc->facts.FWVersion.Word & 0x00FF0000) >> 16, 2073 2075 (ioc->facts.FWVersion.Word & 0x0000FF00) >> 8, 2074 2076 ioc->facts.FWVersion.Word & 0x000000FF, 2075 - revision, 2077 + ioc->pdev->revision, 2076 2078 (bios_version & 0xFF000000) >> 24, 2077 2079 (bios_version & 0x00FF0000) >> 16, 2078 2080 (bios_version & 0x0000FF00) >> 8,
+1 -3
drivers/scsi/mpt2sas/mpt2sas_ctl.c
··· 1026 1026 { 1027 1027 struct mpt2_ioctl_iocinfo karg; 1028 1028 struct MPT2SAS_ADAPTER *ioc; 1029 - u8 revision; 1030 1029 1031 1030 if (copy_from_user(&karg, arg, sizeof(karg))) { 1032 1031 printk(KERN_ERR "failure at %s:%d/%s()!\n", ··· 1045 1046 karg.adapter_type = MPT2_IOCTL_INTERFACE_SAS2; 1046 1047 if (ioc->pfacts) 1047 1048 karg.port_number = ioc->pfacts[0].PortNumber; 1048 - pci_read_config_byte(ioc->pdev, PCI_CLASS_REVISION, &revision); 1049 - karg.hw_rev = revision; 1049 + karg.hw_rev = ioc->pdev->revision; 1050 1050 karg.pci_id = ioc->pdev->device; 1051 1051 karg.subsystem_device = ioc->pdev->subsystem_device; 1052 1052 karg.subsystem_vendor = ioc->pdev->subsystem_vendor;
+10 -8
drivers/scsi/pm8001/pm8001_hwi.c
··· 2093 2093 struct ata_task_resp *resp ; 2094 2094 u32 *sata_resp; 2095 2095 struct pm8001_device *pm8001_dev; 2096 + unsigned long flags; 2096 2097 2097 2098 psataPayload = (struct sata_completion_resp *)(piomb + 4); 2098 2099 status = le32_to_cpu(psataPayload->status); ··· 2383 2382 ts->stat = SAS_DEV_NO_RESPONSE; 2384 2383 break; 2385 2384 } 2386 - spin_lock_irq(&t->task_state_lock); 2385 + spin_lock_irqsave(&t->task_state_lock, flags); 2387 2386 t->task_state_flags &= ~SAS_TASK_STATE_PENDING; 2388 2387 t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; 2389 2388 t->task_state_flags |= SAS_TASK_STATE_DONE; 2390 2389 if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) { 2391 - spin_unlock_irq(&t->task_state_lock); 2390 + spin_unlock_irqrestore(&t->task_state_lock, flags); 2392 2391 PM8001_FAIL_DBG(pm8001_ha, 2393 2392 pm8001_printk("task 0x%p done with io_status 0x%x" 2394 2393 " resp 0x%x stat 0x%x but aborted by upper layer!\n", 2395 2394 t, status, ts->resp, ts->stat)); 2396 2395 pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2397 2396 } else if (t->uldd_task) { 2398 - spin_unlock_irq(&t->task_state_lock); 2397 + spin_unlock_irqrestore(&t->task_state_lock, flags); 2399 2398 pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2400 2399 mb();/* ditto */ 2401 2400 spin_unlock_irq(&pm8001_ha->lock); 2402 2401 t->task_done(t); 2403 2402 spin_lock_irq(&pm8001_ha->lock); 2404 2403 } else if (!t->uldd_task) { 2405 - spin_unlock_irq(&t->task_state_lock); 2404 + spin_unlock_irqrestore(&t->task_state_lock, flags); 2406 2405 pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2407 2406 mb();/*ditto*/ 2408 2407 spin_unlock_irq(&pm8001_ha->lock); ··· 2424 2423 u32 tag = le32_to_cpu(psataPayload->tag); 2425 2424 u32 port_id = le32_to_cpu(psataPayload->port_id); 2426 2425 u32 dev_id = le32_to_cpu(psataPayload->device_id); 2426 + unsigned long flags; 2427 2427 2428 2428 ccb = &pm8001_ha->ccb_info[tag]; 2429 2429 t = ccb->task; ··· 2595 2593 ts->stat = SAS_OPEN_TO; 2596 2594 break; 
2597 2595 } 2598 - spin_lock_irq(&t->task_state_lock); 2596 + spin_lock_irqsave(&t->task_state_lock, flags); 2599 2597 t->task_state_flags &= ~SAS_TASK_STATE_PENDING; 2600 2598 t->task_state_flags &= ~SAS_TASK_AT_INITIATOR; 2601 2599 t->task_state_flags |= SAS_TASK_STATE_DONE; 2602 2600 if (unlikely((t->task_state_flags & SAS_TASK_STATE_ABORTED))) { 2603 - spin_unlock_irq(&t->task_state_lock); 2601 + spin_unlock_irqrestore(&t->task_state_lock, flags); 2604 2602 PM8001_FAIL_DBG(pm8001_ha, 2605 2603 pm8001_printk("task 0x%p done with io_status 0x%x" 2606 2604 " resp 0x%x stat 0x%x but aborted by upper layer!\n", 2607 2605 t, event, ts->resp, ts->stat)); 2608 2606 pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2609 2607 } else if (t->uldd_task) { 2610 - spin_unlock_irq(&t->task_state_lock); 2608 + spin_unlock_irqrestore(&t->task_state_lock, flags); 2611 2609 pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2612 2610 mb();/* ditto */ 2613 2611 spin_unlock_irq(&pm8001_ha->lock); 2614 2612 t->task_done(t); 2615 2613 spin_lock_irq(&pm8001_ha->lock); 2616 2614 } else if (!t->uldd_task) { 2617 - spin_unlock_irq(&t->task_state_lock); 2615 + spin_unlock_irqrestore(&t->task_state_lock, flags); 2618 2616 pm8001_ccb_task_free(pm8001_ha, t, ccb, tag); 2619 2617 mb();/*ditto*/ 2620 2618 spin_unlock_irq(&pm8001_ha->lock);
+2 -2
drivers/scsi/qla4xxx/ql4_isr.c
··· 431 431 mbox_sts_entry->out_mbox[6])); 432 432 433 433 if (mbox_sts_entry->out_mbox[0] == MBOX_STS_COMMAND_COMPLETE) 434 - status = QLA_SUCCESS; 434 + status = ISCSI_PING_SUCCESS; 435 435 else 436 - status = QLA_ERROR; 436 + status = mbox_sts_entry->out_mbox[6]; 437 437 438 438 data_size = sizeof(mbox_sts_entry->out_mbox); 439 439
+4 -6
drivers/scsi/qla4xxx/ql4_os.c
··· 834 834 static void qla4xxx_set_port_speed(struct Scsi_Host *shost) 835 835 { 836 836 struct scsi_qla_host *ha = to_qla_host(shost); 837 - struct iscsi_cls_host *ihost = shost_priv(shost); 837 + struct iscsi_cls_host *ihost = shost->shost_data; 838 838 uint32_t speed = ISCSI_PORT_SPEED_UNKNOWN; 839 839 840 840 qla4xxx_get_firmware_state(ha); ··· 859 859 static void qla4xxx_set_port_state(struct Scsi_Host *shost) 860 860 { 861 861 struct scsi_qla_host *ha = to_qla_host(shost); 862 - struct iscsi_cls_host *ihost = shost_priv(shost); 862 + struct iscsi_cls_host *ihost = shost->shost_data; 863 863 uint32_t state = ISCSI_PORT_STATE_DOWN; 864 864 865 865 if (test_bit(AF_LINK_UP, &ha->flags)) ··· 3445 3445 int qla4_8xxx_iospace_config(struct scsi_qla_host *ha) 3446 3446 { 3447 3447 int status = 0; 3448 - uint8_t revision_id; 3449 3448 unsigned long mem_base, mem_len, db_base, db_len; 3450 3449 struct pci_dev *pdev = ha->pdev; 3451 3450 ··· 3456 3457 goto iospace_error_exit; 3457 3458 } 3458 3459 3459 - pci_read_config_byte(pdev, PCI_REVISION_ID, &revision_id); 3460 3460 DEBUG2(printk(KERN_INFO "%s: revision-id=%d\n", 3461 - __func__, revision_id)); 3462 - ha->revision_id = revision_id; 3461 + __func__, pdev->revision)); 3462 + ha->revision_id = pdev->revision; 3463 3463 3464 3464 /* remap phys address */ 3465 3465 mem_base = pci_resource_start(pdev, 0); /* 0 is for BAR 0 */
+1 -1
drivers/scsi/qla4xxx/ql4_version.h
··· 5 5 * See LICENSE.qla4xxx for copyright and licensing details. 6 6 */ 7 7 8 - #define QLA4XXX_DRIVER_VERSION "5.02.00-k15" 8 + #define QLA4XXX_DRIVER_VERSION "5.02.00-k16"
+20 -7
drivers/scsi/scsi_debug.c
··· 101 101 #define DEF_LBPU 0 102 102 #define DEF_LBPWS 0 103 103 #define DEF_LBPWS10 0 104 + #define DEF_LBPRZ 1 104 105 #define DEF_LOWEST_ALIGNED 0 105 106 #define DEF_NO_LUN_0 0 106 107 #define DEF_NUM_PARTS 0 ··· 187 186 static unsigned int scsi_debug_lbpu = DEF_LBPU; 188 187 static unsigned int scsi_debug_lbpws = DEF_LBPWS; 189 188 static unsigned int scsi_debug_lbpws10 = DEF_LBPWS10; 189 + static unsigned int scsi_debug_lbprz = DEF_LBPRZ; 190 190 static unsigned int scsi_debug_unmap_alignment = DEF_UNMAP_ALIGNMENT; 191 191 static unsigned int scsi_debug_unmap_granularity = DEF_UNMAP_GRANULARITY; 192 192 static unsigned int scsi_debug_unmap_max_blocks = DEF_UNMAP_MAX_BLOCKS; ··· 777 775 return 0x3c; 778 776 } 779 777 780 - /* Thin provisioning VPD page (SBC-3) */ 778 + /* Logical block provisioning VPD page (SBC-3) */ 781 779 static int inquiry_evpd_b2(unsigned char *arr) 782 780 { 783 - memset(arr, 0, 0x8); 781 + memset(arr, 0, 0x4); 784 782 arr[0] = 0; /* threshold exponent */ 785 783 786 784 if (scsi_debug_lbpu) ··· 792 790 if (scsi_debug_lbpws10) 793 791 arr[1] |= 1 << 5; 794 792 795 - return 0x8; 793 + if (scsi_debug_lbprz) 794 + arr[1] |= 1 << 2; 795 + 796 + return 0x4; 796 797 } 797 798 798 799 #define SDEBUG_LONG_INQ_SZ 96 ··· 1076 1071 arr[13] = scsi_debug_physblk_exp & 0xf; 1077 1072 arr[14] = (scsi_debug_lowest_aligned >> 8) & 0x3f; 1078 1073 1079 - if (scsi_debug_lbp()) 1074 + if (scsi_debug_lbp()) { 1080 1075 arr[14] |= 0x80; /* LBPME */ 1076 + if (scsi_debug_lbprz) 1077 + arr[14] |= 0x40; /* LBPRZ */ 1078 + } 1081 1079 1082 1080 arr[15] = scsi_debug_lowest_aligned & 0xff; 1083 1081 ··· 2054 2046 block = lba + alignment; 2055 2047 rem = do_div(block, granularity); 2056 2048 2057 - if (rem == 0 && lba + granularity <= end && 2058 - block < map_size) 2049 + if (rem == 0 && lba + granularity <= end && block < map_size) { 2059 2050 clear_bit(block, map_storep); 2060 - 2051 + if (scsi_debug_lbprz) 2052 + memset(fake_storep + 2053 + block * 
scsi_debug_sector_size, 0, 2054 + scsi_debug_sector_size); 2055 + } 2061 2056 lba += granularity - rem; 2062 2057 } 2063 2058 } ··· 2742 2731 module_param_named(lbpu, scsi_debug_lbpu, int, S_IRUGO); 2743 2732 module_param_named(lbpws, scsi_debug_lbpws, int, S_IRUGO); 2744 2733 module_param_named(lbpws10, scsi_debug_lbpws10, int, S_IRUGO); 2734 + module_param_named(lbprz, scsi_debug_lbprz, int, S_IRUGO); 2745 2735 module_param_named(lowest_aligned, scsi_debug_lowest_aligned, int, S_IRUGO); 2746 2736 module_param_named(max_luns, scsi_debug_max_luns, int, S_IRUGO | S_IWUSR); 2747 2737 module_param_named(max_queue, scsi_debug_max_queue, int, S_IRUGO | S_IWUSR); ··· 2784 2772 MODULE_PARM_DESC(lbpu, "enable LBP, support UNMAP command (def=0)"); 2785 2773 MODULE_PARM_DESC(lbpws, "enable LBP, support WRITE SAME(16) with UNMAP bit (def=0)"); 2786 2774 MODULE_PARM_DESC(lbpws10, "enable LBP, support WRITE SAME(10) with UNMAP bit (def=0)"); 2775 + MODULE_PARM_DESC(lbprz, "unmapped blocks return 0 on read (def=1)"); 2787 2776 MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)"); 2788 2777 MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)"); 2789 2778 MODULE_PARM_DESC(max_queue, "max number of queued commands (1 to 255(def))");
+4 -4
drivers/scsi/scsi_transport_iscsi.c
··· 1486 1486 struct iscsi_uevent *ev; 1487 1487 int len = NLMSG_SPACE(sizeof(*ev) + data_size); 1488 1488 1489 - skb = alloc_skb(len, GFP_KERNEL); 1489 + skb = alloc_skb(len, GFP_NOIO); 1490 1490 if (!skb) { 1491 1491 printk(KERN_ERR "gracefully ignored host event (%d):%d OOM\n", 1492 1492 host_no, code); ··· 1504 1504 if (data_size) 1505 1505 memcpy((char *)ev + sizeof(*ev), data, data_size); 1506 1506 1507 - iscsi_multicast_skb(skb, ISCSI_NL_GRP_ISCSID, GFP_KERNEL); 1507 + iscsi_multicast_skb(skb, ISCSI_NL_GRP_ISCSID, GFP_NOIO); 1508 1508 } 1509 1509 EXPORT_SYMBOL_GPL(iscsi_post_host_event); 1510 1510 ··· 1517 1517 struct iscsi_uevent *ev; 1518 1518 int len = NLMSG_SPACE(sizeof(*ev) + data_size); 1519 1519 1520 - skb = alloc_skb(len, GFP_KERNEL); 1520 + skb = alloc_skb(len, GFP_NOIO); 1521 1521 if (!skb) { 1522 1522 printk(KERN_ERR "gracefully ignored ping comp: OOM\n"); 1523 1523 return; ··· 1533 1533 ev->r.ping_comp.data_size = data_size; 1534 1534 memcpy((char *)ev + sizeof(*ev), data, data_size); 1535 1535 1536 - iscsi_multicast_skb(skb, ISCSI_NL_GRP_ISCSID, GFP_KERNEL); 1536 + iscsi_multicast_skb(skb, ISCSI_NL_GRP_ISCSID, GFP_NOIO); 1537 1537 } 1538 1538 EXPORT_SYMBOL_GPL(iscsi_ping_comp_event); 1539 1539
+10 -5
drivers/scsi/sd.c
··· 664 664 } 665 665 666 666 /** 667 - * sd_init_command - build a scsi (read or write) command from 667 + * sd_prep_fn - build a scsi (read or write) command from 668 668 * information in the request structure. 669 669 * @SCpnt: pointer to mid-level's per scsi command structure that 670 670 * contains request and into which the scsi command is written ··· 711 711 ret = BLKPREP_KILL; 712 712 713 713 SCSI_LOG_HLQUEUE(1, scmd_printk(KERN_INFO, SCpnt, 714 - "sd_init_command: block=%llu, " 714 + "sd_prep_fn: block=%llu, " 715 715 "count=%d\n", 716 716 (unsigned long long)block, 717 717 this_count)); ··· 1212 1212 retval = -ENODEV; 1213 1213 1214 1214 if (scsi_block_when_processing_errors(sdp)) { 1215 + retval = scsi_autopm_get_device(sdp); 1216 + if (retval) 1217 + goto out; 1218 + 1215 1219 sshdr = kzalloc(sizeof(*sshdr), GFP_KERNEL); 1216 1220 retval = scsi_test_unit_ready(sdp, SD_TIMEOUT, SD_MAX_RETRIES, 1217 1221 sshdr); 1222 + scsi_autopm_put_device(sdp); 1218 1223 } 1219 1224 1220 1225 /* failed to execute TUR, assume media not present */ ··· 2649 2644 * (e.g. /dev/sda). More precisely it is the block device major 2650 2645 * and minor number that is chosen here. 2651 2646 * 2652 - * Assume sd_attach is not re-entrant (for time being) 2653 - * Also think about sd_attach() and sd_remove() running coincidentally. 2647 + * Assume sd_probe is not re-entrant (for time being) 2648 + * Also think about sd_probe() and sd_remove() running coincidentally. 2654 2649 **/ 2655 2650 static int sd_probe(struct device *dev) 2656 2651 { ··· 2665 2660 goto out; 2666 2661 2667 2662 SCSI_LOG_HLQUEUE(3, sdev_printk(KERN_INFO, sdp, 2668 - "sd_attach\n")); 2663 + "sd_probe\n")); 2669 2664 2670 2665 error = -ENOMEM; 2671 2666 sdkp = kzalloc(sizeof(*sdkp), GFP_KERNEL);
+18 -3
drivers/scsi/st.c
··· 1105 1105 STp->drv_buffer)); 1106 1106 } 1107 1107 STp->drv_write_prot = ((STp->buffer)->b_data[2] & 0x80) != 0; 1108 + if (!STp->drv_buffer && STp->immediate_filemark) { 1109 + printk(KERN_WARNING 1110 + "%s: non-buffered tape: disabling writing immediate filemarks\n", 1111 + name); 1112 + STp->immediate_filemark = 0; 1113 + } 1108 1114 } 1109 1115 st_release_request(SRpnt); 1110 1116 SRpnt = NULL; ··· 1319 1313 1320 1314 memset(cmd, 0, MAX_COMMAND_SIZE); 1321 1315 cmd[0] = WRITE_FILEMARKS; 1316 + if (STp->immediate_filemark) 1317 + cmd[1] = 1; 1322 1318 cmd[4] = 1 + STp->two_fm; 1323 1319 1324 1320 SRpnt = st_do_scsi(NULL, STp, cmd, 0, DMA_NONE, ··· 2188 2180 name, STm->defaults_for_writes, STp->omit_blklims, STp->can_partitions, 2189 2181 STp->scsi2_logical); 2190 2182 printk(KERN_INFO 2191 - "%s: sysv: %d nowait: %d sili: %d\n", name, STm->sysv, STp->immediate, 2192 - STp->sili); 2183 + "%s: sysv: %d nowait: %d sili: %d nowait_filemark: %d\n", 2184 + name, STm->sysv, STp->immediate, STp->sili, 2185 + STp->immediate_filemark); 2193 2186 printk(KERN_INFO "%s: debugging: %d\n", 2194 2187 name, debugging); 2195 2188 } ··· 2232 2223 STp->can_partitions = (options & MT_ST_CAN_PARTITIONS) != 0; 2233 2224 STp->scsi2_logical = (options & MT_ST_SCSI2LOGICAL) != 0; 2234 2225 STp->immediate = (options & MT_ST_NOWAIT) != 0; 2226 + STp->immediate_filemark = (options & MT_ST_NOWAIT_EOF) != 0; 2235 2227 STm->sysv = (options & MT_ST_SYSV) != 0; 2236 2228 STp->sili = (options & MT_ST_SILI) != 0; 2237 2229 DEB( debugging = (options & MT_ST_DEBUGGING) != 0; ··· 2264 2254 STp->scsi2_logical = value; 2265 2255 if ((options & MT_ST_NOWAIT) != 0) 2266 2256 STp->immediate = value; 2257 + if ((options & MT_ST_NOWAIT_EOF) != 0) 2258 + STp->immediate_filemark = value; 2267 2259 if ((options & MT_ST_SYSV) != 0) 2268 2260 STm->sysv = value; 2269 2261 if ((options & MT_ST_SILI) != 0) ··· 2725 2713 cmd[0] = WRITE_FILEMARKS; 2726 2714 if (cmd_in == MTWSM) 2727 2715 cmd[1] = 2; 2728 - if 
(cmd_in == MTWEOFI) 2716 + if (cmd_in == MTWEOFI || 2717 + (cmd_in == MTWEOF && STp->immediate_filemark)) 2729 2718 cmd[1] |= 1; 2730 2719 cmd[2] = (arg >> 16); 2731 2720 cmd[3] = (arg >> 8); ··· 4105 4092 tpnt->scsi2_logical = ST_SCSI2LOGICAL; 4106 4093 tpnt->sili = ST_SILI; 4107 4094 tpnt->immediate = ST_NOWAIT; 4095 + tpnt->immediate_filemark = 0; 4108 4096 tpnt->default_drvbuffer = 0xff; /* No forced buffering */ 4109 4097 tpnt->partition = 0; 4110 4098 tpnt->new_partition = 0; ··· 4491 4477 options |= STp->scsi2_logical ? MT_ST_SCSI2LOGICAL : 0; 4492 4478 options |= STm->sysv ? MT_ST_SYSV : 0; 4493 4479 options |= STp->immediate ? MT_ST_NOWAIT : 0; 4480 + options |= STp->immediate_filemark ? MT_ST_NOWAIT_EOF : 0; 4494 4481 options |= STp->sili ? MT_ST_SILI : 0; 4495 4482 4496 4483 l = snprintf(buf, PAGE_SIZE, "0x%08x\n", options);
+1
drivers/scsi/st.h
··· 120 120 unsigned char c_algo; /* compression algorithm */ 121 121 unsigned char pos_unknown; /* after reset position unknown */ 122 122 unsigned char sili; /* use SILI when reading in variable b mode */ 123 + unsigned char immediate_filemark; /* write filemark immediately */ 123 124 int tape_type; 124 125 int long_timeout; /* timeout for commands known to take long time */ 125 126
+49
drivers/scsi/ufs/Kconfig
··· 1 + # 2 + # Kernel configuration file for the UFS Host Controller 3 + # 4 + # This code is based on drivers/scsi/ufs/Kconfig 5 + # Copyright (C) 2011 Samsung Samsung India Software Operations 6 + # 7 + # Santosh Yaraganavi <santosh.sy@samsung.com> 8 + # Vinayak Holikatti <h.vinayak@samsung.com> 9 + 10 + # This program is free software; you can redistribute it and/or 11 + # modify it under the terms of the GNU General Public License 12 + # as published by the Free Software Foundation; either version 2 13 + # of the License, or (at your option) any later version. 14 + 15 + # This program is distributed in the hope that it will be useful, 16 + # but WITHOUT ANY WARRANTY; without even the implied warranty of 17 + # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 + # GNU General Public License for more details. 19 + 20 + # NO WARRANTY 21 + # THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR 22 + # CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT 23 + # LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, 24 + # MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is 25 + # solely responsible for determining the appropriateness of using and 26 + # distributing the Program and assumes all risks associated with its 27 + # exercise of rights under this Agreement, including but not limited to 28 + # the risks and costs of program errors, damage to or loss of data, 29 + # programs or equipment, and unavailability or interruption of operations. 
30 + 31 + # DISCLAIMER OF LIABILITY 32 + # NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY 33 + # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 34 + # DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND 35 + # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR 36 + # TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE 37 + # USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED 38 + # HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES 39 + 40 + # You should have received a copy of the GNU General Public License 41 + # along with this program; if not, write to the Free Software 42 + # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, 43 + # USA. 44 + 45 + config SCSI_UFSHCD 46 + tristate "Universal Flash Storage host controller driver" 47 + depends on PCI && SCSI 48 + ---help--- 49 + This is a generic driver which supports PCIe UFS Host controllers.
+2
drivers/scsi/ufs/Makefile
··· 1 + # UFSHCD makefile 2 + obj-$(CONFIG_SCSI_UFSHCD) += ufshcd.o
+207
drivers/scsi/ufs/ufs.h
··· 1 + /* 2 + * Universal Flash Storage Host controller driver 3 + * 4 + * This code is based on drivers/scsi/ufs/ufs.h 5 + * Copyright (C) 2011-2012 Samsung India Software Operations 6 + * 7 + * Santosh Yaraganavi <santosh.sy@samsung.com> 8 + * Vinayak Holikatti <h.vinayak@samsung.com> 9 + * 10 + * This program is free software; you can redistribute it and/or 11 + * modify it under the terms of the GNU General Public License 12 + * as published by the Free Software Foundation; either version 2 13 + * of the License, or (at your option) any later version. 14 + * 15 + * This program is distributed in the hope that it will be useful, 16 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 + * GNU General Public License for more details. 19 + * 20 + * NO WARRANTY 21 + * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR 22 + * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT 23 + * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, 24 + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is 25 + * solely responsible for determining the appropriateness of using and 26 + * distributing the Program and assumes all risks associated with its 27 + * exercise of rights under this Agreement, including but not limited to 28 + * the risks and costs of program errors, damage to or loss of data, 29 + * programs or equipment, and unavailability or interruption of operations. 
30 + 31 + * DISCLAIMER OF LIABILITY 32 + * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY 33 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 34 + * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND 35 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR 36 + * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE 37 + * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED 38 + * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES 39 + 40 + * You should have received a copy of the GNU General Public License 41 + * along with this program; if not, write to the Free Software 42 + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, 43 + * USA. 44 + */ 45 + 46 + #ifndef _UFS_H 47 + #define _UFS_H 48 + 49 + #define MAX_CDB_SIZE 16 50 + 51 + #define UPIU_HEADER_DWORD(byte3, byte2, byte1, byte0)\ 52 + ((byte3 << 24) | (byte2 << 16) |\ 53 + (byte1 << 8) | (byte0)) 54 + 55 + /* 56 + * UFS Protocol Information Unit related definitions 57 + */ 58 + 59 + /* Task management functions */ 60 + enum { 61 + UFS_ABORT_TASK = 0x01, 62 + UFS_ABORT_TASK_SET = 0x02, 63 + UFS_CLEAR_TASK_SET = 0x04, 64 + UFS_LOGICAL_RESET = 0x08, 65 + UFS_QUERY_TASK = 0x80, 66 + UFS_QUERY_TASK_SET = 0x81, 67 + }; 68 + 69 + /* UTP UPIU Transaction Codes Initiator to Target */ 70 + enum { 71 + UPIU_TRANSACTION_NOP_OUT = 0x00, 72 + UPIU_TRANSACTION_COMMAND = 0x01, 73 + UPIU_TRANSACTION_DATA_OUT = 0x02, 74 + UPIU_TRANSACTION_TASK_REQ = 0x04, 75 + UPIU_TRANSACTION_QUERY_REQ = 0x26, 76 + }; 77 + 78 + /* UTP UPIU Transaction Codes Target to Initiator */ 79 + enum { 80 + UPIU_TRANSACTION_NOP_IN = 0x20, 81 + UPIU_TRANSACTION_RESPONSE = 0x21, 82 + UPIU_TRANSACTION_DATA_IN = 0x22, 83 + UPIU_TRANSACTION_TASK_RSP = 0x24, 84 + UPIU_TRANSACTION_READY_XFER = 0x31, 85 + UPIU_TRANSACTION_QUERY_RSP = 0x36, 86 + }; 87 + 88 + /* UPIU Read/Write flags */ 89 + enum { 90 
+ UPIU_CMD_FLAGS_NONE = 0x00, 91 + UPIU_CMD_FLAGS_WRITE = 0x20, 92 + UPIU_CMD_FLAGS_READ = 0x40, 93 + }; 94 + 95 + /* UPIU Task Attributes */ 96 + enum { 97 + UPIU_TASK_ATTR_SIMPLE = 0x00, 98 + UPIU_TASK_ATTR_ORDERED = 0x01, 99 + UPIU_TASK_ATTR_HEADQ = 0x02, 100 + UPIU_TASK_ATTR_ACA = 0x03, 101 + }; 102 + 103 + /* UTP QUERY Transaction Specific Fields OpCode */ 104 + enum { 105 + UPIU_QUERY_OPCODE_NOP = 0x0, 106 + UPIU_QUERY_OPCODE_READ_DESC = 0x1, 107 + UPIU_QUERY_OPCODE_WRITE_DESC = 0x2, 108 + UPIU_QUERY_OPCODE_READ_ATTR = 0x3, 109 + UPIU_QUERY_OPCODE_WRITE_ATTR = 0x4, 110 + UPIU_QUERY_OPCODE_READ_FLAG = 0x5, 111 + UPIU_QUERY_OPCODE_SET_FLAG = 0x6, 112 + UPIU_QUERY_OPCODE_CLEAR_FLAG = 0x7, 113 + UPIU_QUERY_OPCODE_TOGGLE_FLAG = 0x8, 114 + }; 115 + 116 + /* UTP Transfer Request Command Type (CT) */ 117 + enum { 118 + UPIU_COMMAND_SET_TYPE_SCSI = 0x0, 119 + UPIU_COMMAND_SET_TYPE_UFS = 0x1, 120 + UPIU_COMMAND_SET_TYPE_QUERY = 0x2, 121 + }; 122 + 123 + enum { 124 + MASK_SCSI_STATUS = 0xFF, 125 + MASK_TASK_RESPONSE = 0xFF00, 126 + MASK_RSP_UPIU_RESULT = 0xFFFF, 127 + }; 128 + 129 + /* Task management service response */ 130 + enum { 131 + UPIU_TASK_MANAGEMENT_FUNC_COMPL = 0x00, 132 + UPIU_TASK_MANAGEMENT_FUNC_NOT_SUPPORTED = 0x04, 133 + UPIU_TASK_MANAGEMENT_FUNC_SUCCEEDED = 0x08, 134 + UPIU_TASK_MANAGEMENT_FUNC_FAILED = 0x05, 135 + UPIU_INCORRECT_LOGICAL_UNIT_NO = 0x09, 136 + }; 137 + /** 138 + * struct utp_upiu_header - UPIU header structure 139 + * @dword_0: UPIU header DW-0 140 + * @dword_1: UPIU header DW-1 141 + * @dword_2: UPIU header DW-2 142 + */ 143 + struct utp_upiu_header { 144 + u32 dword_0; 145 + u32 dword_1; 146 + u32 dword_2; 147 + }; 148 + 149 + /** 150 + * struct utp_upiu_cmd - Command UPIU structure 151 + * @header: UPIU header structure DW-0 to DW-2 152 + * @data_transfer_len: Data Transfer Length DW-3 153 + * @cdb: Command Descriptor Block CDB DW-4 to DW-7 154 + */ 155 + struct utp_upiu_cmd { 156 + struct utp_upiu_header header; 157 + u32 
exp_data_transfer_len; 158 + u8 cdb[MAX_CDB_SIZE]; 159 + }; 160 + 161 + /** 162 + * struct utp_upiu_rsp - Response UPIU structure 163 + * @header: UPIU header DW-0 to DW-2 164 + * @residual_transfer_count: Residual transfer count DW-3 165 + * @reserved: Reserved double words DW-4 to DW-7 166 + * @sense_data_len: Sense data length DW-8 U16 167 + * @sense_data: Sense data field DW-8 to DW-12 168 + */ 169 + struct utp_upiu_rsp { 170 + struct utp_upiu_header header; 171 + u32 residual_transfer_count; 172 + u32 reserved[4]; 173 + u16 sense_data_len; 174 + u8 sense_data[18]; 175 + }; 176 + 177 + /** 178 + * struct utp_upiu_task_req - Task request UPIU structure 179 + * @header: UPIU header structure DW-0 to DW-2 180 + * @input_param1: Input parameter 1 DW-3 181 + * @input_param2: Input parameter 2 DW-4 182 + * @input_param3: Input parameter 3 DW-5 183 + * @reserved: Reserved double words DW-6 to DW-7 184 + */ 185 + struct utp_upiu_task_req { 186 + struct utp_upiu_header header; 187 + u32 input_param1; 188 + u32 input_param2; 189 + u32 input_param3; 190 + u32 reserved[2]; 191 + }; 192 + 193 + /** 194 + * struct utp_upiu_task_rsp - Task Management Response UPIU structure 195 + * @header: UPIU header structure DW-0 to DW-2 196 + * @output_param1: Output parameter 1 DW-3 197 + * @output_param2: Output parameter 2 DW-4 198 + * @reserved: Reserved double words DW-5 to DW-7 199 + */ 200 + struct utp_upiu_task_rsp { 201 + struct utp_upiu_header header; 202 + u32 output_param1; 203 + u32 output_param2; 204 + u32 reserved[3]; 205 + }; 206 + 207 + #endif /* End of Header */
+1978
drivers/scsi/ufs/ufshcd.c
··· 1 + /* 2 + * Universal Flash Storage Host controller driver 3 + * 4 + * This code is based on drivers/scsi/ufs/ufshcd.c 5 + * Copyright (C) 2011-2012 Samsung India Software Operations 6 + * 7 + * Santosh Yaraganavi <santosh.sy@samsung.com> 8 + * Vinayak Holikatti <h.vinayak@samsung.com> 9 + * 10 + * This program is free software; you can redistribute it and/or 11 + * modify it under the terms of the GNU General Public License 12 + * as published by the Free Software Foundation; either version 2 13 + * of the License, or (at your option) any later version. 14 + * 15 + * This program is distributed in the hope that it will be useful, 16 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 + * GNU General Public License for more details. 19 + * 20 + * NO WARRANTY 21 + * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR 22 + * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT 23 + * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, 24 + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is 25 + * solely responsible for determining the appropriateness of using and 26 + * distributing the Program and assumes all risks associated with its 27 + * exercise of rights under this Agreement, including but not limited to 28 + * the risks and costs of program errors, damage to or loss of data, 29 + * programs or equipment, and unavailability or interruption of operations. 
30 + 31 + * DISCLAIMER OF LIABILITY 32 + * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY 33 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 34 + * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND 35 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR 36 + * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE 37 + * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED 38 + * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES 39 + 40 + * You should have received a copy of the GNU General Public License 41 + * along with this program; if not, write to the Free Software 42 + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, 43 + * USA. 44 + */ 45 + 46 + #include <linux/module.h> 47 + #include <linux/kernel.h> 48 + #include <linux/init.h> 49 + #include <linux/pci.h> 50 + #include <linux/interrupt.h> 51 + #include <linux/io.h> 52 + #include <linux/delay.h> 53 + #include <linux/slab.h> 54 + #include <linux/spinlock.h> 55 + #include <linux/workqueue.h> 56 + #include <linux/errno.h> 57 + #include <linux/types.h> 58 + #include <linux/wait.h> 59 + #include <linux/bitops.h> 60 + 61 + #include <asm/irq.h> 62 + #include <asm/byteorder.h> 63 + #include <scsi/scsi.h> 64 + #include <scsi/scsi_cmnd.h> 65 + #include <scsi/scsi_host.h> 66 + #include <scsi/scsi_tcq.h> 67 + #include <scsi/scsi_dbg.h> 68 + #include <scsi/scsi_eh.h> 69 + 70 + #include "ufs.h" 71 + #include "ufshci.h" 72 + 73 + #define UFSHCD "ufshcd" 74 + #define UFSHCD_DRIVER_VERSION "0.1" 75 + 76 + enum { 77 + UFSHCD_MAX_CHANNEL = 0, 78 + UFSHCD_MAX_ID = 1, 79 + UFSHCD_MAX_LUNS = 8, 80 + UFSHCD_CMD_PER_LUN = 32, 81 + UFSHCD_CAN_QUEUE = 32, 82 + }; 83 + 84 + /* UFSHCD states */ 85 + enum { 86 + UFSHCD_STATE_OPERATIONAL, 87 + UFSHCD_STATE_RESET, 88 + UFSHCD_STATE_ERROR, 89 + }; 90 + 91 + /* Interrupt configuration options */ 92 + enum { 93 + 
UFSHCD_INT_DISABLE, 94 + UFSHCD_INT_ENABLE, 95 + UFSHCD_INT_CLEAR, 96 + }; 97 + 98 + /* Interrupt aggregation options */ 99 + enum { 100 + INT_AGGR_RESET, 101 + INT_AGGR_CONFIG, 102 + }; 103 + 104 + /** 105 + * struct uic_command - UIC command structure 106 + * @command: UIC command 107 + * @argument1: UIC command argument 1 108 + * @argument2: UIC command argument 2 109 + * @argument3: UIC command argument 3 110 + * @cmd_active: Indicate if UIC command is outstanding 111 + * @result: UIC command result 112 + */ 113 + struct uic_command { 114 + u32 command; 115 + u32 argument1; 116 + u32 argument2; 117 + u32 argument3; 118 + int cmd_active; 119 + int result; 120 + }; 121 + 122 + /** 123 + * struct ufs_hba - per adapter private structure 124 + * @mmio_base: UFSHCI base register address 125 + * @ucdl_base_addr: UFS Command Descriptor base address 126 + * @utrdl_base_addr: UTP Transfer Request Descriptor base address 127 + * @utmrdl_base_addr: UTP Task Management Descriptor base address 128 + * @ucdl_dma_addr: UFS Command Descriptor DMA address 129 + * @utrdl_dma_addr: UTRDL DMA address 130 + * @utmrdl_dma_addr: UTMRDL DMA address 131 + * @host: Scsi_Host instance of the driver 132 + * @pdev: PCI device handle 133 + * @lrb: local reference block 134 + * @outstanding_tasks: Bits representing outstanding task requests 135 + * @outstanding_reqs: Bits representing outstanding transfer requests 136 + * @capabilities: UFS Controller Capabilities 137 + * @nutrs: Transfer Request Queue depth supported by controller 138 + * @nutmrs: Task Management Queue depth supported by controller 139 + * @active_uic_cmd: handle of active UIC command 140 + * @ufshcd_tm_wait_queue: wait queue for task management 141 + * @tm_condition: condition variable for task management 142 + * @ufshcd_state: UFSHCD states 143 + * @int_enable_mask: Interrupt Mask Bits 144 + * @uic_workq: Work queue for UIC completion handling 145 + * @feh_workq: Work queue for fatal controller error handling 146 + * 
@errors: HBA errors 147 + */ 148 + struct ufs_hba { 149 + void __iomem *mmio_base; 150 + 151 + /* Virtual memory reference */ 152 + struct utp_transfer_cmd_desc *ucdl_base_addr; 153 + struct utp_transfer_req_desc *utrdl_base_addr; 154 + struct utp_task_req_desc *utmrdl_base_addr; 155 + 156 + /* DMA memory reference */ 157 + dma_addr_t ucdl_dma_addr; 158 + dma_addr_t utrdl_dma_addr; 159 + dma_addr_t utmrdl_dma_addr; 160 + 161 + struct Scsi_Host *host; 162 + struct pci_dev *pdev; 163 + 164 + struct ufshcd_lrb *lrb; 165 + 166 + unsigned long outstanding_tasks; 167 + unsigned long outstanding_reqs; 168 + 169 + u32 capabilities; 170 + int nutrs; 171 + int nutmrs; 172 + u32 ufs_version; 173 + 174 + struct uic_command active_uic_cmd; 175 + wait_queue_head_t ufshcd_tm_wait_queue; 176 + unsigned long tm_condition; 177 + 178 + u32 ufshcd_state; 179 + u32 int_enable_mask; 180 + 181 + /* Work Queues */ 182 + struct work_struct uic_workq; 183 + struct work_struct feh_workq; 184 + 185 + /* HBA Errors */ 186 + u32 errors; 187 + }; 188 + 189 + /** 190 + * struct ufshcd_lrb - local reference block 191 + * @utr_descriptor_ptr: UTRD address of the command 192 + * @ucd_cmd_ptr: UCD address of the command 193 + * @ucd_rsp_ptr: Response UPIU address for this command 194 + * @ucd_prdt_ptr: PRDT address of the command 195 + * @cmd: pointer to SCSI command 196 + * @sense_buffer: pointer to sense buffer address of the SCSI command 197 + * @sense_bufflen: Length of the sense buffer 198 + * @scsi_status: SCSI status of the command 199 + * @command_type: SCSI, UFS, Query. 
200 + * @task_tag: Task tag of the command 201 + * @lun: LUN of the command 202 + */ 203 + struct ufshcd_lrb { 204 + struct utp_transfer_req_desc *utr_descriptor_ptr; 205 + struct utp_upiu_cmd *ucd_cmd_ptr; 206 + struct utp_upiu_rsp *ucd_rsp_ptr; 207 + struct ufshcd_sg_entry *ucd_prdt_ptr; 208 + 209 + struct scsi_cmnd *cmd; 210 + u8 *sense_buffer; 211 + unsigned int sense_bufflen; 212 + int scsi_status; 213 + 214 + int command_type; 215 + int task_tag; 216 + unsigned int lun; 217 + }; 218 + 219 + /** 220 + * ufshcd_get_ufs_version - Get the UFS version supported by the HBA 221 + * @hba - Pointer to adapter instance 222 + * 223 + * Returns UFSHCI version supported by the controller 224 + */ 225 + static inline u32 ufshcd_get_ufs_version(struct ufs_hba *hba) 226 + { 227 + return readl(hba->mmio_base + REG_UFS_VERSION); 228 + } 229 + 230 + /** 231 + * ufshcd_is_device_present - Check if any device connected to 232 + * the host controller 233 + * @reg_hcs - host controller status register value 234 + * 235 + * Returns 0 if device present, non-zero if no device detected 236 + */ 237 + static inline int ufshcd_is_device_present(u32 reg_hcs) 238 + { 239 + return (DEVICE_PRESENT & reg_hcs) ? 
0 : -1; 240 + } 241 + 242 + /** 243 + * ufshcd_get_tr_ocs - Get the UTRD Overall Command Status 244 + * @lrb: pointer to local command reference block 245 + * 246 + * This function is used to get the OCS field from UTRD 247 + * Returns the OCS field in the UTRD 248 + */ 249 + static inline int ufshcd_get_tr_ocs(struct ufshcd_lrb *lrbp) 250 + { 251 + return lrbp->utr_descriptor_ptr->header.dword_2 & MASK_OCS; 252 + } 253 + 254 + /** 255 + * ufshcd_get_tmr_ocs - Get the UTMRD Overall Command Status 256 + * @task_req_descp: pointer to utp_task_req_desc structure 257 + * 258 + * This function is used to get the OCS field from UTMRD 259 + * Returns the OCS field in the UTMRD 260 + */ 261 + static inline int 262 + ufshcd_get_tmr_ocs(struct utp_task_req_desc *task_req_descp) 263 + { 264 + return task_req_descp->header.dword_2 & MASK_OCS; 265 + } 266 + 267 + /** 268 + * ufshcd_get_tm_free_slot - get a free slot for task management request 269 + * @hba: per adapter instance 270 + * 271 + * Returns maximum number of task management request slots in case of 272 + * task management queue full or returns the free slot number 273 + */ 274 + static inline int ufshcd_get_tm_free_slot(struct ufs_hba *hba) 275 + { 276 + return find_first_zero_bit(&hba->outstanding_tasks, hba->nutmrs); 277 + } 278 + 279 + /** 280 + * ufshcd_utrl_clear - Clear a bit in UTRLCLR register 281 + * @hba: per adapter instance 282 + * @pos: position of the bit to be cleared 283 + */ 284 + static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 pos) 285 + { 286 + writel(~(1 << pos), 287 + (hba->mmio_base + REG_UTP_TRANSFER_REQ_LIST_CLEAR)); 288 + } 289 + 290 + /** 291 + * ufshcd_get_lists_status - Check UCRDY, UTRLRDY and UTMRLRDY 292 + * @reg: Register value of host controller status 293 + * 294 + * Returns integer, 0 on Success and positive value if failed 295 + */ 296 + static inline int ufshcd_get_lists_status(u32 reg) 297 + { 298 + /* 299 + * The mask 0xFF is for the following HCS register bits 300 
+ * Bit Description 301 + * 0 Device Present 302 + * 1 UTRLRDY 303 + * 2 UTMRLRDY 304 + * 3 UCRDY 305 + * 4 HEI 306 + * 5 DEI 307 + * 6-7 reserved 308 + */ 309 + return (((reg) & (0xFF)) >> 1) ^ (0x07); 310 + } 311 + 312 + /** 313 + * ufshcd_get_uic_cmd_result - Get the UIC command result 314 + * @hba: Pointer to adapter instance 315 + * 316 + * This function gets the result of UIC command completion 317 + * Returns 0 on success, non zero value on error 318 + */ 319 + static inline int ufshcd_get_uic_cmd_result(struct ufs_hba *hba) 320 + { 321 + return readl(hba->mmio_base + REG_UIC_COMMAND_ARG_2) & 322 + MASK_UIC_COMMAND_RESULT; 323 + } 324 + 325 + /** 326 + * ufshcd_free_hba_memory - Free allocated memory for LRB, request 327 + * and task lists 328 + * @hba: Pointer to adapter instance 329 + */ 330 + static inline void ufshcd_free_hba_memory(struct ufs_hba *hba) 331 + { 332 + size_t utmrdl_size, utrdl_size, ucdl_size; 333 + 334 + kfree(hba->lrb); 335 + 336 + if (hba->utmrdl_base_addr) { 337 + utmrdl_size = sizeof(struct utp_task_req_desc) * hba->nutmrs; 338 + dma_free_coherent(&hba->pdev->dev, utmrdl_size, 339 + hba->utmrdl_base_addr, hba->utmrdl_dma_addr); 340 + } 341 + 342 + if (hba->utrdl_base_addr) { 343 + utrdl_size = 344 + (sizeof(struct utp_transfer_req_desc) * hba->nutrs); 345 + dma_free_coherent(&hba->pdev->dev, utrdl_size, 346 + hba->utrdl_base_addr, hba->utrdl_dma_addr); 347 + } 348 + 349 + if (hba->ucdl_base_addr) { 350 + ucdl_size = 351 + (sizeof(struct utp_transfer_cmd_desc) * hba->nutrs); 352 + dma_free_coherent(&hba->pdev->dev, ucdl_size, 353 + hba->ucdl_base_addr, hba->ucdl_dma_addr); 354 + } 355 + } 356 + 357 + /** 358 + * ufshcd_is_valid_req_rsp - checks if controller TR response is valid 359 + * @ucd_rsp_ptr: pointer to response UPIU 360 + * 361 + * This function checks the response UPIU for valid transaction type in 362 + * response field 363 + * Returns 0 on success, non-zero on failure 364 + */ 365 + static inline int 366 + 
ufshcd_is_valid_req_rsp(struct utp_upiu_rsp *ucd_rsp_ptr) 367 + { 368 + return ((be32_to_cpu(ucd_rsp_ptr->header.dword_0) >> 24) == 369 + UPIU_TRANSACTION_RESPONSE) ? 0 : DID_ERROR << 16; 370 + } 371 + 372 + /** 373 + * ufshcd_get_rsp_upiu_result - Get the result from response UPIU 374 + * @ucd_rsp_ptr: pointer to response UPIU 375 + * 376 + * This function gets the response status and scsi_status from response UPIU 377 + * Returns the response result code. 378 + */ 379 + static inline int 380 + ufshcd_get_rsp_upiu_result(struct utp_upiu_rsp *ucd_rsp_ptr) 381 + { 382 + return be32_to_cpu(ucd_rsp_ptr->header.dword_1) & MASK_RSP_UPIU_RESULT; 383 + } 384 + 385 + /** 386 + * ufshcd_config_int_aggr - Configure interrupt aggregation values. 387 + * Currently there is no use case where we want to configure 388 + * interrupt aggregation dynamically. So to configure interrupt 389 + * aggregation, #define INT_AGGR_COUNTER_THRESHOLD_VALUE and 390 + * INT_AGGR_TIMEOUT_VALUE are used. 391 + * @hba: per adapter instance 392 + * @option: Interrupt aggregation option 393 + */ 394 + static inline void 395 + ufshcd_config_int_aggr(struct ufs_hba *hba, int option) 396 + { 397 + switch (option) { 398 + case INT_AGGR_RESET: 399 + writel((INT_AGGR_ENABLE | 400 + INT_AGGR_COUNTER_AND_TIMER_RESET), 401 + (hba->mmio_base + 402 + REG_UTP_TRANSFER_REQ_INT_AGG_CONTROL)); 403 + break; 404 + case INT_AGGR_CONFIG: 405 + writel((INT_AGGR_ENABLE | 406 + INT_AGGR_PARAM_WRITE | 407 + INT_AGGR_COUNTER_THRESHOLD_VALUE | 408 + INT_AGGR_TIMEOUT_VALUE), 409 + (hba->mmio_base + 410 + REG_UTP_TRANSFER_REQ_INT_AGG_CONTROL)); 411 + break; 412 + } 413 + } 414 + 415 + /** 416 + * ufshcd_enable_run_stop_reg - Enable run-stop registers. 417 + * When the run-stop registers are set to 1, they indicate to the 418 + * host controller that it can process requests 419 + * @hba: per adapter instance 420 + */ 421 + static void ufshcd_enable_run_stop_reg(struct ufs_hba *hba) 422 + { 423 + 
writel(UTP_TASK_REQ_LIST_RUN_STOP_BIT, 424 + (hba->mmio_base + 425 + REG_UTP_TASK_REQ_LIST_RUN_STOP)); 426 + writel(UTP_TRANSFER_REQ_LIST_RUN_STOP_BIT, 427 + (hba->mmio_base + 428 + REG_UTP_TRANSFER_REQ_LIST_RUN_STOP)); 429 + } 430 + 431 + /** 432 + * ufshcd_hba_stop - Send controller to reset state 433 + * @hba: per adapter instance 434 + */ 435 + static inline void ufshcd_hba_stop(struct ufs_hba *hba) 436 + { 437 + writel(CONTROLLER_DISABLE, (hba->mmio_base + REG_CONTROLLER_ENABLE)); 438 + } 439 + 440 + /** 441 + * ufshcd_hba_start - Start controller initialization sequence 442 + * @hba: per adapter instance 443 + */ 444 + static inline void ufshcd_hba_start(struct ufs_hba *hba) 445 + { 446 + writel(CONTROLLER_ENABLE , (hba->mmio_base + REG_CONTROLLER_ENABLE)); 447 + } 448 + 449 + /** 450 + * ufshcd_is_hba_active - Get controller state 451 + * @hba: per adapter instance 452 + * 453 + * Returns zero if controller is active, 1 otherwise 454 + */ 455 + static inline int ufshcd_is_hba_active(struct ufs_hba *hba) 456 + { 457 + return (readl(hba->mmio_base + REG_CONTROLLER_ENABLE) & 0x1) ? 
0 : 1; 458 + } 459 + 460 + /** 461 + * ufshcd_send_command - Send SCSI or device management commands 462 + * @hba: per adapter instance 463 + * @task_tag: Task tag of the command 464 + */ 465 + static inline 466 + void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag) 467 + { 468 + __set_bit(task_tag, &hba->outstanding_reqs); 469 + writel((1 << task_tag), 470 + (hba->mmio_base + REG_UTP_TRANSFER_REQ_DOOR_BELL)); 471 + } 472 + 473 + /** 474 + * ufshcd_copy_sense_data - Copy sense data in case of check condition 475 + * @lrb - pointer to local reference block 476 + */ 477 + static inline void ufshcd_copy_sense_data(struct ufshcd_lrb *lrbp) 478 + { 479 + int len; 480 + if (lrbp->sense_buffer) { 481 + len = be16_to_cpu(lrbp->ucd_rsp_ptr->sense_data_len); 482 + memcpy(lrbp->sense_buffer, 483 + lrbp->ucd_rsp_ptr->sense_data, 484 + min_t(int, len, SCSI_SENSE_BUFFERSIZE)); 485 + } 486 + } 487 + 488 + /** 489 + * ufshcd_hba_capabilities - Read controller capabilities 490 + * @hba: per adapter instance 491 + */ 492 + static inline void ufshcd_hba_capabilities(struct ufs_hba *hba) 493 + { 494 + hba->capabilities = 495 + readl(hba->mmio_base + REG_CONTROLLER_CAPABILITIES); 496 + 497 + /* nutrs and nutmrs are 0 based values */ 498 + hba->nutrs = (hba->capabilities & MASK_TRANSFER_REQUESTS_SLOTS) + 1; 499 + hba->nutmrs = 500 + ((hba->capabilities & MASK_TASK_MANAGEMENT_REQUEST_SLOTS) >> 16) + 1; 501 + } 502 + 503 + /** 504 + * ufshcd_send_uic_command - Send UIC commands to unipro layers 505 + * @hba: per adapter instance 506 + * @uic_command: UIC command 507 + */ 508 + static inline void 509 + ufshcd_send_uic_command(struct ufs_hba *hba, struct uic_command *uic_cmnd) 510 + { 511 + /* Write Args */ 512 + writel(uic_cmnd->argument1, 513 + (hba->mmio_base + REG_UIC_COMMAND_ARG_1)); 514 + writel(uic_cmnd->argument2, 515 + (hba->mmio_base + REG_UIC_COMMAND_ARG_2)); 516 + writel(uic_cmnd->argument3, 517 + (hba->mmio_base + REG_UIC_COMMAND_ARG_3)); 518 + 519 + /* Write 
UIC Cmd */ 520 + writel((uic_cmnd->command & COMMAND_OPCODE_MASK), 521 + (hba->mmio_base + REG_UIC_COMMAND)); 522 + } 523 + 524 + /** 525 + * ufshcd_map_sg - Map scatter-gather list to prdt 526 + * @lrbp - pointer to local reference block 527 + * 528 + * Returns 0 in case of success, non-zero value in case of failure 529 + */ 530 + static int ufshcd_map_sg(struct ufshcd_lrb *lrbp) 531 + { 532 + struct ufshcd_sg_entry *prd_table; 533 + struct scatterlist *sg; 534 + struct scsi_cmnd *cmd; 535 + int sg_segments; 536 + int i; 537 + 538 + cmd = lrbp->cmd; 539 + sg_segments = scsi_dma_map(cmd); 540 + if (sg_segments < 0) 541 + return sg_segments; 542 + 543 + if (sg_segments) { 544 + lrbp->utr_descriptor_ptr->prd_table_length = 545 + cpu_to_le16((u16) (sg_segments)); 546 + 547 + prd_table = (struct ufshcd_sg_entry *)lrbp->ucd_prdt_ptr; 548 + 549 + scsi_for_each_sg(cmd, sg, sg_segments, i) { 550 + prd_table[i].size = 551 + cpu_to_le32(((u32) sg_dma_len(sg))-1); 552 + prd_table[i].base_addr = 553 + cpu_to_le32(lower_32_bits(sg->dma_address)); 554 + prd_table[i].upper_addr = 555 + cpu_to_le32(upper_32_bits(sg->dma_address)); 556 + } 557 + } else { 558 + lrbp->utr_descriptor_ptr->prd_table_length = 0; 559 + } 560 + 561 + return 0; 562 + } 563 + 564 + /** 565 + * ufshcd_int_config - enable/disable interrupts 566 + * @hba: per adapter instance 567 + * @option: interrupt option 568 + */ 569 + static void ufshcd_int_config(struct ufs_hba *hba, u32 option) 570 + { 571 + switch (option) { 572 + case UFSHCD_INT_ENABLE: 573 + writel(hba->int_enable_mask, 574 + (hba->mmio_base + REG_INTERRUPT_ENABLE)); 575 + break; 576 + case UFSHCD_INT_DISABLE: 577 + if (hba->ufs_version == UFSHCI_VERSION_10) 578 + writel(INTERRUPT_DISABLE_MASK_10, 579 + (hba->mmio_base + REG_INTERRUPT_ENABLE)); 580 + else 581 + writel(INTERRUPT_DISABLE_MASK_11, 582 + (hba->mmio_base + REG_INTERRUPT_ENABLE)); 583 + break; 584 + } 585 + } 586 + 587 + /** 588 + * ufshcd_compose_upiu - form UFS Protocol Information 
Unit(UPIU) 589 + * @lrb - pointer to local reference block 590 + */ 591 + static void ufshcd_compose_upiu(struct ufshcd_lrb *lrbp) 592 + { 593 + struct utp_transfer_req_desc *req_desc; 594 + struct utp_upiu_cmd *ucd_cmd_ptr; 595 + u32 data_direction; 596 + u32 upiu_flags; 597 + 598 + ucd_cmd_ptr = lrbp->ucd_cmd_ptr; 599 + req_desc = lrbp->utr_descriptor_ptr; 600 + 601 + switch (lrbp->command_type) { 602 + case UTP_CMD_TYPE_SCSI: 603 + if (lrbp->cmd->sc_data_direction == DMA_FROM_DEVICE) { 604 + data_direction = UTP_DEVICE_TO_HOST; 605 + upiu_flags = UPIU_CMD_FLAGS_READ; 606 + } else if (lrbp->cmd->sc_data_direction == DMA_TO_DEVICE) { 607 + data_direction = UTP_HOST_TO_DEVICE; 608 + upiu_flags = UPIU_CMD_FLAGS_WRITE; 609 + } else { 610 + data_direction = UTP_NO_DATA_TRANSFER; 611 + upiu_flags = UPIU_CMD_FLAGS_NONE; 612 + } 613 + 614 + /* Transfer request descriptor header fields */ 615 + req_desc->header.dword_0 = 616 + cpu_to_le32(data_direction | UTP_SCSI_COMMAND); 617 + 618 + /* 619 + * assigning invalid value for command status. 
Controller 620 + * updates OCS on command completion, with the command 621 + * status 622 + */ 623 + req_desc->header.dword_2 = 624 + cpu_to_le32(OCS_INVALID_COMMAND_STATUS); 625 + 626 + /* command descriptor fields */ 627 + ucd_cmd_ptr->header.dword_0 = 628 + cpu_to_be32(UPIU_HEADER_DWORD(UPIU_TRANSACTION_COMMAND, 629 + upiu_flags, 630 + lrbp->lun, 631 + lrbp->task_tag)); 632 + ucd_cmd_ptr->header.dword_1 = 633 + cpu_to_be32( 634 + UPIU_HEADER_DWORD(UPIU_COMMAND_SET_TYPE_SCSI, 635 + 0, 636 + 0, 637 + 0)); 638 + 639 + /* Total EHS length and Data segment length will be zero */ 640 + ucd_cmd_ptr->header.dword_2 = 0; 641 + 642 + ucd_cmd_ptr->exp_data_transfer_len = 643 + cpu_to_be32(lrbp->cmd->transfersize); 644 + 645 + memcpy(ucd_cmd_ptr->cdb, 646 + lrbp->cmd->cmnd, 647 + (min_t(unsigned short, 648 + lrbp->cmd->cmd_len, 649 + MAX_CDB_SIZE))); 650 + break; 651 + case UTP_CMD_TYPE_DEV_MANAGE: 652 + /* For query function implementation */ 653 + break; 654 + case UTP_CMD_TYPE_UFS: 655 + /* For UFS native command implementation */ 656 + break; 657 + } /* end of switch */ 658 + } 659 + 660 + /** 661 + * ufshcd_queuecommand - main entry point for SCSI requests 662 + * @cmd: command from SCSI Midlayer 663 + * @done: call back function 664 + * 665 + * Returns 0 for success, non-zero in case of failure 666 + */ 667 + static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) 668 + { 669 + struct ufshcd_lrb *lrbp; 670 + struct ufs_hba *hba; 671 + unsigned long flags; 672 + int tag; 673 + int err = 0; 674 + 675 + hba = shost_priv(host); 676 + 677 + tag = cmd->request->tag; 678 + 679 + if (hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL) { 680 + err = SCSI_MLQUEUE_HOST_BUSY; 681 + goto out; 682 + } 683 + 684 + lrbp = &hba->lrb[tag]; 685 + 686 + lrbp->cmd = cmd; 687 + lrbp->sense_bufflen = SCSI_SENSE_BUFFERSIZE; 688 + lrbp->sense_buffer = cmd->sense_buffer; 689 + lrbp->task_tag = tag; 690 + lrbp->lun = cmd->device->lun; 691 + 692 + lrbp->command_type = 
UTP_CMD_TYPE_SCSI; 693 + 694 + /* form UPIU before issuing the command */ 695 + ufshcd_compose_upiu(lrbp); 696 + err = ufshcd_map_sg(lrbp); 697 + if (err) 698 + goto out; 699 + 700 + /* issue command to the controller */ 701 + spin_lock_irqsave(hba->host->host_lock, flags); 702 + ufshcd_send_command(hba, tag); 703 + spin_unlock_irqrestore(hba->host->host_lock, flags); 704 + out: 705 + return err; 706 + } 707 + 708 + /** 709 + * ufshcd_memory_alloc - allocate memory for host memory space data structures 710 + * @hba: per adapter instance 711 + * 712 + * 1. Allocate DMA memory for Command Descriptor array 713 + * Each command descriptor consist of Command UPIU, Response UPIU and PRDT 714 + * 2. Allocate DMA memory for UTP Transfer Request Descriptor List (UTRDL). 715 + * 3. Allocate DMA memory for UTP Task Management Request Descriptor List 716 + * (UTMRDL) 717 + * 4. Allocate memory for local reference block(lrb). 718 + * 719 + * Returns 0 for success, non-zero in case of failure 720 + */ 721 + static int ufshcd_memory_alloc(struct ufs_hba *hba) 722 + { 723 + size_t utmrdl_size, utrdl_size, ucdl_size; 724 + 725 + /* Allocate memory for UTP command descriptors */ 726 + ucdl_size = (sizeof(struct utp_transfer_cmd_desc) * hba->nutrs); 727 + hba->ucdl_base_addr = dma_alloc_coherent(&hba->pdev->dev, 728 + ucdl_size, 729 + &hba->ucdl_dma_addr, 730 + GFP_KERNEL); 731 + 732 + /* 733 + * UFSHCI requires UTP command descriptor to be 128 byte aligned. 
734 + * make sure hba->ucdl_dma_addr is aligned to PAGE_SIZE 735 + * if hba->ucdl_dma_addr is aligned to PAGE_SIZE, then it will 736 + * be aligned to 128 bytes as well 737 + */ 738 + if (!hba->ucdl_base_addr || 739 + WARN_ON(hba->ucdl_dma_addr & (PAGE_SIZE - 1))) { 740 + dev_err(&hba->pdev->dev, 741 + "Command Descriptor Memory allocation failed\n"); 742 + goto out; 743 + } 744 + 745 + /* 746 + * Allocate memory for UTP Transfer descriptors 747 + * UFSHCI requires 1024 byte alignment of UTRD 748 + */ 749 + utrdl_size = (sizeof(struct utp_transfer_req_desc) * hba->nutrs); 750 + hba->utrdl_base_addr = dma_alloc_coherent(&hba->pdev->dev, 751 + utrdl_size, 752 + &hba->utrdl_dma_addr, 753 + GFP_KERNEL); 754 + if (!hba->utrdl_base_addr || 755 + WARN_ON(hba->utrdl_dma_addr & (PAGE_SIZE - 1))) { 756 + dev_err(&hba->pdev->dev, 757 + "Transfer Descriptor Memory allocation failed\n"); 758 + goto out; 759 + } 760 + 761 + /* 762 + * Allocate memory for UTP Task Management descriptors 763 + * UFSHCI requires 1024 byte alignment of UTMRD 764 + */ 765 + utmrdl_size = sizeof(struct utp_task_req_desc) * hba->nutmrs; 766 + hba->utmrdl_base_addr = dma_alloc_coherent(&hba->pdev->dev, 767 + utmrdl_size, 768 + &hba->utmrdl_dma_addr, 769 + GFP_KERNEL); 770 + if (!hba->utmrdl_base_addr || 771 + WARN_ON(hba->utmrdl_dma_addr & (PAGE_SIZE - 1))) { 772 + dev_err(&hba->pdev->dev, 773 + "Task Management Descriptor Memory allocation failed\n"); 774 + goto out; 775 + } 776 + 777 + /* Allocate memory for local reference block */ 778 + hba->lrb = kcalloc(hba->nutrs, sizeof(struct ufshcd_lrb), GFP_KERNEL); 779 + if (!hba->lrb) { 780 + dev_err(&hba->pdev->dev, "LRB Memory allocation failed\n"); 781 + goto out; 782 + } 783 + return 0; 784 + out: 785 + ufshcd_free_hba_memory(hba); 786 + return -ENOMEM; 787 + } 788 + 789 + /** 790 + * ufshcd_host_memory_configure - configure local reference block with 791 + * memory offsets 792 + * @hba: per adapter instance 793 + * 794 + * Configure Host memory space 
795 + * 1. Update Corresponding UTRD.UCDBA and UTRD.UCDBAU with UCD DMA 796 + * address. 797 + * 2. Update each UTRD with Response UPIU offset, Response UPIU length 798 + * and PRDT offset. 799 + * 3. Save the corresponding addresses of UTRD, UCD.CMD, UCD.RSP and UCD.PRDT 800 + * into local reference block. 801 + */ 802 + static void ufshcd_host_memory_configure(struct ufs_hba *hba) 803 + { 804 + struct utp_transfer_cmd_desc *cmd_descp; 805 + struct utp_transfer_req_desc *utrdlp; 806 + dma_addr_t cmd_desc_dma_addr; 807 + dma_addr_t cmd_desc_element_addr; 808 + u16 response_offset; 809 + u16 prdt_offset; 810 + int cmd_desc_size; 811 + int i; 812 + 813 + utrdlp = hba->utrdl_base_addr; 814 + cmd_descp = hba->ucdl_base_addr; 815 + 816 + response_offset = 817 + offsetof(struct utp_transfer_cmd_desc, response_upiu); 818 + prdt_offset = 819 + offsetof(struct utp_transfer_cmd_desc, prd_table); 820 + 821 + cmd_desc_size = sizeof(struct utp_transfer_cmd_desc); 822 + cmd_desc_dma_addr = hba->ucdl_dma_addr; 823 + 824 + for (i = 0; i < hba->nutrs; i++) { 825 + /* Configure UTRD with command descriptor base address */ 826 + cmd_desc_element_addr = 827 + (cmd_desc_dma_addr + (cmd_desc_size * i)); 828 + utrdlp[i].command_desc_base_addr_lo = 829 + cpu_to_le32(lower_32_bits(cmd_desc_element_addr)); 830 + utrdlp[i].command_desc_base_addr_hi = 831 + cpu_to_le32(upper_32_bits(cmd_desc_element_addr)); 832 + 833 + /* Response upiu and prdt offset should be in double words */ 834 + utrdlp[i].response_upiu_offset = 835 + cpu_to_le16((response_offset >> 2)); 836 + utrdlp[i].prd_table_offset = 837 + cpu_to_le16((prdt_offset >> 2)); 838 + utrdlp[i].response_upiu_length = 839 + cpu_to_le16(ALIGNED_UPIU_SIZE); 840 + 841 + hba->lrb[i].utr_descriptor_ptr = (utrdlp + i); 842 + hba->lrb[i].ucd_cmd_ptr = 843 + (struct utp_upiu_cmd *)(cmd_descp + i); 844 + hba->lrb[i].ucd_rsp_ptr = 845 + (struct utp_upiu_rsp *)cmd_descp[i].response_upiu; 846 + hba->lrb[i].ucd_prdt_ptr = 847 + (struct ufshcd_sg_entry 
*)cmd_descp[i].prd_table; 848 + } 849 + } 850 + 851 + /** 852 + * ufshcd_dme_link_startup - Notify Unipro to perform link startup 853 + * @hba: per adapter instance 854 + * 855 + * UIC_CMD_DME_LINK_STARTUP command must be issued to Unipro layer, 856 + * in order to initialize the Unipro link startup procedure. 857 + * Once the Unipro links are up, the device connected to the controller 858 + * is detected. 859 + * 860 + * Returns 0 on success, non-zero value on failure 861 + */ 862 + static int ufshcd_dme_link_startup(struct ufs_hba *hba) 863 + { 864 + struct uic_command *uic_cmd; 865 + unsigned long flags; 866 + 867 + /* check if controller is ready to accept UIC commands */ 868 + if (((readl(hba->mmio_base + REG_CONTROLLER_STATUS)) & 869 + UIC_COMMAND_READY) == 0x0) { 870 + dev_err(&hba->pdev->dev, 871 + "Controller not ready" 872 + " to accept UIC commands\n"); 873 + return -EIO; 874 + } 875 + 876 + spin_lock_irqsave(hba->host->host_lock, flags); 877 + 878 + /* form UIC command */ 879 + uic_cmd = &hba->active_uic_cmd; 880 + uic_cmd->command = UIC_CMD_DME_LINK_STARTUP; 881 + uic_cmd->argument1 = 0; 882 + uic_cmd->argument2 = 0; 883 + uic_cmd->argument3 = 0; 884 + 885 + /* enable UIC related interrupts */ 886 + hba->int_enable_mask |= UIC_COMMAND_COMPL; 887 + ufshcd_int_config(hba, UFSHCD_INT_ENABLE); 888 + 889 + /* sending UIC commands to controller */ 890 + ufshcd_send_uic_command(hba, uic_cmd); 891 + spin_unlock_irqrestore(hba->host->host_lock, flags); 892 + return 0; 893 + } 894 + 895 + /** 896 + * ufshcd_make_hba_operational - Make UFS controller operational 897 + * @hba: per adapter instance 898 + * 899 + * To bring UFS host controller to operational state, 900 + * 1. Check if device is present 901 + * 2. Configure run-stop-registers 902 + * 3. Enable required interrupts 903 + * 4. 
Configure interrupt aggregation 904 + * 905 + * Returns 0 on success, non-zero value on failure 906 + */ 907 + static int ufshcd_make_hba_operational(struct ufs_hba *hba) 908 + { 909 + int err = 0; 910 + u32 reg; 911 + 912 + /* check if device present */ 913 + reg = readl((hba->mmio_base + REG_CONTROLLER_STATUS)); 914 + if (ufshcd_is_device_present(reg)) { 915 + dev_err(&hba->pdev->dev, "cc: Device not present\n"); 916 + err = -ENXIO; 917 + goto out; 918 + } 919 + 920 + /* 921 + * UCRDY, UTMRLDY and UTRLRDY bits must be 1 922 + * DEI, HEI bits must be 0 923 + */ 924 + if (!(ufshcd_get_lists_status(reg))) { 925 + ufshcd_enable_run_stop_reg(hba); 926 + } else { 927 + dev_err(&hba->pdev->dev, 928 + "Host controller not ready to process requests"); 929 + err = -EIO; 930 + goto out; 931 + } 932 + 933 + /* Enable required interrupts */ 934 + hba->int_enable_mask |= (UTP_TRANSFER_REQ_COMPL | 935 + UIC_ERROR | 936 + UTP_TASK_REQ_COMPL | 937 + DEVICE_FATAL_ERROR | 938 + CONTROLLER_FATAL_ERROR | 939 + SYSTEM_BUS_FATAL_ERROR); 940 + ufshcd_int_config(hba, UFSHCD_INT_ENABLE); 941 + 942 + /* Configure interrupt aggregation */ 943 + ufshcd_config_int_aggr(hba, INT_AGGR_CONFIG); 944 + 945 + if (hba->ufshcd_state == UFSHCD_STATE_RESET) 946 + scsi_unblock_requests(hba->host); 947 + 948 + hba->ufshcd_state = UFSHCD_STATE_OPERATIONAL; 949 + scsi_scan_host(hba->host); 950 + out: 951 + return err; 952 + } 953 + 954 + /** 955 + * ufshcd_hba_enable - initialize the controller 956 + * @hba: per adapter instance 957 + * 958 + * The controller resets itself and controller firmware initialization 959 + * sequence kicks off. When controller is ready it will set 960 + * the Host Controller Enable bit to 1. 
961 + * 962 + * Returns 0 on success, non-zero value on failure 963 + */ 964 + static int ufshcd_hba_enable(struct ufs_hba *hba) 965 + { 966 + int retry; 967 + 968 + /* 969 + * msleep of 1 and 5 used in this function might result in msleep(20), 970 + * but it was necessary to send the UFS FPGA to reset mode during 971 + * development and testing of this driver. msleep can be changed to 972 + * mdelay and retry count can be reduced based on the controller. 973 + */ 974 + if (!ufshcd_is_hba_active(hba)) { 975 + 976 + /* change controller state to "reset state" */ 977 + ufshcd_hba_stop(hba); 978 + 979 + /* 980 + * This delay is based on the testing done with UFS host 981 + * controller FPGA. The delay can be changed based on the 982 + * host controller used. 983 + */ 984 + msleep(5); 985 + } 986 + 987 + /* start controller initialization sequence */ 988 + ufshcd_hba_start(hba); 989 + 990 + /* 991 + * To initialize a UFS host controller HCE bit must be set to 1. 992 + * During initialization the HCE bit value changes from 1->0->1. 993 + * When the host controller completes initialization sequence 994 + * it sets the value of HCE bit to 1. The same HCE bit is read back 995 + * to check if the controller has completed initialization sequence. 996 + * So without this delay the value HCE = 1, set in the previous 997 + * instruction might be read back. 998 + * This delay can be changed based on the controller. 999 + */ 1000 + msleep(1); 1001 + 1002 + /* wait for the host controller to complete initialization */ 1003 + retry = 10; 1004 + while (ufshcd_is_hba_active(hba)) { 1005 + if (retry) { 1006 + retry--; 1007 + } else { 1008 + dev_err(&hba->pdev->dev, 1009 + "Controller enable failed\n"); 1010 + return -EIO; 1011 + } 1012 + msleep(5); 1013 + } 1014 + return 0; 1015 + } 1016 + 1017 + /** 1018 + * ufshcd_initialize_hba - start the initialization process 1019 + * @hba: per adapter instance 1020 + * 1021 + * 1. Enable the controller via ufshcd_hba_enable. 1022 + * 2. 
Program the Transfer Request List Address with the starting address of 1023 + * UTRDL. 1024 + * 3. Program the Task Management Request List Address with the starting 1025 + * address of UTMRDL. 1026 + * 1027 + * Returns 0 on success, non-zero value on failure. 1028 + */ 1029 + static int ufshcd_initialize_hba(struct ufs_hba *hba) 1030 + { 1031 + if (ufshcd_hba_enable(hba)) 1032 + return -EIO; 1033 + 1034 + /* Configure UTRL and UTMRL base address registers */ 1035 + writel(lower_32_bits(hba->utrdl_dma_addr), 1036 + (hba->mmio_base + REG_UTP_TRANSFER_REQ_LIST_BASE_L)); 1037 + writel(upper_32_bits(hba->utrdl_dma_addr), 1038 + (hba->mmio_base + REG_UTP_TRANSFER_REQ_LIST_BASE_H)); 1039 + writel(lower_32_bits(hba->utmrdl_dma_addr), 1040 + (hba->mmio_base + REG_UTP_TASK_REQ_LIST_BASE_L)); 1041 + writel(upper_32_bits(hba->utmrdl_dma_addr), 1042 + (hba->mmio_base + REG_UTP_TASK_REQ_LIST_BASE_H)); 1043 + 1044 + /* Initialize unipro link startup procedure */ 1045 + return ufshcd_dme_link_startup(hba); 1046 + } 1047 + 1048 + /** 1049 + * ufshcd_do_reset - reset the host controller 1050 + * @hba: per adapter instance 1051 + * 1052 + * Returns SUCCESS/FAILED 1053 + */ 1054 + static int ufshcd_do_reset(struct ufs_hba *hba) 1055 + { 1056 + struct ufshcd_lrb *lrbp; 1057 + unsigned long flags; 1058 + int tag; 1059 + 1060 + /* block commands from midlayer */ 1061 + scsi_block_requests(hba->host); 1062 + 1063 + spin_lock_irqsave(hba->host->host_lock, flags); 1064 + hba->ufshcd_state = UFSHCD_STATE_RESET; 1065 + 1066 + /* send controller to reset state */ 1067 + ufshcd_hba_stop(hba); 1068 + spin_unlock_irqrestore(hba->host->host_lock, flags); 1069 + 1070 + /* abort outstanding commands */ 1071 + for (tag = 0; tag < hba->nutrs; tag++) { 1072 + if (test_bit(tag, &hba->outstanding_reqs)) { 1073 + lrbp = &hba->lrb[tag]; 1074 + scsi_dma_unmap(lrbp->cmd); 1075 + lrbp->cmd->result = DID_RESET << 16; 1076 + lrbp->cmd->scsi_done(lrbp->cmd); 1077 + lrbp->cmd = NULL; 1078 + } 1079 + } 1080 + 1081 + /* clear outstanding 
request/task bit maps */ 1082 + hba->outstanding_reqs = 0; 1083 + hba->outstanding_tasks = 0; 1084 + 1085 + /* start the initialization process */ 1086 + if (ufshcd_initialize_hba(hba)) { 1087 + dev_err(&hba->pdev->dev, 1088 + "Reset: Controller initialization failed\n"); 1089 + return FAILED; 1090 + } 1091 + return SUCCESS; 1092 + } 1093 + 1094 + /** 1095 + * ufshcd_slave_alloc - handle initial SCSI device configurations 1096 + * @sdev: pointer to SCSI device 1097 + * 1098 + * Returns success 1099 + */ 1100 + static int ufshcd_slave_alloc(struct scsi_device *sdev) 1101 + { 1102 + struct ufs_hba *hba; 1103 + 1104 + hba = shost_priv(sdev->host); 1105 + sdev->tagged_supported = 1; 1106 + 1107 + /* Mode sense(6) is not supported by UFS, so use Mode sense(10) */ 1108 + sdev->use_10_for_ms = 1; 1109 + scsi_set_tag_type(sdev, MSG_SIMPLE_TAG); 1110 + 1111 + /* 1112 + * Inform SCSI Midlayer that the LUN queue depth is same as the 1113 + * controller queue depth. If a LUN queue depth is less than the 1114 + * controller queue depth and if the LUN reports 1115 + * SAM_STAT_TASK_SET_FULL, the LUN queue depth will be adjusted 1116 + * with scsi_adjust_queue_depth. 
1117 + */ 1118 + scsi_activate_tcq(sdev, hba->nutrs); 1119 + return 0; 1120 + } 1121 + 1122 + /** 1123 + * ufshcd_slave_destroy - remove SCSI device configurations 1124 + * @sdev: pointer to SCSI device 1125 + */ 1126 + static void ufshcd_slave_destroy(struct scsi_device *sdev) 1127 + { 1128 + struct ufs_hba *hba; 1129 + 1130 + hba = shost_priv(sdev->host); 1131 + scsi_deactivate_tcq(sdev, hba->nutrs); 1132 + } 1133 + 1134 + /** 1135 + * ufshcd_task_req_compl - handle task management request completion 1136 + * @hba: per adapter instance 1137 + * @index: index of the completed request 1138 + * 1139 + * Returns SUCCESS/FAILED 1140 + */ 1141 + static int ufshcd_task_req_compl(struct ufs_hba *hba, u32 index) 1142 + { 1143 + struct utp_task_req_desc *task_req_descp; 1144 + struct utp_upiu_task_rsp *task_rsp_upiup; 1145 + unsigned long flags; 1146 + int ocs_value; 1147 + int task_result; 1148 + 1149 + spin_lock_irqsave(hba->host->host_lock, flags); 1150 + 1151 + /* Clear completed tasks from outstanding_tasks */ 1152 + __clear_bit(index, &hba->outstanding_tasks); 1153 + 1154 + task_req_descp = hba->utmrdl_base_addr; 1155 + ocs_value = ufshcd_get_tmr_ocs(&task_req_descp[index]); 1156 + 1157 + if (ocs_value == OCS_SUCCESS) { 1158 + task_rsp_upiup = (struct utp_upiu_task_rsp *) 1159 + task_req_descp[index].task_rsp_upiu; 1160 + task_result = be32_to_cpu(task_rsp_upiup->header.dword_1); 1161 + task_result = ((task_result & MASK_TASK_RESPONSE) >> 8); 1162 + 1163 + if (task_result != UPIU_TASK_MANAGEMENT_FUNC_COMPL && 1164 + task_result != UPIU_TASK_MANAGEMENT_FUNC_SUCCEEDED) 1165 + task_result = FAILED; 1166 + } else { 1167 + task_result = FAILED; 1168 + dev_err(&hba->pdev->dev, 1169 + "trc: Invalid ocs = %x\n", ocs_value); 1170 + } 1171 + spin_unlock_irqrestore(hba->host->host_lock, flags); 1172 + return task_result; 1173 + } 1174 + 1175 + /** 1176 + * ufshcd_adjust_lun_qdepth - Update LUN queue depth if device responds with 1177 + * SAM_STAT_TASK_SET_FULL SCSI command 
status. 1178 + * @cmd: pointer to SCSI command 1179 + */ 1180 + static void ufshcd_adjust_lun_qdepth(struct scsi_cmnd *cmd) 1181 + { 1182 + struct ufs_hba *hba; 1183 + int i; 1184 + int lun_qdepth = 0; 1185 + 1186 + hba = shost_priv(cmd->device->host); 1187 + 1188 + /* 1189 + * LUN queue depth can be obtained by counting outstanding commands 1190 + * on the LUN. 1191 + */ 1192 + for (i = 0; i < hba->nutrs; i++) { 1193 + if (test_bit(i, &hba->outstanding_reqs)) { 1194 + 1195 + /* 1196 + * Check if the outstanding command belongs 1197 + * to the LUN which reported SAM_STAT_TASK_SET_FULL. 1198 + */ 1199 + if (cmd->device->lun == hba->lrb[i].lun) 1200 + lun_qdepth++; 1201 + } 1202 + } 1203 + 1204 + /* 1205 + * LUN queue depth will be total outstanding commands, except the 1206 + * command for which the LUN reported SAM_STAT_TASK_SET_FULL. 1207 + */ 1208 + scsi_adjust_queue_depth(cmd->device, MSG_SIMPLE_TAG, lun_qdepth - 1); 1209 + } 1210 + 1211 + /** 1212 + * ufshcd_scsi_cmd_status - Update SCSI command result based on SCSI status 1213 + * @lrbp: pointer to local reference block of completed command 1214 + * @scsi_status: SCSI command status 1215 + * 1216 + * Returns value based on SCSI command status 1217 + */ 1218 + static inline int 1219 + ufshcd_scsi_cmd_status(struct ufshcd_lrb *lrbp, int scsi_status) 1220 + { 1221 + int result = 0; 1222 + 1223 + switch (scsi_status) { 1224 + case SAM_STAT_GOOD: 1225 + result |= DID_OK << 16 | 1226 + COMMAND_COMPLETE << 8 | 1227 + SAM_STAT_GOOD; 1228 + break; 1229 + case SAM_STAT_CHECK_CONDITION: 1230 + result |= DID_OK << 16 | 1231 + COMMAND_COMPLETE << 8 | 1232 + SAM_STAT_CHECK_CONDITION; 1233 + ufshcd_copy_sense_data(lrbp); 1234 + break; 1235 + case SAM_STAT_BUSY: 1236 + result |= SAM_STAT_BUSY; 1237 + break; 1238 + case SAM_STAT_TASK_SET_FULL: 1239 + 1240 + /* 1241 + * If a LUN reports SAM_STAT_TASK_SET_FULL, then the LUN queue 1242 + * depth needs to be adjusted to the exact number of 1243 + * outstanding commands the LUN can 
handle at any given time. 1244 + */ 1245 + ufshcd_adjust_lun_qdepth(lrbp->cmd); 1246 + result |= SAM_STAT_TASK_SET_FULL; 1247 + break; 1248 + case SAM_STAT_TASK_ABORTED: 1249 + result |= SAM_STAT_TASK_ABORTED; 1250 + break; 1251 + default: 1252 + result |= DID_ERROR << 16; 1253 + break; 1254 + } /* end of switch */ 1255 + 1256 + return result; 1257 + } 1258 + 1259 + /** 1260 + * ufshcd_transfer_rsp_status - Get overall status of the response 1261 + * @hba: per adapter instance 1262 + * @lrbp: pointer to local reference block of completed command 1263 + * 1264 + * Returns result of the command to notify SCSI midlayer 1265 + */ 1266 + static inline int 1267 + ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) 1268 + { 1269 + int result = 0; 1270 + int scsi_status; 1271 + int ocs; 1272 + 1273 + /* overall command status of utrd */ 1274 + ocs = ufshcd_get_tr_ocs(lrbp); 1275 + 1276 + switch (ocs) { 1277 + case OCS_SUCCESS: 1278 + 1279 + /* check if the returned transfer response is valid */ 1280 + result = ufshcd_is_valid_req_rsp(lrbp->ucd_rsp_ptr); 1281 + if (result) { 1282 + dev_err(&hba->pdev->dev, 1283 + "Invalid response = %x\n", result); 1284 + break; 1285 + } 1286 + 1287 + /* 1288 + * get the response UPIU result to extract 1289 + * the SCSI command status 1290 + */ 1291 + result = ufshcd_get_rsp_upiu_result(lrbp->ucd_rsp_ptr); 1292 + 1293 + /* 1294 + * get the result based on SCSI status response 1295 + * to notify the SCSI midlayer of the command status 1296 + */ 1297 + scsi_status = result & MASK_SCSI_STATUS; 1298 + result = ufshcd_scsi_cmd_status(lrbp, scsi_status); 1299 + break; 1300 + case OCS_ABORTED: 1301 + result |= DID_ABORT << 16; 1302 + break; 1303 + case OCS_INVALID_CMD_TABLE_ATTR: 1304 + case OCS_INVALID_PRDT_ATTR: 1305 + case OCS_MISMATCH_DATA_BUF_SIZE: 1306 + case OCS_MISMATCH_RESP_UPIU_SIZE: 1307 + case OCS_PEER_COMM_FAILURE: 1308 + case OCS_FATAL_ERROR: 1309 + default: 1310 + result |= DID_ERROR << 16; 1311 + 
dev_err(&hba->pdev->dev, 1312 + "OCS error from controller = %x\n", ocs); 1313 + break; 1314 + } /* end of switch */ 1315 + 1316 + return result; 1317 + } 1318 + 1319 + /** 1320 + * ufshcd_transfer_req_compl - handle SCSI and query command completion 1321 + * @hba: per adapter instance 1322 + */ 1323 + static void ufshcd_transfer_req_compl(struct ufs_hba *hba) 1324 + { 1325 + struct ufshcd_lrb *lrb; 1326 + unsigned long completed_reqs; 1327 + u32 tr_doorbell; 1328 + int result; 1329 + int index; 1330 + 1331 + lrb = hba->lrb; 1332 + tr_doorbell = 1333 + readl(hba->mmio_base + REG_UTP_TRANSFER_REQ_DOOR_BELL); 1334 + completed_reqs = tr_doorbell ^ hba->outstanding_reqs; 1335 + 1336 + for (index = 0; index < hba->nutrs; index++) { 1337 + if (test_bit(index, &completed_reqs)) { 1338 + 1339 + result = ufshcd_transfer_rsp_status(hba, &lrb[index]); 1340 + 1341 + if (lrb[index].cmd) { 1342 + scsi_dma_unmap(lrb[index].cmd); 1343 + lrb[index].cmd->result = result; 1344 + lrb[index].cmd->scsi_done(lrb[index].cmd); 1345 + 1346 + /* Mark completed command as NULL in LRB */ 1347 + lrb[index].cmd = NULL; 1348 + } 1349 + } /* end of if */ 1350 + } /* end of for */ 1351 + 1352 + /* clear corresponding bits of completed commands */ 1353 + hba->outstanding_reqs ^= completed_reqs; 1354 + 1355 + /* Reset interrupt aggregation counters */ 1356 + ufshcd_config_int_aggr(hba, INT_AGGR_RESET); 1357 + } 1358 + 1359 + /** 1360 + * ufshcd_uic_cc_handler - handle UIC command completion 1361 + * @work: pointer to a work queue structure 1362 + * 1363 + * Completes DME link startup by making the host controller operational. 1364 + */ 1365 + static void ufshcd_uic_cc_handler(struct work_struct *work) 1366 + { 1367 + struct ufs_hba *hba; 1368 + 1369 + hba = container_of(work, struct ufs_hba, uic_workq); 1370 + 1371 + if ((hba->active_uic_cmd.command == UIC_CMD_DME_LINK_STARTUP) && 1372 + !(ufshcd_get_uic_cmd_result(hba))) { 1373 + 1374 + if (ufshcd_make_hba_operational(hba)) 1375 + dev_err(&hba->pdev->dev, 1376 + "cc: 
hba not operational\n"); 1377 + return; 1378 + } 1379 + } 1380 + 1381 + /** 1382 + * ufshcd_fatal_err_handler - handle fatal errors 1383 + * @work: pointer to a work queue structure 1384 + */ 1385 + static void ufshcd_fatal_err_handler(struct work_struct *work) 1386 + { 1387 + struct ufs_hba *hba; 1388 + hba = container_of(work, struct ufs_hba, feh_workq); 1389 + 1390 + /* check if reset is already in progress */ 1391 + if (hba->ufshcd_state != UFSHCD_STATE_RESET) 1392 + ufshcd_do_reset(hba); 1393 + } 1394 + 1395 + /** 1396 + * ufshcd_err_handler - Check for fatal errors 1397 + * @hba: per adapter instance 1398 + */ 1399 + static void ufshcd_err_handler(struct ufs_hba *hba) 1400 + { 1401 + u32 reg; 1402 + 1403 + if (hba->errors & INT_FATAL_ERRORS) 1404 + goto fatal_eh; 1405 + 1406 + if (hba->errors & UIC_ERROR) { 1407 + 1408 + reg = readl(hba->mmio_base + 1409 + REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER); 1410 + if (reg & UIC_DATA_LINK_LAYER_ERROR_PA_INIT) 1411 + goto fatal_eh; 1412 + } 1413 + return; 1414 + fatal_eh: 1415 + hba->ufshcd_state = UFSHCD_STATE_ERROR; 1416 + schedule_work(&hba->feh_workq); 1417 + } 1418 + 1419 + /** 1420 + * ufshcd_tmc_handler - handle task management function completion 1421 + * @hba: per adapter instance 1422 + */ 1423 + static void ufshcd_tmc_handler(struct ufs_hba *hba) 1424 + { 1425 + u32 tm_doorbell; 1426 + 1427 + tm_doorbell = readl(hba->mmio_base + REG_UTP_TASK_REQ_DOOR_BELL); 1428 + hba->tm_condition = tm_doorbell ^ hba->outstanding_tasks; 1429 + wake_up_interruptible(&hba->ufshcd_tm_wait_queue); 1430 + } 1431 + 1432 + /** 1433 + * ufshcd_sl_intr - Interrupt service routine 1434 + * @hba: per adapter instance 1435 + * @intr_status: contains interrupts generated by the controller 1436 + */ 1437 + static void ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status) 1438 + { 1439 + hba->errors = UFSHCD_ERROR_MASK & intr_status; 1440 + if (hba->errors) 1441 + ufshcd_err_handler(hba); 1442 + 1443 + if (intr_status & UIC_COMMAND_COMPL) 
1444 + schedule_work(&hba->uic_workq); 1445 + 1446 + if (intr_status & UTP_TASK_REQ_COMPL) 1447 + ufshcd_tmc_handler(hba); 1448 + 1449 + if (intr_status & UTP_TRANSFER_REQ_COMPL) 1450 + ufshcd_transfer_req_compl(hba); 1451 + } 1452 + 1453 + /** 1454 + * ufshcd_intr - Main interrupt service routine 1455 + * @irq: irq number 1456 + * @__hba: pointer to adapter instance 1457 + * 1458 + * Returns IRQ_HANDLED - If interrupt is valid 1459 + * IRQ_NONE - If invalid interrupt 1460 + */ 1461 + static irqreturn_t ufshcd_intr(int irq, void *__hba) 1462 + { 1463 + u32 intr_status; 1464 + irqreturn_t retval = IRQ_NONE; 1465 + struct ufs_hba *hba = __hba; 1466 + 1467 + spin_lock(hba->host->host_lock); 1468 + intr_status = readl(hba->mmio_base + REG_INTERRUPT_STATUS); 1469 + 1470 + if (intr_status) { 1471 + ufshcd_sl_intr(hba, intr_status); 1472 + 1473 + /* If UFSHCI 1.0 then clear interrupt status register */ 1474 + if (hba->ufs_version == UFSHCI_VERSION_10) 1475 + writel(intr_status, 1476 + (hba->mmio_base + REG_INTERRUPT_STATUS)); 1477 + retval = IRQ_HANDLED; 1478 + } 1479 + spin_unlock(hba->host->host_lock); 1480 + return retval; 1481 + } 1482 + 1483 + /** 1484 + * ufshcd_issue_tm_cmd - issues task management commands to controller 1485 + * @hba: per adapter instance 1486 + * @lrbp: pointer to local reference block 1487 + * @tm_function: task management function opcode 1488 + * Returns SUCCESS/FAILED 1489 + */ 1490 + static int 1491 + ufshcd_issue_tm_cmd(struct ufs_hba *hba, 1492 + struct ufshcd_lrb *lrbp, 1493 + u8 tm_function) 1494 + { 1495 + struct utp_task_req_desc *task_req_descp; 1496 + struct utp_upiu_task_req *task_req_upiup; 1497 + struct Scsi_Host *host; 1498 + unsigned long flags; 1499 + int free_slot = 0; 1500 + int err; 1501 + 1502 + host = hba->host; 1503 + 1504 + spin_lock_irqsave(host->host_lock, flags); 1505 + 1506 + /* If task management queue is full */ 1507 + free_slot = ufshcd_get_tm_free_slot(hba); 1508 + if (free_slot >= hba->nutmrs) { 1509 + spin_unlock_irqrestore(host->host_lock, flags); 1510 + 
dev_err(&hba->pdev->dev, "Task management queue full\n"); 1511 + err = FAILED; 1512 + goto out; 1513 + } 1514 + 1515 + task_req_descp = hba->utmrdl_base_addr; 1516 + task_req_descp += free_slot; 1517 + 1518 + /* Configure task request descriptor */ 1519 + task_req_descp->header.dword_0 = cpu_to_le32(UTP_REQ_DESC_INT_CMD); 1520 + task_req_descp->header.dword_2 = 1521 + cpu_to_le32(OCS_INVALID_COMMAND_STATUS); 1522 + 1523 + /* Configure task request UPIU */ 1524 + task_req_upiup = 1525 + (struct utp_upiu_task_req *) task_req_descp->task_req_upiu; 1526 + task_req_upiup->header.dword_0 = 1527 + cpu_to_be32(UPIU_HEADER_DWORD(UPIU_TRANSACTION_TASK_REQ, 0, 1528 + lrbp->lun, lrbp->task_tag)); 1529 + task_req_upiup->header.dword_1 = 1530 + cpu_to_be32(UPIU_HEADER_DWORD(0, tm_function, 0, 0)); 1531 + 1532 + task_req_upiup->input_param1 = lrbp->lun; 1533 + task_req_upiup->input_param1 = 1534 + cpu_to_be32(task_req_upiup->input_param1); 1535 + task_req_upiup->input_param2 = lrbp->task_tag; 1536 + task_req_upiup->input_param2 = 1537 + cpu_to_be32(task_req_upiup->input_param2); 1538 + 1539 + /* send command to the controller */ 1540 + __set_bit(free_slot, &hba->outstanding_tasks); 1541 + writel((1 << free_slot), 1542 + (hba->mmio_base + REG_UTP_TASK_REQ_DOOR_BELL)); 1543 + 1544 + spin_unlock_irqrestore(host->host_lock, flags); 1545 + 1546 + /* wait until the task management command is completed */ 1547 + err = 1548 + wait_event_interruptible_timeout(hba->ufshcd_tm_wait_queue, 1549 + (test_bit(free_slot, 1550 + &hba->tm_condition) != 0), 1551 + 60 * HZ); 1552 + if (!err) { 1553 + dev_err(&hba->pdev->dev, 1554 + "Task management command timed-out\n"); 1555 + err = FAILED; 1556 + goto out; 1557 + } 1558 + clear_bit(free_slot, &hba->tm_condition); 1559 + return ufshcd_task_req_compl(hba, free_slot); 1560 + out: 1561 + return err; 1562 + } 1563 + 1564 + /** 1565 + * ufshcd_device_reset - reset device and abort all the pending commands 1566 + * @cmd: SCSI command pointer 1567 + * 1568 
+ * Returns SUCCESS/FAILED 1569 + */ 1570 + static int ufshcd_device_reset(struct scsi_cmnd *cmd) 1571 + { 1572 + struct Scsi_Host *host; 1573 + struct ufs_hba *hba; 1574 + unsigned int tag; 1575 + u32 pos; 1576 + int err; 1577 + 1578 + host = cmd->device->host; 1579 + hba = shost_priv(host); 1580 + tag = cmd->request->tag; 1581 + 1582 + err = ufshcd_issue_tm_cmd(hba, &hba->lrb[tag], UFS_LOGICAL_RESET); 1583 + if (err) 1584 + goto out; 1585 + 1586 + for (pos = 0; pos < hba->nutrs; pos++) { 1587 + if (test_bit(pos, &hba->outstanding_reqs) && 1588 + (hba->lrb[tag].lun == hba->lrb[pos].lun)) { 1589 + 1590 + /* clear the respective UTRLCLR register bit */ 1591 + ufshcd_utrl_clear(hba, pos); 1592 + 1593 + clear_bit(pos, &hba->outstanding_reqs); 1594 + 1595 + if (hba->lrb[pos].cmd) { 1596 + scsi_dma_unmap(hba->lrb[pos].cmd); 1597 + hba->lrb[pos].cmd->result = 1598 + DID_ABORT << 16; 1599 + hba->lrb[pos].cmd->scsi_done(hba->lrb[pos].cmd); 1600 + hba->lrb[pos].cmd = NULL; 1601 + } 1602 + } 1603 + } /* end of for */ 1604 + out: 1605 + return err; 1606 + } 1607 + 1608 + /** 1609 + * ufshcd_host_reset - Main reset function registered with scsi layer 1610 + * @cmd: SCSI command pointer 1611 + * 1612 + * Returns SUCCESS/FAILED 1613 + */ 1614 + static int ufshcd_host_reset(struct scsi_cmnd *cmd) 1615 + { 1616 + struct ufs_hba *hba; 1617 + 1618 + hba = shost_priv(cmd->device->host); 1619 + 1620 + if (hba->ufshcd_state == UFSHCD_STATE_RESET) 1621 + return SUCCESS; 1622 + 1623 + return (ufshcd_do_reset(hba) == SUCCESS) ? 
SUCCESS : FAILED; 1624 + } 1625 + 1626 + /** 1627 + * ufshcd_abort - abort a specific command 1628 + * @cmd: SCSI command pointer 1629 + * 1630 + * Returns SUCCESS/FAILED 1631 + */ 1632 + static int ufshcd_abort(struct scsi_cmnd *cmd) 1633 + { 1634 + struct Scsi_Host *host; 1635 + struct ufs_hba *hba; 1636 + unsigned long flags; 1637 + unsigned int tag; 1638 + int err; 1639 + 1640 + host = cmd->device->host; 1641 + hba = shost_priv(host); 1642 + tag = cmd->request->tag; 1643 + 1644 + spin_lock_irqsave(host->host_lock, flags); 1645 + 1646 + /* check if command is still pending */ 1647 + if (!(test_bit(tag, &hba->outstanding_reqs))) { 1648 + err = FAILED; 1649 + spin_unlock_irqrestore(host->host_lock, flags); 1650 + goto out; 1651 + } 1652 + spin_unlock_irqrestore(host->host_lock, flags); 1653 + 1654 + err = ufshcd_issue_tm_cmd(hba, &hba->lrb[tag], UFS_ABORT_TASK); 1655 + if (err) 1656 + goto out; 1657 + 1658 + scsi_dma_unmap(cmd); 1659 + 1660 + spin_lock_irqsave(host->host_lock, flags); 1661 + 1662 + /* clear the respective UTRLCLR register bit */ 1663 + ufshcd_utrl_clear(hba, tag); 1664 + 1665 + __clear_bit(tag, &hba->outstanding_reqs); 1666 + hba->lrb[tag].cmd = NULL; 1667 + spin_unlock_irqrestore(host->host_lock, flags); 1668 + out: 1669 + return err; 1670 + } 1671 + 1672 + static struct scsi_host_template ufshcd_driver_template = { 1673 + .module = THIS_MODULE, 1674 + .name = UFSHCD, 1675 + .proc_name = UFSHCD, 1676 + .queuecommand = ufshcd_queuecommand, 1677 + .slave_alloc = ufshcd_slave_alloc, 1678 + .slave_destroy = ufshcd_slave_destroy, 1679 + .eh_abort_handler = ufshcd_abort, 1680 + .eh_device_reset_handler = ufshcd_device_reset, 1681 + .eh_host_reset_handler = ufshcd_host_reset, 1682 + .this_id = -1, 1683 + .sg_tablesize = SG_ALL, 1684 + .cmd_per_lun = UFSHCD_CMD_PER_LUN, 1685 + .can_queue = UFSHCD_CAN_QUEUE, 1686 + }; 1687 + 1688 + /** 1689 + * ufshcd_shutdown - main function to put the controller in reset state 1690 + * @pdev: pointer to PCI device 
handle 1691 + */ 1692 + static void ufshcd_shutdown(struct pci_dev *pdev) 1693 + { 1694 + ufshcd_hba_stop((struct ufs_hba *)pci_get_drvdata(pdev)); 1695 + } 1696 + 1697 + #ifdef CONFIG_PM 1698 + /** 1699 + * ufshcd_suspend - suspend power management function 1700 + * @pdev: pointer to PCI device handle 1701 + * @state: power state 1702 + * 1703 + * Returns -ENOSYS 1704 + */ 1705 + static int ufshcd_suspend(struct pci_dev *pdev, pm_message_t state) 1706 + { 1707 + /* 1708 + * TODO: 1709 + * 1. Block SCSI requests from SCSI midlayer 1710 + * 2. Change the internal driver state to non operational 1711 + * 3. Set UTRLRSR and UTMRLRSR bits to zero 1712 + * 4. Wait until outstanding commands are completed 1713 + * 5. Set HCE to zero to send the UFS host controller to reset state 1714 + */ 1715 + 1716 + return -ENOSYS; 1717 + } 1718 + 1719 + /** 1720 + * ufshcd_resume - resume power management function 1721 + * @pdev: pointer to PCI device handle 1722 + * 1723 + * Returns -ENOSYS 1724 + */ 1725 + static int ufshcd_resume(struct pci_dev *pdev) 1726 + { 1727 + /* 1728 + * TODO: 1729 + * 1. Set HCE to 1, to start the UFS host controller 1730 + * initialization process 1731 + * 2. Set UTRLRSR and UTMRLRSR bits to 1 1732 + * 3. Change the internal driver state to operational 1733 + * 4. 
Unblock SCSI requests from SCSI midlayer 1734 + */ 1735 + 1736 + return -ENOSYS; 1737 + } 1738 + #endif /* CONFIG_PM */ 1739 + 1740 + /** 1741 + * ufshcd_hba_free - free allocated memory for 1742 + * host memory space data structures 1743 + * @hba: per adapter instance 1744 + */ 1745 + static void ufshcd_hba_free(struct ufs_hba *hba) 1746 + { 1747 + iounmap(hba->mmio_base); 1748 + ufshcd_free_hba_memory(hba); 1749 + pci_release_regions(hba->pdev); 1750 + } 1751 + 1752 + /** 1753 + * ufshcd_remove - de-allocate PCI/SCSI host and host memory space 1754 + * data structure memory 1755 + * @pdev: pointer to PCI handle 1756 + */ 1757 + static void ufshcd_remove(struct pci_dev *pdev) 1758 + { 1759 + struct ufs_hba *hba = pci_get_drvdata(pdev); 1760 + 1761 + /* disable interrupts */ 1762 + ufshcd_int_config(hba, UFSHCD_INT_DISABLE); 1763 + free_irq(pdev->irq, hba); 1764 + 1765 + ufshcd_hba_stop(hba); 1766 + ufshcd_hba_free(hba); 1767 + 1768 + scsi_remove_host(hba->host); 1769 + scsi_host_put(hba->host); 1770 + pci_set_drvdata(pdev, NULL); 1771 + pci_clear_master(pdev); 1772 + pci_disable_device(pdev); 1773 + } 1774 + 1775 + /** 1776 + * ufshcd_set_dma_mask - Set dma mask based on the controller 1777 + * addressing capability 1778 + * @hba: per adapter instance 1779 + * 1780 + * Returns 0 for success, non-zero for failure 1781 + */ 1782 + static int ufshcd_set_dma_mask(struct ufs_hba *hba) 1783 + { 1784 + int err; 1785 + u64 dma_mask; 1786 + 1787 + /* 1788 + * If controller supports 64 bit addressing mode, then set the DMA 1789 + * mask to 64-bit, else set the DMA mask to 32-bit 1790 + */ 1791 + if (hba->capabilities & MASK_64_ADDRESSING_SUPPORT) 1792 + dma_mask = DMA_BIT_MASK(64); 1793 + else 1794 + dma_mask = DMA_BIT_MASK(32); 1795 + 1796 + err = pci_set_dma_mask(hba->pdev, dma_mask); 1797 + if (err) 1798 + return err; 1799 + 1800 + err = pci_set_consistent_dma_mask(hba->pdev, dma_mask); 1801 + 1802 + return err; 1803 + } 1804 + 1805 + /** 1806 + * ufshcd_probe - probe 
routine of the driver 1807 + * @pdev: pointer to PCI device handle 1808 + * @id: PCI device id 1809 + * 1810 + * Returns 0 on success, non-zero value on failure 1811 + */ 1812 + static int __devinit 1813 + ufshcd_probe(struct pci_dev *pdev, const struct pci_device_id *id) 1814 + { 1815 + struct Scsi_Host *host; 1816 + struct ufs_hba *hba; 1817 + int err; 1818 + 1819 + err = pci_enable_device(pdev); 1820 + if (err) { 1821 + dev_err(&pdev->dev, "pci_enable_device failed\n"); 1822 + goto out_error; 1823 + } 1824 + 1825 + pci_set_master(pdev); 1826 + 1827 + host = scsi_host_alloc(&ufshcd_driver_template, 1828 + sizeof(struct ufs_hba)); 1829 + if (!host) { 1830 + dev_err(&pdev->dev, "scsi_host_alloc failed\n"); 1831 + err = -ENOMEM; 1832 + goto out_disable; 1833 + } 1834 + hba = shost_priv(host); 1835 + 1836 + err = pci_request_regions(pdev, UFSHCD); 1837 + if (err < 0) { 1838 + dev_err(&pdev->dev, "request regions failed\n"); 1839 + goto out_disable; 1840 + } 1841 + 1842 + hba->mmio_base = pci_ioremap_bar(pdev, 0); 1843 + if (!hba->mmio_base) { 1844 + dev_err(&pdev->dev, "memory map failed\n"); 1845 + err = -ENOMEM; 1846 + goto out_release_regions; 1847 + } 1848 + 1849 + hba->host = host; 1850 + hba->pdev = pdev; 1851 + 1852 + /* Read capabilities registers */ 1853 + ufshcd_hba_capabilities(hba); 1854 + 1855 + /* Get UFS version supported by the controller */ 1856 + hba->ufs_version = ufshcd_get_ufs_version(hba); 1857 + 1858 + err = ufshcd_set_dma_mask(hba); 1859 + if (err) { 1860 + dev_err(&pdev->dev, "set dma mask failed\n"); 1861 + goto out_iounmap; 1862 + } 1863 + 1864 + /* Allocate memory for host memory space */ 1865 + err = ufshcd_memory_alloc(hba); 1866 + if (err) { 1867 + dev_err(&pdev->dev, "Memory allocation failed\n"); 1868 + goto out_iounmap; 1869 + } 1870 + 1871 + /* Configure LRB */ 1872 + ufshcd_host_memory_configure(hba); 1873 + 1874 + host->can_queue = hba->nutrs; 1875 + host->cmd_per_lun = hba->nutrs; 1876 + host->max_id = UFSHCD_MAX_ID; 1877 + 
host->max_lun = UFSHCD_MAX_LUNS; 1878 + host->max_channel = UFSHCD_MAX_CHANNEL; 1879 + host->unique_id = host->host_no; 1880 + host->max_cmd_len = MAX_CDB_SIZE; 1881 + 1882 + /* Initialize wait queue for task management */ 1883 + init_waitqueue_head(&hba->ufshcd_tm_wait_queue); 1884 + 1885 + /* Initialize work queues */ 1886 + INIT_WORK(&hba->uic_workq, ufshcd_uic_cc_handler); 1887 + INIT_WORK(&hba->feh_workq, ufshcd_fatal_err_handler); 1888 + 1889 + /* IRQ registration */ 1890 + err = request_irq(pdev->irq, ufshcd_intr, IRQF_SHARED, UFSHCD, hba); 1891 + if (err) { 1892 + dev_err(&pdev->dev, "request irq failed\n"); 1893 + goto out_lrb_free; 1894 + } 1895 + 1896 + /* Enable SCSI tag mapping */ 1897 + err = scsi_init_shared_tag_map(host, host->can_queue); 1898 + if (err) { 1899 + dev_err(&pdev->dev, "init shared queue failed\n"); 1900 + goto out_free_irq; 1901 + } 1902 + 1903 + pci_set_drvdata(pdev, hba); 1904 + 1905 + err = scsi_add_host(host, &pdev->dev); 1906 + if (err) { 1907 + dev_err(&pdev->dev, "scsi_add_host failed\n"); 1908 + goto out_free_irq; 1909 + } 1910 + 1911 + /* Initialization routine */ 1912 + err = ufshcd_initialize_hba(hba); 1913 + if (err) { 1914 + dev_err(&pdev->dev, "Initialization failed\n"); 1915 + goto out_free_irq; 1916 + } 1917 + 1918 + return 0; 1919 + 1920 + out_free_irq: 1921 + free_irq(pdev->irq, hba); 1922 + out_lrb_free: 1923 + ufshcd_free_hba_memory(hba); 1924 + out_iounmap: 1925 + iounmap(hba->mmio_base); 1926 + out_release_regions: 1927 + pci_release_regions(pdev); 1928 + out_disable: 1929 + scsi_host_put(host); 1930 + pci_clear_master(pdev); 1931 + pci_disable_device(pdev); 1932 + out_error: 1933 + return err; 1934 + } 1935 + 1936 + static DEFINE_PCI_DEVICE_TABLE(ufshcd_pci_tbl) = { 1937 + { PCI_VENDOR_ID_SAMSUNG, 0xC00C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, 1938 + { } /* terminate list */ 1939 + }; 1940 + 1941 + MODULE_DEVICE_TABLE(pci, ufshcd_pci_tbl); 1942 + 1943 + static struct pci_driver ufshcd_pci_driver = { 1944 + .name = 
UFSHCD, 1945 + .id_table = ufshcd_pci_tbl, 1946 + .probe = ufshcd_probe, 1947 + .remove = __devexit_p(ufshcd_remove), 1948 + .shutdown = ufshcd_shutdown, 1949 + #ifdef CONFIG_PM 1950 + .suspend = ufshcd_suspend, 1951 + .resume = ufshcd_resume, 1952 + #endif 1953 + }; 1954 + 1955 + /** 1956 + * ufshcd_init - Driver registration routine 1957 + */ 1958 + static int __init ufshcd_init(void) 1959 + { 1960 + return pci_register_driver(&ufshcd_pci_driver); 1961 + } 1962 + module_init(ufshcd_init); 1963 + 1964 + /** 1965 + * ufshcd_exit - Driver exit clean-up routine 1966 + */ 1967 + static void __exit ufshcd_exit(void) 1968 + { 1969 + pci_unregister_driver(&ufshcd_pci_driver); 1970 + } 1971 + module_exit(ufshcd_exit); 1972 + 1973 + 1974 + MODULE_AUTHOR("Santosh Yaragnavi <santosh.sy@samsung.com>, " 1975 + "Vinayak Holikatti <h.vinayak@samsung.com>"); 1976 + MODULE_DESCRIPTION("Generic UFS host controller driver"); 1977 + MODULE_LICENSE("GPL"); 1978 + MODULE_VERSION(UFSHCD_DRIVER_VERSION);
+376
drivers/scsi/ufs/ufshci.h
··· 1 + /* 2 + * Universal Flash Storage Host controller driver 3 + * 4 + * This code is based on drivers/scsi/ufs/ufshci.h 5 + * Copyright (C) 2011-2012 Samsung India Software Operations 6 + * 7 + * Santosh Yaraganavi <santosh.sy@samsung.com> 8 + * Vinayak Holikatti <h.vinayak@samsung.com> 9 + * 10 + * This program is free software; you can redistribute it and/or 11 + * modify it under the terms of the GNU General Public License 12 + * as published by the Free Software Foundation; either version 2 13 + * of the License, or (at your option) any later version. 14 + * 15 + * This program is distributed in the hope that it will be useful, 16 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 + * GNU General Public License for more details. 19 + * 20 + * NO WARRANTY 21 + * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR 22 + * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT 23 + * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, 24 + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is 25 + * solely responsible for determining the appropriateness of using and 26 + * distributing the Program and assumes all risks associated with its 27 + * exercise of rights under this Agreement, including but not limited to 28 + * the risks and costs of program errors, damage to or loss of data, 29 + * programs or equipment, and unavailability or interruption of operations. 
30 + 31 + * DISCLAIMER OF LIABILITY 32 + * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY 33 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 34 + * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND 35 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR 36 + * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE 37 + * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED 38 + * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES 39 + 40 + * You should have received a copy of the GNU General Public License 41 + * along with this program; if not, write to the Free Software 42 + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, 43 + * USA. 44 + */ 45 + 46 + #ifndef _UFSHCI_H 47 + #define _UFSHCI_H 48 + 49 + enum { 50 + TASK_REQ_UPIU_SIZE_DWORDS = 8, 51 + TASK_RSP_UPIU_SIZE_DWORDS = 8, 52 + ALIGNED_UPIU_SIZE = 128, 53 + }; 54 + 55 + /* UFSHCI Registers */ 56 + enum { 57 + REG_CONTROLLER_CAPABILITIES = 0x00, 58 + REG_UFS_VERSION = 0x08, 59 + REG_CONTROLLER_DEV_ID = 0x10, 60 + REG_CONTROLLER_PROD_ID = 0x14, 61 + REG_INTERRUPT_STATUS = 0x20, 62 + REG_INTERRUPT_ENABLE = 0x24, 63 + REG_CONTROLLER_STATUS = 0x30, 64 + REG_CONTROLLER_ENABLE = 0x34, 65 + REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER = 0x38, 66 + REG_UIC_ERROR_CODE_DATA_LINK_LAYER = 0x3C, 67 + REG_UIC_ERROR_CODE_NETWORK_LAYER = 0x40, 68 + REG_UIC_ERROR_CODE_TRANSPORT_LAYER = 0x44, 69 + REG_UIC_ERROR_CODE_DME = 0x48, 70 + REG_UTP_TRANSFER_REQ_INT_AGG_CONTROL = 0x4C, 71 + REG_UTP_TRANSFER_REQ_LIST_BASE_L = 0x50, 72 + REG_UTP_TRANSFER_REQ_LIST_BASE_H = 0x54, 73 + REG_UTP_TRANSFER_REQ_DOOR_BELL = 0x58, 74 + REG_UTP_TRANSFER_REQ_LIST_CLEAR = 0x5C, 75 + REG_UTP_TRANSFER_REQ_LIST_RUN_STOP = 0x60, 76 + REG_UTP_TASK_REQ_LIST_BASE_L = 0x70, 77 + REG_UTP_TASK_REQ_LIST_BASE_H = 0x74, 78 + REG_UTP_TASK_REQ_DOOR_BELL = 0x78, 79 + REG_UTP_TASK_REQ_LIST_CLEAR = 0x7C, 80 + 
REG_UTP_TASK_REQ_LIST_RUN_STOP = 0x80, 81 + REG_UIC_COMMAND = 0x90, 82 + REG_UIC_COMMAND_ARG_1 = 0x94, 83 + REG_UIC_COMMAND_ARG_2 = 0x98, 84 + REG_UIC_COMMAND_ARG_3 = 0x9C, 85 + }; 86 + 87 + /* Controller capability masks */ 88 + enum { 89 + MASK_TRANSFER_REQUESTS_SLOTS = 0x0000001F, 90 + MASK_TASK_MANAGEMENT_REQUEST_SLOTS = 0x00070000, 91 + MASK_64_ADDRESSING_SUPPORT = 0x01000000, 92 + MASK_OUT_OF_ORDER_DATA_DELIVERY_SUPPORT = 0x02000000, 93 + MASK_UIC_DME_TEST_MODE_SUPPORT = 0x04000000, 94 + }; 95 + 96 + /* UFS Version 08h */ 97 + #define MINOR_VERSION_NUM_MASK UFS_MASK(0xFFFF, 0) 98 + #define MAJOR_VERSION_NUM_MASK UFS_MASK(0xFFFF, 16) 99 + 100 + /* Controller UFSHCI version */ 101 + enum { 102 + UFSHCI_VERSION_10 = 0x00010000, 103 + UFSHCI_VERSION_11 = 0x00010100, 104 + }; 105 + 106 + /* 107 + * HCDDID - Host Controller Identification Descriptor 108 + * - Device ID and Device Class 10h 109 + */ 110 + #define DEVICE_CLASS UFS_MASK(0xFFFF, 0) 111 + #define DEVICE_ID UFS_MASK(0xFF, 24) 112 + 113 + /* 114 + * HCPMID - Host Controller Identification Descriptor 115 + * - Product/Manufacturer ID 14h 116 + */ 117 + #define MANUFACTURE_ID_MASK UFS_MASK(0xFFFF, 0) 118 + #define PRODUCT_ID_MASK UFS_MASK(0xFFFF, 16) 119 + 120 + #define UFS_BIT(x) (1L << (x)) 121 + 122 + #define UTP_TRANSFER_REQ_COMPL UFS_BIT(0) 123 + #define UIC_DME_END_PT_RESET UFS_BIT(1) 124 + #define UIC_ERROR UFS_BIT(2) 125 + #define UIC_TEST_MODE UFS_BIT(3) 126 + #define UIC_POWER_MODE UFS_BIT(4) 127 + #define UIC_HIBERNATE_EXIT UFS_BIT(5) 128 + #define UIC_HIBERNATE_ENTER UFS_BIT(6) 129 + #define UIC_LINK_LOST UFS_BIT(7) 130 + #define UIC_LINK_STARTUP UFS_BIT(8) 131 + #define UTP_TASK_REQ_COMPL UFS_BIT(9) 132 + #define UIC_COMMAND_COMPL UFS_BIT(10) 133 + #define DEVICE_FATAL_ERROR UFS_BIT(11) 134 + #define CONTROLLER_FATAL_ERROR UFS_BIT(16) 135 + #define SYSTEM_BUS_FATAL_ERROR UFS_BIT(17) 136 + 137 + #define UFSHCD_ERROR_MASK (UIC_ERROR |\ 138 + DEVICE_FATAL_ERROR |\ 139 + CONTROLLER_FATAL_ERROR |\ 
140 + SYSTEM_BUS_FATAL_ERROR) 141 + 142 + #define INT_FATAL_ERRORS (DEVICE_FATAL_ERROR |\ 143 + CONTROLLER_FATAL_ERROR |\ 144 + SYSTEM_BUS_FATAL_ERROR) 145 + 146 + /* HCS - Host Controller Status 30h */ 147 + #define DEVICE_PRESENT UFS_BIT(0) 148 + #define UTP_TRANSFER_REQ_LIST_READY UFS_BIT(1) 149 + #define UTP_TASK_REQ_LIST_READY UFS_BIT(2) 150 + #define UIC_COMMAND_READY UFS_BIT(3) 151 + #define HOST_ERROR_INDICATOR UFS_BIT(4) 152 + #define DEVICE_ERROR_INDICATOR UFS_BIT(5) 153 + #define UIC_POWER_MODE_CHANGE_REQ_STATUS_MASK UFS_MASK(0x7, 8) 154 + 155 + /* HCE - Host Controller Enable 34h */ 156 + #define CONTROLLER_ENABLE UFS_BIT(0) 157 + #define CONTROLLER_DISABLE 0x0 158 + 159 + /* UECPA - Host UIC Error Code PHY Adapter Layer 38h */ 160 + #define UIC_PHY_ADAPTER_LAYER_ERROR UFS_BIT(31) 161 + #define UIC_PHY_ADAPTER_LAYER_ERROR_CODE_MASK 0x1F 162 + 163 + /* UECDL - Host UIC Error Code Data Link Layer 3Ch */ 164 + #define UIC_DATA_LINK_LAYER_ERROR UFS_BIT(31) 165 + #define UIC_DATA_LINK_LAYER_ERROR_CODE_MASK 0x7FFF 166 + #define UIC_DATA_LINK_LAYER_ERROR_PA_INIT 0x2000 167 + 168 + /* UECN - Host UIC Error Code Network Layer 40h */ 169 + #define UIC_NETWORK_LAYER_ERROR UFS_BIT(31) 170 + #define UIC_NETWORK_LAYER_ERROR_CODE_MASK 0x7 171 + 172 + /* UECT - Host UIC Error Code Transport Layer 44h */ 173 + #define UIC_TRANSPORT_LAYER_ERROR UFS_BIT(31) 174 + #define UIC_TRANSPORT_LAYER_ERROR_CODE_MASK 0x7F 175 + 176 + /* UECDME - Host UIC Error Code DME 48h */ 177 + #define UIC_DME_ERROR UFS_BIT(31) 178 + #define UIC_DME_ERROR_CODE_MASK 0x1 179 + 180 + #define INT_AGGR_TIMEOUT_VAL_MASK 0xFF 181 + #define INT_AGGR_COUNTER_THRESHOLD_MASK UFS_MASK(0x1F, 8) 182 + #define INT_AGGR_COUNTER_AND_TIMER_RESET UFS_BIT(16) 183 + #define INT_AGGR_STATUS_BIT UFS_BIT(20) 184 + #define INT_AGGR_PARAM_WRITE UFS_BIT(24) 185 + #define INT_AGGR_ENABLE UFS_BIT(31) 186 + 187 + /* UTRLRSR - UTP Transfer Request Run-Stop Register 60h */ 188 + #define UTP_TRANSFER_REQ_LIST_RUN_STOP_BIT 
UFS_BIT(0) 189 + 190 + /* UTMRLRSR - UTP Task Management Request Run-Stop Register 80h */ 191 + #define UTP_TASK_REQ_LIST_RUN_STOP_BIT UFS_BIT(0) 192 + 193 + /* UICCMD - UIC Command */ 194 + #define COMMAND_OPCODE_MASK 0xFF 195 + #define GEN_SELECTOR_INDEX_MASK 0xFFFF 196 + 197 + #define MIB_ATTRIBUTE_MASK UFS_MASK(0xFFFF, 16) 198 + #define RESET_LEVEL 0xFF 199 + 200 + #define ATTR_SET_TYPE_MASK UFS_MASK(0xFF, 16) 201 + #define CONFIG_RESULT_CODE_MASK 0xFF 202 + #define GENERIC_ERROR_CODE_MASK 0xFF 203 + 204 + /* UIC Commands */ 205 + enum { 206 + UIC_CMD_DME_GET = 0x01, 207 + UIC_CMD_DME_SET = 0x02, 208 + UIC_CMD_DME_PEER_GET = 0x03, 209 + UIC_CMD_DME_PEER_SET = 0x04, 210 + UIC_CMD_DME_POWERON = 0x10, 211 + UIC_CMD_DME_POWEROFF = 0x11, 212 + UIC_CMD_DME_ENABLE = 0x12, 213 + UIC_CMD_DME_RESET = 0x14, 214 + UIC_CMD_DME_END_PT_RST = 0x15, 215 + UIC_CMD_DME_LINK_STARTUP = 0x16, 216 + UIC_CMD_DME_HIBER_ENTER = 0x17, 217 + UIC_CMD_DME_HIBER_EXIT = 0x18, 218 + UIC_CMD_DME_TEST_MODE = 0x1A, 219 + }; 220 + 221 + /* UIC Config result code / Generic error code */ 222 + enum { 223 + UIC_CMD_RESULT_SUCCESS = 0x00, 224 + UIC_CMD_RESULT_INVALID_ATTR = 0x01, 225 + UIC_CMD_RESULT_FAILURE = 0x01, 226 + UIC_CMD_RESULT_INVALID_ATTR_VALUE = 0x02, 227 + UIC_CMD_RESULT_READ_ONLY_ATTR = 0x03, 228 + UIC_CMD_RESULT_WRITE_ONLY_ATTR = 0x04, 229 + UIC_CMD_RESULT_BAD_INDEX = 0x05, 230 + UIC_CMD_RESULT_LOCKED_ATTR = 0x06, 231 + UIC_CMD_RESULT_BAD_TEST_FEATURE_INDEX = 0x07, 232 + UIC_CMD_RESULT_PEER_COMM_FAILURE = 0x08, 233 + UIC_CMD_RESULT_BUSY = 0x09, 234 + UIC_CMD_RESULT_DME_FAILURE = 0x0A, 235 + }; 236 + 237 + #define MASK_UIC_COMMAND_RESULT 0xFF 238 + 239 + #define INT_AGGR_COUNTER_THRESHOLD_VALUE (0x1F << 8) 240 + #define INT_AGGR_TIMEOUT_VALUE (0x02) 241 + 242 + /* Interrupt disable masks */ 243 + enum { 244 + /* Interrupt disable mask for UFSHCI v1.0 */ 245 + INTERRUPT_DISABLE_MASK_10 = 0xFFFF, 246 + 247 + /* Interrupt disable mask for UFSHCI v1.1 */ 248 + INTERRUPT_DISABLE_MASK_11 = 
0x0, 249 + }; 250 + 251 + /* 252 + * Request Descriptor Definitions 253 + */ 254 + 255 + /* Transfer request command type */ 256 + enum { 257 + UTP_CMD_TYPE_SCSI = 0x0, 258 + UTP_CMD_TYPE_UFS = 0x1, 259 + UTP_CMD_TYPE_DEV_MANAGE = 0x2, 260 + }; 261 + 262 + enum { 263 + UTP_SCSI_COMMAND = 0x00000000, 264 + UTP_NATIVE_UFS_COMMAND = 0x10000000, 265 + UTP_DEVICE_MANAGEMENT_FUNCTION = 0x20000000, 266 + UTP_REQ_DESC_INT_CMD = 0x01000000, 267 + }; 268 + 269 + /* UTP Transfer Request Data Direction (DD) */ 270 + enum { 271 + UTP_NO_DATA_TRANSFER = 0x00000000, 272 + UTP_HOST_TO_DEVICE = 0x02000000, 273 + UTP_DEVICE_TO_HOST = 0x04000000, 274 + }; 275 + 276 + /* Overall command status values */ 277 + enum { 278 + OCS_SUCCESS = 0x0, 279 + OCS_INVALID_CMD_TABLE_ATTR = 0x1, 280 + OCS_INVALID_PRDT_ATTR = 0x2, 281 + OCS_MISMATCH_DATA_BUF_SIZE = 0x3, 282 + OCS_MISMATCH_RESP_UPIU_SIZE = 0x4, 283 + OCS_PEER_COMM_FAILURE = 0x5, 284 + OCS_ABORTED = 0x6, 285 + OCS_FATAL_ERROR = 0x7, 286 + OCS_INVALID_COMMAND_STATUS = 0x0F, 287 + MASK_OCS = 0x0F, 288 + }; 289 + 290 + /** 291 + * struct ufshcd_sg_entry - UFSHCI PRD Entry 292 + * @base_addr: Lower 32bit physical address DW-0 293 + * @upper_addr: Upper 32bit physical address DW-1 294 + * @reserved: Reserved for future use DW-2 295 + * @size: size of physical segment DW-3 296 + */ 297 + struct ufshcd_sg_entry { 298 + u32 base_addr; 299 + u32 upper_addr; 300 + u32 reserved; 301 + u32 size; 302 + }; 303 + 304 + /** 305 + * struct utp_transfer_cmd_desc - UFS Command Descriptor structure 306 + * @command_upiu: Command UPIU Frame address 307 + * @response_upiu: Response UPIU Frame address 308 + * @prd_table: Physical Region Descriptor 309 + */ 310 + struct utp_transfer_cmd_desc { 311 + u8 command_upiu[ALIGNED_UPIU_SIZE]; 312 + u8 response_upiu[ALIGNED_UPIU_SIZE]; 313 + struct ufshcd_sg_entry prd_table[SG_ALL]; 314 + }; 315 + 316 + /** 317 + * struct request_desc_header - Descriptor Header common to both UTRD and UTMRD 318 + * @dword0: Descriptor 
Header DW0 319 + * @dword1: Descriptor Header DW1 320 + * @dword2: Descriptor Header DW2 321 + * @dword3: Descriptor Header DW3 322 + */ 323 + struct request_desc_header { 324 + u32 dword_0; 325 + u32 dword_1; 326 + u32 dword_2; 327 + u32 dword_3; 328 + }; 329 + 330 + /** 331 + * struct utp_transfer_req_desc - UTRD structure 332 + * @header: UTRD header DW-0 to DW-3 333 + * @command_desc_base_addr_lo: UCD base address low DW-4 334 + * @command_desc_base_addr_hi: UCD base address high DW-5 335 + * @response_upiu_length: response UPIU length DW-6 336 + * @response_upiu_offset: response UPIU offset DW-6 337 + * @prd_table_length: Physical region descriptor length DW-7 338 + * @prd_table_offset: Physical region descriptor offset DW-7 339 + */ 340 + struct utp_transfer_req_desc { 341 + 342 + /* DW 0-3 */ 343 + struct request_desc_header header; 344 + 345 + /* DW 4-5*/ 346 + u32 command_desc_base_addr_lo; 347 + u32 command_desc_base_addr_hi; 348 + 349 + /* DW 6 */ 350 + u16 response_upiu_length; 351 + u16 response_upiu_offset; 352 + 353 + /* DW 7 */ 354 + u16 prd_table_length; 355 + u16 prd_table_offset; 356 + }; 357 + 358 + /** 359 + * struct utp_task_req_desc - UTMRD structure 360 + * @header: UTMRD header DW-0 to DW-3 361 + * @task_req_upiu: Pointer to task request UPIU DW-4 to DW-11 362 + * @task_rsp_upiu: Pointer to task response UPIU DW12 to DW-19 363 + */ 364 + struct utp_task_req_desc { 365 + 366 + /* DW 0-3 */ 367 + struct request_desc_header header; 368 + 369 + /* DW 4-11 */ 370 + u32 task_req_upiu[TASK_REQ_UPIU_SIZE_DWORDS]; 371 + 372 + /* DW 12-19 */ 373 + u32 task_rsp_upiu[TASK_RSP_UPIU_SIZE_DWORDS]; 374 + }; 375 + 376 + #endif /* End of Header */
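The version-register masks above can be exercised with a small userspace sketch. `UFS_MASK()` itself is defined in the companion ufs.h header, not here — it is assumed below to expand to a plain shift, `((x) << (sh))`; the `uint32_t` cast is added only to keep the shift well-defined outside the kernel:

```c
#include <stdint.h>

/* Assumed expansion of UFS_MASK() from the companion ufs.h header,
 * with a cast added for well-defined userspace behavior. */
#define UFS_MASK(x, sh)		((uint32_t)(x) << (sh))

#define MINOR_VERSION_NUM_MASK	UFS_MASK(0xFFFF, 0)
#define MAJOR_VERSION_NUM_MASK	UFS_MASK(0xFFFF, 16)

/* Decode the major/minor fields of a raw REG_UFS_VERSION (08h) value. */
static inline uint16_t ufs_major(uint32_t ver)
{
	return (ver & MAJOR_VERSION_NUM_MASK) >> 16;
}

static inline uint16_t ufs_minor(uint32_t ver)
{
	return ver & MINOR_VERSION_NUM_MASK;
}
```

For `UFSHCI_VERSION_11` (0x00010100) this yields major 1 and minor 0x0100, matching the 1.1 encoding in the enum above.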
+64 -1
drivers/scsi/vmw_pvscsi.c
··· 17 17 * along with this program; if not, write to the Free Software 18 18 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 19 19 * 20 - * Maintained by: Alok N Kataria <akataria@vmware.com> 20 + * Maintained by: Arvind Kumar <arvindkumar@vmware.com> 21 21 * 22 22 */ 23 23 ··· 1178 1178 return 0; 1179 1179 } 1180 1180 1181 + /* 1182 + * Query the device, fetch the config info and return the 1183 + * maximum number of targets on the adapter. In case of 1184 + * failure due to any reason return default i.e. 16. 1185 + */ 1186 + static u32 pvscsi_get_max_targets(struct pvscsi_adapter *adapter) 1187 + { 1188 + struct PVSCSICmdDescConfigCmd cmd; 1189 + struct PVSCSIConfigPageHeader *header; 1190 + struct device *dev; 1191 + dma_addr_t configPagePA; 1192 + void *config_page; 1193 + u32 numPhys = 16; 1194 + 1195 + dev = pvscsi_dev(adapter); 1196 + config_page = pci_alloc_consistent(adapter->dev, PAGE_SIZE, 1197 + &configPagePA); 1198 + if (!config_page) { 1199 + dev_warn(dev, "vmw_pvscsi: failed to allocate memory for config page\n"); 1200 + goto exit; 1201 + } 1202 + BUG_ON(configPagePA & ~PAGE_MASK); 1203 + 1204 + /* Fetch config info from the device. */ 1205 + cmd.configPageAddress = ((u64)PVSCSI_CONFIG_CONTROLLER_ADDRESS) << 32; 1206 + cmd.configPageNum = PVSCSI_CONFIG_PAGE_CONTROLLER; 1207 + cmd.cmpAddr = configPagePA; 1208 + cmd._pad = 0; 1209 + 1210 + /* 1211 + * Mark the completion page header with error values. If the device 1212 + * completes the command successfully, it sets the status values to 1213 + * indicate success. 
1214 + */ 1215 + header = config_page; 1216 + memset(header, 0, sizeof *header); 1217 + header->hostStatus = BTSTAT_INVPARAM; 1218 + header->scsiStatus = SDSTAT_CHECK; 1219 + 1220 + pvscsi_write_cmd_desc(adapter, PVSCSI_CMD_CONFIG, &cmd, sizeof cmd); 1221 + 1222 + if (header->hostStatus == BTSTAT_SUCCESS && 1223 + header->scsiStatus == SDSTAT_GOOD) { 1224 + struct PVSCSIConfigPageController *config; 1225 + 1226 + config = config_page; 1227 + numPhys = config->numPhys; 1228 + } else 1229 + dev_warn(dev, "vmw_pvscsi: PVSCSI_CMD_CONFIG failed. hostStatus = 0x%x, scsiStatus = 0x%x\n", 1230 + header->hostStatus, header->scsiStatus); 1231 + pci_free_consistent(adapter->dev, PAGE_SIZE, config_page, configPagePA); 1232 + exit: 1233 + return numPhys; 1234 + } 1235 + 1181 1236 static int __devinit pvscsi_probe(struct pci_dev *pdev, 1182 1237 const struct pci_device_id *id) 1183 1238 { 1184 1239 struct pvscsi_adapter *adapter; 1185 1240 struct Scsi_Host *host; 1241 + struct device *dev; 1186 1242 unsigned int i; 1187 1243 unsigned long flags = 0; 1188 1244 int error; ··· 1326 1270 printk(KERN_ERR "vmw_pvscsi: unable to allocate ring memory\n"); 1327 1271 goto out_release_resources; 1328 1272 } 1273 + 1274 + /* 1275 + * Ask the device for max number of targets. 1276 + */ 1277 + host->max_id = pvscsi_get_max_targets(adapter); 1278 + dev = pvscsi_dev(adapter); 1279 + dev_info(dev, "vmw_pvscsi: host->max_id: %u\n", host->max_id); 1329 1280 1330 1281 /* 1331 1282 * From this point on we should reset the adapter if anything goes
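The completion-sentinel pattern used by `pvscsi_get_max_targets()` above — pre-mark the response page with failure codes, then trust the values only if the device overwrote them — can be sketched in isolation. The status constants are copied from vmw_pvscsi.h for illustration; the struct is a reduced stand-in, not the real config page:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative status values; the real definitions live in vmw_pvscsi.h. */
#define BTSTAT_SUCCESS	0x00
#define BTSTAT_INVPARAM	0x1a
#define SDSTAT_GOOD	0x00
#define SDSTAT_CHECK	0x02

struct config_hdr {
	uint16_t hostStatus;
	uint16_t scsiStatus;
};

/* Prime the completion area with error codes: if the device never writes
 * the page, the stale sentinel is read back and the caller falls through
 * to a safe default, exactly as pvscsi_get_max_targets() does. */
static void prime_completion(struct config_hdr *h)
{
	memset(h, 0, sizeof(*h));
	h->hostStatus = BTSTAT_INVPARAM;
	h->scsiStatus = SDSTAT_CHECK;
}

static uint32_t read_max_targets(const struct config_hdr *h,
				 uint32_t dev_numphys)
{
	if (h->hostStatus == BTSTAT_SUCCESS && h->scsiStatus == SDSTAT_GOOD)
		return dev_numphys;
	return 16;	/* driver default when the query fails */
}
```

The sentinel avoids a race on devices that complete the command without touching the page: an all-zero page would otherwise read as success.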
+85 -24
drivers/scsi/vmw_pvscsi.h
··· 17 17 * along with this program; if not, write to the Free Software 18 18 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 19 19 * 20 - * Maintained by: Alok N Kataria <akataria@vmware.com> 20 + * Maintained by: Arvind Kumar <arvindkumar@vmware.com> 21 21 * 22 22 */ 23 23 ··· 26 26 27 27 #include <linux/types.h> 28 28 29 - #define PVSCSI_DRIVER_VERSION_STRING "1.0.1.0-k" 29 + #define PVSCSI_DRIVER_VERSION_STRING "1.0.2.0-k" 30 30 31 31 #define PVSCSI_MAX_NUM_SG_ENTRIES_PER_SEGMENT 128 32 32 ··· 39 39 * host adapter status/error codes 40 40 */ 41 41 enum HostBusAdapterStatus { 42 - BTSTAT_SUCCESS = 0x00, /* CCB complete normally with no errors */ 43 - BTSTAT_LINKED_COMMAND_COMPLETED = 0x0a, 44 - BTSTAT_LINKED_COMMAND_COMPLETED_WITH_FLAG = 0x0b, 45 - BTSTAT_DATA_UNDERRUN = 0x0c, 46 - BTSTAT_SELTIMEO = 0x11, /* SCSI selection timeout */ 47 - BTSTAT_DATARUN = 0x12, /* data overrun/underrun */ 48 - BTSTAT_BUSFREE = 0x13, /* unexpected bus free */ 49 - BTSTAT_INVPHASE = 0x14, /* invalid bus phase or sequence requested by target */ 50 - BTSTAT_LUNMISMATCH = 0x17, /* linked CCB has different LUN from first CCB */ 51 - BTSTAT_SENSFAILED = 0x1b, /* auto request sense failed */ 52 - BTSTAT_TAGREJECT = 0x1c, /* SCSI II tagged queueing message rejected by target */ 53 - BTSTAT_BADMSG = 0x1d, /* unsupported message received by the host adapter */ 54 - BTSTAT_HAHARDWARE = 0x20, /* host adapter hardware failed */ 55 - BTSTAT_NORESPONSE = 0x21, /* target did not respond to SCSI ATN, sent a SCSI RST */ 56 - BTSTAT_SENTRST = 0x22, /* host adapter asserted a SCSI RST */ 57 - BTSTAT_RECVRST = 0x23, /* other SCSI devices asserted a SCSI RST */ 58 - BTSTAT_DISCONNECT = 0x24, /* target device reconnected improperly (w/o tag) */ 59 - BTSTAT_BUSRESET = 0x25, /* host adapter issued BUS device reset */ 60 - BTSTAT_ABORTQUEUE = 0x26, /* abort queue generated */ 61 - BTSTAT_HASOFTWARE = 0x27, /* host adapter software error */ 62 - BTSTAT_HATIMEOUT = 0x30, /* host 
adapter hardware timeout error */ 63 - BTSTAT_SCSIPARITY = 0x34, /* SCSI parity error detected */ 42 + BTSTAT_SUCCESS = 0x00, /* CCB complete normally with no errors */ 43 + BTSTAT_LINKED_COMMAND_COMPLETED = 0x0a, 44 + BTSTAT_LINKED_COMMAND_COMPLETED_WITH_FLAG = 0x0b, 45 + BTSTAT_DATA_UNDERRUN = 0x0c, 46 + BTSTAT_SELTIMEO = 0x11, /* SCSI selection timeout */ 47 + BTSTAT_DATARUN = 0x12, /* data overrun/underrun */ 48 + BTSTAT_BUSFREE = 0x13, /* unexpected bus free */ 49 + BTSTAT_INVPHASE = 0x14, /* invalid bus phase or sequence 50 + * requested by target */ 51 + BTSTAT_LUNMISMATCH = 0x17, /* linked CCB has different LUN from 52 + * first CCB */ 53 + BTSTAT_INVPARAM = 0x1a, /* invalid parameter in CCB or segment 54 + * list */ 55 + BTSTAT_SENSFAILED = 0x1b, /* auto request sense failed */ 56 + BTSTAT_TAGREJECT = 0x1c, /* SCSI II tagged queueing message 57 + * rejected by target */ 58 + BTSTAT_BADMSG = 0x1d, /* unsupported message received by the 59 + * host adapter */ 60 + BTSTAT_HAHARDWARE = 0x20, /* host adapter hardware failed */ 61 + BTSTAT_NORESPONSE = 0x21, /* target did not respond to SCSI ATN, 62 + * sent a SCSI RST */ 63 + BTSTAT_SENTRST = 0x22, /* host adapter asserted a SCSI RST */ 64 + BTSTAT_RECVRST = 0x23, /* other SCSI devices asserted a SCSI 65 + * RST */ 66 + BTSTAT_DISCONNECT = 0x24, /* target device reconnected improperly 67 + * (w/o tag) */ 68 + BTSTAT_BUSRESET = 0x25, /* host adapter issued BUS device reset */ 69 + BTSTAT_ABORTQUEUE = 0x26, /* abort queue generated */ 70 + BTSTAT_HASOFTWARE = 0x27, /* host adapter software error */ 71 + BTSTAT_HATIMEOUT = 0x30, /* host adapter hardware timeout error */ 72 + BTSTAT_SCSIPARITY = 0x34, /* SCSI parity error detected */ 73 + }; 74 + 75 + /* 76 + * SCSI device status values. 77 + */ 78 + enum ScsiDeviceStatus { 79 + SDSTAT_GOOD = 0x00, /* No errors. */ 80 + SDSTAT_CHECK = 0x02, /* Check condition. 
*/ 64 81 }; 65 82 66 83 /* ··· 129 112 u32 target; 130 113 u8 lun[8]; 131 114 } __packed; 115 + 116 + /* 117 + * Command descriptor for PVSCSI_CMD_CONFIG -- 118 + */ 119 + 120 + struct PVSCSICmdDescConfigCmd { 121 + u64 cmpAddr; 122 + u64 configPageAddress; 123 + u32 configPageNum; 124 + u32 _pad; 125 + } __packed; 126 + 127 + enum PVSCSIConfigPageType { 128 + PVSCSI_CONFIG_PAGE_CONTROLLER = 0x1958, 129 + PVSCSI_CONFIG_PAGE_PHY = 0x1959, 130 + PVSCSI_CONFIG_PAGE_DEVICE = 0x195a, 131 + }; 132 + 133 + enum PVSCSIConfigPageAddressType { 134 + PVSCSI_CONFIG_CONTROLLER_ADDRESS = 0x2120, 135 + PVSCSI_CONFIG_BUSTARGET_ADDRESS = 0x2121, 136 + PVSCSI_CONFIG_PHY_ADDRESS = 0x2122, 137 + }; 132 138 133 139 /* 134 140 * Command descriptor for PVSCSI_CMD_ABORT_CMD -- ··· 370 330 u16 hostStatus; 371 331 u16 scsiStatus; 372 332 u32 _pad[2]; 333 + } __packed; 334 + 335 + struct PVSCSIConfigPageHeader { 336 + u32 pageNum; 337 + u16 numDwords; 338 + u16 hostStatus; 339 + u16 scsiStatus; 340 + u16 reserved[3]; 341 + } __packed; 342 + 343 + struct PVSCSIConfigPageController { 344 + struct PVSCSIConfigPageHeader header; 345 + u64 nodeWWN; /* Device name as defined in the SAS spec. */ 346 + u16 manufacturer[64]; 347 + u16 serialNumber[64]; 348 + u16 opromVersion[32]; 349 + u16 hwVersion[32]; 350 + u16 firmwareVersion[32]; 351 + u32 numPhys; 352 + u8 useConsecutivePhyWWNs; 353 + u8 reserved[3]; 373 354 } __packed; 374 355 375 356 /*
+1
include/linux/mtio.h
··· 194 194 #define MT_ST_SYSV 0x1000 195 195 #define MT_ST_NOWAIT 0x2000 196 196 #define MT_ST_SILI 0x4000 197 + #define MT_ST_NOWAIT_EOF 0x8000 197 198 198 199 /* The mode parameters to be controlled. Parameter chosen with bits 20-28 */ 199 200 #define MT_ST_CLEAR_DEFAULT 0xfffff
+17 -2
include/scsi/iscsi_if.h
··· 261 261 } host_event; 262 262 struct msg_ping_comp { 263 263 uint32_t host_no; 264 - uint32_t status; 264 + uint32_t status; /* enum 265 + * iscsi_ping_status_code */ 265 266 uint32_t pid; /* unique ping id associated 266 267 with each ping request */ 267 268 uint32_t data_size; ··· 484 483 ISCSI_PORT_STATE_UP = 0x2, 485 484 }; 486 485 486 + /* iSCSI PING status/error code */ 487 + enum iscsi_ping_status_code { 488 + ISCSI_PING_SUCCESS = 0, 489 + ISCSI_PING_FW_DISABLED = 0x1, 490 + ISCSI_PING_IPADDR_INVALID = 0x2, 491 + ISCSI_PING_LINKLOCAL_IPV6_ADDR_INVALID = 0x3, 492 + ISCSI_PING_TIMEOUT = 0x4, 493 + ISCSI_PING_INVALID_DEST_ADDR = 0x5, 494 + ISCSI_PING_OVERSIZE_PACKET = 0x6, 495 + ISCSI_PING_ICMP_ERROR = 0x7, 496 + ISCSI_PING_MAX_REQ_EXCEEDED = 0x8, 497 + ISCSI_PING_NO_ARP_RECEIVED = 0x9, 498 + }; 499 + 487 500 #define iscsi_ptr(_handle) ((void*)(unsigned long)_handle) 488 501 #define iscsi_handle(_ptr) ((uint64_t)(unsigned long)_ptr) 489 502 ··· 593 578 char username[ISCSI_CHAP_AUTH_NAME_MAX_LEN]; 594 579 uint8_t password[ISCSI_CHAP_AUTH_SECRET_MAX_LEN]; 595 580 uint8_t password_length; 596 - } __packed; 581 + }; 597 582 598 583 #endif
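The new `enum iscsi_ping_status_code` is what `msg_ping_comp.status` now carries. A hypothetical helper (not part of the patch) shows how a consumer might translate a few of the codes for logging; the enum values are copied verbatim from the hunk above:

```c
/* Mirrors enum iscsi_ping_status_code from include/scsi/iscsi_if.h. */
enum iscsi_ping_status_code {
	ISCSI_PING_SUCCESS			= 0,
	ISCSI_PING_FW_DISABLED			= 0x1,
	ISCSI_PING_IPADDR_INVALID		= 0x2,
	ISCSI_PING_LINKLOCAL_IPV6_ADDR_INVALID	= 0x3,
	ISCSI_PING_TIMEOUT			= 0x4,
	ISCSI_PING_INVALID_DEST_ADDR		= 0x5,
	ISCSI_PING_OVERSIZE_PACKET		= 0x6,
	ISCSI_PING_ICMP_ERROR			= 0x7,
	ISCSI_PING_MAX_REQ_EXCEEDED		= 0x8,
	ISCSI_PING_NO_ARP_RECEIVED		= 0x9,
};

/* Hypothetical logging helper for the status in msg_ping_comp.status. */
static const char *iscsi_ping_status_str(enum iscsi_ping_status_code c)
{
	switch (c) {
	case ISCSI_PING_SUCCESS:		return "success";
	case ISCSI_PING_TIMEOUT:		return "timeout";
	case ISCSI_PING_NO_ARP_RECEIVED:	return "no ARP reply";
	default:				return "error";
	}
}
```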
+3 -1
include/scsi/libfcoe.h
··· 165 165 * @switch_name: WWN of switch from advertisement 166 166 * @fabric_name: WWN of fabric from advertisement 167 167 * @fc_map: FC_MAP value from advertisement 168 - * @fcf_mac: Ethernet address of the FCF 168 + * @fcf_mac: Ethernet address of the FCF for FIP traffic 169 + * @fcoe_mac: Ethernet address of the FCF for FCoE traffic 169 170 * @vfid: virtual fabric ID 170 171 * @pri: selection priority, smaller values are better 171 172 * @flogi_sent: current FLOGI sent to this FCF ··· 189 188 u32 fc_map; 190 189 u16 vfid; 191 190 u8 fcf_mac[ETH_ALEN]; 191 + u8 fcoe_mac[ETH_ALEN]; 192 192 193 193 u8 pri; 194 194 u8 flogi_sent;
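The libfcoe.h change splits the single FCF address into `fcf_mac` for FIP control traffic and `fcoe_mac` for FCoE data traffic, which may differ. A reduced sketch (the selector function is hypothetical, not kernel API) of how a sender would pick the destination:

```c
#include <stdint.h>

#define ETH_ALEN 6

/* Reduced view of struct fcoe_fcf after the split. */
struct fcf_macs {
	uint8_t fcf_mac[ETH_ALEN];	/* destination for FIP frames */
	uint8_t fcoe_mac[ETH_ALEN];	/* destination for FCoE frames */
};

/* Hypothetical selector: route control vs. data frames to the right MAC. */
static const uint8_t *fcf_dest_mac(const struct fcf_macs *f, int is_fip)
{
	return is_fip ? f->fcf_mac : f->fcoe_mac;
}
```

Keeping both addresses in the FCF entry lets FIP keepalives keep flowing to the advertised FCF MAC even when data frames use a different granted address.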