Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
"Updates to the usual drivers (smartpqi, ufs, lpfc, scsi_debug, target,
hisi_sas) with the only substantive core change being the removal of
the stream_status member from the scsi_stream_status_header (to get
rid of flex array members)"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (77 commits)
scsi: target: core: Constify struct target_opcode_descriptor
scsi: target: core: Constify enabled() in struct target_opcode_descriptor
scsi: hisi_sas: Fix warning detected by sparse
scsi: mpt3sas: Fix _ctl_get_mpt_mctp_passthru_adapter() to return IOC pointer
scsi: sg: Remove unnecessary NULL check before unregister_sysctl_table()
scsi: ufs: mcq: Delete ufshcd_release_scsi_cmd() in ufshcd_mcq_abort()
scsi: ufs: qcom: dt-bindings: Document the SM8750 UFS Controller
scsi: mvsas: Fix typos in SAS/SATA VSP register comments
scsi: fnic: Replace memset() with eth_zero_addr()
scsi: ufs: core: Support updating device command timeout
scsi: ufs: core: Change hwq_id type and value
scsi: ufs: core: Increase the UIC command timeout further
scsi: zfcp: Simplify workqueue allocation
scsi: ufs: core: Print error value as hex format in ufshcd_err_handler()
scsi: sd: Remove the stream_status member from scsi_stream_status_header
scsi: docs: Clean up some style in scsi_mid_low_api
scsi: core: Remove unused scsi_dev_info_list_del_keyed()
scsi: isci: Remove unused sci_remote_device_reset()
scsi: scsi_debug: Reduce DEF_ATOMIC_WR_MAX_LENGTH
scsi: smartpqi: Delete a stray tab in pqi_is_parity_write_stream()
...

Diffstat: +1996 -1829
Documentation/ABI/testing/sysfs-driver-ufs (+49)
···
 		attribute value.
 
 		The attribute is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/wb_resize_enable
+What:		/sys/bus/platform/devices/*.ufs/wb_resize_enable
+Date:		April 2025
+Contact:	Huan Tang <tanghuan@vivo.com>
+Description:
+		The host can enable the WriteBooster buffer resize by setting this
+		attribute.
+
+		======== ======================================
+		idle     There is no resize operation
+		decrease Decrease WriteBooster buffer size
+		increase Increase WriteBooster buffer size
+		======== ======================================
+
+		The file is write only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_resize_hint
+What:		/sys/bus/platform/devices/*.ufs/attributes/wb_resize_hint
+Date:		April 2025
+Contact:	Huan Tang <tanghuan@vivo.com>
+Description:
+		wb_resize_hint indicates hint information about which type of resize
+		for WriteBooster buffer is recommended by the device.
+
+		========= ======================================
+		keep      Recommend keep the buffer size
+		decrease  Recommend to decrease the buffer size
+		increase  Recommend to increase the buffer size
+		========= ======================================
+
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_resize_status
+What:		/sys/bus/platform/devices/*.ufs/attributes/wb_resize_status
+Date:		April 2025
+Contact:	Huan Tang <tanghuan@vivo.com>
+Description:
+		The host can check the resize operation status of the WriteBooster
+		buffer by reading this attribute.
+
+		================ ========================================
+		idle             Resize operation is not issued
+		in_progress      Resize operation in progress
+		complete_success Resize operation completed successfully
+		general_failure  Resize operation general failure
+		================ ========================================
+
+		The file is read only.
Documentation/devicetree/bindings/ufs/qcom,ufs.yaml (+2)
···
           - qcom,sm8450-ufshc
           - qcom,sm8550-ufshc
           - qcom,sm8650-ufshc
+          - qcom,sm8750-ufshc
       - const: qcom,ufshc
       - const: jedec,ufs-2.0
···
               - qcom,sm8450-ufshc
               - qcom,sm8550-ufshc
               - qcom,sm8650-ufshc
+              - qcom,sm8750-ufshc
     then:
       properties:
         clocks:
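With the new compatible in place, a board devicetree would describe the SM8750 UFS controller along these lines (the unit address and omitted properties are illustrative, taken from the usual Qualcomm UFS layout rather than from this patch):

```dts
ufshc@1d84000 {
    compatible = "qcom,sm8750-ufshc", "qcom,ufshc", "jedec,ufs-2.0";
    /* reg, clocks, clock-names, phys, interrupts, etc.
     * as required by the qcom,ufs.yaml binding */
};
```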
Documentation/scsi/scsi_mid_low_api.rst (+9 -9)
···
 The SCSI mid level isolates an LLD from other layers such as the SCSI
 upper layer drivers and the block layer.
 
-This version of the document roughly matches linux kernel version 2.6.8 .
+This version of the document roughly matches Linux kernel version 2.6.8 .
 
 Documentation
 =============
···
 at https://docs.kernel.org/scsi/scsi_mid_low_api.html. Many LLDs are
 documented in Documentation/scsi (e.g. aic7xxx.rst). The SCSI mid-level is
 briefly described in scsi.rst which contains a URL to a document describing
-the SCSI subsystem in the Linux Kernel 2.4 series. Two upper level
+the SCSI subsystem in the Linux kernel 2.4 series. Two upper level
 drivers have documents in that directory: st.rst (SCSI tape driver) and
 scsi-generic.rst (for the sg driver).
···
 As the 2.5 series development kernels evolve into the 2.6 series
 production series, changes are being introduced into this interface. An
 example of this is driver initialization code where there are now 2 models
-available. The older one, similar to what was found in the lk 2.4 series,
+available. The older one, similar to what was found in the Linux 2.4 series,
 is based on hosts that are detected at HBA driver load time. This will be
 referred to the "passive" initialization model. The newer model allows HBAs
 to be hot plugged (and unplugged) during the lifetime of the LLD and will
···
 of interest:
 
 host_no
-    - system wide unique number that is used for identifying
+    - system-wide unique number that is used for identifying
       this host. Issued in ascending order from 0.
 can_queue
     - must be greater than 0; do not send more than can_queue
···
     - pointer to driver's struct scsi_host_template from which
       this struct Scsi_Host instance was spawned
 hostt->proc_name
-    - name of LLD. This is the driver name that sysfs uses
+    - name of LLD. This is the driver name that sysfs uses.
 transportt
     - pointer to driver's struct scsi_transport_template instance
       (if any). FC and SPI transports currently supported.
···
 struct scsi_device
 ------------------
 Generally, there is one instance of this structure for each SCSI logical unit
-on a host. Scsi devices connected to a host are uniquely identified by a
+on a host. SCSI devices connected to a host are uniquely identified by a
 channel number, target id and logical unit number (lun).
 The structure is defined in include/scsi/scsi_device.h
···
     - should be set by LLD prior to calling 'done'. A value
       of 0 implies a successfully completed command (and all
       data (if any) has been transferred to or from the SCSI
-      target device). 'result' is a 32 bit unsigned integer that
+      target device). 'result' is a 32-bit unsigned integer that
       can be viewed as 2 related bytes. The SCSI status value is
       in the LSB. See include/scsi/scsi.h status_byte() and
       host_byte() macros and related constants.
···
 to perform autosense.
 
 
-Changes since lk 2.4 series
-===========================
+Changes since Linux kernel 2.4 series
+=====================================
 io_request_lock has been replaced by several finer grained locks. The lock
 relevant to LLDs is struct Scsi_Host::host_lock and there is
 one per SCSI host.
drivers/mmc/host/sdhci-msm.c (+6 -10)
···
 	if (IS_ERR_OR_NULL(ice))
 		return PTR_ERR_OR_ZERO(ice);
 
+	if (qcom_ice_get_supported_key_type(ice) != BLK_CRYPTO_KEY_TYPE_RAW) {
+		dev_warn(dev, "Wrapped keys not supported. Disabling inline encryption support.\n");
+		return 0;
+	}
+
 	msm_host->ice = ice;
 
 	/* Initialize the blk_crypto_profile */
···
 	struct sdhci_msm_host *msm_host =
 		sdhci_msm_host_from_crypto_profile(profile);
 
-	/* Only AES-256-XTS has been tested so far. */
-	if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
-		return -EOPNOTSUPP;
-
-	return qcom_ice_program_key(msm_host->ice,
-				    QCOM_ICE_CRYPTO_ALG_AES_XTS,
-				    QCOM_ICE_CRYPTO_KEY_SIZE_256,
-				    key->bytes,
-				    key->crypto_cfg.data_unit_size / 512,
-				    slot);
+	return qcom_ice_program_key(msm_host->ice, slot, key);
 }
 
 static int sdhci_msm_ice_keyslot_evict(struct blk_crypto_profile *profile,
drivers/s390/scsi/zfcp_aux.c (+6 -8)
···
 
 static int zfcp_setup_adapter_work_queue(struct zfcp_adapter *adapter)
 {
-	char name[TASK_COMM_LEN];
+	adapter->work_queue =
+		alloc_ordered_workqueue("zfcp_q_%s", WQ_MEM_RECLAIM,
+					dev_name(&adapter->ccw_device->dev));
+	if (!adapter->work_queue)
+		return -ENOMEM;
 
-	snprintf(name, sizeof(name), "zfcp_q_%s",
-		 dev_name(&adapter->ccw_device->dev));
-	adapter->work_queue = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
-
-	if (adapter->work_queue)
-		return 0;
-	return -ENOMEM;
+	return 0;
 }
 
 static void zfcp_destroy_adapter_work_queue(struct zfcp_adapter *adapter)
drivers/scsi/dc395x.c (+33 -664)
···
 /*#define DC395x_NO_SYNC*/
 /*#define DC395x_NO_WIDE*/
 
-/*---------------------------------------------------------------------------
-                                  Debugging
- ---------------------------------------------------------------------------*/
-/*
- * Types of debugging that can be enabled and disabled
- */
-#define DBG_KG		0x0001
-#define DBG_0		0x0002
-#define DBG_1		0x0004
-#define DBG_SG		0x0020
-#define DBG_FIFO	0x0040
-#define DBG_PIO		0x0080
-
-
-/*
- * Set set of things to output debugging for.
- * Undefine to remove all debugging
- */
-/*#define DEBUG_MASK (DBG_0|DBG_1|DBG_SG|DBG_FIFO|DBG_PIO)*/
-/*#define DEBUG_MASK	DBG_0*/
-
-
-/*
- * Output a kernel mesage at the specified level and append the
- * driver name and a ": " to the start of the message
- */
-#define dprintkl(level, format, arg...)  \
-	printk(level DC395X_NAME ": " format , ## arg)
-
-
-#ifdef DEBUG_MASK
-/*
- * print a debug message - this is formated with KERN_DEBUG, then the
- * driver name followed by a ": " and then the message is output.
- * This also checks that the specified debug level is enabled before
- * outputing the message
- */
-#define dprintkdbg(type, format, arg...) \
-	do { \
-		if ((type) & (DEBUG_MASK)) \
-			dprintkl(KERN_DEBUG , format , ## arg); \
-	} while (0)
-
-/*
- * Check if the specified type of debugging is enabled
- */
-#define debug_enabled(type)	((DEBUG_MASK) & (type))
-
-#else
-/*
- * No debugging. Do nothing
- */
-#define dprintkdbg(type, format, arg...) \
-	do {} while (0)
-#define debug_enabled(type)	(0)
-
-#endif
-
-
 #ifndef PCI_VENDOR_ID_TEKRAM
 #define PCI_VENDOR_ID_TEKRAM	0x1DE1	/* Vendor ID	*/
 #endif
···
 
 /* real period:48ns,76ns,100ns,124ns,148ns,176ns,200ns,248ns */
 static u8 clock_period[] = { 12, 18, 25, 31, 37, 43, 50, 62 };
-static u16 clock_speed[] = { 200, 133, 100, 80, 67, 58, 50, 40 };
 
 
 /*---------------------------------------------------------------------------
···
 {
 	int i;
 
-	dprintkl(KERN_INFO, "Using safe settings.\n");
 	for (i = 0; i < CFG_NUM; i++)
 	{
 		cfg_data[i].value = cfg_data[i].safe;
···
 {
 	int i;
 
-	dprintkdbg(DBG_1,
-		"setup: AdapterId=%08x MaxSpeed=%08x DevMode=%08x "
-		"AdapterMode=%08x Tags=%08x ResetDelay=%08x\n",
-		cfg_data[CFG_ADAPTER_ID].value,
-		cfg_data[CFG_MAX_SPEED].value,
-		cfg_data[CFG_DEV_MODE].value,
-		cfg_data[CFG_ADAPTER_MODE].value,
-		cfg_data[CFG_TAGS].value,
-		cfg_data[CFG_RESET_DELAY].value);
 	for (i = 0; i < CFG_NUM; i++)
 	{
 		if (cfg_data[i].value < cfg_data[i].min
···
 {
 	unsigned long flags;
 	struct AdapterCtlBlk *acb = from_timer(acb, t, waiting_timer);
-	dprintkdbg(DBG_1,
-		"waiting_timeout: Queue woken up by timer. acb=%p\n", acb);
 	DC395x_LOCK_IO(acb->scsi_host, flags);
 	waiting_process_next(acb);
 	DC395x_UNLOCK_IO(acb->scsi_host, flags);
···
 {
 	int nseg;
 	enum dma_data_direction dir = cmd->sc_data_direction;
-	dprintkdbg(DBG_0, "build_srb: (0x%p) <%02i-%i>\n",
-		cmd, dcb->target_id, dcb->target_lun);
 
 	srb->dcb = dcb;
 	srb->cmd = cmd;
···
 	nseg = scsi_dma_map(cmd);
 	BUG_ON(nseg < 0);
 
-	if (dir == DMA_NONE || !nseg) {
-		dprintkdbg(DBG_0,
-			"build_srb: [0] len=%d buf=%p use_sg=%d !MAP=%08x\n",
-			cmd->bufflen, scsi_sglist(cmd), scsi_sg_count(cmd),
-			srb->segment_x[0].address);
-	} else {
+	if (!(dir == DMA_NONE || !nseg)) {
 		int i;
 		u32 reqlen = scsi_bufflen(cmd);
 		struct scatterlist *sg;
 		struct SGentry *sgp = srb->segment_x;
 
 		srb->sg_count = nseg;
-
-		dprintkdbg(DBG_0,
-			"build_srb: [n] len=%d buf=%p use_sg=%d segs=%d\n",
-			reqlen, scsi_sglist(cmd), scsi_sg_count(cmd),
-			srb->sg_count);
 
 		scsi_for_each_sg(cmd, sg, srb->sg_count, i) {
 			u32 busaddr = (u32)sg_dma_address(sg);
···
 		srb->sg_bus_addr = dma_map_single(&dcb->acb->dev->dev,
 				srb->segment_x, SEGMENTX_LEN, DMA_TO_DEVICE);
 
-		dprintkdbg(DBG_SG, "build_srb: [n] map sg %p->%08x(%05x)\n",
-			srb->segment_x, srb->sg_bus_addr, SEGMENTX_LEN);
 	}
 
 	srb->request_length = srb->total_xfer_length;
···
 	struct ScsiReqBlk *srb;
 	struct AdapterCtlBlk *acb =
 		(struct AdapterCtlBlk *)cmd->device->host->hostdata;
-	dprintkdbg(DBG_0, "queue_command: (0x%p) <%02i-%i> cmnd=0x%02x\n",
-		cmd, cmd->device->id, (u8)cmd->device->lun, cmd->cmnd[0]);
 
 	/* Assume BAD_TARGET; will be cleared later */
 	set_host_byte(cmd, DID_BAD_TARGET);
···
 	/* ignore invalid targets */
 	if (cmd->device->id >= acb->scsi_host->max_id ||
 	    cmd->device->lun >= acb->scsi_host->max_lun ||
-	    cmd->device->lun >31) {
+	    cmd->device->lun > 31)
 		goto complete;
-	}
 
 	/* does the specified lun on the specified device exist */
-	if (!(acb->dcb_map[cmd->device->id] & (1 << cmd->device->lun))) {
-		dprintkl(KERN_INFO, "queue_command: Ignore target <%02i-%i>\n",
-			cmd->device->id, (u8)cmd->device->lun);
+	if (!(acb->dcb_map[cmd->device->id] & (1 << cmd->device->lun)))
 		goto complete;
-	}
 
 	/* do we have a DCB for the device */
 	dcb = find_dcb(acb, cmd->device->id, cmd->device->lun);
-	if (!dcb) {
-		/* should never happen */
-		dprintkl(KERN_ERR, "queue_command: No such device <%02i-%i>",
-			cmd->device->id, (u8)cmd->device->lun);
+	if (!dcb)
 		goto complete;
-	}
 
 	set_host_byte(cmd, DID_OK);
 	set_status_byte(cmd, SAM_STAT_GOOD);
 
 	srb = list_first_entry_or_null(&acb->srb_free_list,
-			struct ScsiReqBlk, list);
+				       struct ScsiReqBlk, list);
+
 	if (!srb) {
-		/*
-		 * Return 1 since we are unable to queue this command at this
-		 * point in time.
-		 */
-		dprintkdbg(DBG_0, "queue_command: No free srb's\n");
+		/* should never happen */
 		return 1;
 	}
 	list_del(&srb->list);
···
 		/* process immediately */
 		send_srb(acb, srb);
 	}
-	dprintkdbg(DBG_1, "queue_command: (0x%p) done\n", cmd);
 	return 0;
 
 complete:
···
 
 static DEF_SCSI_QCMD(dc395x_queue_command)
 
-static void dump_register_info(struct AdapterCtlBlk *acb,
-		struct DeviceCtlBlk *dcb, struct ScsiReqBlk *srb)
-{
-	u16 pstat;
-	struct pci_dev *dev = acb->dev;
-	pci_read_config_word(dev, PCI_STATUS, &pstat);
-	if (!dcb)
-		dcb = acb->active_dcb;
-	if (!srb && dcb)
-		srb = dcb->active_srb;
-	if (srb) {
-		if (!srb->cmd)
-			dprintkl(KERN_INFO, "dump: srb=%p cmd=%p OOOPS!\n",
-				srb, srb->cmd);
-		else
-			dprintkl(KERN_INFO, "dump: srb=%p cmd=%p "
-				 "cmnd=0x%02x <%02i-%i>\n",
-				srb, srb->cmd,
-				srb->cmd->cmnd[0], srb->cmd->device->id,
-				(u8)srb->cmd->device->lun);
-		printk("  sglist=%p cnt=%i idx=%i len=%zu\n",
-		       srb->segment_x, srb->sg_count, srb->sg_index,
-		       srb->total_xfer_length);
-		printk("  state=0x%04x status=0x%02x phase=0x%02x (%sconn.)\n",
-		       srb->state, srb->status, srb->scsi_phase,
-		       (acb->active_dcb) ? "" : "not");
-	}
-	dprintkl(KERN_INFO, "dump: SCSI{status=0x%04x fifocnt=0x%02x "
-		"signals=0x%02x irqstat=0x%02x sync=0x%02x target=0x%02x "
-		"rselid=0x%02x ctr=0x%08x irqen=0x%02x config=0x%04x "
-		"config2=0x%02x cmd=0x%02x selto=0x%02x}\n",
-		DC395x_read16(acb, TRM_S1040_SCSI_STATUS),
-		DC395x_read8(acb, TRM_S1040_SCSI_FIFOCNT),
-		DC395x_read8(acb, TRM_S1040_SCSI_SIGNAL),
-		DC395x_read8(acb, TRM_S1040_SCSI_INTSTATUS),
-		DC395x_read8(acb, TRM_S1040_SCSI_SYNC),
-		DC395x_read8(acb, TRM_S1040_SCSI_TARGETID),
-		DC395x_read8(acb, TRM_S1040_SCSI_IDMSG),
-		DC395x_read32(acb, TRM_S1040_SCSI_COUNTER),
-		DC395x_read8(acb, TRM_S1040_SCSI_INTEN),
-		DC395x_read16(acb, TRM_S1040_SCSI_CONFIG0),
-		DC395x_read8(acb, TRM_S1040_SCSI_CONFIG2),
-		DC395x_read8(acb, TRM_S1040_SCSI_COMMAND),
-		DC395x_read8(acb, TRM_S1040_SCSI_TIMEOUT));
-	dprintkl(KERN_INFO, "dump: DMA{cmd=0x%04x fifocnt=0x%02x fstat=0x%02x "
-		"irqstat=0x%02x irqen=0x%02x cfg=0x%04x tctr=0x%08x "
-		"ctctr=0x%08x addr=0x%08x:0x%08x}\n",
-		DC395x_read16(acb, TRM_S1040_DMA_COMMAND),
-		DC395x_read8(acb, TRM_S1040_DMA_FIFOCNT),
-		DC395x_read8(acb, TRM_S1040_DMA_FIFOSTAT),
-		DC395x_read8(acb, TRM_S1040_DMA_STATUS),
-		DC395x_read8(acb, TRM_S1040_DMA_INTEN),
-		DC395x_read16(acb, TRM_S1040_DMA_CONFIG),
-		DC395x_read32(acb, TRM_S1040_DMA_XCNT),
-		DC395x_read32(acb, TRM_S1040_DMA_CXCNT),
-		DC395x_read32(acb, TRM_S1040_DMA_XHIGHADDR),
-		DC395x_read32(acb, TRM_S1040_DMA_XLOWADDR));
-	dprintkl(KERN_INFO, "dump: gen{gctrl=0x%02x gstat=0x%02x gtmr=0x%02x} "
-		"pci{status=0x%04x}\n",
-		DC395x_read8(acb, TRM_S1040_GEN_CONTROL),
-		DC395x_read8(acb, TRM_S1040_GEN_STATUS),
-		DC395x_read8(acb, TRM_S1040_GEN_TIMER),
-		pstat);
-}
-
-
 static inline void clear_fifo(struct AdapterCtlBlk *acb, char *txt)
 {
-#if debug_enabled(DBG_FIFO)
-	u8 lines = DC395x_read8(acb, TRM_S1040_SCSI_SIGNAL);
-	u8 fifocnt = DC395x_read8(acb, TRM_S1040_SCSI_FIFOCNT);
-	if (!(fifocnt & 0x40))
-		dprintkdbg(DBG_FIFO,
-			"clear_fifo: (%i bytes) on phase %02x in %s\n",
-			fifocnt & 0x3f, lines, txt);
-#endif
 	DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_CLRFIFO);
 }
···
 {
 	struct DeviceCtlBlk *dcb;
 	struct NvRamType *eeprom = &acb->eeprom;
-	dprintkdbg(DBG_0, "reset_dev_param: acb=%p\n", acb);
 
 	list_for_each_entry(dcb, &acb->dcb_list, list) {
 		u8 period_index;
···
 {
 	struct AdapterCtlBlk *acb =
 		(struct AdapterCtlBlk *)cmd->device->host->hostdata;
-	dprintkl(KERN_INFO,
-		"eh_bus_reset: (0%p) target=<%02i-%i> cmd=%p\n",
-		cmd, cmd->device->id, (u8)cmd->device->lun, cmd);
 
 	if (timer_pending(&acb->waiting_timer))
 		timer_delete(&acb->waiting_timer);
···
 		(struct AdapterCtlBlk *)cmd->device->host->hostdata;
 	struct DeviceCtlBlk *dcb;
 	struct ScsiReqBlk *srb;
-	dprintkl(KERN_INFO, "eh_abort: (0x%p) target=<%02i-%i> cmd=%p\n",
-		cmd, cmd->device->id, (u8)cmd->device->lun, cmd);
 
 	dcb = find_dcb(acb, cmd->device->id, cmd->device->lun);
-	if (!dcb) {
-		dprintkl(KERN_DEBUG, "eh_abort: No such device\n");
+	if (!dcb)
 		return FAILED;
-	}
 
 	srb = find_cmd(cmd, &dcb->srb_waiting_list);
 	if (srb) {
···
 		pci_unmap_srb(acb, srb);
 		free_tag(dcb, srb);
 		list_add_tail(&srb->list, &acb->srb_free_list);
-		dprintkl(KERN_DEBUG, "eh_abort: Command was waiting\n");
 		set_host_byte(cmd, DID_ABORT);
 		return SUCCESS;
 	}
 	srb = find_cmd(cmd, &dcb->srb_going_list);
 	if (srb) {
-		dprintkl(KERN_DEBUG, "eh_abort: Command in progress\n");
 		/* XXX: Should abort the command here */
-	} else {
-		dprintkl(KERN_DEBUG, "eh_abort: Command not found\n");
 	}
 	return FAILED;
 }
···
 {
 	u8 *ptr = srb->msgout_buf + srb->msg_count;
 	if (srb->msg_count > 1) {
-		dprintkl(KERN_INFO,
-			"build_sdtr: msgout_buf BUSY (%i: %02x %02x)\n",
-			srb->msg_count, srb->msgout_buf[0],
-			srb->msgout_buf[1]);
 		return;
 	}
 	if (!(dcb->dev_mode & NTC_DO_SYNC_NEGO)) {
···
 	u8 wide = ((dcb->dev_mode & NTC_DO_WIDE_NEGO) &
 		   (acb->config & HCC_WIDE_CARD)) ? 1 : 0;
 	u8 *ptr = srb->msgout_buf + srb->msg_count;
-	if (srb->msg_count > 1) {
-		dprintkl(KERN_INFO,
-			"build_wdtr: msgout_buf BUSY (%i: %02x %02x)\n",
-			srb->msg_count, srb->msgout_buf[0],
-			srb->msgout_buf[1]);
+	if (srb->msg_count > 1)
 		return;
-	}
+
 	srb->msg_count += spi_populate_width_msg(ptr, wide);
 	srb->state |= SRB_DO_WIDE_NEGO;
 }
···
 	unsigned long flags;
 	struct AdapterCtlBlk *acb = (struct AdapterCtlBlk *)ptr;
 	struct ScsiReqBlk *srb;
-	dprintkl(KERN_DEBUG, "Chip forgot to produce SelTO IRQ!\n");
-	if (!acb->active_dcb || !acb->active_dcb->active_srb) {
-		dprintkl(KERN_DEBUG, "... but no cmd pending? Oops!\n");
+	if (!acb->active_dcb || !acb->active_dcb->active_srb)
 		return;
-	}
+
 	DC395x_LOCK_IO(acb->scsi_host, flags);
 	srb = acb->active_dcb->active_srb;
 	disconnect(acb);
···
 	u16 __maybe_unused s_stat2, return_code;
 	u8 s_stat, scsicommand, i, identify_message;
 	u8 *ptr;
-	dprintkdbg(DBG_0, "start_scsi: (0x%p) <%02i-%i> srb=%p\n",
-		dcb->target_id, dcb->target_lun, srb);
 
 	srb->tag_number = TAG_NONE;	/* acb->tag_max_num: had error read in eeprom */
···
 	s_stat2 = DC395x_read16(acb, TRM_S1040_SCSI_STATUS);
 #if 1
 	if (s_stat & 0x20 /* s_stat2 & 0x02000 */ ) {
-		dprintkdbg(DBG_KG, "start_scsi: (0x%p) BUSY %02x %04x\n",
-			s_stat, s_stat2);
 		/*
 		 * Try anyway?
 		 *
···
 		return 1;
 	}
 #endif
-	if (acb->active_dcb) {
-		dprintkl(KERN_DEBUG, "start_scsi: (0x%p) Attempt to start a"
-			"command while another command (0x%p) is active.",
-			srb->cmd,
-			acb->active_dcb->active_srb ?
-			acb->active_dcb->active_srb->cmd : NULL);
+	if (acb->active_dcb)
 		return 1;
-	}
-	if (DC395x_read16(acb, TRM_S1040_SCSI_STATUS) & SCSIINTERRUPT) {
-		dprintkdbg(DBG_KG, "start_scsi: (0x%p) Failed (busy)\n", srb->cmd);
+
+	if (DC395x_read16(acb, TRM_S1040_SCSI_STATUS) & SCSIINTERRUPT)
 		return 1;
-	}
+
 	/* Allow starting of SCSI commands half a second before we allow the mid-level
 	 * to queue them again after a reset */
-	if (time_before(jiffies, acb->last_reset - HZ / 2)) {
-		dprintkdbg(DBG_KG, "start_scsi: Refuse cmds (reset wait)\n");
+	if (time_before(jiffies, acb->last_reset - HZ / 2))
 		return 1;
-	}
 
 	/* Flush FIFO */
 	clear_fifo(acb, "start_scsi");
···
 			tag_number++;
 		}
 		if (tag_number >= dcb->max_command) {
-			dprintkl(KERN_WARNING, "start_scsi: (0x%p) "
-				"Out of tags target=<%02i-%i>)\n",
-				srb->cmd, srb->cmd->device->id,
-				(u8)srb->cmd->device->lun);
 			srb->state = SRB_READY;
 			DC395x_write16(acb, TRM_S1040_SCSI_CONTROL,
 				       DO_HWRESELECT);
···
 #endif
 /*polling:*/
 	/* Send CDB ..command block ......... */
-	dprintkdbg(DBG_KG, "start_scsi: (0x%p) <%02i-%i> cmnd=0x%02x tag=%i\n",
-		srb->cmd, srb->cmd->device->id, (u8)srb->cmd->device->lun,
-		srb->cmd->cmnd[0], srb->tag_number);
 	if (srb->flag & AUTO_REQSENSE) {
 		DC395x_write8(acb, TRM_S1040_SCSI_FIFO, REQUEST_SENSE);
 		DC395x_write8(acb, TRM_S1040_SCSI_FIFO, (dcb->target_lun << 5));
···
 		 * we caught an interrupt (must be reset or reselection ... )
 		 * : Let's process it first!
 		 */
-		dprintkdbg(DBG_0, "start_scsi: (0x%p) <%02i-%i> Failed - busy\n",
-			srb->cmd, dcb->target_id, dcb->target_lun);
 		srb->state = SRB_READY;
 		free_tag(dcb, srb);
 		srb->msg_count = 0;
···
 
 	/* This acknowledges the IRQ */
 	scsi_intstatus = DC395x_read8(acb, TRM_S1040_SCSI_INTSTATUS);
-	if ((scsi_status & 0x2007) == 0x2002)
-		dprintkl(KERN_DEBUG,
-			"COP after COP completed? %04x\n", scsi_status);
-	if (debug_enabled(DBG_KG)) {
-		if (scsi_intstatus & INT_SELTIMEOUT)
-			dprintkdbg(DBG_KG, "handle_interrupt: Selection timeout\n");
-	}
-	/*dprintkl(KERN_DEBUG, "handle_interrupt: intstatus = 0x%02x ", scsi_intstatus); */
 
 	if (timer_pending(&acb->selto_timer))
 		timer_delete(&acb->selto_timer);
···
 		reselect(acb);
 		goto out_unlock;
 	}
-	if (scsi_intstatus & INT_SELECT) {
-		dprintkl(KERN_INFO, "Host does not support target mode!\n");
+	if (scsi_intstatus & INT_SELECT)
 		goto out_unlock;
-	}
+
 	if (scsi_intstatus & INT_SCSIRESET) {
 		scsi_reset_detect(acb);
 		goto out_unlock;
 	}
 	if (scsi_intstatus & (INT_BUSSERVICE | INT_CMDDONE)) {
 		dcb = acb->active_dcb;
-		if (!dcb) {
-			dprintkl(KERN_DEBUG,
-				"Oops: BusService (%04x %02x) w/o ActiveDCB!\n",
-				scsi_status, scsi_intstatus);
+		if (!dcb)
 			goto out_unlock;
-		}
+
 		srb = dcb->active_srb;
-		if (dcb->flag & ABORT_DEV_) {
-			dprintkdbg(DBG_0, "MsgOut Abort Device.....\n");
+		if (dcb->flag & ABORT_DEV_)
 			enable_msgout_abort(acb, srb);
-		}
 
 		/* software sequential machine */
 		phase = (u16)srb->scsi_phase;
···
 	}
 	else if (dma_status & 0x20) {
 		/* Error from the DMA engine */
-		dprintkl(KERN_INFO, "Interrupt from DMA engine: 0x%02x!\n", dma_status);
 #if 0
-		dprintkl(KERN_INFO, "This means DMA error! Try to handle ...\n");
 		if (acb->active_dcb) {
 			acb->active_dcb-> flag |= ABORT_DEV_;
 			if (acb->active_dcb->active_srb)
···
 		}
 		DC395x_write8(acb, TRM_S1040_DMA_CONTROL, ABORTXFER | CLRXFIFO);
 #else
-		dprintkl(KERN_INFO, "Ignoring DMA error (probably a bad thing) ...\n");
 		acb = NULL;
 #endif
 		handled = IRQ_HANDLED;
···
 static void msgout_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
 		u16 *pscsi_status)
 {
-	dprintkdbg(DBG_0, "msgout_phase0: (0x%p)\n", srb->cmd);
 	if (srb->state & (SRB_UNEXPECT_RESEL + SRB_ABORT_SENT))
 		*pscsi_status = PH_BUS_FREE;	/*.. initial phase */
···
 {
 	u16 i;
 	u8 *ptr;
-	dprintkdbg(DBG_0, "msgout_phase1: (0x%p)\n", srb->cmd);
 
 	clear_fifo(acb, "msgout_phase1");
-	if (!(srb->state & SRB_MSGOUT)) {
+	if (!(srb->state & SRB_MSGOUT))
 		srb->state |= SRB_MSGOUT;
-		dprintkl(KERN_DEBUG,
-			"msgout_phase1: (0x%p) Phase unexpected\n",
-			srb->cmd);	/* So what ? */
-	}
+
 	if (!srb->msg_count) {
-		dprintkdbg(DBG_0, "msgout_phase1: (0x%p) NOP msg\n",
-			srb->cmd);
 		DC395x_write8(acb, TRM_S1040_SCSI_FIFO, NOP);
 		DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH);
 		/* it's important for atn stop */
···
 static void command_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
 		u16 *pscsi_status)
 {
-	dprintkdbg(DBG_0, "command_phase0: (0x%p)\n", srb->cmd);
 	DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH);
 }
···
 	struct DeviceCtlBlk *dcb;
 	u8 *ptr;
 	u16 i;
-	dprintkdbg(DBG_0, "command_phase1: (0x%p)\n", srb->cmd);
 
 	clear_fifo(acb, "command_phase1");
 	DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_CLRATN);
···
 
 
 /*
- * Verify that the remaining space in the hw sg lists is the same as
- * the count of remaining bytes in srb->total_xfer_length
- */
-static void sg_verify_length(struct ScsiReqBlk *srb)
-{
-	if (debug_enabled(DBG_SG)) {
-		unsigned len = 0;
-		unsigned idx = srb->sg_index;
-		struct SGentry *psge = srb->segment_x + idx;
-		for (; idx < srb->sg_count; psge++, idx++)
-			len += psge->length;
-		if (len != srb->total_xfer_length)
-			dprintkdbg(DBG_SG,
-			       "Inconsistent SRB S/G lengths (Tot=%i, Count=%i) !!\n",
-			       srb->total_xfer_length, len);
-	}
-}
-
-
-/*
  * Compute the next Scatter Gather list index and adjust its length
  * and address if necessary
  */
···
 	u32 xferred = srb->total_xfer_length - left; /* bytes transferred */
 	struct SGentry *psge = srb->segment_x + srb->sg_index;
 
-	dprintkdbg(DBG_0,
-		"sg_update_list: Transferred %i of %i bytes, %i remain\n",
-		xferred, srb->total_xfer_length, left);
 	if (xferred == 0) {
 		/* nothing to update since we did not transfer any data */
 		return;
 	}
 
-	sg_verify_length(srb);
 	srb->total_xfer_length = left; /* update remaining count */
 	for (idx = srb->sg_index; idx < srb->sg_count; idx++) {
 		if (xferred >= psge->length) {
···
 		}
 		psge++;
 	}
-	sg_verify_length(srb);
 }
···
 	struct DeviceCtlBlk *dcb = srb->dcb;
 	u16 scsi_status = *pscsi_status;
 	u32 d_left_counter = 0;
-	dprintkdbg(DBG_0, "data_out_phase0: (0x%p) <%02i-%i>\n",
-		srb->cmd, srb->cmd->device->id, (u8)srb->cmd->device->lun);
 
 	/*
 	 * KG: We need to drain the buffers before we draw any conclusions!
···
 	 * KG: Stop DMA engine pushing more data into the SCSI FIFO
 	 * If we need more data, the DMA SG list will be freshly set up, anyway
 	 */
-	dprintkdbg(DBG_PIO, "data_out_phase0: "
-		"DMA{fifocnt=0x%02x fifostat=0x%02x} "
-		"SCSI{fifocnt=0x%02x cnt=0x%06x status=0x%04x} total=0x%06x\n",
-		DC395x_read8(acb, TRM_S1040_DMA_FIFOCNT),
-		DC395x_read8(acb, TRM_S1040_DMA_FIFOSTAT),
-		DC395x_read8(acb, TRM_S1040_SCSI_FIFOCNT),
-		DC395x_read32(acb, TRM_S1040_SCSI_COUNTER), scsi_status,
-		srb->total_xfer_length);
 	DC395x_write8(acb, TRM_S1040_DMA_CONTROL, STOPDMAXFER | CLRXFIFO);
 
 	if (!(srb->state & SRB_XFERPAD)) {
···
 		if (dcb->sync_period & WIDE_SYNC)
 			d_left_counter <<= 1;
 
-		dprintkdbg(DBG_KG, "data_out_phase0: FIFO contains %i %s\n"
-			"SCSI{fifocnt=0x%02x cnt=0x%08x} "
-			"DMA{fifocnt=0x%04x cnt=0x%02x ctr=0x%08x}\n",
-			DC395x_read8(acb, TRM_S1040_SCSI_FIFOCNT),
-			(dcb->sync_period & WIDE_SYNC) ? "words" : "bytes",
-			DC395x_read8(acb, TRM_S1040_SCSI_FIFOCNT),
-			DC395x_read32(acb, TRM_S1040_SCSI_COUNTER),
-			DC395x_read8(acb, TRM_S1040_DMA_FIFOCNT),
-			DC395x_read8(acb, TRM_S1040_DMA_FIFOSTAT),
-			DC395x_read32(acb, TRM_S1040_DMA_CXCNT));
 	}
 	/*
 	 * calculate all the residue data that not yet tranfered
···
 	if (d_left_counter == 1 && dcb->sync_period & WIDE_SYNC
 	    && scsi_bufflen(srb->cmd) % 2) {
 		d_left_counter = 0;
-		dprintkl(KERN_INFO,
-			"data_out_phase0: Discard 1 byte (0x%02x)\n",
-			scsi_status);
 	}
 	/*
 	 * KG: Oops again. Same thinko as above: The SCSI might have been
···
 		    || ((oldxferred & ~PAGE_MASK) ==
 			(PAGE_SIZE - diff))
 		    ) {
-			dprintkl(KERN_INFO, "data_out_phase0: "
-				"Work around chip bug (%i)?\n", diff);
 			d_left_counter =
 			    srb->total_xfer_length - diff;
 			sg_update_list(srb, d_left_counter);
···
 		}
 	}
-	if ((*pscsi_status & PHASEMASK) != PH_DATA_OUT) {
+	if ((*pscsi_status & PHASEMASK) != PH_DATA_OUT)
 		cleanup_after_transfer(acb, srb);
-	}
 }
 
 
 static void data_out_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb,
 		u16 *pscsi_status)
 {
-	dprintkdbg(DBG_0, "data_out_phase1: (0x%p) <%02i-%i>\n",
-		srb->cmd, srb->cmd->device->id, (u8)srb->cmd->device->lun);
 	clear_fifo(acb, "data_out_phase1");
 	/* do prepare before transfer when data out phase */
 	data_io_transfer(acb, srb, XFERDATAOUT);
···
 {
 	u16 scsi_status = *pscsi_status;
 
-	dprintkdbg(DBG_0, "data_in_phase0: (0x%p) <%02i-%i>\n",
-		srb->cmd, srb->cmd->device->id, (u8)srb->cmd->device->lun);
 
 	/*
 	 * KG: DataIn is much more tricky than DataOut. When the device is finished
···
 		unsigned int sc, fc;
 
 		if (scsi_status & PARITYERROR) {
-			dprintkl(KERN_INFO, "data_in_phase0: (0x%p) "
-				"Parity Error\n", srb->cmd);
 			srb->status |= PARITY_ERROR;
 		}
 		/*
···
 		if (!(DC395x_read8(acb, TRM_S1040_DMA_FIFOSTAT) & 0x80)) {
 #if 0
 			int ctr = 6000000;
-			dprintkl(KERN_DEBUG,
-				"DIP0: Wait for DMA FIFO to flush ...\n");
 			/*DC395x_write8  (TRM_S1040_DMA_CONTROL, STOPDMAXFER); */
 			/*DC395x_write32 (TRM_S1040_SCSI_COUNTER, 7); */
 			/*DC395x_write8  (TRM_S1040_SCSI_COMMAND, SCMD_DMA_IN); */
 			while (!
 			       (DC395x_read16(acb, TRM_S1040_DMA_FIFOSTAT) &
 				0x80) && --ctr);
-			if (ctr < 6000000 - 1)
-				dprintkl(KERN_DEBUG
-					"DIP0: Had to wait for DMA ...\n");
-			if (!ctr)
-				dprintkl(KERN_ERR,
-					"Deadlock in DIP0 waiting for DMA FIFO empty!!\n");
 			/*DC395x_write32 (TRM_S1040_SCSI_COUNTER, 0); */
 #endif
-			dprintkdbg(DBG_KG, "data_in_phase0: "
-				"DMA{fifocnt=0x%02x fifostat=0x%02x}\n",
-				DC395x_read8(acb, TRM_S1040_DMA_FIFOCNT),
-				DC395x_read8(acb, TRM_S1040_DMA_FIFOSTAT));
 		}
 		/* Now: Check remainig data: The SCSI counters should tell us ... */
 		sc = DC395x_read32(acb, TRM_S1040_SCSI_COUNTER);
···
 		d_left_counter = sc + ((fc & 0x1f)
 		       << ((srb->dcb->sync_period & WIDE_SYNC) ? 1 :
 			   0));
-		dprintkdbg(DBG_KG, "data_in_phase0: "
-			"SCSI{fifocnt=0x%02x%s ctr=0x%08x} "
-			"DMA{fifocnt=0x%02x fifostat=0x%02x ctr=0x%08x} "
-			"Remain{totxfer=%i scsi_fifo+ctr=%i}\n",
-			fc,
-			(srb->dcb->sync_period & WIDE_SYNC) ?
"words" : "bytes", 1782 - sc, 1783 - fc, 1784 - DC395x_read8(acb, TRM_S1040_DMA_FIFOSTAT), 1785 - DC395x_read32(acb, TRM_S1040_DMA_CXCNT), 1786 - srb->total_xfer_length, d_left_counter); 1787 2088 #if DC395x_LASTPIO 1788 2089 /* KG: Less than or equal to 4 bytes can not be transferred via DMA, it seems. */ 1789 2090 if (d_left_counter ··· 1781 2104 1782 2105 /*u32 addr = (srb->segment_x[srb->sg_index].address); */ 1783 2106 /*sg_update_list (srb, d_left_counter); */ 1784 - dprintkdbg(DBG_PIO, "data_in_phase0: PIO (%i %s) " 1785 - "for remaining %i bytes:", 1786 - fc & 0x1f, 1787 - (srb->dcb->sync_period & WIDE_SYNC) ? 1788 - "words" : "bytes", 1789 - srb->total_xfer_length); 1790 2107 if (srb->dcb->sync_period & WIDE_SYNC) 1791 2108 DC395x_write8(acb, TRM_S1040_SCSI_CONFIG2, 1792 2109 CFG2_WIDEFIFO); ··· 1804 2133 byte = DC395x_read8(acb, TRM_S1040_SCSI_FIFO); 1805 2134 *virt++ = byte; 1806 2135 1807 - if (debug_enabled(DBG_PIO)) 1808 - printk(" %02x", byte); 1809 - 1810 2136 d_left_counter--; 1811 2137 sg_subtract_one(srb); 1812 2138 ··· 1826 2158 1827 2159 *virt++ = byte; 1828 2160 srb->total_xfer_length--; 1829 - if (debug_enabled(DBG_PIO)) 1830 - printk(" %02x", byte); 1831 2161 } 1832 2162 1833 2163 DC395x_write8(acb, TRM_S1040_SCSI_CONFIG2, 0); ··· 1834 2168 scsi_kunmap_atomic_sg(base); 1835 2169 local_irq_restore(flags); 1836 2170 } 1837 - /*printk(" %08x", *(u32*)(bus_to_virt (addr))); */ 1838 2171 /*srb->total_xfer_length = 0; */ 1839 - if (debug_enabled(DBG_PIO)) 1840 - printk("\n"); 1841 2172 } 1842 2173 #endif /* DC395x_LASTPIO */ 1843 2174 ··· 1870 2207 TempDMAstatus = 1871 2208 DC395x_read8(acb, TRM_S1040_DMA_STATUS); 1872 2209 } while (!(TempDMAstatus & DMAXFERCOMP) && --ctr); 1873 - if (!ctr) 1874 - dprintkl(KERN_ERR, 1875 - "Deadlock in DataInPhase0 waiting for DMA!!\n"); 1876 2210 srb->total_xfer_length = 0; 1877 2211 #endif 1878 2212 srb->total_xfer_length = d_left_counter; ··· 1886 2226 } 1887 2227 } 1888 2228 /* KG: The target may decide to 
disconnect: Empty FIFO before! */ 1889 - if ((*pscsi_status & PHASEMASK) != PH_DATA_IN) { 2229 + if ((*pscsi_status & PHASEMASK) != PH_DATA_IN) 1890 2230 cleanup_after_transfer(acb, srb); 1891 - } 1892 2231 } 1893 2232 1894 2233 1895 2234 static void data_in_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, 1896 2235 u16 *pscsi_status) 1897 2236 { 1898 - dprintkdbg(DBG_0, "data_in_phase1: (0x%p) <%02i-%i>\n", 1899 - srb->cmd, srb->cmd->device->id, (u8)srb->cmd->device->lun); 1900 2237 data_io_transfer(acb, srb, XFERDATAIN); 1901 2238 } 1902 2239 ··· 1903 2246 { 1904 2247 struct DeviceCtlBlk *dcb = srb->dcb; 1905 2248 u8 bval; 1906 - dprintkdbg(DBG_0, 1907 - "data_io_transfer: (0x%p) <%02i-%i> %c len=%i, sg=(%i/%i)\n", 1908 - srb->cmd, srb->cmd->device->id, (u8)srb->cmd->device->lun, 1909 - ((io_dir & DMACMD_DIR) ? 'r' : 'w'), 1910 - srb->total_xfer_length, srb->sg_index, srb->sg_count); 1911 - if (srb == acb->tmp_srb) 1912 - dprintkl(KERN_ERR, "data_io_transfer: Using tmp_srb!\n"); 2249 + 1913 2250 if (srb->sg_index >= srb->sg_count) { 1914 2251 /* can't happen? out of bounds error */ 1915 2252 return; ··· 1916 2265 * Maybe, even ABORTXFER would be appropriate 1917 2266 */ 1918 2267 if (dma_status & XFERPENDING) { 1919 - dprintkl(KERN_DEBUG, "data_io_transfer: Xfer pending! 
" 1920 - "Expect trouble!\n"); 1921 - dump_register_info(acb, dcb, srb); 1922 2268 DC395x_write8(acb, TRM_S1040_DMA_CONTROL, CLRXFIFO); 1923 2269 } 1924 2270 /* clear_fifo(acb, "IO"); */ ··· 1994 2346 left_io -= len; 1995 2347 1996 2348 while (len--) { 1997 - if (debug_enabled(DBG_PIO)) 1998 - printk(" %02x", *virt); 1999 - 2000 2349 DC395x_write8(acb, TRM_S1040_SCSI_FIFO, *virt++); 2001 2350 2002 2351 sg_subtract_one(srb); ··· 2005 2360 if (srb->dcb->sync_period & WIDE_SYNC) { 2006 2361 if (ln % 2) { 2007 2362 DC395x_write8(acb, TRM_S1040_SCSI_FIFO, 0); 2008 - if (debug_enabled(DBG_PIO)) 2009 - printk(" |00"); 2010 2363 } 2011 2364 DC395x_write8(acb, TRM_S1040_SCSI_CONFIG2, 0); 2012 2365 } 2013 2366 /*DC395x_write32(acb, TRM_S1040_SCSI_COUNTER, ln); */ 2014 - if (debug_enabled(DBG_PIO)) 2015 - printk("\n"); 2016 2367 DC395x_write8(acb, TRM_S1040_SCSI_COMMAND, 2017 2368 SCMD_FIFO_OUT); 2018 2369 } ··· 2060 2419 static void status_phase0(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, 2061 2420 u16 *pscsi_status) 2062 2421 { 2063 - dprintkdbg(DBG_0, "status_phase0: (0x%p) <%02i-%i>\n", 2064 - srb->cmd, srb->cmd->device->id, (u8)srb->cmd->device->lun); 2065 2422 srb->target_status = DC395x_read8(acb, TRM_S1040_SCSI_FIFO); 2066 2423 srb->end_message = DC395x_read8(acb, TRM_S1040_SCSI_FIFO); /* get message */ 2067 2424 srb->state = SRB_COMPLETED; ··· 2072 2433 static void status_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, 2073 2434 u16 *pscsi_status) 2074 2435 { 2075 - dprintkdbg(DBG_0, "status_phase1: (0x%p) <%02i-%i>\n", 2076 - srb->cmd, srb->cmd->device->id, (u8)srb->cmd->device->lun); 2077 2436 srb->state = SRB_STATUS; 2078 2437 DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH); /* it's important for atn stop */ 2079 2438 DC395x_write8(acb, TRM_S1040_SCSI_COMMAND, SCMD_COMP); ··· 2101 2464 DC395x_ENABLE_MSGOUT; 2102 2465 srb->state &= ~SRB_MSGIN; 2103 2466 srb->state |= SRB_MSGOUT; 2104 - dprintkl(KERN_INFO, "msgin_reject: 0x%02x 
<%02i-%i>\n", 2105 - srb->msgin_buf[0], 2106 - srb->dcb->target_id, srb->dcb->target_lun); 2107 2467 } 2108 2468 2109 2469 ··· 2109 2475 { 2110 2476 struct ScsiReqBlk *srb = NULL; 2111 2477 struct ScsiReqBlk *i; 2112 - dprintkdbg(DBG_0, "msgin_qtag: (0x%p) tag=%i srb=%p\n", 2113 - srb->cmd, tag, srb); 2114 - 2115 - if (!(dcb->tag_mask & (1 << tag))) 2116 - dprintkl(KERN_DEBUG, 2117 - "msgin_qtag: tag_mask=0x%08x does not reserve tag %i!\n", 2118 - dcb->tag_mask, tag); 2119 2478 2120 2479 if (list_empty(&dcb->srb_going_list)) 2121 2480 goto mingx0; ··· 2121 2494 if (!srb) 2122 2495 goto mingx0; 2123 2496 2124 - dprintkdbg(DBG_0, "msgin_qtag: (0x%p) <%02i-%i>\n", 2125 - srb->cmd, srb->dcb->target_id, srb->dcb->target_lun); 2126 2497 if (dcb->flag & ABORT_DEV_) { 2127 2498 /*srb->state = SRB_ABORT_SENT; */ 2128 2499 enable_msgout_abort(acb, srb); ··· 2143 2518 srb->msgout_buf[0] = ABORT_TASK; 2144 2519 srb->msg_count = 1; 2145 2520 DC395x_ENABLE_MSGOUT; 2146 - dprintkl(KERN_DEBUG, "msgin_qtag: Unknown tag %i - abort\n", tag); 2147 2521 return srb; 2148 2522 } 2149 2523 ··· 2161 2537 static void msgin_set_async(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb) 2162 2538 { 2163 2539 struct DeviceCtlBlk *dcb = srb->dcb; 2164 - dprintkl(KERN_DEBUG, "msgin_set_async: No sync transfers <%02i-%i>\n", 2165 - dcb->target_id, dcb->target_lun); 2166 2540 2167 2541 dcb->sync_mode &= ~(SYNC_NEGO_ENABLE); 2168 2542 dcb->sync_mode |= SYNC_NEGO_DONE; ··· 2173 2551 && !(dcb->sync_mode & WIDE_NEGO_DONE)) { 2174 2552 build_wdtr(acb, dcb, srb); 2175 2553 DC395x_ENABLE_MSGOUT; 2176 - dprintkdbg(DBG_0, "msgin_set_async(rej): Try WDTR anyway\n"); 2177 2554 } 2178 2555 } 2179 2556 ··· 2183 2562 struct DeviceCtlBlk *dcb = srb->dcb; 2184 2563 u8 bval; 2185 2564 int fact; 2186 - dprintkdbg(DBG_1, "msgin_set_sync: <%02i> Sync: %ins " 2187 - "(%02i.%01i MHz) Offset %i\n", 2188 - dcb->target_id, srb->msgin_buf[3] << 2, 2189 - (250 / srb->msgin_buf[3]), 2190 - ((250 % srb->msgin_buf[3]) * 10) / 
srb->msgin_buf[3], 2191 - srb->msgin_buf[4]); 2192 2565 2193 2566 if (srb->msgin_buf[4] > 15) 2194 2567 srb->msgin_buf[4] = 15; ··· 2199 2584 || dcb->min_nego_period > 2200 2585 clock_period[bval])) 2201 2586 bval++; 2202 - if (srb->msgin_buf[3] < clock_period[bval]) 2203 - dprintkl(KERN_INFO, 2204 - "msgin_set_sync: Increase sync nego period to %ins\n", 2205 - clock_period[bval] << 2); 2587 + 2206 2588 srb->msgin_buf[3] = clock_period[bval]; 2207 2589 dcb->sync_period &= 0xf0; 2208 2590 dcb->sync_period |= ALT_SYNC | bval; ··· 2210 2598 else 2211 2599 fact = 250; 2212 2600 2213 - dprintkl(KERN_INFO, 2214 - "Target %02i: %s Sync: %ins Offset %i (%02i.%01i MB/s)\n", 2215 - dcb->target_id, (fact == 500) ? "Wide16" : "", 2216 - dcb->min_nego_period << 2, dcb->sync_offset, 2217 - (fact / dcb->min_nego_period), 2218 - ((fact % dcb->min_nego_period) * 10 + 2219 - dcb->min_nego_period / 2) / dcb->min_nego_period); 2220 - 2221 2601 if (!(srb->state & SRB_DO_SYNC_NEGO)) { 2222 2602 /* Reply with corrected SDTR Message */ 2223 - dprintkl(KERN_DEBUG, "msgin_set_sync: answer w/%ins %i\n", 2224 - srb->msgin_buf[3] << 2, srb->msgin_buf[4]); 2225 2603 2226 2604 memcpy(srb->msgout_buf, srb->msgin_buf, 5); 2227 2605 srb->msg_count = 5; ··· 2222 2620 && !(dcb->sync_mode & WIDE_NEGO_DONE)) { 2223 2621 build_wdtr(acb, dcb, srb); 2224 2622 DC395x_ENABLE_MSGOUT; 2225 - dprintkdbg(DBG_0, "msgin_set_sync: Also try WDTR\n"); 2226 2623 } 2227 2624 } 2228 2625 srb->state &= ~SRB_DO_SYNC_NEGO; ··· 2235 2634 struct ScsiReqBlk *srb) 2236 2635 { 2237 2636 struct DeviceCtlBlk *dcb = srb->dcb; 2238 - dprintkdbg(DBG_1, "msgin_set_nowide: <%02i>\n", dcb->target_id); 2239 2637 2240 2638 dcb->sync_period &= ~WIDE_SYNC; 2241 2639 dcb->sync_mode &= ~(WIDE_NEGO_ENABLE); ··· 2245 2645 && !(dcb->sync_mode & SYNC_NEGO_DONE)) { 2246 2646 build_sdtr(acb, dcb, srb); 2247 2647 DC395x_ENABLE_MSGOUT; 2248 - dprintkdbg(DBG_0, "msgin_set_nowide: Rejected. 
Try SDTR anyway\n"); 2249 2648 } 2250 2649 } 2251 2650 ··· 2253 2654 struct DeviceCtlBlk *dcb = srb->dcb; 2254 2655 u8 wide = (dcb->dev_mode & NTC_DO_WIDE_NEGO 2255 2656 && acb->config & HCC_WIDE_CARD) ? 1 : 0; 2256 - dprintkdbg(DBG_1, "msgin_set_wide: <%02i>\n", dcb->target_id); 2257 2657 2258 2658 if (srb->msgin_buf[3] > wide) 2259 2659 srb->msgin_buf[3] = wide; 2260 2660 /* Completed */ 2261 2661 if (!(srb->state & SRB_DO_WIDE_NEGO)) { 2262 - dprintkl(KERN_DEBUG, 2263 - "msgin_set_wide: Wide nego initiated <%02i>\n", 2264 - dcb->target_id); 2265 2662 memcpy(srb->msgout_buf, srb->msgin_buf, 4); 2266 2663 srb->msg_count = 4; 2267 2664 srb->state |= SRB_DO_WIDE_NEGO; ··· 2271 2676 dcb->sync_period &= ~WIDE_SYNC; 2272 2677 srb->state &= ~SRB_DO_WIDE_NEGO; 2273 2678 /*dcb->sync_mode &= ~(WIDE_NEGO_ENABLE+WIDE_NEGO_DONE); */ 2274 - dprintkdbg(DBG_1, 2275 - "msgin_set_wide: Wide (%i bit) negotiated <%02i>\n", 2276 - (8 << srb->msgin_buf[3]), dcb->target_id); 2277 2679 reprogram_regs(acb, dcb); 2278 2680 if ((dcb->sync_mode & SYNC_NEGO_ENABLE) 2279 2681 && !(dcb->sync_mode & SYNC_NEGO_DONE)) { 2280 2682 build_sdtr(acb, dcb, srb); 2281 2683 DC395x_ENABLE_MSGOUT; 2282 - dprintkdbg(DBG_0, "msgin_set_wide: Also try SDTR.\n"); 2283 2684 } 2284 2685 } 2285 2686 ··· 2296 2705 u16 *pscsi_status) 2297 2706 { 2298 2707 struct DeviceCtlBlk *dcb = acb->active_dcb; 2299 - dprintkdbg(DBG_0, "msgin_phase0: (0x%p)\n", srb->cmd); 2300 2708 2301 2709 srb->msgin_buf[acb->msg_len++] = DC395x_read8(acb, TRM_S1040_SCSI_FIFO); 2302 2710 if (msgin_completed(srb->msgin_buf, acb->msg_len)) { ··· 2349 2759 2350 2760 case IGNORE_WIDE_RESIDUE: 2351 2761 /* Discard wide residual */ 2352 - dprintkdbg(DBG_0, "msgin_phase0: Ignore Wide Residual!\n"); 2353 2762 break; 2354 2763 2355 2764 case COMMAND_COMPLETE: ··· 2360 2771 * SAVE POINTER may be ignored as we have the struct 2361 2772 * ScsiReqBlk* associated with the scsi command. 
2362 2773 */ 2363 - dprintkdbg(DBG_0, "msgin_phase0: (0x%p) " 2364 - "SAVE POINTER rem=%i Ignore\n", 2365 - srb->cmd, srb->total_xfer_length); 2366 2774 break; 2367 2775 2368 2776 case RESTORE_POINTERS: 2369 - dprintkdbg(DBG_0, "msgin_phase0: RESTORE POINTER. Ignore\n"); 2370 2777 break; 2371 2778 2372 2779 case ABORT: 2373 - dprintkdbg(DBG_0, "msgin_phase0: (0x%p) " 2374 - "<%02i-%i> ABORT msg\n", 2375 - srb->cmd, dcb->target_id, 2376 - dcb->target_lun); 2377 2780 dcb->flag |= ABORT_DEV_; 2378 2781 enable_msgout_abort(acb, srb); 2379 2782 break; ··· 2373 2792 default: 2374 2793 /* reject unknown messages */ 2375 2794 if (srb->msgin_buf[0] & IDENTIFY_BASE) { 2376 - dprintkdbg(DBG_0, "msgin_phase0: Identify msg\n"); 2377 2795 srb->msg_count = 1; 2378 2796 srb->msgout_buf[0] = dcb->identify_msg; 2379 2797 DC395x_ENABLE_MSGOUT; ··· 2395 2815 static void msgin_phase1(struct AdapterCtlBlk *acb, struct ScsiReqBlk *srb, 2396 2816 u16 *pscsi_status) 2397 2817 { 2398 - dprintkdbg(DBG_0, "msgin_phase1: (0x%p)\n", srb->cmd); 2399 2818 clear_fifo(acb, "msgin_phase1"); 2400 2819 DC395x_write32(acb, TRM_S1040_SCSI_COUNTER, 1); 2401 2820 if (!(srb->state & SRB_MSGIN)) { ··· 2448 2869 struct ScsiReqBlk *srb; 2449 2870 2450 2871 if (!dcb) { 2451 - dprintkl(KERN_ERR, "disconnect: No such device\n"); 2452 2872 udelay(500); 2453 2873 /* Suspend queue for a while */ 2454 2874 acb->last_reset = ··· 2459 2881 } 2460 2882 srb = dcb->active_srb; 2461 2883 acb->active_dcb = NULL; 2462 - dprintkdbg(DBG_0, "disconnect: (0x%p)\n", srb->cmd); 2463 2884 2464 2885 srb->scsi_phase = PH_BUS_FREE; /* initial phase */ 2465 2886 clear_fifo(acb, "disconnect"); 2466 2887 DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_HWRESELECT); 2467 2888 if (srb->state & SRB_UNEXPECT_RESEL) { 2468 - dprintkl(KERN_ERR, 2469 - "disconnect: Unexpected reselection <%02i-%i>\n", 2470 - dcb->target_id, dcb->target_lun); 2471 2889 srb->state = 0; 2472 2890 waiting_process_next(acb); 2473 2891 } else if (srb->state & 
SRB_ABORT_SENT) { 2474 2892 dcb->flag &= ~ABORT_DEV_; 2475 2893 acb->last_reset = jiffies + HZ / 2 + 1; 2476 - dprintkl(KERN_ERR, "disconnect: SRB_ABORT_SENT\n"); 2477 2894 doing_srb_done(acb, DID_ABORT, srb->cmd, 1); 2478 2895 waiting_process_next(acb); 2479 2896 } else { ··· 2483 2910 if (srb->state != SRB_START_ 2484 2911 && srb->state != SRB_MSGOUT) { 2485 2912 srb->state = SRB_READY; 2486 - dprintkl(KERN_DEBUG, 2487 - "disconnect: (0x%p) Unexpected\n", 2488 - srb->cmd); 2489 2913 srb->target_status = SCSI_STAT_SEL_TIMEOUT; 2490 2914 goto disc1; 2491 2915 } else { 2492 2916 /* Normal selection timeout */ 2493 - dprintkdbg(DBG_KG, "disconnect: (0x%p) " 2494 - "<%02i-%i> SelTO\n", srb->cmd, 2495 - dcb->target_id, dcb->target_lun); 2496 2917 if (srb->retry_count++ > DC395x_MAX_RETRIES 2497 2918 || acb->scan_devices) { 2498 2919 srb->target_status = ··· 2495 2928 } 2496 2929 free_tag(dcb, srb); 2497 2930 list_move(&srb->list, &dcb->srb_waiting_list); 2498 - dprintkdbg(DBG_KG, 2499 - "disconnect: (0x%p) Retry\n", 2500 - srb->cmd); 2501 2931 waiting_set_timer(acb, HZ / 20); 2502 2932 } 2503 2933 } else if (srb->state & SRB_DISCONNECT) { ··· 2503 2939 * SRB_DISCONNECT (This is what we expect!) 2504 2940 */ 2505 2941 if (bval & 0x40) { 2506 - dprintkdbg(DBG_0, "disconnect: SCSI bus stat " 2507 - " 0x%02x: ACK set! Other controllers?\n", 2508 - bval); 2509 2942 /* It could come from another initiator, therefore don't do much ! 
*/ 2510 2943 } else 2511 2944 waiting_process_next(acb); ··· 2526 2965 struct ScsiReqBlk *srb = NULL; 2527 2966 u16 rsel_tar_lun_id; 2528 2967 u8 id, lun; 2529 - dprintkdbg(DBG_0, "reselect: acb=%p\n", acb); 2530 2968 2531 2969 clear_fifo(acb, "reselect"); 2532 2970 /*DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_HWRESELECT | DO_DATALATCH); */ ··· 2534 2974 if (dcb) { /* Arbitration lost but Reselection win */ 2535 2975 srb = dcb->active_srb; 2536 2976 if (!srb) { 2537 - dprintkl(KERN_DEBUG, "reselect: Arb lost Resel won, " 2538 - "but active_srb == NULL\n"); 2539 2977 DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH); /* it's important for atn stop */ 2540 2978 return; 2541 2979 } 2542 2980 /* Why the if ? */ 2543 2981 if (!acb->scan_devices) { 2544 - dprintkdbg(DBG_KG, "reselect: (0x%p) <%02i-%i> " 2545 - "Arb lost but Resel win rsel=%i stat=0x%04x\n", 2546 - srb->cmd, dcb->target_id, 2547 - dcb->target_lun, rsel_tar_lun_id, 2548 - DC395x_read16(acb, TRM_S1040_SCSI_STATUS)); 2549 2982 /*srb->state |= SRB_DISCONNECT; */ 2550 2983 2551 2984 srb->state = SRB_READY; ··· 2550 2997 } 2551 2998 } 2552 2999 /* Read Reselected Target Id and LUN */ 2553 - if (!(rsel_tar_lun_id & (IDENTIFY_BASE << 8))) 2554 - dprintkl(KERN_DEBUG, "reselect: Expects identify msg. " 2555 - "Got %i!\n", rsel_tar_lun_id); 2556 3000 id = rsel_tar_lun_id & 0xff; 2557 3001 lun = (rsel_tar_lun_id >> 8) & 7; 2558 3002 dcb = find_dcb(acb, id, lun); 2559 3003 if (!dcb) { 2560 - dprintkl(KERN_ERR, "reselect: From non existent device " 2561 - "<%02i-%i>\n", id, lun); 2562 3004 DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_DATALATCH); /* it's important for atn stop */ 2563 3005 return; 2564 3006 } 2565 3007 acb->active_dcb = dcb; 2566 - 2567 - if (!(dcb->dev_mode & NTC_DO_DISCONNECT)) 2568 - dprintkl(KERN_DEBUG, "reselect: in spite of forbidden " 2569 - "disconnection? 
<%02i-%i>\n", 2570 - dcb->target_id, dcb->target_lun); 2571 3008 2572 3009 if (dcb->sync_mode & EN_TAG_QUEUEING) { 2573 3010 srb = acb->tmp_srb; ··· 2569 3026 /* 2570 3027 * abort command 2571 3028 */ 2572 - dprintkl(KERN_DEBUG, 2573 - "reselect: w/o disconnected cmds <%02i-%i>\n", 2574 - dcb->target_id, dcb->target_lun); 2575 3029 srb = acb->tmp_srb; 2576 3030 srb->state = SRB_UNEXPECT_RESEL; 2577 3031 dcb->active_srb = srb; ··· 2585 3045 srb->scsi_phase = PH_BUS_FREE; /* initial phase */ 2586 3046 2587 3047 /* Program HA ID, target ID, period and offset */ 2588 - dprintkdbg(DBG_0, "reselect: select <%i>\n", dcb->target_id); 2589 3048 DC395x_write8(acb, TRM_S1040_SCSI_HOSTID, acb->scsi_host->this_id); /* host ID */ 2590 3049 DC395x_write8(acb, TRM_S1040_SCSI_TARGETID, dcb->target_id); /* target ID */ 2591 3050 DC395x_write8(acb, TRM_S1040_SCSI_OFFSET, dcb->sync_offset); /* offset */ ··· 2650 3111 2651 3112 if (scsi_sg_count(cmd) && dir != DMA_NONE) { 2652 3113 /* unmap DC395x SG list */ 2653 - dprintkdbg(DBG_SG, "pci_unmap_srb: list=%08x(%05x)\n", 2654 - srb->sg_bus_addr, SEGMENTX_LEN); 2655 3114 dma_unmap_single(&acb->dev->dev, srb->sg_bus_addr, SEGMENTX_LEN, 2656 3115 DMA_TO_DEVICE); 2657 - dprintkdbg(DBG_SG, "pci_unmap_srb: segs=%i buffer=%p\n", 2658 - scsi_sg_count(cmd), scsi_bufflen(cmd)); 2659 3116 /* unmap the sg segments */ 2660 3117 scsi_dma_unmap(cmd); 2661 3118 } ··· 2665 3130 if (!(srb->flag & AUTO_REQSENSE)) 2666 3131 return; 2667 3132 /* Unmap sense buffer */ 2668 - dprintkdbg(DBG_SG, "pci_unmap_srb_sense: buffer=%08x\n", 2669 - srb->segment_x[0].address); 2670 3133 dma_unmap_single(&acb->dev->dev, srb->segment_x[0].address, 2671 3134 srb->segment_x[0].length, DMA_FROM_DEVICE); 2672 3135 /* Restore SG stuff */ ··· 2688 3155 enum dma_data_direction dir = cmd->sc_data_direction; 2689 3156 int ckc_only = 1; 2690 3157 2691 - dprintkdbg(DBG_1, "srb_done: (0x%p) <%02i-%i>\n", srb->cmd, 2692 - srb->cmd->device->id, (u8)srb->cmd->device->lun); 2693 - 
dprintkdbg(DBG_SG, "srb_done: srb=%p sg=%i(%i/%i) buf=%p\n", 2694 - srb, scsi_sg_count(cmd), srb->sg_index, srb->sg_count, 2695 - scsi_sgtalbe(cmd)); 2696 3158 status = srb->target_status; 2697 3159 set_host_byte(cmd, DID_OK); 2698 3160 set_status_byte(cmd, SAM_STAT_GOOD); 2699 3161 if (srb->flag & AUTO_REQSENSE) { 2700 - dprintkdbg(DBG_0, "srb_done: AUTO_REQSENSE1\n"); 2701 3162 pci_unmap_srb_sense(acb, srb); 2702 3163 /* 2703 3164 ** target status.......................... ··· 2699 3172 srb->flag &= ~AUTO_REQSENSE; 2700 3173 srb->adapter_status = 0; 2701 3174 srb->target_status = SAM_STAT_CHECK_CONDITION; 2702 - if (debug_enabled(DBG_1)) { 2703 - switch (cmd->sense_buffer[2] & 0x0f) { 2704 - case NOT_READY: 2705 - dprintkl(KERN_DEBUG, 2706 - "ReqSense: NOT_READY cmnd=0x%02x <%02i-%i> stat=%i scan=%i ", 2707 - cmd->cmnd[0], dcb->target_id, 2708 - dcb->target_lun, status, acb->scan_devices); 2709 - break; 2710 - case UNIT_ATTENTION: 2711 - dprintkl(KERN_DEBUG, 2712 - "ReqSense: UNIT_ATTENTION cmnd=0x%02x <%02i-%i> stat=%i scan=%i ", 2713 - cmd->cmnd[0], dcb->target_id, 2714 - dcb->target_lun, status, acb->scan_devices); 2715 - break; 2716 - case ILLEGAL_REQUEST: 2717 - dprintkl(KERN_DEBUG, 2718 - "ReqSense: ILLEGAL_REQUEST cmnd=0x%02x <%02i-%i> stat=%i scan=%i ", 2719 - cmd->cmnd[0], dcb->target_id, 2720 - dcb->target_lun, status, acb->scan_devices); 2721 - break; 2722 - case MEDIUM_ERROR: 2723 - dprintkl(KERN_DEBUG, 2724 - "ReqSense: MEDIUM_ERROR cmnd=0x%02x <%02i-%i> stat=%i scan=%i ", 2725 - cmd->cmnd[0], dcb->target_id, 2726 - dcb->target_lun, status, acb->scan_devices); 2727 - break; 2728 - case HARDWARE_ERROR: 2729 - dprintkl(KERN_DEBUG, 2730 - "ReqSense: HARDWARE_ERROR cmnd=0x%02x <%02i-%i> stat=%i scan=%i ", 2731 - cmd->cmnd[0], dcb->target_id, 2732 - dcb->target_lun, status, acb->scan_devices); 2733 - break; 2734 - } 2735 - if (cmd->sense_buffer[7] >= 6) 2736 - printk("sense=0x%02x ASC=0x%02x ASCQ=0x%02x " 2737 - "(0x%08x 0x%08x)\n", 2738 - 
cmd->sense_buffer[2], cmd->sense_buffer[12], 2739 - cmd->sense_buffer[13], 2740 - *((unsigned int *)(cmd->sense_buffer + 3)), 2741 - *((unsigned int *)(cmd->sense_buffer + 8))); 2742 - else 2743 - printk("sense=0x%02x No ASC/ASCQ (0x%08x)\n", 2744 - cmd->sense_buffer[2], 2745 - *((unsigned int *)(cmd->sense_buffer + 3))); 2746 - } 2747 3175 2748 3176 if (status == SAM_STAT_CHECK_CONDITION) { 2749 3177 set_host_byte(cmd, DID_BAD_TARGET); 2750 3178 goto ckc_e; 2751 3179 } 2752 - dprintkdbg(DBG_0, "srb_done: AUTO_REQSENSE2\n"); 2753 3180 2754 3181 set_status_byte(cmd, SAM_STAT_CHECK_CONDITION); 2755 3182 ··· 2720 3239 return; 2721 3240 } else if (status == SAM_STAT_TASK_SET_FULL) { 2722 3241 tempcnt = (u8)list_size(&dcb->srb_going_list); 2723 - dprintkl(KERN_INFO, "QUEUE_FULL for dev <%02i-%i> with %i cmnds\n", 2724 - dcb->target_id, dcb->target_lun, tempcnt); 2725 3242 if (tempcnt > 1) 2726 3243 tempcnt--; 2727 3244 dcb->max_command = tempcnt; ··· 2793 3314 2794 3315 /* Here is the info for Doug Gilbert's sg3 ... */ 2795 3316 scsi_set_resid(cmd, srb->total_xfer_length); 2796 - if (debug_enabled(DBG_KG)) { 2797 - if (srb->total_xfer_length) 2798 - dprintkdbg(DBG_KG, "srb_done: (0x%p) <%02i-%i> " 2799 - "cmnd=0x%02x Missed %i bytes\n", 2800 - cmd, cmd->device->id, (u8)cmd->device->lun, 2801 - cmd->cmnd[0], srb->total_xfer_length); 2802 - } 2803 3317 2804 3318 if (srb != acb->tmp_srb) { 2805 3319 /* Add to free list */ 2806 - dprintkdbg(DBG_0, "srb_done: (0x%p) done result=0x%08x\n", 2807 - cmd, cmd->result); 2808 3320 list_move_tail(&srb->list, &acb->srb_free_list); 2809 - } else { 2810 - dprintkl(KERN_ERR, "srb_done: ERROR! 
Completed cmd with tmp_srb\n"); 2811 3321 } 2812 3322 2813 3323 scsi_done(cmd); ··· 2809 3341 struct scsi_cmnd *cmd, u8 force) 2810 3342 { 2811 3343 struct DeviceCtlBlk *dcb; 2812 - dprintkl(KERN_INFO, "doing_srb_done: pids "); 2813 3344 2814 3345 list_for_each_entry(dcb, &acb->dcb_list, list) { 2815 3346 struct ScsiReqBlk *srb; ··· 2832 3365 scsi_done(p); 2833 3366 } 2834 3367 } 2835 - if (!list_empty(&dcb->srb_going_list)) 2836 - dprintkl(KERN_DEBUG, 2837 - "How could the ML send cmnds to the Going queue? <%02i-%i>\n", 2838 - dcb->target_id, dcb->target_lun); 2839 - if (dcb->tag_mask) 2840 - dprintkl(KERN_DEBUG, 2841 - "tag_mask for <%02i-%i> should be empty, is %08x!\n", 2842 - dcb->target_id, dcb->target_lun, 2843 - dcb->tag_mask); 2844 3368 2845 3369 /* Waiting queue */ 2846 3370 list_for_each_entry_safe(srb, tmp, &dcb->srb_waiting_list, list) { ··· 2850 3392 scsi_done(cmd); 2851 3393 } 2852 3394 } 2853 - if (!list_empty(&dcb->srb_waiting_list)) 2854 - dprintkl(KERN_DEBUG, "ML queued %i cmnds again to <%02i-%i>\n", 2855 - list_size(&dcb->srb_waiting_list), dcb->target_id, 2856 - dcb->target_lun); 2857 3395 dcb->flag &= ~ABORT_DEV_; 2858 3396 } 2859 - printk("\n"); 2860 3397 } 2861 3398 2862 3399 2863 3400 static void reset_scsi_bus(struct AdapterCtlBlk *acb) 2864 3401 { 2865 - dprintkdbg(DBG_0, "reset_scsi_bus: acb=%p\n", acb); 2866 3402 acb->acb_flag |= RESET_DEV; /* RESET_DETECT, RESET_DONE, RESET_DEV */ 2867 3403 DC395x_write16(acb, TRM_S1040_SCSI_CONTROL, DO_RSTSCSI); 2868 3404 ··· 2903 3451 2904 3452 static void scsi_reset_detect(struct AdapterCtlBlk *acb) 2905 3453 { 2906 - dprintkl(KERN_INFO, "scsi_reset_detect: acb=%p\n", acb); 2907 3454 /* delay half a second */ 2908 3455 if (timer_pending(&acb->waiting_timer)) 2909 3456 timer_delete(&acb->waiting_timer); ··· 2939 3488 struct ScsiReqBlk *srb) 2940 3489 { 2941 3490 struct scsi_cmnd *cmd = srb->cmd; 2942 - dprintkdbg(DBG_1, "request_sense: (0x%p) <%02i-%i>\n", 2943 - cmd, cmd->device->id, 
(u8)cmd->device->lun); 2944 3491 2945 3492 srb->flag |= AUTO_REQSENSE; 2946 3493 srb->adapter_status = 0; ··· 2960 3511 srb->segment_x[0].address = dma_map_single(&acb->dev->dev, 2961 3512 cmd->sense_buffer, SCSI_SENSE_BUFFERSIZE, 2962 3513 DMA_FROM_DEVICE); 2963 - dprintkdbg(DBG_SG, "request_sense: map buffer %p->%08x(%05x)\n", 2964 - cmd->sense_buffer, srb->segment_x[0].address, 2965 - SCSI_SENSE_BUFFERSIZE); 2966 3514 srb->sg_count = 1; 2967 3515 srb->sg_index = 0; 2968 3516 2969 3517 if (start_scsi(acb, dcb, srb)) { /* Should only happen, if sb. else grabs the bus */ 2970 - dprintkl(KERN_DEBUG, 2971 - "request_sense: (0x%p) failed <%02i-%i>\n", 2972 - srb->cmd, dcb->target_id, dcb->target_lun); 2973 3518 list_move(&srb->list, &dcb->srb_waiting_list); 2974 3519 waiting_set_timer(acb, HZ / 100); 2975 3520 } ··· 2991 3548 struct DeviceCtlBlk *dcb; 2992 3549 2993 3550 dcb = kmalloc(sizeof(struct DeviceCtlBlk), GFP_ATOMIC); 2994 - dprintkdbg(DBG_0, "device_alloc: <%02i-%i>\n", target, lun); 2995 3551 if (!dcb) 2996 3552 return NULL; 2997 3553 dcb->acb = NULL; ··· 3040 3598 return NULL; 3041 3599 } 3042 3600 3043 - dprintkdbg(DBG_1, 3044 - "device_alloc: <%02i-%i> copy from <%02i-%i>\n", 3045 - dcb->target_id, dcb->target_lun, 3046 - p->target_id, p->target_lun); 3047 3601 dcb->sync_mode = p->sync_mode; 3048 3602 dcb->sync_period = p->sync_period; 3049 3603 dcb->min_nego_period = p->min_nego_period; ··· 3089 3651 { 3090 3652 struct DeviceCtlBlk *i; 3091 3653 struct DeviceCtlBlk *tmp; 3092 - dprintkdbg(DBG_0, "adapter_remove_device: <%02i-%i>\n", 3093 - dcb->target_id, dcb->target_lun); 3094 3654 3095 3655 /* fix up any pointers to this device that we have in the adapter */ 3096 3656 if (acb->active_dcb == dcb) ··· 3121 3685 struct DeviceCtlBlk *dcb) 3122 3686 { 3123 3687 if (list_size(&dcb->srb_going_list) > 1) { 3124 - dprintkdbg(DBG_1, "adapter_remove_and_free_device: <%02i-%i> " 3125 - "Won't remove because of %i active requests.\n", 3126 - dcb->target_id, 
dcb->target_lun, 3127 - list_size(&dcb->srb_going_list)); 3128 3688 return; 3129 3689 } 3130 3690 adapter_remove_device(acb, dcb); ··· 3138 3706 { 3139 3707 struct DeviceCtlBlk *dcb; 3140 3708 struct DeviceCtlBlk *tmp; 3141 - dprintkdbg(DBG_1, "adapter_remove_and_free_all_devices: num=%i\n", 3142 - list_size(&acb->dcb_list)); 3143 3709 3144 3710 list_for_each_entry_safe(dcb, tmp, &acb->dcb_list, list) 3145 3711 adapter_remove_and_free_device(acb, dcb); ··· 3432 4002 * Checksum is wrong. 3433 4003 * Load a set of defaults into the eeprom buffer 3434 4004 */ 3435 - dprintkl(KERN_WARNING, 3436 - "EEProm checksum error: using default values and options.\n"); 3437 4005 eeprom->sub_vendor_id[0] = (u8)PCI_VENDOR_ID_TEKRAM; 3438 4006 eeprom->sub_vendor_id[1] = (u8)(PCI_VENDOR_ID_TEKRAM >> 8); 3439 4007 eeprom->sub_sys_id[0] = (u8)PCI_DEVICE_ID_TEKRAM_TRMS1040; ··· 3483 4055 **/ 3484 4056 static void print_eeprom_settings(struct NvRamType *eeprom) 3485 4057 { 3486 - dprintkl(KERN_INFO, "Used settings: AdapterID=%02i, Speed=%i(%02i.%01iMHz), dev_mode=0x%02x\n", 3487 - eeprom->scsi_id, 3488 - eeprom->target[0].period, 3489 - clock_speed[eeprom->target[0].period] / 10, 3490 - clock_speed[eeprom->target[0].period] % 10, 3491 - eeprom->target[0].cfg0); 3492 - dprintkl(KERN_INFO, " AdaptMode=0x%02x, Tags=%i(%02i), DelayReset=%is\n", 3493 - eeprom->channel_cfg, eeprom->max_tag, 3494 - 1 << eeprom->max_tag, eeprom->delay_time); 3495 4058 } 3496 4059 3497 4060 ··· 3513 4094 for (i = 0; i < DC395x_MAX_SRB_CNT; i++) 3514 4095 acb->srb_array[i].segment_x = NULL; 3515 4096 3516 - dprintkdbg(DBG_1, "Allocate %i pages for SG tables\n", pages); 3517 4097 while (pages--) { 3518 4098 ptr = kmalloc(PAGE_SIZE, GFP_KERNEL); 3519 4099 if (!ptr) { 3520 4100 adapter_sg_tables_free(acb); 3521 4101 return 1; 3522 4102 } 3523 - dprintkdbg(DBG_1, "Allocate %li bytes at %p for SG segments %i\n", 3524 - PAGE_SIZE, ptr, srb_idx); 3525 4103 i = 0; 3526 4104 while (i < srbs_per_page && srb_idx < 
DC395x_MAX_SRB_CNT) 3527 4105 acb->srb_array[srb_idx++].segment_x = ··· 3527 4111 if (i < srbs_per_page) 3528 4112 acb->srb.segment_x = 3529 4113 ptr + (i * DC395x_MAX_SG_LISTENTRY); 3530 - else 3531 - dprintkl(KERN_DEBUG, "No space for tmsrb SG table reserved?!\n"); 3532 4114 return 0; 3533 4115 } 3534 4116 ··· 3546 4132 u8 bval; 3547 4133 3548 4134 bval = DC395x_read8(acb, TRM_S1040_GEN_STATUS); 3549 - dprintkl(KERN_INFO, "%sConnectors: ", 3550 - ((bval & WIDESCSI) ? "(Wide) " : "")); 3551 4135 if (!(bval & CON5068)) 3552 4136 printk("ext%s ", !(bval & EXT68HIGH) ? "68" : "50"); 3553 4137 if (!(bval & CON68)) ··· 3705 4293 acb->config |= HCC_SCSI_RESET; 3706 4294 3707 4295 if (acb->config & HCC_SCSI_RESET) { 3708 - dprintkl(KERN_INFO, "Performing initial SCSI bus reset\n"); 3709 4296 DC395x_write8(acb, TRM_S1040_SCSI_CONTROL, DO_RSTSCSI); 3710 4297 3711 4298 /*while (!( DC395x_read8(acb, TRM_S1040_SCSI_INTSTATUS) & INT_SCSIRESET )); */ ··· 3738 4327 u32 io_port_len, unsigned int irq) 3739 4328 { 3740 4329 if (!request_region(io_port, io_port_len, DC395X_NAME)) { 3741 - dprintkl(KERN_ERR, "Failed to reserve IO region 0x%lx\n", io_port); 3742 4330 goto failed; 3743 4331 } 3744 4332 /* store port base to indicate we have registered it */ ··· 3746 4336 3747 4337 if (request_irq(irq, dc395x_interrupt, IRQF_SHARED, DC395X_NAME, acb)) { 3748 4338 /* release the region we just claimed */ 3749 - dprintkl(KERN_INFO, "Failed to register IRQ\n"); 3750 4339 goto failed; 3751 4340 } 3752 4341 /* store irq to indicate we have registered it */ ··· 3762 4353 adapter_print_config(acb); 3763 4354 3764 4355 if (adapter_sg_tables_alloc(acb)) { 3765 - dprintkl(KERN_DEBUG, "Memory allocation for SG tables failed\n"); 3766 4356 goto failed; 3767 4357 } 3768 4358 adapter_init_scsi_host(acb->scsi_host); 3769 4359 adapter_init_chip(acb); 3770 4360 set_basic_config(acb); 3771 4361 3772 - dprintkdbg(DBG_0, 3773 - "adapter_init: acb=%p, pdcb_map=%p psrb_array=%p " 3774 - "size{acb=0x%04x 
dcb=0x%04x srb=0x%04x}\n", 3775 - acb, acb->dcb_map, acb->srb_array, sizeof(struct AdapterCtlBlk), 3776 - sizeof(struct DeviceCtlBlk), sizeof(struct ScsiReqBlk)); 3777 4362 return 0; 3778 4363 3779 4364 failed: ··· 3931 4528 seq_putc(m, '\n'); 3932 4529 } 3933 4530 3934 - if (debug_enabled(DBG_1)) { 3935 - seq_printf(m, "DCB list for ACB %p:\n", acb); 3936 - list_for_each_entry(dcb, &acb->dcb_list, list) { 3937 - seq_printf(m, "%p -> ", dcb); 3938 - } 3939 - seq_puts(m, "END\n"); 3940 - } 3941 - 3942 4531 DC395x_UNLOCK_IO(acb->scsi_host, flags); 3943 4532 return 0; 3944 4533 } ··· 3955 4560 3956 4561 3957 4562 /** 3958 - * banner_display - Display banner on first instance of driver 3959 - * initialized. 3960 - **/ 3961 - static void banner_display(void) 3962 - { 3963 - static int banner_done = 0; 3964 - if (!banner_done) 3965 - { 3966 - dprintkl(KERN_INFO, "%s %s\n", DC395X_BANNER, DC395X_VERSION); 3967 - banner_done = 1; 3968 - } 3969 - } 3970 - 3971 - 3972 - /** 3973 4563 * dc395x_init_one - Initialise a single instance of the adapter. 
3974 4564 * 3975 4565 * The PCI layer will call this once for each instance of the adapter ··· 3975 4595 unsigned int io_port_len; 3976 4596 unsigned int irq; 3977 4597 3978 - dprintkdbg(DBG_0, "Init one instance (%s)\n", pci_name(dev)); 3979 - banner_display(); 3980 - 3981 4598 if (pci_enable_device(dev)) 3982 - { 3983 - dprintkl(KERN_INFO, "PCI Enable device failed.\n"); 3984 4599 return -ENODEV; 3985 - } 4600 + 3986 4601 io_port_base = pci_resource_start(dev, 0) & PCI_BASE_ADDRESS_IO_MASK; 3987 4602 io_port_len = pci_resource_len(dev, 0); 3988 4603 irq = dev->irq; 3989 - dprintkdbg(DBG_0, "IO_PORT=0x%04lx, IRQ=0x%x\n", io_port_base, dev->irq); 3990 4604 3991 4605 /* allocate scsi host information (includes out adapter) */ 3992 4606 scsi_host = scsi_host_alloc(&dc395x_driver_template, 3993 4607 sizeof(struct AdapterCtlBlk)); 3994 - if (!scsi_host) { 3995 - dprintkl(KERN_INFO, "scsi_host_alloc failed\n"); 4608 + if (!scsi_host) 3996 4609 goto fail; 3997 - } 4610 + 3998 4611 acb = (struct AdapterCtlBlk*)scsi_host->hostdata; 3999 4612 acb->scsi_host = scsi_host; 4000 4613 acb->dev = dev; 4001 4614 4002 4615 /* initialise the adapter and everything we need */ 4003 4616 if (adapter_init(acb, io_port_base, io_port_len, irq)) { 4004 - dprintkl(KERN_INFO, "adapter init failed\n"); 4005 4617 acb = NULL; 4006 4618 goto fail; 4007 4619 } ··· 4001 4629 pci_set_master(dev); 4002 4630 4003 4631 /* get the scsi mid level to scan for new devices on the bus */ 4004 - if (scsi_add_host(scsi_host, &dev->dev)) { 4005 - dprintkl(KERN_ERR, "scsi_add_host failed\n"); 4632 + if (scsi_add_host(scsi_host, &dev->dev)) 4006 4633 goto fail; 4007 - } 4634 + 4008 4635 pci_set_drvdata(dev, scsi_host); 4009 4636 scsi_scan_host(scsi_host); 4010 4637 ··· 4029 4658 { 4030 4659 struct Scsi_Host *scsi_host = pci_get_drvdata(dev); 4031 4660 struct AdapterCtlBlk *acb = (struct AdapterCtlBlk *)(scsi_host->hostdata); 4032 - 4033 - dprintkdbg(DBG_0, "dc395x_remove_one: acb=%p\n", acb); 4034 4661 4035 4662 
scsi_remove_host(scsi_host); 4036 4663 adapter_uninit(acb);
+3 -3
drivers/scsi/elx/libefc_sli/sli4.c
··· 3804 3804 wr_obj->desired_write_len_dword = cpu_to_le32(dwflags); 3805 3805 3806 3806 wr_obj->write_offset = cpu_to_le32(offset); 3807 - strncpy(wr_obj->object_name, obj_name, sizeof(wr_obj->object_name) - 1); 3807 + strscpy(wr_obj->object_name, obj_name); 3808 3808 wr_obj->host_buffer_descriptor_count = cpu_to_le32(1); 3809 3809 3810 3810 bde = (struct sli4_bde *)wr_obj->host_buffer_descriptor; ··· 3833 3833 SLI4_SUBSYSTEM_COMMON, CMD_V0, 3834 3834 SLI4_RQST_PYLD_LEN(cmn_delete_object)); 3835 3835 3836 - strncpy(req->object_name, obj_name, sizeof(req->object_name) - 1); 3836 + strscpy(req->object_name, obj_name); 3837 3837 return 0; 3838 3838 } 3839 3839 ··· 3856 3856 cpu_to_le32(desired_read_len & SLI4_REQ_DESIRE_READLEN); 3857 3857 3858 3858 rd_obj->read_offset = cpu_to_le32(offset); 3859 - strncpy(rd_obj->object_name, obj_name, sizeof(rd_obj->object_name) - 1); 3859 + strscpy(rd_obj->object_name, obj_name); 3860 3860 rd_obj->host_buffer_descriptor_count = cpu_to_le32(1); 3861 3861 3862 3862 bde = (struct sli4_bde *)rd_obj->host_buffer_descriptor;
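The sli4.c hunk above converts three `strncpy()` calls to `strscpy()`. The difference can be sketched in userspace; this is a simplified model of the kernel helper's semantics (not the kernel implementation — the real one returns `-E2BIG` on truncation, modelled here as `-1`), with `sketch_strscpy` a hypothetical name:

```c
#include <string.h>

/* Userspace sketch of strscpy() semantics (assumption: simplified
 * model): copy at most size - 1 bytes, always NUL-terminate, and
 * report truncation — unlike strncpy(), which neither guarantees a
 * terminator nor signals overflow. */
static long sketch_strscpy(char *dst, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return -1;		/* kernel returns -E2BIG */
	len = strlen(src);
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';	/* truncated, still terminated */
		return -1;
	}
	memcpy(dst, src, len + 1);	/* fits: copy including the NUL */
	return (long)len;
}
```

This is why the conversion is safe for the fixed-size `object_name` fields: the destination is always NUL-terminated and the `- 1` bookkeeping disappears.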
+4 -4
drivers/scsi/fnic/fip.c
··· 200 200 return; 201 201 } 202 202 203 - memset(iport->selected_fcf.fcf_mac, 0, ETH_ALEN); 203 + eth_zero_addr(iport->selected_fcf.fcf_mac); 204 204 205 205 pdisc_sol = (struct fip_discovery *) frame; 206 206 *pdisc_sol = (struct fip_discovery) { ··· 588 588 if (!is_zero_ether_addr(iport->fpma)) 589 589 vnic_dev_del_addr(fnic->vdev, iport->fpma); 590 590 591 - memset(iport->fpma, 0, ETH_ALEN); 591 + eth_zero_addr(iport->fpma); 592 592 iport->fcid = 0; 593 593 iport->r_a_tov = 0; 594 594 iport->e_d_tov = 0; 595 - memset(fnic->iport.fcfmac, 0, ETH_ALEN); 596 - memset(iport->selected_fcf.fcf_mac, 0, ETH_ALEN); 595 + eth_zero_addr(fnic->iport.fcfmac); 596 + eth_zero_addr(iport->selected_fcf.fcf_mac); 597 597 iport->selected_fcf.fcf_priority = 0; 598 598 iport->selected_fcf.fka_adv_period = 0; 599 599 iport->selected_fcf.ka_disabled = 0;
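The fnic/fip.c hunk above replaces open-coded `memset(mac, 0, ETH_ALEN)` with the named helper `eth_zero_addr()`. A minimal sketch of both helpers involved (the `_sketch` suffixed names are assumptions; the kernel's `is_zero_ether_addr()` is implemented more cleverly with 16-bit loads):

```c
#include <string.h>

#define ETH_ALEN 6	/* length of an Ethernet/MAC address */

/* Sketch of eth_zero_addr(): a named wrapper around
 * memset(addr, 0, ETH_ALEN), which is why the conversion in the
 * hunk above is purely cosmetic and behaviour-preserving. */
static void eth_zero_addr_sketch(unsigned char *addr)
{
	memset(addr, 0x00, ETH_ALEN);
}

/* Simplified model of is_zero_ether_addr(), also used in the hunk
 * to guard the vnic_dev_del_addr() call. */
static int is_zero_ether_addr_sketch(const unsigned char *addr)
{
	int i;

	for (i = 0; i < ETH_ALEN; i++)
		if (addr[i])
			return 0;
	return 1;
}
```

The named helper documents intent (this buffer is a MAC address) without changing the generated code.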
+36 -15
drivers/scsi/hisi_sas/hisi_sas.h
··· 46 46 #define HISI_SAS_IOST_ITCT_CACHE_DW_SZ 10 47 47 #define HISI_SAS_FIFO_DATA_DW_SIZE 32 48 48 49 + #define HISI_SAS_REG_MEM_SIZE 4 50 + #define HISI_SAS_MAX_CDB_LEN 16 51 + #define HISI_SAS_BLK_QUEUE_DEPTH 64 52 + 53 + #define BYTE_TO_DW 4 54 + #define BYTE_TO_DDW 8 55 + 49 56 #define HISI_SAS_STATUS_BUF_SZ (sizeof(struct hisi_sas_status_buffer)) 50 57 #define HISI_SAS_COMMAND_TABLE_SZ (sizeof(union hisi_sas_command_table)) 51 58 ··· 99 92 100 93 #define HISI_SAS_WAIT_PHYUP_TIMEOUT (30 * HZ) 101 94 #define HISI_SAS_CLEAR_ITCT_TIMEOUT (20 * HZ) 95 + #define HISI_SAS_DELAY_FOR_PHY_DISABLE 100 96 + #define NAME_BUF_SIZE 256 102 97 103 98 struct hisi_hba; 104 99 ··· 176 167 u32 rd_data[HISI_SAS_FIFO_DATA_DW_SIZE]; 177 168 }; 178 169 170 + #define FRAME_RCVD_BUF 32 171 + #define SAS_PHY_RESV_SIZE 2 179 172 struct hisi_sas_phy { 180 173 struct work_struct works[HISI_PHYES_NUM]; 181 174 struct hisi_hba *hisi_hba; ··· 189 178 spinlock_t lock; 190 179 u64 port_id; /* from hw */ 191 180 u64 frame_rcvd_size; 192 - u8 frame_rcvd[32]; 181 + u8 frame_rcvd[FRAME_RCVD_BUF]; 193 182 u8 phy_attached; 194 183 u8 in_reset; 195 - u8 reserved[2]; 184 + u8 reserved[SAS_PHY_RESV_SIZE]; 196 185 u32 phy_type; 197 186 u32 code_violation_err_count; 198 187 enum sas_linkrate minimum_linkrate; ··· 359 348 const struct scsi_host_template *sht; 360 349 }; 361 350 362 - #define HISI_SAS_MAX_DEBUGFS_DUMP (50) 351 + #define HISI_SAS_MAX_DEBUGFS_DUMP 50 352 + #define HISI_SAS_DEFAULT_DEBUGFS_DUMP 1 363 353 364 354 struct hisi_sas_debugfs_cq { 365 355 struct hisi_sas_cq *cq; ··· 460 448 dma_addr_t sata_breakpoint_dma; 461 449 struct hisi_sas_slot *slot_info; 462 450 unsigned long flags; 463 - const struct hisi_sas_hw *hw; /* Low level hw interface */ 451 + const struct hisi_sas_hw *hw; /* Low level hw interface */ 464 452 unsigned long sata_dev_bitmap[BITS_TO_LONGS(HISI_SAS_MAX_DEVICES)]; 465 453 struct work_struct rst_work; 466 454 u32 phy_state; 467 - u32 intr_coal_ticks; /* Time of 
interrupt coalesce in us */ 468 - u32 intr_coal_count; /* Interrupt count to coalesce */ 455 + u32 intr_coal_ticks; /* Time of interrupt coalesce in us */ 456 + u32 intr_coal_count; /* Interrupt count to coalesce */ 469 457 470 458 int cq_nvecs; 471 459 ··· 540 528 __le64 dif_prd_table_addr; 541 529 }; 542 530 531 + #define ITCT_RESV_DDW 12 543 532 struct hisi_sas_itct { 544 533 __le64 qw0; 545 534 __le64 sas_addr; 546 535 __le64 qw2; 547 536 __le64 qw3; 548 - __le64 qw4_15[12]; 537 + __le64 qw4_15[ITCT_RESV_DDW]; 549 538 }; 550 539 551 540 struct hisi_sas_iost { ··· 556 543 __le64 qw3; 557 544 }; 558 545 546 + #define ERROR_RECORD_BUF_DW 4 559 547 struct hisi_sas_err_record { 560 - u32 data[4]; 548 + u32 data[ERROR_RECORD_BUF_DW]; 561 549 }; 562 550 551 + #define FIS_RESV_DW 3 563 552 struct hisi_sas_initial_fis { 564 553 struct hisi_sas_err_record err_record; 565 554 struct dev_to_host_fis fis; 566 - u32 rsvd[3]; 555 + u32 rsvd[FIS_RESV_DW]; 567 556 }; 568 557 558 + #define BREAKPOINT_DATA_SIZE 128 569 559 struct hisi_sas_breakpoint { 570 - u8 data[128]; 560 + u8 data[BREAKPOINT_DATA_SIZE]; 571 561 }; 572 562 563 + #define BREAKPOINT_TAG_NUM 32 573 564 struct hisi_sas_sata_breakpoint { 574 - struct hisi_sas_breakpoint tag[32]; 565 + struct hisi_sas_breakpoint tag[BREAKPOINT_TAG_NUM]; 575 566 }; 576 567 577 568 struct hisi_sas_sge { ··· 586 569 __le32 data_off; 587 570 }; 588 571 572 + #define SMP_CMD_TABLE_SIZE 44 589 573 struct hisi_sas_command_table_smp { 590 - u8 bytes[44]; 574 + u8 bytes[SMP_CMD_TABLE_SIZE]; 591 575 }; 592 576 577 + #define DUMMY_BUF_SIZE 12 593 578 struct hisi_sas_command_table_stp { 594 579 struct host_to_dev_fis command_fis; 595 - u8 dummy[12]; 580 + u8 dummy[DUMMY_BUF_SIZE]; 596 581 u8 atapi_cdb[ATAPI_CDB_LEN]; 597 582 }; 598 583 ··· 608 589 struct hisi_sas_sge sge[HISI_SAS_SGE_DIF_PAGE_CNT]; 609 590 } __aligned(16); 610 591 592 + #define PROT_BUF_SIZE 7 611 593 struct hisi_sas_command_table_ssp { 612 594 struct ssp_frame_hdr hdr; 613 595 
union { 614 596 struct { 615 597 struct ssp_command_iu task; 616 - u32 prot[7]; 598 + u32 prot[PROT_BUF_SIZE]; 617 599 }; 618 600 struct ssp_tmf_iu ssp_task; 619 601 struct xfer_rdy_iu xfer_rdy; ··· 628 608 struct hisi_sas_command_table_stp stp; 629 609 } __aligned(16); 630 610 611 + #define IU_BUF_SIZE 1024 631 612 struct hisi_sas_status_buffer { 632 613 struct hisi_sas_err_record err; 633 - u8 iu[1024]; 614 + u8 iu[IU_BUF_SIZE]; 634 615 } __aligned(16); 635 616 636 617 struct hisi_sas_slot_buf_table {
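The hisi_sas.h hunk above replaces bare array sizes (`u8 iu[1024]`, `__le64 qw4_15[12]`, …) with named macros (`IU_BUF_SIZE`, `ITCT_RESV_DDW`, …). Since these structures describe hardware layouts, such a rename must not change any size. A sketch of how that invariant can be pinned at compile time — the structs below are reduced stand-ins, not the driver's real definitions; only the macro names and values mirror the patch:

```c
#include <assert.h>

#define IU_BUF_SIZE   1024
#define ITCT_RESV_DDW 12

/* Reduced stand-in for hisi_sas_err_record (4 dwords). */
struct err_record_sketch { unsigned int data[4]; };

/* Reduced stand-in for hisi_sas_status_buffer. */
struct status_buffer_sketch {
	struct err_record_sketch err;
	unsigned char iu[IU_BUF_SIZE];
};

/* Pin the layout so the magic-number-to-macro rename is provably
 * behaviour-preserving. */
static_assert(sizeof(struct status_buffer_sketch) == 16 + 1024,
	      "macro rename must not change the hardware layout");
static_assert(ITCT_RESV_DDW * sizeof(long long) == 96,
	      "qw4_15 still reserves twelve 64-bit words");
```

A failing `static_assert` here turns a silent layout break into a build error.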
+36 -45
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 7 7 #include "hisi_sas.h" 8 8 #define DRV_NAME "hisi_sas" 9 9 10 + #define LINK_RATE_BIT_MASK 2 11 + #define FIS_BUF_SIZE 20 12 + #define WAIT_CMD_COMPLETE_DELAY 100 13 + #define WAIT_CMD_COMPLETE_TMROUT 5000 14 + #define DELAY_FOR_LINK_READY 2000 15 + #define BLK_CNT_OPTIMIZE_MARK 64 16 + #define HZ_TO_MHZ 1000000 17 + #define DELAY_FOR_SOFTRESET_MAX 1000 18 + #define DELAY_FOR_SOFTRESET_MIN 900 19 + 10 20 #define DEV_IS_GONE(dev) \ 11 21 ((!dev) || (dev->dev_type == SAS_PHY_UNUSED)) 12 22 ··· 124 114 } 125 115 126 116 default: 127 - { 128 117 if (direction == DMA_NONE) 129 118 return HISI_SAS_SATA_PROTOCOL_NONDATA; 130 119 return hisi_sas_get_ata_protocol_from_tf(qc); 131 - } 132 120 } 133 121 } 134 122 EXPORT_SYMBOL_GPL(hisi_sas_get_ata_protocol); ··· 139 131 struct hisi_sas_status_buffer *status_buf = 140 132 hisi_sas_status_buf_addr_mem(slot); 141 133 u8 *iu = &status_buf->iu[0]; 142 - struct dev_to_host_fis *d2h = (struct dev_to_host_fis *)iu; 134 + struct dev_to_host_fis *d2h = (struct dev_to_host_fis *)iu; 143 135 144 136 resp->frame_len = sizeof(struct dev_to_host_fis); 145 137 memcpy(&resp->ending_fis[0], d2h, sizeof(struct dev_to_host_fis)); ··· 159 151 160 152 max -= SAS_LINK_RATE_1_5_GBPS; 161 153 for (i = 0; i <= max; i++) 162 - rate |= 1 << (i * 2); 154 + rate |= 1 << (i * LINK_RATE_BIT_MASK); 163 155 return rate; 164 156 } 165 157 EXPORT_SYMBOL_GPL(hisi_sas_get_prog_phy_linkrate_mask); ··· 908 900 if (ret) 909 901 return ret; 910 902 if (!dev_is_sata(dev)) 911 - sas_change_queue_depth(sdev, 64); 903 + sas_change_queue_depth(sdev, HISI_SAS_BLK_QUEUE_DEPTH); 912 904 913 905 return 0; 914 906 } ··· 1270 1262 sas_phy->phy->minimum_linkrate = min; 1271 1263 1272 1264 hisi_sas_phy_enable(hisi_hba, phy_no, 0); 1273 - msleep(100); 1265 + msleep(HISI_SAS_DELAY_FOR_PHY_DISABLE); 1274 1266 hisi_hba->hw->phy_set_linkrate(hisi_hba, phy_no, &_r); 1275 1267 hisi_sas_phy_enable(hisi_hba, phy_no, 1); 1276 1268 ··· 1300 1292 1301 1293 case PHY_FUNC_LINK_RESET: 
1302 1294 hisi_sas_phy_enable(hisi_hba, phy_no, 0); 1303 - msleep(100); 1295 + msleep(HISI_SAS_DELAY_FOR_PHY_DISABLE); 1304 1296 hisi_sas_phy_enable(hisi_hba, phy_no, 1); 1305 1297 break; 1306 1298 ··· 1355 1347 1356 1348 static int hisi_sas_softreset_ata_disk(struct domain_device *device) 1357 1349 { 1358 - u8 fis[20] = {0}; 1350 + u8 fis[FIS_BUF_SIZE] = {0}; 1359 1351 struct ata_port *ap = device->sata_dev.ap; 1360 1352 struct ata_link *link; 1361 1353 int rc = TMF_RESP_FUNC_FAILED; ··· 1372 1364 } 1373 1365 1374 1366 if (rc == TMF_RESP_FUNC_COMPLETE) { 1375 - usleep_range(900, 1000); 1367 + usleep_range(DELAY_FOR_SOFTRESET_MIN, DELAY_FOR_SOFTRESET_MAX); 1376 1368 ata_for_each_link(link, ap, EDGE) { 1377 1369 int pmp = sata_srst_pmp(link); 1378 1370 ··· 1502 1494 struct device *dev = hisi_hba->dev; 1503 1495 int rc = TMF_RESP_FUNC_FAILED; 1504 1496 struct ata_link *link; 1505 - u8 fis[20] = {0}; 1497 + u8 fis[FIS_BUF_SIZE] = {0}; 1506 1498 int i; 1507 1499 1508 1500 for (i = 0; i < hisi_hba->n_phy; i++) { ··· 1569 1561 hisi_hba->phy_state = hisi_hba->hw->get_phys_state(hisi_hba); 1570 1562 1571 1563 scsi_block_requests(shost); 1572 - hisi_hba->hw->wait_cmds_complete_timeout(hisi_hba, 100, 5000); 1564 + hisi_hba->hw->wait_cmds_complete_timeout(hisi_hba, 1565 + WAIT_CMD_COMPLETE_DELAY, 1566 + WAIT_CMD_COMPLETE_TMROUT); 1573 1567 1574 1568 /* 1575 1569 * hisi_hba->timer is only used for v1/v2 hw, and check hw->sht ··· 1872 1862 rc = ata_wait_after_reset(link, jiffies + HISI_SAS_WAIT_PHYUP_TIMEOUT, 1873 1863 smp_ata_check_ready_type); 1874 1864 } else { 1875 - msleep(2000); 1865 + msleep(DELAY_FOR_LINK_READY); 1876 1866 } 1877 1867 1878 1868 return rc; ··· 1895 1885 } 1896 1886 hisi_sas_dereg_device(hisi_hba, device); 1897 1887 1898 - rc = hisi_sas_debug_I_T_nexus_reset(device); 1899 - if (rc == TMF_RESP_FUNC_COMPLETE && dev_is_sata(device)) { 1900 - struct sas_phy *local_phy; 1901 - 1888 + if (dev_is_sata(device)) { 1902 1889 rc = 
hisi_sas_softreset_ata_disk(device); 1903 - switch (rc) { 1904 - case -ECOMM: 1905 - rc = -ENODEV; 1906 - break; 1907 - case TMF_RESP_FUNC_FAILED: 1908 - case -EMSGSIZE: 1909 - case -EIO: 1910 - local_phy = sas_get_local_phy(device); 1911 - rc = sas_phy_enable(local_phy, 0); 1912 - if (!rc) { 1913 - local_phy->enabled = 0; 1914 - dev_err(dev, "Disabled local phy of ATA disk %016llx due to softreset fail (%d)\n", 1915 - SAS_ADDR(device->sas_addr), rc); 1916 - rc = -ENODEV; 1917 - } 1918 - sas_put_local_phy(local_phy); 1919 - break; 1920 - default: 1921 - break; 1922 - } 1890 + if (rc == TMF_RESP_FUNC_FAILED) 1891 + dev_err(dev, "ata disk %016llx reset (%d)\n", 1892 + SAS_ADDR(device->sas_addr), rc); 1923 1893 } 1924 1894 1895 + rc = hisi_sas_debug_I_T_nexus_reset(device); 1925 1896 if ((rc == TMF_RESP_FUNC_COMPLETE) || (rc == -ENODEV)) 1926 1897 hisi_sas_release_task(hisi_hba, device); 1927 1898 ··· 1925 1934 hisi_sas_dereg_device(hisi_hba, device); 1926 1935 1927 1936 if (dev_is_sata(device)) { 1928 - struct sas_phy *phy; 1929 - 1930 - phy = sas_get_local_phy(device); 1937 + struct sas_phy *phy = sas_get_local_phy(device); 1931 1938 1932 1939 rc = sas_phy_reset(phy, true); 1933 - 1934 1940 if (rc == 0) 1935 1941 hisi_sas_release_task(hisi_hba, device); 1936 1942 sas_put_local_phy(phy); ··· 2111 2123 hisi_sas_bytes_dmaed(hisi_hba, phy_no, gfp_flags); 2112 2124 hisi_sas_port_notify_formed(sas_phy); 2113 2125 } else { 2114 - struct hisi_sas_port *port = phy->port; 2126 + struct hisi_sas_port *port = phy->port; 2115 2127 2116 2128 if (test_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags) || 2117 2129 phy->in_reset) { ··· 2284 2296 goto err_out; 2285 2297 2286 2298 /* roundup to avoid overly large block size */ 2287 - max_command_entries_ru = roundup(max_command_entries, 64); 2299 + max_command_entries_ru = roundup(max_command_entries, 2300 + BLK_CNT_OPTIMIZE_MARK); 2288 2301 if (hisi_hba->prot_mask & HISI_SAS_DIX_PROT_MASK) 2289 2302 sz_slot_buf_ru = sizeof(struct 
hisi_sas_slot_dif_buf_table); 2290 2303 else 2291 2304 sz_slot_buf_ru = sizeof(struct hisi_sas_slot_buf_table); 2292 - sz_slot_buf_ru = roundup(sz_slot_buf_ru, 64); 2305 + 2306 + sz_slot_buf_ru = roundup(sz_slot_buf_ru, BLK_CNT_OPTIMIZE_MARK); 2293 2307 s = max(lcm(max_command_entries_ru, sz_slot_buf_ru), PAGE_SIZE); 2294 2308 blk_cnt = (max_command_entries_ru * sz_slot_buf_ru) / s; 2295 2309 slots_per_blk = s / sz_slot_buf_ru; ··· 2456 2466 if (IS_ERR(refclk)) 2457 2467 dev_dbg(dev, "no ref clk property\n"); 2458 2468 else 2459 - hisi_hba->refclk_frequency_mhz = clk_get_rate(refclk) / 1000000; 2469 + hisi_hba->refclk_frequency_mhz = clk_get_rate(refclk) / 2470 + HZ_TO_MHZ; 2460 2471 2461 2472 if (device_property_read_u32(dev, "phy-count", &hisi_hba->n_phy)) { 2462 2473 dev_err(dev, "could not get property phy-count\n"); ··· 2579 2588 shost->max_id = HISI_SAS_MAX_DEVICES; 2580 2589 shost->max_lun = ~0; 2581 2590 shost->max_channel = 1; 2582 - shost->max_cmd_len = 16; 2591 + shost->max_cmd_len = HISI_SAS_MAX_CDB_LEN; 2583 2592 if (hisi_hba->hw->slot_index_alloc) { 2584 2593 shost->can_queue = HISI_SAS_MAX_COMMANDS; 2585 2594 shost->cmd_per_lun = HISI_SAS_MAX_COMMANDS;
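Among the magic-number cleanups in the hisi_sas_main.c hunks above is the link-rate mask loop, where `i * 2` becomes `i * LINK_RATE_BIT_MASK`. A self-contained sketch of that computation — `prog_phy_linkrate_mask_sketch` is a hypothetical name, and the enum value `8` is taken from the kernel's `include/scsi/sas.h` (`SAS_LINK_RATE_1_5_GBPS`):

```c
#define SAS_LINK_RATE_1_5_GBPS 8	/* value from include/scsi/sas.h */
#define LINK_RATE_BIT_MASK     2	/* bits per rate field, as in the patch */

/* Sketch of hisi_sas_get_prog_phy_linkrate_mask(): each supported
 * link rate occupies a 2-bit field in the programmed mask, so the
 * loop sets bit i * 2 for every rate from 1.5 Gbps up to max. */
static int prog_phy_linkrate_mask_sketch(int max)
{
	int rate = 0, i;

	max -= SAS_LINK_RATE_1_5_GBPS;
	for (i = 0; i <= max; i++)
		rate |= 1 << (i * LINK_RATE_BIT_MASK);
	return rate;
}
```

So a PHY capped at 6.0 Gbps (enum value 10) yields `0b10101`: one set bit per 2-bit field for 1.5, 3.0 and 6.0 Gbps.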
+1 -1
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
··· 1759 1759 .sg_tablesize = HISI_SAS_SGE_PAGE_CNT, 1760 1760 .sdev_init = hisi_sas_sdev_init, 1761 1761 .shost_groups = host_v1_hw_groups, 1762 - .host_reset = hisi_sas_host_reset, 1762 + .host_reset = hisi_sas_host_reset, 1763 1763 }; 1764 1764 1765 1765 static const struct hisi_sas_hw hisi_sas_v1_hw = {
+3 -3
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
··· 2771 2771 irq_msk = (hisi_sas_read32(hisi_hba, HGC_INVLD_DQE_INFO) 2772 2772 >> HGC_INVLD_DQE_INFO_FB_CH0_OFF) & 0x1ff; 2773 2773 while (irq_msk) { 2774 - if (irq_msk & 1) { 2774 + if (irq_msk & 1) { 2775 2775 u32 reg_value = hisi_sas_phy_read32(hisi_hba, phy_no, 2776 2776 CHL_INT0); 2777 2777 ··· 3111 3111 return IRQ_HANDLED; 3112 3112 } 3113 3113 3114 - static irqreturn_t cq_thread_v2_hw(int irq_no, void *p) 3114 + static irqreturn_t cq_thread_v2_hw(int irq_no, void *p) 3115 3115 { 3116 3116 struct hisi_sas_cq *cq = p; 3117 3117 struct hisi_hba *hisi_hba = cq->hisi_hba; ··· 3499 3499 * numbered drive in the fourth byte. 3500 3500 * See SFF-8485 Rev. 0.7 Table 24. 3501 3501 */ 3502 - void __iomem *reg_addr = hisi_hba->sgpio_regs + 3502 + void __iomem *reg_addr = hisi_hba->sgpio_regs + 3503 3503 reg_index * 4 + phy_no; 3504 3504 int data_idx = phy_no + 3 - (phy_no % 4) * 2; 3505 3505
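The v2_hw hunk above touches only whitespace, but the SGPIO comment it sits next to is worth unpacking: each 32-bit SGPIO register carries four drives with the lowest-numbered drive in the highest byte (per SFF-8485 Rev 0.7 Table 24, as the driver comment says), which is what the `phy_no + 3 - (phy_no % 4) * 2` expression encodes. A sketch, with `sgpio_data_idx` a hypothetical name:

```c
/* Sketch of the SGPIO byte-index mapping from the v2_hw code: within
 * each group of four PHYs, the byte order is reversed, so PHYs
 * 0,1,2,3 land in bytes 3,2,1,0 and PHYs 4..7 in bytes 7..4. */
static int sgpio_data_idx(int phy_no)
{
	return phy_no + 3 - (phy_no % 4) * 2;
}
```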
+160 -99
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 466 466 #define ITCT_HDR_RTOLT_OFF 48 467 467 #define ITCT_HDR_RTOLT_MSK (0xffffULL << ITCT_HDR_RTOLT_OFF) 468 468 469 + /*debugfs*/ 470 + #define TWO_PARA_PER_LINE 2 471 + #define FOUR_PARA_PER_LINE 4 472 + #define DUMP_BUF_SIZE 8 473 + #define BIST_BUF_SIZE 16 474 + 469 475 struct hisi_sas_protect_iu_v3_hw { 470 476 u32 dw0; 471 477 u32 lbrtcv; ··· 542 536 543 537 #define BASE_VECTORS_V3_HW 16 544 538 #define MIN_AFFINE_VECTORS_V3_HW (BASE_VECTORS_V3_HW + 1) 539 + #define IRQ_PHY_UP_DOWN_INDEX 1 540 + #define IRQ_CHL_INDEX 2 541 + #define IRQ_AXI_INDEX 11 542 + 543 + #define DELAY_FOR_RESET_HW 100 544 + #define HDR_SG_MOD 0x2 545 + #define LUN_SIZE 8 546 + #define ATTR_PRIO_REGION 9 547 + #define CDB_REGION 12 548 + #define PRIO_OFF 3 549 + #define TMF_REGION 10 550 + #define TAG_MSB 12 551 + #define TAG_LSB 13 552 + #define SMP_FRAME_TYPE 2 553 + #define SMP_CRC_SIZE 4 554 + #define HDR_TAG_OFF 3 555 + #define HOST_NO_OFF 6 556 + #define PHY_NO_OFF 7 557 + #define IDENTIFY_REG_READ 6 558 + #define LINK_RESET_TIMEOUT_OFF 4 559 + #define DECIMALISM_FLAG 10 560 + #define WAIT_RETRY 100 561 + #define WAIT_TMROUT 5000 562 + 563 + #define ID_DWORD0_INDEX 0 564 + #define ID_DWORD1_INDEX 1 565 + #define ID_DWORD2_INDEX 2 566 + #define ID_DWORD3_INDEX 3 567 + #define ID_DWORD4_INDEX 4 568 + #define ID_DWORD5_INDEX 5 569 + #define TICKS_BIT_INDEX 24 570 + #define COUNT_BIT_INDEX 8 571 + 572 + #define PORT_REG_LENGTH 0x100 573 + #define GLOBAL_REG_LENGTH 0x800 574 + #define AXI_REG_LENGTH 0x61 575 + #define RAS_REG_LENGTH 0x10 545 576 546 577 #define CHNL_INT_STS_MSK 0xeeeeeeee 547 578 #define CHNL_INT_STS_PHY_MSK 0xe ··· 854 811 identify_buffer = (u32 *)(&identify_frame); 855 812 856 813 hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD0, 857 - __swab32(identify_buffer[0])); 814 + __swab32(identify_buffer[ID_DWORD0_INDEX])); 858 815 hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD1, 859 - __swab32(identify_buffer[1])); 816 + 
__swab32(identify_buffer[ID_DWORD1_INDEX])); 860 817 hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD2, 861 - __swab32(identify_buffer[2])); 818 + __swab32(identify_buffer[ID_DWORD2_INDEX])); 862 819 hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD3, 863 - __swab32(identify_buffer[3])); 820 + __swab32(identify_buffer[ID_DWORD3_INDEX])); 864 821 hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD4, 865 - __swab32(identify_buffer[4])); 822 + __swab32(identify_buffer[ID_DWORD4_INDEX])); 866 823 hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD5, 867 - __swab32(identify_buffer[5])); 824 + __swab32(identify_buffer[ID_DWORD5_INDEX])); 868 825 } 869 826 870 827 static void setup_itct_v3_hw(struct hisi_hba *hisi_hba, ··· 984 941 985 942 /* Disable all of the PHYs */ 986 943 hisi_sas_stop_phys(hisi_hba); 987 - udelay(50); 944 + udelay(HISI_SAS_DELAY_FOR_PHY_DISABLE); 988 945 989 946 /* Ensure axi bus idle */ 990 947 ret = hisi_sas_read32_poll_timeout(AXI_CFG, val, !val, ··· 1024 981 return rc; 1025 982 } 1026 983 1027 - msleep(100); 984 + msleep(DELAY_FOR_RESET_HW); 1028 985 init_reg_v3_hw(hisi_hba); 1029 986 1030 987 if (guid_parse("D5918B4B-37AE-4E10-A99F-E5E8A6EF4C1F", &guid)) { ··· 1073 1030 cfg &= ~PHY_CFG_ENA_MSK; 1074 1031 hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg); 1075 1032 1076 - mdelay(50); 1033 + mdelay(HISI_SAS_DELAY_FOR_PHY_DISABLE); 1077 1034 1078 1035 state = hisi_sas_read32(hisi_hba, PHY_STATE); 1079 1036 if (state & BIT(phy_no)) { ··· 1109 1066 hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO, 1110 1067 txid_auto | TX_HARDRST_MSK); 1111 1068 } 1112 - msleep(100); 1069 + msleep(HISI_SAS_DELAY_FOR_PHY_DISABLE); 1113 1070 hisi_sas_phy_enable(hisi_hba, phy_no, 1); 1114 1071 } 1115 1072 ··· 1154 1111 1155 1112 for (i = 0; i < hisi_hba->n_phy; i++) 1156 1113 if (phy_state & BIT(i)) 1157 - if (((phy_port_num_ma >> (i * 4)) & 0xf) == port_id) 1114 + if (((phy_port_num_ma >> (i * HISI_SAS_REG_MEM_SIZE)) & 0xf) == 1115 + port_id) 1158 1116 bitmap |= 
BIT(i); 1159 1117 1160 1118 return bitmap; ··· 1352 1308 /* map itct entry */ 1353 1309 dw1 |= sas_dev->device_id << CMD_HDR_DEV_ID_OFF; 1354 1310 1355 - dw2 = (((sizeof(struct ssp_command_iu) + sizeof(struct ssp_frame_hdr) 1356 - + 3) / 4) << CMD_HDR_CFL_OFF) | 1357 - ((HISI_SAS_MAX_SSP_RESP_SZ / 4) << CMD_HDR_MRFL_OFF) | 1358 - (2 << CMD_HDR_SG_MOD_OFF); 1311 + dw2 = (((sizeof(struct ssp_command_iu) + sizeof(struct ssp_frame_hdr) + 1312 + 3) / BYTE_TO_DW) << CMD_HDR_CFL_OFF) | 1313 + ((HISI_SAS_MAX_SSP_RESP_SZ / BYTE_TO_DW) << CMD_HDR_MRFL_OFF) | 1314 + (HDR_SG_MOD << CMD_HDR_SG_MOD_OFF); 1359 1315 hdr->dw2 = cpu_to_le32(dw2); 1360 1316 hdr->transfer_tags = cpu_to_le32(slot->idx); 1361 1317 ··· 1375 1331 buf_cmd = hisi_sas_cmd_hdr_addr_mem(slot) + 1376 1332 sizeof(struct ssp_frame_hdr); 1377 1333 1378 - memcpy(buf_cmd, &task->ssp_task.LUN, 8); 1334 + memcpy(buf_cmd, &task->ssp_task.LUN, LUN_SIZE); 1379 1335 if (!tmf) { 1380 - buf_cmd[9] = ssp_task->task_attr; 1381 - memcpy(buf_cmd + 12, scsi_cmnd->cmnd, scsi_cmnd->cmd_len); 1336 + buf_cmd[ATTR_PRIO_REGION] = ssp_task->task_attr; 1337 + memcpy(buf_cmd + CDB_REGION, scsi_cmnd->cmnd, 1338 + scsi_cmnd->cmd_len); 1382 1339 } else { 1383 - buf_cmd[10] = tmf->tmf; 1340 + buf_cmd[TMF_REGION] = tmf->tmf; 1384 1341 switch (tmf->tmf) { 1385 1342 case TMF_ABORT_TASK: 1386 1343 case TMF_QUERY_TASK: 1387 - buf_cmd[12] = 1344 + buf_cmd[TAG_MSB] = 1388 1345 (tmf->tag_of_task_to_be_managed >> 8) & 0xff; 1389 - buf_cmd[13] = 1346 + buf_cmd[TAG_LSB] = 1390 1347 tmf->tag_of_task_to_be_managed & 0xff; 1391 1348 break; 1392 1349 default: ··· 1420 1375 unsigned int interval = scsi_prot_interval(scsi_cmnd); 1421 1376 unsigned int ilog2_interval = ilog2(interval); 1422 1377 1423 - len = (task->total_xfer_len >> ilog2_interval) * 8; 1378 + len = (task->total_xfer_len >> ilog2_interval) * 1379 + BYTE_TO_DDW; 1424 1380 } 1425 1381 } 1426 1382 ··· 1441 1395 struct hisi_sas_device *sas_dev = device->lldd_dev; 1442 1396 dma_addr_t 
req_dma_addr; 1443 1397 unsigned int req_len; 1398 + u32 cfl; 1444 1399 1445 1400 /* req */ 1446 1401 sg_req = &task->smp_task.smp_req; ··· 1452 1405 /* dw0 */ 1453 1406 hdr->dw0 = cpu_to_le32((port->id << CMD_HDR_PORT_OFF) | 1454 1407 (1 << CMD_HDR_PRIORITY_OFF) | /* high pri */ 1455 - (2 << CMD_HDR_CMD_OFF)); /* smp */ 1408 + (SMP_FRAME_TYPE << CMD_HDR_CMD_OFF)); /* smp */ 1456 1409 1457 1410 /* map itct entry */ 1458 1411 hdr->dw1 = cpu_to_le32((sas_dev->device_id << CMD_HDR_DEV_ID_OFF) | ··· 1460 1413 (DIR_NO_DATA << CMD_HDR_DIR_OFF)); 1461 1414 1462 1415 /* dw2 */ 1463 - hdr->dw2 = cpu_to_le32((((req_len - 4) / 4) << CMD_HDR_CFL_OFF) | 1464 - (HISI_SAS_MAX_SMP_RESP_SZ / 4 << 1416 + cfl = (req_len - SMP_CRC_SIZE) / BYTE_TO_DW; 1417 + hdr->dw2 = cpu_to_le32((cfl << CMD_HDR_CFL_OFF) | 1418 + (HISI_SAS_MAX_SMP_RESP_SZ / BYTE_TO_DW << 1465 1419 CMD_HDR_MRFL_OFF)); 1466 1420 1467 1421 hdr->transfer_tags = cpu_to_le32(slot->idx << CMD_HDR_IPTT_OFF); ··· 1527 1479 struct ata_queued_cmd *qc = task->uldd_task; 1528 1480 1529 1481 hdr_tag = qc->tag; 1530 - task->ata_task.fis.sector_count |= (u8) (hdr_tag << 3); 1482 + task->ata_task.fis.sector_count |= 1483 + (u8)(hdr_tag << HDR_TAG_OFF); 1531 1484 dw2 |= hdr_tag << CMD_HDR_NCQ_TAG_OFF; 1532 1485 } 1533 1486 1534 - dw2 |= (HISI_SAS_MAX_STP_RESP_SZ / 4) << CMD_HDR_CFL_OFF | 1535 - 2 << CMD_HDR_SG_MOD_OFF; 1487 + dw2 |= (HISI_SAS_MAX_STP_RESP_SZ / BYTE_TO_DW) << CMD_HDR_CFL_OFF | 1488 + HDR_SG_MOD << CMD_HDR_SG_MOD_OFF; 1536 1489 hdr->dw2 = cpu_to_le32(dw2); 1537 1490 1538 1491 /* dw3 */ ··· 1593 1544 hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_PHY_ENA_MSK, 1); 1594 1545 1595 1546 port_id = hisi_sas_read32(hisi_hba, PHY_PORT_NUM_MA); 1596 - port_id = (port_id >> (4 * phy_no)) & 0xf; 1547 + port_id = (port_id >> (HISI_SAS_REG_MEM_SIZE * phy_no)) & 0xf; 1597 1548 link_rate = hisi_sas_read32(hisi_hba, PHY_CONN_RATE); 1598 - link_rate = (link_rate >> (phy_no * 4)) & 0xf; 1549 + link_rate = (link_rate >> (phy_no * 
HISI_SAS_REG_MEM_SIZE)) & 0xf; 1599 1550 1600 1551 if (port_id == 0xf) { 1601 1552 dev_err(dev, "phyup: phy%d invalid portid\n", phy_no); ··· 1628 1579 1629 1580 sas_phy->oob_mode = SATA_OOB_MODE; 1630 1581 attached_sas_addr[0] = 0x50; 1631 - attached_sas_addr[6] = shost->host_no; 1632 - attached_sas_addr[7] = phy_no; 1582 + attached_sas_addr[HOST_NO_OFF] = shost->host_no; 1583 + attached_sas_addr[PHY_NO_OFF] = phy_no; 1633 1584 memcpy(sas_phy->attached_sas_addr, 1634 1585 attached_sas_addr, 1635 1586 SAS_ADDR_SIZE); ··· 1645 1596 (struct sas_identify_frame *)frame_rcvd; 1646 1597 1647 1598 dev_info(dev, "phyup: phy%d link_rate=%d\n", phy_no, link_rate); 1648 - for (i = 0; i < 6; i++) { 1599 + for (i = 0; i < IDENTIFY_REG_READ; i++) { 1649 1600 u32 idaf = hisi_sas_phy_read32(hisi_hba, phy_no, 1650 1601 RX_IDAF_DWORD0 + (i * 4)); 1651 1602 frame_rcvd[i] = __swab32(idaf); ··· 1750 1701 irq_msk = hisi_sas_read32(hisi_hba, CHNL_INT_STATUS) 1751 1702 & 0x11111111; 1752 1703 while (irq_msk) { 1753 - if (irq_msk & 1) { 1704 + if (irq_msk & 1) { 1754 1705 u32 irq_value = hisi_sas_phy_read32(hisi_hba, phy_no, 1755 1706 CHL_INT0); 1756 1707 u32 phy_state = hisi_sas_read32(hisi_hba, PHY_STATE); ··· 1915 1866 1916 1867 dev_warn(dev, "phy%d stp link timeout (0x%x)\n", 1917 1868 phy_no, reg_value); 1918 - if (reg_value & BIT(4)) 1869 + if (reg_value & BIT(LINK_RESET_TIMEOUT_OFF)) 1919 1870 hisi_sas_notify_phy_event(phy, HISI_PHYE_LINK_RESET); 1920 1871 } 1921 1872 ··· 1973 1924 u32 irq_msk; 1974 1925 int phy_no = 0; 1975 1926 1976 - irq_msk = hisi_sas_read32(hisi_hba, CHNL_INT_STATUS) 1977 - & CHNL_INT_STS_MSK; 1927 + irq_msk = hisi_sas_read32(hisi_hba, CHNL_INT_STATUS) & CHNL_INT_STS_MSK; 1978 1928 1979 1929 while (irq_msk) { 1980 1930 if (irq_msk & (CHNL_INT_STS_INT0_MSK << (phy_no * CHNL_WIDTH))) ··· 2618 2570 if (vectors < 0) 2619 2571 return -ENOENT; 2620 2572 2621 - 2622 2573 hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW - hisi_hba->iopoll_q_cnt; 2623 2574 
shost->nr_hw_queues = hisi_hba->cq_nvecs + hisi_hba->iopoll_q_cnt; 2624 2575 ··· 2630 2583 struct pci_dev *pdev = hisi_hba->pci_dev; 2631 2584 int rc, i; 2632 2585 2633 - rc = devm_request_irq(dev, pci_irq_vector(pdev, 1), 2586 + rc = devm_request_irq(dev, pci_irq_vector(pdev, IRQ_PHY_UP_DOWN_INDEX), 2634 2587 int_phy_up_down_bcast_v3_hw, 0, 2635 2588 DRV_NAME " phy", hisi_hba); 2636 2589 if (rc) { ··· 2638 2591 return -ENOENT; 2639 2592 } 2640 2593 2641 - rc = devm_request_irq(dev, pci_irq_vector(pdev, 2), 2594 + rc = devm_request_irq(dev, pci_irq_vector(pdev, IRQ_CHL_INDEX), 2642 2595 int_chnl_int_v3_hw, 0, 2643 2596 DRV_NAME " channel", hisi_hba); 2644 2597 if (rc) { ··· 2646 2599 return -ENOENT; 2647 2600 } 2648 2601 2649 - rc = devm_request_irq(dev, pci_irq_vector(pdev, 11), 2602 + rc = devm_request_irq(dev, pci_irq_vector(pdev, IRQ_AXI_INDEX), 2650 2603 fatal_axi_int_v3_hw, 0, 2651 2604 DRV_NAME " fatal", hisi_hba); 2652 2605 if (rc) { ··· 2659 2612 2660 2613 for (i = 0; i < hisi_hba->cq_nvecs; i++) { 2661 2614 struct hisi_sas_cq *cq = &hisi_hba->cq[i]; 2662 - int nr = hisi_sas_intr_conv ? 16 : 16 + i; 2615 + int nr = hisi_sas_intr_conv ? BASE_VECTORS_V3_HW : 2616 + BASE_VECTORS_V3_HW + i; 2663 2617 unsigned long irqflags = hisi_sas_intr_conv ? 
IRQF_SHARED : 2664 2618 IRQF_ONESHOT; 2665 2619 ··· 2718 2670 struct pci_dev *pdev = hisi_hba->pci_dev; 2719 2671 int i; 2720 2672 2721 - synchronize_irq(pci_irq_vector(pdev, 1)); 2722 - synchronize_irq(pci_irq_vector(pdev, 2)); 2723 - synchronize_irq(pci_irq_vector(pdev, 11)); 2673 + synchronize_irq(pci_irq_vector(pdev, IRQ_PHY_UP_DOWN_INDEX)); 2674 + synchronize_irq(pci_irq_vector(pdev, IRQ_CHL_INDEX)); 2675 + synchronize_irq(pci_irq_vector(pdev, IRQ_AXI_INDEX)); 2724 2676 for (i = 0; i < hisi_hba->queue_count; i++) 2725 2677 hisi_sas_write32(hisi_hba, OQ0_INT_SRC_MSK + 0x4 * i, 0x1); 2726 2678 2727 2679 for (i = 0; i < hisi_hba->cq_nvecs; i++) 2728 - synchronize_irq(pci_irq_vector(pdev, i + 16)); 2680 + synchronize_irq(pci_irq_vector(pdev, i + BASE_VECTORS_V3_HW)); 2729 2681 2730 2682 hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK1, 0xffffffff); 2731 2683 hisi_sas_write32(hisi_hba, ENT_INT_SRC_MSK2, 0xffffffff); ··· 2757 2709 2758 2710 hisi_sas_stop_phys(hisi_hba); 2759 2711 2760 - mdelay(10); 2712 + mdelay(HISI_SAS_DELAY_FOR_PHY_DISABLE); 2761 2713 2762 2714 reg_val = hisi_sas_read32(hisi_hba, AXI_MASTER_CFG_BASE + 2763 2715 AM_CTRL_GLOBAL); ··· 2894 2846 u32 intr_coal_ticks; 2895 2847 int ret; 2896 2848 2897 - ret = kstrtou32(buf, 10, &intr_coal_ticks); 2849 + ret = kstrtou32(buf, DECIMALISM_FLAG, &intr_coal_ticks); 2898 2850 if (ret) { 2899 2851 dev_err(dev, "Input data of interrupt coalesce unmatch\n"); 2900 2852 return -EINVAL; 2901 2853 } 2902 2854 2903 - if (intr_coal_ticks >= BIT(24)) { 2855 + if (intr_coal_ticks >= BIT(TICKS_BIT_INDEX)) { 2904 2856 dev_err(dev, "intr_coal_ticks must be less than 2^24!\n"); 2905 2857 return -EINVAL; 2906 2858 } ··· 2933 2885 u32 intr_coal_count; 2934 2886 int ret; 2935 2887 2936 - ret = kstrtou32(buf, 10, &intr_coal_count); 2888 + ret = kstrtou32(buf, DECIMALISM_FLAG, &intr_coal_count); 2937 2889 if (ret) { 2938 2890 dev_err(dev, "Input data of interrupt coalesce unmatch\n"); 2939 2891 return -EINVAL; 2940 2892 } 2941 2893 
2942 - if (intr_coal_count >= BIT(8)) { 2894 + if (intr_coal_count >= BIT(COUNT_BIT_INDEX)) { 2943 2895 dev_err(dev, "intr_coal_count must be less than 2^8!\n"); 2944 2896 return -EINVAL; 2945 2897 } ··· 3071 3023 3072 3024 static const struct hisi_sas_debugfs_reg debugfs_port_reg = { 3073 3025 .lu = debugfs_port_reg_lu, 3074 - .count = 0x100, 3026 + .count = PORT_REG_LENGTH, 3075 3027 .base_off = PORT_BASE, 3076 3028 }; 3077 3029 ··· 3145 3097 3146 3098 static const struct hisi_sas_debugfs_reg debugfs_global_reg = { 3147 3099 .lu = debugfs_global_reg_lu, 3148 - .count = 0x800, 3100 + .count = GLOBAL_REG_LENGTH, 3149 3101 }; 3150 3102 3151 3103 static const struct hisi_sas_debugfs_reg_lu debugfs_axi_reg_lu[] = { ··· 3158 3110 3159 3111 static const struct hisi_sas_debugfs_reg debugfs_axi_reg = { 3160 3112 .lu = debugfs_axi_reg_lu, 3161 - .count = 0x61, 3113 + .count = AXI_REG_LENGTH, 3162 3114 .base_off = AXI_MASTER_CFG_BASE, 3163 3115 }; 3164 3116 ··· 3175 3127 3176 3128 static const struct hisi_sas_debugfs_reg debugfs_ras_reg = { 3177 3129 .lu = debugfs_ras_reg_lu, 3178 - .count = 0x10, 3130 + .count = RAS_REG_LENGTH, 3179 3131 .base_off = RAS_BASE, 3180 3132 }; 3181 3133 ··· 3184 3136 struct Scsi_Host *shost = hisi_hba->shost; 3185 3137 3186 3138 scsi_block_requests(shost); 3187 - wait_cmds_complete_timeout_v3_hw(hisi_hba, 100, 5000); 3139 + wait_cmds_complete_timeout_v3_hw(hisi_hba, WAIT_RETRY, WAIT_TMROUT); 3188 3140 3189 3141 set_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags); 3190 3142 hisi_sas_sync_cqs(hisi_hba); ··· 3225 3177 return; 3226 3178 } 3227 3179 3228 - memset(buf, 0, cache_dw_size * 4); 3180 + memset(buf, 0, cache_dw_size * BYTE_TO_DW); 3229 3181 buf[0] = val; 3230 3182 3231 3183 for (i = 1; i < cache_dw_size; i++) ··· 3272 3224 reg_val = hisi_sas_phy_read32(hisi_hba, phy_no, PROG_PHY_LINK_RATE); 3273 3225 /* init OOB link rate as 1.5 Gbits */ 3274 3226 reg_val &= ~CFG_PROG_OOB_PHY_LINK_RATE_MSK; 3275 - reg_val |= (0x8 << 
CFG_PROG_OOB_PHY_LINK_RATE_OFF); 3227 + reg_val |= (SAS_LINK_RATE_1_5_GBPS << CFG_PROG_OOB_PHY_LINK_RATE_OFF); 3276 3228 hisi_sas_phy_write32(hisi_hba, phy_no, PROG_PHY_LINK_RATE, reg_val); 3277 3229 3278 3230 /* enable PHY */ ··· 3281 3233 3282 3234 #define SAS_PHY_BIST_CODE_INIT 0x1 3283 3235 #define SAS_PHY_BIST_CODE1_INIT 0X80 3236 + #define SAS_PHY_BIST_INIT_DELAY 100 3237 + #define SAS_PHY_BIST_LOOP_TEST_0 1 3238 + #define SAS_PHY_BIST_LOOP_TEST_1 2 3284 3239 static int debugfs_set_bist_v3_hw(struct hisi_hba *hisi_hba, bool enable) 3285 3240 { 3286 3241 u32 reg_val, mode_tmp; ··· 3302 3251 ffe[FFE_SATA_1_5_GBPS], ffe[FFE_SATA_3_0_GBPS], 3303 3252 ffe[FFE_SATA_6_0_GBPS], fix_code[FIXED_CODE], 3304 3253 fix_code[FIXED_CODE_1]); 3305 - mode_tmp = path_mode ? 2 : 1; 3254 + mode_tmp = path_mode ? SAS_PHY_BIST_LOOP_TEST_1 : 3255 + SAS_PHY_BIST_LOOP_TEST_0; 3306 3256 if (enable) { 3307 3257 /* some preparations before bist test */ 3308 3258 hisi_sas_bist_test_prep_v3_hw(hisi_hba); 3309 3259 3310 - /* set linkrate of bit test*/ 3260 + /* set linkrate of bit test */ 3311 3261 reg_val = hisi_sas_phy_read32(hisi_hba, phy_no, 3312 3262 PROG_PHY_LINK_RATE); 3313 3263 reg_val &= ~CFG_PROG_OOB_PHY_LINK_RATE_MSK; ··· 3346 3294 SAS_PHY_BIST_CODE1_INIT); 3347 3295 } 3348 3296 3349 - mdelay(100); 3297 + mdelay(SAS_PHY_BIST_INIT_DELAY); 3350 3298 reg_val |= (CFG_RX_BIST_EN_MSK | CFG_TX_BIST_EN_MSK); 3351 3299 hisi_sas_phy_write32(hisi_hba, phy_no, SAS_PHY_BIST_CTRL, 3352 3300 reg_val); 3353 3301 3354 3302 /* clear error bit */ 3355 - mdelay(100); 3303 + mdelay(SAS_PHY_BIST_INIT_DELAY); 3356 3304 hisi_sas_phy_read32(hisi_hba, phy_no, SAS_BIST_ERR_CNT); 3357 3305 } else { 3358 3306 /* disable bist test and recover it */ ··· 3406 3354 .shost_groups = host_v3_hw_groups, 3407 3355 .sdev_groups = sdev_groups_v3_hw, 3408 3356 .tag_alloc_policy_rr = true, 3409 - .host_reset = hisi_sas_host_reset, 3357 + .host_reset = hisi_sas_host_reset, 3410 3358 .host_tagset = 1, 3411 3359 .mq_poll = 
queue_complete_v3_hw, 3412 3360 }; ··· 3548 3496 for (phy_cnt = 0; phy_cnt < hisi_hba->n_phy; phy_cnt++) { 3549 3497 databuf = hisi_hba->debugfs_port_reg[dump_index][phy_cnt].data; 3550 3498 for (i = 0; i < port->count; i++, databuf++) { 3551 - offset = port->base_off + 4 * i; 3499 + offset = port->base_off + HISI_SAS_REG_MEM_SIZE * i; 3552 3500 *databuf = hisi_sas_phy_read32(hisi_hba, phy_cnt, 3553 3501 offset); 3554 3502 } ··· 3562 3510 int i; 3563 3511 3564 3512 for (i = 0; i < debugfs_global_reg.count; i++, databuf++) 3565 - *databuf = hisi_sas_read32(hisi_hba, 4 * i); 3513 + *databuf = hisi_sas_read32(hisi_hba, 3514 + HISI_SAS_REG_MEM_SIZE * i); 3566 3515 } 3567 3516 3568 3517 static void debugfs_snapshot_axi_reg_v3_hw(struct hisi_hba *hisi_hba) ··· 3574 3521 int i; 3575 3522 3576 3523 for (i = 0; i < axi->count; i++, databuf++) 3577 - *databuf = hisi_sas_read32(hisi_hba, 4 * i + axi->base_off); 3524 + *databuf = hisi_sas_read32(hisi_hba, 3525 + HISI_SAS_REG_MEM_SIZE * i + 3526 + axi->base_off); 3578 3527 } 3579 3528 3580 3529 static void debugfs_snapshot_ras_reg_v3_hw(struct hisi_hba *hisi_hba) ··· 3587 3532 int i; 3588 3533 3589 3534 for (i = 0; i < ras->count; i++, databuf++) 3590 - *databuf = hisi_sas_read32(hisi_hba, 4 * i + ras->base_off); 3535 + *databuf = hisi_sas_read32(hisi_hba, 3536 + HISI_SAS_REG_MEM_SIZE * i + 3537 + ras->base_off); 3591 3538 } 3592 3539 3593 3540 static void debugfs_snapshot_itct_reg_v3_hw(struct hisi_hba *hisi_hba) ··· 3652 3595 int i; 3653 3596 3654 3597 for (i = 0; i < reg->count; i++) { 3655 - int off = i * 4; 3598 + int off = i * HISI_SAS_REG_MEM_SIZE; 3656 3599 const char *name; 3657 3600 3658 3601 name = debugfs_to_reg_name_v3_hw(off, reg->base_off, 3659 3602 reg->lu); 3660 - 3661 3603 if (name) 3662 3604 seq_printf(s, "0x%08x 0x%08x %s\n", off, 3663 3605 regs_val[i], name); ··· 3729 3673 3730 3674 /* completion header size not fixed per HW version */ 3731 3675 seq_printf(s, "index %04d:\n\t", index); 3732 - for (i = 1; i 
<= sz / 8; i++, ptr++) { 3676 + for (i = 1; i <= sz / BYTE_TO_DDW; i++, ptr++) { 3733 3677 seq_printf(s, " 0x%016llx", le64_to_cpu(*ptr)); 3734 - if (!(i % 2)) 3678 + if (!(i % TWO_PARA_PER_LINE)) 3735 3679 seq_puts(s, "\n\t"); 3736 3680 } 3737 3681 ··· 3745 3689 3746 3690 /* completion header size not fixed per HW version */ 3747 3691 seq_printf(s, "index %04d:\n\t", index); 3748 - for (i = 1; i <= sz / 4; i++, ptr++) { 3692 + for (i = 1; i <= sz / BYTE_TO_DW; i++, ptr++) { 3749 3693 seq_printf(s, " 0x%08x", le32_to_cpu(*ptr)); 3750 - if (!(i % 4)) 3694 + if (!(i % FOUR_PARA_PER_LINE)) 3751 3695 seq_puts(s, "\n\t"); 3752 3696 } 3753 3697 seq_puts(s, "\n"); ··· 3832 3776 struct hisi_sas_debugfs_iost_cache *debugfs_iost_cache = s->private; 3833 3777 struct hisi_sas_iost_itct_cache *iost_cache = 3834 3778 debugfs_iost_cache->cache; 3835 - u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * 4; 3779 + u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * BYTE_TO_DW; 3836 3780 int i, tab_idx; 3837 3781 __le64 *iost; 3838 3782 ··· 3880 3824 struct hisi_sas_debugfs_itct_cache *debugfs_itct_cache = s->private; 3881 3825 struct hisi_sas_iost_itct_cache *itct_cache = 3882 3826 debugfs_itct_cache->cache; 3883 - u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * 4; 3827 + u32 cache_size = HISI_SAS_IOST_ITCT_CACHE_DW_SZ * BYTE_TO_DW; 3884 3828 int i, tab_idx; 3885 3829 __le64 *itct; 3886 3830 ··· 3909 3853 u64 *debugfs_timestamp; 3910 3854 struct dentry *dump_dentry; 3911 3855 struct dentry *dentry; 3912 - char name[256]; 3856 + char name[NAME_BUF_SIZE]; 3913 3857 int p; 3914 3858 int c; 3915 3859 int d; 3916 3860 3917 - snprintf(name, 256, "%d", index); 3861 + snprintf(name, NAME_BUF_SIZE, "%d", index); 3918 3862 3919 3863 dump_dentry = debugfs_create_dir(name, hisi_hba->debugfs_dump_dentry); 3920 3864 ··· 3930 3874 /* Create port dir and files */ 3931 3875 dentry = debugfs_create_dir("port", dump_dentry); 3932 3876 for (p = 0; p < hisi_hba->n_phy; p++) { 3933 - snprintf(name, 256, 
"%d", p); 3877 + snprintf(name, NAME_BUF_SIZE, "%d", p); 3934 3878 3935 3879 debugfs_create_file(name, 0400, dentry, 3936 3880 &hisi_hba->debugfs_port_reg[index][p], ··· 3940 3884 /* Create CQ dir and files */ 3941 3885 dentry = debugfs_create_dir("cq", dump_dentry); 3942 3886 for (c = 0; c < hisi_hba->queue_count; c++) { 3943 - snprintf(name, 256, "%d", c); 3887 + snprintf(name, NAME_BUF_SIZE, "%d", c); 3944 3888 3945 3889 debugfs_create_file(name, 0400, dentry, 3946 3890 &hisi_hba->debugfs_cq[index][c], ··· 3950 3894 /* Create DQ dir and files */ 3951 3895 dentry = debugfs_create_dir("dq", dump_dentry); 3952 3896 for (d = 0; d < hisi_hba->queue_count; d++) { 3953 - snprintf(name, 256, "%d", d); 3897 + snprintf(name, NAME_BUF_SIZE, "%d", d); 3954 3898 3955 3899 debugfs_create_file(name, 0400, dentry, 3956 3900 &hisi_hba->debugfs_dq[index][d], ··· 3987 3931 size_t count, loff_t *ppos) 3988 3932 { 3989 3933 struct hisi_hba *hisi_hba = file->f_inode->i_private; 3990 - char buf[8]; 3934 + char buf[DUMP_BUF_SIZE]; 3991 3935 3992 - if (count > 8) 3936 + if (count > DUMP_BUF_SIZE) 3993 3937 return -EFAULT; 3994 3938 3995 3939 if (copy_from_user(buf, user_buf, count)) ··· 4053 3997 { 4054 3998 struct seq_file *m = filp->private_data; 4055 3999 struct hisi_hba *hisi_hba = m->private; 4056 - char kbuf[16] = {}, *pkbuf; 4000 + char kbuf[BIST_BUF_SIZE] = {}, *pkbuf; 4057 4001 bool found = false; 4058 4002 int i; 4059 4003 ··· 4070 4014 4071 4015 for (i = 0; i < ARRAY_SIZE(debugfs_loop_linkrate_v3_hw); i++) { 4072 4016 if (!strncmp(debugfs_loop_linkrate_v3_hw[i].name, 4073 - pkbuf, 16)) { 4017 + pkbuf, BIST_BUF_SIZE)) { 4074 4018 hisi_hba->debugfs_bist_linkrate = 4075 4019 debugfs_loop_linkrate_v3_hw[i].value; 4076 4020 found = true; ··· 4128 4072 { 4129 4073 struct seq_file *m = filp->private_data; 4130 4074 struct hisi_hba *hisi_hba = m->private; 4131 - char kbuf[16] = {}, *pkbuf; 4075 + char kbuf[BIST_BUF_SIZE] = {}, *pkbuf; 4132 4076 bool found = false; 4133 4077 int i; 
4134 4078 ··· 4145 4089 4146 4090 for (i = 0; i < ARRAY_SIZE(debugfs_loop_code_mode_v3_hw); i++) { 4147 4091 if (!strncmp(debugfs_loop_code_mode_v3_hw[i].name, 4148 - pkbuf, 16)) { 4092 + pkbuf, BIST_BUF_SIZE)) { 4149 4093 hisi_hba->debugfs_bist_code_mode = 4150 4094 debugfs_loop_code_mode_v3_hw[i].value; 4151 4095 found = true; ··· 4260 4204 { 4261 4205 struct seq_file *m = filp->private_data; 4262 4206 struct hisi_hba *hisi_hba = m->private; 4263 - char kbuf[16] = {}, *pkbuf; 4207 + char kbuf[BIST_BUF_SIZE] = {}, *pkbuf; 4264 4208 bool found = false; 4265 4209 int i; 4266 4210 ··· 4276 4220 pkbuf = strstrip(kbuf); 4277 4221 4278 4222 for (i = 0; i < ARRAY_SIZE(debugfs_loop_modes_v3_hw); i++) { 4279 - if (!strncmp(debugfs_loop_modes_v3_hw[i].name, pkbuf, 16)) { 4223 + if (!strncmp(debugfs_loop_modes_v3_hw[i].name, pkbuf, 4224 + BIST_BUF_SIZE)) { 4280 4225 hisi_hba->debugfs_bist_mode = 4281 4226 debugfs_loop_modes_v3_hw[i].value; 4282 4227 found = true; ··· 4556 4499 4557 4500 debugfs_read_fifo_data_v3_hw(phy); 4558 4501 4559 - debugfs_show_row_32_v3_hw(s, 0, HISI_SAS_FIFO_DATA_DW_SIZE * 4, 4560 - (__le32 *)phy->fifo.rd_data); 4502 + debugfs_show_row_32_v3_hw(s, 0, 4503 + HISI_SAS_FIFO_DATA_DW_SIZE * HISI_SAS_REG_MEM_SIZE, 4504 + (__le32 *)phy->fifo.rd_data); 4561 4505 4562 4506 return 0; 4563 4507 } ··· 4690 4632 struct hisi_sas_debugfs_regs *regs = 4691 4633 &hisi_hba->debugfs_regs[dump_index][r]; 4692 4634 4693 - sz = debugfs_reg_array_v3_hw[r]->count * 4; 4635 + sz = debugfs_reg_array_v3_hw[r]->count * HISI_SAS_REG_MEM_SIZE; 4694 4636 regs->data = devm_kmalloc(dev, sz, GFP_KERNEL); 4695 4637 if (!regs->data) 4696 4638 goto fail; 4697 4639 regs->hisi_hba = hisi_hba; 4698 4640 } 4699 4641 4700 - sz = debugfs_port_reg.count * 4; 4642 + sz = debugfs_port_reg.count * HISI_SAS_REG_MEM_SIZE; 4701 4643 for (p = 0; p < hisi_hba->n_phy; p++) { 4702 4644 struct hisi_sas_debugfs_port *port = 4703 4645 &hisi_hba->debugfs_port_reg[dump_index][p]; ··· 4807 4749 { 4808 4750 
struct dentry *dir = debugfs_create_dir("phy_down_cnt", 4809 4751 hisi_hba->debugfs_dir); 4810 - char name[16]; 4752 + char name[NAME_BUF_SIZE]; 4811 4753 int phy_no; 4812 4754 4813 4755 for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) { 4814 - snprintf(name, 16, "%d", phy_no); 4756 + snprintf(name, NAME_BUF_SIZE, "%d", phy_no); 4815 4757 debugfs_create_file(name, 0600, dir, 4816 4758 &hisi_hba->phy[phy_no], 4817 4759 &debugfs_phy_down_cnt_v3_hw_fops); ··· 4996 4938 shost->max_id = HISI_SAS_MAX_DEVICES; 4997 4939 shost->max_lun = ~0; 4998 4940 shost->max_channel = 1; 4999 - shost->max_cmd_len = 16; 4941 + shost->max_cmd_len = HISI_SAS_MAX_CDB_LEN; 5000 4942 shost->can_queue = HISI_SAS_UNRESERVED_IPTT; 5001 4943 shost->cmd_per_lun = HISI_SAS_UNRESERVED_IPTT; 5002 4944 if (hisi_hba->iopoll_q_cnt) ··· 5074 5016 { 5075 5017 int i; 5076 5018 5077 - devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 1), hisi_hba); 5078 - devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 2), hisi_hba); 5079 - devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 11), hisi_hba); 5019 + devm_free_irq(&pdev->dev, pci_irq_vector(pdev, IRQ_PHY_UP_DOWN_INDEX), hisi_hba); 5020 + devm_free_irq(&pdev->dev, pci_irq_vector(pdev, IRQ_CHL_INDEX), hisi_hba); 5021 + devm_free_irq(&pdev->dev, pci_irq_vector(pdev, IRQ_AXI_INDEX), hisi_hba); 5080 5022 for (i = 0; i < hisi_hba->cq_nvecs; i++) { 5081 5023 struct hisi_sas_cq *cq = &hisi_hba->cq[i]; 5082 - int nr = hisi_sas_intr_conv ? 16 : 16 + i; 5024 + int nr = hisi_sas_intr_conv ? 
BASE_VECTORS_V3_HW : 5025 + BASE_VECTORS_V3_HW + i; 5083 5026 5084 5027 devm_free_irq(&pdev->dev, pci_irq_vector(pdev, nr), cq); 5085 5028 } ··· 5110 5051 { 5111 5052 struct sas_ha_struct *sha = pci_get_drvdata(pdev); 5112 5053 struct hisi_hba *hisi_hba = sha->lldd_ha; 5054 + struct Scsi_Host *shost = hisi_hba->shost; 5113 5055 struct device *dev = hisi_hba->dev; 5114 5056 int rc; 5115 5057 5058 + wait_event(shost->host_wait, !scsi_host_in_recovery(shost)); 5116 5059 dev_info(dev, "FLR prepare\n"); 5117 5060 down(&hisi_hba->sem); 5118 5061 set_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags);
-30
drivers/scsi/isci/remote_device.c
··· 392 392 } 393 393 } 394 394 395 - enum sci_status sci_remote_device_reset(struct isci_remote_device *idev) 396 - { 397 - struct sci_base_state_machine *sm = &idev->sm; 398 - enum sci_remote_device_states state = sm->current_state_id; 399 - 400 - switch (state) { 401 - case SCI_DEV_INITIAL: 402 - case SCI_DEV_STOPPED: 403 - case SCI_DEV_STARTING: 404 - case SCI_SMP_DEV_IDLE: 405 - case SCI_SMP_DEV_CMD: 406 - case SCI_DEV_STOPPING: 407 - case SCI_DEV_FAILED: 408 - case SCI_DEV_RESETTING: 409 - case SCI_DEV_FINAL: 410 - default: 411 - dev_warn(scirdev_to_dev(idev), "%s: in wrong state: %s\n", 412 - __func__, dev_state_name(state)); 413 - return SCI_FAILURE_INVALID_STATE; 414 - case SCI_DEV_READY: 415 - case SCI_STP_DEV_IDLE: 416 - case SCI_STP_DEV_CMD: 417 - case SCI_STP_DEV_NCQ: 418 - case SCI_STP_DEV_NCQ_ERROR: 419 - case SCI_STP_DEV_AWAIT_RESET: 420 - sci_change_state(sm, SCI_DEV_RESETTING); 421 - return SCI_SUCCESS; 422 - } 423 - } 424 - 425 395 enum sci_status sci_remote_device_frame_handler(struct isci_remote_device *idev, 426 396 u32 frame_index) 427 397 {
-15
drivers/scsi/isci/remote_device.h
··· 160 160 u32 timeout); 161 161 162 162 /** 163 - * sci_remote_device_reset() - This method will reset the device making it 164 - * ready for operation. This method must be called anytime the device is 165 - * reset either through a SMP phy control or a port hard reset request. 166 - * @remote_device: This parameter specifies the device to be reset. 167 - * 168 - * This method does not actually cause the device hardware to be reset. This 169 - * method resets the software object so that it will be operational after a 170 - * device hardware reset completes. An indication of whether the device reset 171 - * was accepted. SCI_SUCCESS This value is returned if the device reset is 172 - * started. 173 - */ 174 - enum sci_status sci_remote_device_reset( 175 - struct isci_remote_device *idev); 176 - 177 - /** 178 163 * enum sci_remote_device_states - This enumeration depicts all the states 179 164 * for the common remote device state machine. 180 165 * @SCI_DEV_INITIAL: Simply the initial state for the base remote device
+135 -1
drivers/scsi/lpfc/lpfc_attr.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 287 287 PAGE_SIZE); 288 288 strscpy(buf + PAGE_SIZE - 1 - sizeof(LPFC_INFO_MORE_STR), 289 289 LPFC_INFO_MORE_STR, sizeof(LPFC_INFO_MORE_STR) + 1); 290 + } 291 + return len; 292 + } 293 + 294 + static ssize_t 295 + lpfc_vmid_info_show(struct device *dev, struct device_attribute *attr, 296 + char *buf) 297 + { 298 + struct Scsi_Host *shost = class_to_shost(dev); 299 + struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata; 300 + struct lpfc_hba *phba = vport->phba; 301 + struct lpfc_vmid *vmp; 302 + int len = 0, i, j, k, cpu; 303 + char hxstr[LPFC_MAX_VMID_SIZE * 3] = {0}; 304 + struct timespec64 curr_tm; 305 + struct lpfc_vmid_priority_range *vr; 306 + u64 *lta, rct_acc = 0, max_lta = 0; 307 + struct tm tm_val; 308 + 309 + ktime_get_ts64(&curr_tm); 310 + 311 + len += scnprintf(buf + len, PAGE_SIZE - len, "Key 'vmid':\n"); 312 + 313 + /* if enabled continue, else return */ 314 + if (lpfc_is_vmid_enabled(phba)) { 315 + len += scnprintf(buf + len, PAGE_SIZE - len, 316 + "lpfc VMID Page: ON\n\n"); 317 + } else { 318 + len += scnprintf(buf + len, PAGE_SIZE - len, 319 + "lpfc VMID Page: OFF\n\n"); 320 + return len; 321 + } 322 + 323 + /* if using priority tagging */ 324 + if (vport->phba->pport->vmid_flag & LPFC_VMID_TYPE_PRIO) { 325 + len += scnprintf(buf + len, PAGE_SIZE - len, 326 + "VMID priority ranges:\n"); 327 + vr = vport->vmid_priority.vmid_range; 328 + for (i = 0; i < vport->vmid_priority.num_descriptors; ++i) { 329 + len += scnprintf(buf + len, PAGE_SIZE 
- len, 330 + "\t[x%x - x%x], qos: x%x\n", 331 + vr->low, vr->high, vr->qos); 332 + vr++; 333 + } 334 + } 335 + 336 + for (i = 0; i < phba->cfg_max_vmid; i++) { 337 + vmp = &vport->vmid[i]; 338 + max_lta = 0; 339 + 340 + /* only if the slot is used */ 341 + if (!(vmp->flag & LPFC_VMID_SLOT_USED) || 342 + !(vmp->flag & LPFC_VMID_REGISTERED)) 343 + continue; 344 + 345 + /* if using priority tagging */ 346 + if (vport->phba->pport->vmid_flag & LPFC_VMID_TYPE_PRIO) { 347 + len += scnprintf(buf + len, PAGE_SIZE - len, 348 + "VEM ID: %02x:%02x:%02x:%02x:" 349 + "%02x:%02x:%02x:%02x:%02x:%02x:" 350 + "%02x:%02x:%02x:%02x:%02x:%02x\n", 351 + vport->lpfc_vmid_host_uuid[0], 352 + vport->lpfc_vmid_host_uuid[1], 353 + vport->lpfc_vmid_host_uuid[2], 354 + vport->lpfc_vmid_host_uuid[3], 355 + vport->lpfc_vmid_host_uuid[4], 356 + vport->lpfc_vmid_host_uuid[5], 357 + vport->lpfc_vmid_host_uuid[6], 358 + vport->lpfc_vmid_host_uuid[7], 359 + vport->lpfc_vmid_host_uuid[8], 360 + vport->lpfc_vmid_host_uuid[9], 361 + vport->lpfc_vmid_host_uuid[10], 362 + vport->lpfc_vmid_host_uuid[11], 363 + vport->lpfc_vmid_host_uuid[12], 364 + vport->lpfc_vmid_host_uuid[13], 365 + vport->lpfc_vmid_host_uuid[14], 366 + vport->lpfc_vmid_host_uuid[15]); 367 + } 368 + 369 + /* IO stats */ 370 + len += scnprintf(buf + len, PAGE_SIZE - len, 371 + "ID00 READs:%llx WRITEs:%llx\n", 372 + vmp->io_rd_cnt, 373 + vmp->io_wr_cnt); 374 + for (j = 0, k = 0; j < strlen(vmp->host_vmid); j++, k += 3) 375 + sprintf((char *)(hxstr + k), "%2x ", vmp->host_vmid[j]); 376 + /* UUIDs */ 377 + len += scnprintf(buf + len, PAGE_SIZE - len, "UUID:\n"); 378 + len += scnprintf(buf + len, PAGE_SIZE - len, "%s\n", hxstr); 379 + 380 + len += scnprintf(buf + len, PAGE_SIZE - len, "String (%s)\n", 381 + vmp->host_vmid); 382 + 383 + if (vport->phba->pport->vmid_flag & LPFC_VMID_TYPE_PRIO) 384 + len += scnprintf(buf + len, PAGE_SIZE - len, 385 + "CS_CTL VMID: 0x%x\n", 386 + vmp->un.cs_ctl_vmid); 387 + else 388 + len += scnprintf(buf + len, 
PAGE_SIZE - len, 389 + "Application id: 0x%x\n", 390 + vmp->un.app_id); 391 + 392 + /* calculate the last access time */ 393 + for_each_possible_cpu(cpu) { 394 + lta = per_cpu_ptr(vmp->last_io_time, cpu); 395 + if (!lta) 396 + continue; 397 + 398 + /* if last access time is less than timeout */ 399 + if (time_after((unsigned long)*lta, jiffies)) 400 + continue; 401 + 402 + if (*lta > max_lta) 403 + max_lta = *lta; 404 + } 405 + 406 + rct_acc = jiffies_to_msecs(jiffies - max_lta) / 1000; 407 + /* current time */ 408 + time64_to_tm(ktime_get_real_seconds(), 409 + -(sys_tz.tz_minuteswest * 60) - rct_acc, &tm_val); 410 + 411 + len += scnprintf(buf + len, PAGE_SIZE - len, 412 + "Last Access Time :" 413 + "%ld-%d-%dT%02d:%02d:%02d\n\n", 414 + 1900 + tm_val.tm_year, tm_val.tm_mon + 1, 415 + tm_val.tm_mday, tm_val.tm_hour, 416 + tm_val.tm_min, tm_val.tm_sec); 417 + 418 + if (len >= PAGE_SIZE) 419 + return len; 420 + 421 + memset(hxstr, 0, LPFC_MAX_VMID_SIZE * 3); 290 422 } 291 423 return len; 292 424 } ··· 3143 3011 static DEVICE_ATTR(lpfc_xlane_supported, S_IRUGO, lpfc_oas_supported_show, 3144 3012 NULL); 3145 3013 static DEVICE_ATTR(cmf_info, 0444, lpfc_cmf_info_show, NULL); 3014 + static DEVICE_ATTR_RO(lpfc_vmid_info); 3146 3015 3147 3016 #define WWN_SZ 8 3148 3017 /** ··· 6250 6117 &dev_attr_lpfc_vmid_inactivity_timeout.attr, 6251 6118 &dev_attr_lpfc_vmid_app_header.attr, 6252 6119 &dev_attr_lpfc_vmid_priority_tagging.attr, 6120 + &dev_attr_lpfc_vmid_info.attr, 6253 6121 NULL, 6254 6122 }; 6255 6123
+2 -4
drivers/scsi/lpfc/lpfc_bsg.c
··· 2687 2687 evt->wait_time_stamp = jiffies; 2688 2688 time_left = wait_event_interruptible_timeout( 2689 2689 evt->wq, !list_empty(&evt->events_to_see), 2690 - msecs_to_jiffies(1000 * 2691 - ((phba->fc_ratov * 2) + LPFC_DRVR_TIMEOUT))); 2690 + secs_to_jiffies(phba->fc_ratov * 2 + LPFC_DRVR_TIMEOUT)); 2692 2691 if (list_empty(&evt->events_to_see)) 2693 2692 ret_val = (time_left) ? -EINTR : -ETIMEDOUT; 2694 2693 else { ··· 3257 3258 evt->waiting = 1; 3258 3259 time_left = wait_event_interruptible_timeout( 3259 3260 evt->wq, !list_empty(&evt->events_to_see), 3260 - msecs_to_jiffies(1000 * 3261 - ((phba->fc_ratov * 2) + LPFC_DRVR_TIMEOUT))); 3261 + secs_to_jiffies(phba->fc_ratov * 2 + LPFC_DRVR_TIMEOUT)); 3262 3262 evt->waiting = 0; 3263 3263 if (list_empty(&evt->events_to_see)) { 3264 3264 rc = (time_left) ? -EINTR : -ETIMEDOUT;
+19 -19
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 161 161 struct lpfc_hba *phba; 162 162 struct lpfc_work_evt *evtp; 163 163 unsigned long iflags; 164 - bool nvme_reg = false; 164 + bool drop_initial_node_ref = false; 165 165 166 166 ndlp = ((struct lpfc_rport_data *)rport->dd_data)->pnode; 167 167 if (!ndlp) ··· 188 188 spin_lock_irqsave(&ndlp->lock, iflags); 189 189 ndlp->rport = NULL; 190 190 191 - if (ndlp->fc4_xpt_flags & NVME_XPT_REGD) 192 - nvme_reg = true; 191 + /* Only 1 thread can drop the initial node reference. 192 + * If not registered for NVME and NLP_DROPPED flag is 193 + * clear, remove the initial reference. 194 + */ 195 + if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD)) 196 + if (!test_and_set_bit(NLP_DROPPED, &ndlp->nlp_flag)) 197 + drop_initial_node_ref = true; 193 198 194 199 /* The scsi_transport is done with the rport so lpfc cannot 195 200 * call to unregister. ··· 205 200 /* If NLP_XPT_REGD was cleared in lpfc_nlp_unreg_node, 206 201 * unregister calls were made to the scsi and nvme 207 202 * transports and refcnt was already decremented. Clear 208 - * the NLP_XPT_REGD flag only if the NVME Rport is 203 + * the NLP_XPT_REGD flag only if the NVME nrport is 209 204 * confirmed unregistered. 210 205 */ 211 - if (!nvme_reg && ndlp->fc4_xpt_flags & NLP_XPT_REGD) { 212 - ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD; 206 + if (ndlp->fc4_xpt_flags & NLP_XPT_REGD) { 207 + if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD)) 208 + ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD; 213 209 spin_unlock_irqrestore(&ndlp->lock, iflags); 214 - lpfc_nlp_put(ndlp); /* may free ndlp */ 210 + 211 + /* Release scsi transport reference */ 212 + lpfc_nlp_put(ndlp); 215 213 } else { 216 214 spin_unlock_irqrestore(&ndlp->lock, iflags); 217 215 } ··· 222 214 spin_unlock_irqrestore(&ndlp->lock, iflags); 223 215 } 224 216 225 - /* Only 1 thread can drop the initial node reference. If 226 - * another thread has set NLP_DROPPED, this thread is done. 
227 - */ 228 - if (nvme_reg || test_bit(NLP_DROPPED, &ndlp->nlp_flag)) 229 - return; 230 - 231 - set_bit(NLP_DROPPED, &ndlp->nlp_flag); 232 - lpfc_nlp_put(ndlp); 217 + if (drop_initial_node_ref) 218 + lpfc_nlp_put(ndlp); 233 219 return; 234 220 } 235 221 ··· 4697 4695 if (ndlp->fc4_xpt_flags & NVME_XPT_REGD) { 4698 4696 vport->phba->nport_event_cnt++; 4699 4697 if (vport->phba->nvmet_support == 0) { 4700 - /* Start devloss if target. */ 4701 - if (ndlp->nlp_type & NLP_NVME_TARGET) 4702 - lpfc_nvme_unregister_port(vport, ndlp); 4698 + lpfc_nvme_unregister_port(vport, ndlp); 4703 4699 } else { 4704 4700 /* NVMET has no upcall. */ 4705 4701 lpfc_nlp_put(ndlp); ··· 5053 5053 case CMD_GEN_REQUEST64_CR: 5054 5054 if (iocb->ndlp == ndlp) 5055 5055 return 1; 5056 - fallthrough; 5056 + break; 5057 5057 case CMD_ELS_REQUEST64_CR: 5058 5058 if (remote_id == ndlp->nlp_DID) 5059 5059 return 1;
+3
drivers/scsi/lpfc/lpfc_init.c
··· 1907 1907 uint32_t intr_mode; 1908 1908 LPFC_MBOXQ_t *mboxq; 1909 1909 1910 + /* Notifying the transport that the targets are going offline. */ 1911 + lpfc_scsi_dev_block(phba); 1912 + 1910 1913 if (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >= 1911 1914 LPFC_SLI_INTF_IF_TYPE_2) { 1912 1915 /*
+7 -3
drivers/scsi/lpfc/lpfc_nvme.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 2508 2508 "6031 RemotePort Registration failed " 2509 2509 "err: %d, DID x%06x ref %u\n", 2510 2510 ret, ndlp->nlp_DID, kref_read(&ndlp->kref)); 2511 - lpfc_nlp_put(ndlp); 2511 + 2512 + /* Only release reference if one was taken for this request */ 2513 + if (!oldrport) 2514 + lpfc_nlp_put(ndlp); 2512 2515 } 2513 2516 2514 2517 return ret; ··· 2617 2614 * clear any rport state until the transport calls back. 2618 2615 */ 2619 2616 2620 - if (ndlp->nlp_type & NLP_NVME_TARGET) { 2617 + if ((ndlp->nlp_type & NLP_NVME_TARGET) || 2618 + (remoteport->port_role & FC_PORT_ROLE_NVME_TARGET)) { 2621 2619 /* No concern about the role change on the nvme remoteport. 2622 2620 * The transport will update it. 2623 2621 */
+20 -10
drivers/scsi/lpfc/lpfc_sli.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2025 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 3926 3926 uint64_t sli_intr, cnt; 3927 3927 3928 3928 phba = from_timer(phba, t, eratt_poll); 3929 - if (!test_bit(HBA_SETUP, &phba->hba_flag)) 3930 - return; 3931 3929 3932 3930 if (test_bit(FC_UNLOADING, &phba->pport->load_flag)) 3933 3931 return; 3932 + 3933 + if (phba->sli_rev == LPFC_SLI_REV4 && 3934 + !test_bit(HBA_SETUP, &phba->hba_flag)) { 3935 + lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 3936 + "0663 HBA still initializing 0x%lx, restart " 3937 + "timer\n", 3938 + phba->hba_flag); 3939 + goto restart_timer; 3940 + } 3934 3941 3935 3942 /* Here we will also keep track of interrupts per sec of the hba */ 3936 3943 sli_intr = phba->sli.slistat.sli_intr; ··· 3957 3950 /* Check chip HA register for error event */ 3958 3951 eratt = lpfc_sli_check_eratt(phba); 3959 3952 3960 - if (eratt) 3953 + if (eratt) { 3961 3954 /* Tell the worker thread there is work to do */ 3962 3955 lpfc_worker_wake_up(phba); 3963 - else 3964 - /* Restart the timer for next eratt poll */ 3965 - mod_timer(&phba->eratt_poll, 3966 - jiffies + secs_to_jiffies(phba->eratt_poll_interval)); 3956 + return; 3957 + } 3958 + 3959 + restart_timer: 3960 + /* Restart the timer for next eratt poll */ 3961 + mod_timer(&phba->eratt_poll, 3962 + jiffies + secs_to_jiffies(phba->eratt_poll_interval)); 3967 3963 return; 3968 3964 } 3969 3965 ··· 6013 6003 phba->sli4_hba.flash_id = bf_get(lpfc_cntl_attr_flash_id, cntl_attr); 6014 6004 phba->sli4_hba.asic_rev = bf_get(lpfc_cntl_attr_asic_rev, cntl_attr); 6015 
6005 6016 - memset(phba->BIOSVersion, 0, sizeof(phba->BIOSVersion)); 6017 - strlcat(phba->BIOSVersion, (char *)cntl_attr->bios_ver_str, 6006 + memcpy(phba->BIOSVersion, cntl_attr->bios_ver_str, 6018 6007 sizeof(phba->BIOSVersion)); 6008 + phba->BIOSVersion[sizeof(phba->BIOSVersion) - 1] = '\0'; 6019 6009 6020 6010 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 6021 6011 "3086 lnk_type:%d, lnk_numb:%d, bios_ver:%s, "
+1 -1
drivers/scsi/lpfc/lpfc_version.h
··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "14.4.0.8" 23 + #define LPFC_DRIVER_VERSION "14.4.0.9" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */
+2 -2
drivers/scsi/lpfc/lpfc_vport.c
··· 505 505 wait_event_timeout(waitq, 506 506 !test_bit(NLP_WAIT_FOR_LOGO, 507 507 &ndlp->save_flags), 508 - msecs_to_jiffies(phba->fc_ratov * 2000)); 508 + secs_to_jiffies(phba->fc_ratov * 2)); 509 509 510 510 if (!test_bit(NLP_WAIT_FOR_LOGO, &ndlp->save_flags)) 511 511 goto logo_cmpl; ··· 703 703 wait_event_timeout(waitq, 704 704 !test_bit(NLP_WAIT_FOR_DA_ID, 705 705 &ndlp->save_flags), 706 - msecs_to_jiffies(phba->fc_ratov * 2000)); 706 + secs_to_jiffies(phba->fc_ratov * 2)); 707 707 } 708 708 709 709 lpfc_printf_vlog(vport, KERN_INFO, LOG_VPORT | LOG_ELS,
+53 -20
drivers/scsi/mpi3mr/mpi3mr_os.c
··· 985 985 goto out; 986 986 } 987 987 } 988 + dprint_event_bh(mrioc, 989 + "exposed target device with handle(0x%04x), perst_id(%d)\n", 990 + tgtdev->dev_handle, perst_id); 991 + goto out; 988 992 } else 989 993 mpi3mr_report_tgtdev_to_sas_transport(mrioc, tgtdev); 990 994 out: ··· 1348 1344 (struct mpi3_event_data_device_status_change *)fwevt->event_data; 1349 1345 1350 1346 dev_handle = le16_to_cpu(evtdata->dev_handle); 1351 - ioc_info(mrioc, 1352 - "%s :device status change: handle(0x%04x): reason code(0x%x)\n", 1353 - __func__, dev_handle, evtdata->reason_code); 1347 + dprint_event_bh(mrioc, 1348 + "processing device status change event bottom half for handle(0x%04x), rc(0x%02x)\n", 1349 + dev_handle, evtdata->reason_code); 1354 1350 switch (evtdata->reason_code) { 1355 1351 case MPI3_EVENT_DEV_STAT_RC_HIDDEN: 1356 1352 delete = 1; ··· 1369 1365 } 1370 1366 1371 1367 tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle); 1372 - if (!tgtdev) 1368 + if (!tgtdev) { 1369 + dprint_event_bh(mrioc, 1370 + "processing device status change event bottom half,\n" 1371 + "cannot identify target device for handle(0x%04x), rc(0x%02x)\n", 1372 + dev_handle, evtdata->reason_code); 1373 1373 goto out; 1374 + } 1374 1375 if (uhide) { 1375 1376 tgtdev->is_hidden = 0; 1376 1377 if (!tgtdev->host_exposed) ··· 1415 1406 1416 1407 perst_id = le16_to_cpu(dev_pg0->persistent_id); 1417 1408 dev_handle = le16_to_cpu(dev_pg0->dev_handle); 1418 - ioc_info(mrioc, 1419 - "%s :Device info change: handle(0x%04x): persist_id(0x%x)\n", 1420 - __func__, dev_handle, perst_id); 1409 + dprint_event_bh(mrioc, 1410 + "processing device info change event bottom half for handle(0x%04x), perst_id(%d)\n", 1411 + dev_handle, perst_id); 1421 1412 tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle); 1422 - if (!tgtdev) 1413 + if (!tgtdev) { 1414 + dprint_event_bh(mrioc, 1415 + "cannot identify target device for device info\n" 1416 + "change event handle(0x%04x), perst_id(%d)\n", 1417 + dev_handle, 
perst_id); 1423 1418 goto out; 1419 + } 1424 1420 mpi3mr_update_tgtdev(mrioc, tgtdev, dev_pg0, false); 1425 1421 if (!tgtdev->is_hidden && !tgtdev->host_exposed) 1426 1422 mpi3mr_report_tgtdev_to_host(mrioc, perst_id); ··· 2026 2012 mpi3mr_fwevt_del_from_list(mrioc, fwevt); 2027 2013 mrioc->current_event = fwevt; 2028 2014 2029 - if (mrioc->stop_drv_processing) 2015 + if (mrioc->stop_drv_processing) { 2016 + dprint_event_bh(mrioc, "ignoring event(0x%02x) in the bottom half handler\n" 2017 + "due to stop_drv_processing\n", fwevt->event_id); 2030 2018 goto out; 2019 + } 2031 2020 2032 2021 if (mrioc->unrecoverable) { 2033 2022 dprint_event_bh(mrioc, ··· 2041 2024 2042 2025 if (!fwevt->process_evt) 2043 2026 goto evt_ack; 2027 + 2028 + dprint_event_bh(mrioc, "processing event(0x%02x) in the bottom half handler\n", 2029 + fwevt->event_id); 2044 2030 2045 2031 switch (fwevt->event_id) { 2046 2032 case MPI3_EVENT_DEVICE_ADDED: ··· 2783 2763 goto out; 2784 2764 2785 2765 dev_handle = le16_to_cpu(evtdata->dev_handle); 2766 + dprint_event_th(mrioc, 2767 + "device status change event top half with rc(0x%02x) for handle(0x%04x)\n", 2768 + evtdata->reason_code, dev_handle); 2786 2769 2787 2770 switch (evtdata->reason_code) { 2788 2771 case MPI3_EVENT_DEV_STAT_RC_INT_DEVICE_RESET_STRT: ··· 2809 2786 } 2810 2787 2811 2788 tgtdev = mpi3mr_get_tgtdev_by_handle(mrioc, dev_handle); 2812 - if (!tgtdev) 2789 + if (!tgtdev) { 2790 + dprint_event_th(mrioc, 2791 + "processing device status change event could not identify device for handle(0x%04x)\n", 2792 + dev_handle); 2813 2793 goto out; 2794 + } 2814 2795 if (hide) 2815 2796 tgtdev->is_hidden = hide; 2816 2797 if (tgtdev->starget && tgtdev->starget->hostdata) { ··· 2890 2863 u16 shutdown_timeout = le16_to_cpu(evtdata->shutdown_timeout); 2891 2864 2892 2865 if (shutdown_timeout <= 0) { 2893 - ioc_warn(mrioc, 2866 + dprint_event_th(mrioc, 2894 2867 "%s :Invalid Shutdown Timeout received = %d\n", 2895 2868 __func__, shutdown_timeout); 
2896 2869 return; 2897 2870 } 2898 2871 2899 - ioc_info(mrioc, 2872 + dprint_event_th(mrioc, 2900 2873 "%s :Previous Shutdown Timeout Value = %d New Shutdown Timeout Value = %d\n", 2901 2874 __func__, mrioc->facts.shutdown_timeout, shutdown_timeout); 2902 2875 mrioc->facts.shutdown_timeout = shutdown_timeout; ··· 2972 2945 * @mrioc: Adapter instance reference 2973 2946 * @event_reply: event data 2974 2947 * 2975 - * Identify whteher the event has to handled and acknowledged 2976 - * and either process the event in the tophalf and/or schedule a 2977 - * bottom half through mpi3mr_fwevt_worker. 2948 + * Identifies whether the event has to be handled and acknowledged, 2949 + * and either processes the event in the top-half and/or schedule a 2950 + * bottom-half through mpi3mr_fwevt_worker(). 2978 2951 * 2979 2952 * Return: Nothing 2980 2953 */ ··· 3001 2974 struct mpi3_device_page0 *dev_pg0 = 3002 2975 (struct mpi3_device_page0 *)event_reply->event_data; 3003 2976 if (mpi3mr_create_tgtdev(mrioc, dev_pg0)) 3004 - ioc_err(mrioc, 3005 - "%s :Failed to add device in the device add event\n", 3006 - __func__); 2977 + dprint_event_th(mrioc, 2978 + "failed to process device added event for handle(0x%04x),\n" 2979 + "perst_id(%d) in the event top half handler\n", 2980 + le16_to_cpu(dev_pg0->dev_handle), 2981 + le16_to_cpu(dev_pg0->persistent_id)); 3007 2982 else 3008 2983 process_evt_bh = 1; 3009 2984 break; ··· 3068 3039 break; 3069 3040 } 3070 3041 if (process_evt_bh || ack_req) { 3042 + dprint_event_th(mrioc, 3043 + "scheduling bottom half handler for event(0x%02x),ack_required=%d\n", 3044 + evt_type, ack_req); 3071 3045 sz = event_reply->event_data_length * 4; 3072 3046 fwevt = mpi3mr_alloc_fwevt(sz); 3073 3047 if (!fwevt) { 3074 - ioc_info(mrioc, "%s :failure at %s:%d/%s()!\n", 3075 - __func__, __FILE__, __LINE__, __func__); 3048 + dprint_event_th(mrioc, 3049 + "failed to schedule bottom half handler for\n" 3050 + "event(0x%02x), ack_required=%d\n", evt_type, ack_req); 
3076 3051 return; 3077 3052 } 3078 3053
+2 -1
drivers/scsi/mpt3sas/mpt3sas_ctl.c
··· 2869 2869 if (ioc->facts.IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_MCTP_PASSTHRU) { 2870 2870 if (count == dev_index) { 2871 2871 spin_unlock(&gioc_lock); 2872 - return 0; 2872 + return ioc; 2873 2873 } 2874 + count++; 2874 2875 } 2875 2876 } 2876 2877 spin_unlock(&gioc_lock);
+2 -2
drivers/scsi/mvsas/mv_64xx.h
··· 101 101 VSR_PHY_MODE9 = 0x09, /* Test */ 102 102 VSR_PHY_MODE10 = 0x0A, /* Power */ 103 103 VSR_PHY_MODE11 = 0x0B, /* Phy Mode */ 104 - VSR_PHY_VS0 = 0x0C, /* Vednor Specific 0 */ 105 - VSR_PHY_VS1 = 0x0D, /* Vednor Specific 1 */ 104 + VSR_PHY_VS0 = 0x0C, /* Vendor Specific 0 */ 105 + VSR_PHY_VS1 = 0x0D, /* Vendor Specific 1 */ 106 106 }; 107 107 108 108 enum chip_register_bits {
+1 -1
drivers/scsi/pm8001/pm8001_ctl.c
··· 644 644 #define FLASH_CMD_SET_NVMD 0x02 645 645 646 646 struct flash_command { 647 - u8 command[8]; 647 + u8 command[8] __nonstring; 648 648 int code; 649 649 }; 650 650
-22
drivers/scsi/qedi/qedi_dbg.c
··· 103 103 ret: 104 104 va_end(va); 105 105 } 106 - 107 - int 108 - qedi_create_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter) 109 - { 110 - int ret = 0; 111 - 112 - for (; iter->name; iter++) { 113 - ret = sysfs_create_bin_file(&shost->shost_gendev.kobj, 114 - iter->attr); 115 - if (ret) 116 - pr_err("Unable to create sysfs %s attr, err(%d).\n", 117 - iter->name, ret); 118 - } 119 - return ret; 120 - } 121 - 122 - void 123 - qedi_remove_sysfs_attr(struct Scsi_Host *shost, struct sysfs_bin_attrs *iter) 124 - { 125 - for (; iter->name; iter++) 126 - sysfs_remove_bin_file(&shost->shost_gendev.kobj, iter->attr); 127 - }
-12
drivers/scsi/qedi/qedi_dbg.h
··· 87 87 void qedi_dbg_info(struct qedi_dbg_ctx *qedi, const char *func, u32 line, 88 88 u32 info, const char *fmt, ...); 89 89 90 - struct Scsi_Host; 91 - 92 - struct sysfs_bin_attrs { 93 - char *name; 94 - const struct bin_attribute *attr; 95 - }; 96 - 97 - int qedi_create_sysfs_attr(struct Scsi_Host *shost, 98 - struct sysfs_bin_attrs *iter); 99 - void qedi_remove_sysfs_attr(struct Scsi_Host *shost, 100 - struct sysfs_bin_attrs *iter); 101 - 102 90 /* DebugFS related code */ 103 91 struct qedi_list_of_funcs { 104 92 char *oper_str;
-1
drivers/scsi/qedi/qedi_gbl.h
··· 45 45 void qedi_iscsi_unmap_sg_list(struct qedi_cmd *cmd); 46 46 void qedi_update_itt_map(struct qedi_ctx *qedi, u32 tid, u32 proto_itt, 47 47 struct qedi_cmd *qedi_cmd); 48 - void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt); 49 48 void qedi_get_task_tid(struct qedi_ctx *qedi, u32 itt, int16_t *tid); 50 49 void qedi_process_iscsi_error(struct qedi_endpoint *ep, 51 50 struct iscsi_eqe_data *data);
-8
drivers/scsi/qedi/qedi_main.c
··· 1877 1877 WARN_ON(1); 1878 1878 } 1879 1879 1880 - void qedi_get_proto_itt(struct qedi_ctx *qedi, u32 tid, u32 *proto_itt) 1881 - { 1882 - *proto_itt = qedi->itt_map[tid].itt; 1883 - QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_CONN, 1884 - "Get itt map tid [0x%x with proto itt[0x%x]", 1885 - tid, *proto_itt); 1886 - } 1887 - 1888 1880 struct qedi_cmd *qedi_get_cmd_from_tid(struct qedi_ctx *qedi, u32 tid) 1889 1881 { 1890 1882 struct qedi_cmd *cmd = NULL;
-53
drivers/scsi/qla2xxx/qla_dbg.c
··· 2706 2706 } 2707 2707 2708 2708 /* 2709 - * This function is for formatting and logging log messages. 2710 - * It is to be used when vha is available. It formats the message 2711 - * and logs it to the messages file. All the messages will be logged 2712 - * irrespective of value of ql2xextended_error_logging. 2713 - * parameters: 2714 - * level: The level of the log messages to be printed in the 2715 - * messages file. 2716 - * vha: Pointer to the scsi_qla_host_t 2717 - * id: This is a unique id for the level. It identifies the 2718 - * part of the code from where the message originated. 2719 - * msg: The message to be displayed. 2720 - */ 2721 - void 2722 - ql_log_qp(uint32_t level, struct qla_qpair *qpair, int32_t id, 2723 - const char *fmt, ...) 2724 - { 2725 - va_list va; 2726 - struct va_format vaf; 2727 - char pbuf[128]; 2728 - 2729 - if (level > ql_errlev) 2730 - return; 2731 - 2732 - ql_ktrace(0, level, pbuf, NULL, qpair ? qpair->vha : NULL, id, fmt); 2733 - 2734 - if (!pbuf[0]) /* set by ql_ktrace */ 2735 - ql_dbg_prefix(pbuf, ARRAY_SIZE(pbuf), NULL, 2736 - qpair ? qpair->vha : NULL, id); 2737 - 2738 - va_start(va, fmt); 2739 - 2740 - vaf.fmt = fmt; 2741 - vaf.va = &va; 2742 - 2743 - switch (level) { 2744 - case ql_log_fatal: /* FATAL LOG */ 2745 - pr_crit("%s%pV", pbuf, &vaf); 2746 - break; 2747 - case ql_log_warn: 2748 - pr_err("%s%pV", pbuf, &vaf); 2749 - break; 2750 - case ql_log_info: 2751 - pr_warn("%s%pV", pbuf, &vaf); 2752 - break; 2753 - default: 2754 - pr_info("%s%pV", pbuf, &vaf); 2755 - break; 2756 - } 2757 - 2758 - va_end(va); 2759 - } 2760 - 2761 - /* 2762 2709 * This function is for formatting and logging debug information. 2763 2710 * It is to be used when vha is available. It formats the message 2764 2711 * and logs it to the messages file.
-3
drivers/scsi/qla2xxx/qla_dbg.h
··· 334 334 void __attribute__((format (printf, 4, 5))) 335 335 ql_log_pci(uint, struct pci_dev *pdev, uint, const char *fmt, ...); 336 336 337 - void __attribute__((format (printf, 4, 5))) 338 - ql_log_qp(uint32_t, struct qla_qpair *, int32_t, const char *fmt, ...); 339 - 340 337 /* Debug Levels */ 341 338 /* The 0x40000000 is the max value any debug level can have 342 339 * as ql2xextended_error_logging is of type signed int
-5
drivers/scsi/qla2xxx/qla_gbl.h
··· 164 164 extern int ql2xallocfwdump; 165 165 extern int ql2xextended_error_logging; 166 166 extern int ql2xextended_error_logging_ktrace; 167 - extern int ql2xiidmaenable; 168 167 extern int ql2xmqsupport; 169 168 extern int ql2xfwloadbin; 170 - extern int ql2xetsenable; 171 169 extern int ql2xshiftctondsd; 172 170 extern int ql2xdbwr; 173 171 extern int ql2xasynctmfenable; ··· 718 720 extern void *qla24xx_prep_ms_fdmi_iocb(scsi_qla_host_t *, uint32_t, uint32_t); 719 721 extern int qla2x00_fdmi_register(scsi_qla_host_t *); 720 722 extern int qla2x00_gfpn_id(scsi_qla_host_t *, sw_info_t *); 721 - extern int qla2x00_gpsc(scsi_qla_host_t *, sw_info_t *); 722 723 extern size_t qla2x00_get_sym_node_name(scsi_qla_host_t *, uint8_t *, size_t); 723 724 extern int qla2x00_chk_ms_status(scsi_qla_host_t *, ms_iocb_entry_t *, 724 725 struct ct_sns_rsp *, const char *); ··· 819 822 /* PCI related functions */ 820 823 extern int qla82xx_pci_config(struct scsi_qla_host *); 821 824 extern int qla82xx_pci_mem_read_2M(struct qla_hw_data *, u64, void *, int); 822 - extern int qla82xx_pci_region_offset(struct pci_dev *, int); 823 825 extern int qla82xx_iospace_config(struct qla_hw_data *); 824 826 825 827 /* Initialization related functions */ ··· 862 866 863 867 /* ISP 8021 IDC */ 864 868 extern void qla82xx_clear_drv_active(struct qla_hw_data *); 865 - extern uint32_t qla82xx_wait_for_state_change(scsi_qla_host_t *, uint32_t); 866 869 extern int qla82xx_idc_lock(struct qla_hw_data *); 867 870 extern void qla82xx_idc_unlock(struct qla_hw_data *); 868 871 extern int qla82xx_device_state_handler(scsi_qla_host_t *);
-90
drivers/scsi/qla2xxx/qla_gs.c
··· 2626 2626 } 2627 2627 2628 2628 /** 2629 - * qla2x00_gpsc() - FCS Get Port Speed Capabilities (GPSC) query. 2630 - * @vha: HA context 2631 - * @list: switch info entries to populate 2632 - * 2633 - * Returns 0 on success. 2634 - */ 2635 - int 2636 - qla2x00_gpsc(scsi_qla_host_t *vha, sw_info_t *list) 2637 - { 2638 - int rval; 2639 - uint16_t i; 2640 - struct qla_hw_data *ha = vha->hw; 2641 - ms_iocb_entry_t *ms_pkt; 2642 - struct ct_sns_req *ct_req; 2643 - struct ct_sns_rsp *ct_rsp; 2644 - struct ct_arg arg; 2645 - 2646 - if (!IS_IIDMA_CAPABLE(ha)) 2647 - return QLA_FUNCTION_FAILED; 2648 - if (!ha->flags.gpsc_supported) 2649 - return QLA_FUNCTION_FAILED; 2650 - 2651 - rval = qla2x00_mgmt_svr_login(vha); 2652 - if (rval) 2653 - return rval; 2654 - 2655 - arg.iocb = ha->ms_iocb; 2656 - arg.req_dma = ha->ct_sns_dma; 2657 - arg.rsp_dma = ha->ct_sns_dma; 2658 - arg.req_size = GPSC_REQ_SIZE; 2659 - arg.rsp_size = GPSC_RSP_SIZE; 2660 - arg.nport_handle = vha->mgmt_svr_loop_id; 2661 - 2662 - for (i = 0; i < ha->max_fibre_devices; i++) { 2663 - /* Issue GFPN_ID */ 2664 - /* Prepare common MS IOCB */ 2665 - ms_pkt = qla24xx_prep_ms_iocb(vha, &arg); 2666 - 2667 - /* Prepare CT request */ 2668 - ct_req = qla24xx_prep_ct_fm_req(ha->ct_sns, GPSC_CMD, 2669 - GPSC_RSP_SIZE); 2670 - ct_rsp = &ha->ct_sns->p.rsp; 2671 - 2672 - /* Prepare CT arguments -- port_name */ 2673 - memcpy(ct_req->req.gpsc.port_name, list[i].fabric_port_name, 2674 - WWN_SIZE); 2675 - 2676 - /* Execute MS IOCB */ 2677 - rval = qla2x00_issue_iocb(vha, ha->ms_iocb, ha->ms_iocb_dma, 2678 - sizeof(ms_iocb_entry_t)); 2679 - if (rval != QLA_SUCCESS) { 2680 - /*EMPTY*/ 2681 - ql_dbg(ql_dbg_disc, vha, 0x2059, 2682 - "GPSC issue IOCB failed (%d).\n", rval); 2683 - } else if ((rval = qla2x00_chk_ms_status(vha, ms_pkt, ct_rsp, 2684 - "GPSC")) != QLA_SUCCESS) { 2685 - /* FM command unsupported? 
*/ 2686 - if (rval == QLA_INVALID_COMMAND && 2687 - (ct_rsp->header.reason_code == 2688 - CT_REASON_INVALID_COMMAND_CODE || 2689 - ct_rsp->header.reason_code == 2690 - CT_REASON_COMMAND_UNSUPPORTED)) { 2691 - ql_dbg(ql_dbg_disc, vha, 0x205a, 2692 - "GPSC command unsupported, disabling " 2693 - "query.\n"); 2694 - ha->flags.gpsc_supported = 0; 2695 - rval = QLA_FUNCTION_FAILED; 2696 - break; 2697 - } 2698 - rval = QLA_FUNCTION_FAILED; 2699 - } else { 2700 - list->fp_speed = qla2x00_port_speed_capability( 2701 - be16_to_cpu(ct_rsp->rsp.gpsc.speed)); 2702 - ql_dbg(ql_dbg_disc, vha, 0x205b, 2703 - "GPSC ext entry - fpn " 2704 - "%8phN speeds=%04x speed=%04x.\n", 2705 - list[i].fabric_port_name, 2706 - be16_to_cpu(ct_rsp->rsp.gpsc.speeds), 2707 - be16_to_cpu(ct_rsp->rsp.gpsc.speed)); 2708 - } 2709 - 2710 - /* Last device exit. */ 2711 - if (list[i].d_id.b.rsvd_1 != 0) 2712 - break; 2713 - } 2714 - 2715 - return (rval); 2716 - } 2717 - 2718 - /** 2719 2629 * qla2x00_gff_id() - SNS Get FC-4 Features (GFF_ID) query. 2720 2630 * 2721 2631 * @vha: HA context
-50
drivers/scsi/qla2xxx/qla_nx.c
··· 1099 1099 unsigned offset, n; 1100 1100 struct qla_hw_data *ha = vha->hw; 1101 1101 1102 - struct crb_addr_pair { 1103 - long addr; 1104 - long data; 1105 - }; 1106 - 1107 1102 /* Halt all the individual PEGs and other blocks of the ISP */ 1108 1103 qla82xx_rom_lock(ha); 1109 1104 ··· 1589 1594 1590 1595 return (u8 *)&ha->hablob->fw->data[offset]; 1591 1596 } 1592 - 1593 - /* PCI related functions */ 1594 - int qla82xx_pci_region_offset(struct pci_dev *pdev, int region) 1595 - { 1596 - unsigned long val = 0; 1597 - u32 control; 1598 - 1599 - switch (region) { 1600 - case 0: 1601 - val = 0; 1602 - break; 1603 - case 1: 1604 - pci_read_config_dword(pdev, QLA82XX_PCI_REG_MSIX_TBL, &control); 1605 - val = control + QLA82XX_MSIX_TBL_SPACE; 1606 - break; 1607 - } 1608 - return val; 1609 - } 1610 - 1611 1597 1612 1598 int 1613 1599 qla82xx_iospace_config(struct qla_hw_data *ha) ··· 2908 2932 "HW State: DEV_QUIESCENT.\n"); 2909 2933 qla82xx_wr_32(ha, QLA82XX_CRB_DEV_STATE, QLA8XXX_DEV_QUIESCENT); 2910 2934 } 2911 - } 2912 - 2913 - /* 2914 - * qla82xx_wait_for_state_change 2915 - * Wait for device state to change from given current state 2916 - * 2917 - * Note: 2918 - * IDC lock must not be held upon entry 2919 - * 2920 - * Return: 2921 - * Changed device state. 2922 - */ 2923 - uint32_t 2924 - qla82xx_wait_for_state_change(scsi_qla_host_t *vha, uint32_t curr_state) 2925 - { 2926 - struct qla_hw_data *ha = vha->hw; 2927 - uint32_t dev_state; 2928 - 2929 - do { 2930 - msleep(1000); 2931 - qla82xx_idc_lock(ha); 2932 - dev_state = qla82xx_rd_32(ha, QLA82XX_CRB_DEV_STATE); 2933 - qla82xx_idc_unlock(ha); 2934 - } while (dev_state == curr_state); 2935 - 2936 - return dev_state; 2937 2935 } 2938 2936 2939 2937 void
-12
drivers/scsi/qla2xxx/qla_os.c
··· 176 176 " 1 -- Error isolation enabled only for DIX Type 0\n" 177 177 " 2 -- Error isolation enabled for all Types\n"); 178 178 179 - int ql2xiidmaenable = 1; 180 - module_param(ql2xiidmaenable, int, S_IRUGO); 181 - MODULE_PARM_DESC(ql2xiidmaenable, 182 - "Enables iIDMA settings " 183 - "Default is 1 - perform iIDMA. 0 - no iIDMA."); 184 - 185 179 int ql2xmqsupport = 1; 186 180 module_param(ql2xmqsupport, int, S_IRUGO); 187 181 MODULE_PARM_DESC(ql2xmqsupport, ··· 192 198 " interface.\n" 193 199 " 1 -- load firmware from flash.\n" 194 200 " 0 -- use default semantics.\n"); 195 - 196 - int ql2xetsenable; 197 - module_param(ql2xetsenable, int, S_IRUGO); 198 - MODULE_PARM_DESC(ql2xetsenable, 199 - "Enables firmware ETS burst." 200 - "Default is 0 - skip ETS enablement."); 201 201 202 202 int ql2xdbwr = 1; 203 203 module_param(ql2xdbwr, int, S_IRUGO|S_IWUSR);
-129
drivers/scsi/qla2xxx/qla_target.c
··· 1454 1454 return sess; 1455 1455 } 1456 1456 1457 - /* 1458 - * max_gen - specifies maximum session generation 1459 - * at which this deletion requestion is still valid 1460 - */ 1461 - void 1462 - qlt_fc_port_deleted(struct scsi_qla_host *vha, fc_port_t *fcport, int max_gen) 1463 - { 1464 - struct qla_tgt *tgt = vha->vha_tgt.qla_tgt; 1465 - struct fc_port *sess = fcport; 1466 - unsigned long flags; 1467 - 1468 - if (!vha->hw->tgt.tgt_ops) 1469 - return; 1470 - 1471 - if (!tgt) 1472 - return; 1473 - 1474 - spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags); 1475 - if (tgt->tgt_stop) { 1476 - spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags); 1477 - return; 1478 - } 1479 - if (!sess->se_sess) { 1480 - spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags); 1481 - return; 1482 - } 1483 - 1484 - if (max_gen - sess->generation < 0) { 1485 - spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags); 1486 - ql_dbg(ql_dbg_tgt_mgt, vha, 0xf092, 1487 - "Ignoring stale deletion request for se_sess %p / sess %p" 1488 - " for port %8phC, req_gen %d, sess_gen %d\n", 1489 - sess->se_sess, sess, sess->port_name, max_gen, 1490 - sess->generation); 1491 - return; 1492 - } 1493 - 1494 - ql_dbg(ql_dbg_tgt_mgt, vha, 0xf008, "qla_tgt_fc_port_deleted %p", sess); 1495 - 1496 - sess->local = 1; 1497 - spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags); 1498 - qlt_schedule_sess_for_deletion(sess); 1499 - } 1500 - 1501 1457 static inline int test_tgt_sess_count(struct qla_tgt *tgt) 1502 1458 { 1503 1459 struct qla_hw_data *ha = tgt->ha; ··· 5495 5539 spin_unlock_irqrestore(&vha->hw->tgt.q_full_lock, flags); 5496 5540 } 5497 5541 5498 - int 5499 - qlt_free_qfull_cmds(struct qla_qpair *qpair) 5500 - { 5501 - struct scsi_qla_host *vha = qpair->vha; 5502 - struct qla_hw_data *ha = vha->hw; 5503 - unsigned long flags; 5504 - struct qla_tgt_cmd *cmd, *tcmd; 5505 - struct list_head free_list, q_full_list; 5506 - int rc = 0; 5507 - 5508 - if (list_empty(&ha->tgt.q_full_list)) 5509 - return 
0; 5510 - 5511 - INIT_LIST_HEAD(&free_list); 5512 - INIT_LIST_HEAD(&q_full_list); 5513 - 5514 - spin_lock_irqsave(&vha->hw->tgt.q_full_lock, flags); 5515 - if (list_empty(&ha->tgt.q_full_list)) { 5516 - spin_unlock_irqrestore(&vha->hw->tgt.q_full_lock, flags); 5517 - return 0; 5518 - } 5519 - 5520 - list_splice_init(&vha->hw->tgt.q_full_list, &q_full_list); 5521 - spin_unlock_irqrestore(&vha->hw->tgt.q_full_lock, flags); 5522 - 5523 - spin_lock_irqsave(qpair->qp_lock_ptr, flags); 5524 - list_for_each_entry_safe(cmd, tcmd, &q_full_list, cmd_list) { 5525 - if (cmd->q_full) 5526 - /* cmd->state is a borrowed field to hold status */ 5527 - rc = __qlt_send_busy(qpair, &cmd->atio, cmd->state); 5528 - else if (cmd->term_exchg) 5529 - rc = __qlt_send_term_exchange(qpair, NULL, &cmd->atio); 5530 - 5531 - if (rc == -ENOMEM) 5532 - break; 5533 - 5534 - if (cmd->q_full) 5535 - ql_dbg(ql_dbg_io, vha, 0x3006, 5536 - "%s: busy sent for ox_id[%04x]\n", __func__, 5537 - be16_to_cpu(cmd->atio.u.isp24.fcp_hdr.ox_id)); 5538 - else if (cmd->term_exchg) 5539 - ql_dbg(ql_dbg_io, vha, 0x3007, 5540 - "%s: Term exchg sent for ox_id[%04x]\n", __func__, 5541 - be16_to_cpu(cmd->atio.u.isp24.fcp_hdr.ox_id)); 5542 - else 5543 - ql_dbg(ql_dbg_io, vha, 0x3008, 5544 - "%s: Unexpected cmd in QFull list %p\n", __func__, 5545 - cmd); 5546 - 5547 - list_move_tail(&cmd->cmd_list, &free_list); 5548 - 5549 - /* piggy back on hardware_lock for protection */ 5550 - vha->hw->tgt.num_qfull_cmds_alloc--; 5551 - } 5552 - spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); 5553 - 5554 - cmd = NULL; 5555 - 5556 - list_for_each_entry_safe(cmd, tcmd, &free_list, cmd_list) { 5557 - list_del(&cmd->cmd_list); 5558 - /* This cmd was never sent to TCM. 
There is no need 5559 - * to schedule free or call free_cmd 5560 - */ 5561 - qlt_free_cmd(cmd); 5562 - } 5563 - 5564 - if (!list_empty(&q_full_list)) { 5565 - spin_lock_irqsave(&vha->hw->tgt.q_full_lock, flags); 5566 - list_splice(&q_full_list, &vha->hw->tgt.q_full_list); 5567 - spin_unlock_irqrestore(&vha->hw->tgt.q_full_lock, flags); 5568 - } 5569 - 5570 - return rc; 5571 - } 5572 - 5573 5542 static void 5574 5543 qlt_send_busy(struct qla_qpair *qpair, struct atio_from_isp *atio, 5575 5544 uint16_t status) ··· 6970 7089 icb->firmware_options_1 |= cpu_to_le32(BIT_14); 6971 7090 } 6972 7091 } 6973 - 6974 - void 6975 - qlt_83xx_iospace_config(struct qla_hw_data *ha) 6976 - { 6977 - if (!QLA_TGT_MODE_ENABLED()) 6978 - return; 6979 - 6980 - ha->msix_count += 1; /* For ATIO Q */ 6981 - } 6982 - 6983 7092 6984 7093 void 6985 7094 qlt_modify_vp_config(struct scsi_qla_host *vha,
-3
drivers/scsi/qla2xxx/qla_target.h
··· 1014 1014 extern void qlt_lport_deregister(struct scsi_qla_host *); 1015 1015 extern void qlt_unreg_sess(struct fc_port *); 1016 1016 extern void qlt_fc_port_added(struct scsi_qla_host *, fc_port_t *); 1017 - extern void qlt_fc_port_deleted(struct scsi_qla_host *, fc_port_t *, int); 1018 1017 extern int __init qlt_init(void); 1019 1018 extern void qlt_exit(void); 1020 1019 extern void qlt_free_session_done(struct work_struct *); ··· 1081 1082 extern int qlt_stop_phase1(struct qla_tgt *); 1082 1083 extern void qlt_stop_phase2(struct qla_tgt *); 1083 1084 extern irqreturn_t qla83xx_msix_atio_q(int, void *); 1084 - extern void qlt_83xx_iospace_config(struct qla_hw_data *); 1085 - extern int qlt_free_qfull_cmds(struct qla_qpair *); 1086 1085 extern void qlt_logo_completion_handler(fc_port_t *, int); 1087 1086 extern void qlt_do_generation_tick(struct scsi_qla_host *, int *); 1088 1087
-5
drivers/scsi/qla4xxx/ql4_nx.c
··· 973 973 unsigned long off; 974 974 unsigned offset, n; 975 975 976 - struct crb_addr_pair { 977 - long addr; 978 - long data; 979 - }; 980 - 981 976 /* Halt all the indiviual PEGs and other blocks of the ISP */ 982 977 qla4_82xx_rom_lock(ha); 983 978
+206 -155
drivers/scsi/scsi_debug.c
··· 162 162 #define DEF_VPD_USE_HOSTNO 1 163 163 #define DEF_WRITESAME_LENGTH 0xFFFF 164 164 #define DEF_ATOMIC_WR 0 165 - #define DEF_ATOMIC_WR_MAX_LENGTH 8192 165 + #define DEF_ATOMIC_WR_MAX_LENGTH 128 166 166 #define DEF_ATOMIC_WR_ALIGN 2 167 167 #define DEF_ATOMIC_WR_GRAN 2 168 168 #define DEF_ATOMIC_WR_MAX_LENGTH_BNDRY (DEF_ATOMIC_WR_MAX_LENGTH) ··· 293 293 #define FF_MEDIA_IO (F_M_ACCESS | F_FAKE_RW) 294 294 #define FF_SA (F_SA_HIGH | F_SA_LOW) 295 295 #define F_LONG_DELAY (F_SSU_DELAY | F_SYNC_DELAY) 296 + 297 + /* Device selection bit mask */ 298 + #define DS_ALL 0xffffffff 299 + #define DS_SBC (1 << TYPE_DISK) 300 + #define DS_SSC (1 << TYPE_TAPE) 301 + #define DS_ZBC (1 << TYPE_ZBC) 302 + 303 + #define DS_NO_SSC (DS_ALL & ~DS_SSC) 296 304 297 305 #define SDEBUG_MAX_PARTS 4 298 306 ··· 480 472 /* for terminating element */ 481 473 u8 opcode; /* if num_attached > 0, preferred */ 482 474 u16 sa; /* service action */ 475 + u32 devsel; /* device type mask for this definition */ 483 476 u32 flags; /* OR-ed set of SDEB_F_* */ 484 477 int (*pfp)(struct scsi_cmnd *, struct sdebug_dev_info *); 485 478 const struct opcode_info_t *arrp; /* num_attached elements or NULL */ ··· 528 519 SDEB_I_WRITE_FILEMARKS = 35, 529 520 SDEB_I_SPACE = 36, 530 521 SDEB_I_FORMAT_MEDIUM = 37, 531 - SDEB_I_LAST_ELEM_P1 = 38, /* keep this last (previous + 1) */ 522 + SDEB_I_ERASE = 38, 523 + SDEB_I_LAST_ELEM_P1 = 39, /* keep this last (previous + 1) */ 532 524 }; 533 525 534 526 ··· 540 530 SDEB_I_READ, 0, SDEB_I_WRITE, 0, 0, 0, 0, 0, 541 531 SDEB_I_WRITE_FILEMARKS, SDEB_I_SPACE, SDEB_I_INQUIRY, 0, 0, 542 532 SDEB_I_MODE_SELECT, SDEB_I_RESERVE, SDEB_I_RELEASE, 543 - 0, 0, SDEB_I_MODE_SENSE, SDEB_I_START_STOP, 0, SDEB_I_SEND_DIAG, 533 + 0, SDEB_I_ERASE, SDEB_I_MODE_SENSE, SDEB_I_START_STOP, 0, SDEB_I_SEND_DIAG, 544 534 SDEB_I_ALLOW_REMOVAL, 0, 545 535 /* 0x20; 0x20->0x3f: 10 byte cdbs */ 546 536 0, 0, 0, 0, 0, SDEB_I_READ_CAPACITY, 0, 0, ··· 595 585 static int resp_log_sense(struct 
scsi_cmnd *, struct sdebug_dev_info *); 596 586 static int resp_readcap(struct scsi_cmnd *, struct sdebug_dev_info *); 597 587 static int resp_read_dt0(struct scsi_cmnd *, struct sdebug_dev_info *); 588 + static int resp_read_tape(struct scsi_cmnd *, struct sdebug_dev_info *); 598 589 static int resp_write_dt0(struct scsi_cmnd *, struct sdebug_dev_info *); 590 + static int resp_write_tape(struct scsi_cmnd *, struct sdebug_dev_info *); 599 591 static int resp_write_scat(struct scsi_cmnd *, struct sdebug_dev_info *); 600 592 static int resp_start_stop(struct scsi_cmnd *, struct sdebug_dev_info *); 601 593 static int resp_readcap16(struct scsi_cmnd *, struct sdebug_dev_info *); ··· 625 613 static int resp_locate(struct scsi_cmnd *, struct sdebug_dev_info *); 626 614 static int resp_write_filemarks(struct scsi_cmnd *, struct sdebug_dev_info *); 627 615 static int resp_space(struct scsi_cmnd *, struct sdebug_dev_info *); 616 + static int resp_read_position(struct scsi_cmnd *, struct sdebug_dev_info *); 628 617 static int resp_rewind(struct scsi_cmnd *, struct sdebug_dev_info *); 629 618 static int resp_format_medium(struct scsi_cmnd *, struct sdebug_dev_info *); 619 + static int resp_erase(struct scsi_cmnd *, struct sdebug_dev_info *); 630 620 631 621 static int sdebug_do_add_host(bool mk_new_store); 632 622 static int sdebug_add_host_helper(int per_host_idx); ··· 643 629 * should be placed in opcode_info_arr[], the others should be placed here. 
644 630 */ 645 631 static const struct opcode_info_t msense_iarr[] = { 646 - {0, 0x1a, 0, F_D_IN, NULL, NULL, 632 + {0, 0x1a, 0, DS_ALL, F_D_IN, NULL, NULL, 647 633 {6, 0xe8, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 648 634 }; 649 635 650 636 static const struct opcode_info_t mselect_iarr[] = { 651 - {0, 0x15, 0, F_D_OUT, NULL, NULL, 637 + {0, 0x15, 0, DS_ALL, F_D_OUT, NULL, NULL, 652 638 {6, 0xf1, 0, 0, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 653 639 }; 654 640 655 641 static const struct opcode_info_t read_iarr[] = { 656 - {0, 0x28, 0, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL,/* READ(10) */ 642 + {0, 0x28, 0, DS_NO_SSC, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL,/* READ(10) */ 657 643 {10, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 0, 658 644 0, 0, 0, 0} }, 659 - {0, 0x8, 0, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL, /* READ(6) */ 645 + {0, 0x8, 0, DS_NO_SSC, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL, /* READ(6) disk */ 660 646 {6, 0xff, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 661 - {0, 0xa8, 0, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL,/* READ(12) */ 647 + {0, 0x8, 0, DS_SSC, F_D_IN | FF_MEDIA_IO, resp_read_tape, NULL, /* READ(6) tape */ 648 + {6, 0x03, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 649 + {0, 0xa8, 0, DS_NO_SSC, F_D_IN | FF_MEDIA_IO, resp_read_dt0, NULL,/* READ(12) */ 662 650 {12, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 663 651 0xc7, 0, 0, 0, 0} }, 664 652 }; 665 653 666 654 static const struct opcode_info_t write_iarr[] = { 667 - {0, 0x2a, 0, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(10) */ 655 + {0, 0x2a, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(10) */ 668 656 NULL, {10, 0xfb, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 669 657 0, 0, 0, 0, 0, 0} }, 670 - {0, 0xa, 0, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(6) */ 658 + {0, 0xa, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(6) disk */ 671 659 NULL, {6, 0xff, 
0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 672 660 0, 0, 0} }, 673 - {0, 0xaa, 0, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(12) */ 661 + {0, 0xa, 0, DS_SSC, F_D_OUT | FF_MEDIA_IO, resp_write_tape, /* WRITE(6) tape */ 662 + NULL, {6, 0x01, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 663 + 0, 0, 0} }, 664 + {0, 0xaa, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_write_dt0, /* WRITE(12) */ 674 665 NULL, {12, 0xfb, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 675 666 0xbf, 0xc7, 0, 0, 0, 0} }, 676 667 }; 677 668 678 669 static const struct opcode_info_t verify_iarr[] = { 679 - {0, 0x2f, 0, F_D_OUT_MAYBE | FF_MEDIA_IO, resp_verify,/* VERIFY(10) */ 670 + {0, 0x2f, 0, DS_NO_SSC, F_D_OUT_MAYBE | FF_MEDIA_IO, resp_verify,/* VERIFY(10) */ 680 671 NULL, {10, 0xf7, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xff, 0xff, 0xc7, 681 672 0, 0, 0, 0, 0, 0} }, 682 673 }; 683 674 684 675 static const struct opcode_info_t sa_in_16_iarr[] = { 685 - {0, 0x9e, 0x12, F_SA_LOW | F_D_IN, resp_get_lba_status, NULL, 676 + {0, 0x9e, 0x12, DS_NO_SSC, F_SA_LOW | F_D_IN, resp_get_lba_status, NULL, 686 677 {16, 0x12, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 687 678 0xff, 0xff, 0xff, 0, 0xc7} }, /* GET LBA STATUS(16) */ 688 - {0, 0x9e, 0x16, F_SA_LOW | F_D_IN, resp_get_stream_status, NULL, 679 + {0, 0x9e, 0x16, DS_NO_SSC, F_SA_LOW | F_D_IN, resp_get_stream_status, NULL, 689 680 {16, 0x16, 0, 0, 0xff, 0xff, 0, 0, 0, 0, 0xff, 0xff, 0xff, 0xff, 690 681 0, 0} }, /* GET STREAM STATUS */ 691 682 }; 692 683 693 684 static const struct opcode_info_t vl_iarr[] = { /* VARIABLE LENGTH */ 694 - {0, 0x7f, 0xb, F_SA_HIGH | F_D_OUT | FF_MEDIA_IO, resp_write_dt0, 685 + {0, 0x7f, 0xb, DS_NO_SSC, F_SA_HIGH | F_D_OUT | FF_MEDIA_IO, resp_write_dt0, 695 686 NULL, {32, 0xc7, 0, 0, 0, 0, 0x3f, 0x18, 0x0, 0xb, 0xfa, 696 687 0, 0xff, 0xff, 0xff, 0xff} }, /* WRITE(32) */ 697 - {0, 0x7f, 0x11, F_SA_HIGH | F_D_OUT | FF_MEDIA_IO, resp_write_scat, 688 + {0, 0x7f, 0x11, DS_NO_SSC, F_SA_HIGH | F_D_OUT | FF_MEDIA_IO, 
resp_write_scat, 698 689 NULL, {32, 0xc7, 0, 0, 0, 0, 0x3f, 0x18, 0x0, 0x11, 0xf8, 699 690 0, 0xff, 0xff, 0x0, 0x0} }, /* WRITE SCATTERED(32) */ 700 691 }; 701 692 702 693 static const struct opcode_info_t maint_in_iarr[] = { /* MAINT IN */ 703 - {0, 0xa3, 0xc, F_SA_LOW | F_D_IN, resp_rsup_opcodes, NULL, 694 + {0, 0xa3, 0xc, DS_ALL, F_SA_LOW | F_D_IN, resp_rsup_opcodes, NULL, 704 695 {12, 0xc, 0x87, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0, 705 696 0xc7, 0, 0, 0, 0} }, /* REPORT SUPPORTED OPERATION CODES */ 706 - {0, 0xa3, 0xd, F_SA_LOW | F_D_IN, resp_rsup_tmfs, NULL, 697 + {0, 0xa3, 0xd, DS_ALL, F_SA_LOW | F_D_IN, resp_rsup_tmfs, NULL, 707 698 {12, 0xd, 0x80, 0, 0, 0, 0xff, 0xff, 0xff, 0xff, 0, 0xc7, 0, 0, 708 699 0, 0} }, /* REPORTED SUPPORTED TASK MANAGEMENT FUNCTIONS */ 709 700 }; 710 701 711 702 static const struct opcode_info_t write_same_iarr[] = { 712 - {0, 0x93, 0, F_D_OUT_MAYBE | FF_MEDIA_IO, resp_write_same_16, NULL, 703 + {0, 0x93, 0, DS_NO_SSC, F_D_OUT_MAYBE | FF_MEDIA_IO, resp_write_same_16, NULL, 713 704 {16, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 714 705 0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* WRITE SAME(16) */ 715 706 }; 716 707 717 708 static const struct opcode_info_t reserve_iarr[] = { 718 - {0, 0x16, 0, F_D_OUT, NULL, NULL, /* RESERVE(6) */ 709 + {0, 0x16, 0, DS_ALL, F_D_OUT, NULL, NULL, /* RESERVE(6) */ 719 710 {6, 0x1f, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 720 711 }; 721 712 722 713 static const struct opcode_info_t release_iarr[] = { 723 - {0, 0x17, 0, F_D_OUT, NULL, NULL, /* RELEASE(6) */ 714 + {0, 0x17, 0, DS_ALL, F_D_OUT, NULL, NULL, /* RELEASE(6) */ 724 715 {6, 0x1f, 0xff, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 725 716 }; 726 717 727 718 static const struct opcode_info_t sync_cache_iarr[] = { 728 - {0, 0x91, 0, F_SYNC_DELAY | F_M_ACCESS, resp_sync_cache, NULL, 719 + {0, 0x91, 0, DS_NO_SSC, F_SYNC_DELAY | F_M_ACCESS, resp_sync_cache, NULL, 729 720 {16, 0x6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 
0xff, 0xff, 730 721 0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* SYNC_CACHE (16) */ 731 722 }; 732 723 733 724 static const struct opcode_info_t pre_fetch_iarr[] = { 734 - {0, 0x90, 0, F_SYNC_DELAY | FF_MEDIA_IO, resp_pre_fetch, NULL, 725 + {0, 0x90, 0, DS_NO_SSC, F_SYNC_DELAY | FF_MEDIA_IO, resp_pre_fetch, NULL, 735 726 {16, 0x2, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 736 727 0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* PRE-FETCH (16) */ 728 + {0, 0x34, 0, DS_SSC, F_SYNC_DELAY | FF_MEDIA_IO, resp_read_position, NULL, 729 + {10, 0x1f, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xc7, 0, 0, 730 + 0, 0, 0, 0} }, /* READ POSITION (10) */ 737 731 }; 738 732 739 733 static const struct opcode_info_t zone_out_iarr[] = { /* ZONE OUT(16) */ 740 - {0, 0x94, 0x1, F_SA_LOW | F_M_ACCESS, resp_close_zone, NULL, 734 + {0, 0x94, 0x1, DS_NO_SSC, F_SA_LOW | F_M_ACCESS, resp_close_zone, NULL, 741 735 {16, 0x1, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 742 736 0xff, 0, 0, 0xff, 0xff, 0x1, 0xc7} }, /* CLOSE ZONE */ 743 - {0, 0x94, 0x2, F_SA_LOW | F_M_ACCESS, resp_finish_zone, NULL, 737 + {0, 0x94, 0x2, DS_NO_SSC, F_SA_LOW | F_M_ACCESS, resp_finish_zone, NULL, 744 738 {16, 0x2, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 745 739 0xff, 0, 0, 0xff, 0xff, 0x1, 0xc7} }, /* FINISH ZONE */ 746 - {0, 0x94, 0x4, F_SA_LOW | F_M_ACCESS, resp_rwp_zone, NULL, 740 + {0, 0x94, 0x4, DS_NO_SSC, F_SA_LOW | F_M_ACCESS, resp_rwp_zone, NULL, 747 741 {16, 0x4, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 748 742 0xff, 0, 0, 0xff, 0xff, 0x1, 0xc7} }, /* RESET WRITE POINTER */ 749 743 }; 750 744 751 745 static const struct opcode_info_t zone_in_iarr[] = { /* ZONE IN(16) */ 752 - {0, 0x95, 0x6, F_SA_LOW | F_D_IN | F_M_ACCESS, NULL, NULL, 746 + {0, 0x95, 0x6, DS_NO_SSC, F_SA_LOW | F_D_IN | F_M_ACCESS, NULL, NULL, 753 747 {16, 0x6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 754 748 0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* REPORT ZONES */ 755 749 }; ··· 768 746 * REPORT SUPPORTED OPERATION CODES. 
*/ 769 747 static const struct opcode_info_t opcode_info_arr[SDEB_I_LAST_ELEM_P1 + 1] = { 770 748 /* 0 */ 771 - {0, 0, 0, F_INV_OP | FF_RESPOND, NULL, NULL, /* unknown opcodes */ 749 + {0, 0, 0, DS_ALL, F_INV_OP | FF_RESPOND, NULL, NULL, /* unknown opcodes */ 772 750 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 773 - {0, 0x12, 0, FF_RESPOND | F_D_IN, resp_inquiry, NULL, /* INQUIRY */ 751 + {0, 0x12, 0, DS_ALL, FF_RESPOND | F_D_IN, resp_inquiry, NULL, /* INQUIRY */ 774 752 {6, 0xe3, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 775 - {0, 0xa0, 0, FF_RESPOND | F_D_IN, resp_report_luns, NULL, 753 + {0, 0xa0, 0, DS_ALL, FF_RESPOND | F_D_IN, resp_report_luns, NULL, 776 754 {12, 0xe3, 0xff, 0, 0, 0, 0xff, 0xff, 0xff, 0xff, 0, 0xc7, 0, 0, 777 755 0, 0} }, /* REPORT LUNS */ 778 - {0, 0x3, 0, FF_RESPOND | F_D_IN, resp_requests, NULL, 756 + {0, 0x3, 0, DS_ALL, FF_RESPOND | F_D_IN, resp_requests, NULL, 779 757 {6, 0xe1, 0, 0, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 780 - {0, 0x0, 0, F_M_ACCESS | F_RL_WLUN_OK, NULL, NULL,/* TEST UNIT READY */ 758 + {0, 0x0, 0, DS_ALL, F_M_ACCESS | F_RL_WLUN_OK, NULL, NULL,/* TEST UNIT READY */ 781 759 {6, 0, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 782 760 /* 5 */ 783 - {ARRAY_SIZE(msense_iarr), 0x5a, 0, F_D_IN, /* MODE SENSE(10) */ 761 + {ARRAY_SIZE(msense_iarr), 0x5a, 0, DS_ALL, F_D_IN, /* MODE SENSE(10) */ 784 762 resp_mode_sense, msense_iarr, {10, 0xf8, 0xff, 0xff, 0, 0, 0, 785 763 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0} }, 786 - {ARRAY_SIZE(mselect_iarr), 0x55, 0, F_D_OUT, /* MODE SELECT(10) */ 764 + {ARRAY_SIZE(mselect_iarr), 0x55, 0, DS_ALL, F_D_OUT, /* MODE SELECT(10) */ 787 765 resp_mode_select, mselect_iarr, {10, 0xf1, 0, 0, 0, 0, 0, 0xff, 788 766 0xff, 0xc7, 0, 0, 0, 0, 0, 0} }, 789 - {0, 0x4d, 0, F_D_IN, resp_log_sense, NULL, /* LOG SENSE */ 767 + {0, 0x4d, 0, DS_NO_SSC, F_D_IN, resp_log_sense, NULL, /* LOG SENSE */ 790 768 {10, 0xe3, 0xff, 0xff, 0, 0xff, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 791 769 0, 0, 0} 
}, 792 - {0, 0x25, 0, F_D_IN, resp_readcap, NULL, /* READ CAPACITY(10) */ 770 + {0, 0x25, 0, DS_NO_SSC, F_D_IN, resp_readcap, NULL, /* READ CAPACITY(10) */ 793 771 {10, 0xe1, 0xff, 0xff, 0xff, 0xff, 0, 0, 0x1, 0xc7, 0, 0, 0, 0, 794 772 0, 0} }, 795 - {ARRAY_SIZE(read_iarr), 0x88, 0, F_D_IN | FF_MEDIA_IO, /* READ(16) */ 773 + {ARRAY_SIZE(read_iarr), 0x88, 0, DS_NO_SSC, F_D_IN | FF_MEDIA_IO, /* READ(16) */ 796 774 resp_read_dt0, read_iarr, {16, 0xfe, 0xff, 0xff, 0xff, 0xff, 797 775 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc7} }, 798 776 /* 10 */ 799 - {ARRAY_SIZE(write_iarr), 0x8a, 0, F_D_OUT | FF_MEDIA_IO, 777 + {ARRAY_SIZE(write_iarr), 0x8a, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, 800 778 resp_write_dt0, write_iarr, /* WRITE(16) */ 801 779 {16, 0xfa, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 802 780 0xff, 0xff, 0xff, 0xff, 0xff, 0xc7} }, 803 - {0, 0x1b, 0, F_SSU_DELAY, resp_start_stop, NULL,/* START STOP UNIT */ 781 + {0, 0x1b, 0, DS_ALL, F_SSU_DELAY, resp_start_stop, NULL,/* START STOP UNIT */ 804 782 {6, 0x1, 0, 0xf, 0xf7, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 805 - {ARRAY_SIZE(sa_in_16_iarr), 0x9e, 0x10, F_SA_LOW | F_D_IN, 783 + {ARRAY_SIZE(sa_in_16_iarr), 0x9e, 0x10, DS_NO_SSC, F_SA_LOW | F_D_IN, 806 784 resp_readcap16, sa_in_16_iarr, /* SA_IN(16), READ CAPACITY(16) */ 807 785 {16, 0x10, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 808 786 0xff, 0xff, 0xff, 0xff, 0x1, 0xc7} }, 809 - {0, 0x9f, 0x12, F_SA_LOW | F_D_OUT | FF_MEDIA_IO, resp_write_scat, 787 + {0, 0x9f, 0x12, DS_NO_SSC, F_SA_LOW | F_D_OUT | FF_MEDIA_IO, resp_write_scat, 810 788 NULL, {16, 0x12, 0xf9, 0x0, 0xff, 0xff, 0, 0, 0xff, 0xff, 0xff, 811 789 0xff, 0xff, 0xff, 0xff, 0xc7} }, /* SA_OUT(16), WRITE SCAT(16) */ 812 - {ARRAY_SIZE(maint_in_iarr), 0xa3, 0xa, F_SA_LOW | F_D_IN, 790 + {ARRAY_SIZE(maint_in_iarr), 0xa3, 0xa, DS_ALL, F_SA_LOW | F_D_IN, 813 791 resp_report_tgtpgs, /* MAINT IN, REPORT TARGET PORT GROUPS */ 814 792 maint_in_iarr, {12, 0xea, 0, 0, 0, 0, 0xff, 0xff, 0xff, 
815 793 0xff, 0, 0xc7, 0, 0, 0, 0} }, 816 794 /* 15 */ 817 - {0, 0, 0, F_INV_OP | FF_RESPOND, NULL, NULL, /* MAINT OUT */ 795 + {0, 0, 0, DS_ALL, F_INV_OP | FF_RESPOND, NULL, NULL, /* MAINT OUT */ 818 796 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 819 - {ARRAY_SIZE(verify_iarr), 0x8f, 0, 797 + {ARRAY_SIZE(verify_iarr), 0x8f, 0, DS_NO_SSC, 820 798 F_D_OUT_MAYBE | FF_MEDIA_IO, resp_verify, /* VERIFY(16) */ 821 799 verify_iarr, {16, 0xf6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 822 800 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} }, 823 - {ARRAY_SIZE(vl_iarr), 0x7f, 0x9, F_SA_HIGH | F_D_IN | FF_MEDIA_IO, 801 + {ARRAY_SIZE(vl_iarr), 0x7f, 0x9, DS_NO_SSC, F_SA_HIGH | F_D_IN | FF_MEDIA_IO, 824 802 resp_read_dt0, vl_iarr, /* VARIABLE LENGTH, READ(32) */ 825 803 {32, 0xc7, 0, 0, 0, 0, 0x3f, 0x18, 0x0, 0x9, 0xfe, 0, 0xff, 0xff, 826 804 0xff, 0xff} }, 827 - {ARRAY_SIZE(reserve_iarr), 0x56, 0, F_D_OUT, 805 + {ARRAY_SIZE(reserve_iarr), 0x56, 0, DS_ALL, F_D_OUT, 828 806 NULL, reserve_iarr, /* RESERVE(10) <no response function> */ 829 807 {10, 0xff, 0xff, 0xff, 0, 0, 0, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 830 808 0} }, 831 - {ARRAY_SIZE(release_iarr), 0x57, 0, F_D_OUT, 809 + {ARRAY_SIZE(release_iarr), 0x57, 0, DS_ALL, F_D_OUT, 832 810 NULL, release_iarr, /* RELEASE(10) <no response function> */ 833 811 {10, 0x13, 0xff, 0xff, 0, 0, 0, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 834 812 0} }, 835 813 /* 20 */ 836 - {0, 0x1e, 0, 0, NULL, NULL, /* ALLOW REMOVAL */ 814 + {0, 0x1e, 0, DS_ALL, 0, NULL, NULL, /* ALLOW REMOVAL */ 837 815 {6, 0, 0, 0, 0x3, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 838 - {0, 0x1, 0, 0, resp_rewind, NULL, 816 + {0, 0x1, 0, DS_SSC, 0, resp_rewind, NULL, 839 817 {6, 0x1, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 840 - {0, 0, 0, F_INV_OP | FF_RESPOND, NULL, NULL, /* ATA_PT */ 818 + {0, 0, 0, DS_NO_SSC, F_INV_OP | FF_RESPOND, NULL, NULL, /* ATA_PT */ 841 819 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 842 - {0, 0x1d, F_D_OUT, 0, NULL, NULL, /* SEND 
DIAGNOSTIC */ 820 + {0, 0x1d, 0, DS_ALL, F_D_OUT, NULL, NULL, /* SEND DIAGNOSTIC */ 843 821 {6, 0xf7, 0, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 844 - {0, 0x42, 0, F_D_OUT | FF_MEDIA_IO, resp_unmap, NULL, /* UNMAP */ 822 + {0, 0x42, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_unmap, NULL, /* UNMAP */ 845 823 {10, 0x1, 0, 0, 0, 0, 0x3f, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0} }, 846 824 /* 25 */ 847 - {0, 0x3b, 0, F_D_OUT_MAYBE, resp_write_buffer, NULL, 825 + {0, 0x3b, 0, DS_NO_SSC, F_D_OUT_MAYBE, resp_write_buffer, NULL, 848 826 {10, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xc7, 0, 0, 849 827 0, 0, 0, 0} }, /* WRITE_BUFFER */ 850 - {ARRAY_SIZE(write_same_iarr), 0x41, 0, F_D_OUT_MAYBE | FF_MEDIA_IO, 828 + {ARRAY_SIZE(write_same_iarr), 0x41, 0, DS_NO_SSC, F_D_OUT_MAYBE | FF_MEDIA_IO, 851 829 resp_write_same_10, write_same_iarr, /* WRITE SAME(10) */ 852 830 {10, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 853 831 0, 0, 0, 0, 0} }, 854 - {ARRAY_SIZE(sync_cache_iarr), 0x35, 0, F_SYNC_DELAY | F_M_ACCESS, 832 + {ARRAY_SIZE(sync_cache_iarr), 0x35, 0, DS_NO_SSC, F_SYNC_DELAY | F_M_ACCESS, 855 833 resp_sync_cache, sync_cache_iarr, 856 834 {10, 0x7, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 0, 857 835 0, 0, 0, 0} }, /* SYNC_CACHE (10) */ 858 - {0, 0x89, 0, F_D_OUT | FF_MEDIA_IO, resp_comp_write, NULL, 836 + {0, 0x89, 0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, resp_comp_write, NULL, 859 837 {16, 0xf8, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0, 0, 860 838 0, 0xff, 0x3f, 0xc7} }, /* COMPARE AND WRITE */ 861 - {ARRAY_SIZE(pre_fetch_iarr), 0x34, 0, F_SYNC_DELAY | FF_MEDIA_IO, 839 + {ARRAY_SIZE(pre_fetch_iarr), 0x34, 0, DS_NO_SSC, F_SYNC_DELAY | FF_MEDIA_IO, 862 840 resp_pre_fetch, pre_fetch_iarr, 863 841 {10, 0x2, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 0, 864 842 0, 0, 0, 0} }, /* PRE-FETCH (10) */ 865 843 /* READ POSITION (10) */ 866 844 867 845 /* 30 */ 868 - {ARRAY_SIZE(zone_out_iarr), 0x94, 0x3, F_SA_LOW | F_M_ACCESS, 846 + 
{ARRAY_SIZE(zone_out_iarr), 0x94, 0x3, DS_NO_SSC, F_SA_LOW | F_M_ACCESS, 869 847 resp_open_zone, zone_out_iarr, /* ZONE_OUT(16), OPEN ZONE) */ 870 848 {16, 0x3 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 871 849 0xff, 0xff, 0x0, 0x0, 0xff, 0xff, 0x1, 0xc7} }, 872 - {ARRAY_SIZE(zone_in_iarr), 0x95, 0x0, F_SA_LOW | F_M_ACCESS, 850 + {ARRAY_SIZE(zone_in_iarr), 0x95, 0x0, DS_NO_SSC, F_SA_LOW | F_M_ACCESS, 873 851 resp_report_zones, zone_in_iarr, /* ZONE_IN(16), REPORT ZONES) */ 874 852 {16, 0x0 /* SA */, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 875 853 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xbf, 0xc7} }, 876 854 /* 32 */ 877 - {0, 0x0, 0x0, F_D_OUT | FF_MEDIA_IO, 855 + {0, 0x9c, 0x0, DS_NO_SSC, F_D_OUT | FF_MEDIA_IO, 878 856 resp_atomic_write, NULL, /* ATOMIC WRITE 16 */ 879 857 {16, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 880 858 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff} }, 881 - {0, 0x05, 0, F_D_IN, resp_read_blklimits, NULL, /* READ BLOCK LIMITS (6) */ 859 + {0, 0x05, 0, DS_SSC, F_D_IN, resp_read_blklimits, NULL, /* READ BLOCK LIMITS (6) */ 882 860 {6, 0, 0, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 883 - {0, 0x2b, 0, F_D_UNKN, resp_locate, NULL, /* LOCATE (10) */ 861 + {0, 0x2b, 0, DS_SSC, F_D_UNKN, resp_locate, NULL, /* LOCATE (10) */ 884 862 {10, 0x07, 0, 0xff, 0xff, 0xff, 0xff, 0, 0xff, 0xc7, 0, 0, 885 863 0, 0, 0, 0} }, 886 - {0, 0x10, 0, F_D_IN, resp_write_filemarks, NULL, /* WRITE FILEMARKS (6) */ 864 + {0, 0x10, 0, DS_SSC, F_D_IN, resp_write_filemarks, NULL, /* WRITE FILEMARKS (6) */ 887 865 {6, 0x01, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 888 - {0, 0x11, 0, F_D_IN, resp_space, NULL, /* SPACE (6) */ 866 + {0, 0x11, 0, DS_SSC, F_D_IN, resp_space, NULL, /* SPACE (6) */ 889 867 {6, 0x07, 0xff, 0xff, 0xff, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 890 - {0, 0x4, 0, 0, resp_format_medium, NULL, /* FORMAT MEDIUM (6) */ 868 + {0, 0x4, 0, DS_SSC, 0, resp_format_medium, NULL, /* FORMAT MEDIUM (6) */ 891 869 {6, 0x3, 0x7, 0, 0, 0xc7, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0} }, 892 - /* 38 */ 870 + {0, 0x19, 0, DS_SSC, F_D_IN, resp_erase, NULL, /* ERASE (6) */ 871 + {6, 0x03, 0x33, 0, 0, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 872 + /* 39 */ 893 873 /* sentinel */ 894 - {0xff, 0, 0, 0, NULL, NULL, /* terminating element */ 874 + {0xff, 0, 0, 0, 0, NULL, NULL, /* terminating element */ 895 875 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 896 876 }; 897 877 ··· 1038 1014 1039 1015 static struct dentry *sdebug_debugfs_root; 1040 1016 static ASYNC_DOMAIN_EXCLUSIVE(sdebug_async_domain); 1017 + 1018 + static u32 sdebug_get_devsel(struct scsi_device *sdp) 1019 + { 1020 + unsigned char devtype = sdp->type; 1021 + u32 devsel; 1022 + 1023 + if (devtype < 32) 1024 + devsel = (1 << devtype); 1025 + else 1026 + devsel = DS_ALL; 1027 + 1028 + return devsel; 1029 + } 1041 1030 1042 1031 static void sdebug_err_free(struct rcu_head *head) 1043 1032 { ··· 2069 2032 unsigned char *cmd = scp->cmnd; 2070 2033 u32 alloc_len, n; 2071 2034 int ret; 2072 - bool have_wlun, is_disk, is_zbc, is_disk_zbc; 2035 + bool have_wlun, is_disk, is_zbc, is_disk_zbc, is_tape; 2073 2036 2074 2037 alloc_len = get_unaligned_be16(cmd + 3); 2075 2038 arr = kzalloc(SDEBUG_MAX_INQ_ARR_SZ, GFP_ATOMIC); 2076 2039 if (! arr) 2077 2040 return DID_REQUEUE << 16; 2078 - is_disk = (sdebug_ptype == TYPE_DISK); 2041 + if (scp->device->type >= 32) { 2042 + is_disk = (sdebug_ptype == TYPE_DISK); 2043 + is_tape = (sdebug_ptype == TYPE_TAPE); 2044 + } else { 2045 + is_disk = (scp->device->type == TYPE_DISK); 2046 + is_tape = (scp->device->type == TYPE_TAPE); 2047 + } 2079 2048 is_zbc = devip->zoned; 2080 2049 is_disk_zbc = (is_disk || is_zbc); 2081 2050 have_wlun = scsi_is_wlun(scp->device->lun); ··· 2090 2047 else if (sdebug_no_lun_0 && (devip->lun == SDEBUG_LUN_0_VAL)) 2091 2048 pq_pdt = 0x7f; /* not present, PQ=3, PDT=0x1f */ 2092 2049 else 2093 - pq_pdt = (sdebug_ptype & 0x1f); 2050 + pq_pdt = ((scp->device->type >= 32 ? 
2051 + sdebug_ptype : scp->device->type) & 0x1f); 2094 2052 arr[0] = pq_pdt; 2095 2053 if (0x2 & cmd[1]) { /* CMDDT bit set */ 2096 2054 mk_sense_invalid_fld(scp, SDEB_IN_CDB, 1, 1); ··· 2214 2170 if (is_disk) { /* SBC-4 no version claimed */ 2215 2171 put_unaligned_be16(0x600, arr + n); 2216 2172 n += 2; 2217 - } else if (sdebug_ptype == TYPE_TAPE) { /* SSC-4 rev 3 */ 2173 + } else if (is_tape) { /* SSC-4 rev 3 */ 2218 2174 put_unaligned_be16(0x525, arr + n); 2219 2175 n += 2; 2220 2176 } else if (is_zbc) { /* ZBC BSR INCITS 536 revision 05 */ ··· 2323 2279 changing = (stopped_state != want_stop); 2324 2280 if (changing) 2325 2281 atomic_xchg(&devip->stopped, want_stop); 2326 - if (sdebug_ptype == TYPE_TAPE && !want_stop) { 2282 + if (scp->device->type == TYPE_TAPE && !want_stop) { 2327 2283 int i; 2328 2284 2329 2285 set_bit(SDEBUG_UA_NOT_READY_TO_READY, devip->uas_bm); /* not legal! */ ··· 2498 2454 u8 reporting_opts, req_opcode, sdeb_i, supp; 2499 2455 u16 req_sa, u; 2500 2456 u32 alloc_len, a_len; 2501 - int k, offset, len, errsts, count, bump, na; 2457 + int k, offset, len, errsts, bump, na; 2502 2458 const struct opcode_info_t *oip; 2503 2459 const struct opcode_info_t *r_oip; 2504 2460 u8 *arr; 2505 2461 u8 *cmd = scp->cmnd; 2462 + u32 devsel = sdebug_get_devsel(scp->device); 2506 2463 2507 2464 rctd = !!(cmd[2] & 0x80); 2508 2465 reporting_opts = cmd[2] & 0x7; ··· 2526 2481 } 2527 2482 switch (reporting_opts) { 2528 2483 case 0: /* all commands */ 2529 - /* count number of commands */ 2530 - for (count = 0, oip = opcode_info_arr; 2531 - oip->num_attached != 0xff; ++oip) { 2532 - if (F_INV_OP & oip->flags) 2533 - continue; 2534 - count += (oip->num_attached + 1); 2535 - } 2536 2484 bump = rctd ? 
20 : 8; 2537 - put_unaligned_be32(count * bump, arr); 2538 2485 for (offset = 4, oip = opcode_info_arr; 2539 2486 oip->num_attached != 0xff && offset < a_len; ++oip) { 2540 2487 if (F_INV_OP & oip->flags) 2541 2488 continue; 2489 + if ((devsel & oip->devsel) != 0) { 2490 + arr[offset] = oip->opcode; 2491 + put_unaligned_be16(oip->sa, arr + offset + 2); 2492 + if (rctd) 2493 + arr[offset + 5] |= 0x2; 2494 + if (FF_SA & oip->flags) 2495 + arr[offset + 5] |= 0x1; 2496 + put_unaligned_be16(oip->len_mask[0], arr + offset + 6); 2497 + if (rctd) 2498 + put_unaligned_be16(0xa, arr + offset + 8); 2499 + offset += bump; 2500 + } 2542 2501 na = oip->num_attached; 2543 - arr[offset] = oip->opcode; 2544 - put_unaligned_be16(oip->sa, arr + offset + 2); 2545 - if (rctd) 2546 - arr[offset + 5] |= 0x2; 2547 - if (FF_SA & oip->flags) 2548 - arr[offset + 5] |= 0x1; 2549 - put_unaligned_be16(oip->len_mask[0], arr + offset + 6); 2550 - if (rctd) 2551 - put_unaligned_be16(0xa, arr + offset + 8); 2552 2502 r_oip = oip; 2553 2503 for (k = 0, oip = oip->arrp; k < na; ++k, ++oip) { 2554 2504 if (F_INV_OP & oip->flags) 2555 2505 continue; 2556 - offset += bump; 2506 + if ((devsel & oip->devsel) == 0) 2507 + continue; 2557 2508 arr[offset] = oip->opcode; 2558 2509 put_unaligned_be16(oip->sa, arr + offset + 2); 2559 2510 if (rctd) ··· 2557 2516 if (FF_SA & oip->flags) 2558 2517 arr[offset + 5] |= 0x1; 2559 2518 put_unaligned_be16(oip->len_mask[0], 2560 - arr + offset + 6); 2519 + arr + offset + 6); 2561 2520 if (rctd) 2562 2521 put_unaligned_be16(0xa, 2563 2522 arr + offset + 8); 2523 + offset += bump; 2564 2524 } 2565 2525 oip = r_oip; 2566 - offset += bump; 2567 2526 } 2527 + put_unaligned_be32(offset - 4, arr); 2568 2528 break; 2569 2529 case 1: /* one command: opcode only */ 2570 2530 case 2: /* one command: opcode plus service action */ ··· 2591 2549 return check_condition_result; 2592 2550 } 2593 2551 if (0 == (FF_SA & oip->flags) && 2594 - req_opcode == oip->opcode) 2552 + (devsel & 
oip->devsel) != 0 && 2553 + req_opcode == oip->opcode) 2595 2554 supp = 3; 2596 2555 else if (0 == (FF_SA & oip->flags)) { 2597 2556 na = oip->num_attached; 2598 2557 for (k = 0, oip = oip->arrp; k < na; 2599 2558 ++k, ++oip) { 2600 - if (req_opcode == oip->opcode) 2559 + if (req_opcode == oip->opcode && 2560 + (devsel & oip->devsel) != 0) 2601 2561 break; 2602 2562 } 2603 2563 supp = (k >= na) ? 1 : 3; ··· 2607 2563 na = oip->num_attached; 2608 2564 for (k = 0, oip = oip->arrp; k < na; 2609 2565 ++k, ++oip) { 2610 - if (req_sa == oip->sa) 2566 + if (req_sa == oip->sa && 2567 + (devsel & oip->devsel) != 0) 2611 2568 break; 2612 2569 } 2613 2570 supp = (k >= na) ? 1 : 3; ··· 2959 2914 subpcode = cmd[3]; 2960 2915 msense_6 = (MODE_SENSE == cmd[0]); 2961 2916 llbaa = msense_6 ? false : !!(cmd[1] & 0x10); 2962 - is_disk = (sdebug_ptype == TYPE_DISK); 2917 + is_disk = (scp->device->type == TYPE_DISK); 2963 2918 is_zbc = devip->zoned; 2964 - is_tape = (sdebug_ptype == TYPE_TAPE); 2919 + is_tape = (scp->device->type == TYPE_TAPE); 2965 2920 if ((is_disk || is_zbc || is_tape) && !dbd) 2966 2921 bd_len = llbaa ? 16 : 8; 2967 2922 else ··· 3176 3131 md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2); 3177 3132 bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6); 3178 3133 off = (mselect6 ? 
4 : 8); 3179 - if (sdebug_ptype == TYPE_TAPE) { 3134 + if (scp->device->type == TYPE_TAPE) { 3180 3135 int blksize; 3181 3136 3182 3137 if (bd_len != 8) { ··· 3241 3196 } 3242 3197 break; 3243 3198 case 0xf: /* Compression mode page */ 3244 - if (sdebug_ptype != TYPE_TAPE) 3199 + if (scp->device->type != TYPE_TAPE) 3245 3200 goto bad_pcode; 3246 3201 if ((arr[off + 2] & 0x40) != 0) { 3247 3202 devip->tape_dce = (arr[off + 2] & 0x80) != 0; ··· 3249 3204 } 3250 3205 break; 3251 3206 case 0x11: /* Medium Partition Mode Page (tape) */ 3252 - if (sdebug_ptype == TYPE_TAPE) { 3207 + if (scp->device->type == TYPE_TAPE) { 3253 3208 int fld; 3254 3209 3255 3210 fld = process_medium_part_m_pg(devip, &arr[off], pg_len); ··· 3608 3563 return check_condition_result; 3609 3564 } 3610 3565 3566 + enum {SDEBUG_READ_POSITION_ARR_SZ = 20}; 3567 + static int resp_read_position(struct scsi_cmnd *scp, 3568 + struct sdebug_dev_info *devip) 3569 + { 3570 + u8 *cmd = scp->cmnd; 3571 + int all_length; 3572 + unsigned char arr[20]; 3573 + unsigned int pos; 3574 + 3575 + all_length = get_unaligned_be16(cmd + 7); 3576 + if ((cmd[1] & 0xfe) != 0 || 3577 + all_length != 0) { /* only short form */ 3578 + mk_sense_invalid_fld(scp, SDEB_IN_CDB, 3579 + all_length ? 
7 : 1, 0); 3580 + return check_condition_result; 3581 + } 3582 + memset(arr, 0, SDEBUG_READ_POSITION_ARR_SZ); 3583 + arr[1] = devip->tape_partition; 3584 + pos = devip->tape_location[devip->tape_partition]; 3585 + put_unaligned_be32(pos, arr + 4); 3586 + put_unaligned_be32(pos, arr + 8); 3587 + return fill_from_dev_buffer(scp, arr, SDEBUG_READ_POSITION_ARR_SZ); 3588 + } 3589 + 3611 3590 static int resp_rewind(struct scsi_cmnd *scp, 3612 3591 struct sdebug_dev_info *devip) 3613 3592 { ··· 3673 3604 int res = 0; 3674 3605 unsigned char *cmd = scp->cmnd; 3675 3606 3676 - if (sdebug_ptype != TYPE_TAPE) { 3677 - mk_sense_invalid_fld(scp, SDEB_IN_CDB, 0, -1); 3678 - return check_condition_result; 3679 - } 3680 3607 if (cmd[2] > 2) { 3681 3608 mk_sense_invalid_fld(scp, SDEB_IN_DATA, 2, -1); 3682 3609 return check_condition_result; ··· 3692 3627 return -EINVAL; 3693 3628 3694 3629 devip->tape_pending_nbr_partitions = -1; 3630 + 3631 + return 0; 3632 + } 3633 + 3634 + static int resp_erase(struct scsi_cmnd *scp, 3635 + struct sdebug_dev_info *devip) 3636 + { 3637 + int partition = devip->tape_partition; 3638 + int pos = devip->tape_location[partition]; 3639 + struct tape_block *blp; 3640 + 3641 + blp = devip->tape_blocks[partition] + pos; 3642 + blp->fl_size = TAPE_BLOCK_EOD_FLAG; 3695 3643 3696 3644 return 0; 3697 3645 } ··· 4545 4467 u8 *cmd = scp->cmnd; 4546 4468 bool meta_data_locked = false; 4547 4469 4548 - if (sdebug_ptype == TYPE_TAPE) 4549 - return resp_read_tape(scp, devip); 4550 - 4551 4470 switch (cmd[0]) { 4552 4471 case READ_16: 4553 4472 ei_lba = 0; ··· 4913 4838 struct sdeb_store_info *sip = devip2sip(devip, true); 4914 4839 u8 *cmd = scp->cmnd; 4915 4840 bool meta_data_locked = false; 4916 - 4917 - if (sdebug_ptype == TYPE_TAPE) 4918 - return resp_write_tape(scp, devip); 4919 4841 4920 4842 switch (cmd[0]) { 4921 4843 case WRITE_16: ··· 5645 5573 * 5646 5574 * The pcode 0x34 is also used for READ POSITION by tape devices. 
5647 5575 */ 5648 - enum {SDEBUG_READ_POSITION_ARR_SZ = 20}; 5649 5576 static int resp_pre_fetch(struct scsi_cmnd *scp, 5650 5577 struct sdebug_dev_info *devip) 5651 5578 { ··· 5655 5584 u8 *cmd = scp->cmnd; 5656 5585 struct sdeb_store_info *sip = devip2sip(devip, true); 5657 5586 u8 *fsp = sip->storep; 5658 - 5659 - if (sdebug_ptype == TYPE_TAPE) { 5660 - if (cmd[0] == PRE_FETCH) { /* READ POSITION (10) */ 5661 - int all_length; 5662 - unsigned char arr[20]; 5663 - unsigned int pos; 5664 - 5665 - all_length = get_unaligned_be16(cmd + 7); 5666 - if ((cmd[1] & 0xfe) != 0 || 5667 - all_length != 0) { /* only short form */ 5668 - mk_sense_invalid_fld(scp, SDEB_IN_CDB, 5669 - all_length ? 7 : 1, 0); 5670 - return check_condition_result; 5671 - } 5672 - memset(arr, 0, SDEBUG_READ_POSITION_ARR_SZ); 5673 - arr[1] = devip->tape_partition; 5674 - pos = devip->tape_location[devip->tape_partition]; 5675 - put_unaligned_be32(pos, arr + 4); 5676 - put_unaligned_be32(pos, arr + 8); 5677 - return fill_from_dev_buffer(scp, arr, 5678 - SDEBUG_READ_POSITION_ARR_SZ); 5679 - } 5680 - mk_sense_invalid_opcode(scp); 5681 - return check_condition_result; 5682 - } 5683 5587 5684 5588 if (cmd[0] == PRE_FETCH) { /* 10 byte cdb */ 5685 5589 lba = get_unaligned_be32(cmd + 2); ··· 6691 6645 6692 6646 debugfs_remove(devip->debugfs_entry); 6693 6647 6694 - if (sdebug_ptype == TYPE_TAPE) { 6648 + if (sdp->type == TYPE_TAPE) { 6695 6649 kfree(devip->tape_blocks[0]); 6696 6650 devip->tape_blocks[0] = NULL; 6697 6651 } ··· 6879 6833 6880 6834 static void scsi_tape_reset_clear(struct sdebug_dev_info *devip) 6881 6835 { 6882 - if (sdebug_ptype == TYPE_TAPE) { 6883 - int i; 6836 + int i; 6884 6837 6885 - devip->tape_blksize = TAPE_DEF_BLKSIZE; 6886 - devip->tape_density = TAPE_DEF_DENSITY; 6887 - devip->tape_partition = 0; 6888 - devip->tape_dce = 0; 6889 - for (i = 0; i < TAPE_MAX_PARTITIONS; i++) 6890 - devip->tape_location[i] = 0; 6891 - devip->tape_pending_nbr_partitions = -1; 6892 - /* Don't reset 
partitioning? */ 6893 - } 6838 + devip->tape_blksize = TAPE_DEF_BLKSIZE; 6839 + devip->tape_density = TAPE_DEF_DENSITY; 6840 + devip->tape_partition = 0; 6841 + devip->tape_dce = 0; 6842 + for (i = 0; i < TAPE_MAX_PARTITIONS; i++) 6843 + devip->tape_location[i] = 0; 6844 + devip->tape_pending_nbr_partitions = -1; 6845 + /* Don't reset partitioning? */ 6894 6846 } 6895 6847 6896 6848 static int scsi_debug_device_reset(struct scsi_cmnd *SCpnt) ··· 6906 6862 scsi_debug_stop_all_queued(sdp); 6907 6863 if (devip) { 6908 6864 set_bit(SDEBUG_UA_POR, devip->uas_bm); 6909 - scsi_tape_reset_clear(devip); 6865 + if (SCpnt->device->type == TYPE_TAPE) 6866 + scsi_tape_reset_clear(devip); 6910 6867 } 6911 6868 6912 6869 if (sdebug_fail_lun_reset(SCpnt)) { ··· 6946 6901 list_for_each_entry(devip, &sdbg_host->dev_info_list, dev_list) { 6947 6902 if (devip->target == sdp->id) { 6948 6903 set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm); 6949 - scsi_tape_reset_clear(devip); 6904 + if (SCpnt->device->type == TYPE_TAPE) 6905 + scsi_tape_reset_clear(devip); 6950 6906 ++k; 6951 6907 } 6952 6908 } ··· 6979 6933 6980 6934 list_for_each_entry(devip, &sdbg_host->dev_info_list, dev_list) { 6981 6935 set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm); 6982 - scsi_tape_reset_clear(devip); 6936 + if (SCpnt->device->type == TYPE_TAPE) 6937 + scsi_tape_reset_clear(devip); 6983 6938 ++k; 6984 6939 } 6985 6940 ··· 7004 6957 list_for_each_entry(devip, &sdbg_host->dev_info_list, 7005 6958 dev_list) { 7006 6959 set_bit(SDEBUG_UA_BUS_RESET, devip->uas_bm); 7007 - scsi_tape_reset_clear(devip); 6960 + if (SCpnt->device->type == TYPE_TAPE) 6961 + scsi_tape_reset_clear(devip); 7008 6962 ++k; 7009 6963 } 7010 6964 } ··· 9221 9173 u32 flags; 9222 9174 u16 sa; 9223 9175 u8 opcode = cmd[0]; 9176 + u32 devsel = sdebug_get_devsel(scp->device); 9224 9177 bool has_wlun_rl; 9225 9178 bool inject_now; 9226 9179 int ret = 0; ··· 9301 9252 else 9302 9253 sa = get_unaligned_be16(cmd + 8); 9303 9254 for (k = 0; k <= na; oip = 
r_oip->arrp + k++) { 9304 - if (opcode == oip->opcode && sa == oip->sa) 9255 + if (opcode == oip->opcode && sa == oip->sa && 9256 + (devsel & oip->devsel) != 0) 9305 9257 break; 9306 9258 } 9307 9259 } else { /* since no service action only check opcode */ 9308 9260 for (k = 0; k <= na; oip = r_oip->arrp + k++) { 9309 - if (opcode == oip->opcode) 9261 + if (opcode == oip->opcode && 9262 + (devsel & oip->devsel) != 0) 9310 9263 break; 9311 9264 } 9312 9265 }
-27
drivers/scsi/scsi_devinfo.c
··· 486 486 } 487 487 488 488 /** 489 - * scsi_dev_info_list_del_keyed - remove one dev_info list entry. 490 - * @vendor: vendor string 491 - * @model: model (product) string 492 - * @key: specify list to use 493 - * 494 - * Description: 495 - * Remove and destroy one dev_info entry for @vendor, @model 496 - * in list specified by @key. 497 - * 498 - * Returns: 0 OK, -error on failure. 499 - **/ 500 - int scsi_dev_info_list_del_keyed(char *vendor, char *model, 501 - enum scsi_devinfo_key key) 502 - { 503 - struct scsi_dev_info_list *found; 504 - 505 - found = scsi_dev_info_list_find(vendor, model, key); 506 - if (IS_ERR(found)) 507 - return PTR_ERR(found); 508 - 509 - list_del(&found->dev_info_list); 510 - kfree(found); 511 - return 0; 512 - } 513 - EXPORT_SYMBOL(scsi_dev_info_list_del_keyed); 514 - 515 - /** 516 489 * scsi_dev_info_list_add_str - parse dev_list and add to the scsi_dev_info_list. 517 490 * @dev_list: string of device flags to add 518 491 *
-2
drivers/scsi/scsi_priv.h
··· 79 79 char *model, char *strflags, 80 80 blist_flags_t flags, 81 81 enum scsi_devinfo_key key); 82 - extern int scsi_dev_info_list_del_keyed(char *vendor, char *model, 83 - enum scsi_devinfo_key key); 84 82 extern int scsi_dev_info_add_list(enum scsi_devinfo_key key, const char *name); 85 83 extern int scsi_dev_info_remove_list(enum scsi_devinfo_key key); 86 84
+1 -1
drivers/scsi/scsi_transport_fc.c
··· 3509 3509 * state as the LLDD would not have had an rport 3510 3510 * reference to pass us. 3511 3511 * 3512 - * Take no action on the del_timer failure as the state 3512 + * Take no action on the timer_delete() failure as the state 3513 3513 * machine state change will validate the 3514 3514 * transaction. 3515 3515 */
+1 -1
drivers/scsi/sd.c
··· 3215 3215 return false; 3216 3216 if (get_unaligned_be32(&buf.h.len) < sizeof(struct scsi_stream_status)) 3217 3217 return false; 3218 - return buf.h.stream_status[0].perm; 3218 + return buf.s.perm; 3219 3219 } 3220 3220 3221 3221 static void sd_read_io_hints(struct scsi_disk *sdkp, unsigned char *buffer)
+1 -2
drivers/scsi/sg.c
··· 1658 1658 1659 1659 static void unregister_sg_sysctls(void) 1660 1660 { 1661 - if (hdr) 1662 - unregister_sysctl_table(hdr); 1661 + unregister_sysctl_table(hdr); 1663 1662 } 1664 1663 #else 1665 1664 #define register_sg_sysctls() do { } while (0)
+130 -10
drivers/scsi/smartpqi/smartpqi_init.c
··· 33 33 #define BUILD_TIMESTAMP 34 34 #endif 35 35 36 - #define DRIVER_VERSION "2.1.30-031" 36 + #define DRIVER_VERSION "2.1.34-035" 37 37 #define DRIVER_MAJOR 2 38 38 #define DRIVER_MINOR 1 39 - #define DRIVER_RELEASE 30 40 - #define DRIVER_REVISION 31 39 + #define DRIVER_RELEASE 34 40 + #define DRIVER_REVISION 35 41 41 42 42 #define DRIVER_NAME "Microchip SmartPQI Driver (v" \ 43 43 DRIVER_VERSION BUILD_TIMESTAMP ")" ··· 68 68 static void pqi_verify_structures(void); 69 69 static void pqi_take_ctrl_offline(struct pqi_ctrl_info *ctrl_info, 70 70 enum pqi_ctrl_shutdown_reason ctrl_shutdown_reason); 71 + static void pqi_take_ctrl_devices_offline(struct pqi_ctrl_info *ctrl_info); 71 72 static void pqi_ctrl_offline_worker(struct work_struct *work); 72 73 static int pqi_scan_scsi_devices(struct pqi_ctrl_info *ctrl_info); 73 74 static void pqi_scan_start(struct Scsi_Host *shost); ··· 2012 2011 PQI_DEV_INFO_BUFFER_LENGTH - count, 2013 2012 "-:-"); 2014 2013 2015 - if (pqi_is_logical_device(device)) 2014 + if (pqi_is_logical_device(device)) { 2016 2015 count += scnprintf(buffer + count, 2017 2016 PQI_DEV_INFO_BUFFER_LENGTH - count, 2018 2017 " %08x%08x", 2019 2018 *((u32 *)&device->scsi3addr), 2020 2019 *((u32 *)&device->scsi3addr[4])); 2021 - else 2020 + } else if (ctrl_info->rpl_extended_format_4_5_supported) { 2021 + if (device->device_type == SA_DEVICE_TYPE_NVME) 2022 + count += scnprintf(buffer + count, 2023 + PQI_DEV_INFO_BUFFER_LENGTH - count, 2024 + " %016llx%016llx", 2025 + get_unaligned_be64(&device->wwid[0]), 2026 + get_unaligned_be64(&device->wwid[8])); 2027 + else 2028 + count += scnprintf(buffer + count, 2029 + PQI_DEV_INFO_BUFFER_LENGTH - count, 2030 + " %016llx", 2031 + get_unaligned_be64(&device->wwid[0])); 2032 + } else { 2022 2033 count += scnprintf(buffer + count, 2023 2034 PQI_DEV_INFO_BUFFER_LENGTH - count, 2024 - " %016llx%016llx", 2025 - get_unaligned_be64(&device->wwid[0]), 2026 - get_unaligned_be64(&device->wwid[8])); 2035 + " %016llx", 2036 + 
get_unaligned_be64(&device->wwid[0])); 2037 + } 2038 + 2027 2039 2028 2040 count += scnprintf(buffer + count, PQI_DEV_INFO_BUFFER_LENGTH - count, 2029 2041 " %s %.8s %.16s ", ··· 6004 5990 pqi_stream_data->next_lba = rmd.first_block + 6005 5991 rmd.block_cnt; 6006 5992 pqi_stream_data->last_accessed = jiffies; 6007 - per_cpu_ptr(device->raid_io_stats, smp_processor_id())->write_stream_cnt++; 5993 + per_cpu_ptr(device->raid_io_stats, raw_smp_processor_id())->write_stream_cnt++; 6008 5994 return true; 6009 5995 } 6010 5996 ··· 6083 6069 rc = pqi_raid_bypass_submit_scsi_cmd(ctrl_info, device, scmd, queue_group); 6084 6070 if (rc == 0 || rc == SCSI_MLQUEUE_HOST_BUSY) { 6085 6071 raid_bypassed = true; 6086 - per_cpu_ptr(device->raid_io_stats, smp_processor_id())->raid_bypass_cnt++; 6072 + per_cpu_ptr(device->raid_io_stats, raw_smp_processor_id())->raid_bypass_cnt++; 6087 6073 } 6088 6074 } 6089 6075 if (!raid_bypassed) ··· 9143 9129 pqi_ctrl_wait_until_quiesced(ctrl_info); 9144 9130 pqi_fail_all_outstanding_requests(ctrl_info); 9145 9131 pqi_ctrl_unblock_requests(ctrl_info); 9132 + pqi_take_ctrl_devices_offline(ctrl_info); 9146 9133 } 9147 9134 9148 9135 static void pqi_ctrl_offline_worker(struct work_struct *work) ··· 9216 9201 "controller offline: reason code 0x%x (%s)\n", 9217 9202 ctrl_shutdown_reason, pqi_ctrl_shutdown_reason_to_string(ctrl_shutdown_reason)); 9218 9203 schedule_work(&ctrl_info->ctrl_offline_work); 9204 + } 9205 + 9206 + static void pqi_take_ctrl_devices_offline(struct pqi_ctrl_info *ctrl_info) 9207 + { 9208 + int rc; 9209 + unsigned long flags; 9210 + struct pqi_scsi_dev *device; 9211 + 9212 + spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags); 9213 + list_for_each_entry(device, &ctrl_info->scsi_device_list, scsi_device_list_entry) { 9214 + rc = list_is_last(&device->scsi_device_list_entry, &ctrl_info->scsi_device_list); 9215 + if (rc) 9216 + continue; 9217 + 9218 + /* 9219 + * Is the sdev pointer NULL? 
9220 + */ 9221 + if (device->sdev) 9222 + scsi_device_set_state(device->sdev, SDEV_OFFLINE); 9223 + } 9224 + spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); 9219 9225 } 9220 9226 9221 9227 static void pqi_print_ctrl_info(struct pci_dev *pci_dev, ··· 9747 9711 }, 9748 9712 { 9749 9713 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9714 + 0x1bd4, 0x00a3) 9715 + }, 9716 + { 9717 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9750 9718 0x1ff9, 0x00a1) 9751 9719 }, 9752 9720 { ··· 10087 10047 }, 10088 10048 { 10089 10049 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10050 + 0x207d, 0x4044) 10051 + }, 10052 + { 10053 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10054 + 0x207d, 0x4054) 10055 + }, 10056 + { 10057 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10058 + 0x207d, 0x4084) 10059 + }, 10060 + { 10061 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10062 + 0x207d, 0x4094) 10063 + }, 10064 + { 10065 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10066 + 0x207d, 0x4140) 10067 + }, 10068 + { 10069 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10070 + 0x207d, 0x4240) 10071 + }, 10072 + { 10073 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10090 10074 PCI_VENDOR_ID_ADVANTECH, 0x8312) 10091 10075 }, 10092 10076 { ··· 10327 10263 }, 10328 10264 { 10329 10265 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10266 + 0x1018, 0x8238) 10267 + }, 10268 + { 10269 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10270 + 0x1f3f, 0x0610) 10271 + }, 10272 + { 10273 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10330 10274 PCI_VENDOR_ID_LENOVO, 0x0220) 10331 10275 }, 10332 10276 { ··· 10343 10271 }, 10344 10272 { 10345 10273 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10274 + PCI_VENDOR_ID_LENOVO, 0x0222) 10275 + }, 10276 + { 10277 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10278 + PCI_VENDOR_ID_LENOVO, 0x0223) 10279 + }, 10280 + { 10281 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10282 + PCI_VENDOR_ID_LENOVO, 0x0224) 10283 + }, 
10284 + { 10285 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10286 + PCI_VENDOR_ID_LENOVO, 0x0225) 10287 + }, 10288 + { 10289 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10346 10290 PCI_VENDOR_ID_LENOVO, 0x0520) 10291 + }, 10292 + { 10293 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10294 + PCI_VENDOR_ID_LENOVO, 0x0521) 10347 10295 }, 10348 10296 { 10349 10297 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, ··· 10384 10292 { 10385 10293 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10386 10294 PCI_VENDOR_ID_LENOVO, 0x0623) 10295 + }, 10296 + { 10297 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10298 + PCI_VENDOR_ID_LENOVO, 0x0624) 10299 + }, 10300 + { 10301 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10302 + PCI_VENDOR_ID_LENOVO, 0x0625) 10303 + }, 10304 + { 10305 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10306 + PCI_VENDOR_ID_LENOVO, 0x0626) 10307 + }, 10308 + { 10309 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10310 + PCI_VENDOR_ID_LENOVO, 0x0627) 10311 + }, 10312 + { 10313 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10314 + PCI_VENDOR_ID_LENOVO, 0x0628) 10387 10315 }, 10388 10316 { 10389 10317 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, ··· 10432 10320 { 10433 10321 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10434 10322 0x1137, 0x0300) 10323 + }, 10324 + { 10325 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10326 + 0x1ded, 0x3301) 10435 10327 }, 10436 10328 { 10437 10329 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, ··· 10584 10468 { 10585 10469 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10586 10470 0x1f51, 0x100a) 10471 + }, 10472 + { 10473 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10474 + 0x1f51, 0x100b) 10587 10475 }, 10588 10476 { 10589 10477 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
+332 -24
drivers/soc/qcom/ice.c
··· 21 21 22 22 #include <soc/qcom/ice.h> 23 23 24 - #define AES_256_XTS_KEY_SIZE 64 24 + #define AES_256_XTS_KEY_SIZE 64 /* for raw keys only */ 25 + #define QCOM_ICE_HWKM_WRAPPED_KEY_SIZE 100 /* assuming HWKM v2 */ 25 26 26 27 /* QCOM ICE registers */ 27 - #define QCOM_ICE_REG_VERSION 0x0008 28 - #define QCOM_ICE_REG_FUSE_SETTING 0x0010 29 - #define QCOM_ICE_REG_BIST_STATUS 0x0070 30 - #define QCOM_ICE_REG_ADVANCED_CONTROL 0x1000 31 28 32 - /* BIST ("built-in self-test") status flags */ 29 + #define QCOM_ICE_REG_CONTROL 0x0000 30 + #define QCOM_ICE_LEGACY_MODE_ENABLED BIT(0) 31 + 32 + #define QCOM_ICE_REG_VERSION 0x0008 33 + 34 + #define QCOM_ICE_REG_FUSE_SETTING 0x0010 35 + #define QCOM_ICE_FUSE_SETTING_MASK BIT(0) 36 + #define QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK BIT(1) 37 + #define QCOM_ICE_FORCE_HW_KEY1_SETTING_MASK BIT(2) 38 + 39 + #define QCOM_ICE_REG_BIST_STATUS 0x0070 33 40 #define QCOM_ICE_BIST_STATUS_MASK GENMASK(31, 28) 34 41 35 - #define QCOM_ICE_FUSE_SETTING_MASK 0x1 36 - #define QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK 0x2 37 - #define QCOM_ICE_FORCE_HW_KEY1_SETTING_MASK 0x4 42 + #define QCOM_ICE_REG_ADVANCED_CONTROL 0x1000 43 + 44 + #define QCOM_ICE_REG_CRYPTOCFG_BASE 0x4040 45 + #define QCOM_ICE_REG_CRYPTOCFG_SIZE 0x80 46 + #define QCOM_ICE_REG_CRYPTOCFG(slot) (QCOM_ICE_REG_CRYPTOCFG_BASE + \ 47 + QCOM_ICE_REG_CRYPTOCFG_SIZE * (slot)) 48 + union crypto_cfg { 49 + __le32 regval; 50 + struct { 51 + u8 dusize; 52 + u8 capidx; 53 + u8 reserved; 54 + #define QCOM_ICE_HWKM_CFG_ENABLE_VAL BIT(7) 55 + u8 cfge; 56 + }; 57 + }; 58 + 59 + /* QCOM ICE HWKM (Hardware Key Manager) registers */ 60 + 61 + #define HWKM_OFFSET 0x8000 62 + 63 + #define QCOM_ICE_REG_HWKM_TZ_KM_CTL (HWKM_OFFSET + 0x1000) 64 + #define QCOM_ICE_HWKM_DISABLE_CRC_CHECKS_VAL (BIT(1) | BIT(2)) 65 + 66 + #define QCOM_ICE_REG_HWKM_TZ_KM_STATUS (HWKM_OFFSET + 0x1004) 67 + #define QCOM_ICE_HWKM_KT_CLEAR_DONE BIT(0) 68 + #define QCOM_ICE_HWKM_BOOT_CMD_LIST0_DONE BIT(1) 69 + #define 
QCOM_ICE_HWKM_BOOT_CMD_LIST1_DONE BIT(2) 70 + #define QCOM_ICE_HWKM_CRYPTO_BIST_DONE_V2 BIT(7) 71 + #define QCOM_ICE_HWKM_BIST_DONE_V2 BIT(9) 72 + 73 + #define QCOM_ICE_REG_HWKM_BANK0_BANKN_IRQ_STATUS (HWKM_OFFSET + 0x2008) 74 + #define QCOM_ICE_HWKM_RSP_FIFO_CLEAR_VAL BIT(3) 75 + 76 + #define QCOM_ICE_REG_HWKM_BANK0_BBAC_0 (HWKM_OFFSET + 0x5000) 77 + #define QCOM_ICE_REG_HWKM_BANK0_BBAC_1 (HWKM_OFFSET + 0x5004) 78 + #define QCOM_ICE_REG_HWKM_BANK0_BBAC_2 (HWKM_OFFSET + 0x5008) 79 + #define QCOM_ICE_REG_HWKM_BANK0_BBAC_3 (HWKM_OFFSET + 0x500C) 80 + #define QCOM_ICE_REG_HWKM_BANK0_BBAC_4 (HWKM_OFFSET + 0x5010) 38 81 39 82 #define qcom_ice_writel(engine, val, reg) \ 40 83 writel((val), (engine)->base + (reg)) ··· 85 42 #define qcom_ice_readl(engine, reg) \ 86 43 readl((engine)->base + (reg)) 87 44 45 + static bool qcom_ice_use_wrapped_keys; 46 + module_param_named(use_wrapped_keys, qcom_ice_use_wrapped_keys, bool, 0660); 47 + MODULE_PARM_DESC(use_wrapped_keys, 48 + "Support wrapped keys instead of raw keys, if available on the platform"); 49 + 88 50 struct qcom_ice { 89 51 struct device *dev; 90 52 void __iomem *base; 91 53 92 54 struct clk *core_clk; 55 + bool use_hwkm; 56 + bool hwkm_init_complete; 93 57 }; 94 58 95 59 static bool qcom_ice_check_supported(struct qcom_ice *ice) ··· 126 76 return false; 127 77 } 128 78 79 + /* 80 + * Check for HWKM support and decide whether to use it or not. ICE 81 + * v3.2.1 and later have HWKM v2. ICE v3.2.0 has HWKM v1. Earlier ICE 82 + * versions don't have HWKM at all. However, for HWKM to be fully 83 + * usable by Linux, the TrustZone software also needs to support certain 84 + * SCM calls including the ones to generate and prepare keys. That 85 + * effectively makes the earliest supported SoC be SM8650, which has 86 + * HWKM v2. Therefore, this driver doesn't include support for HWKM v1, 87 + * and it checks for the SCM call support before it decides to use HWKM. 
88 + * 89 + * Also, since HWKM and legacy mode are mutually exclusive, and 90 + * ICE-capable storage driver(s) need to know early on whether to 91 + * advertise support for raw keys or wrapped keys, HWKM cannot be used 92 + * unconditionally. A module parameter is used to opt into using it. 93 + */ 94 + if ((major >= 4 || 95 + (major == 3 && (minor >= 3 || (minor == 2 && step >= 1)))) && 96 + qcom_scm_has_wrapped_key_support()) { 97 + if (qcom_ice_use_wrapped_keys) { 98 + dev_info(dev, "Using HWKM. Supporting wrapped keys only.\n"); 99 + ice->use_hwkm = true; 100 + } else { 101 + dev_info(dev, "Not using HWKM. Supporting raw keys only.\n"); 102 + } 103 + } else if (qcom_ice_use_wrapped_keys) { 104 + dev_warn(dev, "A supported HWKM is not present. Ignoring qcom_ice.use_wrapped_keys=1.\n"); 105 + } else { 106 + dev_info(dev, "A supported HWKM is not present. Supporting raw keys only.\n"); 107 + } 129 108 return true; 130 109 } 131 110 ··· 202 123 err = readl_poll_timeout(ice->base + QCOM_ICE_REG_BIST_STATUS, 203 124 regval, !(regval & QCOM_ICE_BIST_STATUS_MASK), 204 125 50, 5000); 205 - if (err) 126 + if (err) { 206 127 dev_err(ice->dev, "Timed out waiting for ICE self-test to complete\n"); 128 + return err; 129 + } 207 130 208 - return err; 131 + if (ice->use_hwkm && 132 + qcom_ice_readl(ice, QCOM_ICE_REG_HWKM_TZ_KM_STATUS) != 133 + (QCOM_ICE_HWKM_KT_CLEAR_DONE | 134 + QCOM_ICE_HWKM_BOOT_CMD_LIST0_DONE | 135 + QCOM_ICE_HWKM_BOOT_CMD_LIST1_DONE | 136 + QCOM_ICE_HWKM_CRYPTO_BIST_DONE_V2 | 137 + QCOM_ICE_HWKM_BIST_DONE_V2)) { 138 + dev_err(ice->dev, "HWKM self-test error!\n"); 139 + /* 140 + * Too late to revoke use_hwkm here, as it was already 141 + * propagated up the stack into the crypto capabilities. 
142 + */ 143 + } 144 + return 0; 145 + } 146 + 147 + static void qcom_ice_hwkm_init(struct qcom_ice *ice) 148 + { 149 + u32 regval; 150 + 151 + if (!ice->use_hwkm) 152 + return; 153 + 154 + BUILD_BUG_ON(QCOM_ICE_HWKM_WRAPPED_KEY_SIZE > 155 + BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE); 156 + /* 157 + * When ICE is in HWKM mode, it only supports wrapped keys. 158 + * When ICE is in legacy mode, it only supports raw keys. 159 + * 160 + * Put ICE in HWKM mode. ICE defaults to legacy mode. 161 + */ 162 + regval = qcom_ice_readl(ice, QCOM_ICE_REG_CONTROL); 163 + regval &= ~QCOM_ICE_LEGACY_MODE_ENABLED; 164 + qcom_ice_writel(ice, regval, QCOM_ICE_REG_CONTROL); 165 + 166 + /* Disable CRC checks. This HWKM feature is not used. */ 167 + qcom_ice_writel(ice, QCOM_ICE_HWKM_DISABLE_CRC_CHECKS_VAL, 168 + QCOM_ICE_REG_HWKM_TZ_KM_CTL); 169 + 170 + /* 171 + * Allow the HWKM slave to read and write the keyslots in the ICE HWKM 172 + * slave. Without this, TrustZone cannot program keys into ICE. 173 + */ 174 + qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_0); 175 + qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_1); 176 + qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_2); 177 + qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_3); 178 + qcom_ice_writel(ice, GENMASK(31, 0), QCOM_ICE_REG_HWKM_BANK0_BBAC_4); 179 + 180 + /* Clear the HWKM response FIFO. 
*/ 181 + qcom_ice_writel(ice, QCOM_ICE_HWKM_RSP_FIFO_CLEAR_VAL, 182 + QCOM_ICE_REG_HWKM_BANK0_BANKN_IRQ_STATUS); 183 + ice->hwkm_init_complete = true; 209 184 } 210 185 211 186 int qcom_ice_enable(struct qcom_ice *ice) 212 187 { 213 188 qcom_ice_low_power_mode_enable(ice); 214 189 qcom_ice_optimization_enable(ice); 215 - 190 + qcom_ice_hwkm_init(ice); 216 191 return qcom_ice_wait_bist_status(ice); 217 192 } 218 193 EXPORT_SYMBOL_GPL(qcom_ice_enable); ··· 282 149 err); 283 150 return err; 284 151 } 285 - 152 + qcom_ice_hwkm_init(ice); 286 153 return qcom_ice_wait_bist_status(ice); 287 154 } 288 155 EXPORT_SYMBOL_GPL(qcom_ice_resume); ··· 290 157 int qcom_ice_suspend(struct qcom_ice *ice) 291 158 { 292 159 clk_disable_unprepare(ice->core_clk); 160 + ice->hwkm_init_complete = false; 293 161 294 162 return 0; 295 163 } 296 164 EXPORT_SYMBOL_GPL(qcom_ice_suspend); 297 165 298 - int qcom_ice_program_key(struct qcom_ice *ice, 299 - u8 algorithm_id, u8 key_size, 300 - const u8 crypto_key[], u8 data_unit_size, 301 - int slot) 166 + static unsigned int translate_hwkm_slot(struct qcom_ice *ice, unsigned int slot) 167 + { 168 + return slot * 2; 169 + } 170 + 171 + static int qcom_ice_program_wrapped_key(struct qcom_ice *ice, unsigned int slot, 172 + const struct blk_crypto_key *bkey) 173 + { 174 + struct device *dev = ice->dev; 175 + union crypto_cfg cfg = { 176 + .dusize = bkey->crypto_cfg.data_unit_size / 512, 177 + .capidx = QCOM_SCM_ICE_CIPHER_AES_256_XTS, 178 + .cfge = QCOM_ICE_HWKM_CFG_ENABLE_VAL, 179 + }; 180 + int err; 181 + 182 + if (!ice->use_hwkm) { 183 + dev_err_ratelimited(dev, "Got wrapped key when not using HWKM\n"); 184 + return -EINVAL; 185 + } 186 + if (!ice->hwkm_init_complete) { 187 + dev_err_ratelimited(dev, "HWKM not yet initialized\n"); 188 + return -EINVAL; 189 + } 190 + 191 + /* Clear CFGE before programming the key. 
*/ 192 + qcom_ice_writel(ice, 0x0, QCOM_ICE_REG_CRYPTOCFG(slot)); 193 + 194 + /* Call into TrustZone to program the wrapped key using HWKM. */ 195 + err = qcom_scm_ice_set_key(translate_hwkm_slot(ice, slot), bkey->bytes, 196 + bkey->size, cfg.capidx, cfg.dusize); 197 + if (err) { 198 + dev_err_ratelimited(dev, 199 + "qcom_scm_ice_set_key failed; err=%d, slot=%u\n", 200 + err, slot); 201 + return err; 202 + } 203 + 204 + /* Set CFGE after programming the key. */ 205 + qcom_ice_writel(ice, le32_to_cpu(cfg.regval), 206 + QCOM_ICE_REG_CRYPTOCFG(slot)); 207 + return 0; 208 + } 209 + 210 + int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot, 211 + const struct blk_crypto_key *blk_key) 302 212 { 303 213 struct device *dev = ice->dev; 304 214 union { ··· 352 176 int err; 353 177 354 178 /* Only AES-256-XTS has been tested so far. */ 355 - if (algorithm_id != QCOM_ICE_CRYPTO_ALG_AES_XTS || 356 - key_size != QCOM_ICE_CRYPTO_KEY_SIZE_256) { 357 - dev_err_ratelimited(dev, 358 - "Unhandled crypto capability; algorithm_id=%d, key_size=%d\n", 359 - algorithm_id, key_size); 179 + if (blk_key->crypto_cfg.crypto_mode != 180 + BLK_ENCRYPTION_MODE_AES_256_XTS) { 181 + dev_err_ratelimited(dev, "Unsupported crypto mode: %d\n", 182 + blk_key->crypto_cfg.crypto_mode); 360 183 return -EINVAL; 361 184 } 362 185 363 - memcpy(key.bytes, crypto_key, AES_256_XTS_KEY_SIZE); 186 + if (blk_key->crypto_cfg.key_type == BLK_CRYPTO_KEY_TYPE_HW_WRAPPED) 187 + return qcom_ice_program_wrapped_key(ice, slot, blk_key); 188 + 189 + if (ice->use_hwkm) { 190 + dev_err_ratelimited(dev, "Got raw key when using HWKM\n"); 191 + return -EINVAL; 192 + } 193 + 194 + if (blk_key->size != AES_256_XTS_KEY_SIZE) { 195 + dev_err_ratelimited(dev, "Incorrect key size\n"); 196 + return -EINVAL; 197 + } 198 + memcpy(key.bytes, blk_key->bytes, AES_256_XTS_KEY_SIZE); 364 199 365 200 /* The SCM call requires that the key words are encoded in big endian */ 366 201 for (i = 0; i < ARRAY_SIZE(key.words); i++) ··· 379 
192 380 193 err = qcom_scm_ice_set_key(slot, key.bytes, AES_256_XTS_KEY_SIZE, 381 194 QCOM_SCM_ICE_CIPHER_AES_256_XTS, 382 - data_unit_size); 195 + blk_key->crypto_cfg.data_unit_size / 512); 383 196 384 197 memzero_explicit(&key, sizeof(key)); 385 198 ··· 389 202 390 203 int qcom_ice_evict_key(struct qcom_ice *ice, int slot) 391 204 { 205 + if (ice->hwkm_init_complete) 206 + slot = translate_hwkm_slot(ice, slot); 392 207 return qcom_scm_ice_invalidate_key(slot); 393 208 } 394 209 EXPORT_SYMBOL_GPL(qcom_ice_evict_key); 210 + 211 + /** 212 + * qcom_ice_get_supported_key_type() - Get the supported key type 213 + * @ice: ICE driver data 214 + * 215 + * Return: the blk-crypto key type that the ICE driver is configured to use. 216 + * This is the key type that ICE-capable storage drivers should advertise as 217 + * supported in the crypto capabilities of any disks they register. 218 + */ 219 + enum blk_crypto_key_type qcom_ice_get_supported_key_type(struct qcom_ice *ice) 220 + { 221 + if (ice->use_hwkm) 222 + return BLK_CRYPTO_KEY_TYPE_HW_WRAPPED; 223 + return BLK_CRYPTO_KEY_TYPE_RAW; 224 + } 225 + EXPORT_SYMBOL_GPL(qcom_ice_get_supported_key_type); 226 + 227 + /** 228 + * qcom_ice_derive_sw_secret() - Derive software secret from wrapped key 229 + * @ice: ICE driver data 230 + * @eph_key: an ephemerally-wrapped key 231 + * @eph_key_size: size of @eph_key in bytes 232 + * @sw_secret: output buffer for the software secret 233 + * 234 + * Use HWKM to derive the "software secret" from a hardware-wrapped key that is 235 + * given in ephemerally-wrapped form. 236 + * 237 + * Return: 0 on success; -EBADMSG if the given ephemerally-wrapped key is 238 + * invalid; or another -errno value. 
239 + */ 240 + int qcom_ice_derive_sw_secret(struct qcom_ice *ice, 241 + const u8 *eph_key, size_t eph_key_size, 242 + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]) 243 + { 244 + int err = qcom_scm_derive_sw_secret(eph_key, eph_key_size, 245 + sw_secret, 246 + BLK_CRYPTO_SW_SECRET_SIZE); 247 + if (err == -EIO || err == -EINVAL) 248 + err = -EBADMSG; /* probably invalid key */ 249 + return err; 250 + } 251 + EXPORT_SYMBOL_GPL(qcom_ice_derive_sw_secret); 252 + 253 + /** 254 + * qcom_ice_generate_key() - Generate a wrapped key for inline encryption 255 + * @ice: ICE driver data 256 + * @lt_key: output buffer for the long-term wrapped key 257 + * 258 + * Use HWKM to generate a new key and return it as a long-term wrapped key. 259 + * 260 + * Return: the size of the resulting wrapped key on success; -errno on failure. 261 + */ 262 + int qcom_ice_generate_key(struct qcom_ice *ice, 263 + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) 264 + { 265 + int err; 266 + 267 + err = qcom_scm_generate_ice_key(lt_key, QCOM_ICE_HWKM_WRAPPED_KEY_SIZE); 268 + if (err) 269 + return err; 270 + 271 + return QCOM_ICE_HWKM_WRAPPED_KEY_SIZE; 272 + } 273 + EXPORT_SYMBOL_GPL(qcom_ice_generate_key); 274 + 275 + /** 276 + * qcom_ice_prepare_key() - Prepare a wrapped key for inline encryption 277 + * @ice: ICE driver data 278 + * @lt_key: a long-term wrapped key 279 + * @lt_key_size: size of @lt_key in bytes 280 + * @eph_key: output buffer for the ephemerally-wrapped key 281 + * 282 + * Use HWKM to re-wrap a long-term wrapped key with the per-boot ephemeral key. 283 + * 284 + * Return: the size of the resulting wrapped key on success; -EBADMSG if the 285 + * given long-term wrapped key is invalid; or another -errno value. 
286 + */ 287 + int qcom_ice_prepare_key(struct qcom_ice *ice, 288 + const u8 *lt_key, size_t lt_key_size, 289 + u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) 290 + { 291 + int err; 292 + 293 + err = qcom_scm_prepare_ice_key(lt_key, lt_key_size, 294 + eph_key, QCOM_ICE_HWKM_WRAPPED_KEY_SIZE); 295 + if (err == -EIO || err == -EINVAL) 296 + err = -EBADMSG; /* probably invalid key */ 297 + if (err) 298 + return err; 299 + 300 + return QCOM_ICE_HWKM_WRAPPED_KEY_SIZE; 301 + } 302 + EXPORT_SYMBOL_GPL(qcom_ice_prepare_key); 303 + 304 + /** 305 + * qcom_ice_import_key() - Import a raw key for inline encryption 306 + * @ice: ICE driver data 307 + * @raw_key: the raw key to import 308 + * @raw_key_size: size of @raw_key in bytes 309 + * @lt_key: output buffer for the long-term wrapped key 310 + * 311 + * Use HWKM to import a raw key and return it as a long-term wrapped key. 312 + * 313 + * Return: the size of the resulting wrapped key on success; -errno on failure. 314 + */ 315 + int qcom_ice_import_key(struct qcom_ice *ice, 316 + const u8 *raw_key, size_t raw_key_size, 317 + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) 318 + { 319 + int err; 320 + 321 + err = qcom_scm_import_ice_key(raw_key, raw_key_size, 322 + lt_key, QCOM_ICE_HWKM_WRAPPED_KEY_SIZE); 323 + if (err) 324 + return err; 325 + 326 + return QCOM_ICE_HWKM_WRAPPED_KEY_SIZE; 327 + } 328 + EXPORT_SYMBOL_GPL(qcom_ice_import_key); 395 329 396 330 static struct qcom_ice *qcom_ice_create(struct device *dev, 397 331 void __iomem *base)
+9 -11
drivers/target/target_core_configfs.c
··· 673 673 return ret; 674 674 675 675 BUILD_BUG_ON(sizeof(dev->t10_wwn.model) != INQUIRY_MODEL_LEN + 1); 676 - if (flag) { 676 + if (flag) 677 677 dev_set_t10_wwn_model_alias(dev); 678 - } else { 679 - strscpy(dev->t10_wwn.model, dev->transport->inquiry_prod, 680 - sizeof(dev->t10_wwn.model)); 681 - } 678 + else 679 + strscpy(dev->t10_wwn.model, dev->transport->inquiry_prod); 682 680 da->emulate_model_alias = flag; 683 681 return count; 684 682 } ··· 1431 1433 ssize_t len; 1432 1434 ssize_t ret; 1433 1435 1434 - len = strscpy(buf, page, sizeof(buf)); 1436 + len = strscpy(buf, page); 1435 1437 if (len > 0) { 1436 1438 /* Strip any newline added from userspace. */ 1437 1439 stripped = strstrip(buf); ··· 1462 1464 } 1463 1465 1464 1466 BUILD_BUG_ON(sizeof(dev->t10_wwn.vendor) != INQUIRY_VENDOR_LEN + 1); 1465 - strscpy(dev->t10_wwn.vendor, stripped, sizeof(dev->t10_wwn.vendor)); 1467 + strscpy(dev->t10_wwn.vendor, stripped); 1466 1468 1467 1469 pr_debug("Target_Core_ConfigFS: Set emulated T10 Vendor Identification:" 1468 1470 " %s\n", dev->t10_wwn.vendor); ··· 1487 1489 ssize_t len; 1488 1490 ssize_t ret; 1489 1491 1490 - len = strscpy(buf, page, sizeof(buf)); 1492 + len = strscpy(buf, page); 1491 1493 if (len > 0) { 1492 1494 /* Strip any newline added from userspace. */ 1493 1495 stripped = strstrip(buf); ··· 1518 1520 } 1519 1521 1520 1522 BUILD_BUG_ON(sizeof(dev->t10_wwn.model) != INQUIRY_MODEL_LEN + 1); 1521 - strscpy(dev->t10_wwn.model, stripped, sizeof(dev->t10_wwn.model)); 1523 + strscpy(dev->t10_wwn.model, stripped); 1522 1524 1523 1525 pr_debug("Target_Core_ConfigFS: Set emulated T10 Model Identification: %s\n", 1524 1526 dev->t10_wwn.model); ··· 1543 1545 ssize_t len; 1544 1546 ssize_t ret; 1545 1547 1546 - len = strscpy(buf, page, sizeof(buf)); 1548 + len = strscpy(buf, page); 1547 1549 if (len > 0) { 1548 1550 /* Strip any newline added from userspace. 
*/ 1549 1551 stripped = strstrip(buf); ··· 1574 1576 } 1575 1577 1576 1578 BUILD_BUG_ON(sizeof(dev->t10_wwn.revision) != INQUIRY_REVISION_LEN + 1); 1577 - strscpy(dev->t10_wwn.revision, stripped, sizeof(dev->t10_wwn.revision)); 1579 + strscpy(dev->t10_wwn.revision, stripped); 1578 1580 1579 1581 pr_debug("Target_Core_ConfigFS: Set emulated T10 Revision: %s\n", 1580 1582 dev->t10_wwn.revision);
+70 -19
drivers/target/target_core_device.c
··· 55 55 rcu_read_lock(); 56 56 deve = target_nacl_find_deve(nacl, se_cmd->orig_fe_lun); 57 57 if (deve) { 58 - atomic_long_inc(&deve->total_cmds); 58 + this_cpu_inc(deve->stats->total_cmds); 59 59 60 60 if (se_cmd->data_direction == DMA_TO_DEVICE) 61 - atomic_long_add(se_cmd->data_length, 62 - &deve->write_bytes); 61 + this_cpu_add(deve->stats->write_bytes, 62 + se_cmd->data_length); 63 63 else if (se_cmd->data_direction == DMA_FROM_DEVICE) 64 - atomic_long_add(se_cmd->data_length, 65 - &deve->read_bytes); 64 + this_cpu_add(deve->stats->read_bytes, 65 + se_cmd->data_length); 66 66 67 67 if ((se_cmd->data_direction == DMA_TO_DEVICE) && 68 68 deve->lun_access_ro) { ··· 126 126 * target_core_fabric_configfs.c:target_fabric_port_release 127 127 */ 128 128 se_cmd->se_dev = rcu_dereference_raw(se_lun->lun_se_dev); 129 - atomic_long_inc(&se_cmd->se_dev->num_cmds); 129 + this_cpu_inc(se_cmd->se_dev->stats->total_cmds); 130 130 131 131 if (se_cmd->data_direction == DMA_TO_DEVICE) 132 - atomic_long_add(se_cmd->data_length, 133 - &se_cmd->se_dev->write_bytes); 132 + this_cpu_add(se_cmd->se_dev->stats->write_bytes, 133 + se_cmd->data_length); 134 134 else if (se_cmd->data_direction == DMA_FROM_DEVICE) 135 - atomic_long_add(se_cmd->data_length, 136 - &se_cmd->se_dev->read_bytes); 135 + this_cpu_add(se_cmd->se_dev->stats->read_bytes, 136 + se_cmd->data_length); 137 137 138 138 return ret; 139 139 } ··· 322 322 struct se_portal_group *tpg) 323 323 { 324 324 struct se_dev_entry *orig, *new; 325 + int ret = 0; 325 326 326 327 new = kzalloc(sizeof(*new), GFP_KERNEL); 327 328 if (!new) { 328 329 pr_err("Unable to allocate se_dev_entry memory\n"); 329 330 return -ENOMEM; 331 + } 332 + 333 + new->stats = alloc_percpu(struct se_dev_entry_io_stats); 334 + if (!new->stats) { 335 + ret = -ENOMEM; 336 + goto free_deve; 330 337 } 331 338 332 339 spin_lock_init(&new->ua_lock); ··· 358 351 " for dynamic -> explicit NodeACL conversion:" 359 352 " %s\n", nacl->initiatorname); 360 353 
mutex_unlock(&nacl->lun_entry_mutex); 361 - kfree(new); 362 - return -EINVAL; 354 + ret = -EINVAL; 355 + goto free_stats; 363 356 } 364 357 if (orig->se_lun_acl != NULL) { 365 358 pr_warn_ratelimited("Detected existing explicit" ··· 367 360 " mapped_lun: %llu, failing\n", 368 361 nacl->initiatorname, mapped_lun); 369 362 mutex_unlock(&nacl->lun_entry_mutex); 370 - kfree(new); 371 - return -EINVAL; 363 + ret = -EINVAL; 364 + goto free_stats; 372 365 } 373 366 374 367 new->se_lun = lun; ··· 401 394 402 395 target_luns_data_has_changed(nacl, new, true); 403 396 return 0; 397 + 398 + free_stats: 399 + free_percpu(new->stats); 400 + free_deve: 401 + kfree(new); 402 + return ret; 403 + } 404 + 405 + static void target_free_dev_entry(struct rcu_head *head) 406 + { 407 + struct se_dev_entry *deve = container_of(head, struct se_dev_entry, 408 + rcu_head); 409 + free_percpu(deve->stats); 410 + kfree(deve); 404 411 } 405 412 406 413 void core_disable_device_list_for_node( ··· 464 443 kref_put(&orig->pr_kref, target_pr_kref_release); 465 444 wait_for_completion(&orig->pr_comp); 466 445 467 - kfree_rcu(orig, rcu_head); 446 + call_rcu(&orig->rcu_head, target_free_dev_entry); 468 447 469 448 core_scsi3_free_pr_reg_from_nacl(dev, nacl); 470 449 target_luns_data_has_changed(nacl, NULL, false); ··· 700 679 pr_debug(" Type: %s ", scsi_device_type(device_type)); 701 680 } 702 681 682 + static void target_non_ordered_release(struct percpu_ref *ref) 683 + { 684 + struct se_device *dev = container_of(ref, struct se_device, 685 + non_ordered); 686 + unsigned long flags; 687 + 688 + spin_lock_irqsave(&dev->delayed_cmd_lock, flags); 689 + if (!list_empty(&dev->delayed_cmd_list)) 690 + schedule_work(&dev->delayed_cmd_work); 691 + spin_unlock_irqrestore(&dev->delayed_cmd_lock, flags); 692 + } 693 + 703 694 struct se_device *target_alloc_device(struct se_hba *hba, const char *name) 704 695 { 705 696 struct se_device *dev; ··· 722 689 if (!dev) 723 690 return NULL; 724 691 692 + dev->stats = 
alloc_percpu(struct se_dev_io_stats); 693 + if (!dev->stats) 694 + goto free_device; 695 + 725 696 dev->queues = kcalloc(nr_cpu_ids, sizeof(*dev->queues), GFP_KERNEL); 726 - if (!dev->queues) { 727 - hba->backend->ops->free_device(dev); 728 - return NULL; 729 - } 697 + if (!dev->queues) 698 + goto free_stats; 730 699 731 700 dev->queue_cnt = nr_cpu_ids; 732 701 for (i = 0; i < dev->queue_cnt; i++) { ··· 741 706 init_llist_head(&q->sq.cmd_list); 742 707 INIT_WORK(&q->sq.work, target_queued_submit_work); 743 708 } 709 + 710 + if (percpu_ref_init(&dev->non_ordered, target_non_ordered_release, 711 + PERCPU_REF_ALLOW_REINIT, GFP_KERNEL)) 712 + goto free_queues; 744 713 745 714 dev->se_hba = hba; 746 715 dev->transport = hba->backend->ops; ··· 830 791 sizeof(dev->t10_wwn.revision)); 831 792 832 793 return dev; 794 + 795 + free_queues: 796 + kfree(dev->queues); 797 + free_stats: 798 + free_percpu(dev->stats); 799 + free_device: 800 + hba->backend->ops->free_device(dev); 801 + return NULL; 833 802 } 834 803 835 804 /* ··· 1027 980 1028 981 WARN_ON(!list_empty(&dev->dev_sep_list)); 1029 982 983 + percpu_ref_exit(&dev->non_ordered); 984 + cancel_work_sync(&dev->delayed_cmd_work); 985 + 1030 986 if (target_dev_configured(dev)) { 1031 987 dev->transport->destroy_device(dev); 1032 988 ··· 1051 1001 dev->transport->free_prot(dev); 1052 1002 1053 1003 kfree(dev->queues); 1004 + free_percpu(dev->stats); 1054 1005 dev->transport->free_device(dev); 1055 1006 } 1056 1007
+67 -67
drivers/target/target_core_spc.c
··· 1325 1325 usage_bits[10] |= 0x18; 1326 1326 } 1327 1327 1328 - static struct target_opcode_descriptor tcm_opcode_read6 = { 1328 + static const struct target_opcode_descriptor tcm_opcode_read6 = { 1329 1329 .support = SCSI_SUPPORT_FULL, 1330 1330 .opcode = READ_6, 1331 1331 .cdb_size = 6, ··· 1333 1333 0xff, SCSI_CONTROL_MASK}, 1334 1334 }; 1335 1335 1336 - static struct target_opcode_descriptor tcm_opcode_read10 = { 1336 + static const struct target_opcode_descriptor tcm_opcode_read10 = { 1337 1337 .support = SCSI_SUPPORT_FULL, 1338 1338 .opcode = READ_10, 1339 1339 .cdb_size = 10, ··· 1343 1343 .update_usage_bits = set_dpofua_usage_bits, 1344 1344 }; 1345 1345 1346 - static struct target_opcode_descriptor tcm_opcode_read12 = { 1346 + static const struct target_opcode_descriptor tcm_opcode_read12 = { 1347 1347 .support = SCSI_SUPPORT_FULL, 1348 1348 .opcode = READ_12, 1349 1349 .cdb_size = 12, ··· 1353 1353 .update_usage_bits = set_dpofua_usage_bits, 1354 1354 }; 1355 1355 1356 - static struct target_opcode_descriptor tcm_opcode_read16 = { 1356 + static const struct target_opcode_descriptor tcm_opcode_read16 = { 1357 1357 .support = SCSI_SUPPORT_FULL, 1358 1358 .opcode = READ_16, 1359 1359 .cdb_size = 16, ··· 1364 1364 .update_usage_bits = set_dpofua_usage_bits, 1365 1365 }; 1366 1366 1367 - static struct target_opcode_descriptor tcm_opcode_write6 = { 1367 + static const struct target_opcode_descriptor tcm_opcode_write6 = { 1368 1368 .support = SCSI_SUPPORT_FULL, 1369 1369 .opcode = WRITE_6, 1370 1370 .cdb_size = 6, ··· 1372 1372 0xff, SCSI_CONTROL_MASK}, 1373 1373 }; 1374 1374 1375 - static struct target_opcode_descriptor tcm_opcode_write10 = { 1375 + static const struct target_opcode_descriptor tcm_opcode_write10 = { 1376 1376 .support = SCSI_SUPPORT_FULL, 1377 1377 .opcode = WRITE_10, 1378 1378 .cdb_size = 10, ··· 1382 1382 .update_usage_bits = set_dpofua_usage_bits, 1383 1383 }; 1384 1384 1385 - static struct target_opcode_descriptor 
tcm_opcode_write_verify10 = { 1385 + static const struct target_opcode_descriptor tcm_opcode_write_verify10 = { 1386 1386 .support = SCSI_SUPPORT_FULL, 1387 1387 .opcode = WRITE_VERIFY, 1388 1388 .cdb_size = 10, ··· 1392 1392 .update_usage_bits = set_dpofua_usage_bits, 1393 1393 }; 1394 1394 1395 - static struct target_opcode_descriptor tcm_opcode_write12 = { 1395 + static const struct target_opcode_descriptor tcm_opcode_write12 = { 1396 1396 .support = SCSI_SUPPORT_FULL, 1397 1397 .opcode = WRITE_12, 1398 1398 .cdb_size = 12, ··· 1402 1402 .update_usage_bits = set_dpofua_usage_bits, 1403 1403 }; 1404 1404 1405 - static struct target_opcode_descriptor tcm_opcode_write16 = { 1405 + static const struct target_opcode_descriptor tcm_opcode_write16 = { 1406 1406 .support = SCSI_SUPPORT_FULL, 1407 1407 .opcode = WRITE_16, 1408 1408 .cdb_size = 16, ··· 1413 1413 .update_usage_bits = set_dpofua_usage_bits, 1414 1414 }; 1415 1415 1416 - static struct target_opcode_descriptor tcm_opcode_write_verify16 = { 1416 + static const struct target_opcode_descriptor tcm_opcode_write_verify16 = { 1417 1417 .support = SCSI_SUPPORT_FULL, 1418 1418 .opcode = WRITE_VERIFY_16, 1419 1419 .cdb_size = 16, ··· 1424 1424 .update_usage_bits = set_dpofua_usage_bits, 1425 1425 }; 1426 1426 1427 - static bool tcm_is_ws_enabled(struct target_opcode_descriptor *descr, 1427 + static bool tcm_is_ws_enabled(const struct target_opcode_descriptor *descr, 1428 1428 struct se_cmd *cmd) 1429 1429 { 1430 1430 struct exec_cmd_ops *ops = cmd->protocol_data; ··· 1434 1434 !!ops->execute_write_same; 1435 1435 } 1436 1436 1437 - static struct target_opcode_descriptor tcm_opcode_write_same32 = { 1437 + static const struct target_opcode_descriptor tcm_opcode_write_same32 = { 1438 1438 .support = SCSI_SUPPORT_FULL, 1439 1439 .serv_action_valid = 1, 1440 1440 .opcode = VARIABLE_LENGTH_CMD, ··· 1452 1452 .update_usage_bits = set_dpofua_usage_bits32, 1453 1453 }; 1454 1454 1455 - static bool tcm_is_caw_enabled(struct 
 				target_opcode_descriptor *descr,
+static bool tcm_is_caw_enabled(const struct target_opcode_descriptor *descr,
 			       struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
···
 	return dev->dev_attrib.emulate_caw;
 }
 
-static struct target_opcode_descriptor tcm_opcode_compare_write = {
+static const struct target_opcode_descriptor tcm_opcode_compare_write = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = COMPARE_AND_WRITE,
 	.cdb_size = 16,
···
 	.update_usage_bits = set_dpofua_usage_bits,
 };
 
-static struct target_opcode_descriptor tcm_opcode_read_capacity = {
+static const struct target_opcode_descriptor tcm_opcode_read_capacity = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = READ_CAPACITY,
 	.cdb_size = 10,
···
 		     0x01, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_read_capacity16 = {
+static const struct target_opcode_descriptor tcm_opcode_read_capacity16 = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = SERVICE_ACTION_IN_16,
···
 		     0xff, 0xff, 0x00, SCSI_CONTROL_MASK},
 };
 
-static bool tcm_is_rep_ref_enabled(struct target_opcode_descriptor *descr,
+static bool tcm_is_rep_ref_enabled(const struct target_opcode_descriptor *descr,
 				   struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
···
 	return true;
 }
 
-static struct target_opcode_descriptor tcm_opcode_read_report_refferals = {
+static const struct target_opcode_descriptor tcm_opcode_read_report_refferals = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = SERVICE_ACTION_IN_16,
···
 	.enabled = tcm_is_rep_ref_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_sync_cache = {
+static const struct target_opcode_descriptor tcm_opcode_sync_cache = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = SYNCHRONIZE_CACHE,
 	.cdb_size = 10,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_sync_cache16 = {
+static const struct target_opcode_descriptor tcm_opcode_sync_cache16 = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = SYNCHRONIZE_CACHE_16,
 	.cdb_size = 16,
···
 		     0xff, 0xff, SCSI_GROUP_NUMBER_MASK, SCSI_CONTROL_MASK},
 };
 
-static bool tcm_is_unmap_enabled(struct target_opcode_descriptor *descr,
+static bool tcm_is_unmap_enabled(const struct target_opcode_descriptor *descr,
 				 struct se_cmd *cmd)
 {
 	struct exec_cmd_ops *ops = cmd->protocol_data;
···
 	return ops->execute_unmap && dev->dev_attrib.emulate_tpu;
 }
 
-static struct target_opcode_descriptor tcm_opcode_unmap = {
+static const struct target_opcode_descriptor tcm_opcode_unmap = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = UNMAP,
 	.cdb_size = 10,
···
 	.enabled = tcm_is_unmap_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_write_same = {
+static const struct target_opcode_descriptor tcm_opcode_write_same = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = WRITE_SAME,
 	.cdb_size = 10,
···
 	.enabled = tcm_is_ws_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_write_same16 = {
+static const struct target_opcode_descriptor tcm_opcode_write_same16 = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = WRITE_SAME_16,
 	.cdb_size = 16,
···
 	.enabled = tcm_is_ws_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_verify = {
+static const struct target_opcode_descriptor tcm_opcode_verify = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = VERIFY,
 	.cdb_size = 10,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_verify16 = {
+static const struct target_opcode_descriptor tcm_opcode_verify16 = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = VERIFY_16,
 	.cdb_size = 16,
···
 		     0xff, 0xff, SCSI_GROUP_NUMBER_MASK, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_start_stop = {
+static const struct target_opcode_descriptor tcm_opcode_start_stop = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = START_STOP,
 	.cdb_size = 6,
···
 		     0x01, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_mode_select = {
+static const struct target_opcode_descriptor tcm_opcode_mode_select = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = MODE_SELECT,
 	.cdb_size = 6,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_mode_select10 = {
+static const struct target_opcode_descriptor tcm_opcode_mode_select10 = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = MODE_SELECT_10,
 	.cdb_size = 10,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_mode_sense = {
+static const struct target_opcode_descriptor tcm_opcode_mode_sense = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = MODE_SENSE,
 	.cdb_size = 6,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_mode_sense10 = {
+static const struct target_opcode_descriptor tcm_opcode_mode_sense10 = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = MODE_SENSE_10,
 	.cdb_size = 10,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_pri_read_keys = {
+static const struct target_opcode_descriptor tcm_opcode_pri_read_keys = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_IN,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_pri_read_resrv = {
+static const struct target_opcode_descriptor tcm_opcode_pri_read_resrv = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_IN,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static bool tcm_is_pr_enabled(struct target_opcode_descriptor *descr,
+static bool tcm_is_pr_enabled(const struct target_opcode_descriptor *descr,
 			      struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
···
 	return true;
 }
 
-static struct target_opcode_descriptor tcm_opcode_pri_read_caps = {
+static const struct target_opcode_descriptor tcm_opcode_pri_read_caps = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_IN,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pri_read_full_status = {
+static const struct target_opcode_descriptor tcm_opcode_pri_read_full_status = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_IN,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pro_register = {
+static const struct target_opcode_descriptor tcm_opcode_pro_register = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_OUT,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pro_reserve = {
+static const struct target_opcode_descriptor tcm_opcode_pro_reserve = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_OUT,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pro_release = {
+static const struct target_opcode_descriptor tcm_opcode_pro_release = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_OUT,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pro_clear = {
+static const struct target_opcode_descriptor tcm_opcode_pro_clear = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_OUT,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pro_preempt = {
+static const struct target_opcode_descriptor tcm_opcode_pro_preempt = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_OUT,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pro_preempt_abort = {
+static const struct target_opcode_descriptor tcm_opcode_pro_preempt_abort = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_OUT,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pro_reg_ign_exist = {
+static const struct target_opcode_descriptor tcm_opcode_pro_reg_ign_exist = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_OUT,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_pro_register_move = {
+static const struct target_opcode_descriptor tcm_opcode_pro_register_move = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = PERSISTENT_RESERVE_OUT,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_release = {
+static const struct target_opcode_descriptor tcm_opcode_release = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = RELEASE_6,
 	.cdb_size = 6,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_release10 = {
+static const struct target_opcode_descriptor tcm_opcode_release10 = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = RELEASE_10,
 	.cdb_size = 10,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_reserve = {
+static const struct target_opcode_descriptor tcm_opcode_reserve = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = RESERVE_6,
 	.cdb_size = 6,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_reserve10 = {
+static const struct target_opcode_descriptor tcm_opcode_reserve10 = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = RESERVE_10,
 	.cdb_size = 10,
···
 	.enabled = tcm_is_pr_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_request_sense = {
+static const struct target_opcode_descriptor tcm_opcode_request_sense = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = REQUEST_SENSE,
 	.cdb_size = 6,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_inquiry = {
+static const struct target_opcode_descriptor tcm_opcode_inquiry = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = INQUIRY,
 	.cdb_size = 6,
···
 		     0xff, SCSI_CONTROL_MASK},
 };
 
-static bool tcm_is_3pc_enabled(struct target_opcode_descriptor *descr,
+static bool tcm_is_3pc_enabled(const struct target_opcode_descriptor *descr,
 			       struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
···
 	return dev->dev_attrib.emulate_3pc;
 }
 
-static struct target_opcode_descriptor tcm_opcode_extended_copy_lid1 = {
+static const struct target_opcode_descriptor tcm_opcode_extended_copy_lid1 = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = EXTENDED_COPY,
···
 	.enabled = tcm_is_3pc_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_rcv_copy_res_op_params = {
+static const struct target_opcode_descriptor tcm_opcode_rcv_copy_res_op_params = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = RECEIVE_COPY_RESULTS,
···
 	.enabled = tcm_is_3pc_enabled,
 };
 
-static struct target_opcode_descriptor tcm_opcode_report_luns = {
+static const struct target_opcode_descriptor tcm_opcode_report_luns = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = REPORT_LUNS,
 	.cdb_size = 12,
···
 		     0xff, 0xff, 0x00, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_test_unit_ready = {
+static const struct target_opcode_descriptor tcm_opcode_test_unit_ready = {
 	.support = SCSI_SUPPORT_FULL,
 	.opcode = TEST_UNIT_READY,
 	.cdb_size = 6,
···
 		     0x00, SCSI_CONTROL_MASK},
 };
 
-static struct target_opcode_descriptor tcm_opcode_report_target_pgs = {
+static const struct target_opcode_descriptor tcm_opcode_report_target_pgs = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = MAINTENANCE_IN,
···
 		     0xff, 0xff, 0x00, SCSI_CONTROL_MASK},
 };
 
-static bool spc_rsoc_enabled(struct target_opcode_descriptor *descr,
+static bool spc_rsoc_enabled(const struct target_opcode_descriptor *descr,
 			     struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
···
 	return dev->dev_attrib.emulate_rsoc;
 }
 
-static struct target_opcode_descriptor tcm_opcode_report_supp_opcodes = {
+static const struct target_opcode_descriptor tcm_opcode_report_supp_opcodes = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = MAINTENANCE_IN,
···
 	.enabled = spc_rsoc_enabled,
 };
 
-static bool tcm_is_set_tpg_enabled(struct target_opcode_descriptor *descr,
+static bool tcm_is_set_tpg_enabled(const struct target_opcode_descriptor *descr,
 				   struct se_cmd *cmd)
 {
 	struct t10_alua_tg_pt_gp *l_tg_pt_gp;
···
 	return true;
 }
 
-static struct target_opcode_descriptor tcm_opcode_set_tpg = {
+static const struct target_opcode_descriptor tcm_opcode_set_tpg = {
 	.support = SCSI_SUPPORT_FULL,
 	.serv_action_valid = 1,
 	.opcode = MAINTENANCE_OUT,
···
 	.enabled = tcm_is_set_tpg_enabled,
 };
 
-static struct target_opcode_descriptor *tcm_supported_opcodes[] = {
+static const struct target_opcode_descriptor *tcm_supported_opcodes[] = {
 	&tcm_opcode_read6,
 	&tcm_opcode_read10,
 	&tcm_opcode_read12,
···
 
 static int
 spc_rsoc_encode_command_timeouts_descriptor(unsigned char *buf, u8 ctdp,
-					    struct target_opcode_descriptor *descr)
+					    const struct target_opcode_descriptor *descr)
 {
 	if (!ctdp)
 		return 0;
···
 
 static int
 spc_rsoc_encode_command_descriptor(unsigned char *buf, u8 ctdp,
-				   struct target_opcode_descriptor *descr)
+				   const struct target_opcode_descriptor *descr)
 {
 	int td_size = 0;
···
 
 static int
 spc_rsoc_encode_one_command_descriptor(unsigned char *buf, u8 ctdp,
-				       struct target_opcode_descriptor *descr,
+				       const struct target_opcode_descriptor *descr,
 				       struct se_device *dev)
 {
 	int td_size = 0;
···
 }
 
 static sense_reason_t
-spc_rsoc_get_descr(struct se_cmd *cmd, struct target_opcode_descriptor **opcode)
+spc_rsoc_get_descr(struct se_cmd *cmd, const struct target_opcode_descriptor **opcode)
 {
-	struct target_opcode_descriptor *descr;
+	const struct target_opcode_descriptor *descr;
 	struct se_session *sess = cmd->se_sess;
 	unsigned char *cdb = cmd->t_task_cdb;
 	u8 opts = cdb[2] & 0x3;
···
 spc_emulate_report_supp_op_codes(struct se_cmd *cmd)
 {
 	int descr_num = ARRAY_SIZE(tcm_supported_opcodes);
-	struct target_opcode_descriptor *descr = NULL;
+	const struct target_opcode_descriptor *descr = NULL;
 	unsigned char *cdb = cmd->t_task_cdb;
 	u8 rctd = (cdb[2] >> 7) & 0x1;
 	unsigned char *buf = NULL;
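The constification above has to propagate through every `enabled()` callback before the descriptor tables can move to rodata: a `const` object cannot be passed through a callback that takes a non-`const` pointer. A minimal userspace sketch of the same pattern (the miniature `opcode_descriptor` type and names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Descriptors live in rodata; the callback takes pointer-to-const so
 * the whole chain stays const-clean. */
struct opcode_descriptor {
	unsigned char opcode;
	bool (*enabled)(const struct opcode_descriptor *descr);
};

static bool always_on(const struct opcode_descriptor *descr)
{
	(void)descr;
	return true;
}

static const struct opcode_descriptor opcode_inquiry = {
	.opcode = 0x12,
	.enabled = always_on,
};

static const struct opcode_descriptor *supported_opcodes[] = {
	&opcode_inquiry,
};

/* Look up a descriptor by opcode, roughly as spc_rsoc_get_descr() walks
 * tcm_supported_opcodes[]. */
static const struct opcode_descriptor *find_descr(unsigned char opcode)
{
	for (size_t i = 0;
	     i < sizeof(supported_opcodes) / sizeof(supported_opcodes[0]); i++)
		if (supported_opcodes[i]->opcode == opcode)
			return supported_opcodes[i];
	return NULL;
}
```

Had `enabled` kept its non-`const` parameter, the `static const` definitions above would not compile without casts, which is why the diff touches the callbacks and the tables together.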
+57 -12
drivers/target/target_core_stat.c
···
 					 char *page)
 {
 	struct se_device *dev = to_stat_lu_dev(item);
+	struct se_dev_io_stats *stats;
+	unsigned int cpu;
+	u32 cmds = 0;
+
+	for_each_possible_cpu(cpu) {
+		stats = per_cpu_ptr(dev->stats, cpu);
+		cmds += stats->total_cmds;
+	}
 
 	/* scsiLuNumCommands */
-	return snprintf(page, PAGE_SIZE, "%lu\n",
-			atomic_long_read(&dev->num_cmds));
+	return snprintf(page, PAGE_SIZE, "%u\n", cmds);
 }
 
 static ssize_t target_stat_lu_read_mbytes_show(struct config_item *item,
 					       char *page)
 {
 	struct se_device *dev = to_stat_lu_dev(item);
+	struct se_dev_io_stats *stats;
+	unsigned int cpu;
+	u32 bytes = 0;
+
+	for_each_possible_cpu(cpu) {
+		stats = per_cpu_ptr(dev->stats, cpu);
+		bytes += stats->read_bytes;
+	}
 
 	/* scsiLuReadMegaBytes */
-	return snprintf(page, PAGE_SIZE, "%lu\n",
-			atomic_long_read(&dev->read_bytes) >> 20);
+	return snprintf(page, PAGE_SIZE, "%u\n", bytes >> 20);
 }
 
 static ssize_t target_stat_lu_write_mbytes_show(struct config_item *item,
 						char *page)
 {
 	struct se_device *dev = to_stat_lu_dev(item);
+	struct se_dev_io_stats *stats;
+	unsigned int cpu;
+	u32 bytes = 0;
+
+	for_each_possible_cpu(cpu) {
+		stats = per_cpu_ptr(dev->stats, cpu);
+		bytes += stats->write_bytes;
+	}
 
 	/* scsiLuWrittenMegaBytes */
-	return snprintf(page, PAGE_SIZE, "%lu\n",
-			atomic_long_read(&dev->write_bytes) >> 20);
+	return snprintf(page, PAGE_SIZE, "%u\n", bytes >> 20);
 }
 
 static ssize_t target_stat_lu_resets_show(struct config_item *item, char *page)
···
 {
 	struct se_lun_acl *lacl = auth_to_lacl(item);
 	struct se_node_acl *nacl = lacl->se_lun_nacl;
+	struct se_dev_entry_io_stats *stats;
 	struct se_dev_entry *deve;
+	unsigned int cpu;
 	ssize_t ret;
+	u32 cmds = 0;
 
 	rcu_read_lock();
 	deve = target_nacl_find_deve(nacl, lacl->mapped_lun);
···
 		rcu_read_unlock();
 		return -ENODEV;
 	}
+
+	for_each_possible_cpu(cpu) {
+		stats = per_cpu_ptr(deve->stats, cpu);
+		cmds += stats->total_cmds;
+	}
+
 	/* scsiAuthIntrOutCommands */
-	ret = snprintf(page, PAGE_SIZE, "%lu\n",
-		       atomic_long_read(&deve->total_cmds));
+	ret = snprintf(page, PAGE_SIZE, "%u\n", cmds);
 	rcu_read_unlock();
 	return ret;
 }
···
 {
 	struct se_lun_acl *lacl = auth_to_lacl(item);
 	struct se_node_acl *nacl = lacl->se_lun_nacl;
+	struct se_dev_entry_io_stats *stats;
 	struct se_dev_entry *deve;
+	unsigned int cpu;
 	ssize_t ret;
+	u32 bytes = 0;
 
 	rcu_read_lock();
 	deve = target_nacl_find_deve(nacl, lacl->mapped_lun);
···
 		rcu_read_unlock();
 		return -ENODEV;
 	}
+
+	for_each_possible_cpu(cpu) {
+		stats = per_cpu_ptr(deve->stats, cpu);
+		bytes += stats->read_bytes;
+	}
+
 	/* scsiAuthIntrReadMegaBytes */
-	ret = snprintf(page, PAGE_SIZE, "%u\n",
-		       (u32)(atomic_long_read(&deve->read_bytes) >> 20));
+	ret = snprintf(page, PAGE_SIZE, "%u\n", bytes >> 20);
 	rcu_read_unlock();
 	return ret;
 }
···
 {
 	struct se_lun_acl *lacl = auth_to_lacl(item);
 	struct se_node_acl *nacl = lacl->se_lun_nacl;
+	struct se_dev_entry_io_stats *stats;
 	struct se_dev_entry *deve;
+	unsigned int cpu;
 	ssize_t ret;
+	u32 bytes = 0;
 
 	rcu_read_lock();
 	deve = target_nacl_find_deve(nacl, lacl->mapped_lun);
···
 		rcu_read_unlock();
 		return -ENODEV;
 	}
+
+	for_each_possible_cpu(cpu) {
+		stats = per_cpu_ptr(deve->stats, cpu);
+		bytes += stats->write_bytes;
+	}
+
 	/* scsiAuthIntrWrittenMegaBytes */
-	ret = snprintf(page, PAGE_SIZE, "%u\n",
-		       (u32)(atomic_long_read(&deve->write_bytes) >> 20));
+	ret = snprintf(page, PAGE_SIZE, "%u\n", bytes >> 20);
 	rcu_read_unlock();
 	return ret;
 }
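The conversion above trades one globally contended `atomic_long_t` per counter for a slot per CPU that writers bump locklessly and readers sum on demand. A userspace sketch of that read side (the fixed `NR_CPUS`, array, and helper names are stand-ins for the kernel's `alloc_percpu`/`per_cpu_ptr` machinery), mirroring the `for_each_possible_cpu()` loops the diff adds:

```c
#include <assert.h>

#define NR_CPUS 4

/* Stand-in for the per-CPU counters: each CPU increments its own slot;
 * the sysfs show path sums all slots. */
struct dev_io_stats {
	unsigned int total_cmds;
	unsigned long long read_bytes;
};

static struct dev_io_stats per_cpu_stats[NR_CPUS];

static void account_read_cmd(int cpu, unsigned long long bytes)
{
	per_cpu_stats[cpu].total_cmds++;
	per_cpu_stats[cpu].read_bytes += bytes;
}

/* Mirrors the summation loop in target_stat_lu_num_cmds_show(). */
static unsigned int total_cmds(void)
{
	unsigned int cmds = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		cmds += per_cpu_stats[cpu].total_cmds;
	return cmds;
}

/* Mirrors target_stat_lu_read_mbytes_show(): sum bytes, then shift to MiB. */
static unsigned long long total_read_mb(void)
{
	unsigned long long bytes = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		bytes += per_cpu_stats[cpu].read_bytes;
	return bytes >> 20;
}
```

The design choice is the usual per-CPU trade-off: the hot I/O path gets cheaper, while the rarely read stats files pay a small O(NR_CPUS) cost.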
+64 -61
drivers/target/target_core_transport.c
···
 static bool target_handle_task_attr(struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
+	unsigned long flags;
 
 	if (dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH)
 		return false;
···
 	 */
 	switch (cmd->sam_task_attr) {
 	case TCM_HEAD_TAG:
-		atomic_inc_mb(&dev->non_ordered);
 		pr_debug("Added HEAD_OF_QUEUE for CDB: 0x%02x\n",
 			 cmd->t_task_cdb[0]);
 		return false;
 	case TCM_ORDERED_TAG:
-		atomic_inc_mb(&dev->delayed_cmd_count);
-
 		pr_debug("Added ORDERED for CDB: 0x%02x to ordered list\n",
 			 cmd->t_task_cdb[0]);
 		break;
···
 		/*
 		 * For SIMPLE and UNTAGGED Task Attribute commands
 		 */
-		atomic_inc_mb(&dev->non_ordered);
-
-		if (atomic_read(&dev->delayed_cmd_count) == 0)
+retry:
+		if (percpu_ref_tryget_live(&dev->non_ordered))
 			return false;
+
 		break;
 	}
 
-	if (cmd->sam_task_attr != TCM_ORDERED_TAG) {
-		atomic_inc_mb(&dev->delayed_cmd_count);
-		/*
-		 * We will account for this when we dequeue from the delayed
-		 * list.
-		 */
-		atomic_dec_mb(&dev->non_ordered);
+	spin_lock_irqsave(&dev->delayed_cmd_lock, flags);
+	if (cmd->sam_task_attr == TCM_SIMPLE_TAG &&
+	    !percpu_ref_is_dying(&dev->non_ordered)) {
+		spin_unlock_irqrestore(&dev->delayed_cmd_lock, flags);
+		/* We raced with the last ordered completion so retry. */
+		goto retry;
+	} else if (!percpu_ref_is_dying(&dev->non_ordered)) {
+		percpu_ref_kill(&dev->non_ordered);
 	}
 
-	spin_lock_irq(&cmd->t_state_lock);
+	spin_lock(&cmd->t_state_lock);
 	cmd->transport_state &= ~CMD_T_SENT;
-	spin_unlock_irq(&cmd->t_state_lock);
+	spin_unlock(&cmd->t_state_lock);
 
-	spin_lock(&dev->delayed_cmd_lock);
 	list_add_tail(&cmd->se_delayed_node, &dev->delayed_cmd_list);
-	spin_unlock(&dev->delayed_cmd_lock);
+	spin_unlock_irqrestore(&dev->delayed_cmd_lock, flags);
 
 	pr_debug("Added CDB: 0x%02x Task Attr: 0x%02x to delayed CMD list\n",
 		 cmd->t_task_cdb[0], cmd->sam_task_attr);
···
 	while (!dev->ordered_sync_in_progress) {
 		struct se_cmd *cmd;
 
-		if (list_empty(&dev->delayed_cmd_list))
+		/*
+		 * We can be woken up early/late due to races or the
+		 * extra wake up we do when adding commands to the list.
+		 * We check for both cases here.
+		 */
+		if (list_empty(&dev->delayed_cmd_list) ||
+		    !percpu_ref_is_zero(&dev->non_ordered))
 			break;
 
 		cmd = list_entry(dev->delayed_cmd_list.next,
 				 struct se_cmd, se_delayed_node);
-
-		if (cmd->sam_task_attr == TCM_ORDERED_TAG) {
-			/*
-			 * Check if we started with:
-			 * [ordered] [simple] [ordered]
-			 * and we are now at the last ordered so we have to wait
-			 * for the simple cmd.
-			 */
-			if (atomic_read(&dev->non_ordered) > 0)
-				break;
-
-			dev->ordered_sync_in_progress = true;
-		}
-
-		list_del(&cmd->se_delayed_node);
-		atomic_dec_mb(&dev->delayed_cmd_count);
-		spin_unlock(&dev->delayed_cmd_lock);
-
-		if (cmd->sam_task_attr != TCM_ORDERED_TAG)
-			atomic_inc_mb(&dev->non_ordered);
-
+		cmd->se_cmd_flags |= SCF_TASK_ORDERED_SYNC;
 		cmd->transport_state |= CMD_T_SENT;
 
-		__target_execute_cmd(cmd, true);
+		dev->ordered_sync_in_progress = true;
 
+		list_del(&cmd->se_delayed_node);
+		spin_unlock(&dev->delayed_cmd_lock);
+
+		__target_execute_cmd(cmd, true);
 		spin_lock(&dev->delayed_cmd_lock);
 	}
 	spin_unlock(&dev->delayed_cmd_lock);
+}
+
+static void transport_complete_ordered_sync(struct se_cmd *cmd)
+{
+	struct se_device *dev = cmd->se_dev;
+	unsigned long flags;
+
+	spin_lock_irqsave(&dev->delayed_cmd_lock, flags);
+	dev->dev_cur_ordered_id++;
+
+	pr_debug("Incremented dev_cur_ordered_id: %u for type %d\n",
+		 dev->dev_cur_ordered_id, cmd->sam_task_attr);
+
+	dev->ordered_sync_in_progress = false;
+
+	if (list_empty(&dev->delayed_cmd_list))
+		percpu_ref_resurrect(&dev->non_ordered);
+	else
+		schedule_work(&dev->delayed_cmd_work);
+
+	spin_unlock_irqrestore(&dev->delayed_cmd_lock, flags);
 }
 
 /*
···
 		return;
 
 	if (!(cmd->se_cmd_flags & SCF_TASK_ATTR_SET))
-		goto restart;
+		return;
 
-	if (cmd->sam_task_attr == TCM_SIMPLE_TAG) {
-		atomic_dec_mb(&dev->non_ordered);
-		dev->dev_cur_ordered_id++;
-	} else if (cmd->sam_task_attr == TCM_HEAD_TAG) {
-		atomic_dec_mb(&dev->non_ordered);
-		dev->dev_cur_ordered_id++;
-		pr_debug("Incremented dev_cur_ordered_id: %u for HEAD_OF_QUEUE\n",
-			 dev->dev_cur_ordered_id);
-	} else if (cmd->sam_task_attr == TCM_ORDERED_TAG) {
-		spin_lock(&dev->delayed_cmd_lock);
-		dev->ordered_sync_in_progress = false;
-		spin_unlock(&dev->delayed_cmd_lock);
-
-		dev->dev_cur_ordered_id++;
-		pr_debug("Incremented dev_cur_ordered_id: %u for ORDERED\n",
-			 dev->dev_cur_ordered_id);
-	}
 	cmd->se_cmd_flags &= ~SCF_TASK_ATTR_SET;
 
-restart:
-	if (atomic_read(&dev->delayed_cmd_count) > 0)
-		schedule_work(&dev->delayed_cmd_work);
+	if (cmd->se_cmd_flags & SCF_TASK_ORDERED_SYNC) {
+		transport_complete_ordered_sync(cmd);
+		return;
+	}
+
+	switch (cmd->sam_task_attr) {
+	case TCM_SIMPLE_TAG:
+		percpu_ref_put(&dev->non_ordered);
+		break;
+	case TCM_ORDERED_TAG:
+		/* All ordered should have been executed as sync */
+		WARN_ON(1);
+		break;
+	}
 }
 
 static void transport_complete_qf(struct se_cmd *cmd)
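The reworked code above gates SIMPLE commands on a `percpu_ref`: each SIMPLE command takes a live reference, an ORDERED command kills the ref and is dispatched only once the ref drains to zero, and its completion resurrects the ref. A toy single-threaded userspace analogue (plain C11 atomics standing in for the `percpu_ref` API, so this is illustrative only, not the kernel interface and without its RCU-based grace periods):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy analogue of dev->non_ordered: a count of in-flight SIMPLE commands
 * plus a "dying" flag that an ORDERED command raises. */
static atomic_int nr_inflight;
static atomic_bool dying;

/* Like percpu_ref_tryget_live(): fail once an ORDERED command has killed
 * the ref, re-checking after the increment to close the obvious race. */
static bool nonordered_tryget_live(void)
{
	if (atomic_load(&dying))
		return false;
	atomic_fetch_add(&nr_inflight, 1);
	if (atomic_load(&dying)) {	/* lost a race with kill */
		atomic_fetch_sub(&nr_inflight, 1);
		return false;
	}
	return true;
}

static void nonordered_put(void)      { atomic_fetch_sub(&nr_inflight, 1); }
static void nonordered_kill(void)     { atomic_store(&dying, true); }
static bool nonordered_is_zero(void)  { return atomic_load(&nr_inflight) == 0; }
static void nonordered_resurrect(void){ atomic_store(&dying, false); }
```

The sequence matches the diff's flow: SIMPLE commands `tryget` and run concurrently; an ORDERED command `kill`s the gate, waits for `is_zero`, runs alone, then `resurrect`s so SIMPLE traffic resumes.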
-6
drivers/ufs/core/ufs-mcq.c
···
 	int tag = scsi_cmd_to_rq(cmd)->tag;
 	struct ufshcd_lrb *lrbp = &hba->lrb[tag];
 	struct ufs_hw_queue *hwq;
-	unsigned long flags;
 	int err;
 
 	/* Skip task abort in case previous aborts failed and report failure */
···
 		lrbp->req_abort_skip = true;
 		return FAILED;
 	}
-
-	spin_lock_irqsave(&hwq->cq_lock, flags);
-	if (ufshcd_cmd_inflight(lrbp->cmd))
-		ufshcd_release_scsi_cmd(hba, lrbp);
-	spin_unlock_irqrestore(&hwq->cq_lock, flags);
 
 	return SUCCESS;
 }
+133
drivers/ufs/core/ufs-sysfs.c
···
 	}
 }
 
+static const char *ufs_wb_resize_hint_to_string(enum wb_resize_hint hint)
+{
+	switch (hint) {
+	case WB_RESIZE_HINT_KEEP:
+		return "keep";
+	case WB_RESIZE_HINT_DECREASE:
+		return "decrease";
+	case WB_RESIZE_HINT_INCREASE:
+		return "increase";
+	default:
+		return "unknown";
+	}
+}
+
+static const char *ufs_wb_resize_status_to_string(enum wb_resize_status status)
+{
+	switch (status) {
+	case WB_RESIZE_STATUS_IDLE:
+		return "idle";
+	case WB_RESIZE_STATUS_IN_PROGRESS:
+		return "in_progress";
+	case WB_RESIZE_STATUS_COMPLETE_SUCCESS:
+		return "complete_success";
+	case WB_RESIZE_STATUS_GENERAL_FAILURE:
+		return "general_failure";
+	default:
+		return "unknown";
+	}
+}
+
 static const char *ufshcd_uic_link_state_to_string(
 			enum uic_link_state state)
 {
···
 	return count;
 }
 
+static const char * const wb_resize_en_mode[] = {
+	[WB_RESIZE_EN_IDLE] = "idle",
+	[WB_RESIZE_EN_DECREASE] = "decrease",
+	[WB_RESIZE_EN_INCREASE] = "increase",
+};
+
+static ssize_t wb_resize_enable_store(struct device *dev,
+				      struct device_attribute *attr,
+				      const char *buf, size_t count)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+	int mode;
+	ssize_t res;
+
+	if (!ufshcd_is_wb_allowed(hba) || !hba->dev_info.wb_enabled
+		|| !hba->dev_info.b_presrv_uspc_en
+		|| !(hba->dev_info.ext_wb_sup & UFS_DEV_WB_BUF_RESIZE))
+		return -EOPNOTSUPP;
+
+	mode = sysfs_match_string(wb_resize_en_mode, buf);
+	if (mode < 0)
+		return -EINVAL;
+
+	down(&hba->host_sem);
+	if (!ufshcd_is_user_access_allowed(hba)) {
+		res = -EBUSY;
+		goto out;
+	}
+
+	ufshcd_rpm_get_sync(hba);
+	res = ufshcd_wb_set_resize_en(hba, mode);
+	ufshcd_rpm_put_sync(hba);
+
+out:
+	up(&hba->host_sem);
+	return res < 0 ? res : count;
+}
+
 /**
  * pm_qos_enable_show - sysfs handler to show pm qos enable value
  * @dev: device associated with the UFS controller
···
 static DEVICE_ATTR_RW(wb_on);
 static DEVICE_ATTR_RW(enable_wb_buf_flush);
 static DEVICE_ATTR_RW(wb_flush_threshold);
+static DEVICE_ATTR_WO(wb_resize_enable);
 static DEVICE_ATTR_RW(rtc_update_ms);
 static DEVICE_ATTR_RW(pm_qos_enable);
 static DEVICE_ATTR_RO(critical_health);
···
 	&dev_attr_wb_on.attr,
 	&dev_attr_enable_wb_buf_flush.attr,
 	&dev_attr_wb_flush_threshold.attr,
+	&dev_attr_wb_resize_enable.attr,
 	&dev_attr_rtc_update_ms.attr,
 	&dev_attr_pm_qos_enable.attr,
 	&dev_attr_critical_health.attr,
···
 		idn <= QUERY_ATTR_IDN_CURR_WB_BUFF_SIZE;
 }
 
+static int wb_read_resize_attrs(struct ufs_hba *hba,
+			enum attr_idn idn, u32 *attr_val)
+{
+	u8 index = 0;
+	int ret;
+
+	if (!ufshcd_is_wb_allowed(hba) || !hba->dev_info.wb_enabled
+		|| !hba->dev_info.b_presrv_uspc_en
+		|| !(hba->dev_info.ext_wb_sup & UFS_DEV_WB_BUF_RESIZE))
+		return -EOPNOTSUPP;
+
+	down(&hba->host_sem);
+	if (!ufshcd_is_user_access_allowed(hba)) {
+		up(&hba->host_sem);
+		return -EBUSY;
+	}
+
+	index = ufshcd_wb_get_query_index(hba);
+	ufshcd_rpm_get_sync(hba);
+	ret = ufshcd_query_attr(hba, UPIU_QUERY_OPCODE_READ_ATTR,
+			idn, index, 0, attr_val);
+	ufshcd_rpm_put_sync(hba);
+
+	up(&hba->host_sem);
+	return ret;
+}
+
+static ssize_t wb_resize_hint_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+	int ret;
+	u32 value;
+
+	ret = wb_read_resize_attrs(hba,
+			QUERY_ATTR_IDN_WB_BUF_RESIZE_HINT, &value);
+	if (ret)
+		return ret;
+
+	return sysfs_emit(buf, "%s\n", ufs_wb_resize_hint_to_string(value));
+}
+
+static DEVICE_ATTR_RO(wb_resize_hint);
+
+static ssize_t wb_resize_status_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+	int ret;
+	u32 value;
+
+	ret = wb_read_resize_attrs(hba,
+			QUERY_ATTR_IDN_WB_BUF_RESIZE_STATUS, &value);
+	if (ret)
+		return ret;
+
+	return sysfs_emit(buf, "%s\n", ufs_wb_resize_status_to_string(value));
+}
+
+static DEVICE_ATTR_RO(wb_resize_status);
+
 #define UFS_ATTRIBUTE(_name, _uname)					\
 static ssize_t _name##_show(struct device *dev,				\
 	struct device_attribute *attr, char *buf)			\
···
 	&dev_attr_wb_avail_buf.attr,
 	&dev_attr_wb_life_time_est.attr,
 	&dev_attr_wb_cur_buf.attr,
+	&dev_attr_wb_resize_hint.attr,
+	&dev_attr_wb_resize_status.attr,
 	NULL,
 };
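`wb_resize_enable_store()` above maps the token written to the sysfs file onto an enum index with `sysfs_match_string()`, which also tolerates the trailing newline that `echo` appends. A userspace sketch of that lookup (the hand-rolled `match_mode()` is a stand-in for the kernel helper):

```c
#include <assert.h>
#include <string.h>

enum wb_resize_en {
	WB_RESIZE_EN_IDLE,
	WB_RESIZE_EN_DECREASE,
	WB_RESIZE_EN_INCREASE,
};

/* Same table shape as the driver's wb_resize_en_mode[]. */
static const char * const wb_resize_en_mode[] = {
	[WB_RESIZE_EN_IDLE]     = "idle",
	[WB_RESIZE_EN_DECREASE] = "decrease",
	[WB_RESIZE_EN_INCREASE] = "increase",
};

/* Like sysfs_match_string(): compare up to a trailing '\n', return the
 * matching index or a negative value. */
static int match_mode(const char *buf)
{
	size_t n = strcspn(buf, "\n");

	for (int i = 0; i < 3; i++)
		if (strlen(wb_resize_en_mode[i]) == n &&
		    strncmp(buf, wb_resize_en_mode[i], n) == 0)
			return i;
	return -1;
}
```

So `echo decrease > wb_resize_enable` resolves to `WB_RESIZE_EN_DECREASE` before the driver writes the attribute to the device.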
+81 -22
drivers/ufs/core/ufshcd.c
···
 /* UIC command timeout, unit: ms */
 enum {
 	UIC_CMD_TIMEOUT_DEFAULT	= 500,
-	UIC_CMD_TIMEOUT_MAX	= 2000,
+	UIC_CMD_TIMEOUT_MAX	= 5000,
 };
 /* NOP OUT retries waiting for NOP IN response */
 #define NOP_OUT_RETRIES    10
···
 /* Query request retries */
 #define QUERY_REQ_RETRIES 3
 /* Query request timeout */
-#define QUERY_REQ_TIMEOUT 1500 /* 1.5 seconds */
+enum {
+	QUERY_REQ_TIMEOUT_MIN     = 1,
+	QUERY_REQ_TIMEOUT_DEFAULT = 1500,
+	QUERY_REQ_TIMEOUT_MAX     = 30000
+};
 
 /* Advanced RPMB request timeout */
 #define ADVANCED_RPMB_REQ_TIMEOUT  3000 /* 3 seconds */
···
 
 module_param_cb(uic_cmd_timeout, &uic_cmd_timeout_ops, &uic_cmd_timeout, 0644);
 MODULE_PARM_DESC(uic_cmd_timeout,
-		 "UFS UIC command timeout in milliseconds. Defaults to 500ms. Supported values range from 500ms to 2 seconds inclusively");
+		 "UFS UIC command timeout in milliseconds. Defaults to 500ms. Supported values range from 500ms to 5 seconds inclusively");
+
+static unsigned int dev_cmd_timeout = QUERY_REQ_TIMEOUT_DEFAULT;
+
+static int dev_cmd_timeout_set(const char *val, const struct kernel_param *kp)
+{
+	return param_set_uint_minmax(val, kp, QUERY_REQ_TIMEOUT_MIN,
+				     QUERY_REQ_TIMEOUT_MAX);
+}
+
+static const struct kernel_param_ops dev_cmd_timeout_ops = {
+	.set = dev_cmd_timeout_set,
+	.get = param_get_uint,
+};
+
+module_param_cb(dev_cmd_timeout, &dev_cmd_timeout_ops, &dev_cmd_timeout, 0644);
+MODULE_PARM_DESC(dev_cmd_timeout,
+		 "UFS Device command timeout in milliseconds. Defaults to 1.5s. Supported values range from 1ms to 30 seconds inclusively");
 
 #define ufshcd_toggle_vreg(_dev, _vreg, _on)				\
 	({								\
···
 	u8 opcode = 0, group_id = 0;
 	u32 doorbell = 0;
 	u32 intr;
-	int hwq_id = -1;
+	u32 hwq_id = 0;
 	struct ufshcd_lrb *lrbp = &hba->lrb[tag];
 	struct scsi_cmnd *cmd = lrbp->cmd;
 	struct request *rq = scsi_cmd_to_rq(cmd);
···
 		"last_hibern8_exit_tstamp at %lld us, hibern8_exit_cnt=%d\n",
 		div_u64(hba->ufs_stats.last_hibern8_exit_tstamp, 1000),
 		hba->ufs_stats.hibern8_exit_cnt);
-	dev_err(hba->dev, "last intr at %lld us, last intr status=0x%x\n",
-		div_u64(hba->ufs_stats.last_intr_ts, 1000),
-		hba->ufs_stats.last_intr_status);
 	dev_err(hba->dev, "error handling flags=0x%x, req. abort count=%d\n",
 		hba->eh_flags, hba->req_abort_count);
 	dev_err(hba->dev, "hba->ufs_version=0x%x, Host capabilities=0x%x, caps=0x%x\n",
···
 	struct ufs_query_req *request = NULL;
 	struct ufs_query_res *response = NULL;
 	int err, selector = 0;
-	int timeout = QUERY_REQ_TIMEOUT;
+	int timeout = dev_cmd_timeout;
 
 	BUG_ON(!hba);
···
 		goto out_unlock;
 	}
 
-	err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
+	err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, dev_cmd_timeout);
 
 	if (err) {
 		dev_err(hba->dev, "%s: opcode 0x%.2x for idn %d failed, index %d, err = %d\n",
···
 		goto out_unlock;
 	}
 
-	err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
+	err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, dev_cmd_timeout);
 
 	if (err) {
 		dev_err(hba->dev, "%s: opcode 0x%.2x for idn %d failed, index %d, err = %d\n",
···
 
 	request->query_func = UPIU_QUERY_FUNC_STANDARD_READ_REQUEST;
 
-	err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
+	err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, dev_cmd_timeout);
 
 	if (err) {
 		dev_err(hba->dev, "%s: failed to read device level exception %d\n",
···
 	hba->dev_info.wb_buf_flush_enabled = enable;
 	dev_dbg(hba->dev, "%s: WB-Buf Flush %s\n",
 		__func__, enable ? "enabled" : "disabled");
+
+	return ret;
+}
+
+int ufshcd_wb_set_resize_en(struct ufs_hba *hba, enum wb_resize_en en_mode)
+{
+	int ret;
+	u8 index;
+
+	index = ufshcd_wb_get_query_index(hba);
+	ret = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_WRITE_ATTR,
+				QUERY_ATTR_IDN_WB_BUF_RESIZE_EN, index, 0, &en_mode);
+	if (ret)
+		dev_err(hba->dev, "%s: Enable WB buf resize operation failed %d\n",
+			__func__, ret);
 
 	return ret;
 }
···
 	hba = container_of(work, struct ufs_hba, eh_work);
 
 	dev_info(hba->dev,
-		 "%s started; HBA state %s; powered %d; shutting down %d; saved_err = %d; saved_uic_err = %d; force_reset = %d%s\n",
+		 "%s started; HBA state %s; powered %d; shutting down %d; saved_err = 0x%x; saved_uic_err = 0x%x; force_reset = %d%s\n",
 		 __func__, ufshcd_state_name[hba->ufshcd_state],
 		 hba->is_powered, hba->shutting_down, hba->saved_err,
 		 hba->saved_uic_err, hba->force_reset,
···
 }
 
 /**
- * ufshcd_intr - Main interrupt service routine
+ * ufshcd_threaded_intr - Threaded interrupt service routine
  * @irq: irq number
  * @__hba: pointer to adapter instance
  *
···
  * IRQ_HANDLED - If interrupt is valid
  * IRQ_NONE - If invalid interrupt
  */
-static irqreturn_t ufshcd_intr(int irq, void *__hba)
+static irqreturn_t ufshcd_threaded_intr(int irq, void *__hba)
 {
-	u32 intr_status, enabled_intr_status = 0;
7014 + u32 last_intr_status, intr_status, enabled_intr_status = 0; 7048 7015 irqreturn_t retval = IRQ_NONE; 7049 7016 struct ufs_hba *hba = __hba; 7050 7017 int retries = hba->nutrs; 7051 7018 7052 - intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS); 7053 - hba->ufs_stats.last_intr_status = intr_status; 7054 - hba->ufs_stats.last_intr_ts = local_clock(); 7019 + last_intr_status = intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS); 7055 7020 7056 7021 /* 7057 7022 * There could be max of hba->nutrs reqs in flight and in worst case ··· 7073 7042 dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x (0x%08x, 0x%08x)\n", 7074 7043 __func__, 7075 7044 intr_status, 7076 - hba->ufs_stats.last_intr_status, 7045 + last_intr_status, 7077 7046 enabled_intr_status); 7078 7047 ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: "); 7079 7048 } 7080 7049 7081 7050 return retval; 7051 + } 7052 + 7053 + /** 7054 + * ufshcd_intr - Main interrupt service routine 7055 + * @irq: irq number 7056 + * @__hba: pointer to adapter instance 7057 + * 7058 + * Return: 7059 + * IRQ_HANDLED - If interrupt is valid 7060 + * IRQ_WAKE_THREAD - If handling is moved to threaded handled 7061 + * IRQ_NONE - If invalid interrupt 7062 + */ 7063 + static irqreturn_t ufshcd_intr(int irq, void *__hba) 7064 + { 7065 + struct ufs_hba *hba = __hba; 7066 + 7067 + /* Move interrupt handling to thread when MCQ & ESI are not enabled */ 7068 + if (!hba->mcq_enabled || !hba->mcq_esi_enabled) 7069 + return IRQ_WAKE_THREAD; 7070 + 7071 + /* Directly handle interrupts since MCQ ESI handlers does the hard job */ 7072 + return ufshcd_sl_intr(hba, ufshcd_readl(hba, REG_INTERRUPT_STATUS) & 7073 + ufshcd_readl(hba, REG_INTERRUPT_ENABLE)); 7082 7074 } 7083 7075 7084 7076 static int ufshcd_clear_tm_cmd(struct ufs_hba *hba, int tag) ··· 7299 7245 * bound to fail since dev_cmd.query and dev_cmd.type were left empty. 7300 7246 * read the response directly ignoring all errors. 
7301 7247 */ 7302 - ufshcd_issue_dev_cmd(hba, lrbp, tag, QUERY_REQ_TIMEOUT); 7248 + ufshcd_issue_dev_cmd(hba, lrbp, tag, dev_cmd_timeout); 7303 7249 7304 7250 /* just copy the upiu response as it is */ 7305 7251 memcpy(rsp_upiu, lrbp->ucd_rsp_ptr, sizeof(*rsp_upiu)); ··· 8161 8107 */ 8162 8108 dev_info->wb_buffer_type = desc_buf[DEVICE_DESC_PARAM_WB_TYPE]; 8163 8109 8110 + dev_info->ext_wb_sup = get_unaligned_be16(desc_buf + 8111 + DEVICE_DESC_PARAM_EXT_WB_SUP); 8112 + 8164 8113 dev_info->b_presrv_uspc_en = 8165 8114 desc_buf[DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN]; 8166 8115 ··· 8735 8678 8736 8679 put_unaligned_be64(ktime_get_real_ns(), &upiu_data->osf3); 8737 8680 8738 - err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT); 8681 + err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, dev_cmd_timeout); 8739 8682 8740 8683 if (err) 8741 8684 dev_err(hba->dev, "%s: failed to set timestamp %d\n", ··· 8850 8793 u32 intrs; 8851 8794 8852 8795 ret = ufshcd_mcq_vops_config_esi(hba); 8796 + hba->mcq_esi_enabled = !ret; 8853 8797 dev_info(hba->dev, "ESI %sconfigured\n", ret ? "is not " : ""); 8854 8798 8855 8799 intrs = UFSHCD_ENABLE_MCQ_INTRS; ··· 10712 10654 ufshcd_readl(hba, REG_INTERRUPT_ENABLE); 10713 10655 10714 10656 /* IRQ registration */ 10715 - err = devm_request_irq(dev, irq, ufshcd_intr, IRQF_SHARED, UFSHCD, hba); 10657 + err = devm_request_threaded_irq(dev, irq, ufshcd_intr, ufshcd_threaded_intr, 10658 + IRQF_ONESHOT | IRQF_SHARED, UFSHCD, hba); 10716 10659 if (err) { 10717 10660 dev_err(hba->dev, "request irq failed\n"); 10718 10661 goto out_disable;
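The ufshcd.c hunks above replace the fixed `QUERY_REQ_TIMEOUT` macro with a writable `dev_cmd_timeout` module parameter validated by `param_set_uint_minmax()`. The sketch below is a userspace model of that range check only (the real kernel helper also parses the string and stores into `*kp->arg`); `dev_cmd_timeout_check` is a hypothetical name, and the bounds are the ones the hunk defines (1..30000 ms, default 1500).

```c
#include <assert.h>
#include <errno.h>

/* Bounds from the QUERY_REQ_TIMEOUT_* enum added in the hunk above. */
#define QUERY_REQ_TIMEOUT_MIN 1u
#define QUERY_REQ_TIMEOUT_MAX 30000u

/* Userspace sketch of the clamp param_set_uint_minmax() enforces for
 * the new dev_cmd_timeout parameter: out-of-range writes are rejected
 * rather than silently clamped. */
static int dev_cmd_timeout_check(unsigned int val)
{
    if (val < QUERY_REQ_TIMEOUT_MIN || val > QUERY_REQ_TIMEOUT_MAX)
        return -ERANGE;
    return 0;
}
```

Because rejection (not clamping) is the semantic, `echo 0 > /sys/module/ufshcd_core/parameters/dev_cmd_timeout` would fail with `-ERANGE` while the stored value stays unchanged.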
+170 -11
drivers/ufs/host/ufs-qcom.c
··· 5 5 6 6 #include <linux/acpi.h> 7 7 #include <linux/clk.h> 8 + #include <linux/cleanup.h> 8 9 #include <linux/delay.h> 9 10 #include <linux/devfreq.h> 10 11 #include <linux/gpio/consumer.h> ··· 103 102 [MODE_MAX][0][0] = { 7643136, 819200 }, 104 103 }; 105 104 105 + static const struct { 106 + int nminor; 107 + char *prefix; 108 + } testbus_info[TSTBUS_MAX] = { 109 + [TSTBUS_UAWM] = {32, "TSTBUS_UAWM"}, 110 + [TSTBUS_UARM] = {32, "TSTBUS_UARM"}, 111 + [TSTBUS_TXUC] = {32, "TSTBUS_TXUC"}, 112 + [TSTBUS_RXUC] = {32, "TSTBUS_RXUC"}, 113 + [TSTBUS_DFC] = {32, "TSTBUS_DFC"}, 114 + [TSTBUS_TRLUT] = {32, "TSTBUS_TRLUT"}, 115 + [TSTBUS_TMRLUT] = {32, "TSTBUS_TMRLUT"}, 116 + [TSTBUS_OCSC] = {32, "TSTBUS_OCSC"}, 117 + [TSTBUS_UTP_HCI] = {32, "TSTBUS_UTP_HCI"}, 118 + [TSTBUS_COMBINED] = {32, "TSTBUS_COMBINED"}, 119 + [TSTBUS_WRAPPER] = {32, "TSTBUS_WRAPPER"}, 120 + [TSTBUS_UNIPRO] = {256, "TSTBUS_UNIPRO"}, 121 + }; 122 + 106 123 static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host); 107 124 static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, unsigned long freq); 108 125 ··· 192 173 193 174 profile->ll_ops = ufs_qcom_crypto_ops; 194 175 profile->max_dun_bytes_supported = 8; 195 - profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW; 176 + profile->key_types_supported = qcom_ice_get_supported_key_type(ice); 196 177 profile->dev = dev; 197 178 198 179 /* ··· 240 221 struct ufs_qcom_host *host = ufshcd_get_variant(hba); 241 222 int err; 242 223 243 - /* Only AES-256-XTS has been tested so far. 
*/ 244 - if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS) 245 - return -EOPNOTSUPP; 246 - 247 224 ufshcd_hold(hba); 248 - err = qcom_ice_program_key(host->ice, 249 - QCOM_ICE_CRYPTO_ALG_AES_XTS, 250 - QCOM_ICE_CRYPTO_KEY_SIZE_256, 251 - key->bytes, 252 - key->crypto_cfg.data_unit_size / 512, 253 - slot); 225 + err = qcom_ice_program_key(host->ice, slot, key); 254 226 ufshcd_release(hba); 255 227 return err; 256 228 } ··· 260 250 return err; 261 251 } 262 252 253 + static int ufs_qcom_ice_derive_sw_secret(struct blk_crypto_profile *profile, 254 + const u8 *eph_key, size_t eph_key_size, 255 + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]) 256 + { 257 + struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile); 258 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 259 + 260 + return qcom_ice_derive_sw_secret(host->ice, eph_key, eph_key_size, 261 + sw_secret); 262 + } 263 + 264 + static int ufs_qcom_ice_import_key(struct blk_crypto_profile *profile, 265 + const u8 *raw_key, size_t raw_key_size, 266 + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) 267 + { 268 + struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile); 269 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 270 + 271 + return qcom_ice_import_key(host->ice, raw_key, raw_key_size, lt_key); 272 + } 273 + 274 + static int ufs_qcom_ice_generate_key(struct blk_crypto_profile *profile, 275 + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) 276 + { 277 + struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile); 278 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 279 + 280 + return qcom_ice_generate_key(host->ice, lt_key); 281 + } 282 + 283 + static int ufs_qcom_ice_prepare_key(struct blk_crypto_profile *profile, 284 + const u8 *lt_key, size_t lt_key_size, 285 + u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]) 286 + { 287 + struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile); 288 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 289 + 290 + return 
qcom_ice_prepare_key(host->ice, lt_key, lt_key_size, eph_key); 291 + } 292 + 263 293 static const struct blk_crypto_ll_ops ufs_qcom_crypto_ops = { 264 294 .keyslot_program = ufs_qcom_ice_keyslot_program, 265 295 .keyslot_evict = ufs_qcom_ice_keyslot_evict, 296 + .derive_sw_secret = ufs_qcom_ice_derive_sw_secret, 297 + .import_key = ufs_qcom_ice_import_key, 298 + .generate_key = ufs_qcom_ice_generate_key, 299 + .prepare_key = ufs_qcom_ice_prepare_key, 266 300 }; 267 301 268 302 #else ··· 1663 1609 return 0; 1664 1610 } 1665 1611 1612 + static void ufs_qcom_dump_testbus(struct ufs_hba *hba) 1613 + { 1614 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 1615 + int i, j, nminor = 0, testbus_len = 0; 1616 + u32 *testbus __free(kfree) = NULL; 1617 + char *prefix; 1618 + 1619 + testbus = kmalloc_array(256, sizeof(u32), GFP_KERNEL); 1620 + if (!testbus) 1621 + return; 1622 + 1623 + for (j = 0; j < TSTBUS_MAX; j++) { 1624 + nminor = testbus_info[j].nminor; 1625 + prefix = testbus_info[j].prefix; 1626 + host->testbus.select_major = j; 1627 + testbus_len = nminor * sizeof(u32); 1628 + for (i = 0; i < nminor; i++) { 1629 + host->testbus.select_minor = i; 1630 + ufs_qcom_testbus_config(host); 1631 + testbus[i] = ufshcd_readl(hba, UFS_TEST_BUS); 1632 + } 1633 + print_hex_dump(KERN_ERR, prefix, DUMP_PREFIX_OFFSET, 1634 + 16, 4, testbus, testbus_len, false); 1635 + } 1636 + } 1637 + 1638 + static int ufs_qcom_dump_regs(struct ufs_hba *hba, size_t offset, size_t len, 1639 + const char *prefix, enum ufshcd_res id) 1640 + { 1641 + u32 *regs __free(kfree) = NULL; 1642 + size_t pos; 1643 + 1644 + if (offset % 4 != 0 || len % 4 != 0) 1645 + return -EINVAL; 1646 + 1647 + regs = kzalloc(len, GFP_ATOMIC); 1648 + if (!regs) 1649 + return -ENOMEM; 1650 + 1651 + for (pos = 0; pos < len; pos += 4) 1652 + regs[pos / 4] = readl(hba->res[id].base + offset + pos); 1653 + 1654 + print_hex_dump(KERN_ERR, prefix, 1655 + len > 4 ? 
DUMP_PREFIX_OFFSET : DUMP_PREFIX_NONE, 1656 + 16, 4, regs, len, false); 1657 + 1658 + return 0; 1659 + } 1660 + 1661 + static void ufs_qcom_dump_mcq_hci_regs(struct ufs_hba *hba) 1662 + { 1663 + struct dump_info { 1664 + size_t offset; 1665 + size_t len; 1666 + const char *prefix; 1667 + enum ufshcd_res id; 1668 + }; 1669 + 1670 + struct dump_info mcq_dumps[] = { 1671 + {0x0, 256 * 4, "MCQ HCI-0 ", RES_MCQ}, 1672 + {0x400, 256 * 4, "MCQ HCI-1 ", RES_MCQ}, 1673 + {0x0, 5 * 4, "MCQ VS-0 ", RES_MCQ_VS}, 1674 + {0x0, 256 * 4, "MCQ SQD-0 ", RES_MCQ_SQD}, 1675 + {0x400, 256 * 4, "MCQ SQD-1 ", RES_MCQ_SQD}, 1676 + {0x800, 256 * 4, "MCQ SQD-2 ", RES_MCQ_SQD}, 1677 + {0xc00, 256 * 4, "MCQ SQD-3 ", RES_MCQ_SQD}, 1678 + {0x1000, 256 * 4, "MCQ SQD-4 ", RES_MCQ_SQD}, 1679 + {0x1400, 256 * 4, "MCQ SQD-5 ", RES_MCQ_SQD}, 1680 + {0x1800, 256 * 4, "MCQ SQD-6 ", RES_MCQ_SQD}, 1681 + {0x1c00, 256 * 4, "MCQ SQD-7 ", RES_MCQ_SQD}, 1682 + }; 1683 + 1684 + for (int i = 0; i < ARRAY_SIZE(mcq_dumps); i++) { 1685 + ufs_qcom_dump_regs(hba, mcq_dumps[i].offset, mcq_dumps[i].len, 1686 + mcq_dumps[i].prefix, mcq_dumps[i].id); 1687 + cond_resched(); 1688 + } 1689 + } 1690 + 1666 1691 static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba) 1667 1692 { 1668 1693 u32 reg; 1669 1694 struct ufs_qcom_host *host; 1670 1695 1671 1696 host = ufshcd_get_variant(hba); 1697 + 1698 + dev_err(hba->dev, "HW_H8_ENTER_CNT=%d\n", ufshcd_readl(hba, REG_UFS_HW_H8_ENTER_CNT)); 1699 + dev_err(hba->dev, "HW_H8_EXIT_CNT=%d\n", ufshcd_readl(hba, REG_UFS_HW_H8_EXIT_CNT)); 1700 + 1701 + dev_err(hba->dev, "SW_H8_ENTER_CNT=%d\n", ufshcd_readl(hba, REG_UFS_SW_H8_ENTER_CNT)); 1702 + dev_err(hba->dev, "SW_H8_EXIT_CNT=%d\n", ufshcd_readl(hba, REG_UFS_SW_H8_EXIT_CNT)); 1703 + 1704 + dev_err(hba->dev, "SW_AFTER_HW_H8_ENTER_CNT=%d\n", 1705 + ufshcd_readl(hba, REG_UFS_SW_AFTER_HW_H8_ENTER_CNT)); 1672 1706 1673 1707 ufshcd_dump_regs(hba, REG_UFS_SYS1CLK_1US, 16 * 4, 1674 1708 "HCI Vendor Specific Registers "); ··· 1800 1658 1801 
1659 reg = ufs_qcom_get_debug_reg_offset(host, UFS_DBG_RD_REG_TMRLUT); 1802 1660 ufshcd_dump_regs(hba, reg, 9 * 4, "UFS_DBG_RD_REG_TMRLUT "); 1661 + 1662 + if (hba->mcq_enabled) { 1663 + reg = ufs_qcom_get_debug_reg_offset(host, UFS_RD_REG_MCQ); 1664 + ufshcd_dump_regs(hba, reg, 64 * 4, "HCI MCQ Debug Registers "); 1665 + } 1666 + 1667 + /* ensure below dumps occur only in task context due to blocking calls. */ 1668 + if (in_task()) { 1669 + /* Dump MCQ Host Vendor Specific Registers */ 1670 + if (hba->mcq_enabled) 1671 + ufs_qcom_dump_mcq_hci_regs(hba); 1672 + 1673 + /* voluntarily yield the CPU as we are dumping too much data */ 1674 + ufshcd_dump_regs(hba, UFS_TEST_BUS, 4, "UFS_TEST_BUS "); 1675 + cond_resched(); 1676 + ufs_qcom_dump_testbus(hba); 1677 + } 1803 1678 } 1804 1679 1805 1680 /**
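The new `ufs_qcom_dump_regs()` above snapshots MCQ register ranges as u32 words, so it first rejects any offset or length that is not 4-byte aligned. A minimal userspace sketch of just that validation (hypothetical helper name; the real function then loops with `readl()` and `print_hex_dump()`):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Sketch of the argument check in ufs_qcom_dump_regs(): registers are
 * read one 32-bit word at a time, so both the starting offset and the
 * total length must be multiples of 4 bytes. */
static int dump_regs_check(size_t offset, size_t len)
{
    if (offset % 4 != 0 || len % 4 != 0)
        return -EINVAL;
    return 0;
}
```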
+11
drivers/ufs/host/ufs-qcom.h
··· 50 50 */ 51 51 UFS_AH8_CFG = 0xFC, 52 52 53 + UFS_RD_REG_MCQ = 0xD00, 54 + 53 55 REG_UFS_MEM_ICE_CONFIG = 0x260C, 54 56 REG_UFS_MEM_ICE_NUM_CORE = 0x2664, 55 57 ··· 75 73 UFS_UFS_DBG_RD_PRDT_RAM = 0x1700, 76 74 UFS_UFS_DBG_RD_RESP_RAM = 0x1800, 77 75 UFS_UFS_DBG_RD_EDTL_RAM = 0x1900, 76 + }; 77 + 78 + /* QCOM UFS HC vendor specific Hibern8 count registers */ 79 + enum { 80 + REG_UFS_HW_H8_ENTER_CNT = 0x2700, 81 + REG_UFS_SW_H8_ENTER_CNT = 0x2704, 82 + REG_UFS_SW_AFTER_HW_H8_ENTER_CNT = 0x2708, 83 + REG_UFS_HW_H8_EXIT_CNT = 0x270C, 84 + REG_UFS_SW_H8_EXIT_CNT = 0x2710, 78 85 }; 79 86 80 87 enum {
+1 -2
include/scsi/scsi_proto.h
··· 346 346 347 347 /* GET STREAM STATUS parameter data */ 348 348 struct scsi_stream_status_header { 349 - __be32 len; /* length in bytes of stream_status[] array. */ 349 + __be32 len; /* length in bytes of following payload */ 350 350 u16 reserved; 351 351 __be16 number_of_open_streams; 352 - DECLARE_FLEX_ARRAY(struct scsi_stream_status, stream_status); 353 352 }; 354 353 355 354 static_assert(sizeof(struct scsi_stream_status_header) == 8);
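With the flexible `stream_status[]` member removed, `scsi_stream_status_header` is a plain 8-byte fixed-layout header and callers address the stream descriptors that follow it in the buffer themselves. A userspace model of the resulting layout, with the `__be32`/`__be16` endian types stood in by plain `uintN_t` (an assumption for the sake of a compilable sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the reworked GET STREAM STATUS header: no flex
 * array member, so sizeof() is well-defined and must remain 8, exactly
 * as the kernel's static_assert in scsi_proto.h requires. */
struct stream_status_header {
    uint32_t len;                    /* length in bytes of following payload */
    uint16_t reserved;
    uint16_t number_of_open_streams;
};

_Static_assert(sizeof(struct stream_status_header) == 8,
               "GET STREAM STATUS header must stay 8 bytes");
```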
+15 -19
include/soc/qcom/ice.h
··· 6 6 #ifndef __QCOM_ICE_H__ 7 7 #define __QCOM_ICE_H__ 8 8 9 + #include <linux/blk-crypto.h> 9 10 #include <linux/types.h> 10 11 11 12 struct qcom_ice; 12 13 13 - enum qcom_ice_crypto_key_size { 14 - QCOM_ICE_CRYPTO_KEY_SIZE_INVALID = 0x0, 15 - QCOM_ICE_CRYPTO_KEY_SIZE_128 = 0x1, 16 - QCOM_ICE_CRYPTO_KEY_SIZE_192 = 0x2, 17 - QCOM_ICE_CRYPTO_KEY_SIZE_256 = 0x3, 18 - QCOM_ICE_CRYPTO_KEY_SIZE_512 = 0x4, 19 - }; 20 - 21 - enum qcom_ice_crypto_alg { 22 - QCOM_ICE_CRYPTO_ALG_AES_XTS = 0x0, 23 - QCOM_ICE_CRYPTO_ALG_BITLOCKER_AES_CBC = 0x1, 24 - QCOM_ICE_CRYPTO_ALG_AES_ECB = 0x2, 25 - QCOM_ICE_CRYPTO_ALG_ESSIV_AES_CBC = 0x3, 26 - }; 27 - 28 14 int qcom_ice_enable(struct qcom_ice *ice); 29 15 int qcom_ice_resume(struct qcom_ice *ice); 30 16 int qcom_ice_suspend(struct qcom_ice *ice); 31 - int qcom_ice_program_key(struct qcom_ice *ice, 32 - u8 algorithm_id, u8 key_size, 33 - const u8 crypto_key[], u8 data_unit_size, 34 - int slot); 17 + int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot, 18 + const struct blk_crypto_key *blk_key); 35 19 int qcom_ice_evict_key(struct qcom_ice *ice, int slot); 20 + enum blk_crypto_key_type qcom_ice_get_supported_key_type(struct qcom_ice *ice); 21 + int qcom_ice_derive_sw_secret(struct qcom_ice *ice, 22 + const u8 *eph_key, size_t eph_key_size, 23 + u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]); 24 + int qcom_ice_generate_key(struct qcom_ice *ice, 25 + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); 26 + int qcom_ice_prepare_key(struct qcom_ice *ice, 27 + const u8 *lt_key, size_t lt_key_size, 28 + u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); 29 + int qcom_ice_import_key(struct qcom_ice *ice, 30 + const u8 *raw_key, size_t raw_key_size, 31 + u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]); 36 32 struct qcom_ice *devm_of_qcom_ice_get(struct device *dev); 37 33 38 34 #endif /* __QCOM_ICE_H__ */
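The ice.h hunk replaces the raw algorithm/key-size `qcom_ice_program_key()` signature with a `blk_crypto_key`-based one and adds the hardware-wrapped key helpers (generate, import, prepare, derive_sw_secret). The ordering those helpers imply can be sketched as a tiny state machine — this is an illustrative model of the lifecycle, not kernel code: a long-term key comes from generate or import, must be re-prepared into an ephemeral key after a power cycle, and only the ephemeral form is programmed into a keyslot.

```c
#include <assert.h>
#include <errno.h>

enum key_state { KEY_NONE, KEY_LONGTERM, KEY_EPHEMERAL };
enum key_op { OP_GENERATE, OP_IMPORT, OP_PREPARE, OP_PROGRAM };

/* Hypothetical lifecycle checker for HW-wrapped ICE keys. */
static int wrapped_key_step(enum key_state *st, enum key_op op)
{
    switch (op) {
    case OP_GENERATE:
    case OP_IMPORT:                 /* both yield a long-term (lt) key */
        *st = KEY_LONGTERM;
        return 0;
    case OP_PREPARE:                /* lt -> ephemeral, per power cycle */
        if (*st != KEY_LONGTERM && *st != KEY_EPHEMERAL)
            return -EINVAL;
        *st = KEY_EPHEMERAL;
        return 0;
    case OP_PROGRAM:                /* only ephemeral keys hit a keyslot */
        return *st == KEY_EPHEMERAL ? 0 : -EINVAL;
    }
    return -EINVAL;
}
```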
+17 -9
include/target/target_core_base.h
··· 157 157 SCF_USE_CPUID = (1 << 16), 158 158 SCF_TASK_ATTR_SET = (1 << 17), 159 159 SCF_TREAT_READ_AS_NORMAL = (1 << 18), 160 + SCF_TASK_ORDERED_SYNC = (1 << 19), 160 161 }; 161 162 162 163 /* ··· 670 669 struct se_ml_stat_grps ml_stat_grps; 671 670 }; 672 671 672 + struct se_dev_entry_io_stats { 673 + u32 total_cmds; 674 + u32 read_bytes; 675 + u32 write_bytes; 676 + }; 677 + 673 678 struct se_dev_entry { 674 679 u64 mapped_lun; 675 680 u64 pr_res_key; 676 681 u64 creation_time; 677 682 bool lun_access_ro; 678 683 u32 attach_count; 679 - atomic_long_t total_cmds; 680 - atomic_long_t read_bytes; 681 - atomic_long_t write_bytes; 684 + struct se_dev_entry_io_stats __percpu *stats; 682 685 /* Used for PR SPEC_I_PT=1 and REGISTER_AND_MOVE */ 683 686 struct kref pr_kref; 684 687 struct completion pr_comp; ··· 805 800 struct se_cmd_queue sq; 806 801 }; 807 802 803 + struct se_dev_io_stats { 804 + u32 total_cmds; 805 + u32 read_bytes; 806 + u32 write_bytes; 807 + }; 808 + 808 809 struct se_device { 809 810 /* Used for SAM Task Attribute ordering */ 810 811 u32 dev_cur_ordered_id; ··· 832 821 atomic_long_t num_resets; 833 822 atomic_long_t aborts_complete; 834 823 atomic_long_t aborts_no_task; 835 - atomic_long_t num_cmds; 836 - atomic_long_t read_bytes; 837 - atomic_long_t write_bytes; 824 + struct se_dev_io_stats __percpu *stats; 838 825 /* Active commands on this virtual SE device */ 839 - atomic_t non_ordered; 826 + struct percpu_ref non_ordered; 840 827 bool ordered_sync_in_progress; 841 - atomic_t delayed_cmd_count; 842 828 atomic_t dev_qf_count; 843 829 u32 export_count; 844 830 spinlock_t delayed_cmd_lock; ··· 898 890 u8 specific_timeout; 899 891 u16 nominal_timeout; 900 892 u16 recommended_timeout; 901 - bool (*enabled)(struct target_opcode_descriptor *descr, 893 + bool (*enabled)(const struct target_opcode_descriptor *descr, 902 894 struct se_cmd *cmd); 903 895 void (*update_usage_bits)(u8 *usage_bits, 904 896 struct se_device *dev);
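The target_core_base.h hunk converts the per-device and per-dev_entry `atomic_long_t` counters into `__percpu` stats structs: each CPU bumps a private slot without contending on a shared cacheline, and readers sum the slots on demand. A userspace sketch of that pattern, with a fixed slot array standing in for `alloc_percpu()` and `nr_cpu_ids` (all names below are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define NR_SLOTS 4   /* stand-in for nr_cpu_ids */

struct dev_io_stats {
    uint32_t total_cmds;
    uint32_t read_bytes;
    uint32_t write_bytes;
};

static struct dev_io_stats slots[NR_SLOTS];

/* Writer side: each "CPU" updates only its own slot, no atomics. */
static void account_cmd(int cpu, uint32_t rd, uint32_t wr)
{
    slots[cpu].total_cmds++;
    slots[cpu].read_bytes  += rd;
    slots[cpu].write_bytes += wr;
}

/* Reader side (the rare path, e.g. configfs stats) pays the cost of
 * summing every slot. */
static uint64_t sum_total_cmds(void)
{
    uint64_t sum = 0;
    for (int i = 0; i < NR_SLOTS; i++)
        sum += slots[i].total_cmds;
    return sum;
}
```

The trade is classic per-CPU accounting: fast, contention-free updates on the I/O hot path in exchange for a slower aggregate read.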
+32
include/ufs/ufs.h
··· 182 182 QUERY_ATTR_IDN_CURR_WB_BUFF_SIZE = 0x1F, 183 183 QUERY_ATTR_IDN_TIMESTAMP = 0x30, 184 184 QUERY_ATTR_IDN_DEV_LVL_EXCEPTION_ID = 0x34, 185 + QUERY_ATTR_IDN_WB_BUF_RESIZE_HINT = 0x3C, 186 + QUERY_ATTR_IDN_WB_BUF_RESIZE_EN = 0x3D, 187 + QUERY_ATTR_IDN_WB_BUF_RESIZE_STATUS = 0x3E, 185 188 }; 186 189 187 190 /* Descriptor idn for Query requests */ ··· 293 290 DEVICE_DESC_PARAM_PRDCT_REV = 0x2A, 294 291 DEVICE_DESC_PARAM_HPB_VER = 0x40, 295 292 DEVICE_DESC_PARAM_HPB_CONTROL = 0x42, 293 + DEVICE_DESC_PARAM_EXT_WB_SUP = 0x4D, 296 294 DEVICE_DESC_PARAM_EXT_UFS_FEATURE_SUP = 0x4F, 297 295 DEVICE_DESC_PARAM_WB_PRESRV_USRSPC_EN = 0x53, 298 296 DEVICE_DESC_PARAM_WB_TYPE = 0x54, ··· 388 384 UFSHCD_AMP = 3, 389 385 }; 390 386 387 + /* Possible values for wExtendedWriteBoosterSupport */ 388 + enum { 389 + UFS_DEV_WB_BUF_RESIZE = BIT(0), 390 + }; 391 + 391 392 /* Possible values for dExtendedUFSFeaturesSupport */ 392 393 enum { 393 394 UFS_DEV_HIGH_TEMP_NOTIF = BIT(4), ··· 464 455 REF_CLK_FREQ_38_4_MHZ = 2, 465 456 REF_CLK_FREQ_52_MHZ = 3, 466 457 REF_CLK_FREQ_INVAL = -1, 458 + }; 459 + 460 + /* bWriteBoosterBufferResizeEn attribute */ 461 + enum wb_resize_en { 462 + WB_RESIZE_EN_IDLE = 0, 463 + WB_RESIZE_EN_DECREASE = 1, 464 + WB_RESIZE_EN_INCREASE = 2, 465 + }; 466 + 467 + /* bWriteBoosterBufferResizeHint attribute */ 468 + enum wb_resize_hint { 469 + WB_RESIZE_HINT_KEEP = 0, 470 + WB_RESIZE_HINT_DECREASE = 1, 471 + WB_RESIZE_HINT_INCREASE = 2, 472 + }; 473 + 474 + /* bWriteBoosterBufferResizeStatus attribute */ 475 + enum wb_resize_status { 476 + WB_RESIZE_STATUS_IDLE = 0, 477 + WB_RESIZE_STATUS_IN_PROGRESS = 1, 478 + WB_RESIZE_STATUS_COMPLETE_SUCCESS = 2, 479 + WB_RESIZE_STATUS_GENERAL_FAILURE = 3, 467 480 }; 468 481 469 482 /* Query response result code */ ··· 612 581 bool wb_buf_flush_enabled; 613 582 u8 wb_dedicated_lu; 614 583 u8 wb_buffer_type; 584 + u16 ext_wb_sup; 615 585 616 586 bool b_rpm_dev_flush_capable; 617 587 u8 b_presrv_uspc_en;
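The ufs.h hunk gates the new WriteBooster resize attributes on `wExtendedWriteBoosterSupport`: `UFS_DEV_WB_BUF_RESIZE` is bit 0 of the `ext_wb_sup` word read from the device descriptor. A small sketch of the support check and the hint-to-string mapping the sysfs files would expose (helper names below are hypothetical; the enum values match the hunk):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BIT(n) (1u << (n))
#define UFS_DEV_WB_BUF_RESIZE BIT(0)   /* wExtendedWriteBoosterSupport bit 0 */

enum wb_resize_hint {
    WB_RESIZE_HINT_KEEP = 0,
    WB_RESIZE_HINT_DECREASE = 1,
    WB_RESIZE_HINT_INCREASE = 2,
};

static bool wb_resize_supported(uint16_t ext_wb_sup)
{
    return ext_wb_sup & UFS_DEV_WB_BUF_RESIZE;
}

/* Maps bWriteBoosterBufferResizeHint to the strings the sysfs ABI
 * documents for wb_resize_hint. */
static const char *wb_resize_hint_name(enum wb_resize_hint h)
{
    switch (h) {
    case WB_RESIZE_HINT_KEEP:     return "keep";
    case WB_RESIZE_HINT_DECREASE: return "decrease";
    case WB_RESIZE_HINT_INCREASE: return "increase";
    }
    return "unknown";
}
```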
+3 -5
include/ufs/ufshcd.h
··· 501 501 502 502 /** 503 503 * struct ufs_stats - keeps usage/err statistics 504 - * @last_intr_status: record the last interrupt status. 505 - * @last_intr_ts: record the last interrupt timestamp. 506 504 * @hibern8_exit_cnt: Counter to keep track of number of exits, 507 505 * reset this after link-startup. 508 506 * @last_hibern8_exit_tstamp: Set time after the hibern8 exit. ··· 508 510 * @event: array with event history. 509 511 */ 510 512 struct ufs_stats { 511 - u32 last_intr_status; 512 - u64 last_intr_ts; 513 - 514 513 u32 hibern8_exit_cnt; 515 514 u64 last_hibern8_exit_tstamp; 516 515 struct ufs_event_hist event[UFS_EVT_CNT]; ··· 954 959 * ufshcd_resume_complete() 955 960 * @mcq_sup: is mcq supported by UFSHC 956 961 * @mcq_enabled: is mcq ready to accept requests 962 + * @mcq_esi_enabled: is mcq ESI configured 957 963 * @res: array of resource info of MCQ registers 958 964 * @mcq_base: Multi circular queue registers base address 959 965 * @uhq: array of supported hardware queues ··· 1126 1130 bool mcq_sup; 1127 1131 bool lsdb_sup; 1128 1132 bool mcq_enabled; 1133 + bool mcq_esi_enabled; 1129 1134 struct ufshcd_res_info res[RES_MAX]; 1130 1135 void __iomem *mcq_base; 1131 1136 struct ufs_hw_queue *uhq; ··· 1473 1476 struct scatterlist *sg_list, enum dma_data_direction dir); 1474 1477 int ufshcd_wb_toggle(struct ufs_hba *hba, bool enable); 1475 1478 int ufshcd_wb_toggle_buf_flush(struct ufs_hba *hba, bool enable); 1479 + int ufshcd_wb_set_resize_en(struct ufs_hba *hba, enum wb_resize_en en_mode); 1476 1480 int ufshcd_suspend_prepare(struct device *dev); 1477 1481 int __ufshcd_suspend_prepare(struct device *dev, bool rpm_ok_for_spm); 1478 1482 void ufshcd_resume_complete(struct device *dev);