
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
"Updates to the usual drivers (ufs, pm80xx, libata-scsi, smartpqi,
lpfc, qla2xxx).

We have a couple of major core changes impacting other systems:

- Command Duration Limits, which spills into block and ATA

- block level Persistent Reservation Operations, which touches block,
nvme, target and dm

Both of these are added with merge commits containing a cover letter
explaining what's going on"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (187 commits)
scsi: core: Improve warning message in scsi_device_block()
scsi: core: Replace scsi_target_block() with scsi_block_targets()
scsi: core: Don't wait for quiesce in scsi_device_block()
scsi: core: Don't wait for quiesce in scsi_stop_queue()
scsi: core: Merge scsi_internal_device_block() and device_block()
scsi: sg: Increase number of devices
scsi: bsg: Increase number of devices
scsi: qla2xxx: Remove unused nvme_ls_waitq wait queue
scsi: ufs: ufs-pci: Add support for Intel Arrow Lake
scsi: sd: sd_zbc: Use PAGE_SECTORS_SHIFT
scsi: ufs: wb: Add explicit flush_threshold sysfs attribute
scsi: ufs: ufs-qcom: Switch to the new ICE API
scsi: ufs: dt-bindings: qcom: Add ICE phandle
scsi: ufs: ufs-mediatek: Set UFSHCD_QUIRK_MCQ_BROKEN_RTC quirk
scsi: ufs: ufs-mediatek: Set UFSHCD_QUIRK_MCQ_BROKEN_INTR quirk
scsi: ufs: core: Add host quirk UFSHCD_QUIRK_MCQ_BROKEN_RTC
scsi: ufs: core: Add host quirk UFSHCD_QUIRK_MCQ_BROKEN_INTR
scsi: ufs: core: Remove dedicated hwq for dev command
scsi: ufs: core: mcq: Fix the incorrect OCS value for the device command
scsi: ufs: dt-bindings: samsung,exynos: Drop unneeded quotes
...

+4834 -2241
+22
Documentation/ABI/testing/sysfs-block-device
···
95 95    This file does not exist if the HBA driver does not implement
96 96    support for the SATA NCQ priority feature, regardless of the
97 97    device support for this feature.
   98 +
   99 +
  100 +  What:		/sys/block/*/device/cdl_supported
  101 +  Date:		May, 2023
  102 +  KernelVersion:	v6.5
  103 +  Contact:	linux-scsi@vger.kernel.org
  104 +  Description:
  105 +  		(RO) Indicates if the device supports the command duration
  106 +  		limits feature found in some ATA and SCSI devices.
  107 +
  108 +
  109 +  What:		/sys/block/*/device/cdl_enable
  110 +  Date:		May, 2023
  111 +  KernelVersion:	v6.5
  112 +  Contact:	linux-scsi@vger.kernel.org
  113 +  Description:
  114 +  		(RW) For a device supporting the command duration limits
  115 +  		feature, write to the file to turn on or off the feature.
  116 +  		By default this feature is turned off.
  117 +  		Writing "1" to this file enables the use of command duration
  118 +  		limits for read and write commands in the kernel and turns on
  119 +  		the feature on the device. Writing "0" disables the feature.
+11
Documentation/ABI/testing/sysfs-driver-ufs
···
1426 1426  		If flushing is enabled, the device executes the flush
1427 1427  		operation when the command queue is empty.
1428 1428
     1429 +  What:		/sys/bus/platform/drivers/ufshcd/*/wb_flush_threshold
     1430 +  What:		/sys/bus/platform/devices/*.ufs/wb_flush_threshold
     1431 +  Date:		June 2023
     1432 +  Contact:	Lu Hongfei <luhongfei@vivo.com>
     1433 +  Description:
     1434 +  		wb_flush_threshold represents the threshold for flushing WriteBooster buffer,
     1435 +  		whose value expressed in unit of 10% granularity, such as '1' representing 10%,
     1436 +  		'2' representing 20%, and so on.
     1437 +  		If avail_wb_buff < wb_flush_threshold, it indicates that WriteBooster buffer needs to
     1438 +  		be flushed, otherwise it is not necessary.
     1439 +
1429 1440  What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_version
1430 1441  What:		/sys/bus/platform/devices/*.ufs/device_descriptor/hpb_version
1431 1442  Date:		June 2021
+26
Documentation/devicetree/bindings/ufs/qcom,ufs.yaml
···
26 26             - qcom,msm8994-ufshc
27 27             - qcom,msm8996-ufshc
28 28             - qcom,msm8998-ufshc
   29 +           - qcom,sa8775p-ufshc
29 30             - qcom,sc8280xp-ufshc
30 31             - qcom,sdm845-ufshc
31 32             - qcom,sm6350-ufshc
···
71 70    power-domains:
72 71      maxItems: 1
73 72
   73 +  qcom,ice:
   74 +    $ref: /schemas/types.yaml#/definitions/phandle
   75 +    description: phandle to the Inline Crypto Engine node
   76 +
74 77    reg:
75 78      minItems: 1
76 79      maxItems: 2
···
110 105          contains:
111 106            enum:
112 107              - qcom,msm8998-ufshc
    108 +            - qcom,sa8775p-ufshc
113 109              - qcom,sc8280xp-ufshc
114 110              - qcom,sm8250-ufshc
115 111              - qcom,sm8350-ufshc
···
192 186            maxItems: 1
193 187
194 188    # TODO: define clock bindings for qcom,msm8994-ufshc
    189 +
    190 +  - if:
    191 +      properties:
    192 +        qcom,ice:
    193 +          maxItems: 1
    194 +    then:
    195 +      properties:
    196 +        reg:
    197 +          maxItems: 1
    198 +        clocks:
    199 +          minItems: 8
    200 +          maxItems: 8
    201 +    else:
    202 +      properties:
    203 +        reg:
    204 +          minItems: 2
    205 +          maxItems: 2
    206 +        clocks:
    207 +          minItems: 9
    208 +          maxItems: 11
195 209
196 210  unevaluatedProperties: false
197 211
+1 -1
Documentation/devicetree/bindings/ufs/samsung,exynos-ufs.yaml
···
54 54        const: ufs-phy
55 55
56 56    samsung,sysreg:
57    -    $ref: '/schemas/types.yaml#/definitions/phandle-array'
   57 +    $ref: /schemas/types.yaml#/definitions/phandle-array
58 58      description: Should be phandle/offset pair. The phandle to the syscon node
59 59                   which indicates the FSYSx sysreg interface and the offset of
60 60                   the control register for UFS io coherency setting.
+1
Documentation/scsi/arcmsr_spec.rst
···
  1 +  ===================
1 2    ARECA FIRMWARE SPEC
2 3    ===================
3 4
+7 -10
Documentation/scsi/dc395x.rst
···
1 1    .. SPDX-License-Identifier: GPL-2.0
2 2
3   -  ======================================
4   -  README file for the dc395x SCSI driver
5   -  ======================================
  3 +  ==================
  4 +  dc395x SCSI driver
  5 +  ==================
6 6
7 7    Status
8 8    ------
···
11 11   great degree and caution should be exercised if you want to attempt
12 12   to use this driver with hard disks.
13 13
14    -  This is a 2.5 only driver. For a 2.4 driver please see the original
15    -  driver (which this driver started from) at
16    -  http://www.garloff.de/kurt/linux/dc395/
17    -
18    -  Problems, questions and patches should be submitted to the mailing
19    -  list. Details on the list, including archives, are available at
20    -  http://lists.twibble.org/mailman/listinfo/dc395x/
   14 +  This driver is evolved from `the original 2.4 driver
   15 +  <https://web.archive.org/web/20140129181343/http://www.garloff.de/kurt/linux/dc395/>`_.
   16 +  Problems, questions and patches should be submitted to the `Linux SCSI
   17 +  mailing list <linux-scsi@vger.kernel.org>`_.
21 18
22 19   Parameters
23 20   ----------
+3 -3
Documentation/scsi/g_NCR5380.rst
···
1 1    .. SPDX-License-Identifier: GPL-2.0
2 2    .. include:: <isonum.txt>
3 3
4   -  ==========================================
5   -  README file for the Linux g_NCR5380 driver
6   -  ==========================================
  4 +  ================
  5 +  g_NCR5380 driver
  6 +  ================
7 7
8 8    Copyright |copy| 1993 Drew Eckhard
9 9
+32 -5
Documentation/scsi/index.rst
···
7 7    .. toctree::
8 8       :maxdepth: 1
9 9
  10 +  Introduction
  11 +  ============
  12 +
  13 +  .. toctree::
  14 +     :maxdepth: 1
  15 +
  16 +     scsi
  17 +
  18 +  SCSI driver APIs
  19 +  ================
  20 +
  21 +  .. toctree::
  22 +     :maxdepth: 1
  23 +
  24 +     scsi_mid_low_api
  25 +     scsi_eh
  26 +
  27 +  SCSI driver parameters
  28 +  ======================
  29 +
  30 +  .. toctree::
  31 +     :maxdepth: 1
  32 +
  33 +     scsi-parameters
  34 +     link_power_management_policy
  35 +
  36 +  SCSI host adapter drivers
  37 +  =========================
  38 +
  39 +  .. toctree::
  40 +     :maxdepth: 1
  41 +
10 42      53c700
11 43      aacraid
12 44      advansys
···
57 25      hpsa
58 26      hptiop
59 27      libsas
60    -    link_power_management_policy
61 28      lpfc
62 29      megaraid
63 30      ncr53c8xx
···
64 33      ppa
65 34      qlogicfas
66 35      scsi-changer
67    -    scsi_eh
68 36      scsi_fc_transport
69 37      scsi-generic
70    -    scsi_mid_low_api
71    -    scsi-parameters
72    -    scsi
73 38      sd-parameters
74 39      smartpqi
75 40      st
+3 -3
Documentation/scsi/megaraid.rst
···
1 1    .. SPDX-License-Identifier: GPL-2.0
2 2
3   -  ==========================
4   -  Notes on Management Module
5   -  ==========================
  3 +  =================================
  4 +  Megaraid Common Management Module
  5 +  =================================
6 6
7 7    Overview
8 8    --------
+3 -3
Documentation/scsi/ncr53c8xx.rst
···
1 1    .. SPDX-License-Identifier: GPL-2.0
2 2
3   -  =================================================
4   -  The Linux NCR53C8XX/SYM53C8XX drivers README file
5   -  =================================================
  3 +  ===========================
  4 +  NCR53C8XX/SYM53C8XX drivers
  5 +  ===========================
6 6
7 7    Written by Gerard Roudier <groudier@free.fr>
8 8
+3 -3
Documentation/scsi/scsi-changer.rst
···
1 1    .. SPDX-License-Identifier: GPL-2.0
2 2
3   -  ========================================
4   -  README for the SCSI media changer driver
5   -  ========================================
  3 +  =========================
  4 +  SCSI media changer driver
  5 +  =========================
6 6
7 7    This is a driver for SCSI Medium Changer devices, which are listed
8 8    with "Type: Medium Changer" in /proc/scsi/scsi.
+21 -32
Documentation/scsi/scsi-generic.rst
···
1 1    .. SPDX-License-Identifier: GPL-2.0
2 2
3   -  =======================================
4   -  Notes on Linux SCSI Generic (sg) driver
5   -  =======================================
  3 +  ========================
  4 +  SCSI Generic (sg) driver
  5 +  ========================
6 6
7 7    20020126
8 8
9 9    Introduction
10 10   ============
11 11   The SCSI Generic driver (sg) is one of the four "high level" SCSI device
12    -  drivers along with sd, st and sr (disk, tape and CDROM respectively). Sg
   12 +  drivers along with sd, st and sr (disk, tape and CD-ROM respectively). Sg
13 13   is more generalized (but lower level) than its siblings and tends to be
14 14   used on SCSI devices that don't fit into the already serviced categories.
15 15   Thus sg is used for scanners, CD writers and reading audio CDs digitally
···
22 22
23 23   Major versions of the sg driver
24 24   ===============================
25    -  There are three major versions of sg found in the linux kernel (lk):
   25 +  There are three major versions of sg found in the Linux kernel (lk):
26 26         - sg version 1 (original) from 1992 to early 1999 (lk 2.2.5) .
27 27           It is based in the sg_header interface structure.
28 28         - sg version 2 from lk 2.2.6 in the 2.2 series. It is based on
···
33 33
34 34   Sg driver documentation
35 35   =======================
36    -  The most recent documentation of the sg driver is kept at the Linux
37    -  Documentation Project's (LDP) site:
   36 +  The most recent documentation of the sg driver is kept at
38 37
39    -    - http://www.tldp.org/HOWTO/SCSI-Generic-HOWTO
   38 +    - https://sg.danny.cz/sg/
40 39
41 40   This describes the sg version 3 driver found in the lk 2.4 series.
42 41
43    -  The LDP renders documents in single and multiple page HTML, postscript
44    -  and pdf. This document can also be found at:
   42 +  Documentation (large version) for the version 2 sg driver found in the
   43 +  lk 2.2 series can be found at
45 44
46    -    - http://sg.danny.cz/sg/p/sg_v3_ho.html
47    -
48    -  Documentation for the version 2 sg driver found in the lk 2.2 series can
49    -  be found at http://sg.danny.cz/sg/. A larger version
50    -  is at: http://sg.danny.cz/sg/p/scsi-generic_long.txt.
   45 +    - https://sg.danny.cz/sg/p/scsi-generic_long.txt.
51 46
52 47   The original documentation for the sg driver (prior to lk 2.2.6) can be
53    -  found at http://www.torque.net/sg/p/original/SCSI-Programming-HOWTO.txt
54    -  and in the LDP archives.
   48 +  found in the LDP archives at
55 49
56    -  A changelog with brief notes can be found in the
57    -  /usr/src/linux/include/scsi/sg.h file. Note that the glibc maintainers copy
58    -  and edit this file (removing its changelog for example) before placing it
59    -  in /usr/include/scsi/sg.h . Driver debugging information and other notes
60    -  can be found at the top of the /usr/src/linux/drivers/scsi/sg.c file.
   50 +    - https://tldp.org/HOWTO/archived/SCSI-Programming-HOWTO/index.html
61 51
62 52   A more general description of the Linux SCSI subsystem of which sg is a
63    -  part can be found at http://www.tldp.org/HOWTO/SCSI-2.4-HOWTO .
   53 +  part can be found at https://www.tldp.org/HOWTO/SCSI-2.4-HOWTO .
64 54
65 55
66 56   Example code and utilities
···
63 73      and earlier
64 74      ========= ==========================================================
65 75
66    -  Both packages will work in the lk 2.4 series however sg3_utils offers more
67    -  capabilities. They can be found at: http://sg.danny.cz/sg/sg3_utils.html and
   76 +  Both packages will work in the lk 2.4 series. However, sg3_utils offers more
   77 +  capabilities. They can be found at: https://sg.danny.cz/sg/sg3_utils.html and
68 78   freecode.com
69 79
70 80   Another approach is to look at the applications that use the sg driver.
···
73 83
74 84   Mapping of Linux kernel versions to sg driver versions
75 85   ======================================================
76    -  Here is a list of linux kernels in the 2.4 series that had new version
   86 +  Here is a list of Linux kernels in the 2.4 series that had the new version
77 87   of the sg driver:
78 88
79 89       - lk 2.4.0 : sg version 3.1.17
···
82 92       - lk 2.4.17 : sg version 3.1.22
83 93
84 94   .. [#] There were 3 changes to sg version 3.1.20 by third parties in the
85    -        next six linux kernel versions.
   95 +        next six Linux kernel versions.
86 96
87    -  For reference here is a list of linux kernels in the 2.2 series that had
88    -  new version of the sg driver:
   97 +  For reference here is a list of Linux kernels in the 2.2 series that had
   98 +  the new version of the sg driver:
89 99
90 100      - lk 2.2.0 : original sg version [with no version number]
91 101      - lk 2.2.6 : sg version 2.1.31
···
96 106      - lk 2.2.17 : sg version 2.1.39
97 107      - lk 2.2.20 : sg version 2.1.40
98 108
99     -  The lk 2.5 development series has recently commenced and it currently
100    -  contains sg version 3.5.23 which is functionally equivalent to sg
101    -  version 3.1.22 found in lk 2.4.17.
    109 +  The lk 2.5 development series currently contains sg version 3.5.23
    110 +  which is functionally equivalent to sg version 3.1.22 found in lk 2.4.17.
102 111
103 112
104 113  Douglas Gilbert
+10 -13
Documentation/scsi/scsi.rst
···
6 6
7 7    The Linux Documentation Project (LDP) maintains a document describing
8 8    the SCSI subsystem in the Linux kernel (lk) 2.4 series. See:
9   -  http://www.tldp.org/HOWTO/SCSI-2.4-HOWTO . The LDP has single
  9 +  https://www.tldp.org/HOWTO/SCSI-2.4-HOWTO . The LDP has single
10 10   and multiple page HTML renderings as well as postscript and pdf.
11    -  It can also be found at:
12    -  http://web.archive.org/web/%2E/http://www.torque.net/scsi/SCSI-2.4-HOWTO
13 11
14 12   Notes on using modules in the SCSI subsystem
15 13   ============================================
16    -  The scsi support in the linux kernel can be modularized in a number of
   14 +  The SCSI support in the Linux kernel can be modularized in a number of
17 15   different ways depending upon the needs of the end user. To understand
18 16   your options, we should first define a few terms.
19 17
20    -  The scsi-core (also known as the "mid level") contains the core of scsi
21    -  support. Without it you can do nothing with any of the other scsi drivers.
22    -  The scsi core support can be a module (scsi_mod.o), or it can be built into
23    -  the kernel. If the core is a module, it must be the first scsi module
   18 +  The scsi-core (also known as the "mid level") contains the core of SCSI
   19 +  support. Without it you can do nothing with any of the other SCSI drivers.
   20 +  The SCSI core support can be a module (scsi_mod.o), or it can be built into
   21 +  the kernel. If the core is a module, it must be the first SCSI module
24 22   loaded, and if you unload the modules, it will have to be the last one
25    -  unloaded. In practice the modprobe and rmmod commands (and "autoclean")
   23 +  unloaded. In practice the modprobe and rmmod commands
26 24   will enforce the correct ordering of loading and unloading modules in
27 25   the SCSI subsystem.
28 26
29 27   The individual upper and lower level drivers can be loaded in any order
30    -  once the scsi core is present in the kernel (either compiled in or loaded
31    -  as a module). The disk driver (sd_mod.o), cdrom driver (sr_mod.o),
32    -  tape driver [1]_ (st.o) and scsi generics driver (sg.o) represent the upper
   28 +  once the SCSI core is present in the kernel (either compiled in or loaded
   29 +  as a module). The disk driver (sd_mod.o), CD-ROM driver (sr_mod.o),
   30 +  tape driver [1]_ (st.o) and SCSI generics driver (sg.o) represent the upper
33 31   level drivers to support the various assorted devices which can be
34 32   controlled. You can for example load the tape driver to use the tape drive,
35 33   and then unload it once you have no further need for the driver (and release
···
42 44
43 45   .. [1] There is a variant of the st driver for controlling OnStream tape
44 46          devices. Its module name is osst.o .
45    -
+4 -4
Documentation/scsi/scsi_fc_transport.rst
···
1 1    .. SPDX-License-Identifier: GPL-2.0
2 2
3   -  ================
4   -  SCSI FC Tansport
5   -  ================
  3 +  =================
  4 +  SCSI FC Transport
  5 +  =================
6 6
7 7    Date:  11/18/2008
8 8
···
556 556
557 557
558 558  James Smart
559     -  james.smart@emulex.com
    559 +  james.smart@broadcom.com
560 560
+3 -3
Documentation/scsi/sym53c8xx_2.rst
···
1 1    .. SPDX-License-Identifier: GPL-2.0
2 2
3   -  =========================================
4   -  The Linux SYM-2 driver documentation file
5   -  =========================================
  3 +  ============
  4 +  SYM-2 driver
  5 +  ============
6 6
7 7    Written by Gerard Roudier <groudier@free.fr>
8 8
+10 -3
MAINTAINERS
···
5732 5732  M:	Oliver Neukum <oliver@neukum.org>
5733 5733  M:	Ali Akcaagac <aliakc@web.de>
5734 5734  M:	Jamie Lenehan <lenehan@twibble.org>
5735      -  L:	dc395x@twibble.org
5736 5735  S:	Maintained
5737      -  W:	http://twibble.org/dist/dc395x/
5738      -  W:	http://lists.twibble.org/mailman/listinfo/dc395x/
5739 5736  F:	Documentation/scsi/dc395x.rst
5740 5737  F:	drivers/scsi/dc395x.*
5741 5738
···
18914 18917  F:	include/linux/wait.h
18915 18918  F:	include/uapi/linux/sched.h
18916 18919  F:	kernel/sched/
      18920 +
      18921 +  SCSI LIBSAS SUBSYSTEM
      18922 +  R:	John Garry <john.g.garry@oracle.com>
      18923 +  R:	Jason Yan <yanaijie@huawei.com>
      18924 +  L:	linux-scsi@vger.kernel.org
      18925 +  S:	Supported
      18926 +  F:	drivers/scsi/libsas/
      18927 +  F:	include/scsi/libsas.h
      18928 +  F:	include/scsi/sas_ata.h
      18929 +  F:	Documentation/scsi/libsas.rst
18917 18930
18918 18931  SCSI RDMA PROTOCOL (SRP) INITIATOR
18919 18932  M:	Bart Van Assche <bvanassche@acm.org>
+4 -4
block/bfq-iosched.c
···
5528 5528  		bfqq->new_ioprio_class = task_nice_ioclass(tsk);
5529 5529  		break;
5530 5530  	case IOPRIO_CLASS_RT:
5531      -		bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
     5531 +		bfqq->new_ioprio = IOPRIO_PRIO_LEVEL(bic->ioprio);
5532 5532  		bfqq->new_ioprio_class = IOPRIO_CLASS_RT;
5533 5533  		break;
5534 5534  	case IOPRIO_CLASS_BE:
5535      -		bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
     5535 +		bfqq->new_ioprio = IOPRIO_PRIO_LEVEL(bic->ioprio);
5536 5536  		bfqq->new_ioprio_class = IOPRIO_CLASS_BE;
5537 5537  		break;
5538 5538  	case IOPRIO_CLASS_IDLE:
5539 5539  		bfqq->new_ioprio_class = IOPRIO_CLASS_IDLE;
5540      -		bfqq->new_ioprio = 7;
     5540 +		bfqq->new_ioprio = IOPRIO_NR_LEVELS - 1;
5541 5541  		break;
5542 5542  	}
5543 5543
···
5834 5834  					      struct bfq_io_cq *bic,
5835 5835  					      bool respawn)
5836 5836  {
5837      -	const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
     5837 +	const int ioprio = IOPRIO_PRIO_LEVEL(bic->ioprio);
5838 5838  	const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
5839 5839  	struct bfq_queue **async_bfqq = NULL;
5840 5840  	struct bfq_queue *bfqq;
+4 -1
block/blk-core.c
···
155 155  	[BLK_STS_NOSPC]		= { -ENOSPC,	"critical space allocation" },
156 156  	[BLK_STS_TRANSPORT]	= { -ENOLINK,	"recoverable transport" },
157 157  	[BLK_STS_TARGET]	= { -EREMOTEIO,	"critical target" },
158     -	[BLK_STS_NEXUS]		= { -EBADE,	"critical nexus" },
    158 +	[BLK_STS_RESV_CONFLICT]	= { -EBADE,	"reservation conflict" },
159 159  	[BLK_STS_MEDIUM]	= { -ENODATA,	"critical medium" },
160 160  	[BLK_STS_PROTECTION]	= { -EILSEQ,	"protection" },
161 161  	[BLK_STS_RESOURCE]	= { -ENOMEM,	"kernel resource" },
···
169 169  	/* zone device specific errors */
170 170  	[BLK_STS_ZONE_OPEN_RESOURCE]	= { -ETOOMANYREFS, "open zones exceeded" },
171 171  	[BLK_STS_ZONE_ACTIVE_RESOURCE]	= { -EOVERFLOW, "active zones exceeded" },
    172 +
    173 +	/* Command duration limit device-side timeout */
    174 +	[BLK_STS_DURATION_LIMIT]	= { -ETIME, "duration limit exceeded" },
172 175
173 176  	/* everything else not covered above: */
174 177  	[BLK_STS_IOERR]		= { -EIO,	"I/O" },
+1 -1
block/bsg.c
···
36 36  }
37 37
38 38  #define BSG_DEFAULT_CMDS	64
39    -  #define BSG_MAX_DEVS		32768
   39 +  #define BSG_MAX_DEVS		(1 << MINORBITS)
40 40
41 41  static DEFINE_IDA(bsg_minor_ida);
42 42  static const struct class bsg_class;
+4 -3
block/ioprio.c
···
33 33  int ioprio_check_cap(int ioprio)
34 34  {
35 35  	int class = IOPRIO_PRIO_CLASS(ioprio);
36    -	int data = IOPRIO_PRIO_DATA(ioprio);
   36 +	int level = IOPRIO_PRIO_LEVEL(ioprio);
37 37
38 38  	switch (class) {
39 39  	case IOPRIO_CLASS_RT:
···
49 49  		fallthrough;
50 50  		/* rt has prio field too */
51 51  	case IOPRIO_CLASS_BE:
52    -		if (data >= IOPRIO_NR_LEVELS || data < 0)
   52 +		if (level >= IOPRIO_NR_LEVELS)
53 53  			return -EINVAL;
54 54  		break;
55 55  	case IOPRIO_CLASS_IDLE:
56 56  		break;
57 57  	case IOPRIO_CLASS_NONE:
58    -		if (level)
59 59  			return -EINVAL;
60 60  		break;
   61 +	case IOPRIO_CLASS_INVALID:
61 62  	default:
62 63  		return -EINVAL;
63 64  	}
+200 -4
drivers/ata/libata-core.c
···
665 665  	return block;
666 666  }
667 667
    668 +  /*
    669 +   * Set a taskfile command duration limit index.
    670 +   */
    671 +  static inline void ata_set_tf_cdl(struct ata_queued_cmd *qc, int cdl)
    672 +  {
    673 +  	struct ata_taskfile *tf = &qc->tf;
    674 +
    675 +  	if (tf->protocol == ATA_PROT_NCQ)
    676 +  		tf->auxiliary |= cdl;
    677 +  	else
    678 +  		tf->feature |= cdl;
    679 +
    680 +  	/*
    681 +  	 * Mark this command as having a CDL and request the result
    682 +  	 * task file so that we can inspect the sense data available
    683 +  	 * bit on completion.
    684 +  	 */
    685 +  	qc->flags |= ATA_QCFLAG_HAS_CDL | ATA_QCFLAG_RESULT_TF;
    686 +  }
    687 +
668 688  /**
669 689   *	ata_build_rw_tf - Build ATA taskfile for given read/write request
670 690   *	@qc: Metadata associated with the taskfile to build
671 691   *	@block: Block address
672 692   *	@n_block: Number of blocks
673 693   *	@tf_flags: RW/FUA etc...
    694 +   *	@cdl: Command duration limit index
674 695   *	@class: IO priority class
675 696   *
676 697   *	LOCKING:
···
706 685   *	-EINVAL if the request is invalid.
707 686   */
708 687  int ata_build_rw_tf(struct ata_queued_cmd *qc, u64 block, u32 n_block,
709     -		    unsigned int tf_flags, int class)
    688 +		    unsigned int tf_flags, int cdl, int class)
710 689  {
711 690  	struct ata_taskfile *tf = &qc->tf;
712 691  	struct ata_device *dev = qc->dev;
···
745 724  		if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED &&
746 725  		    class == IOPRIO_CLASS_RT)
747 726  			tf->hob_nsect |= ATA_PRIO_HIGH << ATA_SHIFT_PRIO;
    727 +
    728 +		if ((dev->flags & ATA_DFLAG_CDL_ENABLED) && cdl)
    729 +			ata_set_tf_cdl(qc, cdl);
    730 +
748 731  	} else if (dev->flags & ATA_DFLAG_LBA) {
749 732  		tf->flags |= ATA_TFLAG_LBA;
750 733
751     -		/* We need LBA48 for FUA writes */
752     -		if (!(tf->flags & ATA_TFLAG_FUA) && lba_28_ok(block, n_block)) {
    734 +		if ((dev->flags & ATA_DFLAG_CDL_ENABLED) && cdl)
    735 +			ata_set_tf_cdl(qc, cdl);
    736 +
    737 +		/* Both FUA writes and a CDL index require 48-bit commands */
    738 +		if (!(tf->flags & ATA_TFLAG_FUA) &&
    739 +		    !(qc->flags & ATA_QCFLAG_HAS_CDL) &&
    740 +		    lba_28_ok(block, n_block)) {
753 741  			/* use LBA28 */
754 742  			tf->device |= (block >> 24) & 0xf;
755 743  		} else if (lba_48_ok(block, n_block)) {
···
2397 2367  		dev->flags |= ATA_DFLAG_TRUSTED;
2398 2368  }
2399 2369
     2370 +  static void ata_dev_config_cdl(struct ata_device *dev)
     2371 +  {
     2372 +  	struct ata_port *ap = dev->link->ap;
     2373 +  	unsigned int err_mask;
     2374 +  	bool cdl_enabled;
     2375 +  	u64 val;
     2376 +
     2377 +  	if (ata_id_major_version(dev->id) < 12)
     2378 +  		goto not_supported;
     2379 +
     2380 +  	if (!ata_log_supported(dev, ATA_LOG_IDENTIFY_DEVICE) ||
     2381 +  	    !ata_identify_page_supported(dev, ATA_LOG_SUPPORTED_CAPABILITIES) ||
     2382 +  	    !ata_identify_page_supported(dev, ATA_LOG_CURRENT_SETTINGS))
     2383 +  		goto not_supported;
     2384 +
     2385 +  	err_mask = ata_read_log_page(dev, ATA_LOG_IDENTIFY_DEVICE,
     2386 +  				     ATA_LOG_SUPPORTED_CAPABILITIES,
     2387 +  				     ap->sector_buf, 1);
     2388 +  	if (err_mask)
     2389 +  		goto not_supported;
     2390 +
     2391 +  	/* Check Command Duration Limit Supported bits */
     2392 +  	val = get_unaligned_le64(&ap->sector_buf[168]);
     2393 +  	if (!(val & BIT_ULL(63)) || !(val & BIT_ULL(0)))
     2394 +  		goto not_supported;
     2395 +
     2396 +  	/* Warn the user if command duration guideline is not supported */
     2397 +  	if (!(val & BIT_ULL(1)))
     2398 +  		ata_dev_warn(dev,
     2399 +  			"Command duration guideline is not supported\n");
     2400 +
     2401 +  	/*
     2402 +  	 * We must have support for the sense data for successful NCQ commands
     2403 +  	 * log indicated by the successful NCQ command sense data supported bit.
     2404 +  	 */
     2405 +  	val = get_unaligned_le64(&ap->sector_buf[8]);
     2406 +  	if (!(val & BIT_ULL(63)) || !(val & BIT_ULL(47))) {
     2407 +  		ata_dev_warn(dev,
     2408 +  			"CDL supported but Successful NCQ Command Sense Data is not supported\n");
     2409 +  		goto not_supported;
     2410 +  	}
     2411 +
     2412 +  	/* Without NCQ autosense, the successful NCQ commands log is useless. */
     2413 +  	if (!ata_id_has_ncq_autosense(dev->id)) {
     2414 +  		ata_dev_warn(dev,
     2415 +  			"CDL supported but NCQ autosense is not supported\n");
     2416 +  		goto not_supported;
     2417 +  	}
     2418 +
     2419 +  	/*
     2420 +  	 * If CDL is marked as enabled, make sure the feature is enabled too.
     2421 +  	 * Conversely, if CDL is disabled, make sure the feature is turned off.
     2422 +  	 */
     2423 +  	err_mask = ata_read_log_page(dev, ATA_LOG_IDENTIFY_DEVICE,
     2424 +  				     ATA_LOG_CURRENT_SETTINGS,
     2425 +  				     ap->sector_buf, 1);
     2426 +  	if (err_mask)
     2427 +  		goto not_supported;
     2428 +
     2429 +  	val = get_unaligned_le64(&ap->sector_buf[8]);
     2430 +  	cdl_enabled = val & BIT_ULL(63) && val & BIT_ULL(21);
     2431 +  	if (dev->flags & ATA_DFLAG_CDL_ENABLED) {
     2432 +  		if (!cdl_enabled) {
     2433 +  			/* Enable CDL on the device */
     2434 +  			err_mask = ata_dev_set_feature(dev, SETFEATURES_CDL, 1);
     2435 +  			if (err_mask) {
     2436 +  				ata_dev_err(dev,
     2437 +  					    "Enable CDL feature failed\n");
     2438 +  				goto not_supported;
     2439 +  			}
     2440 +  		}
     2441 +  	} else {
     2442 +  		if (cdl_enabled) {
     2443 +  			/* Disable CDL on the device */
     2444 +  			err_mask = ata_dev_set_feature(dev, SETFEATURES_CDL, 0);
     2445 +  			if (err_mask) {
     2446 +  				ata_dev_err(dev,
     2447 +  					    "Disable CDL feature failed\n");
     2448 +  				goto not_supported;
     2449 +  			}
     2450 +  		}
     2451 +  	}
     2452 +
     2453 +  	/*
     2454 +  	 * While CDL itself has to be enabled using sysfs, CDL requires that
     2455 +  	 * sense data for successful NCQ commands is enabled to work properly.
     2456 +  	 * Just like ata_dev_config_sense_reporting(), enable it unconditionally
     2457 +  	 * if supported.
     2458 +  	 */
     2459 +  	if (!(val & BIT_ULL(63)) || !(val & BIT_ULL(18))) {
     2460 +  		err_mask = ata_dev_set_feature(dev,
     2461 +  					SETFEATURE_SENSE_DATA_SUCC_NCQ, 0x1);
     2462 +  		if (err_mask) {
     2463 +  			ata_dev_warn(dev,
     2464 +  				     "failed to enable Sense Data for successful NCQ commands, Emask 0x%x\n",
     2465 +  				     err_mask);
     2466 +  			goto not_supported;
     2467 +  		}
     2468 +  	}
     2469 +
     2470 +  	/*
     2471 +  	 * Allocate a buffer to handle reading the sense data for successful
     2472 +  	 * NCQ Commands log page for commands using a CDL with one of the limit
     2473 +  	 * policy set to 0xD (successful completion with sense data available
     2474 +  	 * bit set).
     2475 +  	 */
     2476 +  	if (!ap->ncq_sense_buf) {
     2477 +  		ap->ncq_sense_buf = kmalloc(ATA_LOG_SENSE_NCQ_SIZE, GFP_KERNEL);
     2478 +  		if (!ap->ncq_sense_buf)
     2479 +  			goto not_supported;
     2480 +  	}
     2481 +
     2482 +  	/*
     2483 +  	 * Command duration limits is supported: cache the CDL log page 18h
     2484 +  	 * (command duration descriptors).
     2485 +  	 */
     2486 +  	err_mask = ata_read_log_page(dev, ATA_LOG_CDL, 0, ap->sector_buf, 1);
     2487 +  	if (err_mask) {
     2488 +  		ata_dev_warn(dev, "Read Command Duration Limits log failed\n");
     2489 +  		goto not_supported;
     2490 +  	}
     2491 +
     2492 +  	memcpy(dev->cdl, ap->sector_buf, ATA_LOG_CDL_SIZE);
     2493 +  	dev->flags |= ATA_DFLAG_CDL;
     2494 +
     2495 +  	return;
     2496 +
     2497 +  not_supported:
     2498 +  	dev->flags &= ~(ATA_DFLAG_CDL | ATA_DFLAG_CDL_ENABLED);
     2499 +  	kfree(ap->ncq_sense_buf);
     2500 +  	ap->ncq_sense_buf = NULL;
     2501 +  }
     2502 +
2400 2503  static int ata_dev_config_lba(struct ata_device *dev)
2401 2504  {
2402 2505  	const u16 *id = dev->id;
···
2697 2534  		return;
2698 2535
2699 2536  	ata_dev_info(dev,
2700      -		     "Features:%s%s%s%s%s%s%s\n",
     2537 +		     "Features:%s%s%s%s%s%s%s%s\n",
2701 2538  		     dev->flags & ATA_DFLAG_FUA ? " FUA" : "",
2702 2539  		     dev->flags & ATA_DFLAG_TRUSTED ? " Trust" : "",
2703 2540  		     dev->flags & ATA_DFLAG_DA ? " Dev-Attention" : "",
2704 2541  		     dev->flags & ATA_DFLAG_DEVSLP ? " Dev-Sleep" : "",
2705 2542  		     dev->flags & ATA_DFLAG_NCQ_SEND_RECV ? " NCQ-sndrcv" : "",
2706 2543  		     dev->flags & ATA_DFLAG_NCQ_PRIO ? " NCQ-prio" : "",
     2544 +		     dev->flags & ATA_DFLAG_CDL ? " CDL" : "",
2707 2545  		     dev->cpr_log ? " CPR" : "");
2708 2546  }
···
2866 2702  		ata_dev_config_zac(dev);
2867 2703  		ata_dev_config_trusted(dev);
2868 2704  		ata_dev_config_cpr(dev);
     2705 +		ata_dev_config_cdl(dev);
2869 2706  		dev->cdb_len = 32;
2870 2707
2871 2708  		if (print_info)
···
4927 4762  	fill_result_tf(qc);
4928 4763
4929 4764  	trace_ata_qc_complete_done(qc);
     4765 +
     4766 +  	/*
     4767 +  	 * For CDL commands that completed without an error, check if
     4768 +  	 * we have sense data (ATA_SENSE is set). If we do, then the
     4769 +  	 * command may have been aborted by the device due to a limit
     4770 +  	 * timeout using the policy 0xD. For these commands, invoke EH
     4771 +  	 * to get the command sense data.
     4772 +  	 */
     4773 +  	if (qc->result_tf.status & ATA_SENSE &&
     4774 +  	    ((ata_is_ncq(qc->tf.protocol) &&
     4775 +  	      dev->flags & ATA_DFLAG_CDL_ENABLED) ||
     4776 +  	     (!(ata_is_ncq(qc->tf.protocol) &&
     4777 +  		ata_id_sense_reporting_enabled(dev->id))))) {
     4778 +  		/*
     4779 +  		 * Tell SCSI EH to not overwrite scmd->result even if
     4780 +  		 * this command is finished with result SAM_STAT_GOOD.
     4781 +  		 */
     4782 +  		qc->scsicmd->flags |= SCMD_FORCE_EH_SUCCESS;
     4783 +  		qc->flags |= ATA_QCFLAG_EH_SUCCESS_CMD;
     4784 +  		ehi->dev_action[dev->devno] |= ATA_EH_GET_SUCCESS_SENSE;
     4785 +
     4786 +  		/*
     4787 +  		 * set pending so that ata_qc_schedule_eh() does not
     4788 +  		 * trigger fast drain, and freeze the port.
     4789 +  		 */
     4790 +  		ap->pflags |= ATA_PFLAG_EH_PENDING;
     4791 +  		ata_qc_schedule_eh(qc);
     4792 +  		return;
     4793 +  	}
     4794 +
4930 4795  	/* Some commands need post-processing after successful
4931 4796  	 * completion.
4932 4797  	 */
···
5589 5394
5590 5395  		kfree(ap->pmp_link);
5591 5396  		kfree(ap->slave_link);
     5397 +		kfree(ap->ncq_sense_buf);
5592 5398  		kfree(ap);
5593 5399  		host->ports[i] = NULL;
5594 5400  	}
+120 -10
drivers/ata/libata-eh.c
···
1401 1401   *
1402 1402   *	LOCKING:
1403 1403   *	Kernel thread context (may sleep).
     1404 +   *
     1405 +   *	RETURNS:
     1406 +   *	true if sense data could be fetched, false otherwise.
1404 1407   */
1405      -  static void ata_eh_request_sense(struct ata_queued_cmd *qc)
     1408 +  static bool ata_eh_request_sense(struct ata_queued_cmd *qc)
1406 1409  {
1407 1410  	struct scsi_cmnd *cmd = qc->scsicmd;
1408 1411  	struct ata_device *dev = qc->dev;
···
1414 1411
1415 1412  	if (ata_port_is_frozen(qc->ap)) {
1416 1413  		ata_dev_warn(dev, "sense data available but port frozen\n");
1417      -		return;
     1414 +		return false;
1418 1415  	}
1419      -
1420      -	if (!cmd || qc->flags & ATA_QCFLAG_SENSE_VALID)
1421      -		return;
1422 1416
1423 1417  	if (!ata_id_sense_reporting_enabled(dev->id)) {
1424 1418  		ata_dev_warn(qc->dev, "sense data reporting disabled\n");
1425      -		return;
     1419 +		return false;
1426 1420  	}
1427 1421
1428 1422  	ata_tf_init(dev, &tf);
···
1432 1432  	/* Ignore err_mask; ATA_ERR might be set */
1433 1433  	if (tf.status & ATA_SENSE) {
1434 1434  		if (ata_scsi_sense_is_valid(tf.lbah, tf.lbam, tf.lbal)) {
1435      -			ata_scsi_set_sense(dev, cmd, tf.lbah, tf.lbam, tf.lbal);
     1435 +			/* Set sense without also setting scsicmd->result */
     1436 +			scsi_build_sense_buffer(dev->flags & ATA_DFLAG_D_SENSE,
     1437 +						cmd->sense_buffer, tf.lbah,
     1438 +						tf.lbam, tf.lbal);
1436 1439  			qc->flags |= ATA_QCFLAG_SENSE_VALID;
     1440 +			return true;
1437 1441  		}
1438 1442  	} else {
1439 1443  		ata_dev_warn(dev, "request sense failed stat %02x emask %x\n",
1440 1444  			     tf.status, err_mask);
1441 1445  	}
     1446 +
     1447 +	return false;
1442 1448  }
1443 1449
1444 1450  /**
···
1594 1588  		 * was not included in the NCQ command error log
1595 1589  		 * (i.e. NCQ autosense is not supported by the device).
1596 1590  		 */
1597      -		if (!(qc->flags & ATA_QCFLAG_SENSE_VALID) && (stat & ATA_SENSE))
1598      -			ata_eh_request_sense(qc);
     1591 +		if (!(qc->flags & ATA_QCFLAG_SENSE_VALID) &&
     1592 +		    (stat & ATA_SENSE) && ata_eh_request_sense(qc))
     1593 +			set_status_byte(qc->scsicmd, SAM_STAT_CHECK_CONDITION);
1599 1594  		if (err & ATA_ICRC)
1600 1595  			qc->err_mask |= AC_ERR_ATA_BUS;
1601 1596  		if (err & (ATA_UNC | ATA_AMNF))
···
1915 1908  	return qc->flags & ATA_QCFLAG_QUIET;
1916 1909  }
1917 1910
     1911 +  static int ata_eh_read_sense_success_non_ncq(struct ata_link *link)
     1912 +  {
     1913 +  	struct ata_port *ap = link->ap;
     1914 +  	struct ata_queued_cmd *qc;
     1915 +
     1916 +  	qc = __ata_qc_from_tag(ap, link->active_tag);
     1917 +  	if (!qc)
     1918 +  		return -EIO;
     1919 +
     1920 +  	if (!(qc->flags & ATA_QCFLAG_EH) ||
     1921 +  	    !(qc->flags & ATA_QCFLAG_EH_SUCCESS_CMD) ||
     1922 +  	    qc->err_mask)
     1923 +  		return -EIO;
     1924 +
     1925 +  	if (!ata_eh_request_sense(qc))
     1926 +  		return -EIO;
     1927 +
     1928 +  	/*
     1929 +  	 * If we have sense data, call scsi_check_sense() in order to set the
     1930 +  	 * correct SCSI ML byte (if any). No point in checking the return value,
     1931 +  	 * since the command has already completed successfully.
     1932 +  	 */
     1933 +  	scsi_check_sense(qc->scsicmd);
     1934 +
     1935 +  	return 0;
     1936 +  }
     1937 +
     1938 +  static void ata_eh_get_success_sense(struct ata_link *link)
     1939 +  {
     1940 +  	struct ata_eh_context *ehc = &link->eh_context;
     1941 +  	struct ata_device *dev = link->device;
     1942 +  	struct ata_port *ap = link->ap;
     1943 +  	struct ata_queued_cmd *qc;
     1944 +  	int tag, ret = 0;
     1945 +
     1946 +  	if (!(ehc->i.dev_action[dev->devno] & ATA_EH_GET_SUCCESS_SENSE))
     1947 +  		return;
     1948 +
     1949 +  	/* if frozen, we can't do much */
     1950 +  	if (ata_port_is_frozen(ap)) {
     1951 +  		ata_dev_warn(dev,
     1952 +  			"successful sense data available but port frozen\n");
     1953 +  		goto out;
     1954 +  	}
     1955 +
     1956 +  	/*
     1957 +  	 * If the link has sactive set, then we have outstanding NCQ commands
     1958 +  	 * and have to read the Successful NCQ Commands log to get the sense
     1959 +  	 * data. Otherwise, we are dealing with a non-NCQ command and use
     1960 +  	 * request sense ext command to retrieve the sense data.
     1961 +  	 */
     1962 +  	if (link->sactive)
     1963 +  		ret = ata_eh_read_sense_success_ncq_log(link);
     1964 +  	else
     1965 +  		ret = ata_eh_read_sense_success_non_ncq(link);
     1966 +  	if (ret)
     1967 +  		goto out;
     1968 +
     1969 +  	ata_eh_done(link, dev, ATA_EH_GET_SUCCESS_SENSE);
     1970 +  	return;
     1971 +
     1972 +  out:
     1973 +  	/*
     1974 +  	 * If we failed to get sense data for a successful command that ought to
     1975 +  	 * have sense data, we cannot simply return BLK_STS_OK to user space.
     1976 +  	 * This is because we can't know if the sense data that we couldn't get
     1977 +  	 * was actually "DATA CURRENTLY UNAVAILABLE". Reporting such a command
     1978 +  	 * as success to user space would result in a silent data corruption.
     1979 +  	 * Thus, add a bogus ABORTED_COMMAND sense data to such commands, such
     1980 +  	 * that SCSI will report these commands as BLK_STS_IOERR to user space.
1981 + */
1982 + ata_qc_for_each_raw(ap, qc, tag) {
1983 + if (!(qc->flags & ATA_QCFLAG_EH) ||
1984 + !(qc->flags & ATA_QCFLAG_EH_SUCCESS_CMD) ||
1985 + qc->err_mask ||
1986 + ata_dev_phys_link(qc->dev) != link)
1987 + continue;
1988 +
1989 + /* We managed to get sense for this success command, skip. */
1990 + if (qc->flags & ATA_QCFLAG_SENSE_VALID)
1991 + continue;
1992 +
1993 + /* This success command did not have any sense data, skip. */
1994 + if (!(qc->result_tf.status & ATA_SENSE))
1995 + continue;
1996 +
1997 + /* This success command had sense data, but we failed to get it. */
1998 + ata_scsi_set_sense(dev, qc->scsicmd, ABORTED_COMMAND, 0, 0);
1999 + qc->flags |= ATA_QCFLAG_SENSE_VALID;
2000 + }
2001 + ata_eh_done(link, dev, ATA_EH_GET_SUCCESS_SENSE);
2002 + }
2003 +
1918 2004 /**
1919 2005 * ata_eh_link_autopsy - analyze error and determine recovery action
1920 2006 * @link: host link to perform autopsy on
··· 2048 1948 /* analyze NCQ failure */
2049 1949 ata_eh_analyze_ncq_error(link);
2050 1950
1951 + /*
1952 + * Check if this was a successful command that simply needs sense data.
1953 + * Since the sense data is not part of the completion, we need to fetch
1954 + * it using an additional command. Since this can't be done from irq
1955 + * context, the sense data for successful commands are fetched by EH.
1956 + */ 1957 + ata_eh_get_success_sense(link); 1958 + 2051 1959 /* any real error trumps AC_ERR_OTHER */ 2052 1960 if (ehc->i.err_mask & ~AC_ERR_OTHER) 2053 1961 ehc->i.err_mask &= ~AC_ERR_OTHER; ··· 2065 1957 ata_qc_for_each_raw(ap, qc, tag) { 2066 1958 if (!(qc->flags & ATA_QCFLAG_EH) || 2067 1959 qc->flags & ATA_QCFLAG_RETRY || 1960 + qc->flags & ATA_QCFLAG_EH_SUCCESS_CMD || 2068 1961 ata_dev_phys_link(qc->dev) != link) 2069 1962 continue; 2070 1963 ··· 3932 3823 ata_eh_qc_complete(qc); 3933 3824 } 3934 3825 } else { 3935 - if (qc->flags & ATA_QCFLAG_SENSE_VALID) { 3826 + if (qc->flags & ATA_QCFLAG_SENSE_VALID || 3827 + qc->flags & ATA_QCFLAG_EH_SUCCESS_CMD) { 3936 3828 ata_eh_qc_complete(qc); 3937 3829 } else { 3938 3830 /* feed zero TF to sense generation */
+101 -2
drivers/ata/libata-sata.c
··· 11 11 #include <linux/module.h> 12 12 #include <scsi/scsi_cmnd.h> 13 13 #include <scsi/scsi_device.h> 14 + #include <scsi/scsi_eh.h> 14 15 #include <linux/libata.h> 16 + #include <asm/unaligned.h> 15 17 16 18 #include "libata.h" 17 19 #include "libata-transport.h" ··· 909 907 goto unlock; 910 908 } 911 909 912 - if (input) 910 + if (input) { 911 + if (dev->flags & ATA_DFLAG_CDL_ENABLED) { 912 + ata_dev_err(dev, 913 + "CDL must be disabled to enable NCQ priority\n"); 914 + rc = -EINVAL; 915 + goto unlock; 916 + } 913 917 dev->flags |= ATA_DFLAG_NCQ_PRIO_ENABLED; 914 - else 918 + } else { 915 919 dev->flags &= ~ATA_DFLAG_NCQ_PRIO_ENABLED; 920 + } 916 921 917 922 unlock: 918 923 spin_unlock_irq(ap->lock); ··· 1423 1414 } 1424 1415 1425 1416 /** 1417 + * ata_eh_read_sense_success_ncq_log - Read the sense data for successful 1418 + * NCQ commands log 1419 + * @link: ATA link to get sense data for 1420 + * 1421 + * Read the sense data for successful NCQ commands log page to obtain 1422 + * sense data for all NCQ commands that completed successfully with 1423 + * the sense data available bit set. 1424 + * 1425 + * LOCKING: 1426 + * Kernel thread context (may sleep). 1427 + * 1428 + * RETURNS: 1429 + * 0 on success, -errno otherwise. 
1430 + */ 1431 + int ata_eh_read_sense_success_ncq_log(struct ata_link *link) 1432 + { 1433 + struct ata_device *dev = link->device; 1434 + struct ata_port *ap = dev->link->ap; 1435 + u8 *buf = ap->ncq_sense_buf; 1436 + struct ata_queued_cmd *qc; 1437 + unsigned int err_mask, tag; 1438 + u8 *sense, sk = 0, asc = 0, ascq = 0; 1439 + u64 sense_valid, val; 1440 + int ret = 0; 1441 + 1442 + err_mask = ata_read_log_page(dev, ATA_LOG_SENSE_NCQ, 0, buf, 2); 1443 + if (err_mask) { 1444 + ata_dev_err(dev, 1445 + "Failed to read Sense Data for Successful NCQ Commands log\n"); 1446 + return -EIO; 1447 + } 1448 + 1449 + /* Check the log header */ 1450 + val = get_unaligned_le64(&buf[0]); 1451 + if ((val & 0xffff) != 1 || ((val >> 16) & 0xff) != 0x0f) { 1452 + ata_dev_err(dev, 1453 + "Invalid Sense Data for Successful NCQ Commands log\n"); 1454 + return -EIO; 1455 + } 1456 + 1457 + sense_valid = (u64)buf[8] | ((u64)buf[9] << 8) | 1458 + ((u64)buf[10] << 16) | ((u64)buf[11] << 24); 1459 + 1460 + ata_qc_for_each_raw(ap, qc, tag) { 1461 + if (!(qc->flags & ATA_QCFLAG_EH) || 1462 + !(qc->flags & ATA_QCFLAG_EH_SUCCESS_CMD) || 1463 + qc->err_mask || 1464 + ata_dev_phys_link(qc->dev) != link) 1465 + continue; 1466 + 1467 + /* 1468 + * If the command does not have any sense data, clear ATA_SENSE. 1469 + * Keep ATA_QCFLAG_EH_SUCCESS_CMD so that command is finished. 
1470 + */ 1471 + if (!(sense_valid & (1ULL << tag))) { 1472 + qc->result_tf.status &= ~ATA_SENSE; 1473 + continue; 1474 + } 1475 + 1476 + sense = &buf[32 + 24 * tag]; 1477 + sk = sense[0]; 1478 + asc = sense[1]; 1479 + ascq = sense[2]; 1480 + 1481 + if (!ata_scsi_sense_is_valid(sk, asc, ascq)) { 1482 + ret = -EIO; 1483 + continue; 1484 + } 1485 + 1486 + /* Set sense without also setting scsicmd->result */ 1487 + scsi_build_sense_buffer(dev->flags & ATA_DFLAG_D_SENSE, 1488 + qc->scsicmd->sense_buffer, sk, 1489 + asc, ascq); 1490 + qc->flags |= ATA_QCFLAG_SENSE_VALID; 1491 + 1492 + /* 1493 + * If we have sense data, call scsi_check_sense() in order to 1494 + * set the correct SCSI ML byte (if any). No point in checking 1495 + * the return value, since the command has already completed 1496 + * successfully. 1497 + */ 1498 + scsi_check_sense(qc->scsicmd); 1499 + } 1500 + 1501 + return ret; 1502 + } 1503 + EXPORT_SYMBOL_GPL(ata_eh_read_sense_success_ncq_log); 1504 + 1505 + /** 1426 1506 * ata_eh_analyze_ncq_error - analyze NCQ error 1427 1507 * @link: ATA link to analyze NCQ error for 1428 1508 * ··· 1591 1493 1592 1494 ata_qc_for_each_raw(ap, qc, tag) { 1593 1495 if (!(qc->flags & ATA_QCFLAG_EH) || 1496 + qc->flags & ATA_QCFLAG_EH_SUCCESS_CMD || 1594 1497 ata_dev_phys_link(qc->dev) != link) 1595 1498 continue; 1596 1499
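The log page decoded by `ata_eh_read_sense_success_ncq_log()` above has a fixed layout: a 16-bit revision of 0001h, the page number 0Fh in byte 2, a bitmap of tags with valid sense data in bytes 8-11, and one 24-byte entry per tag starting at offset 32 whose first three bytes are sk/asc/ascq. A hedged userspace re-implementation of just the parsing, with `parse_ncq_success_sense()`, `struct ncq_sense`, and `demo_tag2()` all being illustrative names rather than kernel API:

```c
#include <assert.h>
#include <stdint.h>

struct ncq_sense { uint8_t sk, asc, ascq; };

/* Returns 0 and fills *out when the log reports sense data for @tag;
 * -1 on a bad header or when the tag's bit is clear in the bitmap. */
static int parse_ncq_success_sense(const uint8_t *buf, unsigned int tag,
				   struct ncq_sense *out)
{
	uint64_t sense_valid;
	const uint8_t *sense;

	/* 16-bit revision must be 0001h, byte 2 the page number 0Fh */
	if (buf[0] != 1 || buf[1] != 0 || buf[2] != 0x0f)
		return -1;

	sense_valid = (uint64_t)buf[8] | ((uint64_t)buf[9] << 8) |
		      ((uint64_t)buf[10] << 16) | ((uint64_t)buf[11] << 24);
	if (!(sense_valid & (1ULL << tag)))
		return -1;

	sense = &buf[32 + 24 * tag];
	out->sk = sense[0];
	out->asc = sense[1];
	out->ascq = sense[2];
	return 0;
}

/* Build a log reporting sense data only for tag 2, then parse it. */
static int demo_tag2(void)
{
	uint8_t log[512] = { 0 };
	struct ncq_sense s = { 0 };

	log[0] = 1;			/* revision 0001h */
	log[2] = 0x0f;			/* log page number */
	log[8] = 0x04;			/* bitmap: only tag 2 valid */
	log[32 + 24 * 2 + 0] = 0x0b;	/* sense key ABORTED COMMAND */

	if (parse_ncq_success_sense(log, 2, &s))
		return -1;
	if (parse_ncq_success_sense(log, 3, &s) != -1)
		return -2;
	return s.sk;
}
```

Tags are implicitly bounded at 32 by the NCQ tag range, which is why a 4-byte bitmap suffices in the kernel code as well.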
+318 -65
drivers/ata/libata-scsi.c
··· 37 37 #include "libata.h" 38 38 #include "libata-transport.h" 39 39 40 - #define ATA_SCSI_RBUF_SIZE 576 40 + #define ATA_SCSI_RBUF_SIZE 2048 41 41 42 42 static DEFINE_SPINLOCK(ata_scsi_rbuf_lock); 43 43 static u8 ata_scsi_rbuf[ATA_SCSI_RBUF_SIZE]; ··· 47 47 static struct ata_device *__ata_scsi_find_dev(struct ata_port *ap, 48 48 const struct scsi_device *scsidev); 49 49 50 - #define RW_RECOVERY_MPAGE 0x1 51 - #define RW_RECOVERY_MPAGE_LEN 12 52 - #define CACHE_MPAGE 0x8 53 - #define CACHE_MPAGE_LEN 20 54 - #define CONTROL_MPAGE 0xa 55 - #define CONTROL_MPAGE_LEN 12 56 - #define ALL_MPAGES 0x3f 57 - #define ALL_SUB_MPAGES 0xff 58 - 50 + #define RW_RECOVERY_MPAGE 0x1 51 + #define RW_RECOVERY_MPAGE_LEN 12 52 + #define CACHE_MPAGE 0x8 53 + #define CACHE_MPAGE_LEN 20 54 + #define CONTROL_MPAGE 0xa 55 + #define CONTROL_MPAGE_LEN 12 56 + #define ALL_MPAGES 0x3f 57 + #define ALL_SUB_MPAGES 0xff 58 + #define CDL_T2A_SUB_MPAGE 0x07 59 + #define CDL_T2B_SUB_MPAGE 0x08 60 + #define CDL_T2_SUB_MPAGE_LEN 232 61 + #define ATA_FEATURE_SUB_MPAGE 0xf2 62 + #define ATA_FEATURE_SUB_MPAGE_LEN 16 59 63 60 64 static const u8 def_rw_recovery_mpage[RW_RECOVERY_MPAGE_LEN] = { 61 65 RW_RECOVERY_MPAGE, ··· 213 209 { 214 210 bool d_sense = (dev->flags & ATA_DFLAG_D_SENSE); 215 211 216 - if (!cmd) 217 - return; 218 - 219 212 scsi_build_sense(cmd, d_sense, sk, asc, ascq); 220 213 } 221 214 ··· 221 220 const struct ata_taskfile *tf) 222 221 { 223 222 u64 information; 224 - 225 - if (!cmd) 226 - return; 227 223 228 224 information = ata_tf_read_block(tf, dev); 229 225 if (information == U64_MAX) ··· 1381 1383 } 1382 1384 1383 1385 /** 1386 + * scsi_dld - Get duration limit descriptor index 1387 + * @cdb: SCSI command to translate 1388 + * 1389 + * Returns the dld bits indicating the index of a command duration limit 1390 + * descriptor. 
1391 + */ 1392 + static inline int scsi_dld(const u8 *cdb) 1393 + { 1394 + return ((cdb[1] & 0x01) << 2) | ((cdb[14] >> 6) & 0x03); 1395 + } 1396 + 1397 + /** 1384 1398 * ata_scsi_verify_xlat - Translate SCSI VERIFY command into an ATA one 1385 1399 * @qc: Storage for translated ATA taskfile 1386 1400 * ··· 1560 1550 struct request *rq = scsi_cmd_to_rq(scmd); 1561 1551 int class = IOPRIO_PRIO_CLASS(req_get_ioprio(rq)); 1562 1552 unsigned int tf_flags = 0; 1553 + int dld = 0; 1563 1554 u64 block; 1564 1555 u32 n_block; 1565 1556 int rc; ··· 1611 1600 goto invalid_fld; 1612 1601 } 1613 1602 scsi_16_lba_len(cdb, &block, &n_block); 1603 + dld = scsi_dld(cdb); 1614 1604 if (cdb[1] & (1 << 3)) 1615 1605 tf_flags |= ATA_TFLAG_FUA; 1616 1606 if (!ata_check_nblocks(scmd, n_block)) ··· 1636 1624 qc->flags |= ATA_QCFLAG_IO; 1637 1625 qc->nbytes = n_block * scmd->device->sector_size; 1638 1626 1639 - rc = ata_build_rw_tf(qc, block, n_block, tf_flags, class); 1627 + rc = ata_build_rw_tf(qc, block, n_block, tf_flags, dld, class); 1640 1628 if (likely(rc == 0)) 1641 1629 return 0; 1642 1630 ··· 2215 2203 return sizeof(def_cache_mpage); 2216 2204 } 2217 2205 2206 + /* 2207 + * Simulate MODE SENSE control mode page, sub-page 0. 2208 + */ 2209 + static unsigned int ata_msense_control_spg0(struct ata_device *dev, u8 *buf, 2210 + bool changeable) 2211 + { 2212 + modecpy(buf, def_control_mpage, 2213 + sizeof(def_control_mpage), changeable); 2214 + if (changeable) { 2215 + /* ata_mselect_control() */ 2216 + buf[2] |= (1 << 2); 2217 + } else { 2218 + bool d_sense = (dev->flags & ATA_DFLAG_D_SENSE); 2219 + 2220 + /* descriptor format sense data */ 2221 + buf[2] |= (d_sense << 2); 2222 + } 2223 + 2224 + return sizeof(def_control_mpage); 2225 + } 2226 + 2227 + /* 2228 + * Translate an ATA duration limit in microseconds to a SCSI duration limit 2229 + * using the t2cdlunits 0xa (10ms). Since the SCSI duration limits are 2-bytes 2230 + * only, take care of overflows. 
2231 + */
2232 + static inline u16 ata_xlat_cdl_limit(u8 *buf)
2233 + {
2234 + u32 limit = get_unaligned_le32(buf);
2235 +
2236 + return min_t(u32, limit / 10000, 65535);
2237 + }
2238 +
2239 + /*
2240 + * Simulate MODE SENSE control mode page, sub-pages 07h and 08h
2241 + * (command duration limits T2A and T2B mode pages).
2242 + */
2243 + static unsigned int ata_msense_control_spgt2(struct ata_device *dev, u8 *buf,
2244 + u8 spg)
2245 + {
2246 + u8 *b, *cdl = dev->cdl, *desc;
2247 + u32 policy;
2248 + int i;
2249 +
2250 + /*
2251 + * Fill the subpage. The first four bytes of the T2A/T2B mode pages
2252 + * are a header. The PAGE LENGTH field is the size of the page
2253 + * excluding the header.
2254 + */
2255 + buf[0] = CONTROL_MPAGE;
2256 + buf[1] = spg;
2257 + put_unaligned_be16(CDL_T2_SUB_MPAGE_LEN - 4, &buf[2]);
2258 + if (spg == CDL_T2A_SUB_MPAGE) {
2259 + /*
2260 + * Read descriptors map to the T2A page:
2261 + * set perf_vs_duration_guideline.
2262 + */
2263 + buf[7] = (cdl[0] & 0x03) << 4;
2264 + desc = cdl + 64;
2265 + } else {
2266 + /* Write descriptors map to the T2B page */
2267 + desc = cdl + 288;
2268 + }
2269 +
2270 + /* Fill the T2 page descriptors */
2271 + b = &buf[8];
2272 + policy = get_unaligned_le32(&cdl[0]);
2273 + for (i = 0; i < 7; i++, b += 32, desc += 32) {
2274 + /* t2cdlunits: fixed to 10ms */
2275 + b[0] = 0x0a;
2276 +
2277 + /* Max inactive time and its policy */
2278 + put_unaligned_be16(ata_xlat_cdl_limit(&desc[8]), &b[2]);
2279 + b[6] = ((policy >> 8) & 0x0f) << 4;
2280 +
2281 + /* Max active time and its policy */
2282 + put_unaligned_be16(ata_xlat_cdl_limit(&desc[4]), &b[4]);
2283 + b[6] |= (policy >> 4) & 0x0f;
2284 +
2285 + /* Command duration guideline and its policy */
2286 + put_unaligned_be16(ata_xlat_cdl_limit(&desc[16]), &b[10]);
2287 + b[14] = policy & 0x0f;
2288 + }
2289 +
2290 + return CDL_T2_SUB_MPAGE_LEN;
2291 + }
2292 +
2293 + /*
2294 + * Simulate MODE SENSE control mode page, sub-page f2h
2295 + * (ATA feature
control mode page). 2296 + */ 2297 + static unsigned int ata_msense_control_ata_feature(struct ata_device *dev, 2298 + u8 *buf) 2299 + { 2300 + /* PS=0, SPF=1 */ 2301 + buf[0] = CONTROL_MPAGE | (1 << 6); 2302 + buf[1] = ATA_FEATURE_SUB_MPAGE; 2303 + 2304 + /* 2305 + * The first four bytes of ATA Feature Control mode page are a header. 2306 + * The PAGE LENGTH field is the size of the page excluding the header. 2307 + */ 2308 + put_unaligned_be16(ATA_FEATURE_SUB_MPAGE_LEN - 4, &buf[2]); 2309 + 2310 + if (dev->flags & ATA_DFLAG_CDL) 2311 + buf[4] = 0x02; /* Support T2A and T2B pages */ 2312 + else 2313 + buf[4] = 0; 2314 + 2315 + return ATA_FEATURE_SUB_MPAGE_LEN; 2316 + } 2317 + 2218 2318 /** 2219 2319 * ata_msense_control - Simulate MODE SENSE control mode page 2220 2320 * @dev: ATA device of interest 2221 2321 * @buf: output buffer 2322 + * @spg: sub-page code 2222 2323 * @changeable: whether changeable parameters are requested 2223 2324 * 2224 2325 * Generate a generic MODE SENSE control mode page. ··· 2340 2215 * None. 
2341 2216 */
2342 2217 static unsigned int ata_msense_control(struct ata_device *dev, u8 *buf,
2343 - bool changeable)
2218 + u8 spg, bool changeable)
2344 2219 {
2345 - modecpy(buf, def_control_mpage, sizeof(def_control_mpage), changeable);
2346 - if (changeable) {
2347 - buf[2] |= (1 << 2); /* ata_mselect_control() */
2348 - } else {
2349 - bool d_sense = (dev->flags & ATA_DFLAG_D_SENSE);
2220 + unsigned int n;
2350 2221
2351 - buf[2] |= (d_sense << 2); /* descriptor format sense data */
2222 + switch (spg) {
2223 + case 0:
2224 + return ata_msense_control_spg0(dev, buf, changeable);
2225 + case CDL_T2A_SUB_MPAGE:
2226 + case CDL_T2B_SUB_MPAGE:
2227 + return ata_msense_control_spgt2(dev, buf, spg);
2228 + case ATA_FEATURE_SUB_MPAGE:
2229 + return ata_msense_control_ata_feature(dev, buf);
2230 + case ALL_SUB_MPAGES:
2231 + n = ata_msense_control_spg0(dev, buf, changeable);
2232 + n += ata_msense_control_spgt2(dev, buf + n, CDL_T2A_SUB_MPAGE);
2233 + n += ata_msense_control_spgt2(dev, buf + n, CDL_T2B_SUB_MPAGE);
2234 + n += ata_msense_control_ata_feature(dev, buf + n);
2235 + return n;
2236 + default:
2237 + return 0;
2352 2238 }
2353 - return sizeof(def_control_mpage);
2354 2239 }
2355 2240
2356 2241 /**
··· 2433 2298
2434 2299 pg = scsicmd[2] & 0x3f;
2435 2300 spg = scsicmd[3];
2301 +
2436 2302 /*
2437 - * No mode subpages supported (yet) but asking for _all_
2438 - * subpages may be valid
2303 + * Supported subpages: all subpages and sub-pages 07h, 08h and f2h of
2304 + * the control page.
2439 2305 */ 2440 - if (spg && (spg != ALL_SUB_MPAGES)) { 2441 - fp = 3; 2442 - goto invalid_fld; 2306 + if (spg) { 2307 + switch (spg) { 2308 + case ALL_SUB_MPAGES: 2309 + break; 2310 + case CDL_T2A_SUB_MPAGE: 2311 + case CDL_T2B_SUB_MPAGE: 2312 + case ATA_FEATURE_SUB_MPAGE: 2313 + if (dev->flags & ATA_DFLAG_CDL && pg == CONTROL_MPAGE) 2314 + break; 2315 + fallthrough; 2316 + default: 2317 + fp = 3; 2318 + goto invalid_fld; 2319 + } 2443 2320 } 2444 2321 2445 2322 switch(pg) { ··· 2464 2317 break; 2465 2318 2466 2319 case CONTROL_MPAGE: 2467 - p += ata_msense_control(args->dev, p, page_control == 1); 2320 + p += ata_msense_control(args->dev, p, spg, page_control == 1); 2468 2321 break; 2469 2322 2470 2323 case ALL_MPAGES: 2471 2324 p += ata_msense_rw_recovery(p, page_control == 1); 2472 2325 p += ata_msense_caching(args->id, p, page_control == 1); 2473 - p += ata_msense_control(args->dev, p, page_control == 1); 2326 + p += ata_msense_control(args->dev, p, spg, page_control == 1); 2474 2327 break; 2475 2328 2476 2329 default: /* invalid page code */ ··· 2489 2342 memcpy(rbuf + 4, sat_blk_desc, sizeof(sat_blk_desc)); 2490 2343 } 2491 2344 } else { 2492 - unsigned int output_len = p - rbuf - 2; 2493 - 2494 - rbuf[0] = output_len >> 8; 2495 - rbuf[1] = output_len; 2345 + put_unaligned_be16(p - rbuf - 2, &rbuf[0]); 2496 2346 rbuf[3] |= dpofua; 2497 2347 if (ebd) { 2498 2348 rbuf[7] = sizeof(sat_blk_desc); ··· 3404 3260 { 3405 3261 struct ata_device *dev = args->dev; 3406 3262 u8 *cdb = args->cmd->cmnd; 3407 - u8 supported = 0; 3263 + u8 supported = 0, cdlp = 0, rwcdlp = 0; 3408 3264 unsigned int err = 0; 3409 3265 3410 3266 if (cdb[2] != 1 && cdb[2] != 3) { ··· 3431 3287 case MAINTENANCE_IN: 3432 3288 case READ_6: 3433 3289 case READ_10: 3434 - case READ_16: 3435 3290 case WRITE_6: 3436 3291 case WRITE_10: 3437 - case WRITE_16: 3438 3292 case ATA_12: 3439 3293 case ATA_16: 3440 3294 case VERIFY: ··· 3441 3299 case MODE_SELECT_10: 3442 3300 case START_STOP: 3443 3301 
supported = 3; 3302 + break; 3303 + case READ_16: 3304 + supported = 3; 3305 + if (dev->flags & ATA_DFLAG_CDL) { 3306 + /* 3307 + * CDL read descriptors map to the T2A page, that is, 3308 + * rwcdlp = 0x01 and cdlp = 0x01 3309 + */ 3310 + rwcdlp = 0x01; 3311 + cdlp = 0x01 << 3; 3312 + } 3313 + break; 3314 + case WRITE_16: 3315 + supported = 3; 3316 + if (dev->flags & ATA_DFLAG_CDL) { 3317 + /* 3318 + * CDL write descriptors map to the T2B page, that is, 3319 + * rwcdlp = 0x01 and cdlp = 0x02 3320 + */ 3321 + rwcdlp = 0x01; 3322 + cdlp = 0x02 << 3; 3323 + } 3444 3324 break; 3445 3325 case ZBC_IN: 3446 3326 case ZBC_OUT: ··· 3479 3315 break; 3480 3316 } 3481 3317 out: 3482 - rbuf[1] = supported; /* supported */ 3318 + /* One command format */ 3319 + rbuf[0] = rwcdlp; 3320 + rbuf[1] = cdlp | supported; 3483 3321 return err; 3484 3322 } 3485 3323 ··· 3771 3605 return 0; 3772 3606 } 3773 3607 3774 - /** 3775 - * ata_mselect_control - Simulate MODE SELECT for control page 3776 - * @qc: Storage for translated ATA taskfile 3777 - * @buf: input buffer 3778 - * @len: number of valid bytes in the input buffer 3779 - * @fp: out parameter for the failed field on error 3780 - * 3781 - * Prepare a taskfile to modify caching information for the device. 3782 - * 3783 - * LOCKING: 3784 - * None. 3608 + /* 3609 + * Simulate MODE SELECT control mode page, sub-page 0. 3785 3610 */ 3786 - static int ata_mselect_control(struct ata_queued_cmd *qc, 3787 - const u8 *buf, int len, u16 *fp) 3611 + static int ata_mselect_control_spg0(struct ata_queued_cmd *qc, 3612 + const u8 *buf, int len, u16 *fp) 3788 3613 { 3789 3614 struct ata_device *dev = qc->dev; 3790 3615 u8 mpage[CONTROL_MPAGE_LEN]; ··· 3797 3640 /* 3798 3641 * Check that read-only bits are not modified. 
3799 3642 */ 3800 - ata_msense_control(dev, mpage, false); 3643 + ata_msense_control_spg0(dev, mpage, false); 3801 3644 for (i = 0; i < CONTROL_MPAGE_LEN - 2; i++) { 3802 3645 if (i == 0) 3803 3646 continue; ··· 3811 3654 else 3812 3655 dev->flags &= ~ATA_DFLAG_D_SENSE; 3813 3656 return 0; 3657 + } 3658 + 3659 + /* 3660 + * Translate MODE SELECT control mode page, sub-pages f2h (ATA feature mode 3661 + * page) into a SET FEATURES command. 3662 + */ 3663 + static unsigned int ata_mselect_control_ata_feature(struct ata_queued_cmd *qc, 3664 + const u8 *buf, int len, 3665 + u16 *fp) 3666 + { 3667 + struct ata_device *dev = qc->dev; 3668 + struct ata_taskfile *tf = &qc->tf; 3669 + u8 cdl_action; 3670 + 3671 + /* 3672 + * The first four bytes of ATA Feature Control mode page are a header, 3673 + * so offsets in mpage are off by 4 compared to buf. Same for len. 3674 + */ 3675 + if (len != ATA_FEATURE_SUB_MPAGE_LEN - 4) { 3676 + *fp = min(len, ATA_FEATURE_SUB_MPAGE_LEN - 4); 3677 + return -EINVAL; 3678 + } 3679 + 3680 + /* Check cdl_ctrl */ 3681 + switch (buf[0] & 0x03) { 3682 + case 0: 3683 + /* Disable CDL */ 3684 + cdl_action = 0; 3685 + dev->flags &= ~ATA_DFLAG_CDL_ENABLED; 3686 + break; 3687 + case 0x02: 3688 + /* Enable CDL T2A/T2B: NCQ priority must be disabled */ 3689 + if (dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED) { 3690 + ata_dev_err(dev, 3691 + "NCQ priority must be disabled to enable CDL\n"); 3692 + return -EINVAL; 3693 + } 3694 + cdl_action = 1; 3695 + dev->flags |= ATA_DFLAG_CDL_ENABLED; 3696 + break; 3697 + default: 3698 + *fp = 0; 3699 + return -EINVAL; 3700 + } 3701 + 3702 + tf->flags |= ATA_TFLAG_DEVICE | ATA_TFLAG_ISADDR; 3703 + tf->protocol = ATA_PROT_NODATA; 3704 + tf->command = ATA_CMD_SET_FEATURES; 3705 + tf->feature = SETFEATURES_CDL; 3706 + tf->nsect = cdl_action; 3707 + 3708 + return 1; 3709 + } 3710 + 3711 + /** 3712 + * ata_mselect_control - Simulate MODE SELECT for control page 3713 + * @qc: Storage for translated ATA taskfile 3714 + * @spg: 
target sub-page of the control page 3715 + * @buf: input buffer 3716 + * @len: number of valid bytes in the input buffer 3717 + * @fp: out parameter for the failed field on error 3718 + * 3719 + * Prepare a taskfile to modify caching information for the device. 3720 + * 3721 + * LOCKING: 3722 + * None. 3723 + */ 3724 + static int ata_mselect_control(struct ata_queued_cmd *qc, u8 spg, 3725 + const u8 *buf, int len, u16 *fp) 3726 + { 3727 + switch (spg) { 3728 + case 0: 3729 + return ata_mselect_control_spg0(qc, buf, len, fp); 3730 + case ATA_FEATURE_SUB_MPAGE: 3731 + return ata_mselect_control_ata_feature(qc, buf, len, fp); 3732 + default: 3733 + return -EINVAL; 3734 + } 3814 3735 } 3815 3736 3816 3737 /** ··· 3908 3673 const u8 *cdb = scmd->cmnd; 3909 3674 u8 pg, spg; 3910 3675 unsigned six_byte, pg_len, hdr_len, bd_len; 3911 - int len; 3676 + int len, ret; 3912 3677 u16 fp = (u16)-1; 3913 3678 u8 bp = 0xff; 3914 3679 u8 buffer[64]; ··· 3993 3758 } 3994 3759 3995 3760 /* 3996 - * No mode subpages supported (yet) but asking for _all_ 3997 - * subpages may be valid 3761 + * Supported subpages: all subpages and ATA feature sub-page f2h of 3762 + * the control page. 3998 3763 */ 3999 - if (spg && (spg != ALL_SUB_MPAGES)) { 4000 - fp = (p[0] & 0x40) ? 1 : 0; 4001 - fp += hdr_len + bd_len; 4002 - goto invalid_param; 3764 + if (spg) { 3765 + switch (spg) { 3766 + case ALL_SUB_MPAGES: 3767 + /* All subpages is not supported for the control page */ 3768 + if (pg == CONTROL_MPAGE) { 3769 + fp = (p[0] & 0x40) ? 1 : 0; 3770 + fp += hdr_len + bd_len; 3771 + goto invalid_param; 3772 + } 3773 + break; 3774 + case ATA_FEATURE_SUB_MPAGE: 3775 + if (qc->dev->flags & ATA_DFLAG_CDL && 3776 + pg == CONTROL_MPAGE) 3777 + break; 3778 + fallthrough; 3779 + default: 3780 + fp = (p[0] & 0x40) ? 
1 : 0; 3781 + fp += hdr_len + bd_len; 3782 + goto invalid_param; 3783 + } 4003 3784 } 4004 3785 if (pg_len > len) 4005 3786 goto invalid_param_len; ··· 4028 3777 } 4029 3778 break; 4030 3779 case CONTROL_MPAGE: 4031 - if (ata_mselect_control(qc, p, pg_len, &fp) < 0) { 3780 + ret = ata_mselect_control(qc, spg, p, pg_len, &fp); 3781 + if (ret < 0) { 4032 3782 fp += hdr_len + bd_len; 4033 3783 goto invalid_param; 4034 - } else { 4035 - goto skip; /* No ATA command to send */ 4036 3784 } 3785 + if (!ret) 3786 + goto skip; /* No ATA command to send */ 4037 3787 break; 4038 - default: /* invalid page code */ 3788 + default: 3789 + /* Invalid page code */ 4039 3790 fp = bd_len + hdr_len; 4040 3791 goto invalid_param; 4041 3792 }
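Two helpers from the libata-scsi.c hunks above are self-contained enough to lift out. The sketch below keeps their arithmetic but swaps kernel types for stdint and takes the CDL limit as a plain 32-bit value (the kernel reads it with `get_unaligned_le32()`); `xlat_cdl_limit_us()` and `demo_dld()` are names invented for this example:

```c
#include <assert.h>
#include <stdint.h>

/* Duration limit descriptor index: DLD2 is CDB byte 1 bit 0,
 * DLD1:DLD0 are byte 14 bits 7:6 (READ 16 / WRITE 16). */
static inline int scsi_dld(const uint8_t *cdb)
{
	return ((cdb[1] & 0x01) << 2) | ((cdb[14] >> 6) & 0x03);
}

/* Convert a CDL limit in microseconds to the T2 page's 10 ms units
 * (T2CDLUNITS 0Ah), clamped to the 2-byte mode page field. */
static inline uint16_t xlat_cdl_limit_us(uint32_t limit_us)
{
	uint32_t v = limit_us / 10000;

	return v < 65535 ? (uint16_t)v : 65535;
}

static int demo_dld(uint8_t byte1, uint8_t byte14)
{
	uint8_t cdb[16] = { 0 };

	cdb[1] = byte1;
	cdb[14] = byte14;
	return scsi_dld(cdb);
}
```

For example, byte 1 = 0x01 and byte 14 = 0xc0 select descriptor index 7, while an all-zero CDB selects index 0 (no duration limit).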
+1 -1
drivers/ata/libata.h
··· 45 45 extern u64 ata_tf_to_lba(const struct ata_taskfile *tf); 46 46 extern u64 ata_tf_to_lba48(const struct ata_taskfile *tf); 47 47 extern int ata_build_rw_tf(struct ata_queued_cmd *qc, u64 block, u32 n_block, 48 - unsigned int tf_flags, int class); 48 + unsigned int tf_flags, int dld, int class); 49 49 extern u64 ata_tf_read_block(const struct ata_taskfile *tf, 50 50 struct ata_device *dev); 51 51 extern unsigned ata_exec_internal(struct ata_device *dev,
+69
drivers/md/dm.c
··· 3143 3143 bool fail_early; 3144 3144 int ret; 3145 3145 enum pr_type type; 3146 + struct pr_keys *read_keys; 3147 + struct pr_held_reservation *rsv; 3146 3148 }; 3147 3149 3148 3150 static int dm_call_pr(struct block_device *bdev, iterate_devices_callout_fn fn, ··· 3377 3375 return r; 3378 3376 } 3379 3377 3378 + static int __dm_pr_read_keys(struct dm_target *ti, struct dm_dev *dev, 3379 + sector_t start, sector_t len, void *data) 3380 + { 3381 + struct dm_pr *pr = data; 3382 + const struct pr_ops *ops = dev->bdev->bd_disk->fops->pr_ops; 3383 + 3384 + if (!ops || !ops->pr_read_keys) { 3385 + pr->ret = -EOPNOTSUPP; 3386 + return -1; 3387 + } 3388 + 3389 + pr->ret = ops->pr_read_keys(dev->bdev, pr->read_keys); 3390 + if (!pr->ret) 3391 + return -1; 3392 + 3393 + return 0; 3394 + } 3395 + 3396 + static int dm_pr_read_keys(struct block_device *bdev, struct pr_keys *keys) 3397 + { 3398 + struct dm_pr pr = { 3399 + .read_keys = keys, 3400 + }; 3401 + int ret; 3402 + 3403 + ret = dm_call_pr(bdev, __dm_pr_read_keys, &pr); 3404 + if (ret) 3405 + return ret; 3406 + 3407 + return pr.ret; 3408 + } 3409 + 3410 + static int __dm_pr_read_reservation(struct dm_target *ti, struct dm_dev *dev, 3411 + sector_t start, sector_t len, void *data) 3412 + { 3413 + struct dm_pr *pr = data; 3414 + const struct pr_ops *ops = dev->bdev->bd_disk->fops->pr_ops; 3415 + 3416 + if (!ops || !ops->pr_read_reservation) { 3417 + pr->ret = -EOPNOTSUPP; 3418 + return -1; 3419 + } 3420 + 3421 + pr->ret = ops->pr_read_reservation(dev->bdev, pr->rsv); 3422 + if (!pr->ret) 3423 + return -1; 3424 + 3425 + return 0; 3426 + } 3427 + 3428 + static int dm_pr_read_reservation(struct block_device *bdev, 3429 + struct pr_held_reservation *rsv) 3430 + { 3431 + struct dm_pr pr = { 3432 + .rsv = rsv, 3433 + }; 3434 + int ret; 3435 + 3436 + ret = dm_call_pr(bdev, __dm_pr_read_reservation, &pr); 3437 + if (ret) 3438 + return ret; 3439 + 3440 + return pr.ret; 3441 + } 3442 + 3380 3443 static const struct pr_ops 
dm_pr_ops = { 3381 3444 .pr_register = dm_pr_register, 3382 3445 .pr_reserve = dm_pr_reserve, 3383 3446 .pr_release = dm_pr_release, 3384 3447 .pr_preempt = dm_pr_preempt, 3385 3448 .pr_clear = dm_pr_clear, 3449 + .pr_read_keys = dm_pr_read_keys, 3450 + .pr_read_reservation = dm_pr_read_reservation, 3386 3451 }; 3387 3452 3388 3453 static const struct block_device_operations dm_blk_dops = {
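The two new dm callouts above share a pattern: the real status travels in `struct dm_pr` (`pr->ret`), the callout returns nonzero to stop iterating once one underlying device has answered (or when `pr_ops` support is missing), and returns 0 on failure so the next device is tried. A toy model of that control flow, with all names (`fake_dev`, `fake_pr`, `fake_read_keys`, etc.) invented for illustration and the dm details (`fail_early`, suspend checks) omitted:

```c
#include <assert.h>
#include <stddef.h>

struct fake_dev { int supported; int read_err; int keys; };
struct fake_pr { int ret; int keys_out; };

/* Per-device callout: nonzero stops the iteration, the real status
 * travels in pr->ret, mirroring __dm_pr_read_keys(). */
static int read_keys_callout(const struct fake_dev *dev, struct fake_pr *pr)
{
	if (!dev->supported) {
		pr->ret = -95;		/* -EOPNOTSUPP: no point continuing */
		return -1;
	}
	pr->ret = dev->read_err;
	if (!pr->ret) {
		pr->keys_out = dev->keys;
		return -1;		/* one device's answer is enough */
	}
	return 0;			/* failed: try the next device */
}

/* Iterate like dm_call_pr(), then report the callout's status. */
static int fake_read_keys(const struct fake_dev *devs, size_t n,
			  struct fake_pr *pr)
{
	for (size_t i = 0; i < n; i++)
		if (read_keys_callout(&devs[i], pr))
			break;
	return pr->ret;
}

/* First device fails with -EIO, second succeeds: the keys come from
 * the second device and the overall status is 0. */
static int demo_read_keys(void)
{
	struct fake_dev devs[2] = {
		{ .supported = 1, .read_err = -5 },
		{ .supported = 1, .read_err = 0, .keys = 42 },
	};
	struct fake_pr pr = { 0 };

	if (fake_read_keys(devs, 2, &pr))
		return -1;
	return pr.keys_out;
}
```

Unlike register/preempt, which must run against every device, the read operations only need one device to answer, which is why success also stops the iteration.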
+1 -1
drivers/message/fusion/Kconfig
··· 2 2 3 3 menuconfig FUSION 4 4 bool "Fusion MPT device support" 5 - depends on PCI 5 + depends on PCI && HAS_IOPORT 6 6 help 7 7 Say Y here to get to see options for Fusion Message 8 8 Passing Technology (MPT) drivers.
+2 -2
drivers/message/fusion/mptbase.c
··· 712 712 MptDriverClass[cb_idx] = dclass; 713 713 MptEvHandlers[cb_idx] = NULL; 714 714 last_drv_idx = cb_idx; 715 - strlcpy(MptCallbacksName[cb_idx], func_name, 715 + strscpy(MptCallbacksName[cb_idx], func_name, 716 716 MPT_MAX_CALLBACKNAME_LEN+1); 717 717 break; 718 718 } ··· 7666 7666 break; 7667 7667 } 7668 7668 if (ds) 7669 - strlcpy(evStr, ds, EVENT_DESCR_STR_SZ); 7669 + strscpy(evStr, ds, EVENT_DESCR_STR_SZ); 7670 7670 7671 7671 7672 7672 devtprintk(ioc, printk(MYIOC_s_DEBUG_FMT
+1 -1
drivers/message/fusion/mptctl.c
··· 2408 2408 if (mpt_config(ioc, &cfg) == 0) { 2409 2409 ManufacturingPage0_t *pdata = (ManufacturingPage0_t *) pbuf; 2410 2410 if (strlen(pdata->BoardTracerNumber) > 1) { 2411 - strlcpy(karg.serial_number, 2411 + strscpy(karg.serial_number, 2412 2412 pdata->BoardTracerNumber, 24); 2413 2413 } 2414 2414 }
+1 -1
drivers/nvme/host/Makefile
··· 10 10 obj-$(CONFIG_NVME_TCP) += nvme-tcp.o 11 11 obj-$(CONFIG_NVME_APPLE) += nvme-apple.o 12 12 13 - nvme-core-y += core.o ioctl.o sysfs.o 13 + nvme-core-y += core.o ioctl.o sysfs.o pr.o 14 14 nvme-core-$(CONFIG_NVME_VERBOSE_ERRORS) += constants.o 15 15 nvme-core-$(CONFIG_TRACING) += trace.o 16 16 nvme-core-$(CONFIG_NVME_MULTIPATH) += multipath.o
+1 -148
drivers/nvme/host/core.c
··· 279 279 case NVME_SC_INVALID_PI: 280 280 return BLK_STS_PROTECTION; 281 281 case NVME_SC_RESERVATION_CONFLICT: 282 - return BLK_STS_NEXUS; 282 + return BLK_STS_RESV_CONFLICT; 283 283 case NVME_SC_HOST_PATH_ERROR: 284 284 return BLK_STS_TRANSPORT; 285 285 case NVME_SC_ZONE_TOO_MANY_ACTIVE: ··· 2104 2104 return nvme_update_ns_info_generic(ns, info); 2105 2105 } 2106 2106 } 2107 - 2108 - static char nvme_pr_type(enum pr_type type) 2109 - { 2110 - switch (type) { 2111 - case PR_WRITE_EXCLUSIVE: 2112 - return 1; 2113 - case PR_EXCLUSIVE_ACCESS: 2114 - return 2; 2115 - case PR_WRITE_EXCLUSIVE_REG_ONLY: 2116 - return 3; 2117 - case PR_EXCLUSIVE_ACCESS_REG_ONLY: 2118 - return 4; 2119 - case PR_WRITE_EXCLUSIVE_ALL_REGS: 2120 - return 5; 2121 - case PR_EXCLUSIVE_ACCESS_ALL_REGS: 2122 - return 6; 2123 - default: 2124 - return 0; 2125 - } 2126 - } 2127 - 2128 - static int nvme_send_ns_head_pr_command(struct block_device *bdev, 2129 - struct nvme_command *c, u8 data[16]) 2130 - { 2131 - struct nvme_ns_head *head = bdev->bd_disk->private_data; 2132 - int srcu_idx = srcu_read_lock(&head->srcu); 2133 - struct nvme_ns *ns = nvme_find_path(head); 2134 - int ret = -EWOULDBLOCK; 2135 - 2136 - if (ns) { 2137 - c->common.nsid = cpu_to_le32(ns->head->ns_id); 2138 - ret = nvme_submit_sync_cmd(ns->queue, c, data, 16); 2139 - } 2140 - srcu_read_unlock(&head->srcu, srcu_idx); 2141 - return ret; 2142 - } 2143 - 2144 - static int nvme_send_ns_pr_command(struct nvme_ns *ns, struct nvme_command *c, 2145 - u8 data[16]) 2146 - { 2147 - c->common.nsid = cpu_to_le32(ns->head->ns_id); 2148 - return nvme_submit_sync_cmd(ns->queue, c, data, 16); 2149 - } 2150 - 2151 - static int nvme_sc_to_pr_err(int nvme_sc) 2152 - { 2153 - if (nvme_is_path_error(nvme_sc)) 2154 - return PR_STS_PATH_FAILED; 2155 - 2156 - switch (nvme_sc) { 2157 - case NVME_SC_SUCCESS: 2158 - return PR_STS_SUCCESS; 2159 - case NVME_SC_RESERVATION_CONFLICT: 2160 - return PR_STS_RESERVATION_CONFLICT; 2161 - case 
NVME_SC_ONCS_NOT_SUPPORTED: 2162 - return -EOPNOTSUPP; 2163 - case NVME_SC_BAD_ATTRIBUTES: 2164 - case NVME_SC_INVALID_OPCODE: 2165 - case NVME_SC_INVALID_FIELD: 2166 - case NVME_SC_INVALID_NS: 2167 - return -EINVAL; 2168 - default: 2169 - return PR_STS_IOERR; 2170 - } 2171 - } 2172 - 2173 - static int nvme_pr_command(struct block_device *bdev, u32 cdw10, 2174 - u64 key, u64 sa_key, u8 op) 2175 - { 2176 - struct nvme_command c = { }; 2177 - u8 data[16] = { 0, }; 2178 - int ret; 2179 - 2180 - put_unaligned_le64(key, &data[0]); 2181 - put_unaligned_le64(sa_key, &data[8]); 2182 - 2183 - c.common.opcode = op; 2184 - c.common.cdw10 = cpu_to_le32(cdw10); 2185 - 2186 - if (IS_ENABLED(CONFIG_NVME_MULTIPATH) && 2187 - bdev->bd_disk->fops == &nvme_ns_head_ops) 2188 - ret = nvme_send_ns_head_pr_command(bdev, &c, data); 2189 - else 2190 - ret = nvme_send_ns_pr_command(bdev->bd_disk->private_data, &c, 2191 - data); 2192 - if (ret < 0) 2193 - return ret; 2194 - 2195 - return nvme_sc_to_pr_err(ret); 2196 - } 2197 - 2198 - static int nvme_pr_register(struct block_device *bdev, u64 old, 2199 - u64 new, unsigned flags) 2200 - { 2201 - u32 cdw10; 2202 - 2203 - if (flags & ~PR_FL_IGNORE_KEY) 2204 - return -EOPNOTSUPP; 2205 - 2206 - cdw10 = old ? 2 : 0; 2207 - cdw10 |= (flags & PR_FL_IGNORE_KEY) ? 1 << 3 : 0; 2208 - cdw10 |= (1 << 30) | (1 << 31); /* PTPL=1 */ 2209 - return nvme_pr_command(bdev, cdw10, old, new, nvme_cmd_resv_register); 2210 - } 2211 - 2212 - static int nvme_pr_reserve(struct block_device *bdev, u64 key, 2213 - enum pr_type type, unsigned flags) 2214 - { 2215 - u32 cdw10; 2216 - 2217 - if (flags & ~PR_FL_IGNORE_KEY) 2218 - return -EOPNOTSUPP; 2219 - 2220 - cdw10 = nvme_pr_type(type) << 8; 2221 - cdw10 |= ((flags & PR_FL_IGNORE_KEY) ? 
1 << 3 : 0); 2222 - return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_acquire); 2223 - } 2224 - 2225 - static int nvme_pr_preempt(struct block_device *bdev, u64 old, u64 new, 2226 - enum pr_type type, bool abort) 2227 - { 2228 - u32 cdw10 = nvme_pr_type(type) << 8 | (abort ? 2 : 1); 2229 - 2230 - return nvme_pr_command(bdev, cdw10, old, new, nvme_cmd_resv_acquire); 2231 - } 2232 - 2233 - static int nvme_pr_clear(struct block_device *bdev, u64 key) 2234 - { 2235 - u32 cdw10 = 1 | (key ? 0 : 1 << 3); 2236 - 2237 - return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release); 2238 - } 2239 - 2240 - static int nvme_pr_release(struct block_device *bdev, u64 key, enum pr_type type) 2241 - { 2242 - u32 cdw10 = nvme_pr_type(type) << 8 | (key ? 0 : 1 << 3); 2243 - 2244 - return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release); 2245 - } 2246 - 2247 - const struct pr_ops nvme_pr_ops = { 2248 - .pr_register = nvme_pr_register, 2249 - .pr_reserve = nvme_pr_reserve, 2250 - .pr_release = nvme_pr_release, 2251 - .pr_preempt = nvme_pr_preempt, 2252 - .pr_clear = nvme_pr_clear, 2253 - }; 2254 2107 2255 2108 #ifdef CONFIG_BLK_SED_OPAL 2256 2109 static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
+2
drivers/nvme/host/nvme.h
··· 19 19 20 20 #include <trace/events/block.h> 21 21 22 + extern const struct pr_ops nvme_pr_ops; 23 + 22 24 extern unsigned int nvme_io_timeout; 23 25 #define NVME_IO_TIMEOUT (nvme_io_timeout * HZ) 24 26
+315
drivers/nvme/host/pr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2015 Intel Corporation 4 + * Keith Busch <kbusch@kernel.org> 5 + */ 6 + #include <linux/blkdev.h> 7 + #include <linux/pr.h> 8 + #include <asm/unaligned.h> 9 + 10 + #include "nvme.h" 11 + 12 + static enum nvme_pr_type nvme_pr_type_from_blk(enum pr_type type) 13 + { 14 + switch (type) { 15 + case PR_WRITE_EXCLUSIVE: 16 + return NVME_PR_WRITE_EXCLUSIVE; 17 + case PR_EXCLUSIVE_ACCESS: 18 + return NVME_PR_EXCLUSIVE_ACCESS; 19 + case PR_WRITE_EXCLUSIVE_REG_ONLY: 20 + return NVME_PR_WRITE_EXCLUSIVE_REG_ONLY; 21 + case PR_EXCLUSIVE_ACCESS_REG_ONLY: 22 + return NVME_PR_EXCLUSIVE_ACCESS_REG_ONLY; 23 + case PR_WRITE_EXCLUSIVE_ALL_REGS: 24 + return NVME_PR_WRITE_EXCLUSIVE_ALL_REGS; 25 + case PR_EXCLUSIVE_ACCESS_ALL_REGS: 26 + return NVME_PR_EXCLUSIVE_ACCESS_ALL_REGS; 27 + } 28 + 29 + return 0; 30 + } 31 + 32 + static enum pr_type block_pr_type_from_nvme(enum nvme_pr_type type) 33 + { 34 + switch (type) { 35 + case NVME_PR_WRITE_EXCLUSIVE: 36 + return PR_WRITE_EXCLUSIVE; 37 + case NVME_PR_EXCLUSIVE_ACCESS: 38 + return PR_EXCLUSIVE_ACCESS; 39 + case NVME_PR_WRITE_EXCLUSIVE_REG_ONLY: 40 + return PR_WRITE_EXCLUSIVE_REG_ONLY; 41 + case NVME_PR_EXCLUSIVE_ACCESS_REG_ONLY: 42 + return PR_EXCLUSIVE_ACCESS_REG_ONLY; 43 + case NVME_PR_WRITE_EXCLUSIVE_ALL_REGS: 44 + return PR_WRITE_EXCLUSIVE_ALL_REGS; 45 + case NVME_PR_EXCLUSIVE_ACCESS_ALL_REGS: 46 + return PR_EXCLUSIVE_ACCESS_ALL_REGS; 47 + } 48 + 49 + return 0; 50 + } 51 + 52 + static int nvme_send_ns_head_pr_command(struct block_device *bdev, 53 + struct nvme_command *c, void *data, unsigned int data_len) 54 + { 55 + struct nvme_ns_head *head = bdev->bd_disk->private_data; 56 + int srcu_idx = srcu_read_lock(&head->srcu); 57 + struct nvme_ns *ns = nvme_find_path(head); 58 + int ret = -EWOULDBLOCK; 59 + 60 + if (ns) { 61 + c->common.nsid = cpu_to_le32(ns->head->ns_id); 62 + ret = nvme_submit_sync_cmd(ns->queue, c, data, data_len); 63 + } 64 + 
srcu_read_unlock(&head->srcu, srcu_idx); 65 + return ret; 66 + } 67 + 68 + static int nvme_send_ns_pr_command(struct nvme_ns *ns, struct nvme_command *c, 69 + void *data, unsigned int data_len) 70 + { 71 + c->common.nsid = cpu_to_le32(ns->head->ns_id); 72 + return nvme_submit_sync_cmd(ns->queue, c, data, data_len); 73 + } 74 + 75 + static int nvme_sc_to_pr_err(int nvme_sc) 76 + { 77 + if (nvme_is_path_error(nvme_sc)) 78 + return PR_STS_PATH_FAILED; 79 + 80 + switch (nvme_sc) { 81 + case NVME_SC_SUCCESS: 82 + return PR_STS_SUCCESS; 83 + case NVME_SC_RESERVATION_CONFLICT: 84 + return PR_STS_RESERVATION_CONFLICT; 85 + case NVME_SC_ONCS_NOT_SUPPORTED: 86 + return -EOPNOTSUPP; 87 + case NVME_SC_BAD_ATTRIBUTES: 88 + case NVME_SC_INVALID_OPCODE: 89 + case NVME_SC_INVALID_FIELD: 90 + case NVME_SC_INVALID_NS: 91 + return -EINVAL; 92 + default: 93 + return PR_STS_IOERR; 94 + } 95 + } 96 + 97 + static int nvme_send_pr_command(struct block_device *bdev, 98 + struct nvme_command *c, void *data, unsigned int data_len) 99 + { 100 + if (IS_ENABLED(CONFIG_NVME_MULTIPATH) && 101 + bdev->bd_disk->fops == &nvme_ns_head_ops) 102 + return nvme_send_ns_head_pr_command(bdev, c, data, data_len); 103 + 104 + return nvme_send_ns_pr_command(bdev->bd_disk->private_data, c, data, 105 + data_len); 106 + } 107 + 108 + static int nvme_pr_command(struct block_device *bdev, u32 cdw10, 109 + u64 key, u64 sa_key, u8 op) 110 + { 111 + struct nvme_command c = { }; 112 + u8 data[16] = { 0, }; 113 + int ret; 114 + 115 + put_unaligned_le64(key, &data[0]); 116 + put_unaligned_le64(sa_key, &data[8]); 117 + 118 + c.common.opcode = op; 119 + c.common.cdw10 = cpu_to_le32(cdw10); 120 + 121 + ret = nvme_send_pr_command(bdev, &c, data, sizeof(data)); 122 + if (ret < 0) 123 + return ret; 124 + 125 + return nvme_sc_to_pr_err(ret); 126 + } 127 + 128 + static int nvme_pr_register(struct block_device *bdev, u64 old, 129 + u64 new, unsigned flags) 130 + { 131 + u32 cdw10; 132 + 133 + if (flags & ~PR_FL_IGNORE_KEY) 134 + 
return -EOPNOTSUPP; 135 + 136 + cdw10 = old ? 2 : 0; 137 + cdw10 |= (flags & PR_FL_IGNORE_KEY) ? 1 << 3 : 0; 138 + cdw10 |= (1 << 30) | (1 << 31); /* PTPL=1 */ 139 + return nvme_pr_command(bdev, cdw10, old, new, nvme_cmd_resv_register); 140 + } 141 + 142 + static int nvme_pr_reserve(struct block_device *bdev, u64 key, 143 + enum pr_type type, unsigned flags) 144 + { 145 + u32 cdw10; 146 + 147 + if (flags & ~PR_FL_IGNORE_KEY) 148 + return -EOPNOTSUPP; 149 + 150 + cdw10 = nvme_pr_type_from_blk(type) << 8; 151 + cdw10 |= ((flags & PR_FL_IGNORE_KEY) ? 1 << 3 : 0); 152 + return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_acquire); 153 + } 154 + 155 + static int nvme_pr_preempt(struct block_device *bdev, u64 old, u64 new, 156 + enum pr_type type, bool abort) 157 + { 158 + u32 cdw10 = nvme_pr_type_from_blk(type) << 8 | (abort ? 2 : 1); 159 + 160 + return nvme_pr_command(bdev, cdw10, old, new, nvme_cmd_resv_acquire); 161 + } 162 + 163 + static int nvme_pr_clear(struct block_device *bdev, u64 key) 164 + { 165 + u32 cdw10 = 1 | (key ? 0 : 1 << 3); 166 + 167 + return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release); 168 + } 169 + 170 + static int nvme_pr_release(struct block_device *bdev, u64 key, enum pr_type type) 171 + { 172 + u32 cdw10 = nvme_pr_type_from_blk(type) << 8 | (key ? 
0 : 1 << 3); 173 + 174 + return nvme_pr_command(bdev, cdw10, key, 0, nvme_cmd_resv_release); 175 + } 176 + 177 + static int nvme_pr_resv_report(struct block_device *bdev, void *data, 178 + u32 data_len, bool *eds) 179 + { 180 + struct nvme_command c = { }; 181 + int ret; 182 + 183 + c.common.opcode = nvme_cmd_resv_report; 184 + c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len)); 185 + c.common.cdw11 = cpu_to_le32(NVME_EXTENDED_DATA_STRUCT); 186 + *eds = true; 187 + 188 + retry: 189 + ret = nvme_send_pr_command(bdev, &c, data, data_len); 190 + if (ret == NVME_SC_HOST_ID_INCONSIST && 191 + c.common.cdw11 == cpu_to_le32(NVME_EXTENDED_DATA_STRUCT)) { 192 + c.common.cdw11 = 0; 193 + *eds = false; 194 + goto retry; 195 + } 196 + 197 + if (ret < 0) 198 + return ret; 199 + 200 + return nvme_sc_to_pr_err(ret); 201 + } 202 + 203 + static int nvme_pr_read_keys(struct block_device *bdev, 204 + struct pr_keys *keys_info) 205 + { 206 + u32 rse_len, num_keys = keys_info->num_keys; 207 + struct nvme_reservation_status_ext *rse; 208 + int ret, i; 209 + bool eds; 210 + 211 + /* 212 + * Assume we are using 128-bit host IDs and allocate a buffer large 213 + * enough to get enough keys to fill the return keys buffer. 
214 + */ 215 + rse_len = struct_size(rse, regctl_eds, num_keys); 216 + rse = kzalloc(rse_len, GFP_KERNEL); 217 + if (!rse) 218 + return -ENOMEM; 219 + 220 + ret = nvme_pr_resv_report(bdev, rse, rse_len, &eds); 221 + if (ret) 222 + goto free_rse; 223 + 224 + keys_info->generation = le32_to_cpu(rse->gen); 225 + keys_info->num_keys = get_unaligned_le16(&rse->regctl); 226 + 227 + num_keys = min(num_keys, keys_info->num_keys); 228 + for (i = 0; i < num_keys; i++) { 229 + if (eds) { 230 + keys_info->keys[i] = 231 + le64_to_cpu(rse->regctl_eds[i].rkey); 232 + } else { 233 + struct nvme_reservation_status *rs; 234 + 235 + rs = (struct nvme_reservation_status *)rse; 236 + keys_info->keys[i] = le64_to_cpu(rs->regctl_ds[i].rkey); 237 + } 238 + } 239 + 240 + free_rse: 241 + kfree(rse); 242 + return ret; 243 + } 244 + 245 + static int nvme_pr_read_reservation(struct block_device *bdev, 246 + struct pr_held_reservation *resv) 247 + { 248 + struct nvme_reservation_status_ext tmp_rse, *rse; 249 + int ret, i, num_regs; 250 + u32 rse_len; 251 + bool eds; 252 + 253 + get_num_regs: 254 + /* 255 + * Get the number of registrations so we know how big to allocate 256 + * the response buffer. 
257 + */ 258 + ret = nvme_pr_resv_report(bdev, &tmp_rse, sizeof(tmp_rse), &eds); 259 + if (ret) 260 + return ret; 261 + 262 + num_regs = get_unaligned_le16(&tmp_rse.regctl); 263 + if (!num_regs) { 264 + resv->generation = le32_to_cpu(tmp_rse.gen); 265 + return 0; 266 + } 267 + 268 + rse_len = struct_size(rse, regctl_eds, num_regs); 269 + rse = kzalloc(rse_len, GFP_KERNEL); 270 + if (!rse) 271 + return -ENOMEM; 272 + 273 + ret = nvme_pr_resv_report(bdev, rse, rse_len, &eds); 274 + if (ret) 275 + goto free_rse; 276 + 277 + if (num_regs != get_unaligned_le16(&rse->regctl)) { 278 + kfree(rse); 279 + goto get_num_regs; 280 + } 281 + 282 + resv->generation = le32_to_cpu(rse->gen); 283 + resv->type = block_pr_type_from_nvme(rse->rtype); 284 + 285 + for (i = 0; i < num_regs; i++) { 286 + if (eds) { 287 + if (rse->regctl_eds[i].rcsts) { 288 + resv->key = le64_to_cpu(rse->regctl_eds[i].rkey); 289 + break; 290 + } 291 + } else { 292 + struct nvme_reservation_status *rs; 293 + 294 + rs = (struct nvme_reservation_status *)rse; 295 + if (rs->regctl_ds[i].rcsts) { 296 + resv->key = le64_to_cpu(rs->regctl_ds[i].rkey); 297 + break; 298 + } 299 + } 300 + } 301 + 302 + free_rse: 303 + kfree(rse); 304 + return ret; 305 + } 306 + 307 + const struct pr_ops nvme_pr_ops = { 308 + .pr_register = nvme_pr_register, 309 + .pr_reserve = nvme_pr_reserve, 310 + .pr_release = nvme_pr_release, 311 + .pr_preempt = nvme_pr_preempt, 312 + .pr_clear = nvme_pr_clear, 313 + .pr_read_keys = nvme_pr_read_keys, 314 + .pr_read_reservation = nvme_pr_read_reservation, 315 + };
+6 -1
drivers/s390/block/dasd.c
··· 2737 2737 else if (status == 0) { 2738 2738 switch (cqr->intrc) { 2739 2739 case -EPERM: 2740 - error = BLK_STS_NEXUS; 2740 + /* 2741 + * DASD doesn't implement SCSI/NVMe reservations, but it 2742 + * implements a locking scheme similar to them. We 2743 + * return this error when we no longer have the lock. 2744 + */ 2745 + error = BLK_STS_RESV_CONFLICT; 2741 2746 break; 2742 2747 case -ENOLINK: 2743 2748 error = BLK_STS_TRANSPORT;
+3 -1
drivers/scsi/3w-xxxx.c
··· 2305 2305 TW_DISABLE_INTERRUPTS(tw_dev); 2306 2306 2307 2307 /* Initialize the card */ 2308 - if (tw_reset_sequence(tw_dev)) 2308 + if (tw_reset_sequence(tw_dev)) { 2309 + retval = -EINVAL; 2309 2310 goto out_release_mem_region; 2311 + } 2310 2312 2311 2313 /* Set host specific parameters */ 2312 2314 host->max_id = TW_MAX_UNITS;
+13 -12
drivers/scsi/Kconfig
··· 334 334 335 335 config BLK_DEV_3W_XXXX_RAID 336 336 tristate "3ware 5/6/7/8xxx ATA-RAID support" 337 - depends on PCI && SCSI 337 + depends on PCI && HAS_IOPORT && SCSI 338 338 help 339 339 3ware is the only hardware ATA-Raid product in Linux to date. 340 340 This card is 2,4, or 8 channel master mode support only. ··· 381 381 382 382 config SCSI_ACARD 383 383 tristate "ACARD SCSI support" 384 - depends on PCI && SCSI 384 + depends on PCI && HAS_IOPORT && SCSI 385 385 help 386 386 This driver supports the ACARD SCSI host adapter. 387 387 Support Chip <ATP870 ATP876 ATP880 ATP885> ··· 462 462 config SCSI_ADVANSYS 463 463 tristate "AdvanSys SCSI support" 464 464 depends on SCSI 465 - depends on ISA || EISA || PCI 465 + depends on (ISA || EISA || PCI) && HAS_IOPORT 466 466 depends on ISA_DMA_API || !ISA 467 467 help 468 468 This is a driver for all SCSI host adapters manufactured by ··· 503 503 504 504 config SCSI_BUSLOGIC 505 505 tristate "BusLogic SCSI support" 506 - depends on PCI && SCSI 506 + depends on SCSI && PCI && HAS_IOPORT 507 507 help 508 508 This is support for BusLogic MultiMaster and FlashPoint SCSI Host 509 509 Adapters. Consult the SCSI-HOWTO, available from ··· 518 518 519 519 config SCSI_FLASHPOINT 520 520 bool "FlashPoint support" 521 - depends on SCSI_BUSLOGIC && PCI 521 + depends on SCSI_BUSLOGIC && PCI && HAS_IOPORT 522 522 help 523 523 This option allows you to add FlashPoint support to the 524 524 BusLogic SCSI driver. The FlashPoint SCCB Manager code is ··· 632 632 633 633 config SCSI_DMX3191D 634 634 tristate "DMX3191D SCSI support" 635 - depends on PCI && SCSI 635 + depends on PCI && HAS_IOPORT && SCSI 636 636 select SCSI_SPI_ATTRS 637 637 help 638 638 This is support for Domex DMX3191D SCSI Host Adapters. 
··· 646 646 647 647 config SCSI_FDOMAIN_PCI 648 648 tristate "Future Domain TMC-3260/AHA-2920A PCI SCSI support" 649 - depends on PCI && SCSI 649 + depends on PCI && HAS_IOPORT && SCSI 650 650 select SCSI_FDOMAIN 651 651 help 652 652 This is support for Future Domain's PCI SCSI host adapters (TMC-3260) ··· 699 699 700 700 config SCSI_IPS 701 701 tristate "IBM ServeRAID support" 702 - depends on PCI && SCSI 702 + depends on PCI && HAS_IOPORT && SCSI 703 703 help 704 704 This is support for the IBM ServeRAID hardware RAID controllers. 705 705 See <http://www.developer.ibm.com/welcome/netfinity/serveraid.html> ··· 759 759 760 760 config SCSI_INITIO 761 761 tristate "Initio 9100U(W) support" 762 - depends on PCI && SCSI 762 + depends on PCI && HAS_IOPORT && SCSI 763 763 help 764 764 This is support for the Initio 91XXU(W) SCSI host adapter. Please 765 765 read the SCSI-HOWTO, available from ··· 770 770 771 771 config SCSI_INIA100 772 772 tristate "Initio INI-A100U2W support" 773 - depends on PCI && SCSI 773 + depends on PCI && HAS_IOPORT && SCSI 774 774 help 775 775 This is support for the Initio INI-A100U2W SCSI host adapter. 776 776 Please read the SCSI-HOWTO, available from ··· 782 782 config SCSI_PPA 783 783 tristate "IOMEGA parallel port (ppa - older drives)" 784 784 depends on SCSI && PARPORT_PC 785 + depends on HAS_IOPORT 785 786 help 786 787 This driver supports older versions of IOMEGA's parallel port ZIP 787 788 drive (a 100 MB removable media device). 
··· 1176 1175 1177 1176 config SCSI_DC395x 1178 1177 tristate "Tekram DC395(U/UW/F) and DC315(U) SCSI support" 1179 - depends on PCI && SCSI 1178 + depends on PCI && HAS_IOPORT && SCSI 1180 1179 select SCSI_SPI_ATTRS 1181 1180 help 1182 1181 This driver supports PCI SCSI host adapters based on the ASIC ··· 1208 1207 1209 1208 config SCSI_NSP32 1210 1209 tristate "Workbit NinjaSCSI-32Bi/UDE support" 1211 - depends on PCI && SCSI && !64BIT 1210 + depends on PCI && SCSI && !64BIT && HAS_IOPORT 1212 1211 help 1213 1212 This is support for the Workbit NinjaSCSI-32Bi/UDE PCI/Cardbus 1214 1213 SCSI host adapter. Please read the SCSI-HOWTO, available from
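The Kconfig churn above follows one pattern: drivers that use legacy port I/O (inb()/outb() and friends) now declare it, so they are not offered on platforms without port I/O. The recurring shape, shown here with a hypothetical entry for illustration:

```
config SCSI_EXAMPLE_HBA
	tristate "Example port-I/O SCSI HBA"
	depends on PCI && HAS_IOPORT && SCSI
	help
	  Drivers calling inb()/outb() must depend on HAS_IOPORT so
	  that configurations for platforms lacking port I/O do not
	  offer them.
```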
+1 -1
drivers/scsi/aic7xxx/Kconfig.aic79xx
··· 5 5 # 6 6 config SCSI_AIC79XX 7 7 tristate "Adaptec AIC79xx U320 support" 8 - depends on PCI && SCSI 8 + depends on PCI && HAS_IOPORT && SCSI 9 9 select SCSI_SPI_ATTRS 10 10 help 11 11 This driver supports all of Adaptec's Ultra 320 PCI-X
+1 -1
drivers/scsi/aic7xxx/Kconfig.aic7xxx
··· 5 5 # 6 6 config SCSI_AIC7XXX 7 7 tristate "Adaptec AIC7xxx Fast -> U160 support" 8 - depends on (PCI || EISA) && SCSI 8 + depends on (PCI || EISA) && HAS_IOPORT && SCSI 9 9 select SCSI_SPI_ATTRS 10 10 help 11 11 This driver supports all of Adaptec's Fast through Ultra 160 PCI
+1 -1
drivers/scsi/aic94xx/Kconfig
··· 8 8 9 9 config SCSI_AIC94XX 10 10 tristate "Adaptec AIC94xx SAS/SATA support" 11 - depends on PCI 11 + depends on PCI && HAS_IOPORT 12 12 select SCSI_SAS_LIBSAS 13 13 select FW_LOADER 14 14 help
+2 -2
drivers/scsi/bfa/bfa_fcbuild.c
··· 1134 1134 memset(rspnid, 0, sizeof(struct fcgs_rspnid_req_s)); 1135 1135 1136 1136 rspnid->dap = s_id; 1137 - strlcpy(rspnid->spn, name, sizeof(rspnid->spn)); 1137 + strscpy(rspnid->spn, name, sizeof(rspnid->spn)); 1138 1138 rspnid->spn_len = (u8) strlen(rspnid->spn); 1139 1139 1140 1140 return sizeof(struct fcgs_rspnid_req_s) + sizeof(struct ct_hdr_s); ··· 1155 1155 memset(rsnn_nn, 0, sizeof(struct fcgs_rsnn_nn_req_s)); 1156 1156 1157 1157 rsnn_nn->node_name = node_name; 1158 - strlcpy(rsnn_nn->snn, name, sizeof(rsnn_nn->snn)); 1158 + strscpy(rsnn_nn->snn, name, sizeof(rsnn_nn->snn)); 1159 1159 rsnn_nn->snn_len = (u8) strlen(rsnn_nn->snn); 1160 1160 1161 1161 return sizeof(struct fcgs_rsnn_nn_req_s) + sizeof(struct ct_hdr_s);
+2 -2
drivers/scsi/bfa/bfa_fcs.c
··· 761 761 bfa_ioc_get_adapter_model(&fabric->fcs->bfa->ioc, model); 762 762 763 763 /* Model name/number */ 764 - strlcpy(port_cfg->sym_name.symname, model, 764 + strscpy(port_cfg->sym_name.symname, model, 765 765 BFA_SYMNAME_MAXLEN); 766 766 strlcat(port_cfg->sym_name.symname, BFA_FCS_PORT_SYMBNAME_SEPARATOR, 767 767 BFA_SYMNAME_MAXLEN); ··· 822 822 bfa_ioc_get_adapter_model(&fabric->fcs->bfa->ioc, model); 823 823 824 824 /* Model name/number */ 825 - strlcpy(port_cfg->node_sym_name.symname, model, 825 + strscpy(port_cfg->node_sym_name.symname, model, 826 826 BFA_SYMNAME_MAXLEN); 827 827 strlcat(port_cfg->node_sym_name.symname, 828 828 BFA_FCS_PORT_SYMBNAME_SEPARATOR,
+10 -10
drivers/scsi/bfa/bfa_fcs_lport.c
··· 2642 2642 bfa_ioc_get_adapter_fw_ver(&port->fcs->bfa->ioc, 2643 2643 hba_attr->fw_version); 2644 2644 2645 - strlcpy(hba_attr->driver_version, (char *)driver_info->version, 2645 + strscpy(hba_attr->driver_version, (char *)driver_info->version, 2646 2646 sizeof(hba_attr->driver_version)); 2647 2647 2648 - strlcpy(hba_attr->os_name, driver_info->host_os_name, 2648 + strscpy(hba_attr->os_name, driver_info->host_os_name, 2649 2649 sizeof(hba_attr->os_name)); 2650 2650 2651 2651 /* ··· 2663 2663 bfa_fcs_fdmi_get_portattr(fdmi, &fcs_port_attr); 2664 2664 hba_attr->max_ct_pyld = fcs_port_attr.max_frm_size; 2665 2665 2666 - strlcpy(hba_attr->node_sym_name.symname, 2666 + strscpy(hba_attr->node_sym_name.symname, 2667 2667 port->port_cfg.node_sym_name.symname, BFA_SYMNAME_MAXLEN); 2668 2668 strcpy(hba_attr->vendor_info, "QLogic"); 2669 2669 hba_attr->num_ports = 2670 2670 cpu_to_be32(bfa_ioc_get_nports(&port->fcs->bfa->ioc)); 2671 2671 hba_attr->fabric_name = port->fabric->lps->pr_nwwn; 2672 - strlcpy(hba_attr->bios_ver, hba_attr->option_rom_ver, BFA_VERSION_LEN); 2672 + strscpy(hba_attr->bios_ver, hba_attr->option_rom_ver, BFA_VERSION_LEN); 2673 2673 2674 2674 } 2675 2675 ··· 2736 2736 /* 2737 2737 * OS device Name 2738 2738 */ 2739 - strlcpy(port_attr->os_device_name, driver_info->os_device_name, 2739 + strscpy(port_attr->os_device_name, driver_info->os_device_name, 2740 2740 sizeof(port_attr->os_device_name)); 2741 2741 2742 2742 /* 2743 2743 * Host name 2744 2744 */ 2745 - strlcpy(port_attr->host_name, driver_info->host_machine_name, 2745 + strscpy(port_attr->host_name, driver_info->host_machine_name, 2746 2746 sizeof(port_attr->host_name)); 2747 2747 2748 2748 port_attr->node_name = bfa_fcs_lport_get_nwwn(port); 2749 2749 port_attr->port_name = bfa_fcs_lport_get_pwwn(port); 2750 2750 2751 - strlcpy(port_attr->port_sym_name.symname, 2751 + strscpy(port_attr->port_sym_name.symname, 2752 2752 bfa_fcs_lport_get_psym_name(port).symname, BFA_SYMNAME_MAXLEN); 2753 2753 
bfa_fcs_lport_get_attr(port, &lport_attr); 2754 2754 port_attr->port_type = cpu_to_be32(lport_attr.port_type); ··· 3229 3229 rsp_str[gmal_entry->len-1] = 0; 3230 3230 3231 3231 /* copy IP Address to fabric */ 3232 - strlcpy(bfa_fcs_lport_get_fabric_ipaddr(port), 3232 + strscpy(bfa_fcs_lport_get_fabric_ipaddr(port), 3233 3233 gmal_entry->ip_addr, 3234 3234 BFA_FCS_FABRIC_IPADDR_SZ); 3235 3235 break; ··· 4667 4667 * to that of the base port. 4668 4668 */ 4669 4669 4670 - strlcpy(symbl, 4670 + strscpy(symbl, 4671 4671 (char *)&(bfa_fcs_lport_get_psym_name 4672 4672 (bfa_fcs_get_base_port(port->fcs))), 4673 4673 sizeof(symbl)); ··· 5194 5194 * For Vports, we append the vport's port symbolic name 5195 5195 * to that of the base port. 5196 5196 */ 5197 - strlcpy(symbl, (char *)&(bfa_fcs_lport_get_psym_name 5197 + strscpy(symbl, (char *)&(bfa_fcs_lport_get_psym_name 5198 5198 (bfa_fcs_get_base_port(port->fcs))), 5199 5199 sizeof(symbl)); 5200 5200
+1 -1
drivers/scsi/bfa/bfa_ioc.c
··· 2788 2788 bfa_ioc_get_adapter_manufacturer(struct bfa_ioc_s *ioc, char *manufacturer) 2789 2789 { 2790 2790 memset((void *)manufacturer, 0, BFA_ADAPTER_MFG_NAME_LEN); 2791 - strlcpy(manufacturer, BFA_MFG_NAME, BFA_ADAPTER_MFG_NAME_LEN); 2791 + strscpy(manufacturer, BFA_MFG_NAME, BFA_ADAPTER_MFG_NAME_LEN); 2792 2792 } 2793 2793 2794 2794 void
+1 -1
drivers/scsi/bfa/bfa_svc.c
··· 330 330 lp.eid = event; 331 331 lp.log_type = BFA_PL_LOG_TYPE_STRING; 332 332 lp.misc = misc; 333 - strlcpy(lp.log_entry.string_log, log_str, 333 + strscpy(lp.log_entry.string_log, log_str, 334 334 BFA_PL_STRING_LOG_SZ); 335 335 lp.log_entry.string_log[BFA_PL_STRING_LOG_SZ - 1] = '\0'; 336 336 bfa_plog_add(plog, &lp);
+5 -5
drivers/scsi/bfa/bfad.c
··· 965 965 966 966 /* Fill the driver_info info to fcs*/ 967 967 memset(&driver_info, 0, sizeof(driver_info)); 968 - strlcpy(driver_info.version, BFAD_DRIVER_VERSION, 968 + strscpy(driver_info.version, BFAD_DRIVER_VERSION, 969 969 sizeof(driver_info.version)); 970 970 if (host_name) 971 - strlcpy(driver_info.host_machine_name, host_name, 971 + strscpy(driver_info.host_machine_name, host_name, 972 972 sizeof(driver_info.host_machine_name)); 973 973 if (os_name) 974 - strlcpy(driver_info.host_os_name, os_name, 974 + strscpy(driver_info.host_os_name, os_name, 975 975 sizeof(driver_info.host_os_name)); 976 976 if (os_patch) 977 - strlcpy(driver_info.host_os_patch, os_patch, 977 + strscpy(driver_info.host_os_patch, os_patch, 978 978 sizeof(driver_info.host_os_patch)); 979 979 980 - strlcpy(driver_info.os_device_name, bfad->pci_name, 980 + strscpy(driver_info.os_device_name, bfad->pci_name, 981 981 sizeof(driver_info.os_device_name)); 982 982 983 983 /* FCS driver info init */
+1 -1
drivers/scsi/bfa/bfad_attr.c
··· 834 834 char symname[BFA_SYMNAME_MAXLEN]; 835 835 836 836 bfa_fcs_lport_get_attr(&bfad->bfa_fcs.fabric.bport, &port_attr); 837 - strlcpy(symname, port_attr.port_cfg.sym_name.symname, 837 + strscpy(symname, port_attr.port_cfg.sym_name.symname, 838 838 BFA_SYMNAME_MAXLEN); 839 839 return sysfs_emit(buf, "%s\n", symname); 840 840 }
+2 -2
drivers/scsi/bfa/bfad_bsg.c
··· 119 119 120 120 /* fill in driver attr info */ 121 121 strcpy(iocmd->ioc_attr.driver_attr.driver, BFAD_DRIVER_NAME); 122 - strlcpy(iocmd->ioc_attr.driver_attr.driver_ver, 122 + strscpy(iocmd->ioc_attr.driver_attr.driver_ver, 123 123 BFAD_DRIVER_VERSION, BFA_VERSION_LEN); 124 124 strcpy(iocmd->ioc_attr.driver_attr.fw_ver, 125 125 iocmd->ioc_attr.adapter_attr.fw_ver); ··· 307 307 iocmd->attr.port_type = port_attr.port_type; 308 308 iocmd->attr.loopback = port_attr.loopback; 309 309 iocmd->attr.authfail = port_attr.authfail; 310 - strlcpy(iocmd->attr.port_symname.symname, 310 + strscpy(iocmd->attr.port_symname.symname, 311 311 port_attr.port_cfg.sym_name.symname, 312 312 sizeof(iocmd->attr.port_symname.symname)); 313 313
+1 -1
drivers/scsi/bfa/bfad_im.c
··· 1046 1046 /* For fibre channel services type 0x20 */ 1047 1047 fc_host_supported_fc4s(host)[7] = 1; 1048 1048 1049 - strlcpy(symname, bfad->bfa_fcs.fabric.bport.port_cfg.sym_name.symname, 1049 + strscpy(symname, bfad->bfa_fcs.fabric.bport.port_cfg.sym_name.symname, 1050 1050 BFA_SYMNAME_MAXLEN); 1051 1051 sprintf(fc_host_symbolic_name(host), "%s", symname); 1052 1052
+1 -1
drivers/scsi/fcoe/fcoe_transport.c
··· 711 711 char ifname[IFNAMSIZ + 2]; 712 712 713 713 if (buffer) { 714 - strlcpy(ifname, buffer, IFNAMSIZ); 714 + strscpy(ifname, buffer, IFNAMSIZ); 715 715 cp = ifname + strlen(ifname); 716 716 while (--cp >= ifname && *cp == '\n') 717 717 *cp = '\0';
+2 -6
drivers/scsi/fnic/fnic_debugfs.c
··· 201 201 return -ENOMEM; 202 202 203 203 if (*rdata_ptr == fc_trc_flag->fnic_trace) { 204 - fnic_dbg_prt->buffer = vmalloc(array3_size(3, trace_max_pages, 204 + fnic_dbg_prt->buffer = vzalloc(array3_size(3, trace_max_pages, 205 205 PAGE_SIZE)); 206 206 if (!fnic_dbg_prt->buffer) { 207 207 kfree(fnic_dbg_prt); 208 208 return -ENOMEM; 209 209 } 210 - memset((void *)fnic_dbg_prt->buffer, 0, 211 - 3 * (trace_max_pages * PAGE_SIZE)); 212 210 fnic_dbg_prt->buffer_len = fnic_get_trace_data(fnic_dbg_prt); 213 211 } else { 214 212 fnic_dbg_prt->buffer = 215 - vmalloc(array3_size(3, fnic_fc_trace_max_pages, 213 + vzalloc(array3_size(3, fnic_fc_trace_max_pages, 216 214 PAGE_SIZE)); 217 215 if (!fnic_dbg_prt->buffer) { 218 216 kfree(fnic_dbg_prt); 219 217 return -ENOMEM; 220 218 } 221 - memset((void *)fnic_dbg_prt->buffer, 0, 222 - 3 * (fnic_fc_trace_max_pages * PAGE_SIZE)); 223 219 fnic_dbg_prt->buffer_len = 224 220 fnic_fc_trace_get_data(fnic_dbg_prt, *rdata_ptr); 225 221 }
+1 -1
drivers/scsi/hisi_sas/hisi_sas.h
··· 642 642 extern int hisi_sas_get_fw_info(struct hisi_hba *hisi_hba); 643 643 extern int hisi_sas_probe(struct platform_device *pdev, 644 644 const struct hisi_sas_hw *ops); 645 - extern int hisi_sas_remove(struct platform_device *pdev); 645 + extern void hisi_sas_remove(struct platform_device *pdev); 646 646 647 647 extern int hisi_sas_slave_configure(struct scsi_device *sdev); 648 648 extern int hisi_sas_slave_alloc(struct scsi_device *sdev);
+1 -2
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 2560 2560 } 2561 2561 EXPORT_SYMBOL_GPL(hisi_sas_probe); 2562 2562 2563 - int hisi_sas_remove(struct platform_device *pdev) 2563 + void hisi_sas_remove(struct platform_device *pdev) 2564 2564 { 2565 2565 struct sas_ha_struct *sha = platform_get_drvdata(pdev); 2566 2566 struct hisi_hba *hisi_hba = sha->lldd_ha; ··· 2573 2573 2574 2574 hisi_sas_free(hisi_hba); 2575 2575 scsi_host_put(shost); 2576 - return 0; 2577 2576 } 2578 2577 EXPORT_SYMBOL_GPL(hisi_sas_remove); 2579 2578
+1 -6
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
··· 1790 1790 return hisi_sas_probe(pdev, &hisi_sas_v1_hw); 1791 1791 } 1792 1792 1793 - static int hisi_sas_v1_remove(struct platform_device *pdev) 1794 - { 1795 - return hisi_sas_remove(pdev); 1796 - } 1797 - 1798 1793 static const struct of_device_id sas_v1_of_match[] = { 1799 1794 { .compatible = "hisilicon,hip05-sas-v1",}, 1800 1795 {}, ··· 1805 1810 1806 1811 static struct platform_driver hisi_sas_v1_driver = { 1807 1812 .probe = hisi_sas_v1_probe, 1808 - .remove = hisi_sas_v1_remove, 1813 + .remove_new = hisi_sas_remove, 1809 1814 .driver = { 1810 1815 .name = DRV_NAME, 1811 1816 .of_match_table = sas_v1_of_match,
+1 -6
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
··· 3619 3619 return hisi_sas_probe(pdev, &hisi_sas_v2_hw); 3620 3620 } 3621 3621 3622 - static int hisi_sas_v2_remove(struct platform_device *pdev) 3623 - { 3624 - return hisi_sas_remove(pdev); 3625 - } 3626 - 3627 3622 static const struct of_device_id sas_v2_of_match[] = { 3628 3623 { .compatible = "hisilicon,hip06-sas-v2",}, 3629 3624 { .compatible = "hisilicon,hip07-sas-v2",}, ··· 3635 3640 3636 3641 static struct platform_driver hisi_sas_v2_driver = { 3637 3642 .probe = hisi_sas_v2_probe, 3638 - .remove = hisi_sas_v2_remove, 3643 + .remove_new = hisi_sas_remove, 3639 3644 .driver = { 3640 3645 .name = DRV_NAME, 3641 3646 .of_match_table = sas_v2_of_match,
+20 -8
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 30 30 #define SATA_INITI_D2H_STORE_ADDR_LO 0x60 31 31 #define SATA_INITI_D2H_STORE_ADDR_HI 0x64 32 32 #define CFG_MAX_TAG 0x68 33 + #define TRANS_LOCK_ICT_TIME 0X70 33 34 #define HGC_SAS_TX_OPEN_FAIL_RETRY_CTRL 0x84 34 35 #define HGC_SAS_TXFAIL_RETRY_CTRL 0x88 35 36 #define HGC_GET_ITV_TIME 0x90 ··· 628 627 629 628 static void init_reg_v3_hw(struct hisi_hba *hisi_hba) 630 629 { 630 + struct pci_dev *pdev = hisi_hba->pci_dev; 631 631 int i, j; 632 632 633 633 /* Global registers init */ 634 634 hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 635 635 (u32)((1ULL << hisi_hba->queue_count) - 1)); 636 - hisi_sas_write32(hisi_hba, SAS_AXI_USER3, 0); 637 636 hisi_sas_write32(hisi_hba, CFG_MAX_TAG, 0xfff0400); 637 + /* time / CLK_AHB = 2.5s / 2ns = 0x4A817C80 */ 638 + hisi_sas_write32(hisi_hba, TRANS_LOCK_ICT_TIME, 0x4A817C80); 638 639 hisi_sas_write32(hisi_hba, HGC_SAS_TXFAIL_RETRY_CTRL, 0x108); 639 640 hisi_sas_write32(hisi_hba, CFG_AGING_TIME, 0x1); 640 641 hisi_sas_write32(hisi_hba, INT_COAL_EN, 0x1); ··· 655 652 hisi_sas_write32(hisi_hba, ARQOS_ARCACHE_CFG, 0xf0f0); 656 653 hisi_sas_write32(hisi_hba, HYPER_STREAM_ID_EN_CFG, 1); 657 654 655 + if (pdev->revision < 0x30) 656 + hisi_sas_write32(hisi_hba, SAS_AXI_USER3, 0); 657 + 658 658 interrupt_enable_v3_hw(hisi_hba); 659 659 for (i = 0; i < hisi_hba->n_phy; i++) { 660 660 enum sas_linkrate max; ··· 675 669 prog_phy_link_rate |= hisi_sas_get_prog_phy_linkrate_mask(max); 676 670 hisi_sas_phy_write32(hisi_hba, i, PROG_PHY_LINK_RATE, 677 671 prog_phy_link_rate); 678 - hisi_sas_phy_write32(hisi_hba, i, SERDES_CFG, 0xffc00); 679 672 hisi_sas_phy_write32(hisi_hba, i, SAS_RX_TRAIN_TIMER, 0x13e80); 680 673 hisi_sas_phy_write32(hisi_hba, i, CHL_INT0, 0xffffffff); 681 674 hisi_sas_phy_write32(hisi_hba, i, CHL_INT1, 0xffffffff); ··· 685 680 hisi_sas_phy_write32(hisi_hba, i, PHYCTRL_OOB_RESTART_MSK, 0x1); 686 681 hisi_sas_phy_write32(hisi_hba, i, STP_LINK_TIMER, 0x7f7a120); 687 682 hisi_sas_phy_write32(hisi_hba, i, 
CON_CFG_DRIVER, 0x2a0a01); 688 - hisi_sas_phy_write32(hisi_hba, i, SAS_SSP_CON_TIMER_CFG, 0x32); 689 683 hisi_sas_phy_write32(hisi_hba, i, SAS_EC_INT_COAL_TIME, 690 684 0x30f4240); 691 - /* used for 12G negotiate */ 692 - hisi_sas_phy_write32(hisi_hba, i, COARSETUNE_TIME, 0x1e); 693 685 hisi_sas_phy_write32(hisi_hba, i, AIP_LIMIT, 0x2ffff); 686 + 687 + /* set value through firmware for 920B and later version */ 688 + if (pdev->revision < 0x30) { 689 + hisi_sas_phy_write32(hisi_hba, i, SAS_SSP_CON_TIMER_CFG, 0x32); 690 + hisi_sas_phy_write32(hisi_hba, i, SERDES_CFG, 0xffc00); 691 + /* used for 12G negotiate */ 692 + hisi_sas_phy_write32(hisi_hba, i, COARSETUNE_TIME, 0x1e); 693 + } 694 694 695 695 /* get default FFE configuration for BIST */ 696 696 for (j = 0; j < FFE_CFG_MAX; j++) { ··· 2216 2206 u32 trans_tx_fail_type = le32_to_cpu(record->trans_tx_fail_type); 2217 2207 u16 sipc_rx_err_type = le16_to_cpu(record->sipc_rx_err_type); 2218 2208 u32 dw3 = le32_to_cpu(complete_hdr->dw3); 2209 + u32 dw0 = le32_to_cpu(complete_hdr->dw0); 2219 2210 2220 2211 switch (task->task_proto) { 2221 2212 case SAS_PROTOCOL_SSP: ··· 2226 2215 * but I/O information has been written to the host memory, we examine 2227 2216 * response IU. 
2228 2217 */ 2229 - if (!(complete_hdr->dw0 & CMPLT_HDR_RSPNS_GOOD_MSK) && 2230 - (complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)) 2218 + if (!(dw0 & CMPLT_HDR_RSPNS_GOOD_MSK) && 2219 + (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)) 2231 2220 return false; 2232 2221 2233 2222 ts->residual = trans_tx_fail_type; ··· 2243 2232 case SAS_PROTOCOL_SATA: 2244 2233 case SAS_PROTOCOL_STP: 2245 2234 case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP: 2246 - if ((complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) && 2235 + if ((dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) && 2247 2236 (sipc_rx_err_type & RX_FIS_STATUS_ERR_MSK)) { 2248 2237 ts->stat = SAS_PROTO_RESPONSE; 2249 2238 } else if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) { ··· 3010 2999 HISI_SAS_DEBUGFS_REG(SATA_INITI_D2H_STORE_ADDR_LO), 3011 3000 HISI_SAS_DEBUGFS_REG(SATA_INITI_D2H_STORE_ADDR_HI), 3012 3001 HISI_SAS_DEBUGFS_REG(CFG_MAX_TAG), 3002 + HISI_SAS_DEBUGFS_REG(TRANS_LOCK_ICT_TIME), 3013 3003 HISI_SAS_DEBUGFS_REG(HGC_SAS_TX_OPEN_FAIL_RETRY_CTRL), 3014 3004 HISI_SAS_DEBUGFS_REG(HGC_SAS_TXFAIL_RETRY_CTRL), 3015 3005 HISI_SAS_DEBUGFS_REG(HGC_GET_ITV_TIME),
+1
drivers/scsi/hosts.c
··· 441 441 shost->cmd_per_lun = sht->cmd_per_lun; 442 442 shost->no_write_same = sht->no_write_same; 443 443 shost->host_tagset = sht->host_tagset; 444 + shost->queuecommand_may_block = sht->queuecommand_may_block; 444 445 445 446 if (shost_eh_deadline == -1 || !sht->eh_host_reset_handler) 446 447 shost->eh_deadline = -1;
+71 -53
drivers/scsi/libsas/sas_expander.c
··· 1198 1198 sas_route_char(child, child_phy)); 1199 1199 } 1200 1200 1201 + static bool sas_eeds_valid(struct domain_device *parent, 1202 + struct domain_device *child) 1203 + { 1204 + struct sas_discovery *disc = &parent->port->disc; 1205 + 1206 + return (SAS_ADDR(disc->eeds_a) == SAS_ADDR(parent->sas_addr) || 1207 + SAS_ADDR(disc->eeds_a) == SAS_ADDR(child->sas_addr)) && 1208 + (SAS_ADDR(disc->eeds_b) == SAS_ADDR(parent->sas_addr) || 1209 + SAS_ADDR(disc->eeds_b) == SAS_ADDR(child->sas_addr)); 1210 + } 1211 + 1201 1212 static int sas_check_eeds(struct domain_device *child, 1202 - struct ex_phy *parent_phy, 1203 - struct ex_phy *child_phy) 1213 + struct ex_phy *parent_phy, 1214 + struct ex_phy *child_phy) 1204 1215 { 1205 1216 int res = 0; 1206 1217 struct domain_device *parent = child->parent; 1218 + struct sas_discovery *disc = &parent->port->disc; 1207 1219 1208 - if (SAS_ADDR(parent->port->disc.fanout_sas_addr) != 0) { 1220 + if (SAS_ADDR(disc->fanout_sas_addr) != 0) { 1209 1221 res = -ENODEV; 1210 1222 pr_warn("edge ex %016llx phy S:%02d <--> edge ex %016llx phy S:%02d, while there is a fanout ex %016llx\n", 1211 1223 SAS_ADDR(parent->sas_addr), 1212 1224 parent_phy->phy_id, 1213 1225 SAS_ADDR(child->sas_addr), 1214 1226 child_phy->phy_id, 1215 - SAS_ADDR(parent->port->disc.fanout_sas_addr)); 1216 - } else if (SAS_ADDR(parent->port->disc.eeds_a) == 0) { 1217 - memcpy(parent->port->disc.eeds_a, parent->sas_addr, 1218 - SAS_ADDR_SIZE); 1219 - memcpy(parent->port->disc.eeds_b, child->sas_addr, 1220 - SAS_ADDR_SIZE); 1221 - } else if (((SAS_ADDR(parent->port->disc.eeds_a) == 1222 - SAS_ADDR(parent->sas_addr)) || 1223 - (SAS_ADDR(parent->port->disc.eeds_a) == 1224 - SAS_ADDR(child->sas_addr))) 1225 - && 1226 - ((SAS_ADDR(parent->port->disc.eeds_b) == 1227 - SAS_ADDR(parent->sas_addr)) || 1228 - (SAS_ADDR(parent->port->disc.eeds_b) == 1229 - SAS_ADDR(child->sas_addr)))) 1230 - ; 1231 - else { 1227 + SAS_ADDR(disc->fanout_sas_addr)); 1228 + } else if 
(SAS_ADDR(disc->eeds_a) == 0) { 1229 + memcpy(disc->eeds_a, parent->sas_addr, SAS_ADDR_SIZE); 1230 + memcpy(disc->eeds_b, child->sas_addr, SAS_ADDR_SIZE); 1231 + } else if (!sas_eeds_valid(parent, child)) { 1232 1232 res = -ENODEV; 1233 1233 pr_warn("edge ex %016llx phy%02d <--> edge ex %016llx phy%02d link forms a third EEDS!\n", 1234 1234 SAS_ADDR(parent->sas_addr), ··· 1240 1240 return res; 1241 1241 } 1242 1242 1243 - /* Here we spill over 80 columns. It is intentional. 1244 - */ 1245 - static int sas_check_parent_topology(struct domain_device *child) 1243 + static int sas_check_edge_expander_topo(struct domain_device *child, 1244 + struct ex_phy *parent_phy) 1246 1245 { 1247 1246 struct expander_device *child_ex = &child->ex_dev; 1247 + struct expander_device *parent_ex = &child->parent->ex_dev; 1248 + struct ex_phy *child_phy; 1249 + 1250 + child_phy = &child_ex->ex_phy[parent_phy->attached_phy_id]; 1251 + 1252 + if (child->dev_type == SAS_FANOUT_EXPANDER_DEVICE) { 1253 + if (parent_phy->routing_attr != SUBTRACTIVE_ROUTING || 1254 + child_phy->routing_attr != TABLE_ROUTING) 1255 + goto error; 1256 + } else if (parent_phy->routing_attr == SUBTRACTIVE_ROUTING) { 1257 + if (child_phy->routing_attr == SUBTRACTIVE_ROUTING) 1258 + return sas_check_eeds(child, parent_phy, child_phy); 1259 + else if (child_phy->routing_attr != TABLE_ROUTING) 1260 + goto error; 1261 + } else if (parent_phy->routing_attr == TABLE_ROUTING) { 1262 + if (child_phy->routing_attr != SUBTRACTIVE_ROUTING && 1263 + (child_phy->routing_attr != TABLE_ROUTING || 1264 + !child_ex->t2t_supp || !parent_ex->t2t_supp)) 1265 + goto error; 1266 + } 1267 + 1268 + return 0; 1269 + error: 1270 + sas_print_parent_topology_bug(child, parent_phy, child_phy); 1271 + return -ENODEV; 1272 + } 1273 + 1274 + static int sas_check_fanout_expander_topo(struct domain_device *child, 1275 + struct ex_phy *parent_phy) 1276 + { 1277 + struct expander_device *child_ex = &child->ex_dev; 1278 + struct ex_phy *child_phy; 1279 
+ 1280 + child_phy = &child_ex->ex_phy[parent_phy->attached_phy_id]; 1281 + 1282 + if (parent_phy->routing_attr == TABLE_ROUTING && 1283 + child_phy->routing_attr == SUBTRACTIVE_ROUTING) 1284 + return 0; 1285 + 1286 + sas_print_parent_topology_bug(child, parent_phy, child_phy); 1287 + 1288 + return -ENODEV; 1289 + } 1290 + 1291 + static int sas_check_parent_topology(struct domain_device *child) 1292 + { 1248 1293 struct expander_device *parent_ex; 1249 1294 int i; 1250 1295 int res = 0; ··· 1304 1259 1305 1260 for (i = 0; i < parent_ex->num_phys; i++) { 1306 1261 struct ex_phy *parent_phy = &parent_ex->ex_phy[i]; 1307 - struct ex_phy *child_phy; 1308 1262 1309 1263 if (parent_phy->phy_state == PHY_VACANT || 1310 1264 parent_phy->phy_state == PHY_NOT_PRESENT) ··· 1312 1268 if (!sas_phy_match_dev_addr(child, parent_phy)) 1313 1269 continue; 1314 1270 1315 - child_phy = &child_ex->ex_phy[parent_phy->attached_phy_id]; 1316 - 1317 1271 switch (child->parent->dev_type) { 1318 1272 case SAS_EDGE_EXPANDER_DEVICE: 1319 - if (child->dev_type == SAS_FANOUT_EXPANDER_DEVICE) { 1320 - if (parent_phy->routing_attr != SUBTRACTIVE_ROUTING || 1321 - child_phy->routing_attr != TABLE_ROUTING) { 1322 - sas_print_parent_topology_bug(child, parent_phy, child_phy); 1323 - res = -ENODEV; 1324 - } 1325 - } else if (parent_phy->routing_attr == SUBTRACTIVE_ROUTING) { 1326 - if (child_phy->routing_attr == SUBTRACTIVE_ROUTING) { 1327 - res = sas_check_eeds(child, parent_phy, child_phy); 1328 - } else if (child_phy->routing_attr != TABLE_ROUTING) { 1329 - sas_print_parent_topology_bug(child, parent_phy, child_phy); 1330 - res = -ENODEV; 1331 - } 1332 - } else if (parent_phy->routing_attr == TABLE_ROUTING) { 1333 - if (child_phy->routing_attr == SUBTRACTIVE_ROUTING || 1334 - (child_phy->routing_attr == TABLE_ROUTING && 1335 - child_ex->t2t_supp && parent_ex->t2t_supp)) { 1336 - /* All good */; 1337 - } else { 1338 - sas_print_parent_topology_bug(child, parent_phy, child_phy); 1339 - res = 
-ENODEV; 1340 - } 1341 - } 1273 + if (sas_check_edge_expander_topo(child, parent_phy)) 1274 + res = -ENODEV; 1342 1275 break; 1343 1276 case SAS_FANOUT_EXPANDER_DEVICE: 1344 - if (parent_phy->routing_attr != TABLE_ROUTING || 1345 - child_phy->routing_attr != SUBTRACTIVE_ROUTING) { 1346 - sas_print_parent_topology_bug(child, parent_phy, child_phy); 1277 + if (sas_check_fanout_expander_topo(child, parent_phy)) 1347 1278 res = -ENODEV; 1348 - } 1349 1279 break; 1350 1280 default: 1351 1281 break;
+19 -46
drivers/scsi/lpfc/lpfc.h
··· 429 429 /* Max number of days of congestion data */ 430 430 #define LPFC_MAX_CGN_DAYS 10 431 431 432 + struct lpfc_cgn_ts { 433 + uint8_t month; 434 + uint8_t day; 435 + uint8_t year; 436 + uint8_t hour; 437 + uint8_t minute; 438 + uint8_t second; 439 + }; 440 + 432 441 /* Format of congestion buffer info 433 442 * This structure defines memory thats allocated and registered with 434 443 * the HBA firmware. When adding or removing fields from this structure ··· 451 442 #define LPFC_CGN_INFO_V1 1 452 443 #define LPFC_CGN_INFO_V2 2 453 444 #define LPFC_CGN_INFO_V3 3 445 + #define LPFC_CGN_INFO_V4 4 454 446 uint8_t cgn_info_mode; /* 0=off 1=managed 2=monitor only */ 455 447 uint8_t cgn_info_detect; 456 448 uint8_t cgn_info_action; ··· 460 450 uint8_t cgn_info_level2; 461 451 462 452 /* Start Time */ 463 - uint8_t cgn_info_month; 464 - uint8_t cgn_info_day; 465 - uint8_t cgn_info_year; 466 - uint8_t cgn_info_hour; 467 - uint8_t cgn_info_minute; 468 - uint8_t cgn_info_second; 453 + struct lpfc_cgn_ts base_time; 469 454 470 455 /* minute / hours / daily indices */ 471 456 uint8_t cgn_index_minute; ··· 501 496 uint8_t cgn_stat_npm; /* Notifications per minute */ 502 497 503 498 /* Start Time */ 504 - uint8_t cgn_stat_month; 505 - uint8_t cgn_stat_day; 506 - uint8_t cgn_stat_year; 507 - uint8_t cgn_stat_hour; 508 - uint8_t cgn_stat_minute; 509 - uint8_t cgn_pad2[2]; 499 + struct lpfc_cgn_ts stat_start; /* Base time */ 500 + uint8_t cgn_pad2; 510 501 511 502 __le32 cgn_notification; 512 503 __le32 cgn_peer_notification; 513 504 __le32 link_integ_notification; 514 505 __le32 delivery_notification; 515 - 516 - uint8_t cgn_stat_cgn_month; /* Last congestion notification FPIN */ 517 - uint8_t cgn_stat_cgn_day; 518 - uint8_t cgn_stat_cgn_year; 519 - uint8_t cgn_stat_cgn_hour; 520 - uint8_t cgn_stat_cgn_min; 521 - uint8_t cgn_stat_cgn_sec; 522 - 523 - uint8_t cgn_stat_peer_month; /* Last peer congestion FPIN */ 524 - uint8_t cgn_stat_peer_day; 525 - uint8_t 
cgn_stat_peer_year; 526 - uint8_t cgn_stat_peer_hour; 527 - uint8_t cgn_stat_peer_min; 528 - uint8_t cgn_stat_peer_sec; 529 - 530 - uint8_t cgn_stat_lnk_month; /* Last link integrity FPIN */ 531 - uint8_t cgn_stat_lnk_day; 532 - uint8_t cgn_stat_lnk_year; 533 - uint8_t cgn_stat_lnk_hour; 534 - uint8_t cgn_stat_lnk_min; 535 - uint8_t cgn_stat_lnk_sec; 536 - 537 - uint8_t cgn_stat_del_month; /* Last delivery notification FPIN */ 538 - uint8_t cgn_stat_del_day; 539 - uint8_t cgn_stat_del_year; 540 - uint8_t cgn_stat_del_hour; 541 - uint8_t cgn_stat_del_min; 542 - uint8_t cgn_stat_del_sec; 506 + struct lpfc_cgn_ts stat_fpin; /* Last congestion notification FPIN */ 507 + struct lpfc_cgn_ts stat_peer; /* Last peer congestion FPIN */ 508 + struct lpfc_cgn_ts stat_lnk; /* Last link integrity FPIN */ 509 + struct lpfc_cgn_ts stat_delivery; /* Last delivery notification FPIN */ 543 510 ); 544 511 545 512 __le32 cgn_info_crc; ··· 909 932 void (*__lpfc_sli_release_iocbq)(struct lpfc_hba *, 910 933 struct lpfc_iocbq *); 911 934 int (*lpfc_hba_down_post)(struct lpfc_hba *phba); 912 - void (*lpfc_scsi_cmd_iocb_cmpl) 913 - (struct lpfc_hba *, struct lpfc_iocbq *, struct lpfc_iocbq *); 914 935 915 936 /* MBOX interface function jump table entries */ 916 937 int (*lpfc_sli_issue_mbox) ··· 1020 1045 * capability 1021 1046 */ 1022 1047 #define HBA_FLOGI_ISSUED 0x100000 /* FLOGI was issued */ 1023 - #define HBA_SHORT_CMF 0x200000 /* shorter CMF timer routine */ 1024 - #define HBA_CGN_DAY_WRAP 0x400000 /* HBA Congestion info day wraps */ 1025 1048 #define HBA_DEFER_FLOGI 0x800000 /* Defer FLOGI till read_sparm cmpl */ 1026 1049 #define HBA_SETUP 0x1000000 /* Signifies HBA setup is completed */ 1027 1050 #define HBA_NEEDS_CFG_PORT 0x2000000 /* SLI3 - needs a CONFIG_PORT mbox */ ··· 1502 1529 uint64_t cmf_last_sync_bw; 1503 1530 #define LPFC_CMF_BLK_SIZE 512 1504 1531 struct hrtimer cmf_timer; 1532 + struct hrtimer cmf_stats_timer; /* 1 minute stats timer */ 1505 1533 atomic_t 
cmf_bw_wait; 1506 1534 atomic_t cmf_busy; 1507 1535 atomic_t cmf_stop_io; /* To block request and stop IO's */ ··· 1550 1576 atomic_t cgn_sync_alarm_cnt; /* Total alarm events for SYNC wqe */ 1551 1577 atomic_t cgn_driver_evt_cnt; /* Total driver cgn events for fmw */ 1552 1578 atomic_t cgn_latency_evt_cnt; 1553 - struct timespec64 cgn_daily_ts; 1554 1579 atomic64_t cgn_latency_evt; /* Avg latency per minute */ 1555 1580 unsigned long cgn_evt_timestamp; 1556 1581 #define LPFC_CGN_TIMER_TO_MIN 60000 /* ms in a minute */ 1557 1582 uint32_t cgn_evt_minute; 1558 - #define LPFC_SEC_MIN 60 1583 + #define LPFC_SEC_MIN 60UL 1559 1584 #define LPFC_MIN_HOUR 60 1560 1585 #define LPFC_HOUR_DAY 24 1561 1586 #define LPFC_MIN_DAY (LPFC_MIN_HOUR * LPFC_HOUR_DAY)
+2 -2
drivers/scsi/lpfc/lpfc_attr.c
··· 5858 5858 module_param(lpfc_fabric_cgn_frequency, int, 0444); 5859 5859 MODULE_PARM_DESC(lpfc_fabric_cgn_frequency, "Congestion signaling fabric freq"); 5860 5860 5861 - int lpfc_acqe_cgn_frequency = 10; /* 10 sec default */ 5862 - module_param(lpfc_acqe_cgn_frequency, int, 0444); 5861 + unsigned char lpfc_acqe_cgn_frequency = 10; /* 10 sec default */ 5862 + module_param(lpfc_acqe_cgn_frequency, byte, 0444); 5863 5863 MODULE_PARM_DESC(lpfc_acqe_cgn_frequency, "Congestion signaling ACQE freq"); 5864 5864 5865 5865 int lpfc_use_cgn_signal = 1; /* 0 - only use FPINs, 1 - Use signals if avail */
+2 -2
drivers/scsi/lpfc/lpfc_crtn.h
··· 134 134 struct lpfc_nodelist *ndlp); 135 135 void lpfc_ignore_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, 136 136 struct lpfc_iocbq *rspiocb); 137 - int lpfc_nlp_not_used(struct lpfc_nodelist *ndlp); 138 137 struct lpfc_nodelist *lpfc_setup_disc_node(struct lpfc_vport *, uint32_t); 139 138 void lpfc_disc_list_loopmap(struct lpfc_vport *); 140 139 void lpfc_disc_start(struct lpfc_vport *); ··· 247 248 irqreturn_t lpfc_sli_fp_intr_handler(int, void *); 248 249 irqreturn_t lpfc_sli4_intr_handler(int, void *); 249 250 irqreturn_t lpfc_sli4_hba_intr_handler(int, void *); 251 + irqreturn_t lpfc_sli4_hba_intr_handler_th(int irq, void *dev_id); 250 252 251 253 int lpfc_read_object(struct lpfc_hba *phba, char *s, uint32_t *datap, 252 254 uint32_t len); ··· 664 664 extern unsigned long long lpfc_enable_nvmet[]; 665 665 extern int lpfc_no_hba_reset_cnt; 666 666 extern unsigned long lpfc_no_hba_reset[]; 667 - extern int lpfc_acqe_cgn_frequency; 667 + extern unsigned char lpfc_acqe_cgn_frequency; 668 668 extern int lpfc_fabric_cgn_frequency; 669 669 extern int lpfc_use_cgn_signal; 670 670
+46 -46
drivers/scsi/lpfc/lpfc_ct.c
··· 287 287 u32 ulp_status = get_job_ulpstatus(phba, ctiocbq); 288 288 u32 ulp_word4 = get_job_word4(phba, ctiocbq); 289 289 u32 did; 290 - u32 mi_cmd; 290 + u16 mi_cmd; 291 291 292 292 did = bf_get(els_rsp64_sid, &ctiocbq->wqe.xmit_els_rsp); 293 293 if (ulp_status) { ··· 311 311 312 312 ct_req = (struct lpfc_sli_ct_request *)ctiocbq->cmd_dmabuf->virt; 313 313 314 - mi_cmd = ct_req->CommandResponse.bits.CmdRsp; 314 + mi_cmd = be16_to_cpu(ct_req->CommandResponse.bits.CmdRsp); 315 315 lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 316 316 "6442 : MI Cmd : x%x Not Supported\n", mi_cmd); 317 317 lpfc_ct_reject_event(ndlp, ct_req, ··· 486 486 } 487 487 488 488 static struct lpfc_dmabuf * 489 - lpfc_alloc_ct_rsp(struct lpfc_hba *phba, int cmdcode, struct ulp_bde64 *bpl, 489 + lpfc_alloc_ct_rsp(struct lpfc_hba *phba, __be16 cmdcode, struct ulp_bde64 *bpl, 490 490 uint32_t size, int *entries) 491 491 { 492 492 struct lpfc_dmabuf *mlist = NULL; ··· 507 507 508 508 INIT_LIST_HEAD(&mp->list); 509 509 510 - if (cmdcode == be16_to_cpu(SLI_CTNS_GID_FT) || 511 - cmdcode == be16_to_cpu(SLI_CTNS_GFF_ID)) 510 + if (be16_to_cpu(cmdcode) == SLI_CTNS_GID_FT || 511 + be16_to_cpu(cmdcode) == SLI_CTNS_GFF_ID) 512 512 mp->virt = lpfc_mbuf_alloc(phba, MEM_PRI, &(mp->phys)); 513 513 else 514 514 mp->virt = lpfc_mbuf_alloc(phba, 0, &(mp->phys)); ··· 671 671 struct ulp_bde64 *bpl = (struct ulp_bde64 *) bmp->virt; 672 672 struct lpfc_dmabuf *outmp; 673 673 int cnt = 0, status; 674 - int cmdcode = ((struct lpfc_sli_ct_request *) inmp->virt)-> 674 + __be16 cmdcode = ((struct lpfc_sli_ct_request *)inmp->virt)-> 675 675 CommandResponse.bits.CmdRsp; 676 676 677 677 bpl++; /* Skip past ct request */ ··· 1043 1043 outp, 1044 1044 CTreq->un.gid.Fc4Type, 1045 1045 get_job_data_placed(phba, rspiocb)); 1046 - } else if (CTrsp->CommandResponse.bits.CmdRsp == 1047 - be16_to_cpu(SLI_CT_RESPONSE_FS_RJT)) { 1046 + } else if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1047 + SLI_CT_RESPONSE_FS_RJT) { 1048 
1048 /* NameServer Rsp Error */ 1049 1049 if ((CTrsp->ReasonCode == SLI_CT_UNABLE_TO_PERFORM_REQ) 1050 1050 && (CTrsp->Explanation == SLI_CT_NO_FC4_TYPES)) { ··· 1052 1052 LOG_DISCOVERY, 1053 1053 "0269 No NameServer Entries " 1054 1054 "Data: x%x x%x x%x x%x\n", 1055 - CTrsp->CommandResponse.bits.CmdRsp, 1055 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1056 1056 (uint32_t) CTrsp->ReasonCode, 1057 1057 (uint32_t) CTrsp->Explanation, 1058 1058 vport->fc_flag); 1059 1059 1060 1060 lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT, 1061 1061 "GID_FT no entry cmd:x%x rsn:x%x exp:x%x", 1062 - (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, 1062 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1063 1063 (uint32_t) CTrsp->ReasonCode, 1064 1064 (uint32_t) CTrsp->Explanation); 1065 1065 } else { ··· 1067 1067 LOG_DISCOVERY, 1068 1068 "0240 NameServer Rsp Error " 1069 1069 "Data: x%x x%x x%x x%x\n", 1070 - CTrsp->CommandResponse.bits.CmdRsp, 1070 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1071 1071 (uint32_t) CTrsp->ReasonCode, 1072 1072 (uint32_t) CTrsp->Explanation, 1073 1073 vport->fc_flag); 1074 1074 1075 1075 lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT, 1076 1076 "GID_FT rsp err1 cmd:x%x rsn:x%x exp:x%x", 1077 - (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, 1077 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1078 1078 (uint32_t) CTrsp->ReasonCode, 1079 1079 (uint32_t) CTrsp->Explanation); 1080 1080 } ··· 1085 1085 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1086 1086 "0241 NameServer Rsp Error " 1087 1087 "Data: x%x x%x x%x x%x\n", 1088 - CTrsp->CommandResponse.bits.CmdRsp, 1088 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1089 1089 (uint32_t) CTrsp->ReasonCode, 1090 1090 (uint32_t) CTrsp->Explanation, 1091 1091 vport->fc_flag); 1092 1092 1093 1093 lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT, 1094 1094 "GID_FT rsp err2 cmd:x%x rsn:x%x exp:x%x", 1095 - (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, 1095 + 
be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1096 1096 (uint32_t) CTrsp->ReasonCode, 1097 1097 (uint32_t) CTrsp->Explanation); 1098 1098 } ··· 1247 1247 /* Good status, continue checking */ 1248 1248 CTreq = (struct lpfc_sli_ct_request *)inp->virt; 1249 1249 CTrsp = (struct lpfc_sli_ct_request *)outp->virt; 1250 - if (CTrsp->CommandResponse.bits.CmdRsp == 1251 - cpu_to_be16(SLI_CT_RESPONSE_FS_ACC)) { 1250 + if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1251 + SLI_CT_RESPONSE_FS_ACC) { 1252 1252 lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, 1253 1253 "4105 NameServer Rsp Data: x%x x%x " 1254 1254 "x%x x%x sz x%x\n", ··· 1262 1262 outp, 1263 1263 CTreq->un.gid.Fc4Type, 1264 1264 get_job_data_placed(phba, rspiocb)); 1265 - } else if (CTrsp->CommandResponse.bits.CmdRsp == 1266 - be16_to_cpu(SLI_CT_RESPONSE_FS_RJT)) { 1265 + } else if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1266 + SLI_CT_RESPONSE_FS_RJT) { 1267 1267 /* NameServer Rsp Error */ 1268 1268 if ((CTrsp->ReasonCode == SLI_CT_UNABLE_TO_PERFORM_REQ) 1269 1269 && (CTrsp->Explanation == SLI_CT_NO_FC4_TYPES)) { ··· 1271 1271 vport, KERN_INFO, LOG_DISCOVERY, 1272 1272 "4106 No NameServer Entries " 1273 1273 "Data: x%x x%x x%x x%x\n", 1274 - CTrsp->CommandResponse.bits.CmdRsp, 1274 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1275 1275 (uint32_t)CTrsp->ReasonCode, 1276 1276 (uint32_t)CTrsp->Explanation, 1277 1277 vport->fc_flag); ··· 1279 1279 lpfc_debugfs_disc_trc( 1280 1280 vport, LPFC_DISC_TRC_CT, 1281 1281 "GID_PT no entry cmd:x%x rsn:x%x exp:x%x", 1282 - (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, 1282 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1283 1283 (uint32_t)CTrsp->ReasonCode, 1284 1284 (uint32_t)CTrsp->Explanation); 1285 1285 } else { ··· 1287 1287 vport, KERN_INFO, LOG_DISCOVERY, 1288 1288 "4107 NameServer Rsp Error " 1289 1289 "Data: x%x x%x x%x x%x\n", 1290 - CTrsp->CommandResponse.bits.CmdRsp, 1290 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1291 
1291 (uint32_t)CTrsp->ReasonCode, 1292 1292 (uint32_t)CTrsp->Explanation, 1293 1293 vport->fc_flag); ··· 1295 1295 lpfc_debugfs_disc_trc( 1296 1296 vport, LPFC_DISC_TRC_CT, 1297 1297 "GID_PT rsp err1 cmd:x%x rsn:x%x exp:x%x", 1298 - (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, 1298 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1299 1299 (uint32_t)CTrsp->ReasonCode, 1300 1300 (uint32_t)CTrsp->Explanation); 1301 1301 } ··· 1304 1304 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1305 1305 "4109 NameServer Rsp Error " 1306 1306 "Data: x%x x%x x%x x%x\n", 1307 - CTrsp->CommandResponse.bits.CmdRsp, 1307 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1308 1308 (uint32_t)CTrsp->ReasonCode, 1309 1309 (uint32_t)CTrsp->Explanation, 1310 1310 vport->fc_flag); ··· 1312 1312 lpfc_debugfs_disc_trc( 1313 1313 vport, LPFC_DISC_TRC_CT, 1314 1314 "GID_PT rsp err2 cmd:x%x rsn:x%x exp:x%x", 1315 - (uint32_t)CTrsp->CommandResponse.bits.CmdRsp, 1315 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1316 1316 (uint32_t)CTrsp->ReasonCode, 1317 1317 (uint32_t)CTrsp->Explanation); 1318 1318 } ··· 1391 1391 (fbits & FC4_FEATURE_INIT) ? "Initiator" : " ", 1392 1392 (fbits & FC4_FEATURE_TARGET) ? 
"Target" : " "); 1393 1393 1394 - if (CTrsp->CommandResponse.bits.CmdRsp == 1395 - be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) { 1394 + if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1395 + SLI_CT_RESPONSE_FS_ACC) { 1396 1396 if ((fbits & FC4_FEATURE_INIT) && 1397 1397 !(fbits & FC4_FEATURE_TARGET)) { 1398 1398 lpfc_printf_vlog(vport, KERN_INFO, ··· 1631 1631 "0209 CT Request completes, latt %d, " 1632 1632 "ulp_status x%x CmdRsp x%x, Context x%x, Tag x%x\n", 1633 1633 latt, ulp_status, 1634 - CTrsp->CommandResponse.bits.CmdRsp, 1634 + be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp), 1635 1635 get_job_ulpcontext(phba, cmdiocb), cmdiocb->iotag); 1636 1636 1637 1637 lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_CT, ··· 1681 1681 1682 1682 outp = cmdiocb->rsp_dmabuf; 1683 1683 CTrsp = (struct lpfc_sli_ct_request *)outp->virt; 1684 - if (CTrsp->CommandResponse.bits.CmdRsp == 1685 - be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) 1684 + if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1685 + SLI_CT_RESPONSE_FS_ACC) 1686 1686 vport->ct_flags |= FC_CT_RFT_ID; 1687 1687 } 1688 1688 lpfc_cmpl_ct(phba, cmdiocb, rspiocb); ··· 1702 1702 1703 1703 outp = cmdiocb->rsp_dmabuf; 1704 1704 CTrsp = (struct lpfc_sli_ct_request *) outp->virt; 1705 - if (CTrsp->CommandResponse.bits.CmdRsp == 1706 - be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) 1705 + if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1706 + SLI_CT_RESPONSE_FS_ACC) 1707 1707 vport->ct_flags |= FC_CT_RNN_ID; 1708 1708 } 1709 1709 lpfc_cmpl_ct(phba, cmdiocb, rspiocb); ··· 1723 1723 1724 1724 outp = cmdiocb->rsp_dmabuf; 1725 1725 CTrsp = (struct lpfc_sli_ct_request *)outp->virt; 1726 - if (CTrsp->CommandResponse.bits.CmdRsp == 1727 - be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) 1726 + if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1727 + SLI_CT_RESPONSE_FS_ACC) 1728 1728 vport->ct_flags |= FC_CT_RSPN_ID; 1729 1729 } 1730 1730 lpfc_cmpl_ct(phba, cmdiocb, rspiocb); ··· 1744 1744 1745 1745 outp = cmdiocb->rsp_dmabuf; 1746 1746 CTrsp = 
(struct lpfc_sli_ct_request *) outp->virt; 1747 - if (CTrsp->CommandResponse.bits.CmdRsp == 1748 - be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) 1747 + if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1748 + SLI_CT_RESPONSE_FS_ACC) 1749 1749 vport->ct_flags |= FC_CT_RSNN_NN; 1750 1750 } 1751 1751 lpfc_cmpl_ct(phba, cmdiocb, rspiocb); ··· 1777 1777 1778 1778 outp = cmdiocb->rsp_dmabuf; 1779 1779 CTrsp = (struct lpfc_sli_ct_request *)outp->virt; 1780 - if (CTrsp->CommandResponse.bits.CmdRsp == 1781 - be16_to_cpu(SLI_CT_RESPONSE_FS_ACC)) 1780 + if (be16_to_cpu(CTrsp->CommandResponse.bits.CmdRsp) == 1781 + SLI_CT_RESPONSE_FS_ACC) 1782 1782 vport->ct_flags |= FC_CT_RFF_ID; 1783 1783 } 1784 1784 lpfc_cmpl_ct(phba, cmdiocb, rspiocb); ··· 2217 2217 struct lpfc_dmabuf *outp = cmdiocb->rsp_dmabuf; 2218 2218 struct lpfc_sli_ct_request *CTcmd = inp->virt; 2219 2219 struct lpfc_sli_ct_request *CTrsp = outp->virt; 2220 - uint16_t fdmi_cmd = CTcmd->CommandResponse.bits.CmdRsp; 2221 - uint16_t fdmi_rsp = CTrsp->CommandResponse.bits.CmdRsp; 2220 + __be16 fdmi_cmd = CTcmd->CommandResponse.bits.CmdRsp; 2221 + __be16 fdmi_rsp = CTrsp->CommandResponse.bits.CmdRsp; 2222 2222 struct lpfc_nodelist *ndlp, *free_ndlp = NULL; 2223 2223 uint32_t latt, cmd, err; 2224 2224 u32 ulp_status = get_job_ulpstatus(phba, rspiocb); ··· 2278 2278 2279 2279 /* Check for a CT LS_RJT response */ 2280 2280 cmd = be16_to_cpu(fdmi_cmd); 2281 - if (fdmi_rsp == cpu_to_be16(SLI_CT_RESPONSE_FS_RJT)) { 2281 + if (be16_to_cpu(fdmi_rsp) == SLI_CT_RESPONSE_FS_RJT) { 2282 2282 /* FDMI rsp failed */ 2283 2283 lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY | LOG_ELS, 2284 2284 "0220 FDMI cmd failed FS_RJT Data: x%x", cmd); ··· 3110 3110 } 3111 3111 3112 3112 /* RHBA attribute jump table */ 3113 - int (*lpfc_fdmi_hba_action[]) 3113 + static int (*lpfc_fdmi_hba_action[]) 3114 3114 (struct lpfc_vport *vport, void *attrbuf) = { 3115 3115 /* Action routine Mask bit Attribute type */ 3116 3116 lpfc_fdmi_hba_attr_wwnn, /* bit0 
RHBA_NODENAME */ ··· 3134 3134 }; 3135 3135 3136 3136 /* RPA / RPRT attribute jump table */ 3137 - int (*lpfc_fdmi_port_action[]) 3137 + static int (*lpfc_fdmi_port_action[]) 3138 3138 (struct lpfc_vport *vport, void *attrbuf) = { 3139 3139 /* Action routine Mask bit Attribute type */ 3140 3140 lpfc_fdmi_port_attr_fc4type, /* bit0 RPRT_SUPPORT_FC4_TYPES */ ··· 3570 3570 struct lpfc_dmabuf *outp = cmdiocb->rsp_dmabuf; 3571 3571 struct lpfc_sli_ct_request *ctcmd = inp->virt; 3572 3572 struct lpfc_sli_ct_request *ctrsp = outp->virt; 3573 - u16 rsp = ctrsp->CommandResponse.bits.CmdRsp; 3573 + __be16 rsp = ctrsp->CommandResponse.bits.CmdRsp; 3574 3574 struct app_id_object *app; 3575 3575 struct lpfc_nodelist *ndlp = cmdiocb->ndlp; 3576 3576 u32 cmd, hash, bucket; ··· 3587 3587 goto free_res; 3588 3588 } 3589 3589 /* Check for a CT LS_RJT response */ 3590 - if (rsp == be16_to_cpu(SLI_CT_RESPONSE_FS_RJT)) { 3590 + if (be16_to_cpu(rsp) == SLI_CT_RESPONSE_FS_RJT) { 3591 3591 if (cmd != SLI_CTAS_DALLAPP_ID) 3592 3592 lpfc_printf_vlog(vport, KERN_DEBUG, LOG_DISCOVERY, 3593 3593 "3306 VMID FS_RJT Data: x%x x%x x%x\n", ··· 3748 3748 rap->obj[0].entity_id_len = vmid->vmid_len; 3749 3749 memcpy(rap->obj[0].entity_id, vmid->host_vmid, vmid->vmid_len); 3750 3750 size = RAPP_IDENT_OFFSET + 3751 - sizeof(struct lpfc_vmid_rapp_ident_list); 3751 + struct_size(rap, obj, be32_to_cpu(rap->no_of_objects)); 3752 3752 retry = 1; 3753 3753 break; 3754 3754 ··· 3767 3767 dap->obj[0].entity_id_len = vmid->vmid_len; 3768 3768 memcpy(dap->obj[0].entity_id, vmid->host_vmid, vmid->vmid_len); 3769 3769 size = DAPP_IDENT_OFFSET + 3770 - sizeof(struct lpfc_vmid_dapp_ident_list); 3770 + struct_size(dap, obj, be32_to_cpu(dap->no_of_objects)); 3771 3771 write_lock(&vport->vmid_lock); 3772 3772 vmid->flag &= ~LPFC_VMID_REGISTERED; 3773 3773 write_unlock(&vport->vmid_lock);
+6 -2
drivers/scsi/lpfc/lpfc_debugfs.c
··· 2259 2259 goto out; 2260 2260 } 2261 2261 spin_unlock_irq(&phba->hbalock); 2262 - debug = kmalloc(sizeof(*debug), GFP_KERNEL); 2262 + 2263 + if (check_mul_overflow(LPFC_RAS_MIN_BUFF_POST_SIZE, 2264 + phba->cfg_ras_fwlog_buffsize, &size)) 2265 + goto out; 2266 + 2267 + debug = kzalloc(sizeof(*debug), GFP_KERNEL); 2263 2268 if (!debug) 2264 2269 goto out; 2265 2270 2266 - size = LPFC_RAS_MIN_BUFF_POST_SIZE * phba->cfg_ras_fwlog_buffsize; 2267 2271 debug->buffer = vmalloc(size); 2268 2272 if (!debug->buffer) 2269 2273 goto free_debug;
+19 -25
drivers/scsi/lpfc/lpfc_els.c
··· 5205 5205 * 5206 5206 * This routine is the completion callback function to the Logout (LOGO) 5207 5207 * Accept (ACC) Response ELS command. This routine is invoked to indicate 5208 - * the completion of the LOGO process. It invokes the lpfc_nlp_not_used() to 5209 - * release the ndlp if it has the last reference remaining (reference count 5210 - * is 1). If succeeded (meaning ndlp released), it sets the iocb ndlp 5211 - * field to NULL to inform the following lpfc_els_free_iocb() routine no 5212 - * ndlp reference count needs to be decremented. Otherwise, the ndlp 5213 - * reference use-count shall be decremented by the lpfc_els_free_iocb() 5214 - * routine. Finally, the lpfc_els_free_iocb() is invoked to release the 5215 - * IOCB data structure. 5208 + * the completion of the LOGO process. If the node has transitioned to NPR, 5209 + * this routine unregisters the RPI if it is still registered. The 5210 + * lpfc_els_free_iocb() is invoked to release the IOCB data structure. 5216 5211 **/ 5217 5212 static void 5218 5213 lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, ··· 5248 5253 (ndlp->nlp_last_elscmd == ELS_CMD_PLOGI)) 5249 5254 goto out; 5250 5255 5251 - /* NPort Recovery mode or node is just allocated */ 5252 - if (!lpfc_nlp_not_used(ndlp)) { 5253 - /* A LOGO is completing and the node is in NPR state. 5254 - * Just unregister the RPI because the node is still 5255 - * required. 5256 - */ 5256 + if (ndlp->nlp_flag & NLP_RPI_REGISTERED) 5257 5257 lpfc_unreg_rpi(vport, ndlp); 5258 - } else { 5259 - /* Indicate the node has already released, should 5260 - * not reference to it from within lpfc_els_free_iocb. 5261 - */ 5262 - cmdiocb->ndlp = NULL; 5263 - } 5258 + 5264 5259 } 5265 5260 out: 5266 5261 /* ··· 5270 5285 * RPI (Remote Port Index) mailbox command to the @phba. 
It simply releases 5271 5286 * the associated lpfc Direct Memory Access (DMA) buffer back to the pool and 5272 5287 * decrements the ndlp reference count held for this completion callback 5273 - * function. After that, it invokes the lpfc_nlp_not_used() to check 5274 - * whether there is only one reference left on the ndlp. If so, it will 5275 - * perform one more decrement and trigger the release of the ndlp. 5288 + * function. After that, it invokes the lpfc_drop_node to check 5289 + * whether it is appropriate to release the node. 5276 5290 **/ 5277 5291 void 5278 5292 lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) ··· 5452 5468 ndlp->nlp_flag &= ~NLP_RELEASE_RPI; 5453 5469 spin_unlock_irq(&ndlp->lock); 5454 5470 } 5471 + lpfc_drop_node(vport, ndlp); 5472 + } else if (ndlp->nlp_state != NLP_STE_PLOGI_ISSUE && 5473 + ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE && 5474 + ndlp->nlp_state != NLP_STE_PRLI_ISSUE) { 5475 + /* Drop ndlp if there is no planned or outstanding 5476 + * issued PRLI. 5477 + * 5478 + * In cases when the ndlp is acting as both an initiator 5479 + * and target function, let our issued PRLI determine 5480 + * the final ndlp kref drop. 5481 + */ 5482 + lpfc_drop_node(vport, ndlp); 5455 5483 } 5456 - 5457 - lpfc_drop_node(vport, ndlp); 5458 5484 } 5459 5485 5460 5486 /* Release the originating I/O reference. */
+21 -38
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 458 458 if (ndlp->nlp_type & NLP_FABRIC) { 459 459 spin_lock_irqsave(&ndlp->lock, iflags); 460 460 461 - /* In massive vport configuration settings or when the FLOGI 462 - * completes with a sequence timeout, it's possible 463 - * dev_loss_tmo fired during node recovery. The driver has to 464 - * account for this race to allow for recovery and keep 465 - * the reference counting correct. 461 + /* The driver has to account for a race between any fabric 462 + * node that's in recovery when dev_loss_tmo expires. When this 463 + * happens, the driver has to allow node recovery. 466 464 */ 467 465 switch (ndlp->nlp_DID) { 468 466 case Fabric_DID: ··· 486 488 if (ndlp->nlp_state >= NLP_STE_PLOGI_ISSUE && 487 489 ndlp->nlp_state <= NLP_STE_REG_LOGIN_ISSUE) 488 490 recovering = true; 491 + break; 492 + default: 493 + /* Ensure the nlp_DID at least has the correct prefix. 494 + * The fabric domain controller's last three nibbles 495 + * vary so we handle it in the default case. 496 + */ 497 + if (ndlp->nlp_DID & Fabric_DID_MASK) { 498 + if (ndlp->nlp_state >= NLP_STE_PLOGI_ISSUE && 499 + ndlp->nlp_state <= NLP_STE_REG_LOGIN_ISSUE) 500 + recovering = true; 501 + } 489 502 break; 490 503 } 491 504 spin_unlock_irqrestore(&ndlp->lock, iflags); ··· 565 556 ndlp->nlp_DID, ndlp->nlp_flag, 566 557 ndlp->nlp_state, ndlp->nlp_rpi); 567 558 } 559 + spin_lock_irqsave(&ndlp->lock, iflags); 560 + ndlp->nlp_flag &= ~NLP_IN_DEV_LOSS; 561 + spin_unlock_irqrestore(&ndlp->lock, iflags); 568 562 569 563 /* If we are devloss, but we are in the process of rediscovering the 570 564 * ndlp, don't issue a NLP_EVT_DEVICE_RM event. 
··· 577 565 return fcf_inuse; 578 566 } 579 567 580 - spin_lock_irqsave(&ndlp->lock, iflags); 581 - ndlp->nlp_flag &= ~NLP_IN_DEV_LOSS; 582 - spin_unlock_irqrestore(&ndlp->lock, iflags); 583 568 if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD)) 584 569 lpfc_disc_state_machine(vport, ndlp, NULL, NLP_EVT_DEVICE_RM); 585 570 ··· 4342 4333 4343 4334 /* If the node is not registered with the scsi or nvme 4344 4335 * transport, remove the fabric node. The failed reg_login 4345 - * is terminal. 4336 + * is terminal and forces the removal of the last node 4337 + * reference. 4346 4338 */ 4347 4339 if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) { 4348 4340 spin_lock_irq(&ndlp->lock); 4349 4341 ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; 4350 4342 spin_unlock_irq(&ndlp->lock); 4351 - lpfc_nlp_not_used(ndlp); 4343 + lpfc_nlp_put(ndlp); 4352 4344 } 4353 4345 4354 4346 if (phba->fc_topology == LPFC_TOPOLOGY_LOOP) { ··· 4506 4496 /* Don't add the remote port if unloading. */ 4507 4497 if (vport->load_flag & FC_UNLOADING) 4508 4498 return; 4509 - 4510 - /* 4511 - * Disassociate any older association between this ndlp and rport 4512 - */ 4513 - if (ndlp->rport) { 4514 - rdata = ndlp->rport->dd_data; 4515 - rdata->pnode = NULL; 4516 - } 4517 4499 4518 4500 ndlp->rport = rport = fc_remote_port_add(shost, 0, &rport_ids); 4519 4501 if (!rport) { ··· 4837 4835 }; 4838 4836 4839 4837 if (state < NLP_STE_MAX_STATE && states[state]) 4840 - strlcpy(buffer, states[state], size); 4838 + strscpy(buffer, states[state], size); 4841 4839 else 4842 4840 snprintf(buffer, size, "unknown (%d)", state); 4843 4841 return buffer; ··· 6704 6702 } 6705 6703 6706 6704 return ndlp ? kref_put(&ndlp->kref, lpfc_nlp_release) : 0; 6707 - } 6708 - 6709 - /* This routine free's the specified nodelist if it is not in use 6710 - * by any other discovery thread. This routine returns 1 if the 6711 - * ndlp has been freed. A return value of 0 indicates the ndlp is 6712 - * not yet been released. 
6713 - */ 6714 - int 6715 - lpfc_nlp_not_used(struct lpfc_nodelist *ndlp) 6716 - { 6717 - lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE, 6718 - "node not used: did:x%x flg:x%x refcnt:x%x", 6719 - ndlp->nlp_DID, ndlp->nlp_flag, 6720 - kref_read(&ndlp->kref)); 6721 - 6722 - if (kref_read(&ndlp->kref) == 1) 6723 - if (lpfc_nlp_put(ndlp)) 6724 - return 1; 6725 - return 0; 6726 6705 } 6727 6706 6728 6707 /**
+10 -10
drivers/scsi/lpfc/lpfc_hw.h
··· 86 86 union CtCommandResponse { 87 87 /* Structure is in Big Endian format */ 88 88 struct { 89 - uint32_t CmdRsp:16; 90 - uint32_t Size:16; 89 + __be16 CmdRsp; 90 + __be16 Size; 91 91 } bits; 92 92 uint32_t word; 93 93 }; ··· 124 124 #define LPFC_CT_PREAMBLE 20 /* Size of CTReq + 4 up to here */ 125 125 126 126 union { 127 - uint32_t PortID; 127 + __be32 PortID; 128 128 struct gid { 129 129 uint8_t PortType; /* for GID_PT requests */ 130 130 #define GID_PT_N_PORT 1 ··· 1408 1408 }; 1409 1409 1410 1410 struct app_id_object { 1411 - uint32_t port_id; 1412 - uint32_t app_id; 1411 + __be32 port_id; 1412 + __be32 app_id; 1413 1413 struct entity_id_object obj; 1414 1414 }; 1415 1415 1416 1416 struct lpfc_vmid_rapp_ident_list { 1417 - uint32_t no_of_objects; 1418 - struct entity_id_object obj[1]; 1417 + __be32 no_of_objects; 1418 + struct entity_id_object obj[]; 1419 1419 }; 1420 1420 1421 1421 struct lpfc_vmid_dapp_ident_list { 1422 - uint32_t no_of_objects; 1423 - struct entity_id_object obj[1]; 1422 + __be32 no_of_objects; 1423 + struct entity_id_object obj[]; 1424 1424 }; 1425 1425 1426 1426 #define GALLAPPIA_ID_LAST 0x80 ··· 1512 1512 * Registered Port List Format 1513 1513 */ 1514 1514 struct lpfc_fdmi_reg_port_list { 1515 - uint32_t EntryCnt; 1515 + __be32 EntryCnt; 1516 1516 struct lpfc_fdmi_port_entry pe; 1517 1517 } __packed; 1518 1518
+6 -8
drivers/scsi/lpfc/lpfc_hw4.h
··· 395 395 #define CQE_STATUS_NEED_BUFF_ENTRY 0xf 396 396 #define CQE_STATUS_DI_ERROR 0x16 397 397 398 - /* Used when mapping CQE status to IOCB */ 399 - #define LPFC_IOCB_STATUS_MASK 0xf 400 - 401 398 /* Status returned by hardware (valid only if status = CQE_STATUS_SUCCESS). */ 402 399 #define CQE_HW_STATUS_NO_ERR 0x0 403 400 #define CQE_HW_STATUS_UNDERRUN 0x1 ··· 533 536 /* completion queue entry structure for rqe completion */ 534 537 struct lpfc_rcqe { 535 538 uint32_t word0; 536 - #define lpfc_rcqe_bindex_SHIFT 16 537 - #define lpfc_rcqe_bindex_MASK 0x0000FFF 538 - #define lpfc_rcqe_bindex_WORD word0 539 + #define lpfc_rcqe_iv_SHIFT 31 540 + #define lpfc_rcqe_iv_MASK 0x00000001 541 + #define lpfc_rcqe_iv_WORD word0 539 542 #define lpfc_rcqe_status_SHIFT 8 540 543 #define lpfc_rcqe_status_MASK 0x000000FF 541 544 #define lpfc_rcqe_status_WORD word0 ··· 543 546 #define FC_STATUS_RQ_BUF_LEN_EXCEEDED 0x11 /* payload truncated */ 544 547 #define FC_STATUS_INSUFF_BUF_NEED_BUF 0x12 /* Insufficient buffers */ 545 548 #define FC_STATUS_INSUFF_BUF_FRM_DISC 0x13 /* Frame Discard */ 549 + #define FC_STATUS_RQ_DMA_FAILURE 0x14 /* DMA failure */ 546 550 uint32_t word1; 547 551 #define lpfc_rcqe_fcf_id_v1_SHIFT 0 548 552 #define lpfc_rcqe_fcf_id_v1_MASK 0x0000003F ··· 4811 4813 #define cmf_sync_cqid_WORD word11 4812 4814 uint32_t read_bytes; 4813 4815 uint32_t word13; 4814 - #define cmf_sync_period_SHIFT 16 4815 - #define cmf_sync_period_MASK 0x0000ffff 4816 + #define cmf_sync_period_SHIFT 24 4817 + #define cmf_sync_period_MASK 0x000000ff 4816 4818 #define cmf_sync_period_WORD word13 4817 4819 uint32_t word14; 4818 4820 uint32_t word15;
+89 -187
drivers/scsi/lpfc/lpfc_init.c
··· 101 101 static DEFINE_IDR(lpfc_hba_index); 102 102 #define LPFC_NVMET_BUF_POST 254 103 103 static int lpfc_vmid_res_alloc(struct lpfc_hba *phba, struct lpfc_vport *vport); 104 + static void lpfc_cgn_update_tstamp(struct lpfc_hba *phba, struct lpfc_cgn_ts *ts); 104 105 105 106 /** 106 107 * lpfc_config_port_prep - Perform lpfc initialization prior to config port ··· 1280 1279 /* 1281 1280 * lpfc_idle_stat_delay_work - idle_stat tracking 1282 1281 * 1283 - * This routine tracks per-cq idle_stat and determines polling decisions. 1282 + * This routine tracks per-eq idle_stat and determines polling decisions. 1284 1283 * 1285 1284 * Return codes: 1286 1285 * None ··· 1291 1290 struct lpfc_hba *phba = container_of(to_delayed_work(work), 1292 1291 struct lpfc_hba, 1293 1292 idle_stat_delay_work); 1294 - struct lpfc_queue *cq; 1293 + struct lpfc_queue *eq; 1295 1294 struct lpfc_sli4_hdw_queue *hdwq; 1296 1295 struct lpfc_idle_stat *idle_stat; 1297 1296 u32 i, idle_percent; ··· 1307 1306 1308 1307 for_each_present_cpu(i) { 1309 1308 hdwq = &phba->sli4_hba.hdwq[phba->sli4_hba.cpu_map[i].hdwq]; 1310 - cq = hdwq->io_cq; 1309 + eq = hdwq->hba_eq; 1311 1310 1312 - /* Skip if we've already handled this cq's primary CPU */ 1313 - if (cq->chann != i) 1311 + /* Skip if we've already handled this eq's primary CPU */ 1312 + if (eq->chann != i) 1314 1313 continue; 1315 1314 1316 1315 idle_stat = &phba->sli4_hba.idle_stat[i]; ··· 1334 1333 idle_percent = 100 - idle_percent; 1335 1334 1336 1335 if (idle_percent < 15) 1337 - cq->poll_mode = LPFC_QUEUE_WORK; 1336 + eq->poll_mode = LPFC_QUEUE_WORK; 1338 1337 else 1339 - cq->poll_mode = LPFC_IRQ_POLL; 1338 + eq->poll_mode = LPFC_THREADED_IRQ; 1340 1339 1341 1340 idle_stat->prev_idle = wall_idle; 1342 1341 idle_stat->prev_wall = wall; ··· 3198 3197 "6221 Stop CMF / Cancel Timer\n"); 3199 3198 3200 3199 /* Cancel the CMF timer */ 3200 + hrtimer_cancel(&phba->cmf_stats_timer); 3201 3201 hrtimer_cancel(&phba->cmf_timer); 3202 3202 3203 3203 
/* Zero CMF counters */ ··· 3285 3283 3286 3284 phba->cmf_timer_cnt = 0; 3287 3285 hrtimer_start(&phba->cmf_timer, 3288 - ktime_set(0, LPFC_CMF_INTERVAL * 1000000), 3286 + ktime_set(0, LPFC_CMF_INTERVAL * NSEC_PER_MSEC), 3287 + HRTIMER_MODE_REL); 3288 + hrtimer_start(&phba->cmf_stats_timer, 3289 + ktime_set(0, LPFC_SEC_MIN * NSEC_PER_SEC), 3289 3290 HRTIMER_MODE_REL); 3290 3291 /* Setup for latency check in IO cmpl routines */ 3291 3292 ktime_get_real_ts64(&phba->cmf_latency); ··· 4362 4357 struct lpfc_sli4_hdw_queue *qp; 4363 4358 struct lpfc_io_buf *lpfc_cmd; 4364 4359 int idx, cnt; 4360 + unsigned long iflags; 4365 4361 4366 4362 qp = phba->sli4_hba.hdwq; 4367 4363 cnt = 0; ··· 4377 4371 lpfc_cmd->hdwq_no = idx; 4378 4372 lpfc_cmd->hdwq = qp; 4379 4373 lpfc_cmd->cur_iocbq.cmd_cmpl = NULL; 4380 - spin_lock(&qp->io_buf_list_put_lock); 4374 + spin_lock_irqsave(&qp->io_buf_list_put_lock, iflags); 4381 4375 list_add_tail(&lpfc_cmd->list, 4382 4376 &qp->lpfc_io_buf_list_put); 4383 4377 qp->put_io_bufs++; 4384 4378 qp->total_io_bufs++; 4385 - spin_unlock(&qp->io_buf_list_put_lock); 4379 + spin_unlock_irqrestore(&qp->io_buf_list_put_lock, 4380 + iflags); 4386 4381 } 4387 4382 } 4388 4383 return cnt; ··· 5600 5593 lpfc_cgn_update_stat(struct lpfc_hba *phba, uint32_t dtag) 5601 5594 { 5602 5595 struct lpfc_cgn_info *cp; 5603 - struct tm broken; 5604 - struct timespec64 cur_time; 5605 - u32 cnt; 5606 5596 u32 value; 5607 5597 5608 5598 /* Make sure we have a congestion info buffer */ 5609 5599 if (!phba->cgn_i) 5610 5600 return; 5611 5601 cp = (struct lpfc_cgn_info *)phba->cgn_i->virt; 5612 - ktime_get_real_ts64(&cur_time); 5613 - time64_to_tm(cur_time.tv_sec, 0, &broken); 5614 5602 5615 5603 /* Update congestion statistics */ 5616 5604 switch (dtag) { 5617 5605 case ELS_DTAG_LNK_INTEGRITY: 5618 - cnt = le32_to_cpu(cp->link_integ_notification); 5619 - cnt++; 5620 - cp->link_integ_notification = cpu_to_le32(cnt); 5621 - 5622 - cp->cgn_stat_lnk_month = broken.tm_mon + 1; 
5623 - cp->cgn_stat_lnk_day = broken.tm_mday; 5624 - cp->cgn_stat_lnk_year = broken.tm_year - 100; 5625 - cp->cgn_stat_lnk_hour = broken.tm_hour; 5626 - cp->cgn_stat_lnk_min = broken.tm_min; 5627 - cp->cgn_stat_lnk_sec = broken.tm_sec; 5606 + le32_add_cpu(&cp->link_integ_notification, 1); 5607 + lpfc_cgn_update_tstamp(phba, &cp->stat_lnk); 5628 5608 break; 5629 5609 case ELS_DTAG_DELIVERY: 5630 - cnt = le32_to_cpu(cp->delivery_notification); 5631 - cnt++; 5632 - cp->delivery_notification = cpu_to_le32(cnt); 5633 - 5634 - cp->cgn_stat_del_month = broken.tm_mon + 1; 5635 - cp->cgn_stat_del_day = broken.tm_mday; 5636 - cp->cgn_stat_del_year = broken.tm_year - 100; 5637 - cp->cgn_stat_del_hour = broken.tm_hour; 5638 - cp->cgn_stat_del_min = broken.tm_min; 5639 - cp->cgn_stat_del_sec = broken.tm_sec; 5610 + le32_add_cpu(&cp->delivery_notification, 1); 5611 + lpfc_cgn_update_tstamp(phba, &cp->stat_delivery); 5640 5612 break; 5641 5613 case ELS_DTAG_PEER_CONGEST: 5642 - cnt = le32_to_cpu(cp->cgn_peer_notification); 5643 - cnt++; 5644 - cp->cgn_peer_notification = cpu_to_le32(cnt); 5645 - 5646 - cp->cgn_stat_peer_month = broken.tm_mon + 1; 5647 - cp->cgn_stat_peer_day = broken.tm_mday; 5648 - cp->cgn_stat_peer_year = broken.tm_year - 100; 5649 - cp->cgn_stat_peer_hour = broken.tm_hour; 5650 - cp->cgn_stat_peer_min = broken.tm_min; 5651 - cp->cgn_stat_peer_sec = broken.tm_sec; 5614 + le32_add_cpu(&cp->cgn_peer_notification, 1); 5615 + lpfc_cgn_update_tstamp(phba, &cp->stat_peer); 5652 5616 break; 5653 5617 case ELS_DTAG_CONGESTION: 5654 - cnt = le32_to_cpu(cp->cgn_notification); 5655 - cnt++; 5656 - cp->cgn_notification = cpu_to_le32(cnt); 5657 - 5658 - cp->cgn_stat_cgn_month = broken.tm_mon + 1; 5659 - cp->cgn_stat_cgn_day = broken.tm_mday; 5660 - cp->cgn_stat_cgn_year = broken.tm_year - 100; 5661 - cp->cgn_stat_cgn_hour = broken.tm_hour; 5662 - cp->cgn_stat_cgn_min = broken.tm_min; 5663 - cp->cgn_stat_cgn_sec = broken.tm_sec; 5618 + le32_add_cpu(&cp->cgn_notification, 1); 
5619 + lpfc_cgn_update_tstamp(phba, &cp->stat_fpin); 5664 5620 } 5665 5621 if (phba->cgn_fpin_frequency && 5666 5622 phba->cgn_fpin_frequency != LPFC_FPIN_INIT_FREQ) { 5667 5623 value = LPFC_CGN_TIMER_TO_MIN / phba->cgn_fpin_frequency; 5668 5624 cp->cgn_stat_npm = value; 5669 5625 } 5626 + 5670 5627 value = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ, 5671 5628 LPFC_CGN_CRC32_SEED); 5672 5629 cp->cgn_info_crc = cpu_to_le32(value); 5673 5630 } 5674 5631 5675 5632 /** 5676 - * lpfc_cgn_save_evt_cnt - Save data into registered congestion buffer 5633 + * lpfc_cgn_update_tstamp - Update cmf timestamp 5677 5634 * @phba: pointer to lpfc hba data structure. 5635 + * @ts: structure to write the timestamp to. 5636 + */ 5637 + void 5638 + lpfc_cgn_update_tstamp(struct lpfc_hba *phba, struct lpfc_cgn_ts *ts) 5639 + { 5640 + struct timespec64 cur_time; 5641 + struct tm tm_val; 5642 + 5643 + ktime_get_real_ts64(&cur_time); 5644 + time64_to_tm(cur_time.tv_sec, 0, &tm_val); 5645 + 5646 + ts->month = tm_val.tm_mon + 1; 5647 + ts->day = tm_val.tm_mday; 5648 + ts->year = tm_val.tm_year - 100; 5649 + ts->hour = tm_val.tm_hour; 5650 + ts->minute = tm_val.tm_min; 5651 + ts->second = tm_val.tm_sec; 5652 + 5653 + lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT, 5654 + "2646 Updated CMF timestamp : " 5655 + "%u/%u/%u %u:%u:%u\n", 5656 + ts->day, ts->month, 5657 + ts->year, ts->hour, 5658 + ts->minute, ts->second); 5659 + } 5660 + 5661 + /** 5662 + * lpfc_cmf_stats_timer - Save data into registered congestion buffer 5663 + * @timer: Timer cookie to access lpfc private data 5678 5664 * 5679 5665 * Save the congestion event data every minute. 5680 5666 * On the hour collapse all the minute data into hour data. Every day ··· 5675 5675 * and fabrc congestion event counters that will be saved out 5676 5676 * to the registered congestion buffer every minute. 
5677 5677 */ 5678 - static void 5679 - lpfc_cgn_save_evt_cnt(struct lpfc_hba *phba) 5678 + static enum hrtimer_restart 5679 + lpfc_cmf_stats_timer(struct hrtimer *timer) 5680 5680 { 5681 + struct lpfc_hba *phba; 5681 5682 struct lpfc_cgn_info *cp; 5682 - struct tm broken; 5683 - struct timespec64 cur_time; 5684 5683 uint32_t i, index; 5685 5684 uint16_t value, mvalue; 5686 5685 uint64_t bps; ··· 5690 5691 __le32 *lptr; 5691 5692 __le16 *mptr; 5692 5693 5694 + phba = container_of(timer, struct lpfc_hba, cmf_stats_timer); 5693 5695 /* Make sure we have a congestion info buffer */ 5694 5696 if (!phba->cgn_i) 5695 - return; 5697 + return HRTIMER_NORESTART; 5696 5698 cp = (struct lpfc_cgn_info *)phba->cgn_i->virt; 5697 5699 5698 - if (time_before(jiffies, phba->cgn_evt_timestamp)) 5699 - return; 5700 5700 phba->cgn_evt_timestamp = jiffies + 5701 5701 msecs_to_jiffies(LPFC_CGN_TIMER_TO_MIN); 5702 5702 phba->cgn_evt_minute++; 5703 5703 5704 5704 /* We should get to this point in the routine on 1 minute intervals */ 5705 - 5706 - ktime_get_real_ts64(&cur_time); 5707 - time64_to_tm(cur_time.tv_sec, 0, &broken); 5705 + lpfc_cgn_update_tstamp(phba, &cp->base_time); 5708 5706 5709 5707 if (phba->cgn_fpin_frequency && 5710 5708 phba->cgn_fpin_frequency != LPFC_FPIN_INIT_FREQ) { ··· 5854 5858 index = 0; 5855 5859 } 5856 5860 5857 - /* Anytime we overwrite daily index 0, after we wrap, 5858 - * we will be overwriting the oldest day, so we must 5859 - * update the congestion data start time for that day. 5860 - * That start time should have previously been saved after 5861 - * we wrote the last days worth of data. 
5862 - */ 5863 - if ((phba->hba_flag & HBA_CGN_DAY_WRAP) && index == 0) { 5864 - time64_to_tm(phba->cgn_daily_ts.tv_sec, 0, &broken); 5865 - 5866 - cp->cgn_info_month = broken.tm_mon + 1; 5867 - cp->cgn_info_day = broken.tm_mday; 5868 - cp->cgn_info_year = broken.tm_year - 100; 5869 - cp->cgn_info_hour = broken.tm_hour; 5870 - cp->cgn_info_minute = broken.tm_min; 5871 - cp->cgn_info_second = broken.tm_sec; 5872 - 5873 - lpfc_printf_log 5874 - (phba, KERN_INFO, LOG_CGN_MGMT, 5875 - "2646 CGNInfo idx0 Start Time: " 5876 - "%d/%d/%d %d:%d:%d\n", 5877 - cp->cgn_info_day, cp->cgn_info_month, 5878 - cp->cgn_info_year, cp->cgn_info_hour, 5879 - cp->cgn_info_minute, cp->cgn_info_second); 5880 - } 5881 - 5882 5861 dvalue = 0; 5883 5862 wvalue = 0; 5884 5863 lvalue = 0; ··· 5887 5916 "2420 Congestion Info - daily (%d): " 5888 5917 "%d %d %d %d %d\n", 5889 5918 index, dvalue, wvalue, lvalue, mvalue, avalue); 5890 - 5891 - /* We just wrote LPFC_MAX_CGN_DAYS of data, 5892 - * so we are wrapped on any data after this. 5893 - * Save this as the start time for the next day. 5894 - */ 5895 - if (index == (LPFC_MAX_CGN_DAYS - 1)) { 5896 - phba->hba_flag |= HBA_CGN_DAY_WRAP; 5897 - ktime_get_real_ts64(&phba->cgn_daily_ts); 5898 - } 5899 5919 } 5900 5920 5901 5921 /* Use the frequency found in the last rcv'ed FPIN */ ··· 5897 5935 lvalue = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ, 5898 5936 LPFC_CGN_CRC32_SEED); 5899 5937 cp->cgn_info_crc = cpu_to_le32(lvalue); 5938 + 5939 + hrtimer_forward_now(timer, ktime_set(0, LPFC_SEC_MIN * NSEC_PER_SEC)); 5940 + 5941 + return HRTIMER_RESTART; 5900 5942 } 5901 5943 5902 5944 /** ··· 6031 6065 if (ms && ms < LPFC_CMF_INTERVAL) { 6032 6066 cnt = div_u64(total, ms); /* bytes per ms */ 6033 6067 cnt *= LPFC_CMF_INTERVAL; /* what total should be */ 6034 - 6035 - /* If the timeout is scheduled to be shorter, 6036 - * this value may skew the data, so cap it at mbpi. 
6037 - */ 6038 - if ((phba->hba_flag & HBA_SHORT_CMF) && cnt > mbpi) 6039 - cnt = mbpi; 6040 - 6041 6068 extra = cnt - total; 6042 6069 } 6043 6070 lpfc_issue_cmf_sync_wqe(phba, LPFC_CMF_INTERVAL, total + extra); ··· 6099 6140 atomic_inc(&phba->cgn_driver_evt_cnt); 6100 6141 } 6101 6142 phba->rx_block_cnt += div_u64(rcv, 512); /* save 512 byte block cnt */ 6102 - 6103 - /* Each minute save Fabric and Driver congestion information */ 6104 - lpfc_cgn_save_evt_cnt(phba); 6105 - 6106 - phba->hba_flag &= ~HBA_SHORT_CMF; 6107 - 6108 - /* Since we need to call lpfc_cgn_save_evt_cnt every minute, on the 6109 - * minute, adjust our next timer interval, if needed, to ensure a 6110 - * 1 minute granularity when we get the next timer interrupt. 6111 - */ 6112 - if (time_after(jiffies + msecs_to_jiffies(LPFC_CMF_INTERVAL), 6113 - phba->cgn_evt_timestamp)) { 6114 - timer_interval = jiffies_to_msecs(phba->cgn_evt_timestamp - 6115 - jiffies); 6116 - if (timer_interval <= 0) 6117 - timer_interval = LPFC_CMF_INTERVAL; 6118 - else 6119 - phba->hba_flag |= HBA_SHORT_CMF; 6120 - 6121 - /* If we adjust timer_interval, max_bytes_per_interval 6122 - * needs to be adjusted as well. 6123 - */ 6124 - phba->cmf_link_byte_count = div_u64(phba->cmf_max_line_rate * 6125 - timer_interval, 1000); 6126 - if (phba->cmf_active_mode == LPFC_CFG_MONITOR) 6127 - phba->cmf_max_bytes_per_interval = 6128 - phba->cmf_link_byte_count; 6129 - } 6130 6143 6131 6144 /* Since total_bytes has already been zero'ed, its okay to unblock 6132 6145 * after max_bytes_per_interval is setup. 
··· 7945 8014 /* CMF congestion timer */ 7946 8015 hrtimer_init(&phba->cmf_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 7947 8016 phba->cmf_timer.function = lpfc_cmf_timer; 8017 + /* CMF 1 minute stats collection timer */ 8018 + hrtimer_init(&phba->cmf_stats_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 8019 + phba->cmf_stats_timer.function = lpfc_cmf_stats_timer; 7948 8020 7949 8021 /* 7950 8022 * Control structure for handling external multi-buffer mailbox ··· 13051 13117 } 13052 13118 eqhdl->irq = rc; 13053 13119 13054 - rc = request_irq(eqhdl->irq, &lpfc_sli4_hba_intr_handler, 0, 13055 - name, eqhdl); 13120 + rc = request_threaded_irq(eqhdl->irq, 13121 + &lpfc_sli4_hba_intr_handler, 13122 + &lpfc_sli4_hba_intr_handler_th, 13123 + IRQF_ONESHOT, name, eqhdl); 13056 13124 if (rc) { 13057 13125 lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, 13058 13126 "0486 MSI-X fast-path (%d) " ··· 13457 13521 struct pci_dev *pdev = phba->pcidev; 13458 13522 13459 13523 lpfc_stop_hba_timers(phba); 13524 + hrtimer_cancel(&phba->cmf_stats_timer); 13460 13525 hrtimer_cancel(&phba->cmf_timer); 13461 13526 13462 13527 if (phba->pport) ··· 13582 13645 lpfc_init_congestion_buf(struct lpfc_hba *phba) 13583 13646 { 13584 13647 struct lpfc_cgn_info *cp; 13585 - struct timespec64 cmpl_time; 13586 - struct tm broken; 13587 13648 uint16_t size; 13588 13649 uint32_t crc; 13589 13650 ··· 13601 13666 atomic_set(&phba->cgn_latency_evt_cnt, 0); 13602 13667 atomic64_set(&phba->cgn_latency_evt, 0); 13603 13668 phba->cgn_evt_minute = 0; 13604 - phba->hba_flag &= ~HBA_CGN_DAY_WRAP; 13605 13669 13606 13670 memset(cp, 0xff, offsetof(struct lpfc_cgn_info, cgn_stat)); 13607 13671 cp->cgn_info_size = cpu_to_le16(LPFC_CGN_INFO_SZ); 13608 - cp->cgn_info_version = LPFC_CGN_INFO_V3; 13672 + cp->cgn_info_version = LPFC_CGN_INFO_V4; 13609 13673 13610 13674 /* cgn parameters */ 13611 13675 cp->cgn_info_mode = phba->cgn_p.cgn_param_mode; ··· 13612 13678 cp->cgn_info_level1 = phba->cgn_p.cgn_param_level1; 13613 13679 
cp->cgn_info_level2 = phba->cgn_p.cgn_param_level2; 13614 13680 13615 - ktime_get_real_ts64(&cmpl_time); 13616 - time64_to_tm(cmpl_time.tv_sec, 0, &broken); 13617 - 13618 - cp->cgn_info_month = broken.tm_mon + 1; 13619 - cp->cgn_info_day = broken.tm_mday; 13620 - cp->cgn_info_year = broken.tm_year - 100; /* relative to 2000 */ 13621 - cp->cgn_info_hour = broken.tm_hour; 13622 - cp->cgn_info_minute = broken.tm_min; 13623 - cp->cgn_info_second = broken.tm_sec; 13624 - 13625 - lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT | LOG_INIT, 13626 - "2643 CGNInfo Init: Start Time " 13627 - "%d/%d/%d %d:%d:%d\n", 13628 - cp->cgn_info_day, cp->cgn_info_month, 13629 - cp->cgn_info_year, cp->cgn_info_hour, 13630 - cp->cgn_info_minute, cp->cgn_info_second); 13681 + lpfc_cgn_update_tstamp(phba, &cp->base_time); 13631 13682 13632 13683 /* Fill in default LUN qdepth */ 13633 13684 if (phba->pport) { ··· 13635 13716 lpfc_init_congestion_stat(struct lpfc_hba *phba) 13636 13717 { 13637 13718 struct lpfc_cgn_info *cp; 13638 - struct timespec64 cmpl_time; 13639 - struct tm broken; 13640 13719 uint32_t crc; 13641 13720 13642 13721 lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT, ··· 13646 13729 cp = (struct lpfc_cgn_info *)phba->cgn_i->virt; 13647 13730 memset(&cp->cgn_stat, 0, sizeof(cp->cgn_stat)); 13648 13731 13649 - ktime_get_real_ts64(&cmpl_time); 13650 - time64_to_tm(cmpl_time.tv_sec, 0, &broken); 13651 - 13652 - cp->cgn_stat_month = broken.tm_mon + 1; 13653 - cp->cgn_stat_day = broken.tm_mday; 13654 - cp->cgn_stat_year = broken.tm_year - 100; /* relative to 2000 */ 13655 - cp->cgn_stat_hour = broken.tm_hour; 13656 - cp->cgn_stat_minute = broken.tm_min; 13657 - 13658 - lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT | LOG_INIT, 13659 - "2647 CGNstat Init: Start Time " 13660 - "%d/%d/%d %d:%d\n", 13661 - cp->cgn_stat_day, cp->cgn_stat_month, 13662 - cp->cgn_stat_year, cp->cgn_stat_hour, 13663 - cp->cgn_stat_minute); 13664 - 13732 + lpfc_cgn_update_tstamp(phba, &cp->stat_start); 13665 
13733 crc = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ, LPFC_CGN_CRC32_SEED); 13666 13734 cp->cgn_info_crc = cpu_to_le32(crc); 13667 13735 } ··· 14645 14743 INIT_LIST_HEAD(&dma_buffer_list); 14646 14744 lpfc_decode_firmware_rev(phba, fwrev, 1); 14647 14745 if (strncmp(fwrev, image->revision, strnlen(image->revision, 16))) { 14648 - lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 14649 - "3023 Updating Firmware, Current Version:%s " 14650 - "New Version:%s\n", 14651 - fwrev, image->revision); 14746 + lpfc_log_msg(phba, KERN_NOTICE, LOG_INIT | LOG_SLI, 14747 + "3023 Updating Firmware, Current Version:%s " 14748 + "New Version:%s\n", 14749 + fwrev, image->revision); 14652 14750 for (i = 0; i < LPFC_MBX_WR_CONFIG_MAX_BDE; i++) { 14653 14751 dmabuf = kzalloc(sizeof(struct lpfc_dmabuf), 14654 14752 GFP_KERNEL); ··· 14695 14793 } 14696 14794 rc = offset; 14697 14795 } else 14698 - lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 14699 - "3029 Skipped Firmware update, Current " 14700 - "Version:%s New Version:%s\n", 14701 - fwrev, image->revision); 14796 + lpfc_log_msg(phba, KERN_NOTICE, LOG_INIT | LOG_SLI, 14797 + "3029 Skipped Firmware update, Current " 14798 + "Version:%s New Version:%s\n", 14799 + fwrev, image->revision); 14702 14800 14703 14801 release_out: 14704 14802 list_for_each_entry_safe(dmabuf, next, &dma_buffer_list, list) { ··· 14710 14808 release_firmware(fw); 14711 14809 out: 14712 14810 if (rc < 0) 14713 - lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 14714 - "3062 Firmware update error, status %d.\n", rc); 14811 + lpfc_log_msg(phba, KERN_ERR, LOG_INIT | LOG_SLI, 14812 + "3062 Firmware update error, status %d.\n", rc); 14715 14813 else 14716 - lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 14717 - "3024 Firmware update success: size %d.\n", rc); 14814 + lpfc_log_msg(phba, KERN_NOTICE, LOG_INIT | LOG_SLI, 14815 + "3024 Firmware update success: size %d.\n", rc); 14718 14816 } 14719 14817 14720 14818 /**
+3 -3
drivers/scsi/lpfc/lpfc_logmsg.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2022 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2023 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2009 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 55 55 56 56 /* generate message by verbose log setting or severity */ 57 57 #define lpfc_vlog_msg(vport, level, mask, fmt, arg...) \ 58 - { if (((mask) & (vport)->cfg_log_verbose) || (level[1] <= '4')) \ 58 + { if (((mask) & (vport)->cfg_log_verbose) || (level[1] <= '5')) \ 59 59 dev_printk(level, &((vport)->phba->pcidev)->dev, "%d:(%d):" \ 60 60 fmt, (vport)->phba->brd_no, vport->vpi, ##arg); } 61 61 ··· 64 64 { uint32_t log_verbose = (phba)->pport ? \ 65 65 (phba)->pport->cfg_log_verbose : \ 66 66 (phba)->cfg_log_verbose; \ 67 - if (((mask) & log_verbose) || (level[1] <= '4')) \ 67 + if (((mask) & log_verbose) || (level[1] <= '5')) \ 68 68 dev_printk(level, &((phba)->pcidev)->dev, "%d:" \ 69 69 fmt, phba->brd_no, ##arg); \ 70 70 } \
+33 -30
drivers/scsi/lpfc/lpfc_nvme.c
··· 310 310 * for the LS request. 311 311 **/ 312 312 void 313 - __lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_vport *vport, 313 + __lpfc_nvme_ls_req_cmp(struct lpfc_hba *phba, struct lpfc_vport *vport, 314 314 struct lpfc_iocbq *cmdwqe, 315 315 struct lpfc_wcqe_complete *wcqe) 316 316 { 317 317 struct nvmefc_ls_req *pnvme_lsreq; 318 318 struct lpfc_dmabuf *buf_ptr; 319 319 struct lpfc_nodelist *ndlp; 320 - uint32_t status; 320 + int status; 321 321 322 322 pnvme_lsreq = cmdwqe->context_un.nvme_lsreq; 323 323 ndlp = cmdwqe->ndlp; 324 324 buf_ptr = cmdwqe->bpl_dmabuf; 325 325 326 - status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK; 326 + status = bf_get(lpfc_wcqe_c_status, wcqe); 327 327 328 328 lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC, 329 329 "6047 NVMEx LS REQ x%px cmpl DID %x Xri: %x " ··· 343 343 kfree(buf_ptr); 344 344 cmdwqe->bpl_dmabuf = NULL; 345 345 } 346 - if (pnvme_lsreq->done) 346 + if (pnvme_lsreq->done) { 347 + if (status != CQE_STATUS_SUCCESS) 348 + status = -ENXIO; 347 349 pnvme_lsreq->done(pnvme_lsreq, status); 348 - else 350 + } else { 349 351 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 350 352 "6046 NVMEx cmpl without done call back? " 351 353 "Data x%px DID %x Xri: %x status %x\n", 352 354 pnvme_lsreq, ndlp ? 
ndlp->nlp_DID : 0, 353 355 cmdwqe->sli4_xritag, status); 356 + } 354 357 if (ndlp) { 355 358 lpfc_nlp_put(ndlp); 356 359 cmdwqe->ndlp = NULL; ··· 370 367 uint32_t status; 371 368 struct lpfc_wcqe_complete *wcqe = &rspwqe->wcqe_cmpl; 372 369 373 - status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK; 370 + status = bf_get(lpfc_wcqe_c_status, wcqe); 374 371 375 372 if (vport->localport) { 376 373 lport = (struct lpfc_nvme_lport *)vport->localport->private; ··· 1043 1040 nCmd->rcv_rsplen = LPFC_NVME_ERSP_LEN; 1044 1041 nCmd->transferred_length = nCmd->payload_length; 1045 1042 } else { 1046 - lpfc_ncmd->status = (status & LPFC_IOCB_STATUS_MASK); 1043 + lpfc_ncmd->status = status; 1047 1044 lpfc_ncmd->result = (wcqe->parameter & IOERR_PARAM_MASK); 1048 1045 1049 1046 /* For NVME, the only failure path that results in an ··· 1896 1893 pnvme_rport->port_id, 1897 1894 pnvme_fcreq); 1898 1895 1896 + lpfc_nbuf = freqpriv->nvme_buf; 1897 + if (!lpfc_nbuf) { 1898 + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1899 + "6140 NVME IO req has no matching lpfc nvme " 1900 + "io buffer. Skipping abort req.\n"); 1901 + return; 1902 + } else if (!lpfc_nbuf->nvmeCmd) { 1903 + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1904 + "6141 lpfc NVME IO req has no nvme_fcreq " 1905 + "io buffer. Skipping abort req.\n"); 1906 + return; 1907 + } 1908 + 1909 + /* Guard against IO completion being called at same time */ 1910 + spin_lock_irqsave(&lpfc_nbuf->buf_lock, flags); 1911 + 1899 1912 /* If the hba is getting reset, this flag is set. It is 1900 1913 * cleared when the reset is complete and rings reestablished. 
1901 1914 */ 1902 - spin_lock_irqsave(&phba->hbalock, flags); 1915 + spin_lock(&phba->hbalock); 1903 1916 /* driver queued commands are in process of being flushed */ 1904 1917 if (phba->hba_flag & HBA_IOQ_FLUSH) { 1905 - spin_unlock_irqrestore(&phba->hbalock, flags); 1918 + spin_unlock(&phba->hbalock); 1919 + spin_unlock_irqrestore(&lpfc_nbuf->buf_lock, flags); 1906 1920 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1907 1921 "6139 Driver in reset cleanup - flushing " 1908 1922 "NVME Req now. hba_flag x%x\n", ··· 1927 1907 return; 1928 1908 } 1929 1909 1930 - lpfc_nbuf = freqpriv->nvme_buf; 1931 - if (!lpfc_nbuf) { 1932 - spin_unlock_irqrestore(&phba->hbalock, flags); 1933 - lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1934 - "6140 NVME IO req has no matching lpfc nvme " 1935 - "io buffer. Skipping abort req.\n"); 1936 - return; 1937 - } else if (!lpfc_nbuf->nvmeCmd) { 1938 - spin_unlock_irqrestore(&phba->hbalock, flags); 1939 - lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1940 - "6141 lpfc NVME IO req has no nvme_fcreq " 1941 - "io buffer. 
Skipping abort req.\n"); 1942 - return; 1943 - } 1944 1910 nvmereq_wqe = &lpfc_nbuf->cur_iocbq; 1945 - 1946 - /* Guard against IO completion being called at same time */ 1947 - spin_lock(&lpfc_nbuf->buf_lock); 1948 1911 1949 1912 /* 1950 1913 * The lpfc_nbuf and the mapped nvme_fcreq in the driver's ··· 1974 1971 ret_val = lpfc_sli4_issue_abort_iotag(phba, nvmereq_wqe, 1975 1972 lpfc_nvme_abort_fcreq_cmpl); 1976 1973 1977 - spin_unlock(&lpfc_nbuf->buf_lock); 1978 - spin_unlock_irqrestore(&phba->hbalock, flags); 1974 + spin_unlock(&phba->hbalock); 1975 + spin_unlock_irqrestore(&lpfc_nbuf->buf_lock, flags); 1979 1976 1980 1977 /* Make sure HBA is alive */ 1981 1978 lpfc_issue_hb_tmo(phba); ··· 2001 1998 return; 2002 1999 2003 2000 out_unlock: 2004 - spin_unlock(&lpfc_nbuf->buf_lock); 2005 - spin_unlock_irqrestore(&phba->hbalock, flags); 2001 + spin_unlock(&phba->hbalock); 2002 + spin_unlock_irqrestore(&lpfc_nbuf->buf_lock, flags); 2006 2003 return; 2007 2004 } 2008 2005
+3 -3
drivers/scsi/lpfc/lpfc_nvmet.c
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2022 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2023 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 300 300 struct nvmefc_ls_rsp *ls_rsp = &axchg->ls_rsp; 301 301 uint32_t status, result; 302 302 303 - status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK; 303 + status = bf_get(lpfc_wcqe_c_status, wcqe); 304 304 result = wcqe->parameter; 305 305 306 306 if (axchg->state != LPFC_NVME_STE_LS_RSP || axchg->entry_cnt != 2) { ··· 350 350 if (!phba->targetport) 351 351 goto finish; 352 352 353 - status = bf_get(lpfc_wcqe_c_status, wcqe) & LPFC_IOCB_STATUS_MASK; 353 + status = bf_get(lpfc_wcqe_c_status, wcqe); 354 354 result = wcqe->parameter; 355 355 356 356 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+31 -37
drivers/scsi/lpfc/lpfc_scsi.c
··· 4026 4026 struct lpfc_fast_path_event *fast_path_evt; 4027 4027 struct Scsi_Host *shost; 4028 4028 u32 logit = LOG_FCP; 4029 - u32 status, idx; 4029 + u32 idx; 4030 4030 u32 lat; 4031 4031 u8 wait_xb_clr = 0; 4032 4032 ··· 4061 4061 #endif 4062 4062 shost = cmd->device->host; 4063 4063 4064 - status = bf_get(lpfc_wcqe_c_status, wcqe); 4065 - lpfc_cmd->status = (status & LPFC_IOCB_STATUS_MASK); 4064 + lpfc_cmd->status = bf_get(lpfc_wcqe_c_status, wcqe); 4066 4065 lpfc_cmd->result = (wcqe->parameter & IOERR_PARAM_MASK); 4067 4066 4068 4067 lpfc_cmd->flags &= ~LPFC_SBUF_XBUSY; ··· 4103 4104 } 4104 4105 #endif 4105 4106 if (unlikely(lpfc_cmd->status)) { 4106 - if (lpfc_cmd->status == IOSTAT_LOCAL_REJECT && 4107 - (lpfc_cmd->result & IOERR_DRVR_MASK)) 4108 - lpfc_cmd->status = IOSTAT_DRIVER_REJECT; 4109 - else if (lpfc_cmd->status >= IOSTAT_CNT) 4110 - lpfc_cmd->status = IOSTAT_DEFAULT; 4111 4107 if (lpfc_cmd->status == IOSTAT_FCP_RSP_ERROR && 4112 4108 !lpfc_cmd->fcp_rsp->rspStatus3 && 4113 4109 (lpfc_cmd->fcp_rsp->rspStatus2 & RESID_UNDER) && ··· 4127 4133 } 4128 4134 4129 4135 switch (lpfc_cmd->status) { 4130 - case IOSTAT_SUCCESS: 4136 + case CQE_STATUS_SUCCESS: 4131 4137 cmd->result = DID_OK << 16; 4132 4138 break; 4133 - case IOSTAT_FCP_RSP_ERROR: 4139 + case CQE_STATUS_FCP_RSP_FAILURE: 4134 4140 lpfc_handle_fcp_err(vport, lpfc_cmd, 4135 4141 pwqeIn->wqe.fcp_iread.total_xfer_len - 4136 4142 wcqe->total_data_placed); 4137 4143 break; 4138 - case IOSTAT_NPORT_BSY: 4139 - case IOSTAT_FABRIC_BSY: 4144 + case CQE_STATUS_NPORT_BSY: 4145 + case CQE_STATUS_FABRIC_BSY: 4140 4146 cmd->result = DID_TRANSPORT_DISRUPTED << 16; 4141 4147 fast_path_evt = lpfc_alloc_fast_evt(phba); 4142 4148 if (!fast_path_evt) ··· 4179 4185 wcqe->total_data_placed, 4180 4186 lpfc_cmd->cur_iocbq.iocb.ulpIoTag); 4181 4187 break; 4182 - case IOSTAT_REMOTE_STOP: 4188 + case CQE_STATUS_DI_ERROR: 4189 + if (bf_get(lpfc_wcqe_c_bg_edir, wcqe)) 4190 + lpfc_cmd->result = IOERR_RX_DMA_FAILED; 4191 + 
else 4192 + lpfc_cmd->result = IOERR_TX_DMA_FAILED; 4193 + lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP | LOG_BG, 4194 + "9048 DI Error xri x%x status x%x DI ext " 4195 + "status x%x data placed x%x\n", 4196 + lpfc_cmd->cur_iocbq.sli4_xritag, 4197 + lpfc_cmd->status, wcqe->parameter, 4198 + wcqe->total_data_placed); 4199 + if (scsi_get_prot_op(cmd) != SCSI_PROT_NORMAL) { 4200 + /* BG enabled cmd. Parse BG error */ 4201 + lpfc_parse_bg_err(phba, lpfc_cmd, pwqeOut); 4202 + break; 4203 + } 4204 + cmd->result = DID_ERROR << 16; 4205 + lpfc_printf_vlog(vport, KERN_WARNING, LOG_BG, 4206 + "9040 DI Error on unprotected cmd\n"); 4207 + break; 4208 + case CQE_STATUS_REMOTE_STOP: 4183 4209 if (ndlp) { 4184 4210 /* This I/O was aborted by the target, we don't 4185 4211 * know the rxid and because we did not send the ··· 4210 4196 0, 0); 4211 4197 } 4212 4198 fallthrough; 4213 - case IOSTAT_LOCAL_REJECT: 4199 + case CQE_STATUS_LOCAL_REJECT: 4214 4200 if (lpfc_cmd->result & IOERR_DRVR_MASK) 4215 4201 lpfc_cmd->status = IOSTAT_DRIVER_REJECT; 4216 4202 if (lpfc_cmd->result == IOERR_ELXSEC_KEY_UNWRAP_ERROR || ··· 4231 4217 cmd->result = DID_TRANSPORT_DISRUPTED << 16; 4232 4218 break; 4233 4219 } 4234 - if ((lpfc_cmd->result == IOERR_RX_DMA_FAILED || 4235 - lpfc_cmd->result == IOERR_TX_DMA_FAILED) && 4236 - status == CQE_STATUS_DI_ERROR) { 4237 - if (scsi_get_prot_op(cmd) != 4238 - SCSI_PROT_NORMAL) { 4239 - /* 4240 - * This is a response for a BG enabled 4241 - * cmd. 
Parse BG error 4242 - */ 4243 - lpfc_parse_bg_err(phba, lpfc_cmd, pwqeOut); 4244 - break; 4245 - } else { 4246 - lpfc_printf_vlog(vport, KERN_WARNING, 4247 - LOG_BG, 4248 - "9040 non-zero BGSTAT " 4249 - "on unprotected cmd\n"); 4250 - } 4251 - } 4252 4220 lpfc_printf_vlog(vport, KERN_WARNING, logit, 4253 4221 "9036 Local Reject FCP cmd x%x failed" 4254 4222 " <%d/%lld> " ··· 4249 4253 lpfc_cmd->cur_iocbq.iocb.ulpIoTag); 4250 4254 fallthrough; 4251 4255 default: 4252 - if (lpfc_cmd->status >= IOSTAT_CNT) 4253 - lpfc_cmd->status = IOSTAT_DEFAULT; 4254 4256 cmd->result = DID_ERROR << 16; 4255 - lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR, 4257 + lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP, 4256 4258 "9037 FCP Completion Error: xri %x " 4257 4259 "status x%x result x%x [x%x] " 4258 4260 "placed x%x\n", ··· 4267 4273 "x%x SNS x%x x%x LBA x%llx Data: x%x x%x\n", 4268 4274 cmd->device->id, cmd->device->lun, cmd, 4269 4275 cmd->result, *lp, *(lp + 3), 4270 - (u64)scsi_get_lba(cmd), 4276 + (cmd->device->sector_size) ? 4277 + (u64)scsi_get_lba(cmd) : 0, 4271 4278 cmd->retries, scsi_get_resid(cmd)); 4272 4279 } 4273 4280 ··· 5004 5009 return -ENODEV; 5005 5010 } 5006 5011 phba->lpfc_rampdown_queue_depth = lpfc_rampdown_queue_depth; 5007 - phba->lpfc_scsi_cmd_iocb_cmpl = lpfc_scsi_cmd_iocb_cmpl; 5008 5012 return 0; 5009 5013 } 5010 5014
+274 -170
drivers/scsi/lpfc/lpfc_sli.c
··· 82 82 int); 83 83 static void lpfc_sli4_hba_handle_eqe(struct lpfc_hba *phba, 84 84 struct lpfc_queue *eq, 85 - struct lpfc_eqe *eqe); 85 + struct lpfc_eqe *eqe, 86 + enum lpfc_poll_mode poll_mode); 86 87 static bool lpfc_sli4_mbox_completions_pending(struct lpfc_hba *phba); 87 88 static bool lpfc_sli4_process_missed_mbox_completions(struct lpfc_hba *phba); 88 89 static struct lpfc_cqe *lpfc_sli4_cq_get(struct lpfc_queue *q); ··· 630 629 631 630 static int 632 631 lpfc_sli4_process_eq(struct lpfc_hba *phba, struct lpfc_queue *eq, 633 - uint8_t rearm) 632 + u8 rearm, enum lpfc_poll_mode poll_mode) 634 633 { 635 634 struct lpfc_eqe *eqe; 636 635 int count = 0, consumed = 0; ··· 640 639 641 640 eqe = lpfc_sli4_eq_get(eq); 642 641 while (eqe) { 643 - lpfc_sli4_hba_handle_eqe(phba, eq, eqe); 642 + lpfc_sli4_hba_handle_eqe(phba, eq, eqe, poll_mode); 644 643 __lpfc_sli4_consume_eqe(phba, eq, eqe); 645 644 646 645 consumed++; ··· 1932 1931 unsigned long iflags; 1933 1932 u32 ret_val; 1934 1933 u32 atot, wtot, max; 1935 - u16 warn_sync_period = 0; 1934 + u8 warn_sync_period = 0; 1936 1935 1937 1936 /* First address any alarm / warning activity */ 1938 1937 atot = atomic_xchg(&phba->cgn_sync_alarm_cnt, 0); ··· 7958 7957 * lpfc_init_idle_stat_hb - Initialize idle_stat tracking 7959 7958 * @phba: pointer to lpfc hba data structure. 7960 7959 * 7961 - * This routine initializes the per-cq idle_stat to dynamically dictate 7960 + * This routine initializes the per-eq idle_stat to dynamically dictate 7962 7961 * polling decisions. 
7963 7962 * 7964 7963 * Return codes: ··· 7968 7967 { 7969 7968 int i; 7970 7969 struct lpfc_sli4_hdw_queue *hdwq; 7971 - struct lpfc_queue *cq; 7970 + struct lpfc_queue *eq; 7972 7971 struct lpfc_idle_stat *idle_stat; 7973 7972 u64 wall; 7974 7973 7975 7974 for_each_present_cpu(i) { 7976 7975 hdwq = &phba->sli4_hba.hdwq[phba->sli4_hba.cpu_map[i].hdwq]; 7977 - cq = hdwq->io_cq; 7976 + eq = hdwq->hba_eq; 7978 7977 7979 - /* Skip if we've already handled this cq's primary CPU */ 7980 - if (cq->chann != i) 7978 + /* Skip if we've already handled this eq's primary CPU */ 7979 + if (eq->chann != i) 7981 7980 continue; 7982 7981 7983 7982 idle_stat = &phba->sli4_hba.idle_stat[i]; ··· 7986 7985 idle_stat->prev_wall = wall; 7987 7986 7988 7987 if (phba->nvmet_support || 7989 - phba->cmf_active_mode != LPFC_CFG_OFF) 7990 - cq->poll_mode = LPFC_QUEUE_WORK; 7988 + phba->cmf_active_mode != LPFC_CFG_OFF || 7989 + phba->intr_type != MSIX) 7990 + eq->poll_mode = LPFC_QUEUE_WORK; 7991 7991 else 7992 - cq->poll_mode = LPFC_IRQ_POLL; 7992 + eq->poll_mode = LPFC_THREADED_IRQ; 7993 7993 } 7994 7994 7995 - if (!phba->nvmet_support) 7995 + if (!phba->nvmet_support && phba->intr_type == MSIX) 7996 7996 schedule_delayed_work(&phba->idle_stat_delay_work, 7997 7997 msecs_to_jiffies(LPFC_IDLE_STAT_DELAY)); 7998 7998 } ··· 9220 9218 9221 9219 if (mbox_pending) 9222 9220 /* process and rearm the EQ */ 9223 - lpfc_sli4_process_eq(phba, fpeq, LPFC_QUEUE_REARM); 9221 + lpfc_sli4_process_eq(phba, fpeq, LPFC_QUEUE_REARM, 9222 + LPFC_QUEUE_WORK); 9224 9223 else 9225 9224 /* Always clear and re-arm the EQ */ 9226 9225 sli4_hba->sli4_write_eq_db(phba, fpeq, 0, LPFC_QUEUE_REARM); ··· 11257 11254 * will be handled through a sched from polling timer 11258 11255 * function which is currently triggered every 1msec. 
11259 11256 */ 11260 - lpfc_sli4_process_eq(phba, eq, LPFC_QUEUE_NOARM); 11257 + lpfc_sli4_process_eq(phba, eq, LPFC_QUEUE_NOARM, 11258 + LPFC_QUEUE_WORK); 11261 11259 } 11262 11260 11263 11261 /** ··· 14686 14682 spin_unlock_irqrestore(&phba->hbalock, iflags); 14687 14683 workposted = true; 14688 14684 break; 14685 + case FC_STATUS_RQ_DMA_FAILURE: 14686 + lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 14687 + "2564 RQE DMA Error x%x, x%08x x%08x x%08x " 14688 + "x%08x\n", 14689 + status, rcqe->word0, rcqe->word1, 14690 + rcqe->word2, rcqe->word3); 14691 + 14692 + /* If IV set, no further recovery */ 14693 + if (bf_get(lpfc_rcqe_iv, rcqe)) 14694 + break; 14695 + 14696 + /* recycle consumed resource */ 14697 + spin_lock_irqsave(&phba->hbalock, iflags); 14698 + lpfc_sli4_rq_release(hrq, drq); 14699 + dma_buf = lpfc_sli_hbqbuf_get(&phba->hbqs[0].hbq_buffer_list); 14700 + if (!dma_buf) { 14701 + hrq->RQ_no_buf_found++; 14702 + spin_unlock_irqrestore(&phba->hbalock, iflags); 14703 + break; 14704 + } 14705 + hrq->RQ_rcv_buf++; 14706 + hrq->RQ_buf_posted--; 14707 + spin_unlock_irqrestore(&phba->hbalock, iflags); 14708 + lpfc_in_buf_free(phba, &dma_buf->dbuf); 14709 + break; 14710 + default: 14711 + lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 14712 + "2565 Unexpected RQE Status x%x, w0-3 x%08x " 14713 + "x%08x x%08x x%08x\n", 14714 + status, rcqe->word0, rcqe->word1, 14715 + rcqe->word2, rcqe->word3); 14716 + break; 14689 14717 } 14690 14718 out: 14691 14719 return workposted; ··· 14839 14803 * @cq: Pointer to CQ to be processed 14840 14804 * @handler: Routine to process each cqe 14841 14805 * @delay: Pointer to usdelay to set in case of rescheduling of the handler 14842 - * @poll_mode: Polling mode we were called from 14843 14806 * 14844 14807 * This routine processes completion queue entries in a CQ. While a valid 14845 14808 * queue element is found, the handler is called. 
During processing checks ··· 14856 14821 static bool 14857 14822 __lpfc_sli4_process_cq(struct lpfc_hba *phba, struct lpfc_queue *cq, 14858 14823 bool (*handler)(struct lpfc_hba *, struct lpfc_queue *, 14859 - struct lpfc_cqe *), unsigned long *delay, 14860 - enum lpfc_poll_mode poll_mode) 14824 + struct lpfc_cqe *), unsigned long *delay) 14861 14825 { 14862 14826 struct lpfc_cqe *cqe; 14863 14827 bool workposted = false; ··· 14896 14862 *delay = 1; 14897 14863 arm = false; 14898 14864 } 14899 - 14900 - /* Note: complete the irq_poll softirq before rearming CQ */ 14901 - if (poll_mode == LPFC_IRQ_POLL) 14902 - irq_poll_complete(&cq->iop); 14903 14865 14904 14866 /* Track the max number of CQEs processed in 1 EQ */ 14905 14867 if (count > cq->CQ_max_cqe) ··· 14946 14916 case LPFC_MCQ: 14947 14917 workposted |= __lpfc_sli4_process_cq(phba, cq, 14948 14918 lpfc_sli4_sp_handle_mcqe, 14949 - &delay, LPFC_QUEUE_WORK); 14919 + &delay); 14950 14920 break; 14951 14921 case LPFC_WCQ: 14952 14922 if (cq->subtype == LPFC_IO) 14953 14923 workposted |= __lpfc_sli4_process_cq(phba, cq, 14954 14924 lpfc_sli4_fp_handle_cqe, 14955 - &delay, LPFC_QUEUE_WORK); 14925 + &delay); 14956 14926 else 14957 14927 workposted |= __lpfc_sli4_process_cq(phba, cq, 14958 14928 lpfc_sli4_sp_handle_cqe, 14959 - &delay, LPFC_QUEUE_WORK); 14929 + &delay); 14960 14930 break; 14961 14931 default: 14962 14932 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, ··· 15233 15203 hrq->RQ_no_posted_buf++; 15234 15204 /* Post more buffers if possible */ 15235 15205 break; 15206 + case FC_STATUS_RQ_DMA_FAILURE: 15207 + lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 15208 + "2575 RQE DMA Error x%x, x%08x x%08x x%08x " 15209 + "x%08x\n", 15210 + status, rcqe->word0, rcqe->word1, 15211 + rcqe->word2, rcqe->word3); 15212 + 15213 + /* If IV set, no further recovery */ 15214 + if (bf_get(lpfc_rcqe_iv, rcqe)) 15215 + break; 15216 + 15217 + /* recycle consumed resource */ 15218 + spin_lock_irqsave(&phba->hbalock, 
iflags); 15219 + lpfc_sli4_rq_release(hrq, drq); 15220 + dma_buf = lpfc_sli_rqbuf_get(phba, hrq); 15221 + if (!dma_buf) { 15222 + hrq->RQ_no_buf_found++; 15223 + spin_unlock_irqrestore(&phba->hbalock, iflags); 15224 + break; 15225 + } 15226 + hrq->RQ_rcv_buf++; 15227 + hrq->RQ_buf_posted--; 15228 + spin_unlock_irqrestore(&phba->hbalock, iflags); 15229 + lpfc_rq_buf_free(phba, &dma_buf->hbuf); 15230 + break; 15231 + default: 15232 + lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 15233 + "2576 Unexpected RQE Status x%x, w0-3 x%08x " 15234 + "x%08x x%08x x%08x\n", 15235 + status, rcqe->word0, rcqe->word1, 15236 + rcqe->word2, rcqe->word3); 15237 + break; 15236 15238 } 15237 15239 out: 15238 15240 return workposted; ··· 15333 15271 } 15334 15272 15335 15273 /** 15336 - * lpfc_sli4_sched_cq_work - Schedules cq work 15337 - * @phba: Pointer to HBA context object. 15338 - * @cq: Pointer to CQ 15339 - * @cqid: CQ ID 15274 + * __lpfc_sli4_hba_process_cq - Process a fast-path event queue entry 15275 + * @cq: Pointer to CQ to be processed 15340 15276 * 15341 - * This routine checks the poll mode of the CQ corresponding to 15342 - * cq->chann, then either schedules a softirq or queue_work to complete 15343 - * cq work. 15277 + * This routine calls the cq processing routine with the handler for 15278 + * fast path CQEs. 15344 15279 * 15345 - * queue_work path is taken if in NVMET mode, or if poll_mode is in 15346 - * LPFC_QUEUE_WORK mode. Otherwise, softirq path is taken. 15347 - * 15280 + * The CQ routine returns two values: the first is the calling status, 15281 + * which indicates whether work was queued to the background discovery 15282 + * thread. If true, the routine should wakeup the discovery thread; 15283 + * the second is the delay parameter. If non-zero, rather than rearming 15284 + * the CQ and yet another interrupt, the CQ handler should be queued so 15285 + * that it is processed in a subsequent polling action. 
The value of 15286 + * the delay indicates when to reschedule it. 15348 15287 **/ 15349 - static void lpfc_sli4_sched_cq_work(struct lpfc_hba *phba, 15350 - struct lpfc_queue *cq, uint16_t cqid) 15288 + static void 15289 + __lpfc_sli4_hba_process_cq(struct lpfc_queue *cq) 15351 15290 { 15352 - int ret = 0; 15291 + struct lpfc_hba *phba = cq->phba; 15292 + unsigned long delay; 15293 + bool workposted = false; 15294 + int ret; 15353 15295 15354 - switch (cq->poll_mode) { 15355 - case LPFC_IRQ_POLL: 15356 - /* CGN mgmt is mutually exclusive from softirq processing */ 15357 - if (phba->cmf_active_mode == LPFC_CFG_OFF) { 15358 - irq_poll_sched(&cq->iop); 15359 - break; 15360 - } 15361 - fallthrough; 15362 - case LPFC_QUEUE_WORK: 15363 - default: 15296 + /* process and rearm the CQ */ 15297 + workposted |= __lpfc_sli4_process_cq(phba, cq, lpfc_sli4_fp_handle_cqe, 15298 + &delay); 15299 + 15300 + if (delay) { 15364 15301 if (is_kdump_kernel()) 15365 - ret = queue_work(phba->wq, &cq->irqwork); 15302 + ret = queue_delayed_work(phba->wq, &cq->sched_irqwork, 15303 + delay); 15366 15304 else 15367 - ret = queue_work_on(cq->chann, phba->wq, &cq->irqwork); 15305 + ret = queue_delayed_work_on(cq->chann, phba->wq, 15306 + &cq->sched_irqwork, delay); 15368 15307 if (!ret) 15369 15308 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 15370 - "0383 Cannot schedule queue work " 15371 - "for CQ eqcqid=%d, cqid=%d on CPU %d\n", 15372 - cqid, cq->queue_id, 15373 - raw_smp_processor_id()); 15309 + "0367 Cannot schedule queue work " 15310 + "for cqid=%d on CPU %d\n", 15311 + cq->queue_id, cq->chann); 15374 15312 } 15313 + 15314 + /* wake up worker thread if there are works to be done */ 15315 + if (workposted) 15316 + lpfc_worker_wake_up(phba); 15317 + } 15318 + 15319 + /** 15320 + * lpfc_sli4_hba_process_cq - fast-path work handler when started by 15321 + * interrupt 15322 + * @work: pointer to work element 15323 + * 15324 + * translates from the work handler and calls the fast-path 
handler. 15325 + **/ 15326 + static void 15327 + lpfc_sli4_hba_process_cq(struct work_struct *work) 15328 + { 15329 + struct lpfc_queue *cq = container_of(work, struct lpfc_queue, irqwork); 15330 + 15331 + __lpfc_sli4_hba_process_cq(cq); 15375 15332 } 15376 15333 15377 15334 /** ··· 15398 15317 * @phba: Pointer to HBA context object. 15399 15318 * @eq: Pointer to the queue structure. 15400 15319 * @eqe: Pointer to fast-path event queue entry. 15320 + * @poll_mode: poll_mode to execute processing the cq. 15401 15321 * 15402 15322 * This routine process a event queue entry from the fast-path event queue. 15403 15323 * It will check the MajorCode and MinorCode to determine this is for a ··· 15409 15327 **/ 15410 15328 static void 15411 15329 lpfc_sli4_hba_handle_eqe(struct lpfc_hba *phba, struct lpfc_queue *eq, 15412 - struct lpfc_eqe *eqe) 15330 + struct lpfc_eqe *eqe, enum lpfc_poll_mode poll_mode) 15413 15331 { 15414 15332 struct lpfc_queue *cq = NULL; 15415 15333 uint32_t qidx = eq->hdwq; 15416 15334 uint16_t cqid, id; 15335 + int ret; 15417 15336 15418 15337 if (unlikely(bf_get_le32(lpfc_eqe_major_code, eqe) != 0)) { 15419 15338 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, ··· 15474 15391 else 15475 15392 cq->isr_timestamp = 0; 15476 15393 #endif 15477 - lpfc_sli4_sched_cq_work(phba, cq, cqid); 15478 - } 15479 15394 15480 - /** 15481 - * __lpfc_sli4_hba_process_cq - Process a fast-path event queue entry 15482 - * @cq: Pointer to CQ to be processed 15483 - * @poll_mode: Enum lpfc_poll_state to determine poll mode 15484 - * 15485 - * This routine calls the cq processing routine with the handler for 15486 - * fast path CQEs. 15487 - * 15488 - * The CQ routine returns two values: the first is the calling status, 15489 - * which indicates whether work was queued to the background discovery 15490 - * thread. If true, the routine should wakeup the discovery thread; 15491 - * the second is the delay parameter. 
If non-zero, rather than rearming 15492 - * the CQ and yet another interrupt, the CQ handler should be queued so 15493 - * that it is processed in a subsequent polling action. The value of 15494 - * the delay indicates when to reschedule it. 15495 - **/ 15496 - static void 15497 - __lpfc_sli4_hba_process_cq(struct lpfc_queue *cq, 15498 - enum lpfc_poll_mode poll_mode) 15499 - { 15500 - struct lpfc_hba *phba = cq->phba; 15501 - unsigned long delay; 15502 - bool workposted = false; 15503 - int ret = 0; 15504 - 15505 - /* process and rearm the CQ */ 15506 - workposted |= __lpfc_sli4_process_cq(phba, cq, lpfc_sli4_fp_handle_cqe, 15507 - &delay, poll_mode); 15508 - 15509 - if (delay) { 15395 + switch (poll_mode) { 15396 + case LPFC_THREADED_IRQ: 15397 + __lpfc_sli4_hba_process_cq(cq); 15398 + break; 15399 + case LPFC_QUEUE_WORK: 15400 + default: 15510 15401 if (is_kdump_kernel()) 15511 - ret = queue_delayed_work(phba->wq, &cq->sched_irqwork, 15512 - delay); 15402 + ret = queue_work(phba->wq, &cq->irqwork); 15513 15403 else 15514 - ret = queue_delayed_work_on(cq->chann, phba->wq, 15515 - &cq->sched_irqwork, delay); 15404 + ret = queue_work_on(cq->chann, phba->wq, &cq->irqwork); 15516 15405 if (!ret) 15517 15406 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 15518 - "0367 Cannot schedule queue work " 15519 - "for cqid=%d on CPU %d\n", 15520 - cq->queue_id, cq->chann); 15407 + "0383 Cannot schedule queue work " 15408 + "for CQ eqcqid=%d, cqid=%d on CPU %d\n", 15409 + cqid, cq->queue_id, 15410 + raw_smp_processor_id()); 15411 + break; 15521 15412 } 15522 - 15523 - /* wake up worker thread if there are works to be done */ 15524 - if (workposted) 15525 - lpfc_worker_wake_up(phba); 15526 - } 15527 - 15528 - /** 15529 - * lpfc_sli4_hba_process_cq - fast-path work handler when started by 15530 - * interrupt 15531 - * @work: pointer to work element 15532 - * 15533 - * translates from the work handler and calls the fast-path handler. 
15534 - **/ 15535 - static void 15536 - lpfc_sli4_hba_process_cq(struct work_struct *work) 15537 - { 15538 - struct lpfc_queue *cq = container_of(work, struct lpfc_queue, irqwork); 15539 - 15540 - __lpfc_sli4_hba_process_cq(cq, LPFC_QUEUE_WORK); 15541 15413 } 15542 15414 15543 15415 /** ··· 15507 15469 struct lpfc_queue *cq = container_of(to_delayed_work(work), 15508 15470 struct lpfc_queue, sched_irqwork); 15509 15471 15510 - __lpfc_sli4_hba_process_cq(cq, LPFC_QUEUE_WORK); 15472 + __lpfc_sli4_hba_process_cq(cq); 15511 15473 } 15512 15474 15513 15475 /** ··· 15533 15495 * and returns for these events. This function is called without any lock 15534 15496 * held. It gets the hbalock to access and update SLI data structures. 15535 15497 * 15536 - * This function returns IRQ_HANDLED when interrupt is handled else it 15537 - * returns IRQ_NONE. 15498 + * This function returns IRQ_HANDLED when interrupt is handled, IRQ_WAKE_THREAD 15499 + * when interrupt is scheduled to be handled from a threaded irq context, or 15500 + * else returns IRQ_NONE. 
15538 15501 **/ 15539 15502 irqreturn_t 15540 15503 lpfc_sli4_hba_intr_handler(int irq, void *dev_id) ··· 15544 15505 struct lpfc_hba_eq_hdl *hba_eq_hdl; 15545 15506 struct lpfc_queue *fpeq; 15546 15507 unsigned long iflag; 15547 - int ecount = 0; 15548 15508 int hba_eqidx; 15509 + int ecount = 0; 15549 15510 struct lpfc_eq_intr_info *eqi; 15550 15511 15551 15512 /* Get the driver's phba structure from the dev_id */ ··· 15574 15535 return IRQ_NONE; 15575 15536 } 15576 15537 15577 - eqi = this_cpu_ptr(phba->sli4_hba.eq_info); 15578 - eqi->icnt++; 15538 + switch (fpeq->poll_mode) { 15539 + case LPFC_THREADED_IRQ: 15540 + /* CGN mgmt is mutually exclusive from irq processing */ 15541 + if (phba->cmf_active_mode == LPFC_CFG_OFF) 15542 + return IRQ_WAKE_THREAD; 15543 + fallthrough; 15544 + case LPFC_QUEUE_WORK: 15545 + default: 15546 + eqi = this_cpu_ptr(phba->sli4_hba.eq_info); 15547 + eqi->icnt++; 15579 15548 15580 - fpeq->last_cpu = raw_smp_processor_id(); 15549 + fpeq->last_cpu = raw_smp_processor_id(); 15581 15550 15582 - if (eqi->icnt > LPFC_EQD_ISR_TRIGGER && 15583 - fpeq->q_flag & HBA_EQ_DELAY_CHK && 15584 - phba->cfg_auto_imax && 15585 - fpeq->q_mode != LPFC_MAX_AUTO_EQ_DELAY && 15586 - phba->sli.sli_flag & LPFC_SLI_USE_EQDR) 15587 - lpfc_sli4_mod_hba_eq_delay(phba, fpeq, LPFC_MAX_AUTO_EQ_DELAY); 15551 + if (eqi->icnt > LPFC_EQD_ISR_TRIGGER && 15552 + fpeq->q_flag & HBA_EQ_DELAY_CHK && 15553 + phba->cfg_auto_imax && 15554 + fpeq->q_mode != LPFC_MAX_AUTO_EQ_DELAY && 15555 + phba->sli.sli_flag & LPFC_SLI_USE_EQDR) 15556 + lpfc_sli4_mod_hba_eq_delay(phba, fpeq, 15557 + LPFC_MAX_AUTO_EQ_DELAY); 15588 15558 15589 - /* process and rearm the EQ */ 15590 - ecount = lpfc_sli4_process_eq(phba, fpeq, LPFC_QUEUE_REARM); 15559 + /* process and rearm the EQ */ 15560 + ecount = lpfc_sli4_process_eq(phba, fpeq, LPFC_QUEUE_REARM, 15561 + LPFC_QUEUE_WORK); 15591 15562 15592 - if (unlikely(ecount == 0)) { 15593 - fpeq->EQ_no_entry++; 15594 - if (phba->intr_type == MSIX) 15595 - 
/* MSI-X treated interrupt served as no EQ share INT */ 15596 - lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 15597 - "0358 MSI-X interrupt with no EQE\n"); 15598 - else 15599 - /* Non MSI-X treated on interrupt as EQ share INT */ 15600 - return IRQ_NONE; 15563 + if (unlikely(ecount == 0)) { 15564 + fpeq->EQ_no_entry++; 15565 + if (phba->intr_type == MSIX) 15566 + /* MSI-X treated interrupt served as no EQ share INT */ 15567 + lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 15568 + "0358 MSI-X interrupt with no EQE\n"); 15569 + else 15570 + /* Non MSI-X treated on interrupt as EQ share INT */ 15571 + return IRQ_NONE; 15572 + } 15601 15573 } 15602 15574 15603 15575 return IRQ_HANDLED; ··· 16165 16115 return status; 16166 16116 } 16167 16117 16168 - static int lpfc_cq_poll_hdler(struct irq_poll *iop, int budget) 16118 + /** 16119 + * lpfc_sli4_hba_intr_handler_th - SLI4 HBA threaded interrupt handler 16120 + * @irq: Interrupt number. 16121 + * @dev_id: The device context pointer. 16122 + * 16123 + * This routine is a mirror of lpfc_sli4_hba_intr_handler, but executed within 16124 + * threaded irq context. 
16125 + * 16126 + * Returns 16127 + * IRQ_HANDLED - interrupt is handled 16128 + * IRQ_NONE - otherwise 16129 + **/ 16130 + irqreturn_t lpfc_sli4_hba_intr_handler_th(int irq, void *dev_id) 16169 16131 { 16170 - struct lpfc_queue *cq = container_of(iop, struct lpfc_queue, iop); 16132 + struct lpfc_hba *phba; 16133 + struct lpfc_hba_eq_hdl *hba_eq_hdl; 16134 + struct lpfc_queue *fpeq; 16135 + int ecount = 0; 16136 + int hba_eqidx; 16137 + struct lpfc_eq_intr_info *eqi; 16171 16138 16172 - __lpfc_sli4_hba_process_cq(cq, LPFC_IRQ_POLL); 16139 + /* Get the driver's phba structure from the dev_id */ 16140 + hba_eq_hdl = (struct lpfc_hba_eq_hdl *)dev_id; 16141 + phba = hba_eq_hdl->phba; 16142 + hba_eqidx = hba_eq_hdl->idx; 16173 16143 16174 - return 1; 16144 + if (unlikely(!phba)) 16145 + return IRQ_NONE; 16146 + if (unlikely(!phba->sli4_hba.hdwq)) 16147 + return IRQ_NONE; 16148 + 16149 + /* Get to the EQ struct associated with this vector */ 16150 + fpeq = phba->sli4_hba.hba_eq_hdl[hba_eqidx].eq; 16151 + if (unlikely(!fpeq)) 16152 + return IRQ_NONE; 16153 + 16154 + eqi = per_cpu_ptr(phba->sli4_hba.eq_info, raw_smp_processor_id()); 16155 + eqi->icnt++; 16156 + 16157 + fpeq->last_cpu = raw_smp_processor_id(); 16158 + 16159 + if (eqi->icnt > LPFC_EQD_ISR_TRIGGER && 16160 + fpeq->q_flag & HBA_EQ_DELAY_CHK && 16161 + phba->cfg_auto_imax && 16162 + fpeq->q_mode != LPFC_MAX_AUTO_EQ_DELAY && 16163 + phba->sli.sli_flag & LPFC_SLI_USE_EQDR) 16164 + lpfc_sli4_mod_hba_eq_delay(phba, fpeq, LPFC_MAX_AUTO_EQ_DELAY); 16165 + 16166 + /* process and rearm the EQ */ 16167 + ecount = lpfc_sli4_process_eq(phba, fpeq, LPFC_QUEUE_REARM, 16168 + LPFC_THREADED_IRQ); 16169 + 16170 + if (unlikely(ecount == 0)) { 16171 + fpeq->EQ_no_entry++; 16172 + if (phba->intr_type == MSIX) 16173 + /* MSI-X treated interrupt served as no EQ share INT */ 16174 + lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 16175 + "3358 MSI-X interrupt with no EQE\n"); 16176 + else 16177 + /* Non MSI-X treated on interrupt as EQ 
share INT */ 16178 + return IRQ_NONE; 16179 + } 16180 + return IRQ_HANDLED; 16175 16181 } 16176 16182 16177 16183 /** ··· 16371 16265 16372 16266 if (cq->queue_id > phba->sli4_hba.cq_max) 16373 16267 phba->sli4_hba.cq_max = cq->queue_id; 16374 - 16375 - irq_poll_init(&cq->iop, LPFC_IRQ_POLL_WEIGHT, lpfc_cq_poll_hdler); 16376 16268 out: 16377 16269 mempool_free(mbox, phba->mbox_mem_pool); 16378 16270 return status; ··· 20800 20696 if (shdr_add_status == LPFC_ADD_STATUS_INCOMPAT_OBJ) { 20801 20697 switch (shdr_add_status_2) { 20802 20698 case LPFC_ADD_STATUS_2_INCOMPAT_FLASH: 20803 - lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_SLI, 20804 - "4199 Firmware write failed: " 20805 - "image incompatible with flash x%02x\n", 20806 - phba->sli4_hba.flash_id); 20699 + lpfc_log_msg(phba, KERN_WARNING, LOG_MBOX | LOG_SLI, 20700 + "4199 Firmware write failed: " 20701 + "image incompatible with flash x%02x\n", 20702 + phba->sli4_hba.flash_id); 20807 20703 break; 20808 20704 case LPFC_ADD_STATUS_2_INCORRECT_ASIC: 20809 - lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_SLI, 20810 - "4200 Firmware write failed: " 20811 - "image incompatible with ASIC " 20812 - "architecture x%02x\n", 20813 - phba->sli4_hba.asic_rev); 20705 + lpfc_log_msg(phba, KERN_WARNING, LOG_MBOX | LOG_SLI, 20706 + "4200 Firmware write failed: " 20707 + "image incompatible with ASIC " 20708 + "architecture x%02x\n", 20709 + phba->sli4_hba.asic_rev); 20814 20710 break; 20815 20711 default: 20816 - lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_SLI, 20817 - "4210 Firmware write failed: " 20818 - "add_status_2 x%02x\n", 20819 - shdr_add_status_2); 20712 + lpfc_log_msg(phba, KERN_WARNING, LOG_MBOX | LOG_SLI, 20713 + "4210 Firmware write failed: " 20714 + "add_status_2 x%02x\n", 20715 + shdr_add_status_2); 20820 20716 break; 20821 20717 } 20822 20718 } else if (!shdr_status && !shdr_add_status) { ··· 20829 20725 20830 20726 switch (shdr_change_status) { 20831 20727 case 
(LPFC_CHANGE_STATUS_PHYS_DEV_RESET): 20832 - lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI, 20833 - "3198 Firmware write complete: System " 20834 - "reboot required to instantiate\n"); 20728 + lpfc_log_msg(phba, KERN_NOTICE, LOG_MBOX | LOG_SLI, 20729 + "3198 Firmware write complete: System " 20730 + "reboot required to instantiate\n"); 20835 20731 break; 20836 20732 case (LPFC_CHANGE_STATUS_FW_RESET): 20837 - lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI, 20838 - "3199 Firmware write complete: " 20839 - "Firmware reset required to " 20840 - "instantiate\n"); 20733 + lpfc_log_msg(phba, KERN_NOTICE, LOG_MBOX | LOG_SLI, 20734 + "3199 Firmware write complete: " 20735 + "Firmware reset required to " 20736 + "instantiate\n"); 20841 20737 break; 20842 20738 case (LPFC_CHANGE_STATUS_PORT_MIGRATION): 20843 - lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI, 20844 - "3200 Firmware write complete: Port " 20845 - "Migration or PCI Reset required to " 20846 - "instantiate\n"); 20739 + lpfc_log_msg(phba, KERN_NOTICE, LOG_MBOX | LOG_SLI, 20740 + "3200 Firmware write complete: Port " 20741 + "Migration or PCI Reset required to " 20742 + "instantiate\n"); 20847 20743 break; 20848 20744 case (LPFC_CHANGE_STATUS_PCI_RESET): 20849 - lpfc_printf_log(phba, KERN_INFO, LOG_MBOX | LOG_SLI, 20850 - "3201 Firmware write complete: PCI " 20851 - "Reset required to instantiate\n"); 20745 + lpfc_log_msg(phba, KERN_NOTICE, LOG_MBOX | LOG_SLI, 20746 + "3201 Firmware write complete: PCI " 20747 + "Reset required to instantiate\n"); 20852 20748 break; 20853 20749 default: 20854 20750 break;
+1 -3
drivers/scsi/lpfc/lpfc_sli4.h
··· 140 140 141 141 enum lpfc_poll_mode { 142 142 LPFC_QUEUE_WORK, 143 - LPFC_IRQ_POLL 143 + LPFC_THREADED_IRQ, 144 144 }; 145 145 146 146 struct lpfc_idle_stat { ··· 279 279 struct list_head _poll_list; 280 280 void **q_pgs; /* array to index entries per page */ 281 281 282 - #define LPFC_IRQ_POLL_WEIGHT 256 283 - struct irq_poll iop; 284 282 enum lpfc_poll_mode poll_mode; 285 283 }; 286 284
+1 -1
drivers/scsi/lpfc/lpfc_version.h
··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "14.2.0.11" 23 + #define LPFC_DRIVER_VERSION "14.2.0.13" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */
+3 -3
drivers/scsi/megaraid/Kconfig.megaraid
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config MEGARAID_NEWGEN 3 3 bool "LSI Logic New Generation RAID Device Drivers" 4 - depends on PCI && SCSI 4 + depends on PCI && HAS_IOPORT && SCSI 5 5 help 6 6 LSI Logic RAID Device Drivers 7 7 8 8 config MEGARAID_MM 9 9 tristate "LSI Logic Management Module (New Driver)" 10 - depends on PCI && SCSI && MEGARAID_NEWGEN 10 + depends on PCI && HAS_IOPORT && SCSI && MEGARAID_NEWGEN 11 11 help 12 12 Management Module provides ioctl, sysfs support for LSI Logic 13 13 RAID controllers. ··· 67 67 68 68 config MEGARAID_LEGACY 69 69 tristate "LSI Logic Legacy MegaRAID Driver" 70 - depends on PCI && SCSI 70 + depends on PCI && HAS_IOPORT && SCSI 71 71 help 72 72 This driver supports the LSI MegaRAID 418, 428, 438, 466, 762, 490 73 73 and 467 SCSI host adapters. This driver also support the all U320
+3 -5
drivers/scsi/megaraid/megaraid_sas.h
··· 1722 1722 } __packed; 1723 1723 1724 1724 union megasas_sgl { 1725 - 1726 - struct megasas_sge32 sge32[1]; 1727 - struct megasas_sge64 sge64[1]; 1728 - struct megasas_sge_skinny sge_skinny[1]; 1729 - 1725 + DECLARE_FLEX_ARRAY(struct megasas_sge32, sge32); 1726 + DECLARE_FLEX_ARRAY(struct megasas_sge64, sge64); 1727 + DECLARE_FLEX_ARRAY(struct megasas_sge_skinny, sge_skinny); 1730 1728 } __attribute__ ((packed)); 1731 1729 1732 1730 struct megasas_header {
+4 -4
drivers/scsi/mpi3mr/mpi3mr.h
··· 1133 1133 u32 chain_buf_count; 1134 1134 struct dma_pool *chain_buf_pool; 1135 1135 struct chain_element *chain_sgl_list; 1136 - void *chain_bitmap; 1136 + unsigned long *chain_bitmap; 1137 1137 spinlock_t chain_buf_lock; 1138 1138 1139 1139 struct mpi3mr_drv_cmd bsg_cmds; 1140 1140 struct mpi3mr_drv_cmd host_tm_cmds; 1141 1141 struct mpi3mr_drv_cmd dev_rmhs_cmds[MPI3MR_NUM_DEVRMCMD]; 1142 1142 struct mpi3mr_drv_cmd evtack_cmds[MPI3MR_NUM_EVTACKCMD]; 1143 - void *devrem_bitmap; 1143 + unsigned long *devrem_bitmap; 1144 1144 u16 dev_handle_bitmap_bits; 1145 - void *removepend_bitmap; 1145 + unsigned long *removepend_bitmap; 1146 1146 struct list_head delayed_rmhs_list; 1147 - void *evtack_cmds_bitmap; 1147 + unsigned long *evtack_cmds_bitmap; 1148 1148 struct list_head delayed_evtack_cmds_list; 1149 1149 1150 1150 u32 ts_update_counter;
+6 -1
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 402 402 memcpy((u8 *)cmdptr->reply, (u8 *)def_reply, 403 403 mrioc->reply_sz); 404 404 } 405 + if (sense_buf && cmdptr->sensebuf) { 406 + cmdptr->is_sense = 1; 407 + memcpy(cmdptr->sensebuf, sense_buf, 408 + MPI3MR_SENSE_BUF_SZ); 409 + } 405 410 if (cmdptr->is_waiting) { 406 411 complete(&cmdptr->done); 407 412 cmdptr->is_waiting = 0; ··· 1139 1134 static int 1140 1135 mpi3mr_revalidate_factsdata(struct mpi3mr_ioc *mrioc) 1141 1136 { 1142 - void *removepend_bitmap; 1137 + unsigned long *removepend_bitmap; 1143 1138 1144 1139 if (mrioc->facts.reply_sz > mrioc->reply_sz) { 1145 1140 ioc_err(mrioc,
+1 -1
drivers/scsi/mpi3mr/mpi3mr_transport.c
··· 2058 2058 sas_expander = kzalloc(sizeof(struct mpi3mr_sas_node), 2059 2059 GFP_KERNEL); 2060 2060 if (!sas_expander) 2061 - return -1; 2061 + return -ENOMEM; 2062 2062 2063 2063 sas_expander->handle = handle; 2064 2064 sas_expander->num_phys = expander_pg0.num_phys;
+1 -1
drivers/scsi/mvsas/Kconfig
··· 9 9 10 10 config SCSI_MVSAS 11 11 tristate "Marvell 88SE64XX/88SE94XX SAS/SATA support" 12 - depends on PCI 12 + depends on PCI && HAS_IOPORT 13 13 select SCSI_SAS_LIBSAS 14 14 select FW_LOADER 15 15 help
+5 -1
drivers/scsi/pcmcia/Kconfig
··· 12 12 13 13 config PCMCIA_AHA152X 14 14 tristate "Adaptec AHA152X PCMCIA support" 15 + depends on HAS_IOPORT 15 16 select SCSI_SPI_ATTRS 16 17 help 17 18 Say Y here if you intend to attach this type of PCMCIA SCSI host ··· 23 22 24 23 config PCMCIA_FDOMAIN 25 24 tristate "Future Domain PCMCIA support" 25 + depends on HAS_IOPORT 26 26 select SCSI_FDOMAIN 27 27 help 28 28 Say Y here if you intend to attach this type of PCMCIA SCSI host ··· 34 32 35 33 config PCMCIA_NINJA_SCSI 36 34 tristate "NinjaSCSI-3 / NinjaSCSI-32Bi (16bit) PCMCIA support" 37 - depends on !64BIT || COMPILE_TEST 35 + depends on (!64BIT || COMPILE_TEST) && HAS_IOPORT 38 36 help 39 37 If you intend to attach this type of PCMCIA SCSI host adapter to 40 38 your computer, say Y here and read ··· 68 66 69 67 config PCMCIA_QLOGIC 70 68 tristate "Qlogic PCMCIA support" 69 + depends on HAS_IOPORT 71 70 help 72 71 Say Y here if you intend to attach this type of PCMCIA SCSI host 73 72 adapter to your computer. ··· 78 75 79 76 config PCMCIA_SYM53C500 80 77 tristate "Symbios 53c500 PCMCIA support" 78 + depends on HAS_IOPORT 81 79 help 82 80 Say Y here if you have a New Media Bus Toaster or other PCMCIA 83 81 SCSI adapter based on the Symbios 53c500 controller.
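Several Kconfig hunks in this series (megaraid, mvsas, and the PCMCIA adapters above) apply the same pattern: drivers that use port I/O (inb()/outb()) gain a HAS_IOPORT dependency so they cannot be selected on platforms without port-space support. The shape of the change, with a hypothetical driver entry for illustration:

```kconfig
config SCSI_EXAMPLE_HBA
	tristate "Example port-I/O SCSI HBA"
	depends on PCI && HAS_IOPORT && SCSI
	help
	  Hypothetical entry showing the HAS_IOPORT dependency added
	  throughout this series for drivers that touch I/O port space.
```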
+24 -8
drivers/scsi/pm8001/pm8001_init.c
··· 43 43 #include "pm8001_chips.h" 44 44 #include "pm80xx_hwi.h" 45 45 46 - static ulong logging_level = PM8001_FAIL_LOGGING | PM8001_IOERR_LOGGING; 46 + static ulong logging_level = PM8001_FAIL_LOGGING | PM8001_IOERR_LOGGING | 47 + PM8001_EVENT_LOGGING | PM8001_INIT_LOGGING; 47 48 module_param(logging_level, ulong, 0644); 48 49 MODULE_PARM_DESC(logging_level, " bits for enabling logging info."); 49 50 ··· 667 666 * Currently we just set the fixed SAS address to our HBA, for manufacture, 668 667 * it should read from the EEPROM 669 668 */ 670 - static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha) 669 + static int pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha) 671 670 { 672 671 u8 i, j; 673 672 u8 sas_add[8]; ··· 680 679 struct pm8001_ioctl_payload payload; 681 680 u16 deviceid; 682 681 int rc; 682 + unsigned long time_remaining; 683 + 684 + if (PM8001_CHIP_DISP->fatal_errors(pm8001_ha)) { 685 + pm8001_dbg(pm8001_ha, FAIL, "controller is in fatal error state\n"); 686 + return -EIO; 687 + } 683 688 684 689 pci_read_config_word(pm8001_ha->pdev, PCI_DEVICE_ID, &deviceid); 685 690 pm8001_ha->nvmd_completion = &completion; ··· 710 703 payload.offset = 0; 711 704 payload.func_specific = kzalloc(payload.rd_length, GFP_KERNEL); 712 705 if (!payload.func_specific) { 713 - pm8001_dbg(pm8001_ha, INIT, "mem alloc fail\n"); 714 - return; 706 + pm8001_dbg(pm8001_ha, FAIL, "mem alloc fail\n"); 707 + return -ENOMEM; 715 708 } 716 709 rc = PM8001_CHIP_DISP->get_nvmd_req(pm8001_ha, &payload); 717 710 if (rc) { 718 711 kfree(payload.func_specific); 719 - pm8001_dbg(pm8001_ha, INIT, "nvmd failed\n"); 720 - return; 712 + pm8001_dbg(pm8001_ha, FAIL, "nvmd failed\n"); 713 + return -EIO; 721 714 } 722 - wait_for_completion(&completion); 715 + time_remaining = wait_for_completion_timeout(&completion, 716 + msecs_to_jiffies(60*1000)); // 1 min 717 + if (!time_remaining) { 718 + kfree(payload.func_specific); 719 + pm8001_dbg(pm8001_ha, FAIL, "get_nvmd_req timeout\n"); 
720 + return -EIO; 721 + } 722 + 723 723 724 724 for (i = 0, j = 0; i <= 7; i++, j++) { 725 725 if (pm8001_ha->chip_id == chip_8001) { ··· 765 751 memcpy(pm8001_ha->sas_addr, &pm8001_ha->phy[0].dev_sas_addr, 766 752 SAS_ADDR_SIZE); 767 753 #endif 754 + return 0; 768 755 } 769 756 770 757 /* ··· 1181 1166 pm80xx_set_thermal_config(pm8001_ha); 1182 1167 } 1183 1168 1184 - pm8001_init_sas_add(pm8001_ha); 1169 + if (pm8001_init_sas_add(pm8001_ha)) 1170 + goto err_out_shost; 1185 1171 /* phy setting support for motherboard controller */ 1186 1172 rc = pm8001_configure_phy_settings(pm8001_ha); 1187 1173 if (rc)
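The pm8001_init_sas_add() change above replaces an unbounded wait_for_completion() with wait_for_completion_timeout(), so a dead controller can no longer hang the probe path. A userspace sketch of the same bounded-wait contract, polling a flag instead of a real completion (names are illustrative, not driver code):

```c
#include <assert.h>
#include <time.h>

/* Returns "time remaining" (> 0) if the flag was set before the deadline,
 * or 0 on expiry -- mirroring wait_for_completion_timeout()'s contract,
 * where a 0 return makes the caller bail out with -EIO. */
static long wait_flag_timeout(const volatile int *flag, long timeout_ms)
{
	struct timespec tick = { .tv_sec = 0, .tv_nsec = 1000 * 1000 };

	while (timeout_ms > 0) {
		if (*flag)
			return timeout_ms;
		nanosleep(&tick, NULL);
		timeout_ms--;
	}
	return 0;
}
```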
+22
drivers/scsi/pm8001/pm8001_sas.c
··· 167 167 pm8001_ha = sas_phy->ha->lldd_ha; 168 168 phy = &pm8001_ha->phy[phy_id]; 169 169 pm8001_ha->phy[phy_id].enable_completion = &completion; 170 + 171 + if (PM8001_CHIP_DISP->fatal_errors(pm8001_ha)) { 172 + /* 173 + * If the controller is in fatal error state, 174 + * we will not get a response from the controller 175 + */ 176 + pm8001_dbg(pm8001_ha, FAIL, 177 + "Phy control failed due to fatal errors\n"); 178 + return -EFAULT; 179 + } 180 + 170 181 switch (func) { 171 182 case PHY_FUNC_SET_LINK_RATE: 172 183 rates = funcdata; ··· 919 908 struct pm8001_device *pm8001_dev = dev->lldd_dev; 920 909 struct pm8001_hba_info *pm8001_ha = pm8001_find_ha_by_dev(dev); 921 910 DECLARE_COMPLETION_ONSTACK(completion_setstate); 911 + 912 + if (PM8001_CHIP_DISP->fatal_errors(pm8001_ha)) { 913 + /* 914 + * If the controller is in fatal error state, 915 + * we will not get a response from the controller 916 + */ 917 + pm8001_dbg(pm8001_ha, FAIL, 918 + "LUN reset failed due to fatal errors\n"); 919 + return rc; 920 + } 921 + 922 922 if (dev_is_sata(dev)) { 923 923 struct sas_phy *phy = sas_get_local_phy(dev); 924 924 sas_execute_internal_abort_dev(dev, 0, NULL);
+1
drivers/scsi/pm8001/pm8001_sas.h
··· 71 71 #define PM8001_DEV_LOGGING 0x80 /* development message logging */ 72 72 #define PM8001_DEVIO_LOGGING 0x100 /* development io message logging */ 73 73 #define PM8001_IOERR_LOGGING 0x200 /* development io err message logging */ 74 + #define PM8001_EVENT_LOGGING 0x400 /* HW event logging */ 74 75 75 76 #define pm8001_info(HBA, fmt, ...) \ 76 77 pr_info("%s:: %s %d: " fmt, \
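PM8001_EVENT_LOGGING above adds one more bit to the driver's logging_level bitmask; a message class is printed only when its bit is set in the module parameter. A small sketch of that gate — log_enabled() is an illustrative stand-in for the check inside pm8001_dbg():

```c
#include <assert.h>

/* Bit values copied from the header hunk above; each class is one bit. */
#define PM8001_DEV_LOGGING	0x80UL
#define PM8001_DEVIO_LOGGING	0x100UL
#define PM8001_IOERR_LOGGING	0x200UL
#define PM8001_EVENT_LOGGING	0x400UL	/* new: HW event logging */

static int log_enabled(unsigned long logging_level, unsigned long class_bit)
{
	return (logging_level & class_bit) != 0;
}
```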
+82 -44
drivers/scsi/pm8001/pm80xx_hwi.c
··· 3239 3239 struct pm8001_port *port = &pm8001_ha->port[port_id]; 3240 3240 struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; 3241 3241 unsigned long flags; 3242 - pm8001_dbg(pm8001_ha, DEVIO, 3243 - "port id %d, phy id %d link_rate %d portstate 0x%x\n", 3244 - port_id, phy_id, link_rate, portstate); 3242 + pm8001_dbg(pm8001_ha, EVENT, 3243 + "HW_EVENT_SATA_PHY_UP phyid:%#x port_id:%#x link_rate:%d portstate:%#x\n", 3244 + phy_id, port_id, link_rate, portstate); 3245 3245 3246 3246 phy->port = port; 3247 3247 port->port_id = port_id; ··· 3291 3291 phy->phy_attached = 0; 3292 3292 switch (portstate) { 3293 3293 case PORT_VALID: 3294 + pm8001_dbg(pm8001_ha, EVENT, 3295 + "HW_EVENT_PHY_DOWN phyid:%#x port_id:%#x portstate: PORT_VALID\n", 3296 + phy_id, port_id); 3294 3297 break; 3295 3298 case PORT_INVALID: 3296 - pm8001_dbg(pm8001_ha, MSG, " PortInvalid portID %d\n", 3297 - port_id); 3299 + pm8001_dbg(pm8001_ha, EVENT, 3300 + "HW_EVENT_PHY_DOWN phyid:%#x port_id:%#x portstate: PORT_INVALID\n", 3301 + phy_id, port_id); 3298 3302 pm8001_dbg(pm8001_ha, MSG, 3299 3303 " Last phy Down and port invalid\n"); 3300 3304 if (port_sata) { ··· 3310 3306 sas_phy_disconnected(&phy->sas_phy); 3311 3307 break; 3312 3308 case PORT_IN_RESET: 3313 - pm8001_dbg(pm8001_ha, MSG, " Port In Reset portID %d\n", 3314 - port_id); 3309 + pm8001_dbg(pm8001_ha, EVENT, 3310 + "HW_EVENT_PHY_DOWN phyid:%#x port_id:%#x portstate: PORT_IN_RESET\n", 3311 + phy_id, port_id); 3315 3312 break; 3316 3313 case PORT_NOT_ESTABLISHED: 3317 - pm8001_dbg(pm8001_ha, MSG, 3318 - " Phy Down and PORT_NOT_ESTABLISHED\n"); 3314 + pm8001_dbg(pm8001_ha, EVENT, 3315 + "HW_EVENT_PHY_DOWN phyid:%#x port_id:%#x portstate: PORT_NOT_ESTABLISHED\n", 3316 + phy_id, port_id); 3319 3317 port->port_attached = 0; 3320 3318 break; 3321 3319 case PORT_LOSTCOMM: 3322 - pm8001_dbg(pm8001_ha, MSG, " Phy Down and PORT_LOSTCOMM\n"); 3323 - pm8001_dbg(pm8001_ha, MSG, 3324 - " Last phy Down and port invalid\n"); 3320 + 
pm8001_dbg(pm8001_ha, EVENT, 3321 + "HW_EVENT_PHY_DOWN phyid:%#x port_id:%#x portstate: PORT_LOSTCOMM\n", 3322 + phy_id, port_id); 3323 + pm8001_dbg(pm8001_ha, MSG, " Last phy Down and port invalid\n"); 3325 3324 if (port_sata) { 3326 3325 port->port_attached = 0; 3327 3326 phy->phy_type = 0; ··· 3335 3328 break; 3336 3329 default: 3337 3330 port->port_attached = 0; 3338 - pm8001_dbg(pm8001_ha, DEVIO, 3339 - " Phy Down and(default) = 0x%x\n", 3340 - portstate); 3331 + pm8001_dbg(pm8001_ha, EVENT, 3332 + "HW_EVENT_PHY_DOWN phyid:%#x port_id:%#x portstate:%#x\n", 3333 + phy_id, port_id, portstate); 3341 3334 break; 3342 3335 3343 3336 } ··· 3417 3410 u8 port_id = (u8)(lr_status_evt_portid & 0x000000FF); 3418 3411 u8 phy_id = 3419 3412 (u8)((phyid_npip_portstate & 0xFF0000) >> 16); 3413 + u8 portstate = (u8)(phyid_npip_portstate & 0x0000000F); 3420 3414 u16 eventType = 3421 3415 (u16)((lr_status_evt_portid & 0x00FFFF00) >> 8); 3422 3416 u8 status = ··· 3433 3425 switch (eventType) { 3434 3426 3435 3427 case HW_EVENT_SAS_PHY_UP: 3436 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_START_STATUS\n"); 3428 + pm8001_dbg(pm8001_ha, EVENT, 3429 + "HW_EVENT_SAS_PHY_UP phyid:%#x port_id:%#x\n", 3430 + phy_id, port_id); 3437 3431 hw_event_sas_phy_up(pm8001_ha, piomb); 3438 3432 break; 3439 3433 case HW_EVENT_SATA_PHY_UP: 3440 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_SATA_PHY_UP\n"); 3441 3434 hw_event_sata_phy_up(pm8001_ha, piomb); 3442 3435 break; 3443 3436 case HW_EVENT_SATA_SPINUP_HOLD: 3444 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_SATA_SPINUP_HOLD\n"); 3437 + pm8001_dbg(pm8001_ha, EVENT, 3438 + "HW_EVENT_SATA_SPINUP_HOLD phyid:%#x port_id:%#x\n", 3439 + phy_id, port_id); 3445 3440 sas_notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD, 3446 3441 GFP_ATOMIC); 3447 3442 break; 3448 3443 case HW_EVENT_PHY_DOWN: 3449 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_DOWN\n"); 3450 3444 hw_event_phy_down(pm8001_ha, piomb); 3451 - phy->phy_attached = 0; 3452 3445 phy->phy_state = PHY_LINK_DISABLE; 
3453 3446 break; 3454 3447 case HW_EVENT_PORT_INVALID: 3455 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_INVALID\n"); 3448 + pm8001_dbg(pm8001_ha, EVENT, 3449 + "HW_EVENT_PORT_INVALID phyid:%#x port_id:%#x\n", 3450 + phy_id, port_id); 3456 3451 sas_phy_disconnected(sas_phy); 3457 3452 phy->phy_attached = 0; 3458 3453 sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR, ··· 3474 3463 GFP_ATOMIC); 3475 3464 break; 3476 3465 case HW_EVENT_PHY_ERROR: 3477 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PHY_ERROR\n"); 3466 + pm8001_dbg(pm8001_ha, EVENT, 3467 + "HW_EVENT_PHY_ERROR phyid:%#x port_id:%#x\n", 3468 + phy_id, port_id); 3478 3469 sas_phy_disconnected(&phy->sas_phy); 3479 3470 phy->phy_attached = 0; 3480 3471 sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR, GFP_ATOMIC); ··· 3490 3477 GFP_ATOMIC); 3491 3478 break; 3492 3479 case HW_EVENT_LINK_ERR_INVALID_DWORD: 3493 - pm8001_dbg(pm8001_ha, MSG, 3494 - "HW_EVENT_LINK_ERR_INVALID_DWORD\n"); 3480 + pm8001_dbg(pm8001_ha, EVENT, 3481 + "HW_EVENT_LINK_ERR_INVALID_DWORD phyid:%#x port_id:%#x\n", 3482 + phy_id, port_id); 3495 3483 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3496 3484 HW_EVENT_LINK_ERR_INVALID_DWORD, port_id, phy_id, 0, 0); 3497 3485 break; 3498 3486 case HW_EVENT_LINK_ERR_DISPARITY_ERROR: 3499 - pm8001_dbg(pm8001_ha, MSG, 3500 - "HW_EVENT_LINK_ERR_DISPARITY_ERROR\n"); 3487 + pm8001_dbg(pm8001_ha, EVENT, 3488 + "HW_EVENT_LINK_ERR_DISPARITY_ERROR phyid:%#x port_id:%#x\n", 3489 + phy_id, port_id); 3501 3490 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3502 3491 HW_EVENT_LINK_ERR_DISPARITY_ERROR, 3503 3492 port_id, phy_id, 0, 0); 3504 3493 break; 3505 3494 case HW_EVENT_LINK_ERR_CODE_VIOLATION: 3506 - pm8001_dbg(pm8001_ha, MSG, 3507 - "HW_EVENT_LINK_ERR_CODE_VIOLATION\n"); 3495 + pm8001_dbg(pm8001_ha, EVENT, 3496 + "HW_EVENT_LINK_ERR_CODE_VIOLATION phyid:%#x port_id:%#x\n", 3497 + phy_id, port_id); 3508 3498 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3509 3499 HW_EVENT_LINK_ERR_CODE_VIOLATION, 3510 3500 port_id, phy_id, 0, 0); 3511 
3501 break; 3512 3502 case HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH: 3513 - pm8001_dbg(pm8001_ha, MSG, 3514 - "HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH\n"); 3503 + pm8001_dbg(pm8001_ha, EVENT, 3504 + "HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH phyid:%#x port_id:%#x\n", 3505 + phy_id, port_id); 3515 3506 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3516 3507 HW_EVENT_LINK_ERR_LOSS_OF_DWORD_SYNCH, 3517 3508 port_id, phy_id, 0, 0); 3518 3509 break; 3519 3510 case HW_EVENT_MALFUNCTION: 3520 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_MALFUNCTION\n"); 3511 + pm8001_dbg(pm8001_ha, EVENT, 3512 + "HW_EVENT_MALFUNCTION phyid:%#x\n", phy_id); 3521 3513 break; 3522 3514 case HW_EVENT_BROADCAST_SES: 3523 3515 pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_BROADCAST_SES\n"); ··· 3533 3515 GFP_ATOMIC); 3534 3516 break; 3535 3517 case HW_EVENT_INBOUND_CRC_ERROR: 3536 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_INBOUND_CRC_ERROR\n"); 3518 + pm8001_dbg(pm8001_ha, EVENT, 3519 + "HW_EVENT_INBOUND_CRC_ERROR phyid:%#x port_id:%#x\n", 3520 + phy_id, port_id); 3537 3521 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3538 3522 HW_EVENT_INBOUND_CRC_ERROR, 3539 3523 port_id, phy_id, 0, 0); 3540 3524 break; 3541 3525 case HW_EVENT_HARD_RESET_RECEIVED: 3542 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_HARD_RESET_RECEIVED\n"); 3526 + pm8001_dbg(pm8001_ha, EVENT, 3527 + "HW_EVENT_HARD_RESET_RECEIVED phyid:%#x\n", phy_id); 3543 3528 sas_notify_port_event(sas_phy, PORTE_HARD_RESET, GFP_ATOMIC); 3544 3529 break; 3545 3530 case HW_EVENT_ID_FRAME_TIMEOUT: 3546 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_ID_FRAME_TIMEOUT\n"); 3531 + pm8001_dbg(pm8001_ha, EVENT, 3532 + "HW_EVENT_ID_FRAME_TIMEOUT phyid:%#x\n", phy_id); 3547 3533 sas_phy_disconnected(sas_phy); 3548 3534 phy->phy_attached = 0; 3549 3535 sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR, 3550 3536 GFP_ATOMIC); 3551 3537 break; 3552 3538 case HW_EVENT_LINK_ERR_PHY_RESET_FAILED: 3553 - pm8001_dbg(pm8001_ha, MSG, 3554 - "HW_EVENT_LINK_ERR_PHY_RESET_FAILED\n"); 3539 + pm8001_dbg(pm8001_ha, EVENT, 
3540 + "HW_EVENT_LINK_ERR_PHY_RESET_FAILED phyid:%#x port_id:%#x\n", 3541 + phy_id, port_id); 3555 3542 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3556 3543 HW_EVENT_LINK_ERR_PHY_RESET_FAILED, 3557 3544 port_id, phy_id, 0, 0); ··· 3566 3543 GFP_ATOMIC); 3567 3544 break; 3568 3545 case HW_EVENT_PORT_RESET_TIMER_TMO: 3569 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RESET_TIMER_TMO\n"); 3546 + pm8001_dbg(pm8001_ha, EVENT, 3547 + "HW_EVENT_PORT_RESET_TIMER_TMO phyid:%#x port_id:%#x portstate:%#x\n", 3548 + phy_id, port_id, portstate); 3570 3549 if (!pm8001_ha->phy[phy_id].reset_completion) { 3571 3550 pm80xx_hw_event_ack_req(pm8001_ha, 0, HW_EVENT_PHY_DOWN, 3572 3551 port_id, phy_id, 0, 0); 3573 3552 } 3574 3553 sas_phy_disconnected(sas_phy); 3575 3554 phy->phy_attached = 0; 3555 + port->port_state = portstate; 3576 3556 sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR, 3577 3557 GFP_ATOMIC); 3578 3558 if (pm8001_ha->phy[phy_id].reset_completion) { ··· 3586 3560 } 3587 3561 break; 3588 3562 case HW_EVENT_PORT_RECOVERY_TIMER_TMO: 3589 - pm8001_dbg(pm8001_ha, MSG, 3590 - "HW_EVENT_PORT_RECOVERY_TIMER_TMO\n"); 3563 + pm8001_dbg(pm8001_ha, EVENT, 3564 + "HW_EVENT_PORT_RECOVERY_TIMER_TMO phyid:%#x port_id:%#x\n", 3565 + phy_id, port_id); 3591 3566 pm80xx_hw_event_ack_req(pm8001_ha, 0, 3592 3567 HW_EVENT_PORT_RECOVERY_TIMER_TMO, 3593 3568 port_id, phy_id, 0, 0); ··· 3602 3575 } 3603 3576 break; 3604 3577 case HW_EVENT_PORT_RECOVER: 3605 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RECOVER\n"); 3578 + pm8001_dbg(pm8001_ha, EVENT, 3579 + "HW_EVENT_PORT_RECOVER phyid:%#x port_id:%#x\n", 3580 + phy_id, port_id); 3606 3581 hw_event_port_recover(pm8001_ha, piomb); 3607 3582 break; 3608 3583 case HW_EVENT_PORT_RESET_COMPLETE: 3609 - pm8001_dbg(pm8001_ha, MSG, "HW_EVENT_PORT_RESET_COMPLETE\n"); 3584 + pm8001_dbg(pm8001_ha, EVENT, 3585 + "HW_EVENT_PORT_RESET_COMPLETE phyid:%#x port_id:%#x portstate:%#x\n", 3586 + phy_id, port_id, portstate); 3610 3587 if 
(pm8001_ha->phy[phy_id].reset_completion) { 3611 3588 pm8001_ha->phy[phy_id].port_reset_status = 3612 3589 PORT_RESET_SUCCESS; 3613 3590 complete(pm8001_ha->phy[phy_id].reset_completion); 3614 3591 pm8001_ha->phy[phy_id].reset_completion = NULL; 3615 3592 } 3593 + phy->phy_attached = 1; 3594 + phy->phy_state = PHY_STATE_LINK_UP_SPCV; 3595 + port->port_state = portstate; 3616 3596 break; 3617 3597 case EVENT_BROADCAST_ASYNCH_EVENT: 3618 3598 pm8001_dbg(pm8001_ha, MSG, "EVENT_BROADCAST_ASYNCH_EVENT\n"); 3619 3599 break; 3620 3600 default: 3621 - pm8001_dbg(pm8001_ha, DEVIO, "Unknown event type 0x%x\n", 3622 - eventType); 3601 + pm8001_dbg(pm8001_ha, DEVIO, 3602 + "Unknown event portid:%d phyid:%d event:0x%x status:0x%x\n", 3603 + port_id, phy_id, eventType, status); 3623 3604 break; 3624 3605 } 3625 3606 return 0; ··· 4761 4726 memcpy(payload.sas_addr, pm8001_dev->sas_device->sas_addr, 4762 4727 SAS_ADDR_SIZE); 4763 4728 4729 + pm8001_dbg(pm8001_ha, INIT, 4730 + "register device req phy_id 0x%x port_id 0x%x\n", phy_id, 4731 + (port->port_id & 0xFF)); 4764 4732 rc = pm8001_mpi_build_cmd(pm8001_ha, 0, opc, &payload, 4765 4733 sizeof(payload), 0); 4766 4734 if (rc) ··· 4853 4815 payload.tag = cpu_to_le32(tag); 4854 4816 payload.ppc_phyid = 4855 4817 cpu_to_le32(((operation & 0xF) << 8) | (phyid & 0xFF)); 4856 - pm8001_dbg(pm8001_ha, INIT, 4818 + pm8001_dbg(pm8001_ha, DISC, 4857 4819 " phy profile command for phy %x ,length is %d\n", 4858 4820 le32_to_cpu(payload.ppc_phyid), length); 4859 4821 for (i = length; i < (length + PHY_DWORD_LENGTH - 1); i++) {
+1 -2
drivers/scsi/qedf/qedf_main.c
··· 3041 3041 * addresses of our queues 3042 3042 */ 3043 3043 if (!qedf->p_cpuq) { 3044 - status = -EINVAL; 3045 3044 QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n"); 3046 - goto mem_alloc_failure; 3045 + return -EINVAL; 3047 3046 } 3048 3047 3049 3048 qedf->global_queues = kzalloc((sizeof(struct global_queue *)
+1 -1
drivers/scsi/qla2xxx/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config SCSI_QLA_FC 3 3 tristate "QLogic QLA2XXX Fibre Channel Support" 4 - depends on PCI && SCSI 4 + depends on PCI && HAS_IOPORT && SCSI 5 5 depends on SCSI_FC_ATTRS 6 6 depends on NVME_FC || !NVME_FC 7 7 select FW_LOADER
+13
drivers/scsi/qla2xxx/qla_attr.c
··· 2750 2750 qla2x00_terminate_rport_io(struct fc_rport *rport) 2751 2751 { 2752 2752 fc_port_t *fcport = *(fc_port_t **)rport->dd_data; 2753 + scsi_qla_host_t *vha; 2753 2754 2754 2755 if (!fcport) 2755 2756 return; ··· 2760 2759 2761 2760 if (test_bit(ABORT_ISP_ACTIVE, &fcport->vha->dpc_flags)) 2762 2761 return; 2762 + vha = fcport->vha; 2763 2763 2764 2764 if (unlikely(pci_channel_offline(fcport->vha->hw->pdev))) { 2765 2765 qla2x00_abort_all_cmds(fcport->vha, DID_NO_CONNECT << 16); 2766 + qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24, 2767 + 0, WAIT_TARGET); 2766 2768 return; 2767 2769 } 2768 2770 /* ··· 2789 2785 } else if (!IS_FWI2_CAPABLE(fcport->vha->hw)) { 2790 2786 qla2x00_port_logout(fcport->vha, fcport); 2791 2787 } 2788 + } 2789 + 2790 + /* check for any straggling io left behind */ 2791 + if (qla2x00_eh_wait_for_pending_commands(fcport->vha, fcport->d_id.b24, 0, WAIT_TARGET)) { 2792 + ql_log(ql_log_warn, vha, 0x300b, 2793 + "IO not return. Resetting. \n"); 2794 + set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); 2795 + qla2xxx_wake_dpc(vha); 2796 + qla2x00_wait_for_chip_reset(vha); 2792 2797 } 2793 2798 } 2794 2799
+6
drivers/scsi/qla2xxx/qla_bsg.c
··· 283 283 284 284 if (bsg_request->msgcode == FC_BSG_RPT_ELS) { 285 285 rport = fc_bsg_to_rport(bsg_job); 286 + if (!rport) { 287 + rval = -ENOMEM; 288 + goto done; 289 + } 286 290 fcport = *(fc_port_t **) rport->dd_data; 287 291 host = rport_to_shost(rport); 288 292 vha = shost_priv(host); ··· 2996 2992 2997 2993 if (bsg_request->msgcode == FC_BSG_RPT_ELS) { 2998 2994 rport = fc_bsg_to_rport(bsg_job); 2995 + if (!rport) 2996 + return ret; 2999 2997 host = rport_to_shost(rport); 3000 2998 vha = shost_priv(host); 3001 2999 } else {
+23 -3
drivers/scsi/qla2xxx/qla_def.h
··· 465 465 return res; 466 466 } 467 467 468 + struct tmf_arg { 469 + struct qla_qpair *qpair; 470 + struct fc_port *fcport; 471 + struct scsi_qla_host *vha; 472 + u64 lun; 473 + u32 flags; 474 + uint8_t modifier; 475 + }; 476 + 468 477 struct els_logo_payload { 469 478 uint8_t opcode; 470 479 uint8_t rsvd[3]; ··· 553 544 uint32_t data; 554 545 struct completion comp; 555 546 __le16 comp_status; 547 + 548 + uint8_t modifier; 549 + uint8_t vp_index; 550 + uint16_t loop_id; 556 551 } tmf; 557 552 struct { 558 553 #define SRB_FXDISC_REQ_DMA_VALID BIT_0 ··· 660 647 #define SRB_SA_UPDATE 25 661 648 #define SRB_ELS_CMD_HST_NOLOGIN 26 662 649 #define SRB_SA_REPLACE 27 650 + #define SRB_MARKER 28 663 651 664 652 struct qla_els_pt_arg { 665 653 u8 els_opcode; ··· 703 689 struct iocb_resource iores; 704 690 struct kref cmd_kref; /* need to migrate ref_count over to this */ 705 691 void *priv; 706 - wait_queue_head_t nvme_ls_waitq; 707 692 struct fc_port *fcport; 708 693 struct scsi_qla_host *vha; 709 694 unsigned int start_timer:1; ··· 2541 2528 typedef struct fc_port { 2542 2529 struct list_head list; 2543 2530 struct scsi_qla_host *vha; 2531 + struct list_head tmf_pending; 2544 2532 2545 2533 unsigned int conf_compl_supported:1; 2546 2534 unsigned int deleted:2; ··· 2562 2548 unsigned int do_prli_nvme:1; 2563 2549 2564 2550 uint8_t nvme_flag; 2551 + uint8_t active_tmf; 2552 + #define MAX_ACTIVE_TMF 8 2565 2553 2566 2554 uint8_t node_name[WWN_SIZE]; 2567 2555 uint8_t port_name[WWN_SIZE]; ··· 3173 3157 uint8_t vendor_unique; 3174 3158 }; 3175 3159 /* Assume the largest number of targets for the union */ 3176 - struct ct_sns_gpn_ft_data { 3160 + DECLARE_FLEX_ARRAY(struct ct_sns_gpn_ft_data { 3177 3161 u8 control_byte; 3178 3162 u8 port_id[3]; 3179 3163 u32 reserved; 3180 3164 u8 port_name[8]; 3181 - } entries[1]; 3165 + }, entries); 3182 3166 }; 3183 3167 3184 3168 /* CT command response */ ··· 5514 5498 __func__, _fp->port_name, ##_args, atomic_read(&_fp->state), \ 5515 
5499 _fp->disc_state, _fp->scan_state, _fp->loop_id, _fp->deleted, \ 5516 5500 _fp->flags 5501 + 5502 + #define TMF_NOT_READY(_fcport) \ 5503 + (!_fcport || IS_SESSION_DELETED(_fcport) || atomic_read(&_fcport->state) != FCS_ONLINE || \ 5504 + !_fcport->vha->hw->flags.fw_started) 5517 5505 5518 5506 #endif
+2 -2
drivers/scsi/qla2xxx/qla_edif.c
··· 2361 2361 if (!sa_ctl) { 2362 2362 ql_dbg(ql_dbg_edif, vha, 0x70e6, 2363 2363 "sa_ctl allocation failed\n"); 2364 - rval = -ENOMEM; 2365 - goto done; 2364 + rval = -ENOMEM; 2365 + return rval; 2366 2366 } 2367 2367 2368 2368 fcport = sa_ctl->fcport;
+1 -1
drivers/scsi/qla2xxx/qla_gbl.h
··· 69 69 extern int qla2x00_async_prlo(struct scsi_qla_host *, fc_port_t *); 70 70 extern int qla2x00_async_adisc(struct scsi_qla_host *, fc_port_t *, 71 71 uint16_t *); 72 - extern int qla2x00_async_tm_cmd(fc_port_t *, uint32_t, uint32_t, uint32_t); 72 + extern int qla2x00_async_tm_cmd(fc_port_t *, uint32_t, uint64_t, uint32_t); 73 73 struct qla_work_evt *qla2x00_alloc_work(struct scsi_qla_host *, 74 74 enum qla_work_type); 75 75 extern int qla24xx_async_gnl(struct scsi_qla_host *, fc_port_t *);
+2 -2
drivers/scsi/qla2xxx/qla_gs.c
··· 3776 3776 sp->u.iocb_cmd.u.ctarg.req_size = GPN_FT_REQ_SIZE; 3777 3777 3778 3778 rspsz = sizeof(struct ct_sns_gpnft_rsp) + 3779 - ((vha->hw->max_fibre_devices - 1) * 3780 - sizeof(struct ct_sns_gpn_ft_data)); 3779 + vha->hw->max_fibre_devices * 3780 + sizeof(struct ct_sns_gpn_ft_data); 3781 3781 3782 3782 sp->u.iocb_cmd.u.ctarg.rsp = dma_alloc_coherent(&vha->hw->pdev->dev, 3783 3783 rspsz,
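The rspsz computation above drops the `- 1` because the trailing entries array is now a true flexible array member (via DECLARE_FLEX_ARRAY in qla_def.h) rather than a one-element array, so sizeof() of the response no longer counts a built-in first entry. A simplified sketch of the new sizing rule — struct names are stand-ins, not the real qla2xxx layouts:

```c
#include <assert.h>
#include <stddef.h>

struct gpn_entry {
	unsigned char	control;
	unsigned char	port_id[3];
	unsigned int	rsvd;
	unsigned char	port_name[8];
};

/* With a flexible array member, sizeof() covers the header only, so the
 * buffer size is header + n * entry -- no off-by-one adjustment needed. */
struct gpn_rsp {
	unsigned int	header;
	struct gpn_entry entries[];
};

static size_t gpn_rsp_size(size_t nentries)
{
	return sizeof(struct gpn_rsp) + nentries * sizeof(struct gpn_entry);
}
```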
+241 -26
drivers/scsi/qla2xxx/qla_init.c
··· 1996 1996 int rc, h; 1997 1997 unsigned long flags; 1998 1998 1999 + if (sp->type == SRB_MARKER) { 2000 + complete(&tmf->u.tmf.comp); 2001 + return; 2002 + } 2003 + 1999 2004 rc = qla24xx_async_abort_cmd(sp, false); 2000 2005 if (rc) { 2001 2006 spin_lock_irqsave(sp->qpair->qp_lock_ptr, flags); ··· 2018 2013 } 2019 2014 } 2020 2015 2016 + static void qla_marker_sp_done(srb_t *sp, int res) 2017 + { 2018 + struct srb_iocb *tmf = &sp->u.iocb_cmd; 2019 + 2020 + if (res != QLA_SUCCESS) 2021 + ql_dbg(ql_dbg_taskm, sp->vha, 0x8004, 2022 + "Async-marker fail hdl=%x portid=%06x ctrl=%x lun=%lld qp=%d.\n", 2023 + sp->handle, sp->fcport->d_id.b24, sp->u.iocb_cmd.u.tmf.flags, 2024 + sp->u.iocb_cmd.u.tmf.lun, sp->qpair->id); 2025 + 2026 + sp->u.iocb_cmd.u.tmf.data = res; 2027 + complete(&tmf->u.tmf.comp); 2028 + } 2029 + 2030 + #define START_SP_W_RETRIES(_sp, _rval) \ 2031 + {\ 2032 + int cnt = 5; \ 2033 + do { \ 2034 + _rval = qla2x00_start_sp(_sp); \ 2035 + if (_rval == EAGAIN) \ 2036 + msleep(1); \ 2037 + else \ 2038 + break; \ 2039 + cnt--; \ 2040 + } while (cnt); \ 2041 + } 2042 + 2043 + /** 2044 + * qla26xx_marker: send marker IOCB and wait for the completion of it. 2045 + * @arg: pointer to argument list. 
2046 + * It is assume caller will provide an fcport pointer and modifier 2047 + */ 2048 + static int 2049 + qla26xx_marker(struct tmf_arg *arg) 2050 + { 2051 + struct scsi_qla_host *vha = arg->vha; 2052 + struct srb_iocb *tm_iocb; 2053 + srb_t *sp; 2054 + int rval = QLA_FUNCTION_FAILED; 2055 + fc_port_t *fcport = arg->fcport; 2056 + 2057 + if (TMF_NOT_READY(arg->fcport)) { 2058 + ql_dbg(ql_dbg_taskm, vha, 0x8039, 2059 + "FC port not ready for marker loop-id=%x portid=%06x modifier=%x lun=%lld qp=%d.\n", 2060 + fcport->loop_id, fcport->d_id.b24, 2061 + arg->modifier, arg->lun, arg->qpair->id); 2062 + return QLA_SUSPENDED; 2063 + } 2064 + 2065 + /* ref: INIT */ 2066 + sp = qla2xxx_get_qpair_sp(vha, arg->qpair, fcport, GFP_KERNEL); 2067 + if (!sp) 2068 + goto done; 2069 + 2070 + sp->type = SRB_MARKER; 2071 + sp->name = "marker"; 2072 + qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha), qla_marker_sp_done); 2073 + sp->u.iocb_cmd.timeout = qla2x00_tmf_iocb_timeout; 2074 + 2075 + tm_iocb = &sp->u.iocb_cmd; 2076 + init_completion(&tm_iocb->u.tmf.comp); 2077 + tm_iocb->u.tmf.modifier = arg->modifier; 2078 + tm_iocb->u.tmf.lun = arg->lun; 2079 + tm_iocb->u.tmf.loop_id = fcport->loop_id; 2080 + tm_iocb->u.tmf.vp_index = vha->vp_idx; 2081 + 2082 + START_SP_W_RETRIES(sp, rval); 2083 + 2084 + ql_dbg(ql_dbg_taskm, vha, 0x8006, 2085 + "Async-marker hdl=%x loop-id=%x portid=%06x modifier=%x lun=%lld qp=%d rval %d.\n", 2086 + sp->handle, fcport->loop_id, fcport->d_id.b24, 2087 + arg->modifier, arg->lun, sp->qpair->id, rval); 2088 + 2089 + if (rval != QLA_SUCCESS) { 2090 + ql_log(ql_log_warn, vha, 0x8031, 2091 + "Marker IOCB send failure (%x).\n", rval); 2092 + goto done_free_sp; 2093 + } 2094 + 2095 + wait_for_completion(&tm_iocb->u.tmf.comp); 2096 + rval = tm_iocb->u.tmf.data; 2097 + 2098 + if (rval != QLA_SUCCESS) { 2099 + ql_log(ql_log_warn, vha, 0x8019, 2100 + "Marker failed hdl=%x loop-id=%x portid=%06x modifier=%x lun=%lld qp=%d rval %d.\n", 2101 + sp->handle, 
fcport->loop_id, fcport->d_id.b24, 2102 + arg->modifier, arg->lun, sp->qpair->id, rval); 2103 + } 2104 + 2105 + done_free_sp: 2106 + /* ref: INIT */ 2107 + kref_put(&sp->cmd_kref, qla2x00_sp_release); 2108 + done: 2109 + return rval; 2110 + } 2111 + 2021 2112 static void qla2x00_tmf_sp_done(srb_t *sp, int res) 2022 2113 { 2023 2114 struct srb_iocb *tmf = &sp->u.iocb_cmd; 2024 2115 2116 + if (res) 2117 + tmf->u.tmf.data = res; 2025 2118 complete(&tmf->u.tmf.comp); 2026 2119 } 2027 2120 2028 - int 2029 - qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun, 2030 - uint32_t tag) 2121 + static int 2122 + __qla2x00_async_tm_cmd(struct tmf_arg *arg) 2031 2123 { 2032 - struct scsi_qla_host *vha = fcport->vha; 2124 + struct scsi_qla_host *vha = arg->vha; 2033 2125 struct srb_iocb *tm_iocb; 2034 2126 srb_t *sp; 2035 2127 int rval = QLA_FUNCTION_FAILED; 2036 2128 2129 + fc_port_t *fcport = arg->fcport; 2130 + 2131 + if (TMF_NOT_READY(arg->fcport)) { 2132 + ql_dbg(ql_dbg_taskm, vha, 0x8032, 2133 + "FC port not ready for TM command loop-id=%x portid=%06x modifier=%x lun=%lld qp=%d.\n", 2134 + fcport->loop_id, fcport->d_id.b24, 2135 + arg->modifier, arg->lun, arg->qpair->id); 2136 + return QLA_SUSPENDED; 2137 + } 2138 + 2037 2139 /* ref: INIT */ 2038 - sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL); 2140 + sp = qla2xxx_get_qpair_sp(vha, arg->qpair, fcport, GFP_KERNEL); 2039 2141 if (!sp) 2040 2142 goto done; 2041 2143 ··· 2155 2043 2156 2044 tm_iocb = &sp->u.iocb_cmd; 2157 2045 init_completion(&tm_iocb->u.tmf.comp); 2158 - tm_iocb->u.tmf.flags = flags; 2159 - tm_iocb->u.tmf.lun = lun; 2046 + tm_iocb->u.tmf.flags = arg->flags; 2047 + tm_iocb->u.tmf.lun = arg->lun; 2048 + 2049 + START_SP_W_RETRIES(sp, rval); 2160 2050 2161 2051 ql_dbg(ql_dbg_taskm, vha, 0x802f, 2162 - "Async-tmf hdl=%x loop-id=%x portid=%02x%02x%02x.\n", 2163 - sp->handle, fcport->loop_id, fcport->d_id.b.domain, 2164 - fcport->d_id.b.area, fcport->d_id.b.al_pa); 2052 + "Async-tmf hdl=%x loop-id=%x 
portid=%06x ctrl=%x lun=%lld qp=%d rval=%x.\n", 2053 + sp->handle, fcport->loop_id, fcport->d_id.b24, 2054 + arg->flags, arg->lun, sp->qpair->id, rval); 2165 2055 2166 - rval = qla2x00_start_sp(sp); 2167 2056 if (rval != QLA_SUCCESS) 2168 2057 goto done_free_sp; 2169 2058 wait_for_completion(&tm_iocb->u.tmf.comp); ··· 2176 2063 "TM IOCB failed (%x).\n", rval); 2177 2064 } 2178 2065 2179 - if (!test_bit(UNLOADING, &vha->dpc_flags) && !IS_QLAFX00(vha->hw)) { 2180 - flags = tm_iocb->u.tmf.flags; 2181 - lun = (uint16_t)tm_iocb->u.tmf.lun; 2182 - 2183 - /* Issue Marker IOCB */ 2184 - qla2x00_marker(vha, vha->hw->base_qpair, 2185 - fcport->loop_id, lun, 2186 - flags == TCF_LUN_RESET ? MK_SYNC_ID_LUN : MK_SYNC_ID); 2187 - } 2066 + if (!test_bit(UNLOADING, &vha->dpc_flags) && !IS_QLAFX00(vha->hw)) 2067 + rval = qla26xx_marker(arg); 2188 2068 2189 2069 done_free_sp: 2190 2070 /* ref: INIT */ 2191 2071 kref_put(&sp->cmd_kref, qla2x00_sp_release); 2192 2072 done: 2073 + return rval; 2074 + } 2075 + 2076 + static void qla_put_tmf(fc_port_t *fcport) 2077 + { 2078 + struct scsi_qla_host *vha = fcport->vha; 2079 + struct qla_hw_data *ha = vha->hw; 2080 + unsigned long flags; 2081 + 2082 + spin_lock_irqsave(&ha->tgt.sess_lock, flags); 2083 + fcport->active_tmf--; 2084 + spin_unlock_irqrestore(&ha->tgt.sess_lock, flags); 2085 + } 2086 + 2087 + static 2088 + int qla_get_tmf(fc_port_t *fcport) 2089 + { 2090 + struct scsi_qla_host *vha = fcport->vha; 2091 + struct qla_hw_data *ha = vha->hw; 2092 + unsigned long flags; 2093 + int rc = 0; 2094 + LIST_HEAD(tmf_elem); 2095 + 2096 + spin_lock_irqsave(&ha->tgt.sess_lock, flags); 2097 + list_add_tail(&tmf_elem, &fcport->tmf_pending); 2098 + 2099 + while (fcport->active_tmf >= MAX_ACTIVE_TMF) { 2100 + spin_unlock_irqrestore(&ha->tgt.sess_lock, flags); 2101 + 2102 + msleep(1); 2103 + 2104 + spin_lock_irqsave(&ha->tgt.sess_lock, flags); 2105 + if (TMF_NOT_READY(fcport)) { 2106 + ql_log(ql_log_warn, vha, 0x802c, 2107 + "Unable to acquire TM 
resource due to disruption.\n"); 2108 + rc = EIO; 2109 + break; 2110 + } 2111 + if (fcport->active_tmf < MAX_ACTIVE_TMF && 2112 + list_is_first(&tmf_elem, &fcport->tmf_pending)) 2113 + break; 2114 + } 2115 + 2116 + list_del(&tmf_elem); 2117 + 2118 + if (!rc) 2119 + fcport->active_tmf++; 2120 + 2121 + spin_unlock_irqrestore(&ha->tgt.sess_lock, flags); 2122 + 2123 + return rc; 2124 + } 2125 + 2126 + int 2127 + qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint64_t lun, 2128 + uint32_t tag) 2129 + { 2130 + struct scsi_qla_host *vha = fcport->vha; 2131 + struct qla_qpair *qpair; 2132 + struct tmf_arg a; 2133 + int i, rval = QLA_SUCCESS; 2134 + 2135 + if (TMF_NOT_READY(fcport)) 2136 + return QLA_SUSPENDED; 2137 + 2138 + a.vha = fcport->vha; 2139 + a.fcport = fcport; 2140 + a.lun = lun; 2141 + if (flags & (TCF_LUN_RESET|TCF_ABORT_TASK_SET|TCF_CLEAR_TASK_SET|TCF_CLEAR_ACA)) { 2142 + a.modifier = MK_SYNC_ID_LUN; 2143 + 2144 + if (qla_get_tmf(fcport)) 2145 + return QLA_FUNCTION_FAILED; 2146 + } else { 2147 + a.modifier = MK_SYNC_ID; 2148 + } 2149 + 2150 + if (vha->hw->mqenable) { 2151 + for (i = 0; i < vha->hw->num_qpairs; i++) { 2152 + qpair = vha->hw->queue_pair_map[i]; 2153 + if (!qpair) 2154 + continue; 2155 + 2156 + if (TMF_NOT_READY(fcport)) { 2157 + ql_log(ql_log_warn, vha, 0x8026, 2158 + "Unable to send TM due to disruption.\n"); 2159 + rval = QLA_SUSPENDED; 2160 + break; 2161 + } 2162 + 2163 + a.qpair = qpair; 2164 + a.flags = flags|TCF_NOTMCMD_TO_TARGET; 2165 + rval = __qla2x00_async_tm_cmd(&a); 2166 + if (rval) 2167 + break; 2168 + } 2169 + } 2170 + 2171 + if (rval) 2172 + goto bailout; 2173 + 2174 + a.qpair = vha->hw->base_qpair; 2175 + a.flags = flags; 2176 + rval = __qla2x00_async_tm_cmd(&a); 2177 + 2178 + bailout: 2179 + if (a.modifier == MK_SYNC_ID_LUN) 2180 + qla_put_tmf(fcport); 2181 + 2193 2182 return rval; 2194 2183 } 2195 2184 ··· 5076 4861 if (use_tbl && 5077 4862 ha->pdev->subsystem_vendor == PCI_VENDOR_ID_QLOGIC && 5078 4863 index < 
QLA_MODEL_NAMES) 5079 - strlcpy(ha->model_desc, 4864 + strscpy(ha->model_desc, 5080 4865 qla2x00_model_name[index * 2 + 1], 5081 4866 sizeof(ha->model_desc)); 5082 4867 } else { ··· 5084 4869 if (use_tbl && 5085 4870 ha->pdev->subsystem_vendor == PCI_VENDOR_ID_QLOGIC && 5086 4871 index < QLA_MODEL_NAMES) { 5087 - strlcpy(ha->model_number, 4872 + strscpy(ha->model_number, 5088 4873 qla2x00_model_name[index * 2], 5089 4874 sizeof(ha->model_number)); 5090 - strlcpy(ha->model_desc, 4875 + strscpy(ha->model_desc, 5091 4876 qla2x00_model_name[index * 2 + 1], 5092 4877 sizeof(ha->model_desc)); 5093 4878 } else { 5094 - strlcpy(ha->model_number, def, 4879 + strscpy(ha->model_number, def, 5095 4880 sizeof(ha->model_number)); 5096 4881 } 5097 4882 } ··· 5506 5291 INIT_WORK(&fcport->reg_work, qla_register_fcport_fn); 5507 5292 INIT_LIST_HEAD(&fcport->gnl_entry); 5508 5293 INIT_LIST_HEAD(&fcport->list); 5294 + INIT_LIST_HEAD(&fcport->tmf_pending); 5509 5295 5510 5296 INIT_LIST_HEAD(&fcport->sess_cmd_list); 5511 5297 spin_lock_init(&fcport->sess_cmd_lock); ··· 5549 5333 __be32 *q; 5550 5334 5551 5335 memset(ha->init_cb, 0, ha->init_cb_size); 5552 - sz = min_t(int, sizeof(struct fc_els_flogi), ha->init_cb_size); 5336 + sz = min_t(int, sizeof(struct fc_els_csp), ha->init_cb_size); 5553 5337 rval = qla24xx_get_port_login_templ(vha, ha->init_cb_dma, 5554 5338 ha->init_cb, sz); 5555 5339 if (rval != QLA_SUCCESS) { ··· 6220 6004 fc_port_t *fcport; 6221 6005 uint16_t mb[MAILBOX_REGISTER_COUNT]; 6222 6006 uint16_t loop_id; 6223 - LIST_HEAD(new_fcports); 6224 6007 struct qla_hw_data *ha = vha->hw; 6225 6008 int discovery_gen; 6226 6009
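Several hunks above convert strlcpy() to strscpy(), whose return value distinguishes success (the copied length) from truncation, instead of reporting the full source length. A userspace approximation of that contract — this is a sketch, not the kernel implementation, and it returns -1 where the kernel returns -E2BIG:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static long sketch_strscpy(char *dst, const char *src, size_t size)
{
	size_t i;

	if (size == 0)
		return -1;
	for (i = 0; i < size - 1 && src[i] != '\0'; i++)
		dst[i] = src[i];
	dst[i] = '\0';
	/* Copied length on success, -1 when src did not fit. */
	return src[i] == '\0' ? (long)i : -1;
}
```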
+4 -1
drivers/scsi/qla2xxx/qla_inline.h
··· 109 109 { 110 110 int old_val; 111 111 uint8_t shiftbits, mask; 112 + uint8_t port_dstate_str_sz; 112 113 113 114 /* This will have to change when the max no. of states > 16 */ 114 115 shiftbits = 4; 115 116 mask = (1 << shiftbits) - 1; 116 117 118 + port_dstate_str_sz = sizeof(port_dstate_str) / sizeof(char *); 117 119 fcport->disc_state = state; 118 120 while (1) { 119 121 old_val = atomic_read(&fcport->shadow_disc_state); ··· 123 121 old_val, (old_val << shiftbits) | state)) { 124 122 ql_dbg(ql_dbg_disc, fcport->vha, 0x2134, 125 123 "FCPort %8phC disc_state transition: %s to %s - portid=%06x.\n", 126 - fcport->port_name, port_dstate_str[old_val & mask], 124 + fcport->port_name, (old_val & mask) < port_dstate_str_sz ? 125 + port_dstate_str[old_val & mask] : "Unknown", 127 126 port_dstate_str[state], fcport->d_id.b24); 128 127 return; 129 128 }
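The qla_inline.h hunk above stops the debug print from indexing past the end of port_dstate_str[] when the shadow state holds a value the table does not cover. The defensive-lookup pattern, with a made-up four-entry table in place of the driver's real one, looks like this:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical state-name table, modeled on qla2xxx's port_dstate_str[]. */
static const char *const dstate_str[] = {
	"DELETED", "GNN_ID", "GNL", "LOGIN_PEND",
};

#define DSTATE_STR_SZ (sizeof(dstate_str) / sizeof(dstate_str[0]))

/*
 * A raw dstate_str[val & mask] lookup can still run off the end of the
 * table whenever the mask admits more values than the table holds, so
 * clamp out-of-range indices to "Unknown", as the hunk above does.
 */
static const char *dstate_name(unsigned int index)
{
	return index < DSTATE_STR_SZ ? dstate_str[index] : "Unknown";
}
```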
+29 -7
drivers/scsi/qla2xxx/qla_iocb.c
··· 522 522 return (QLA_FUNCTION_FAILED); 523 523 } 524 524 525 + mrk24 = (struct mrk_entry_24xx *)mrk; 526 + 525 527 mrk->entry_type = MARKER_TYPE; 526 528 mrk->modifier = type; 527 529 if (type != MK_SYNC_ALL) { 528 530 if (IS_FWI2_CAPABLE(ha)) { 529 - mrk24 = (struct mrk_entry_24xx *) mrk; 530 531 mrk24->nport_handle = cpu_to_le16(loop_id); 531 532 int_to_scsilun(lun, (struct scsi_lun *)&mrk24->lun); 532 533 host_to_fcp_swap(mrk24->lun, sizeof(mrk24->lun)); 533 534 mrk24->vp_index = vha->vp_idx; 534 - mrk24->handle = make_handle(req->id, mrk24->handle); 535 535 } else { 536 536 SET_TARGET_ID(ha, mrk->target, loop_id); 537 537 mrk->lun = cpu_to_le16((uint16_t)lun); 538 538 } 539 539 } 540 + 541 + if (IS_FWI2_CAPABLE(ha)) 542 + mrk24->handle = QLA_SKIP_HANDLE; 543 + 540 544 wmb(); 541 545 542 546 qla2x00_start_iocbs(vha, req); ··· 607 603 put_unaligned_le32(COMMAND_TYPE_6, &cmd_pkt->entry_type); 608 604 609 605 /* No data transfer */ 610 - if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE) { 606 + if (!scsi_bufflen(cmd) || cmd->sc_data_direction == DMA_NONE || 607 + tot_dsds == 0) { 611 608 cmd_pkt->byte_count = cpu_to_le32(0); 612 609 return 0; 613 610 } ··· 2546 2541 scsi_qla_host_t *vha = fcport->vha; 2547 2542 struct qla_hw_data *ha = vha->hw; 2548 2543 struct srb_iocb *iocb = &sp->u.iocb_cmd; 2549 - struct req_que *req = vha->req; 2544 + struct req_que *req = sp->qpair->req; 2550 2545 2551 2546 flags = iocb->u.tmf.flags; 2552 2547 lun = iocb->u.tmf.lun; ··· 2562 2557 tsk->port_id[2] = fcport->d_id.b.domain; 2563 2558 tsk->vp_index = fcport->vha->vp_idx; 2564 2559 2565 - if (flags == TCF_LUN_RESET) { 2560 + if (flags & (TCF_LUN_RESET | TCF_ABORT_TASK_SET| 2561 + TCF_CLEAR_TASK_SET|TCF_CLEAR_ACA)) { 2566 2562 int_to_scsilun(lun, &tsk->lun); 2567 2563 host_to_fcp_swap((uint8_t *)&tsk->lun, 2568 2564 sizeof(tsk->lun)); ··· 3858 3852 case SRB_NACK_LOGO: 3859 3853 case SRB_LOGOUT_CMD: 3860 3854 case SRB_CTRL_VP: 3861 - push_it_through = true; 3862 - 
fallthrough; 3855 + case SRB_MARKER: 3863 3856 default: 3857 + push_it_through = true; 3864 3858 get_exch = false; 3865 3859 } 3866 3860 ··· 3874 3868 sp->iores.res_type |= RESOURCE_FORCE; 3875 3869 3876 3870 return qla_get_fw_resources(sp->qpair, &sp->iores); 3871 + } 3872 + 3873 + static void 3874 + qla_marker_iocb(srb_t *sp, struct mrk_entry_24xx *mrk) 3875 + { 3876 + mrk->entry_type = MARKER_TYPE; 3877 + mrk->modifier = sp->u.iocb_cmd.u.tmf.modifier; 3878 + if (sp->u.iocb_cmd.u.tmf.modifier != MK_SYNC_ALL) { 3879 + mrk->nport_handle = cpu_to_le16(sp->u.iocb_cmd.u.tmf.loop_id); 3880 + int_to_scsilun(sp->u.iocb_cmd.u.tmf.lun, (struct scsi_lun *)&mrk->lun); 3881 + host_to_fcp_swap(mrk->lun, sizeof(mrk->lun)); 3882 + mrk->vp_index = sp->u.iocb_cmd.u.tmf.vp_index; 3883 + } 3877 3884 } 3878 3885 3879 3886 int ··· 3991 3972 break; 3992 3973 case SRB_SA_REPLACE: 3993 3974 qla24xx_sa_replace_iocb(sp, pkt); 3975 + break; 3976 + case SRB_MARKER: 3977 + qla_marker_iocb(sp, pkt); 3994 3978 break; 3995 3979 default: 3996 3980 break;
+54 -10
drivers/scsi/qla2xxx/qla_isr.c
··· 1862 1862 } 1863 1863 } 1864 1864 1865 - srb_t * 1866 - qla2x00_get_sp_from_handle(scsi_qla_host_t *vha, const char *func, 1867 - struct req_que *req, void *iocb) 1865 + static srb_t * 1866 + qla_get_sp_from_handle(scsi_qla_host_t *vha, const char *func, 1867 + struct req_que *req, void *iocb, u16 *ret_index) 1868 1868 { 1869 1869 struct qla_hw_data *ha = vha->hw; 1870 1870 sts_entry_t *pkt = iocb; ··· 1899 1899 return NULL; 1900 1900 } 1901 1901 1902 - req->outstanding_cmds[index] = NULL; 1903 - 1902 + *ret_index = index; 1904 1903 qla_put_fw_resources(sp->qpair, &sp->iores); 1904 + return sp; 1905 + } 1906 + 1907 + srb_t * 1908 + qla2x00_get_sp_from_handle(scsi_qla_host_t *vha, const char *func, 1909 + struct req_que *req, void *iocb) 1910 + { 1911 + uint16_t index; 1912 + srb_t *sp; 1913 + 1914 + sp = qla_get_sp_from_handle(vha, func, req, iocb, &index); 1915 + if (sp) 1916 + req->outstanding_cmds[index] = NULL; 1917 + 1905 1918 return sp; 1906 1919 } 1907 1920 ··· 3250 3237 return; 3251 3238 } 3252 3239 3253 - req->outstanding_cmds[handle] = NULL; 3254 3240 cp = GET_CMD_SP(sp); 3255 3241 if (cp == NULL) { 3256 3242 ql_dbg(ql_dbg_io, vha, 0x3018, 3257 3243 "Command already returned (0x%x/%p).\n", 3258 3244 sts->handle, sp); 3259 3245 3246 + req->outstanding_cmds[handle] = NULL; 3260 3247 return; 3261 3248 } 3262 3249 ··· 3527 3514 3528 3515 if (rsp->status_srb == NULL) 3529 3516 sp->done(sp, res); 3517 + 3518 + /* for io's, clearing of outstanding_cmds[handle] means scsi_done was called */ 3519 + req->outstanding_cmds[handle] = NULL; 3530 3520 } 3531 3521 3532 3522 /** ··· 3606 3590 uint16_t que = MSW(pkt->handle); 3607 3591 struct req_que *req = NULL; 3608 3592 int res = DID_ERROR << 16; 3593 + u16 index; 3609 3594 3610 3595 ql_dbg(ql_dbg_async, vha, 0x502a, 3611 3596 "iocb type %xh with error status %xh, handle %xh, rspq id %d\n", ··· 3625 3608 3626 3609 switch (pkt->entry_type) { 3627 3610 case NOTIFY_ACK_TYPE: 3628 - case STATUS_TYPE: 3629 3611 case 
STATUS_CONT_TYPE: 3630 3612 case LOGINOUT_PORT_IOCB_TYPE: 3631 3613 case CT_IOCB_TYPE: ··· 3644 3628 case CTIO_TYPE7: 3645 3629 case CTIO_CRC2: 3646 3630 return 1; 3631 + case STATUS_TYPE: 3632 + sp = qla_get_sp_from_handle(vha, func, req, pkt, &index); 3633 + if (sp) { 3634 + sp->done(sp, res); 3635 + req->outstanding_cmds[index] = NULL; 3636 + return 0; 3637 + } 3638 + break; 3647 3639 } 3648 3640 fatal: 3649 3641 ql_log(ql_log_warn, vha, 0x5030, ··· 3774 3750 return rc; 3775 3751 } 3776 3752 3753 + static void qla_marker_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, 3754 + struct mrk_entry_24xx *pkt) 3755 + { 3756 + const char func[] = "MRK-IOCB"; 3757 + srb_t *sp; 3758 + int res = QLA_SUCCESS; 3759 + 3760 + if (!IS_FWI2_CAPABLE(vha->hw)) 3761 + return; 3762 + 3763 + sp = qla2x00_get_sp_from_handle(vha, func, req, pkt); 3764 + if (!sp) 3765 + return; 3766 + 3767 + if (pkt->entry_status) { 3768 + ql_dbg(ql_dbg_taskm, vha, 0x8025, "marker failure.\n"); 3769 + res = QLA_COMMAND_ERROR; 3770 + } 3771 + sp->u.iocb_cmd.u.tmf.data = res; 3772 + sp->done(sp, res); 3773 + } 3774 + 3777 3775 /** 3778 3776 * qla24xx_process_response_queue() - Process response queue entries. 3779 3777 * @vha: SCSI driver HA context ··· 3912 3866 (struct nack_to_isp *)pkt); 3913 3867 break; 3914 3868 case MARKER_TYPE: 3915 - /* Do nothing in this case, this check is to prevent it 3916 - * from falling into default case 3917 - */ 3869 + qla_marker_iocb_entry(vha, rsp->req, (struct mrk_entry_24xx *)pkt); 3918 3870 break; 3919 3871 case ABORT_IOCB_TYPE: 3920 3872 qla24xx_abort_iocb_entry(vha, rsp->req,
+10 -10
drivers/scsi/qla2xxx/qla_mr.c
··· 691 691 struct qla_hw_data *ha = vha->hw; 692 692 693 693 if (pci_is_pcie(ha->pdev)) 694 - strlcpy(str, "PCIe iSA", str_len); 694 + strscpy(str, "PCIe iSA", str_len); 695 695 return str; 696 696 } 697 697 ··· 1850 1850 phost_info = &preg_hsi->hsi; 1851 1851 memset(preg_hsi, 0, sizeof(struct register_host_info)); 1852 1852 phost_info->os_type = OS_TYPE_LINUX; 1853 - strlcpy(phost_info->sysname, p_sysid->sysname, 1853 + strscpy(phost_info->sysname, p_sysid->sysname, 1854 1854 sizeof(phost_info->sysname)); 1855 - strlcpy(phost_info->nodename, p_sysid->nodename, 1855 + strscpy(phost_info->nodename, p_sysid->nodename, 1856 1856 sizeof(phost_info->nodename)); 1857 1857 if (!strcmp(phost_info->nodename, "(none)")) 1858 1858 ha->mr.host_info_resend = true; 1859 - strlcpy(phost_info->release, p_sysid->release, 1859 + strscpy(phost_info->release, p_sysid->release, 1860 1860 sizeof(phost_info->release)); 1861 - strlcpy(phost_info->version, p_sysid->version, 1861 + strscpy(phost_info->version, p_sysid->version, 1862 1862 sizeof(phost_info->version)); 1863 - strlcpy(phost_info->machine, p_sysid->machine, 1863 + strscpy(phost_info->machine, p_sysid->machine, 1864 1864 sizeof(phost_info->machine)); 1865 - strlcpy(phost_info->domainname, p_sysid->domainname, 1865 + strscpy(phost_info->domainname, p_sysid->domainname, 1866 1866 sizeof(phost_info->domainname)); 1867 - strlcpy(phost_info->hostdriver, QLA2XXX_VERSION, 1867 + strscpy(phost_info->hostdriver, QLA2XXX_VERSION, 1868 1868 sizeof(phost_info->hostdriver)); 1869 1869 preg_hsi->utc = (uint64_t)ktime_get_real_seconds(); 1870 1870 ql_dbg(ql_dbg_init, vha, 0x0149, ··· 1909 1909 if (fx_type == FXDISC_GET_CONFIG_INFO) { 1910 1910 struct config_info_data *pinfo = 1911 1911 (struct config_info_data *) fdisc->u.fxiocb.rsp_addr; 1912 - strlcpy(vha->hw->model_number, pinfo->model_num, 1912 + strscpy(vha->hw->model_number, pinfo->model_num, 1913 1913 ARRAY_SIZE(vha->hw->model_number)); 1914 - strlcpy(vha->hw->model_desc, 
pinfo->model_description, 1914 + strscpy(vha->hw->model_desc, pinfo->model_description, 1915 1915 ARRAY_SIZE(vha->hw->model_desc)); 1916 1916 memcpy(&vha->hw->mr.symbolic_name, pinfo->symbolic_name, 1917 1917 sizeof(vha->hw->mr.symbolic_name));
-3
drivers/scsi/qla2xxx/qla_nvme.c
··· 360 360 if (rval != QLA_SUCCESS) { 361 361 ql_log(ql_log_warn, vha, 0x700e, 362 362 "qla2x00_start_sp failed = %d\n", rval); 363 - wake_up(&sp->nvme_ls_waitq); 364 363 sp->priv = NULL; 365 364 priv->sp = NULL; 366 365 qla2x00_rel_sp(sp); ··· 651 652 if (!sp) 652 653 return -EBUSY; 653 654 654 - init_waitqueue_head(&sp->nvme_ls_waitq); 655 655 kref_init(&sp->cmd_kref); 656 656 spin_lock_init(&priv->cmd_lock); 657 657 sp->priv = priv; ··· 669 671 if (rval != QLA_SUCCESS) { 670 672 ql_log(ql_log_warn, vha, 0x212d, 671 673 "qla2x00_start_nvme_mq failed = %d\n", rval); 672 - wake_up(&sp->nvme_ls_waitq); 673 674 sp->priv = NULL; 674 675 priv->sp = NULL; 675 676 qla2xxx_rel_qpair_sp(sp->qpair, sp);
+66 -67
drivers/scsi/qla2xxx/qla_os.c
··· 1079 1079 } 1080 1080 1081 1081 /* 1082 - * qla2x00_eh_wait_on_command 1083 - * Waits for the command to be returned by the Firmware for some 1084 - * max time. 1085 - * 1086 - * Input: 1087 - * cmd = Scsi Command to wait on. 1088 - * 1089 - * Return: 1090 - * Completed in time : QLA_SUCCESS 1091 - * Did not complete in time : QLA_FUNCTION_FAILED 1092 - */ 1093 - static int 1094 - qla2x00_eh_wait_on_command(struct scsi_cmnd *cmd) 1095 - { 1096 - #define ABORT_POLLING_PERIOD 1000 1097 - #define ABORT_WAIT_ITER ((2 * 1000) / (ABORT_POLLING_PERIOD)) 1098 - unsigned long wait_iter = ABORT_WAIT_ITER; 1099 - scsi_qla_host_t *vha = shost_priv(cmd->device->host); 1100 - struct qla_hw_data *ha = vha->hw; 1101 - srb_t *sp = scsi_cmd_priv(cmd); 1102 - int ret = QLA_SUCCESS; 1103 - 1104 - if (unlikely(pci_channel_offline(ha->pdev)) || ha->flags.eeh_busy) { 1105 - ql_dbg(ql_dbg_taskm, vha, 0x8005, 1106 - "Return:eh_wait.\n"); 1107 - return ret; 1108 - } 1109 - 1110 - while (sp->type && wait_iter--) 1111 - msleep(ABORT_POLLING_PERIOD); 1112 - if (sp->type) 1113 - ret = QLA_FUNCTION_FAILED; 1114 - 1115 - return ret; 1116 - } 1117 - 1118 - /* 1119 1082 * qla2x00_wait_for_hba_online 1120 1083 * Wait till the HBA is online after going through 1121 1084 * <= MAX_RETRIES_OF_ISP_ABORT or ··· 1328 1365 return ret; 1329 1366 } 1330 1367 1368 + #define ABORT_POLLING_PERIOD 1000 1369 + #define ABORT_WAIT_ITER ((2 * 1000) / (ABORT_POLLING_PERIOD)) 1370 + 1331 1371 /* 1332 1372 * Returns: QLA_SUCCESS or QLA_FUNCTION_FAILED. 
1333 1373 */ ··· 1344 1378 struct req_que *req = qpair->req; 1345 1379 srb_t *sp; 1346 1380 struct scsi_cmnd *cmd; 1381 + unsigned long wait_iter = ABORT_WAIT_ITER; 1382 + bool found; 1383 + struct qla_hw_data *ha = vha->hw; 1347 1384 1348 1385 status = QLA_SUCCESS; 1349 1386 1350 - spin_lock_irqsave(qpair->qp_lock_ptr, flags); 1351 - for (cnt = 1; status == QLA_SUCCESS && 1352 - cnt < req->num_outstanding_cmds; cnt++) { 1353 - sp = req->outstanding_cmds[cnt]; 1354 - if (!sp) 1355 - continue; 1356 - if (sp->type != SRB_SCSI_CMD) 1357 - continue; 1358 - if (vha->vp_idx != sp->vha->vp_idx) 1359 - continue; 1360 - match = 0; 1361 - cmd = GET_CMD_SP(sp); 1362 - switch (type) { 1363 - case WAIT_HOST: 1364 - match = 1; 1365 - break; 1366 - case WAIT_TARGET: 1367 - match = cmd->device->id == t; 1368 - break; 1369 - case WAIT_LUN: 1370 - match = (cmd->device->id == t && 1371 - cmd->device->lun == l); 1372 - break; 1373 - } 1374 - if (!match) 1375 - continue; 1387 + while (wait_iter--) { 1388 + found = false; 1376 1389 1377 - spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); 1378 - status = qla2x00_eh_wait_on_command(cmd); 1379 1390 spin_lock_irqsave(qpair->qp_lock_ptr, flags); 1391 + for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) { 1392 + sp = req->outstanding_cmds[cnt]; 1393 + if (!sp) 1394 + continue; 1395 + if (sp->type != SRB_SCSI_CMD) 1396 + continue; 1397 + if (vha->vp_idx != sp->vha->vp_idx) 1398 + continue; 1399 + match = 0; 1400 + cmd = GET_CMD_SP(sp); 1401 + switch (type) { 1402 + case WAIT_HOST: 1403 + match = 1; 1404 + break; 1405 + case WAIT_TARGET: 1406 + if (sp->fcport) 1407 + match = sp->fcport->d_id.b24 == t; 1408 + else 1409 + match = 0; 1410 + break; 1411 + case WAIT_LUN: 1412 + if (sp->fcport) 1413 + match = (sp->fcport->d_id.b24 == t && 1414 + cmd->device->lun == l); 1415 + else 1416 + match = 0; 1417 + break; 1418 + } 1419 + if (!match) 1420 + continue; 1421 + 1422 + spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); 1423 + 1424 + if 
(unlikely(pci_channel_offline(ha->pdev)) || 1425 + ha->flags.eeh_busy) { 1426 + ql_dbg(ql_dbg_taskm, vha, 0x8005, 1427 + "Return:eh_wait.\n"); 1428 + return status; 1429 + } 1430 + 1431 + /* 1432 + * SRB_SCSI_CMD is still in the outstanding_cmds array. 1433 + * it means scsi_done has not called. Wait for it to 1434 + * clear from outstanding_cmds. 1435 + */ 1436 + msleep(ABORT_POLLING_PERIOD); 1437 + spin_lock_irqsave(qpair->qp_lock_ptr, flags); 1438 + found = true; 1439 + } 1440 + spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); 1441 + 1442 + if (!found) 1443 + break; 1380 1444 } 1381 - spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); 1445 + 1446 + if (wait_iter == -1) 1447 + status = QLA_FUNCTION_FAILED; 1382 1448 1383 1449 return status; 1384 1450 } ··· 5088 5090 } 5089 5091 INIT_DELAYED_WORK(&vha->scan.scan_work, qla_scan_work_fn); 5090 5092 5091 - sprintf(vha->host_str, "%s_%lu", QLA2XXX_DRIVER_NAME, vha->host_no); 5093 + snprintf(vha->host_str, sizeof(vha->host_str), "%s_%lu", 5094 + QLA2XXX_DRIVER_NAME, vha->host_no); 5092 5095 ql_dbg(ql_dbg_init, vha, 0x0041, 5093 5096 "Allocated the host=%p hw=%p vha=%p dev_name=%s", 5094 5097 vha->host, vha->hw, vha,
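The qla_os.c rework above folds the old per-command qla2x00_eh_wait_on_command() into one bounded loop that rescans outstanding_cmds under the queue-pair lock and sleeps between passes. A loose model of that shape follows; `wait_for_drain()`, `cmds_left()` and the contexts are invented names for this sketch, and the real code sleeps ABORT_POLLING_PERIOD milliseconds where the comment indicates:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Bounded polling pattern: rescan on each pass, stop early once the
 * scan finds nothing outstanding, and give up after a fixed iteration
 * budget rather than waiting on any single command.
 */
static int wait_for_drain(bool (*cmds_left)(void *), void *ctx,
			  unsigned long max_iter)
{
	unsigned long wait_iter = max_iter;

	while (wait_iter--) {
		if (!cmds_left(ctx))
			return 0;	/* drained: success */
		/* the driver msleep()s ABORT_POLLING_PERIOD ms here */
	}
	/*
	 * The driver tests wait_iter == -1 after its loop; with an
	 * unsigned counter, the post-decrement leaves exactly that
	 * value once the budget is exhausted.
	 */
	return -1;
}
```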
+2 -2
drivers/scsi/qla2xxx/qla_version.h
··· 6 6 /* 7 7 * Driver version 8 8 */ 9 - #define QLA2XXX_VERSION "10.02.08.200-k" 9 + #define QLA2XXX_VERSION "10.02.08.400-k" 10 10 11 11 #define QLA_DRIVER_MAJOR_VER 10 12 12 #define QLA_DRIVER_MINOR_VER 2 13 13 #define QLA_DRIVER_PATCH_VER 8 14 - #define QLA_DRIVER_BETA_VER 200 14 + #define QLA_DRIVER_BETA_VER 400
+4 -4
drivers/scsi/qla4xxx/ql4_mbx.c
··· 1611 1611 goto exit_get_chap; 1612 1612 } 1613 1613 1614 - strlcpy(password, chap_table->secret, QL4_CHAP_MAX_SECRET_LEN); 1615 - strlcpy(username, chap_table->name, QL4_CHAP_MAX_NAME_LEN); 1614 + strscpy(password, chap_table->secret, QL4_CHAP_MAX_SECRET_LEN); 1615 + strscpy(username, chap_table->name, QL4_CHAP_MAX_NAME_LEN); 1616 1616 chap_table->cookie = cpu_to_le16(CHAP_VALID_COOKIE); 1617 1617 1618 1618 exit_get_chap: ··· 1732 1732 goto exit_unlock_uni_chap; 1733 1733 } 1734 1734 1735 - strlcpy(password, chap_table->secret, MAX_CHAP_SECRET_LEN); 1736 - strlcpy(username, chap_table->name, MAX_CHAP_NAME_LEN); 1735 + strscpy(password, chap_table->secret, MAX_CHAP_SECRET_LEN); 1736 + strscpy(username, chap_table->name, MAX_CHAP_NAME_LEN); 1737 1737 1738 1738 rval = QLA_SUCCESS; 1739 1739
+7 -7
drivers/scsi/qla4xxx/ql4_os.c
··· 798 798 continue; 799 799 800 800 chap_rec->chap_tbl_idx = i; 801 - strlcpy(chap_rec->username, chap_table->name, 801 + strscpy(chap_rec->username, chap_table->name, 802 802 ISCSI_CHAP_AUTH_NAME_MAX_LEN); 803 - strlcpy(chap_rec->password, chap_table->secret, 803 + strscpy(chap_rec->password, chap_table->secret, 804 804 QL4_CHAP_MAX_SECRET_LEN); 805 805 chap_rec->password_length = chap_table->secret_len; 806 806 ··· 6052 6052 if (!(chap_table->flags & BIT_6)) /* Not BIDI */ 6053 6053 continue; 6054 6054 6055 - strlcpy(password, chap_table->secret, QL4_CHAP_MAX_SECRET_LEN); 6056 - strlcpy(username, chap_table->name, QL4_CHAP_MAX_NAME_LEN); 6055 + strscpy(password, chap_table->secret, QL4_CHAP_MAX_SECRET_LEN); 6056 + strscpy(username, chap_table->name, QL4_CHAP_MAX_NAME_LEN); 6057 6057 ret = 0; 6058 6058 break; 6059 6059 } ··· 6281 6281 6282 6282 tddb->tpgt = sess->tpgt; 6283 6283 tddb->port = conn->persistent_port; 6284 - strlcpy(tddb->iscsi_name, sess->targetname, ISCSI_NAME_SIZE); 6285 - strlcpy(tddb->ip_addr, conn->persistent_address, DDB_IPADDR_LEN); 6284 + strscpy(tddb->iscsi_name, sess->targetname, ISCSI_NAME_SIZE); 6285 + strscpy(tddb->ip_addr, conn->persistent_address, DDB_IPADDR_LEN); 6286 6286 } 6287 6287 6288 6288 static void qla4xxx_convert_param_ddb(struct dev_db_entry *fw_ddb_entry, ··· 7781 7781 goto exit_ddb_logout; 7782 7782 } 7783 7783 7784 - strlcpy(flash_tddb->iscsi_name, fnode_sess->targetname, 7784 + strscpy(flash_tddb->iscsi_name, fnode_sess->targetname, 7785 7785 ISCSI_NAME_SIZE); 7786 7786 7787 7787 if (!strncmp(fnode_sess->portal_type, PORTAL_TYPE_IPV6, 4))
+161 -8
drivers/scsi/scsi.c
··· 504 504 } 505 505 506 506 /** 507 - * scsi_report_opcode - Find out if a given command opcode is supported 507 + * scsi_report_opcode - Find out if a given command is supported 508 508 * @sdev: scsi device to query 509 509 * @buffer: scratch buffer (must be at least 20 bytes long) 510 510 * @len: length of buffer 511 - * @opcode: opcode for command to look up 511 + * @opcode: opcode for the command to look up 512 + * @sa: service action for the command to look up 512 513 * 513 - * Uses the REPORT SUPPORTED OPERATION CODES to look up the given 514 - * opcode. Returns -EINVAL if RSOC fails, 0 if the command opcode is 515 - * unsupported and 1 if the device claims to support the command. 514 + * Uses the REPORT SUPPORTED OPERATION CODES to check support for the 515 + * command identified with @opcode and @sa. If the command does not 516 + * have a service action, @sa must be 0. Returns -EINVAL if RSOC fails, 517 + * 0 if the command is not supported and 1 if the device claims to 518 + * support the command. 
516 519 */ 517 520 int scsi_report_opcode(struct scsi_device *sdev, unsigned char *buffer, 518 - unsigned int len, unsigned char opcode) 521 + unsigned int len, unsigned char opcode, 522 + unsigned short sa) 519 523 { 520 524 unsigned char cmd[16]; 521 525 struct scsi_sense_hdr sshdr; ··· 543 539 memset(cmd, 0, 16); 544 540 cmd[0] = MAINTENANCE_IN; 545 541 cmd[1] = MI_REPORT_SUPPORTED_OPERATION_CODES; 546 - cmd[2] = 1; /* One command format */ 547 - cmd[3] = opcode; 542 + if (!sa) { 543 + cmd[2] = 1; /* One command format */ 544 + cmd[3] = opcode; 545 + } else { 546 + cmd[2] = 3; /* One command format with service action */ 547 + cmd[3] = opcode; 548 + put_unaligned_be16(sa, &cmd[4]); 549 + } 548 550 put_unaligned_be32(request_len, &cmd[6]); 549 551 memset(buffer, 0, len); 550 552 ··· 569 559 return 0; 570 560 } 571 561 EXPORT_SYMBOL(scsi_report_opcode); 562 + 563 + #define SCSI_CDL_CHECK_BUF_LEN 64 564 + 565 + static bool scsi_cdl_check_cmd(struct scsi_device *sdev, u8 opcode, u16 sa, 566 + unsigned char *buf) 567 + { 568 + int ret; 569 + u8 cdlp; 570 + 571 + /* Check operation code */ 572 + ret = scsi_report_opcode(sdev, buf, SCSI_CDL_CHECK_BUF_LEN, opcode, sa); 573 + if (ret <= 0) 574 + return false; 575 + 576 + if ((buf[1] & 0x03) != 0x03) 577 + return false; 578 + 579 + /* See SPC-6, one command format of REPORT SUPPORTED OPERATION CODES */ 580 + cdlp = (buf[1] & 0x18) >> 3; 581 + if (buf[0] & 0x01) { 582 + /* rwcdlp == 1 */ 583 + switch (cdlp) { 584 + case 0x01: 585 + /* T2A page */ 586 + return true; 587 + case 0x02: 588 + /* T2B page */ 589 + return true; 590 + } 591 + } else { 592 + /* rwcdlp == 0 */ 593 + switch (cdlp) { 594 + case 0x01: 595 + /* A page */ 596 + return true; 597 + case 0x02: 598 + /* B page */ 599 + return true; 600 + } 601 + } 602 + 603 + return false; 604 + } 605 + 606 + /** 607 + * scsi_cdl_check - Check if a SCSI device supports Command Duration Limits 608 + * @sdev: The device to check 609 + */ 610 + void scsi_cdl_check(struct 
scsi_device *sdev) 611 + { 612 + bool cdl_supported; 613 + unsigned char *buf; 614 + 615 + buf = kmalloc(SCSI_CDL_CHECK_BUF_LEN, GFP_KERNEL); 616 + if (!buf) { 617 + sdev->cdl_supported = 0; 618 + return; 619 + } 620 + 621 + /* Check support for READ_16, WRITE_16, READ_32 and WRITE_32 commands */ 622 + cdl_supported = 623 + scsi_cdl_check_cmd(sdev, READ_16, 0, buf) || 624 + scsi_cdl_check_cmd(sdev, WRITE_16, 0, buf) || 625 + scsi_cdl_check_cmd(sdev, VARIABLE_LENGTH_CMD, READ_32, buf) || 626 + scsi_cdl_check_cmd(sdev, VARIABLE_LENGTH_CMD, WRITE_32, buf); 627 + if (cdl_supported) { 628 + /* 629 + * We have CDL support: force the use of READ16/WRITE16. 630 + * READ32 and WRITE32 will be used for devices that support 631 + * the T10_PI_TYPE2_PROTECTION protection type. 632 + */ 633 + sdev->use_16_for_rw = 1; 634 + sdev->use_10_for_rw = 0; 635 + 636 + sdev->cdl_supported = 1; 637 + } else { 638 + sdev->cdl_supported = 0; 639 + } 640 + 641 + kfree(buf); 642 + } 643 + 644 + /** 645 + * scsi_cdl_enable - Enable or disable a SCSI device supports for Command 646 + * Duration Limits 647 + * @sdev: The target device 648 + * @enable: the target state 649 + */ 650 + int scsi_cdl_enable(struct scsi_device *sdev, bool enable) 651 + { 652 + struct scsi_mode_data data; 653 + struct scsi_sense_hdr sshdr; 654 + struct scsi_vpd *vpd; 655 + bool is_ata = false; 656 + char buf[64]; 657 + int ret; 658 + 659 + if (!sdev->cdl_supported) 660 + return -EOPNOTSUPP; 661 + 662 + rcu_read_lock(); 663 + vpd = rcu_dereference(sdev->vpd_pg89); 664 + if (vpd) 665 + is_ata = true; 666 + rcu_read_unlock(); 667 + 668 + /* 669 + * For ATA devices, CDL needs to be enabled with a SET FEATURES command. 
670 + */ 671 + if (is_ata) { 672 + char *buf_data; 673 + int len; 674 + 675 + ret = scsi_mode_sense(sdev, 0x08, 0x0a, 0xf2, buf, sizeof(buf), 676 + 5 * HZ, 3, &data, NULL); 677 + if (ret) 678 + return -EINVAL; 679 + 680 + /* Enable CDL using the ATA feature page */ 681 + len = min_t(size_t, sizeof(buf), 682 + data.length - data.header_length - 683 + data.block_descriptor_length); 684 + buf_data = buf + data.header_length + 685 + data.block_descriptor_length; 686 + if (enable) 687 + buf_data[4] = 0x02; 688 + else 689 + buf_data[4] = 0; 690 + 691 + ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3, 692 + &data, &sshdr); 693 + if (ret) { 694 + if (scsi_sense_valid(&sshdr)) 695 + scsi_print_sense_hdr(sdev, 696 + dev_name(&sdev->sdev_gendev), &sshdr); 697 + return ret; 698 + } 699 + } 700 + 701 + sdev->cdl_enable = enable; 702 + 703 + return 0; 704 + } 572 705 573 706 /** 574 707 * scsi_device_get - get an additional reference to a scsi_device
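The scsi.c changes above extend scsi_report_opcode() to take a service action so the CDL probe can query READ 32 / WRITE 32 (which live under VARIABLE LENGTH CMD). A standalone sketch of the CDB construction, using the SPC opcode values the kernel defines, is below; `build_rsoc_cdb()` is a made-up helper name, and the byte stores spell out what put_unaligned_be16/be32() do in the real code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAINTENANCE_IN				0xa3
#define MI_REPORT_SUPPORTED_OPERATION_CODES	0x0c

/*
 * Build a REPORT SUPPORTED OPERATION CODES CDB the way the reworked
 * scsi_report_opcode() does: reporting option 1 ("one command format")
 * when no service action is given, reporting option 3 ("one command
 * format with service action") otherwise, with the service action in
 * bytes 4-5 and the allocation length in bytes 6-9, both big-endian.
 */
static void build_rsoc_cdb(uint8_t cmd[16], uint8_t opcode, uint16_t sa,
			   uint32_t alloc_len)
{
	memset(cmd, 0, 16);
	cmd[0] = MAINTENANCE_IN;
	cmd[1] = MI_REPORT_SUPPORTED_OPERATION_CODES;
	if (!sa) {
		cmd[2] = 1;		/* one command format */
		cmd[3] = opcode;
	} else {
		cmd[2] = 3;		/* one command format w/ service action */
		cmd[3] = opcode;
		cmd[4] = sa >> 8;	/* put_unaligned_be16(sa, &cmd[4]) */
		cmd[5] = sa & 0xff;
	}
	cmd[6] = alloc_len >> 24;	/* put_unaligned_be32(len, &cmd[6]) */
	cmd[7] = (alloc_len >> 16) & 0xff;
	cmd[8] = (alloc_len >> 8) & 0xff;
	cmd[9] = alloc_len & 0xff;
}
```

scsi_cdl_check() then issues this for READ_16 and WRITE_16 with a zero service action, and for VARIABLE_LENGTH_CMD with the READ_32/WRITE_32 service actions.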
+44 -2
drivers/scsi/scsi_common.c
··· 8 8 #include <linux/string.h> 9 9 #include <linux/errno.h> 10 10 #include <linux/module.h> 11 + #include <uapi/linux/pr.h> 11 12 #include <asm/unaligned.h> 12 13 #include <scsi/scsi_common.h> 13 14 ··· 63 62 return scsi_device_types[type]; 64 63 } 65 64 EXPORT_SYMBOL(scsi_device_type); 65 + 66 + enum pr_type scsi_pr_type_to_block(enum scsi_pr_type type) 67 + { 68 + switch (type) { 69 + case SCSI_PR_WRITE_EXCLUSIVE: 70 + return PR_WRITE_EXCLUSIVE; 71 + case SCSI_PR_EXCLUSIVE_ACCESS: 72 + return PR_EXCLUSIVE_ACCESS; 73 + case SCSI_PR_WRITE_EXCLUSIVE_REG_ONLY: 74 + return PR_WRITE_EXCLUSIVE_REG_ONLY; 75 + case SCSI_PR_EXCLUSIVE_ACCESS_REG_ONLY: 76 + return PR_EXCLUSIVE_ACCESS_REG_ONLY; 77 + case SCSI_PR_WRITE_EXCLUSIVE_ALL_REGS: 78 + return PR_WRITE_EXCLUSIVE_ALL_REGS; 79 + case SCSI_PR_EXCLUSIVE_ACCESS_ALL_REGS: 80 + return PR_EXCLUSIVE_ACCESS_ALL_REGS; 81 + } 82 + 83 + return 0; 84 + } 85 + EXPORT_SYMBOL_GPL(scsi_pr_type_to_block); 86 + 87 + enum scsi_pr_type block_pr_type_to_scsi(enum pr_type type) 88 + { 89 + switch (type) { 90 + case PR_WRITE_EXCLUSIVE: 91 + return SCSI_PR_WRITE_EXCLUSIVE; 92 + case PR_EXCLUSIVE_ACCESS: 93 + return SCSI_PR_EXCLUSIVE_ACCESS; 94 + case PR_WRITE_EXCLUSIVE_REG_ONLY: 95 + return SCSI_PR_WRITE_EXCLUSIVE_REG_ONLY; 96 + case PR_EXCLUSIVE_ACCESS_REG_ONLY: 97 + return SCSI_PR_EXCLUSIVE_ACCESS_REG_ONLY; 98 + case PR_WRITE_EXCLUSIVE_ALL_REGS: 99 + return SCSI_PR_WRITE_EXCLUSIVE_ALL_REGS; 100 + case PR_EXCLUSIVE_ACCESS_ALL_REGS: 101 + return SCSI_PR_EXCLUSIVE_ACCESS_ALL_REGS; 102 + } 103 + 104 + return 0; 105 + } 106 + EXPORT_SYMBOL_GPL(block_pr_type_to_scsi); 66 107 67 108 /** 68 109 * scsilun_to_int - convert a scsi_lun to an int ··· 219 176 if (sb_len > 2) 220 177 sshdr->sense_key = (sense_buffer[2] & 0xf); 221 178 if (sb_len > 7) { 222 - sb_len = (sb_len < (sense_buffer[7] + 8)) ? 
223 - sb_len : (sense_buffer[7] + 8); 179 + sb_len = min(sb_len, sense_buffer[7] + 8); 224 180 if (sb_len > 12) 225 181 sshdr->asc = sense_buffer[12]; 226 182 if (sb_len > 13)
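The new scsi_common.c helpers translate between the block layer's `enum pr_type` and the SPC persistent reservation TYPE field for the block-level PR operations work mentioned in the merge cover letter. A self-contained version of the mapping follows; the enum values here mirror `<uapi/linux/pr.h>` and the SPC-4 TYPE codes rather than including the kernel headers:

```c
#include <assert.h>

/* Values mirror enum pr_type in <uapi/linux/pr.h>. */
enum pr_type {
	PR_WRITE_EXCLUSIVE		= 1,
	PR_EXCLUSIVE_ACCESS		= 2,
	PR_WRITE_EXCLUSIVE_REG_ONLY	= 3,
	PR_EXCLUSIVE_ACCESS_REG_ONLY	= 4,
	PR_WRITE_EXCLUSIVE_ALL_REGS	= 5,
	PR_EXCLUSIVE_ACCESS_ALL_REGS	= 6,
};

/* SPC-4 PERSISTENT RESERVE TYPE field values (the SCSI wire side). */
enum scsi_pr_type {
	SCSI_PR_WRITE_EXCLUSIVE			= 0x01,
	SCSI_PR_EXCLUSIVE_ACCESS		= 0x03,
	SCSI_PR_WRITE_EXCLUSIVE_REG_ONLY	= 0x05,
	SCSI_PR_EXCLUSIVE_ACCESS_REG_ONLY	= 0x06,
	SCSI_PR_WRITE_EXCLUSIVE_ALL_REGS	= 0x07,
	SCSI_PR_EXCLUSIVE_ACCESS_ALL_REGS	= 0x08,
};

/* Both directions fall back to 0 ("no type") for unknown values. */
static enum pr_type scsi_pr_type_to_block(enum scsi_pr_type type)
{
	switch (type) {
	case SCSI_PR_WRITE_EXCLUSIVE:		return PR_WRITE_EXCLUSIVE;
	case SCSI_PR_EXCLUSIVE_ACCESS:		return PR_EXCLUSIVE_ACCESS;
	case SCSI_PR_WRITE_EXCLUSIVE_REG_ONLY:	return PR_WRITE_EXCLUSIVE_REG_ONLY;
	case SCSI_PR_EXCLUSIVE_ACCESS_REG_ONLY:	return PR_EXCLUSIVE_ACCESS_REG_ONLY;
	case SCSI_PR_WRITE_EXCLUSIVE_ALL_REGS:	return PR_WRITE_EXCLUSIVE_ALL_REGS;
	case SCSI_PR_EXCLUSIVE_ACCESS_ALL_REGS:	return PR_EXCLUSIVE_ACCESS_ALL_REGS;
	}
	return 0;
}

static enum scsi_pr_type block_pr_type_to_scsi(enum pr_type type)
{
	switch (type) {
	case PR_WRITE_EXCLUSIVE:		return SCSI_PR_WRITE_EXCLUSIVE;
	case PR_EXCLUSIVE_ACCESS:		return SCSI_PR_EXCLUSIVE_ACCESS;
	case PR_WRITE_EXCLUSIVE_REG_ONLY:	return SCSI_PR_WRITE_EXCLUSIVE_REG_ONLY;
	case PR_EXCLUSIVE_ACCESS_REG_ONLY:	return SCSI_PR_EXCLUSIVE_ACCESS_REG_ONLY;
	case PR_WRITE_EXCLUSIVE_ALL_REGS:	return SCSI_PR_WRITE_EXCLUSIVE_ALL_REGS;
	case PR_EXCLUSIVE_ACCESS_ALL_REGS:	return SCSI_PR_EXCLUSIVE_ACCESS_ALL_REGS;
	}
	return 0;
}
```

The two enums are deliberately not value-compatible (the SCSI codes skip 2h and 4h), which is why the translation is an explicit switch rather than a cast.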
+47 -1
drivers/scsi/scsi_error.c
··· 536 536 */ 537 537 enum scsi_disposition scsi_check_sense(struct scsi_cmnd *scmd) 538 538 { 539 + struct request *req = scsi_cmd_to_rq(scmd); 539 540 struct scsi_device *sdev = scmd->device; 540 541 struct scsi_sense_hdr sshdr; 541 542 ··· 595 594 case ABORTED_COMMAND: 596 595 if (sshdr.asc == 0x10) /* DIF */ 597 596 return SUCCESS; 597 + 598 + /* 599 + * Check aborts due to command duration limit policy: 600 + * ABORTED COMMAND additional sense code with the 601 + * COMMAND TIMEOUT BEFORE PROCESSING or 602 + * COMMAND TIMEOUT DURING PROCESSING or 603 + * COMMAND TIMEOUT DURING PROCESSING DUE TO ERROR RECOVERY 604 + * additional sense code qualifiers. 605 + */ 606 + if (sshdr.asc == 0x2e && 607 + sshdr.ascq >= 0x01 && sshdr.ascq <= 0x03) { 608 + set_scsi_ml_byte(scmd, SCSIML_STAT_DL_TIMEOUT); 609 + req->cmd_flags |= REQ_FAILFAST_DEV; 610 + req->rq_flags |= RQF_QUIET; 611 + return SUCCESS; 612 + } 598 613 599 614 if (sshdr.asc == 0x44 && sdev->sdev_bflags & BLIST_RETRY_ITF) 600 615 return ADD_TO_MLQUEUE; ··· 708 691 } 709 692 return SUCCESS; 710 693 694 + case COMPLETED: 695 + if (sshdr.asc == 0x55 && sshdr.ascq == 0x0a) { 696 + set_scsi_ml_byte(scmd, SCSIML_STAT_DL_TIMEOUT); 697 + req->cmd_flags |= REQ_FAILFAST_DEV; 698 + req->rq_flags |= RQF_QUIET; 699 + } 700 + return SUCCESS; 701 + 711 702 default: 712 703 return SUCCESS; 713 704 } ··· 810 785 switch (get_status_byte(scmd)) { 811 786 case SAM_STAT_GOOD: 812 787 scsi_handle_queue_ramp_up(scmd->device); 788 + if (scmd->sense_buffer && SCSI_SENSE_VALID(scmd)) 789 + /* 790 + * If we have sense data, call scsi_check_sense() in 791 + * order to set the correct SCSI ML byte (if any). 792 + * No point in checking the return value, since the 793 + * command has already completed successfully. 
794 + */ 795 + scsi_check_sense(scmd); 813 796 fallthrough; 814 797 case SAM_STAT_COMMAND_TERMINATED: 815 798 return SUCCESS; ··· 1840 1807 return !!(req->cmd_flags & REQ_FAILFAST_DRIVER); 1841 1808 } 1842 1809 1810 + /* Never retry commands aborted due to a duration limit timeout */ 1811 + if (scsi_ml_byte(scmd->result) == SCSIML_STAT_DL_TIMEOUT) 1812 + return true; 1813 + 1843 1814 if (!scsi_status_is_check_condition(scmd->result)) 1844 1815 return false; 1845 1816 ··· 2003 1966 if (scmd->cmnd[0] == REPORT_LUNS) 2004 1967 scmd->device->sdev_target->expecting_lun_change = 0; 2005 1968 scsi_handle_queue_ramp_up(scmd->device); 1969 + if (scmd->sense_buffer && SCSI_SENSE_VALID(scmd)) 1970 + /* 1971 + * If we have sense data, call scsi_check_sense() in 1972 + * order to set the correct SCSI ML byte (if any). 1973 + * No point in checking the return value, since the 1974 + * command has already completed successfully. 1975 + */ 1976 + scsi_check_sense(scmd); 2006 1977 fallthrough; 2007 1978 case SAM_STAT_COMMAND_TERMINATED: 2008 1979 return SUCCESS; ··· 2210 2165 * scsi_eh_get_sense), scmd->result is already 2211 2166 * set, do not set DID_TIME_OUT. 2212 2167 */ 2213 - if (!scmd->result) 2168 + if (!scmd->result && 2169 + !(scmd->flags & SCMD_FORCE_EH_SUCCESS)) 2214 2170 scmd->result |= (DID_TIME_OUT << 16); 2215 2171 SCSI_LOG_ERROR_RECOVERY(3, 2216 2172 scmd_printk(KERN_INFO, scmd,
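The scsi_error.c hunks above teach scsi_check_sense() to recognize sense data produced when a Command Duration Limit expires: ABORTED COMMAND with ASC 2Eh and ASCQ 01h-03h, or a COMPLETED sense key with ASC 55h / ASCQ 0Ah. The predicate can be isolated as below; `sense_is_cdl_timeout()` is a made-up name for this sketch, and the real code additionally sets SCSIML_STAT_DL_TIMEOUT, REQ_FAILFAST_DEV and RQF_QUIET on a match:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* SPC sense key values used by the checks above. */
#define ABORTED_COMMAND	0x0b
#define COMPLETED	0x0f

/*
 * True when the decoded sense data indicates a command that was ended
 * by the device's duration-limit policy rather than by a real error.
 */
static bool sense_is_cdl_timeout(uint8_t sense_key, uint8_t asc, uint8_t ascq)
{
	switch (sense_key) {
	case ABORTED_COMMAND:
		/* command timeout before/during processing qualifiers */
		return asc == 0x2e && ascq >= 0x01 && ascq <= 0x03;
	case COMPLETED:
		return asc == 0x55 && ascq == 0x0a;
	default:
		return false;
	}
}
```

Matching commands are never retried (see the scsi_noretry_cmd() hunk above): the whole point of a duration limit is that a late completion is worthless to the submitter.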
+73 -64
drivers/scsi/scsi_lib.c
··· 122 122 WARN_ON_ONCE(true); 123 123 } 124 124 125 - if (msecs) { 126 - blk_mq_requeue_request(rq, false); 125 + blk_mq_requeue_request(rq, false); 126 + if (!scsi_host_in_recovery(cmd->device->host)) 127 127 blk_mq_delay_kick_requeue_list(rq->q, msecs); 128 - } else 129 - blk_mq_requeue_request(rq, true); 130 128 } 131 129 132 130 /** ··· 163 165 */ 164 166 cmd->result = 0; 165 167 166 - blk_mq_requeue_request(scsi_cmd_to_rq(cmd), true); 168 + blk_mq_requeue_request(scsi_cmd_to_rq(cmd), 169 + !scsi_host_in_recovery(cmd->device->host)); 167 170 } 168 171 169 172 /** ··· 452 453 if (!list_empty(&sdev->host->starved_list)) 453 454 scsi_starved_list_run(sdev->host); 454 455 456 + blk_mq_kick_requeue_list(q); 455 457 blk_mq_run_hw_queues(q, false); 456 458 } 457 459 ··· 503 503 504 504 static void scsi_run_queue_async(struct scsi_device *sdev) 505 505 { 506 + if (scsi_host_in_recovery(sdev->host)) 507 + return; 508 + 506 509 if (scsi_target(sdev)->single_lun || 507 510 !list_empty(&sdev->host->starved_list)) { 508 511 kblockd_schedule_work(&sdev->requeue_work); ··· 581 578 return false; 582 579 } 583 580 584 - static inline u8 get_scsi_ml_byte(int result) 585 - { 586 - return (result >> 8) & 0xff; 587 - } 588 - 589 581 /** 590 582 * scsi_result_to_blk_status - translate a SCSI result code into blk_status_t 591 583 * @result: scsi error code ··· 593 595 * Check the scsi-ml byte first in case we converted a host or status 594 596 * byte. 
595 597 */ 596 - switch (get_scsi_ml_byte(result)) { 598 + switch (scsi_ml_byte(result)) { 597 599 case SCSIML_STAT_OK: 598 600 break; 599 601 case SCSIML_STAT_RESV_CONFLICT: 600 - return BLK_STS_NEXUS; 602 + return BLK_STS_RESV_CONFLICT; 601 603 case SCSIML_STAT_NOSPC: 602 604 return BLK_STS_NOSPC; 603 605 case SCSIML_STAT_MED_ERROR: 604 606 return BLK_STS_MEDIUM; 605 607 case SCSIML_STAT_TGT_FAILURE: 606 608 return BLK_STS_TARGET; 609 + case SCSIML_STAT_DL_TIMEOUT: 610 + return BLK_STS_DURATION_LIMIT; 607 611 } 608 612 609 613 switch (host_byte(result)) { ··· 803 803 blk_stat = BLK_STS_ZONE_OPEN_RESOURCE; 804 804 } 805 805 break; 806 + case COMPLETED: 807 + fallthrough; 806 808 default: 807 809 action = ACTION_FAIL; 808 810 break; ··· 1987 1985 tag_set->flags = BLK_MQ_F_SHOULD_MERGE; 1988 1986 tag_set->flags |= 1989 1987 BLK_ALLOC_POLICY_TO_MQ_FLAG(shost->hostt->tag_alloc_policy); 1988 + if (shost->queuecommand_may_block) 1989 + tag_set->flags |= BLK_MQ_F_BLOCKING; 1990 1990 tag_set->driver_data = shost; 1991 1991 if (shost->host_tagset) 1992 1992 tag_set->flags |= BLK_MQ_F_TAG_HCTX_SHARED; ··· 2156 2152 * @sdev: SCSI device to be queried 2157 2153 * @dbd: set to prevent mode sense from returning block descriptors 2158 2154 * @modepage: mode page being requested 2155 + * @subpage: sub-page of the mode page being requested 2159 2156 * @buffer: request buffer (may not be smaller than eight bytes) 2160 2157 * @len: length of request buffer. 2161 2158 * @timeout: command timeout ··· 2168 2163 * Returns zero if successful, or a negative error number on failure 2169 2164 */ 2170 2165 int 2171 - scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage, 2166 + scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage, int subpage, 2172 2167 unsigned char *buffer, int len, int timeout, int retries, 2173 2168 struct scsi_mode_data *data, struct scsi_sense_hdr *sshdr) 2174 2169 { ··· 2188 2183 dbd = sdev->set_dbd_for_ms ? 
8 : dbd; 2189 2184 cmd[1] = dbd & 0x18; /* allows DBD and LLBA bits */ 2190 2185 cmd[2] = modepage; 2186 + cmd[3] = subpage; 2191 2187 2192 2188 sshdr = exec_args.sshdr; 2193 2189 ··· 2734 2728 blk_mq_unquiesce_queue(sdev->request_queue); 2735 2729 } 2736 2730 2737 - static void scsi_stop_queue(struct scsi_device *sdev, bool nowait) 2731 + static void scsi_stop_queue(struct scsi_device *sdev) 2738 2732 { 2739 2733 /* 2740 2734 * The atomic variable of ->queue_stopped covers that 2741 2735 * blk_mq_quiesce_queue* is balanced with blk_mq_unquiesce_queue. 2742 2736 * 2743 - * However, we still need to wait until quiesce is done 2744 - * in case that queue has been stopped. 2737 + * The caller needs to wait until quiesce is done. 2745 2738 */ 2746 - if (!cmpxchg(&sdev->queue_stopped, 0, 1)) { 2747 - if (nowait) 2748 - blk_mq_quiesce_queue_nowait(sdev->request_queue); 2749 - else 2750 - blk_mq_quiesce_queue(sdev->request_queue); 2751 - } else { 2752 - if (!nowait) 2753 - blk_mq_wait_quiesce_done(sdev->request_queue->tag_set); 2754 - } 2739 + if (!cmpxchg(&sdev->queue_stopped, 0, 1)) 2740 + blk_mq_quiesce_queue_nowait(sdev->request_queue); 2755 2741 } 2756 2742 2757 2743 /** ··· 2770 2772 * request queue. 2771 2773 */ 2772 2774 if (!ret) 2773 - scsi_stop_queue(sdev, true); 2775 + scsi_stop_queue(sdev); 2774 2776 return ret; 2775 2777 } 2776 2778 EXPORT_SYMBOL_GPL(scsi_internal_device_block_nowait); 2777 2779 2778 2780 /** 2779 - * scsi_internal_device_block - try to transition to the SDEV_BLOCK state 2781 + * scsi_device_block - try to transition to the SDEV_BLOCK state 2780 2782 * @sdev: device to block 2783 + * @data: dummy argument, ignored 2781 2784 * 2782 - * Pause SCSI command processing on the specified device and wait until all 2783 - * ongoing scsi_request_fn() / scsi_queue_rq() calls have finished. May sleep. 2784 - * 2785 - * Returns zero if successful or a negative error code upon failure. 2785 + * Pause SCSI command processing on the specified device. 
Callers must wait 2786 + * until all ongoing scsi_queue_rq() calls have finished after this function 2787 + * returns. 2786 2788 * 2787 2789 * Note: 2788 2790 * This routine transitions the device to the SDEV_BLOCK state (which must be ··· 2790 2792 * is paused until the device leaves the SDEV_BLOCK state. See also 2791 2793 * scsi_internal_device_unblock(). 2792 2794 */ 2793 - static int scsi_internal_device_block(struct scsi_device *sdev) 2795 + static void scsi_device_block(struct scsi_device *sdev, void *data) 2794 2796 { 2795 2797 int err; 2798 + enum scsi_device_state state; 2796 2799 2797 2800 mutex_lock(&sdev->state_mutex); 2798 2801 err = __scsi_internal_device_block_nowait(sdev); 2802 + state = sdev->sdev_state; 2799 2803 if (err == 0) 2800 - scsi_stop_queue(sdev, false); 2804 + /* 2805 + * scsi_stop_queue() must be called with the state_mutex 2806 + * held. Otherwise a simultaneous scsi_start_queue() call 2807 + * might unquiesce the queue before we quiesce it. 2808 + */ 2809 + scsi_stop_queue(sdev); 2810 + 2801 2811 mutex_unlock(&sdev->state_mutex); 2802 2812 2803 - return err; 2813 + WARN_ONCE(err, "%s: failed to block %s in state %d\n", 2814 + __func__, dev_name(&sdev->sdev_gendev), state); 2804 2815 } 2805 2816 2806 2817 /** ··· 2892 2885 return ret; 2893 2886 } 2894 2887 2895 - static void 2896 - device_block(struct scsi_device *sdev, void *data) 2897 - { 2898 - int ret; 2899 - 2900 - ret = scsi_internal_device_block(sdev); 2901 - 2902 - WARN_ONCE(ret, "scsi_internal_device_block(%s) failed: ret = %d\n", 2903 - dev_name(&sdev->sdev_gendev), ret); 2904 - } 2905 - 2906 2888 static int 2907 2889 target_block(struct device *dev, void *data) 2908 2890 { 2909 2891 if (scsi_is_target_device(dev)) 2910 2892 starget_for_each_device(to_scsi_target(dev), NULL, 2911 - device_block); 2893 + scsi_device_block); 2912 2894 return 0; 2913 2895 } 2914 2896 2897 + /** 2898 + * scsi_block_targets - transition all SCSI child devices to SDEV_BLOCK state 2899 + * @dev: a 
parent device of one or more scsi_target devices 2900 + * @shost: the Scsi_Host to which this device belongs 2901 + * 2902 + * Iterate over all children of @dev, which should be scsi_target devices, 2903 + * and switch all subordinate scsi devices to SDEV_BLOCK state. Wait for 2904 + * ongoing scsi_queue_rq() calls to finish. May sleep. 2905 + * 2906 + * Note: 2907 + * @dev must not itself be a scsi_target device. 2908 + */ 2915 2909 void 2916 - scsi_target_block(struct device *dev) 2910 + scsi_block_targets(struct Scsi_Host *shost, struct device *dev) 2917 2911 { 2918 - if (scsi_is_target_device(dev)) 2919 - starget_for_each_device(to_scsi_target(dev), NULL, 2920 - device_block); 2921 - else 2922 - device_for_each_child(dev, NULL, target_block); 2912 + WARN_ON_ONCE(scsi_is_target_device(dev)); 2913 + device_for_each_child(dev, NULL, target_block); 2914 + blk_mq_wait_quiesce_done(&shost->tag_set); 2923 2915 } 2924 - EXPORT_SYMBOL_GPL(scsi_target_block); 2916 + EXPORT_SYMBOL_GPL(scsi_block_targets); 2925 2917 2926 2918 static void 2927 2919 device_unblock(struct scsi_device *sdev, void *data) ··· 2948 2942 } 2949 2943 EXPORT_SYMBOL_GPL(scsi_target_unblock); 2950 2944 2945 + /** 2946 + * scsi_host_block - Try to transition all logical units to the SDEV_BLOCK state 2947 + * @shost: device to block 2948 + * 2949 + * Pause SCSI command processing for all logical units associated with the SCSI 2950 + * host and wait until pending scsi_queue_rq() calls have finished. 2951 + * 2952 + * Returns zero if successful or a negative error code upon failure. 
2953 + */ 2951 2954 int 2952 2955 scsi_host_block(struct Scsi_Host *shost) 2953 2956 { 2954 2957 struct scsi_device *sdev; 2955 - int ret = 0; 2958 + int ret; 2956 2959 2957 2960 /* 2958 2961 * Call scsi_internal_device_block_nowait so we can avoid ··· 2973 2958 mutex_unlock(&sdev->state_mutex); 2974 2959 if (ret) { 2975 2960 scsi_device_put(sdev); 2976 - break; 2961 + return ret; 2977 2962 } 2978 2963 } 2979 2964 2980 - /* 2981 - * SCSI never enables blk-mq's BLK_MQ_F_BLOCKING flag so 2982 - * calling synchronize_rcu() once is enough. 2983 - */ 2984 - WARN_ON_ONCE(shost->tag_set.flags & BLK_MQ_F_BLOCKING); 2965 + /* Wait for ongoing scsi_queue_rq() calls to finish. */ 2966 + blk_mq_wait_quiesce_done(&shost->tag_set); 2985 2967 2986 - if (!ret) 2987 - synchronize_rcu(); 2988 - 2989 - return ret; 2968 + return 0; 2990 2969 } 2991 2970 EXPORT_SYMBOL_GPL(scsi_host_block); 2992 2971
+6
drivers/scsi/scsi_priv.h
··· 27 27 SCSIML_STAT_NOSPC = 0x02, /* Space allocation on the dev failed */ 28 28 SCSIML_STAT_MED_ERROR = 0x03, /* Medium error */ 29 29 SCSIML_STAT_TGT_FAILURE = 0x04, /* Permanent target failure */ 30 + SCSIML_STAT_DL_TIMEOUT = 0x05, /* Command Duration Limit timeout */ 30 31 }; 32 + 33 + static inline u8 scsi_ml_byte(int result) 34 + { 35 + return (result >> 8) & 0xff; 36 + } 31 37 32 38 /* 33 39 * Scsi Error Handler Flags
+3
drivers/scsi/scsi_scan.c
··· 1087 1087 if (sdev->scsi_level >= SCSI_3) 1088 1088 scsi_attach_vpd(sdev); 1089 1089 1090 + scsi_cdl_check(sdev); 1091 + 1090 1092 sdev->max_queue_depth = sdev->queue_depth; 1091 1093 WARN_ON_ONCE(sdev->max_queue_depth > sdev->budget_map.depth); 1092 1094 sdev->sdev_bflags = *bflags; ··· 1626 1624 device_lock(dev); 1627 1625 1628 1626 scsi_attach_vpd(sdev); 1627 + scsi_cdl_check(sdev); 1629 1628 1630 1629 if (sdev->handler && sdev->handler->rescan) 1631 1630 sdev->handler->rescan(sdev);
+30
drivers/scsi/scsi_sysfs.c
··· 670 670 sdev_rd_attr (vendor, "%.8s\n"); 671 671 sdev_rd_attr (model, "%.16s\n"); 672 672 sdev_rd_attr (rev, "%.4s\n"); 673 + sdev_rd_attr (cdl_supported, "%d\n"); 673 674 674 675 static ssize_t 675 676 sdev_show_device_busy(struct device *dev, struct device_attribute *attr, ··· 1222 1221 sdev_show_queue_ramp_up_period, 1223 1222 sdev_store_queue_ramp_up_period); 1224 1223 1224 + static ssize_t sdev_show_cdl_enable(struct device *dev, 1225 + struct device_attribute *attr, char *buf) 1226 + { 1227 + struct scsi_device *sdev = to_scsi_device(dev); 1228 + 1229 + return sysfs_emit(buf, "%d\n", (int)sdev->cdl_enable); 1230 + } 1231 + 1232 + static ssize_t sdev_store_cdl_enable(struct device *dev, 1233 + struct device_attribute *attr, 1234 + const char *buf, size_t count) 1235 + { 1236 + int ret; 1237 + bool v; 1238 + 1239 + if (kstrtobool(buf, &v)) 1240 + return -EINVAL; 1241 + 1242 + ret = scsi_cdl_enable(to_scsi_device(dev), v); 1243 + if (ret) 1244 + return ret; 1245 + 1246 + return count; 1247 + } 1248 + static DEVICE_ATTR(cdl_enable, S_IRUGO | S_IWUSR, 1249 + sdev_show_cdl_enable, sdev_store_cdl_enable); 1250 + 1225 1251 static umode_t scsi_sdev_attr_is_visible(struct kobject *kobj, 1226 1252 struct attribute *attr, int i) 1227 1253 { ··· 1328 1300 &dev_attr_preferred_path.attr, 1329 1301 #endif 1330 1302 &dev_attr_queue_ramp_up_period.attr, 1303 + &dev_attr_cdl_supported.attr, 1304 + &dev_attr_cdl_enable.attr, 1331 1305 REF_EVT(media_change), 1332 1306 REF_EVT(inquiry_change_reported), 1333 1307 REF_EVT(capacity_change_reported),
+1 -1
drivers/scsi/scsi_transport_fc.c
··· 3451 3451 3452 3452 spin_unlock_irqrestore(shost->host_lock, flags); 3453 3453 3454 - scsi_target_block(&rport->dev); 3454 + scsi_block_targets(shost, &rport->dev); 3455 3455 3456 3456 /* see if we need to kill io faster than waiting for device loss */ 3457 3457 if ((rport->fast_io_fail_tmo != -1) &&
+2 -1
drivers/scsi/scsi_transport_iscsi.c
··· 1943 1943 struct iscsi_cls_session *session = 1944 1944 container_of(work, struct iscsi_cls_session, 1945 1945 block_work); 1946 + struct Scsi_Host *shost = iscsi_session_to_shost(session); 1946 1947 unsigned long flags; 1947 1948 1948 1949 ISCSI_DBG_TRANS_SESSION(session, "Blocking session\n"); 1949 1950 spin_lock_irqsave(&session->lock, flags); 1950 1951 session->state = ISCSI_SESSION_FAILED; 1951 1952 spin_unlock_irqrestore(&session->lock, flags); 1952 - scsi_target_block(&session->dev); 1953 + scsi_block_targets(shost, &session->dev); 1953 1954 ISCSI_DBG_TRANS_SESSION(session, "Completed SCSI target blocking\n"); 1954 1955 if (session->recovery_tmo >= 0) 1955 1956 queue_delayed_work(session->workq,
+1 -1
drivers/scsi/scsi_transport_sas.c
··· 1245 1245 if (!buffer) 1246 1246 return -ENOMEM; 1247 1247 1248 - error = scsi_mode_sense(sdev, 1, 0x19, buffer, BUF_SIZE, 30*HZ, 3, 1248 + error = scsi_mode_sense(sdev, 1, 0x19, 0, buffer, BUF_SIZE, 30*HZ, 3, 1249 1249 &mode_data, NULL); 1250 1250 1251 1251 if (error)
+3 -3
drivers/scsi/scsi_transport_srp.c
··· 396 396 } 397 397 398 398 /* 399 - * scsi_target_block() must have been called before this function is 399 + * scsi_block_targets() must have been called before this function is 400 400 * called to guarantee that no .queuecommand() calls are in progress. 401 401 */ 402 402 static void __rport_fail_io_fast(struct srp_rport *rport) ··· 480 480 srp_rport_set_state(rport, SRP_RPORT_BLOCKED) == 0) { 481 481 pr_debug("%s new state: %d\n", dev_name(&shost->shost_gendev), 482 482 rport->state); 483 - scsi_target_block(&shost->shost_gendev); 483 + scsi_block_targets(shost, &shost->shost_gendev); 484 484 if (fast_io_fail_tmo >= 0) 485 485 queue_delayed_work(system_long_wq, 486 486 &rport->fast_io_fail_work, ··· 548 548 * later is ok though, scsi_internal_device_unblock_nowait() 549 549 * treats SDEV_TRANSPORT_OFFLINE like SDEV_BLOCK. 550 550 */ 551 - scsi_target_block(&shost->shost_gendev); 551 + scsi_block_targets(shost, &shost->shost_gendev); 552 552 res = rport->state != SRP_RPORT_LOST ? i->f->reconnect(rport) : -ENODEV; 553 553 pr_debug("%s (state %d): transport.reconnect() returned %d\n", 554 554 dev_name(&shost->shost_gendev), rport->state, res);
+145 -44
drivers/scsi/sd.c
··· 67 67 #include <scsi/scsi_host.h> 68 68 #include <scsi/scsi_ioctl.h> 69 69 #include <scsi/scsicam.h> 70 + #include <scsi/scsi_common.h> 70 71 71 72 #include "sd.h" 72 73 #include "scsi_priv.h" ··· 184 183 return count; 185 184 } 186 185 187 - if (scsi_mode_sense(sdp, 0x08, 8, buffer, sizeof(buffer), SD_TIMEOUT, 186 + if (scsi_mode_sense(sdp, 0x08, 8, 0, buffer, sizeof(buffer), SD_TIMEOUT, 188 187 sdkp->max_retries, &data, NULL)) 189 188 return -EINVAL; 190 189 len = min_t(size_t, sizeof(buffer), data.length - data.header_length - ··· 1042 1041 1043 1042 static blk_status_t sd_setup_rw32_cmnd(struct scsi_cmnd *cmd, bool write, 1044 1043 sector_t lba, unsigned int nr_blocks, 1045 - unsigned char flags) 1044 + unsigned char flags, unsigned int dld) 1046 1045 { 1047 1046 cmd->cmd_len = SD_EXT_CDB_SIZE; 1048 1047 cmd->cmnd[0] = VARIABLE_LENGTH_CMD; 1049 1048 cmd->cmnd[7] = 0x18; /* Additional CDB len */ 1050 1049 cmd->cmnd[9] = write ? WRITE_32 : READ_32; 1051 1050 cmd->cmnd[10] = flags; 1051 + cmd->cmnd[11] = dld & 0x07; 1052 1052 put_unaligned_be64(lba, &cmd->cmnd[12]); 1053 1053 put_unaligned_be32(lba, &cmd->cmnd[20]); /* Expected Indirect LBA */ 1054 1054 put_unaligned_be32(nr_blocks, &cmd->cmnd[28]); ··· 1059 1057 1060 1058 static blk_status_t sd_setup_rw16_cmnd(struct scsi_cmnd *cmd, bool write, 1061 1059 sector_t lba, unsigned int nr_blocks, 1062 - unsigned char flags) 1060 + unsigned char flags, unsigned int dld) 1063 1061 { 1064 1062 cmd->cmd_len = 16; 1065 1063 cmd->cmnd[0] = write ? WRITE_16 : READ_16; 1066 - cmd->cmnd[1] = flags; 1067 - cmd->cmnd[14] = 0; 1064 + cmd->cmnd[1] = flags | ((dld >> 2) & 0x01); 1065 + cmd->cmnd[14] = (dld & 0x03) << 6; 1068 1066 cmd->cmnd[15] = 0; 1069 1067 put_unaligned_be64(lba, &cmd->cmnd[2]); 1070 1068 put_unaligned_be32(nr_blocks, &cmd->cmnd[10]); ··· 1116 1114 return BLK_STS_OK; 1117 1115 } 1118 1116 1117 + /* 1118 + * Check if a command has a duration limit set. 
If it does, and the target 1119 + * device supports CDL and the feature is enabled, return the limit 1120 + * descriptor index to use. Return 0 (no limit) otherwise. 1121 + */ 1122 + static int sd_cdl_dld(struct scsi_disk *sdkp, struct scsi_cmnd *scmd) 1123 + { 1124 + struct scsi_device *sdp = sdkp->device; 1125 + int hint; 1126 + 1127 + if (!sdp->cdl_supported || !sdp->cdl_enable) 1128 + return 0; 1129 + 1130 + /* 1131 + * Use "no limit" if the request ioprio does not specify a duration 1132 + * limit hint. 1133 + */ 1134 + hint = IOPRIO_PRIO_HINT(req_get_ioprio(scsi_cmd_to_rq(scmd))); 1135 + if (hint < IOPRIO_HINT_DEV_DURATION_LIMIT_1 || 1136 + hint > IOPRIO_HINT_DEV_DURATION_LIMIT_7) 1137 + return 0; 1138 + 1139 + return (hint - IOPRIO_HINT_DEV_DURATION_LIMIT_1) + 1; 1140 + } 1141 + 1119 1142 static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd) 1120 1143 { 1121 1144 struct request *rq = scsi_cmd_to_rq(cmd); ··· 1152 1125 unsigned int mask = logical_to_sectors(sdp, 1) - 1; 1153 1126 bool write = rq_data_dir(rq) == WRITE; 1154 1127 unsigned char protect, fua; 1128 + unsigned int dld; 1155 1129 blk_status_t ret; 1156 1130 unsigned int dif; 1157 1131 bool dix; ··· 1202 1174 fua = rq->cmd_flags & REQ_FUA ? 
0x8 : 0; 1203 1175 dix = scsi_prot_sg_count(cmd); 1204 1176 dif = scsi_host_dif_capable(cmd->device->host, sdkp->protection_type); 1177 + dld = sd_cdl_dld(sdkp, cmd); 1205 1178 1206 1179 if (dif || dix) 1207 1180 protect = sd_setup_protect_cmnd(cmd, dix, dif); ··· 1211 1182 1212 1183 if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) { 1213 1184 ret = sd_setup_rw32_cmnd(cmd, write, lba, nr_blocks, 1214 - protect | fua); 1185 + protect | fua, dld); 1215 1186 } else if (sdp->use_16_for_rw || (nr_blocks > 0xffff)) { 1216 1187 ret = sd_setup_rw16_cmnd(cmd, write, lba, nr_blocks, 1217 - protect | fua); 1188 + protect | fua, dld); 1218 1189 } else if ((nr_blocks > 0xff) || (lba > 0x1fffff) || 1219 1190 sdp->use_10_for_rw || protect) { 1220 1191 ret = sd_setup_rw10_cmnd(cmd, write, lba, nr_blocks, ··· 1719 1690 return ret; 1720 1691 } 1721 1692 1722 - static char sd_pr_type(enum pr_type type) 1723 - { 1724 - switch (type) { 1725 - case PR_WRITE_EXCLUSIVE: 1726 - return 0x01; 1727 - case PR_EXCLUSIVE_ACCESS: 1728 - return 0x03; 1729 - case PR_WRITE_EXCLUSIVE_REG_ONLY: 1730 - return 0x05; 1731 - case PR_EXCLUSIVE_ACCESS_REG_ONLY: 1732 - return 0x06; 1733 - case PR_WRITE_EXCLUSIVE_ALL_REGS: 1734 - return 0x07; 1735 - case PR_EXCLUSIVE_ACCESS_ALL_REGS: 1736 - return 0x08; 1737 - default: 1738 - return 0; 1739 - } 1740 - }; 1741 - 1742 1693 static int sd_scsi_to_pr_err(struct scsi_sense_hdr *sshdr, int result) 1743 1694 { 1744 1695 switch (host_byte(result)) { ··· 1749 1740 } 1750 1741 } 1751 1742 1752 - static int sd_pr_command(struct block_device *bdev, u8 sa, 1753 - u64 key, u64 sa_key, u8 type, u8 flags) 1743 + static int sd_pr_in_command(struct block_device *bdev, u8 sa, 1744 + unsigned char *data, int data_len) 1745 + { 1746 + struct scsi_disk *sdkp = scsi_disk(bdev->bd_disk); 1747 + struct scsi_device *sdev = sdkp->device; 1748 + struct scsi_sense_hdr sshdr; 1749 + u8 cmd[10] = { PERSISTENT_RESERVE_IN, sa }; 1750 + const struct scsi_exec_args exec_args = { 
1751 + .sshdr = &sshdr, 1752 + }; 1753 + int result; 1754 + 1755 + put_unaligned_be16(data_len, &cmd[7]); 1756 + 1757 + result = scsi_execute_cmd(sdev, cmd, REQ_OP_DRV_IN, data, data_len, 1758 + SD_TIMEOUT, sdkp->max_retries, &exec_args); 1759 + if (scsi_status_is_check_condition(result) && 1760 + scsi_sense_valid(&sshdr)) { 1761 + sdev_printk(KERN_INFO, sdev, "PR command failed: %d\n", result); 1762 + scsi_print_sense_hdr(sdev, NULL, &sshdr); 1763 + } 1764 + 1765 + if (result <= 0) 1766 + return result; 1767 + 1768 + return sd_scsi_to_pr_err(&sshdr, result); 1769 + } 1770 + 1771 + static int sd_pr_read_keys(struct block_device *bdev, struct pr_keys *keys_info) 1772 + { 1773 + int result, i, data_offset, num_copy_keys; 1774 + u32 num_keys = keys_info->num_keys; 1775 + int data_len = num_keys * 8 + 8; 1776 + u8 *data; 1777 + 1778 + data = kzalloc(data_len, GFP_KERNEL); 1779 + if (!data) 1780 + return -ENOMEM; 1781 + 1782 + result = sd_pr_in_command(bdev, READ_KEYS, data, data_len); 1783 + if (result) 1784 + goto free_data; 1785 + 1786 + keys_info->generation = get_unaligned_be32(&data[0]); 1787 + keys_info->num_keys = get_unaligned_be32(&data[4]) / 8; 1788 + 1789 + data_offset = 8; 1790 + num_copy_keys = min(num_keys, keys_info->num_keys); 1791 + 1792 + for (i = 0; i < num_copy_keys; i++) { 1793 + keys_info->keys[i] = get_unaligned_be64(&data[data_offset]); 1794 + data_offset += 8; 1795 + } 1796 + 1797 + free_data: 1798 + kfree(data); 1799 + return result; 1800 + } 1801 + 1802 + static int sd_pr_read_reservation(struct block_device *bdev, 1803 + struct pr_held_reservation *rsv) 1804 + { 1805 + struct scsi_disk *sdkp = scsi_disk(bdev->bd_disk); 1806 + struct scsi_device *sdev = sdkp->device; 1807 + u8 data[24] = { }; 1808 + int result, len; 1809 + 1810 + result = sd_pr_in_command(bdev, READ_RESERVATION, data, sizeof(data)); 1811 + if (result) 1812 + return result; 1813 + 1814 + len = get_unaligned_be32(&data[4]); 1815 + if (!len) 1816 + return 0; 1817 + 1818 + /* 
Make sure we have at least the key and type */ 1819 + if (len < 14) { 1820 + sdev_printk(KERN_INFO, sdev, 1821 + "READ RESERVATION failed due to short return buffer of %d bytes\n", 1822 + len); 1823 + return -EINVAL; 1824 + } 1825 + 1826 + rsv->generation = get_unaligned_be32(&data[0]); 1827 + rsv->key = get_unaligned_be64(&data[8]); 1828 + rsv->type = scsi_pr_type_to_block(data[21] & 0x0f); 1829 + return 0; 1830 + } 1831 + 1832 + static int sd_pr_out_command(struct block_device *bdev, u8 sa, u64 key, 1833 + u64 sa_key, enum scsi_pr_type type, u8 flags) 1754 1834 { 1755 1835 struct scsi_disk *sdkp = scsi_disk(bdev->bd_disk); 1756 1836 struct scsi_device *sdev = sdkp->device; ··· 1881 1783 { 1882 1784 if (flags & ~PR_FL_IGNORE_KEY) 1883 1785 return -EOPNOTSUPP; 1884 - return sd_pr_command(bdev, (flags & PR_FL_IGNORE_KEY) ? 0x06 : 0x00, 1786 + return sd_pr_out_command(bdev, (flags & PR_FL_IGNORE_KEY) ? 0x06 : 0x00, 1885 1787 old_key, new_key, 0, 1886 1788 (1 << 0) /* APTPL */); 1887 1789 } ··· 1891 1793 { 1892 1794 if (flags) 1893 1795 return -EOPNOTSUPP; 1894 - return sd_pr_command(bdev, 0x01, key, 0, sd_pr_type(type), 0); 1796 + return sd_pr_out_command(bdev, 0x01, key, 0, 1797 + block_pr_type_to_scsi(type), 0); 1895 1798 } 1896 1799 1897 1800 static int sd_pr_release(struct block_device *bdev, u64 key, enum pr_type type) 1898 1801 { 1899 - return sd_pr_command(bdev, 0x02, key, 0, sd_pr_type(type), 0); 1802 + return sd_pr_out_command(bdev, 0x02, key, 0, 1803 + block_pr_type_to_scsi(type), 0); 1900 1804 } 1901 1805 1902 1806 static int sd_pr_preempt(struct block_device *bdev, u64 old_key, u64 new_key, 1903 1807 enum pr_type type, bool abort) 1904 1808 { 1905 - return sd_pr_command(bdev, abort ? 0x05 : 0x04, old_key, new_key, 1906 - sd_pr_type(type), 0); 1809 + return sd_pr_out_command(bdev, abort ? 
0x05 : 0x04, old_key, new_key, 1810 + block_pr_type_to_scsi(type), 0); 1907 1811 } 1908 1812 1909 1813 static int sd_pr_clear(struct block_device *bdev, u64 key) 1910 1814 { 1911 - return sd_pr_command(bdev, 0x03, key, 0, 0, 0); 1815 + return sd_pr_out_command(bdev, 0x03, key, 0, 0, 0); 1912 1816 } 1913 1817 1914 1818 static const struct pr_ops sd_pr_ops = { ··· 1919 1819 .pr_release = sd_pr_release, 1920 1820 .pr_preempt = sd_pr_preempt, 1921 1821 .pr_clear = sd_pr_clear, 1822 + .pr_read_keys = sd_pr_read_keys, 1823 + .pr_read_reservation = sd_pr_read_reservation, 1922 1824 }; 1923 1825 1924 1826 static void scsi_disk_free_disk(struct gendisk *disk) ··· 2710 2608 if (sdkp->device->use_10_for_ms && len < 8) 2711 2609 len = 8; 2712 2610 2713 - return scsi_mode_sense(sdkp->device, dbd, modepage, buffer, len, 2714 - SD_TIMEOUT, sdkp->max_retries, data, 2715 - sshdr); 2611 + return scsi_mode_sense(sdkp->device, dbd, modepage, 0, buffer, len, 2612 + SD_TIMEOUT, sdkp->max_retries, data, sshdr); 2716 2613 } 2717 2614 2718 2615 /* ··· 2968 2867 if (sdkp->protection_type == 0) 2969 2868 return; 2970 2869 2971 - res = scsi_mode_sense(sdp, 1, 0x0a, buffer, 36, SD_TIMEOUT, 2870 + res = scsi_mode_sense(sdp, 1, 0x0a, 0, buffer, 36, SD_TIMEOUT, 2972 2871 sdkp->max_retries, &data, &sshdr); 2973 2872 2974 2873 if (res < 0 || !data.header_length || ··· 3157 3056 return; 3158 3057 } 3159 3058 3160 - if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, INQUIRY) < 0) { 3059 + if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, INQUIRY, 0) < 0) { 3161 3060 struct scsi_vpd *vpd; 3162 3061 3163 3062 sdev->no_report_opcodes = 1; ··· 3173 3072 rcu_read_unlock(); 3174 3073 } 3175 3074 3176 - if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, WRITE_SAME_16) == 1) 3075 + if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, WRITE_SAME_16, 0) == 1) 3177 3076 sdkp->ws16 = 1; 3178 3077 3179 - if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, WRITE_SAME) == 1) 3078 + if (scsi_report_opcode(sdev, buffer, 
SD_BUF_SIZE, WRITE_SAME, 0) == 1) 3180 3079 sdkp->ws10 = 1; 3181 3080 } 3182 3081 ··· 3188 3087 return; 3189 3088 3190 3089 if (scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, 3191 - SECURITY_PROTOCOL_IN) == 1 && 3090 + SECURITY_PROTOCOL_IN, 0) == 1 && 3192 3091 scsi_report_opcode(sdev, buffer, SD_BUF_SIZE, 3193 - SECURITY_PROTOCOL_OUT) == 1) 3092 + SECURITY_PROTOCOL_OUT, 0) == 1) 3194 3093 sdkp->security = 1; 3195 3094 } 3196 3095
+1 -1
drivers/scsi/sd_zbc.c
··· 889 889 } 890 890 891 891 max_append = min_t(u32, logical_to_sectors(sdkp->device, zone_blocks), 892 - q->limits.max_segments << (PAGE_SHIFT - 9)); 892 + q->limits.max_segments << PAGE_SECTORS_SHIFT); 893 893 max_append = min_t(u32, max_append, queue_max_hw_sectors(q)); 894 894 895 895 blk_queue_max_zone_append_sectors(q, max_append);
+1 -1
drivers/scsi/sg.c
··· 71 71 72 72 #define SG_ALLOW_DIO_DEF 0 73 73 74 - #define SG_MAX_DEVS 32768 74 + #define SG_MAX_DEVS (1 << MINORBITS) 75 75 76 76 /* SG_MAX_CDB_SIZE should be 260 (spc4r37 section 3.1.30) however the type 77 77 * of sg_io_hdr::cmd_len can only represent 255. All SCSI commands greater
+1 -1
drivers/scsi/smartpqi/Kconfig
··· 1 1 # 2 2 # Kernel configuration file for the SMARTPQI 3 3 # 4 - # Copyright (c) 2019-2022 Microchip Technology Inc. and its subsidiaries 4 + # Copyright (c) 2019-2023 Microchip Technology Inc. and its subsidiaries 5 5 # Copyright (c) 2017-2018 Microsemi Corporation 6 6 # Copyright (c) 2016 Microsemi Corporation 7 7 # Copyright (c) 2016 PMC-Sierra, Inc.
+4 -2
drivers/scsi/smartpqi/smartpqi.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * driver for Microchip PQI-based storage controllers 4 - * Copyright (c) 2019-2022 Microchip Technology Inc. and its subsidiaries 4 + * Copyright (c) 2019-2023 Microchip Technology Inc. and its subsidiaries 5 5 * Copyright (c) 2016-2018 Microsemi Corporation 6 6 * Copyright (c) 2016 PMC-Sierra, Inc. 7 7 * ··· 1108 1108 u8 volume_offline : 1; 1109 1109 u8 rescan : 1; 1110 1110 u8 ignore_device : 1; 1111 + u8 erase_in_progress : 1; 1111 1112 bool aio_enabled; /* only valid for physical disks */ 1112 1113 bool in_remove; 1113 1114 bool device_offline; ··· 1148 1147 1149 1148 struct pqi_stream_data stream_data[NUM_STREAMS_PER_LUN]; 1150 1149 atomic_t scsi_cmds_outstanding[PQI_MAX_LUNS_PER_DEVICE]; 1151 - atomic_t raid_bypass_cnt; 1150 + unsigned int raid_bypass_cnt; 1152 1151 }; 1153 1152 1154 1153 /* VPD inquiry pages */ ··· 1358 1357 u32 max_write_raid_5_6; 1359 1358 u32 max_write_raid_1_10_2drive; 1360 1359 u32 max_write_raid_1_10_3drive; 1360 + int numa_node; 1361 1361 1362 1362 struct list_head scsi_device_list; 1363 1363 spinlock_t scsi_device_list_lock;
+155 -131
drivers/scsi/smartpqi/smartpqi_init.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * driver for Microchip PQI-based storage controllers 4 - * Copyright (c) 2019-2022 Microchip Technology Inc. and its subsidiaries 4 + * Copyright (c) 2019-2023 Microchip Technology Inc. and its subsidiaries 5 5 * Copyright (c) 2016-2018 Microsemi Corporation 6 6 * Copyright (c) 2016 PMC-Sierra, Inc. 7 7 * ··· 33 33 #define BUILD_TIMESTAMP 34 34 #endif 35 35 36 - #define DRIVER_VERSION "2.1.20-035" 36 + #define DRIVER_VERSION "2.1.22-040" 37 37 #define DRIVER_MAJOR 2 38 38 #define DRIVER_MINOR 1 39 - #define DRIVER_RELEASE 20 40 - #define DRIVER_REVISION 35 39 + #define DRIVER_RELEASE 22 40 + #define DRIVER_REVISION 40 41 41 42 42 #define DRIVER_NAME "Microchip SmartPQI Driver (v" \ 43 43 DRIVER_VERSION BUILD_TIMESTAMP ")" ··· 519 519 writeb(status, ctrl_info->soft_reset_status); 520 520 } 521 521 522 + static inline bool pqi_is_io_high_priority(struct pqi_scsi_dev *device, struct scsi_cmnd *scmd) 523 + { 524 + bool io_high_prio; 525 + int priority_class; 526 + 527 + io_high_prio = false; 528 + 529 + if (device->ncq_prio_enable) { 530 + priority_class = 531 + IOPRIO_PRIO_CLASS(req_get_ioprio(scsi_cmd_to_rq(scmd))); 532 + if (priority_class == IOPRIO_CLASS_RT) { 533 + /* Set NCQ priority for read/write commands. 
*/ 534 + switch (scmd->cmnd[0]) { 535 + case WRITE_16: 536 + case READ_16: 537 + case WRITE_12: 538 + case READ_12: 539 + case WRITE_10: 540 + case READ_10: 541 + case WRITE_6: 542 + case READ_6: 543 + io_high_prio = true; 544 + break; 545 + } 546 + } 547 + } 548 + 549 + return io_high_prio; 550 + } 551 + 522 552 static int pqi_map_single(struct pci_dev *pci_dev, 523 553 struct pqi_sg_descriptor *sg_descriptor, void *buffer, 524 554 size_t buffer_length, enum dma_data_direction data_direction) ··· 608 578 cdb = request->cdb; 609 579 610 580 switch (cmd) { 611 - case TEST_UNIT_READY: 612 - request->data_direction = SOP_READ_FLAG; 613 - cdb[0] = TEST_UNIT_READY; 614 - break; 615 581 case INQUIRY: 616 582 request->data_direction = SOP_READ_FLAG; 617 583 cdb[0] = INQUIRY; ··· 734 708 } 735 709 } 736 710 737 - pqi_reinit_io_request(io_request); 711 + if (io_request) 712 + pqi_reinit_io_request(io_request); 738 713 739 714 return io_request; 740 715 } ··· 1615 1588 1616 1589 #define PQI_DEVICE_NCQ_PRIO_SUPPORTED 0x01 1617 1590 #define PQI_DEVICE_PHY_MAP_SUPPORTED 0x10 1591 + #define PQI_DEVICE_ERASE_IN_PROGRESS 0x10 1618 1592 1619 1593 static int pqi_get_physical_device_info(struct pqi_ctrl_info *ctrl_info, 1620 1594 struct pqi_scsi_dev *device, ··· 1664 1636 ((get_unaligned_le32(&id_phys->misc_drive_flags) >> 16) & 1665 1637 PQI_DEVICE_NCQ_PRIO_SUPPORTED); 1666 1638 1639 + device->erase_in_progress = !!(get_unaligned_le16(&id_phys->extra_physical_drive_flags) & PQI_DEVICE_ERASE_IN_PROGRESS); 1640 + 1667 1641 return 0; 1668 1642 } 1669 1643 ··· 1711 1681 1712 1682 /* 1713 1683 * Prevent adding drive to OS for some corner cases such as a drive 1714 - * undergoing a sanitize operation. Some OSes will continue to poll 1684 + * undergoing a sanitize (erase) operation. Some OSes will continue to poll 1715 1685 * the drive until the sanitize completes, which can take hours, 1716 1686 * resulting in long bootup delays. 
Commands such as TUR, READ_CAP 1717 1687 * are allowed, but READ/WRITE cause check condition. So the OS ··· 1719 1689 * Note: devices that have completed sanitize must be re-enabled 1720 1690 * using the management utility. 1721 1691 */ 1722 - static bool pqi_keep_device_offline(struct pqi_ctrl_info *ctrl_info, 1723 - struct pqi_scsi_dev *device) 1692 + static inline bool pqi_keep_device_offline(struct pqi_scsi_dev *device) 1724 1693 { 1725 - u8 scsi_status; 1726 - int rc; 1727 - enum dma_data_direction dir; 1728 - char *buffer; 1729 - int buffer_length = 64; 1730 - size_t sense_data_length; 1731 - struct scsi_sense_hdr sshdr; 1732 - struct pqi_raid_path_request request; 1733 - struct pqi_raid_error_info error_info; 1734 - bool offline = false; /* Assume keep online */ 1735 - 1736 - /* Do not check controllers. */ 1737 - if (pqi_is_hba_lunid(device->scsi3addr)) 1738 - return false; 1739 - 1740 - /* Do not check LVs. */ 1741 - if (pqi_is_logical_device(device)) 1742 - return false; 1743 - 1744 - buffer = kmalloc(buffer_length, GFP_KERNEL); 1745 - if (!buffer) 1746 - return false; /* Assume not offline */ 1747 - 1748 - /* Check for SANITIZE in progress using TUR */ 1749 - rc = pqi_build_raid_path_request(ctrl_info, &request, 1750 - TEST_UNIT_READY, RAID_CTLR_LUNID, buffer, 1751 - buffer_length, 0, &dir); 1752 - if (rc) 1753 - goto out; /* Assume not offline */ 1754 - 1755 - memcpy(request.lun_number, device->scsi3addr, sizeof(request.lun_number)); 1756 - 1757 - rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, 0, &error_info); 1758 - 1759 - if (rc) 1760 - goto out; /* Assume not offline */ 1761 - 1762 - scsi_status = error_info.status; 1763 - sense_data_length = get_unaligned_le16(&error_info.sense_data_length); 1764 - if (sense_data_length == 0) 1765 - sense_data_length = 1766 - get_unaligned_le16(&error_info.response_data_length); 1767 - if (sense_data_length) { 1768 - if (sense_data_length > sizeof(error_info.data)) 1769 - sense_data_length = 
sizeof(error_info.data); 1770 - 1771 - /* 1772 - * Check for sanitize in progress: asc:0x04, ascq: 0x1b 1773 - */ 1774 - if (scsi_status == SAM_STAT_CHECK_CONDITION && 1775 - scsi_normalize_sense(error_info.data, 1776 - sense_data_length, &sshdr) && 1777 - sshdr.sense_key == NOT_READY && 1778 - sshdr.asc == 0x04 && 1779 - sshdr.ascq == 0x1b) { 1780 - device->device_offline = true; 1781 - offline = true; 1782 - goto out; /* Keep device offline */ 1783 - } 1784 - } 1785 - 1786 - out: 1787 - kfree(buffer); 1788 - return offline; 1694 + return device->erase_in_progress; 1789 1695 } 1790 1696 1791 1697 static int pqi_get_device_info_phys_logical(struct pqi_ctrl_info *ctrl_info, ··· 2465 2499 if (!pqi_is_supported_device(device)) 2466 2500 continue; 2467 2501 2468 - /* Do not present disks that the OS cannot fully probe */ 2469 - if (pqi_keep_device_offline(ctrl_info, device)) 2470 - continue; 2471 - 2472 2502 /* Gather information about the device. */ 2473 2503 rc = pqi_get_device_info(ctrl_info, device, id_phys); 2474 2504 if (rc == -ENOMEM) { ··· 2486 2524 rc = 0; 2487 2525 continue; 2488 2526 } 2527 + 2528 + /* Do not present disks that the OS cannot fully probe. 
*/ 2529 + if (pqi_keep_device_offline(device)) 2530 + continue; 2489 2531 2490 2532 pqi_assign_bus_target_lun(device); 2491 2533 ··· 5470 5504 pqi_scsi_done(scmd); 5471 5505 } 5472 5506 5473 - static int pqi_raid_submit_scsi_cmd_with_io_request( 5474 - struct pqi_ctrl_info *ctrl_info, struct pqi_io_request *io_request, 5507 + static int pqi_raid_submit_io(struct pqi_ctrl_info *ctrl_info, 5475 5508 struct pqi_scsi_dev *device, struct scsi_cmnd *scmd, 5476 - struct pqi_queue_group *queue_group) 5509 + struct pqi_queue_group *queue_group, bool io_high_prio) 5477 5510 { 5478 5511 int rc; 5479 5512 size_t cdb_length; 5513 + struct pqi_io_request *io_request; 5480 5514 struct pqi_raid_path_request *request; 5515 + 5516 + io_request = pqi_alloc_io_request(ctrl_info, scmd); 5517 + if (!io_request) 5518 + return SCSI_MLQUEUE_HOST_BUSY; 5481 5519 5482 5520 io_request->io_complete_callback = pqi_raid_io_complete; 5483 5521 io_request->scmd = scmd; ··· 5492 5522 request->header.iu_type = PQI_REQUEST_IU_RAID_PATH_IO; 5493 5523 put_unaligned_le32(scsi_bufflen(scmd), &request->buffer_length); 5494 5524 request->task_attribute = SOP_TASK_ATTRIBUTE_SIMPLE; 5525 + request->command_priority = io_high_prio; 5495 5526 put_unaligned_le16(io_request->index, &request->request_id); 5496 5527 request->error_index = request->request_id; 5497 5528 memcpy(request->lun_number, device->scsi3addr, sizeof(request->lun_number)); ··· 5558 5587 struct pqi_scsi_dev *device, struct scsi_cmnd *scmd, 5559 5588 struct pqi_queue_group *queue_group) 5560 5589 { 5561 - struct pqi_io_request *io_request; 5590 + bool io_high_prio; 5562 5591 5563 - io_request = pqi_alloc_io_request(ctrl_info, scmd); 5564 - if (!io_request) 5565 - return SCSI_MLQUEUE_HOST_BUSY; 5592 + io_high_prio = pqi_is_io_high_priority(device, scmd); 5566 5593 5567 - return pqi_raid_submit_scsi_cmd_with_io_request(ctrl_info, io_request, 5568 - device, scmd, queue_group); 5594 + return pqi_raid_submit_io(ctrl_info, device, scmd, queue_group, 
io_high_prio); 5569 5595 } 5570 5596 5571 5597 static bool pqi_raid_bypass_retry_needed(struct pqi_io_request *io_request) ··· 5607 5639 pqi_scsi_done(scmd); 5608 5640 } 5609 5641 5610 - static inline bool pqi_is_io_high_priority(struct pqi_ctrl_info *ctrl_info, 5611 - struct pqi_scsi_dev *device, struct scsi_cmnd *scmd) 5612 - { 5613 - bool io_high_prio; 5614 - int priority_class; 5615 - 5616 - io_high_prio = false; 5617 - 5618 - if (device->ncq_prio_enable) { 5619 - priority_class = 5620 - IOPRIO_PRIO_CLASS(req_get_ioprio(scsi_cmd_to_rq(scmd))); 5621 - if (priority_class == IOPRIO_CLASS_RT) { 5622 - /* Set NCQ priority for read/write commands. */ 5623 - switch (scmd->cmnd[0]) { 5624 - case WRITE_16: 5625 - case READ_16: 5626 - case WRITE_12: 5627 - case READ_12: 5628 - case WRITE_10: 5629 - case READ_10: 5630 - case WRITE_6: 5631 - case READ_6: 5632 - io_high_prio = true; 5633 - break; 5634 - } 5635 - } 5636 - } 5637 - 5638 - return io_high_prio; 5639 - } 5640 - 5641 5642 static inline int pqi_aio_submit_scsi_cmd(struct pqi_ctrl_info *ctrl_info, 5642 5643 struct pqi_scsi_dev *device, struct scsi_cmnd *scmd, 5643 5644 struct pqi_queue_group *queue_group) 5644 5645 { 5645 5646 bool io_high_prio; 5646 5647 5647 - io_high_prio = pqi_is_io_high_priority(ctrl_info, device, scmd); 5648 + io_high_prio = pqi_is_io_high_priority(device, scmd); 5648 5649 5649 5650 return pqi_aio_submit_io(ctrl_info, scmd, device->aio_handle, 5650 5651 scmd->cmnd, scmd->cmd_len, queue_group, NULL, ··· 5631 5694 struct pqi_aio_path_request *request; 5632 5695 struct pqi_scsi_dev *device; 5633 5696 5634 - device = scmd->device->hostdata; 5635 5697 io_request = pqi_alloc_io_request(ctrl_info, scmd); 5636 5698 if (!io_request) 5637 5699 return SCSI_MLQUEUE_HOST_BUSY; 5700 + 5638 5701 io_request->io_complete_callback = pqi_aio_io_complete; 5639 5702 io_request->scmd = scmd; 5640 5703 io_request->raid_bypass = raid_bypass; ··· 5649 5712 request->command_priority = io_high_prio; 5650 5713 
put_unaligned_le16(io_request->index, &request->request_id); 5651 5714 request->error_index = request->request_id; 5715 + device = scmd->device->hostdata; 5652 5716 if (!pqi_is_logical_device(device) && ctrl_info->multi_lun_device_supported) 5653 5717 put_unaligned_le64(((scmd->device->lun) << 8), &request->lun_number); 5654 5718 if (cdb_length > sizeof(request->cdb)) ··· 5990 6052 rc = pqi_raid_bypass_submit_scsi_cmd(ctrl_info, device, scmd, queue_group); 5991 6053 if (rc == 0 || rc == SCSI_MLQUEUE_HOST_BUSY) { 5992 6054 raid_bypassed = true; 5993 - atomic_inc(&device->raid_bypass_cnt); 6055 + device->raid_bypass_cnt++; 5994 6056 } 5995 6057 } 5996 6058 if (!raid_bypassed) ··· 6841 6903 char *action_name; 6842 6904 char action_name_buffer[32]; 6843 6905 6844 - strlcpy(action_name_buffer, buffer, sizeof(action_name_buffer)); 6906 + strscpy(action_name_buffer, buffer, sizeof(action_name_buffer)); 6845 6907 action_name = strstrip(action_name_buffer); 6846 6908 6847 6909 for (i = 0; i < ARRAY_SIZE(pqi_lockup_actions); i++) { ··· 7226 7288 struct scsi_device *sdev; 7227 7289 struct pqi_scsi_dev *device; 7228 7290 unsigned long flags; 7229 - int raid_bypass_cnt; 7291 + unsigned int raid_bypass_cnt; 7230 7292 7231 7293 sdev = to_scsi_device(dev); 7232 7294 ctrl_info = shost_to_hba(sdev->host); ··· 7242 7304 return -ENODEV; 7243 7305 } 7244 7306 7245 - raid_bypass_cnt = atomic_read(&device->raid_bypass_cnt); 7307 + raid_bypass_cnt = device->raid_bypass_cnt; 7246 7308 7247 7309 spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); 7248 7310 ··· 7304 7366 return -ENODEV; 7305 7367 } 7306 7368 7307 - if (!device->ncq_prio_support || 7308 - !device->is_physical_device) { 7369 + if (!device->ncq_prio_support) { 7309 7370 spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags); 7310 7371 return -EINVAL; 7311 7372 } ··· 7316 7379 return strlen(buf); 7317 7380 } 7318 7381 7382 + static ssize_t pqi_numa_node_show(struct device *dev, 7383 + struct 
device_attribute *attr, char *buffer) 7384 + { 7385 + struct scsi_device *sdev; 7386 + struct pqi_ctrl_info *ctrl_info; 7387 + 7388 + sdev = to_scsi_device(dev); 7389 + ctrl_info = shost_to_hba(sdev->host); 7390 + 7391 + return scnprintf(buffer, PAGE_SIZE, "%d\n", ctrl_info->numa_node); 7392 + } 7393 + 7319 7394 static DEVICE_ATTR(lunid, 0444, pqi_lunid_show, NULL); 7320 7395 static DEVICE_ATTR(unique_id, 0444, pqi_unique_id_show, NULL); 7321 7396 static DEVICE_ATTR(path_info, 0444, pqi_path_info_show, NULL); ··· 7337 7388 static DEVICE_ATTR(raid_bypass_cnt, 0444, pqi_raid_bypass_cnt_show, NULL); 7338 7389 static DEVICE_ATTR(sas_ncq_prio_enable, 0644, 7339 7390 pqi_sas_ncq_prio_enable_show, pqi_sas_ncq_prio_enable_store); 7391 + static DEVICE_ATTR(numa_node, 0444, pqi_numa_node_show, NULL); 7340 7392 7341 7393 static struct attribute *pqi_sdev_attrs[] = { 7342 7394 &dev_attr_lunid.attr, ··· 7348 7398 &dev_attr_raid_level.attr, 7349 7399 &dev_attr_raid_bypass_cnt.attr, 7350 7400 &dev_attr_sas_ncq_prio_enable.attr, 7401 + &dev_attr_numa_node.attr, 7351 7402 NULL 7352 7403 }; 7353 7404 ··· 7667 7716 features_requested_iomem_addr + 7668 7717 (le16_to_cpu(firmware_features->num_elements) * 2) + 7669 7718 sizeof(__le16); 7670 - writew(PQI_FIRMWARE_FEATURE_MAXIMUM, 7671 - host_max_known_feature_iomem_addr); 7719 + writeb(PQI_FIRMWARE_FEATURE_MAXIMUM & 0xFF, host_max_known_feature_iomem_addr); 7720 + writeb((PQI_FIRMWARE_FEATURE_MAXIMUM & 0xFF00) >> 8, host_max_known_feature_iomem_addr + 1); 7672 7721 } 7673 7722 7674 7723 return pqi_config_table_update(ctrl_info, ··· 8511 8560 8512 8561 ctrl_info->iomem_base = ioremap(pci_resource_start( 8513 8562 ctrl_info->pci_dev, 0), 8514 - sizeof(struct pqi_ctrl_registers)); 8563 + pci_resource_len(ctrl_info->pci_dev, 0)); 8515 8564 if (!ctrl_info->iomem_base) { 8516 8565 dev_err(&ctrl_info->pci_dev->dev, 8517 8566 "failed to map memory for controller registers\n"); ··· 8969 9018 "failed to allocate controller info block\n"); 8970 
9019 return -ENOMEM; 8971 9020 } 9021 + ctrl_info->numa_node = node; 8972 9022 8973 9023 ctrl_info->pci_dev = pci_dev; 8974 9024 ··· 9881 9929 }, 9882 9930 { 9883 9931 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9932 + 0x1cf2, 0x0804) 9933 + }, 9934 + { 9935 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9936 + 0x1cf2, 0x0805) 9937 + }, 9938 + { 9939 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9940 + 0x1cf2, 0x0806) 9941 + }, 9942 + { 9943 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9884 9944 0x1cf2, 0x5445) 9885 9945 }, 9886 9946 { ··· 9926 9962 { 9927 9963 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9928 9964 0x1cf2, 0x544f) 9965 + }, 9966 + { 9967 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9968 + 0x1cf2, 0x54da) 9969 + }, 9970 + { 9971 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9972 + 0x1cf2, 0x54db) 9973 + }, 9974 + { 9975 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9976 + 0x1cf2, 0x54dc) 9929 9977 }, 9930 9978 { 9931 9979 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, ··· 9993 10017 }, 9994 10018 { 9995 10019 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10020 + 0x1014, 0x0718) 10021 + }, 10022 + { 10023 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 9996 10024 0x1e93, 0x1000) 9997 10025 }, 9998 10026 { ··· 10006 10026 { 10007 10027 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10008 10028 0x1e93, 0x1002) 10029 + }, 10030 + { 10031 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10032 + 0x1e93, 0x1005) 10033 + }, 10034 + { 10035 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10036 + 0x1f51, 0x1001) 10037 + }, 10038 + { 10039 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10040 + 0x1f51, 0x1002) 10041 + }, 10042 + { 10043 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10044 + 0x1f51, 0x1003) 10045 + }, 10046 + { 10047 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10048 + 0x1f51, 0x1004) 10049 + }, 10050 + { 10051 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10052 + 0x1f51, 0x1005) 10053 + }, 10054 + { 10055 + 
PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10056 + 0x1f51, 0x1006) 10057 + }, 10058 + { 10059 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10060 + 0x1f51, 0x1007) 10061 + }, 10062 + { 10063 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10064 + 0x1f51, 0x1008) 10065 + }, 10066 + { 10067 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10068 + 0x1f51, 0x1009) 10069 + }, 10070 + { 10071 + PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f, 10072 + 0x1f51, 0x100a) 10009 10073 }, 10010 10074 { 10011 10075 PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x028f,
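One subtle smartpqi change above: the host's maximum known firmware feature number used to be stored with a single writew(), but the field sits at an arbitrary byte offset in the firmware features table, so it is now written as two writeb() calls, low byte first. A minimal userspace sketch of that little-endian byte split (the function and buffer names here are illustrative stand-ins, not the driver's; a plain byte array plays the role of the iomem region):

```c
#include <stdint.h>
#include <assert.h>

/* Store a 16-bit value as two single-byte writes, low byte first,
 * mirroring the writeb() pair the driver now uses. Byte stores are
 * safe at any offset, including odd (unaligned) addresses. */
static void pqi_set_max_known_feature(uint8_t *buf, uint16_t max_feature)
{
	buf[0] = max_feature & 0xFF;          /* low byte  */
	buf[1] = (max_feature & 0xFF00) >> 8; /* high byte */
}

/* Reassemble the value as a little-endian reader (the firmware) would. */
static uint16_t get_le16(const uint8_t *buf)
{
	return (uint16_t)(buf[0] | (buf[1] << 8));
}
```

The round trip through get_le16() shows the two byte stores are equivalent to the old aligned 16-bit write.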
+17 -17
drivers/scsi/smartpqi/smartpqi_sas_transport.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * driver for Microchip PQI-based storage controllers 4 - * Copyright (c) 2019-2022 Microchip Technology Inc. and its subsidiaries 4 + * Copyright (c) 2019-2023 Microchip Technology Inc. and its subsidiaries 5 5 * Copyright (c) 2016-2018 Microsemi Corporation 6 6 * Copyright (c) 2016 PMC-Sierra, Inc. 7 7 * ··· 92 92 93 93 identify = &rphy->identify; 94 94 identify->sas_address = pqi_sas_port->sas_address; 95 + identify->phy_identifier = pqi_sas_port->device->phy_id; 95 96 96 97 identify->initiator_port_protocols = SAS_PROTOCOL_ALL; 97 98 identify->target_port_protocols = SAS_PROTOCOL_STP; 98 99 99 - if (pqi_sas_port->device) { 100 - identify->phy_identifier = pqi_sas_port->device->phy_id; 101 - switch (pqi_sas_port->device->device_type) { 102 - case SA_DEVICE_TYPE_SAS: 103 - case SA_DEVICE_TYPE_SES: 104 - case SA_DEVICE_TYPE_NVME: 105 - identify->target_port_protocols = SAS_PROTOCOL_SSP; 106 - break; 107 - case SA_DEVICE_TYPE_EXPANDER_SMP: 108 - identify->target_port_protocols = SAS_PROTOCOL_SMP; 109 - break; 110 - case SA_DEVICE_TYPE_SATA: 111 - default: 112 - break; 113 - } 100 + switch (pqi_sas_port->device->device_type) { 101 + case SA_DEVICE_TYPE_SAS: 102 + case SA_DEVICE_TYPE_SES: 103 + case SA_DEVICE_TYPE_NVME: 104 + identify->target_port_protocols = SAS_PROTOCOL_SSP; 105 + break; 106 + case SA_DEVICE_TYPE_EXPANDER_SMP: 107 + identify->target_port_protocols = SAS_PROTOCOL_SMP; 108 + break; 109 + case SA_DEVICE_TYPE_SATA: 110 + default: 111 + break; 114 112 } 115 113 116 114 return sas_rphy_add(rphy); ··· 293 295 294 296 rc = pqi_sas_port_add_rphy(pqi_sas_port, rphy); 295 297 if (rc) 296 - goto free_sas_port; 298 + goto free_sas_rphy; 297 299 298 300 return 0; 299 301 302 + free_sas_rphy: 303 + sas_rphy_free(rphy); 300 304 free_sas_port: 301 305 pqi_free_sas_port(pqi_sas_port); 302 306 device->sas_port = NULL;
+1 -1
drivers/scsi/smartpqi/smartpqi_sis.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * driver for Microchip PQI-based storage controllers 4 - * Copyright (c) 2019-2022 Microchip Technology Inc. and its subsidiaries 4 + * Copyright (c) 2019-2023 Microchip Technology Inc. and its subsidiaries 5 5 * Copyright (c) 2016-2018 Microsemi Corporation 6 6 * Copyright (c) 2016 PMC-Sierra, Inc. 7 7 *
+1 -1
drivers/scsi/smartpqi/smartpqi_sis.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * driver for Microchip PQI-based storage controllers 4 - * Copyright (c) 2019-2022 Microchip Technology Inc. and its subsidiaries 4 + * Copyright (c) 2019-2023 Microchip Technology Inc. and its subsidiaries 5 5 * Copyright (c) 2016-2018 Microsemi Corporation 6 6 * Copyright (c) 2016 PMC-Sierra, Inc. 7 7 *
+1 -1
drivers/scsi/snic/snic_disc.c
··· 214 214 scsi_flush_work(shost); 215 215 216 216 /* Block IOs on child devices, stops new IOs */ 217 - scsi_target_block(&tgt->dev); 217 + scsi_block_targets(shost, &tgt->dev); 218 218 219 219 /* Cleanup IOs */ 220 220 snic_tgt_scsi_abort_io(tgt);
+1 -1
drivers/scsi/sr.c
··· 825 825 scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES, &sshdr); 826 826 827 827 /* ask for mode page 0x2a */ 828 - rc = scsi_mode_sense(cd->device, 0, 0x2a, buffer, ms_len, 828 + rc = scsi_mode_sense(cd->device, 0, 0x2a, 0, buffer, ms_len, 829 829 SR_TIMEOUT, 3, &data, NULL); 830 830 831 831 if (rc < 0 || data.length > ms_len ||
+1 -1
drivers/scsi/sym53c8xx_2/sym_glue.c
··· 1286 1286 /* 1287 1287 * Edit its name. 1288 1288 */ 1289 - strlcpy(np->s.chip_name, dev->chip.name, sizeof(np->s.chip_name)); 1289 + strscpy(np->s.chip_name, dev->chip.name, sizeof(np->s.chip_name)); 1290 1290 sprintf(np->s.inst_name, "sym%d", np->s.unit); 1291 1291 1292 1292 if ((SYM_CONF_DMA_ADDRESSING_MODE > 0) && (np->features & FE_DAC) &&
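Several hunks in this pull (sym53c8xx above, the iscsi/target files further down) convert strlcpy() to strscpy(). The practical differences: strlcpy() returns strlen(src), so it must walk the entire source string even when truncating, while strscpy() returns the number of bytes copied or -E2BIG on truncation, and always NUL-terminates for a nonzero destination size. A simplified userspace model of the strscpy() contract (not the kernel's implementation; the -E2BIG value is replaced by a local sentinel):

```c
#include <stddef.h>
#include <string.h>
#include <assert.h>

#define E2BIG_SENTINEL (-7L) /* stand-in for the kernel's -E2BIG */

/* Copy at most size-1 bytes, always NUL-terminate, and return the
 * number of bytes copied, or a negative value when src was truncated.
 * Never reads src beyond the first size bytes. */
static long my_strscpy(char *dst, const char *src, size_t size)
{
	size_t i;

	if (size == 0)
		return E2BIG_SENTINEL;

	for (i = 0; i < size - 1 && src[i] != '\0'; i++)
		dst[i] = src[i];
	dst[i] = '\0';

	return src[i] == '\0' ? (long)i : E2BIG_SENTINEL;
}
```

The bounded read is why these conversions matter for sources that may not be NUL-terminated within the destination size.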
+1 -3
drivers/scsi/virtio_scsi.c
··· 338 338 int result, inquiry_len, inq_result_len = 256; 339 339 char *inq_result = kmalloc(inq_result_len, GFP_KERNEL); 340 340 341 - if (!inq_result) { 342 - kfree(inq_result); 341 + if (!inq_result) 343 342 return -ENOMEM; 344 - } 345 343 346 344 shost_for_each_device(sdev, shost) { 347 345 inquiry_len = sdev->inquiry_len ? sdev->inquiry_len : 36;
+2 -2
drivers/target/iscsi/iscsi_target_parameters.c
··· 726 726 } 727 727 INIT_LIST_HEAD(&extra_response->er_list); 728 728 729 - strlcpy(extra_response->key, key, sizeof(extra_response->key)); 730 - strlcpy(extra_response->value, NOTUNDERSTOOD, 729 + strscpy(extra_response->key, key, sizeof(extra_response->key)); 730 + strscpy(extra_response->value, NOTUNDERSTOOD, 731 731 sizeof(extra_response->value)); 732 732 733 733 list_add_tail(&extra_response->er_list,
+2 -2
drivers/target/iscsi/iscsi_target_util.c
··· 1375 1375 if (conn->param_list) 1376 1376 intrname = iscsi_find_param_from_key(INITIATORNAME, 1377 1377 conn->param_list); 1378 - strlcpy(ls->last_intr_fail_name, 1378 + strscpy(ls->last_intr_fail_name, 1379 1379 (intrname ? intrname->value : "Unknown"), 1380 1380 sizeof(ls->last_intr_fail_name)); 1381 1381 ··· 1414 1414 return; 1415 1415 1416 1416 spin_lock_bh(&tiqn->sess_err_stats.lock); 1417 - strlcpy(tiqn->sess_err_stats.last_sess_fail_rem_name, 1417 + strscpy(tiqn->sess_err_stats.last_sess_fail_rem_name, 1418 1418 sess->sess_ops->InitiatorName, 1419 1419 sizeof(tiqn->sess_err_stats.last_sess_fail_rem_name)); 1420 1420 tiqn->sess_err_stats.last_sess_failure_type =
+5 -5
drivers/target/target_core_configfs.c
··· 649 649 * here without potentially breaking existing setups, so continue to 650 650 * truncate one byte shorter than what can be carried in INQUIRY. 651 651 */ 652 - strlcpy(dev->t10_wwn.model, configname, INQUIRY_MODEL_LEN); 652 + strscpy(dev->t10_wwn.model, configname, INQUIRY_MODEL_LEN); 653 653 } 654 654 655 655 static ssize_t emulate_model_alias_store(struct config_item *item, ··· 675 675 if (flag) { 676 676 dev_set_t10_wwn_model_alias(dev); 677 677 } else { 678 - strlcpy(dev->t10_wwn.model, dev->transport->inquiry_prod, 678 + strscpy(dev->t10_wwn.model, dev->transport->inquiry_prod, 679 679 sizeof(dev->t10_wwn.model)); 680 680 } 681 681 da->emulate_model_alias = flag; ··· 1426 1426 } 1427 1427 1428 1428 BUILD_BUG_ON(sizeof(dev->t10_wwn.vendor) != INQUIRY_VENDOR_LEN + 1); 1429 - strlcpy(dev->t10_wwn.vendor, stripped, sizeof(dev->t10_wwn.vendor)); 1429 + strscpy(dev->t10_wwn.vendor, stripped, sizeof(dev->t10_wwn.vendor)); 1430 1430 1431 1431 pr_debug("Target_Core_ConfigFS: Set emulated T10 Vendor Identification:" 1432 1432 " %s\n", dev->t10_wwn.vendor); ··· 1482 1482 } 1483 1483 1484 1484 BUILD_BUG_ON(sizeof(dev->t10_wwn.model) != INQUIRY_MODEL_LEN + 1); 1485 - strlcpy(dev->t10_wwn.model, stripped, sizeof(dev->t10_wwn.model)); 1485 + strscpy(dev->t10_wwn.model, stripped, sizeof(dev->t10_wwn.model)); 1486 1486 1487 1487 pr_debug("Target_Core_ConfigFS: Set emulated T10 Model Identification: %s\n", 1488 1488 dev->t10_wwn.model); ··· 1538 1538 } 1539 1539 1540 1540 BUILD_BUG_ON(sizeof(dev->t10_wwn.revision) != INQUIRY_REVISION_LEN + 1); 1541 - strlcpy(dev->t10_wwn.revision, stripped, sizeof(dev->t10_wwn.revision)); 1541 + strscpy(dev->t10_wwn.revision, stripped, sizeof(dev->t10_wwn.revision)); 1542 1542 1543 1543 pr_debug("Target_Core_ConfigFS: Set emulated T10 Revision: %s\n", 1544 1544 dev->t10_wwn.revision);
+3 -3
drivers/target/target_core_device.c
··· 789 789 xcopy_lun->lun_tpg = &xcopy_pt_tpg; 790 790 791 791 /* Preload the default INQUIRY const values */ 792 - strlcpy(dev->t10_wwn.vendor, "LIO-ORG", sizeof(dev->t10_wwn.vendor)); 793 - strlcpy(dev->t10_wwn.model, dev->transport->inquiry_prod, 792 + strscpy(dev->t10_wwn.vendor, "LIO-ORG", sizeof(dev->t10_wwn.vendor)); 793 + strscpy(dev->t10_wwn.model, dev->transport->inquiry_prod, 794 794 sizeof(dev->t10_wwn.model)); 795 - strlcpy(dev->t10_wwn.revision, dev->transport->inquiry_rev, 795 + strscpy(dev->t10_wwn.revision, dev->transport->inquiry_rev, 796 796 sizeof(dev->t10_wwn.revision)); 797 797 798 798 return dev;
+2 -2
drivers/target/target_core_file.c
··· 896 896 fd_dev->fd_prot_file = NULL; 897 897 } 898 898 899 - static struct sbc_ops fd_sbc_ops = { 899 + static struct exec_cmd_ops fd_exec_cmd_ops = { 900 900 .execute_rw = fd_execute_rw, 901 901 .execute_sync_cache = fd_execute_sync_cache, 902 902 .execute_write_same = fd_execute_write_same, ··· 906 906 static sense_reason_t 907 907 fd_parse_cdb(struct se_cmd *cmd) 908 908 { 909 - return sbc_parse_cdb(cmd, &fd_sbc_ops); 909 + return sbc_parse_cdb(cmd, &fd_exec_cmd_ops); 910 910 } 911 911 912 912 static const struct target_backend_ops fileio_ops = {
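The sbc_ops to exec_cmd_ops rename reflects that the table now carries more than SBC read/write hooks: the new execute_pr_out/execute_pr_in members are optional, and backends like fileio simply leave them NULL. A toy model of that pattern (all names here are illustrative, not the kernel's; the error value stands in for TCM_UNSUPPORTED_SCSI_OPCODE):

```c
#include <stddef.h>
#include <assert.h>

/* A backend publishes a table of function pointers; optional operations
 * stay NULL and the caller must check before dispatching. */
struct exec_cmd_ops_sketch {
	int (*execute_rw)(int lba);
	int (*execute_pr_out)(int sa); /* optional: PR passthrough */
};

static int demo_rw(int lba)
{
	return lba >= 0 ? 0 : -1;
}

/* fileio-like backend: supports I/O but not PR passthrough. */
static const struct exec_cmd_ops_sketch fd_ops_sketch = {
	.execute_rw = demo_rw,
};

/* Dispatch helper: reject the command if the backend lacks the hook. */
static int submit_pr_out(const struct exec_cmd_ops_sketch *ops, int sa)
{
	if (!ops->execute_pr_out)
		return -95; /* "unsupported opcode" stand-in */
	return ops->execute_pr_out(sa);
}
```

This is the same NULL-check-then-call shape target_try_pr_out_pt() uses in target_core_pr.c.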
+268 -7
drivers/target/target_core_iblock.c
··· 23 23 #include <linux/file.h> 24 24 #include <linux/module.h> 25 25 #include <linux/scatterlist.h> 26 + #include <linux/pr.h> 26 27 #include <scsi/scsi_proto.h> 28 + #include <scsi/scsi_common.h> 27 29 #include <asm/unaligned.h> 28 30 29 31 #include <target/target_core_base.h> 30 32 #include <target/target_core_backend.h> 31 33 32 34 #include "target_core_iblock.h" 35 + #include "target_core_pr.h" 33 36 34 37 #define IBLOCK_MAX_BIO_PER_TASK 32 /* max # of bios to submit at a time */ 35 38 #define IBLOCK_BIO_POOL_SIZE 128 ··· 312 309 return blocks_long; 313 310 } 314 311 315 - static void iblock_complete_cmd(struct se_cmd *cmd) 312 + static void iblock_complete_cmd(struct se_cmd *cmd, blk_status_t blk_status) 316 313 { 317 314 struct iblock_req *ibr = cmd->priv; 318 315 u8 status; ··· 320 317 if (!refcount_dec_and_test(&ibr->pending)) 321 318 return; 322 319 323 - if (atomic_read(&ibr->ib_bio_err_cnt)) 320 + if (blk_status == BLK_STS_RESV_CONFLICT) 321 + status = SAM_STAT_RESERVATION_CONFLICT; 322 + else if (atomic_read(&ibr->ib_bio_err_cnt)) 324 323 status = SAM_STAT_CHECK_CONDITION; 325 324 else 326 325 status = SAM_STAT_GOOD; ··· 335 330 { 336 331 struct se_cmd *cmd = bio->bi_private; 337 332 struct iblock_req *ibr = cmd->priv; 333 + blk_status_t blk_status = bio->bi_status; 338 334 339 335 if (bio->bi_status) { 340 336 pr_err("bio error: %p, err: %d\n", bio, bio->bi_status); ··· 348 342 349 343 bio_put(bio); 350 344 351 - iblock_complete_cmd(cmd); 345 + iblock_complete_cmd(cmd, blk_status); 352 346 } 353 347 354 348 static struct bio *iblock_get_bio(struct se_cmd *cmd, sector_t lba, u32 sg_num, ··· 764 758 765 759 if (!sgl_nents) { 766 760 refcount_set(&ibr->pending, 1); 767 - iblock_complete_cmd(cmd); 761 + iblock_complete_cmd(cmd, BLK_STS_OK); 768 762 return 0; 769 763 } 770 764 ··· 822 816 } 823 817 824 818 iblock_submit_bios(&list); 825 - iblock_complete_cmd(cmd); 819 + iblock_complete_cmd(cmd, BLK_STS_OK); 826 820 return 0; 827 821 828 822 
fail_put_bios: ··· 832 826 kfree(ibr); 833 827 fail: 834 828 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 829 + } 830 + 831 + static sense_reason_t iblock_execute_pr_out(struct se_cmd *cmd, u8 sa, u64 key, 832 + u64 sa_key, u8 type, bool aptpl) 833 + { 834 + struct se_device *dev = cmd->se_dev; 835 + struct iblock_dev *ib_dev = IBLOCK_DEV(dev); 836 + struct block_device *bdev = ib_dev->ibd_bd; 837 + const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops; 838 + int ret; 839 + 840 + if (!ops) { 841 + pr_err("Block device does not support pr_ops but iblock device has been configured for PR passthrough.\n"); 842 + return TCM_UNSUPPORTED_SCSI_OPCODE; 843 + } 844 + 845 + switch (sa) { 846 + case PRO_REGISTER: 847 + case PRO_REGISTER_AND_IGNORE_EXISTING_KEY: 848 + if (!ops->pr_register) { 849 + pr_err("block device does not support pr_register.\n"); 850 + return TCM_UNSUPPORTED_SCSI_OPCODE; 851 + } 852 + 853 + /* The block layer pr ops always enables aptpl */ 854 + if (!aptpl) 855 + pr_info("APTPL not set by initiator, but will be used.\n"); 856 + 857 + ret = ops->pr_register(bdev, key, sa_key, 858 + sa == PRO_REGISTER ? 0 : PR_FL_IGNORE_KEY); 859 + break; 860 + case PRO_RESERVE: 861 + if (!ops->pr_reserve) { 862 + pr_err("block_device does not support pr_reserve.\n"); 863 + return TCM_UNSUPPORTED_SCSI_OPCODE; 864 + } 865 + 866 + ret = ops->pr_reserve(bdev, key, scsi_pr_type_to_block(type), 0); 867 + break; 868 + case PRO_CLEAR: 869 + if (!ops->pr_clear) { 870 + pr_err("block_device does not support pr_clear.\n"); 871 + return TCM_UNSUPPORTED_SCSI_OPCODE; 872 + } 873 + 874 + ret = ops->pr_clear(bdev, key); 875 + break; 876 + case PRO_PREEMPT: 877 + case PRO_PREEMPT_AND_ABORT: 878 + if (!ops->pr_preempt) { 879 + pr_err("block_device does not support pr_preempt.\n"); 880 + return TCM_UNSUPPORTED_SCSI_OPCODE; 881 + } 882 + 883 + ret = ops->pr_preempt(bdev, key, sa_key, 884 + scsi_pr_type_to_block(type), 885 + sa == PRO_PREEMPT ? 
false : true); 886 + break; 887 + case PRO_RELEASE: 888 + if (!ops->pr_release) { 889 + pr_err("block_device does not support pr_release.\n"); 890 + return TCM_UNSUPPORTED_SCSI_OPCODE; 891 + } 892 + 893 + ret = ops->pr_release(bdev, key, scsi_pr_type_to_block(type)); 894 + break; 895 + default: 896 + pr_err("Unknown PERSISTENT_RESERVE_OUT SA: 0x%02x\n", sa); 897 + return TCM_UNSUPPORTED_SCSI_OPCODE; 898 + } 899 + 900 + if (!ret) 901 + return TCM_NO_SENSE; 902 + else if (ret == PR_STS_RESERVATION_CONFLICT) 903 + return TCM_RESERVATION_CONFLICT; 904 + else 905 + return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 906 + } 907 + 908 + static void iblock_pr_report_caps(unsigned char *param_data) 909 + { 910 + u16 len = 8; 911 + 912 + put_unaligned_be16(len, &param_data[0]); 913 + /* 914 + * When using the pr_ops passthrough method we only support exporting 915 + * the device through one target port because from the backend module 916 + * level we can't see the target port config. As a result we only 917 + * support registration directly from the I_T nexus the cmd is sent 918 + * through and do not set ATP_C here. 919 + * 920 + * The block layer pr_ops do not support passing in initiators so 921 + * we don't set SIP_C here. 922 + */ 923 + /* PTPL_C: Persistence across Target Power Loss bit */ 924 + param_data[2] |= 0x01; 925 + /* 926 + * We are filling in the PERSISTENT RESERVATION TYPE MASK below, so 927 + * set the TMV: Task Mask Valid bit. 928 + */ 929 + param_data[3] |= 0x80; 930 + /* 931 + * Change ALLOW COMMANDs to 0x20 or 0x40 later from Table 166 932 + */ 933 + param_data[3] |= 0x10; /* ALLOW COMMANDs field 001b */ 934 + /* 935 + * PTPL_A: Persistence across Target Power Loss Active bit. The block 936 + * layer pr ops always enables this so report it active. 937 + */ 938 + param_data[3] |= 0x01; 939 + /* 940 + * Setup the PERSISTENT RESERVATION TYPE MASK from Table 212 spc4r37. 
941 + */ 942 + param_data[4] |= 0x80; /* PR_TYPE_EXCLUSIVE_ACCESS_ALLREG */ 943 + param_data[4] |= 0x40; /* PR_TYPE_EXCLUSIVE_ACCESS_REGONLY */ 944 + param_data[4] |= 0x20; /* PR_TYPE_WRITE_EXCLUSIVE_REGONLY */ 945 + param_data[4] |= 0x08; /* PR_TYPE_EXCLUSIVE_ACCESS */ 946 + param_data[4] |= 0x02; /* PR_TYPE_WRITE_EXCLUSIVE */ 947 + param_data[5] |= 0x01; /* PR_TYPE_EXCLUSIVE_ACCESS_ALLREG */ 948 + } 949 + 950 + static sense_reason_t iblock_pr_read_keys(struct se_cmd *cmd, 951 + unsigned char *param_data) 952 + { 953 + struct se_device *dev = cmd->se_dev; 954 + struct iblock_dev *ib_dev = IBLOCK_DEV(dev); 955 + struct block_device *bdev = ib_dev->ibd_bd; 956 + const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops; 957 + int i, len, paths, data_offset; 958 + struct pr_keys *keys; 959 + sense_reason_t ret; 960 + 961 + if (!ops) { 962 + pr_err("Block device does not support pr_ops but iblock device has been configured for PR passthrough.\n"); 963 + return TCM_UNSUPPORTED_SCSI_OPCODE; 964 + } 965 + 966 + if (!ops->pr_read_keys) { 967 + pr_err("Block device does not support read_keys.\n"); 968 + return TCM_UNSUPPORTED_SCSI_OPCODE; 969 + } 970 + 971 + /* 972 + * We don't know what's under us, but dm-multipath will register every 973 + * path with the same key, so start off with enough space for 16 paths, 974 + * which is not a lot of memory and should normally be enough. 
975 + */ 976 + paths = 16; 977 + retry: 978 + len = 8 * paths; 979 + keys = kzalloc(sizeof(*keys) + len, GFP_KERNEL); 980 + if (!keys) 981 + return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 982 + 983 + keys->num_keys = paths; 984 + if (!ops->pr_read_keys(bdev, keys)) { 985 + if (keys->num_keys > paths) { 986 + kfree(keys); 987 + paths *= 2; 988 + goto retry; 989 + } 990 + } else { 991 + ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 992 + goto free_keys; 993 + } 994 + 995 + ret = TCM_NO_SENSE; 996 + 997 + put_unaligned_be32(keys->generation, &param_data[0]); 998 + if (!keys->num_keys) { 999 + put_unaligned_be32(0, &param_data[4]); 1000 + goto free_keys; 1001 + } 1002 + 1003 + put_unaligned_be32(8 * keys->num_keys, &param_data[4]); 1004 + 1005 + data_offset = 8; 1006 + for (i = 0; i < keys->num_keys; i++) { 1007 + if (data_offset + 8 > cmd->data_length) 1008 + break; 1009 + 1010 + put_unaligned_be64(keys->keys[i], &param_data[data_offset]); 1011 + data_offset += 8; 1012 + } 1013 + 1014 + free_keys: 1015 + kfree(keys); 1016 + return ret; 1017 + } 1018 + 1019 + static sense_reason_t iblock_pr_read_reservation(struct se_cmd *cmd, 1020 + unsigned char *param_data) 1021 + { 1022 + struct se_device *dev = cmd->se_dev; 1023 + struct iblock_dev *ib_dev = IBLOCK_DEV(dev); 1024 + struct block_device *bdev = ib_dev->ibd_bd; 1025 + const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops; 1026 + struct pr_held_reservation rsv = { }; 1027 + 1028 + if (!ops) { 1029 + pr_err("Block device does not support pr_ops but iblock device has been configured for PR passthrough.\n"); 1030 + return TCM_UNSUPPORTED_SCSI_OPCODE; 1031 + } 1032 + 1033 + if (!ops->pr_read_reservation) { 1034 + pr_err("Block device does not support pr_read_reservation.\n"); 1035 + return TCM_UNSUPPORTED_SCSI_OPCODE; 1036 + } 1037 + 1038 + if (ops->pr_read_reservation(bdev, &rsv)) 1039 + return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 1040 + 1041 + put_unaligned_be32(rsv.generation, &param_data[0]); 1042 + if
(!block_pr_type_to_scsi(rsv.type)) { 1043 + put_unaligned_be32(0, &param_data[4]); 1044 + return TCM_NO_SENSE; 1045 + } 1046 + 1047 + put_unaligned_be32(16, &param_data[4]); 1048 + 1049 + if (cmd->data_length < 16) 1050 + return TCM_NO_SENSE; 1051 + put_unaligned_be64(rsv.key, &param_data[8]); 1052 + 1053 + if (cmd->data_length < 22) 1054 + return TCM_NO_SENSE; 1055 + param_data[21] = block_pr_type_to_scsi(rsv.type); 1056 + 1057 + return TCM_NO_SENSE; 1058 + } 1059 + 1060 + static sense_reason_t iblock_execute_pr_in(struct se_cmd *cmd, u8 sa, 1061 + unsigned char *param_data) 1062 + { 1063 + sense_reason_t ret = TCM_NO_SENSE; 1064 + 1065 + switch (sa) { 1066 + case PRI_REPORT_CAPABILITIES: 1067 + iblock_pr_report_caps(param_data); 1068 + break; 1069 + case PRI_READ_KEYS: 1070 + ret = iblock_pr_read_keys(cmd, param_data); 1071 + break; 1072 + case PRI_READ_RESERVATION: 1073 + ret = iblock_pr_read_reservation(cmd, param_data); 1074 + break; 1075 + default: 1076 + pr_err("Unknown PERSISTENT_RESERVE_IN SA: 0x%02x\n", sa); 1077 + return TCM_UNSUPPORTED_SCSI_OPCODE; 1078 + } 1079 + 1080 + return ret; 835 1081 } 836 1082 837 1083 static sector_t iblock_get_alignment_offset_lbas(struct se_device *dev) ··· 1126 868 return bdev_io_opt(bd); 1127 869 } 1128 870 1129 - static struct sbc_ops iblock_sbc_ops = { 871 + static struct exec_cmd_ops iblock_exec_cmd_ops = { 1130 872 .execute_rw = iblock_execute_rw, 1131 873 .execute_sync_cache = iblock_execute_sync_cache, 1132 874 .execute_write_same = iblock_execute_write_same, 1133 875 .execute_unmap = iblock_execute_unmap, 876 + .execute_pr_out = iblock_execute_pr_out, 877 + .execute_pr_in = iblock_execute_pr_in, 1134 878 }; 1135 879 1136 880 static sense_reason_t 1137 881 iblock_parse_cdb(struct se_cmd *cmd) 1138 882 { 1139 - return sbc_parse_cdb(cmd, &iblock_sbc_ops); 883 + return sbc_parse_cdb(cmd, &iblock_exec_cmd_ops); 1140 884 } 1141 885 1142 886 static bool iblock_get_write_cache(struct se_device *dev) ··· 1149 889 static 
const struct target_backend_ops iblock_ops = { 1150 890 .name = "iblock", 1151 891 .inquiry_prod = "IBLOCK", 892 + .transport_flags_changeable = TRANSPORT_FLAG_PASSTHROUGH_PGR, 1152 893 .inquiry_rev = IBLOCK_VERSION, 1153 894 .owner = THIS_MODULE, 1154 895 .attach_hba = iblock_attach_hba,
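The iblock_pr_read_keys() hunk above uses a guess-and-grow pattern: allocate room for 16 keys, ask the provider, and if it reports more registered keys than the buffer holds, double the guess and retry. A self-contained userspace sketch of that loop, with a hypothetical callback standing in for pr_ops->pr_read_keys and a mock provider holding 40 keys:

```c
#include <stdlib.h>
#include <assert.h>

struct keys_buf {
	int num_keys; /* in: capacity; out: keys actually registered */
	unsigned long long keys[];
};

/* Allocate, query, and grow until the provider's full key list fits. */
static struct keys_buf *read_all_keys(int (*read_keys_fn)(struct keys_buf *))
{
	int paths = 16;
	struct keys_buf *keys;

retry:
	keys = calloc(1, sizeof(*keys) + paths * sizeof(keys->keys[0]));
	if (!keys)
		return NULL;

	keys->num_keys = paths;
	if (read_keys_fn(keys)) {     /* provider error */
		free(keys);
		return NULL;
	}
	if (keys->num_keys > paths) { /* undersized: double and retry */
		free(keys);
		paths *= 2;
		goto retry;
	}
	return keys;
}

/* Mock provider: 40 registered keys; always reports the true total. */
static int mock_read_keys(struct keys_buf *k)
{
	int i, n = k->num_keys < 40 ? k->num_keys : 40;

	for (i = 0; i < n; i++)
		k->keys[i] = 0x1000 + i;
	k->num_keys = 40;
	return 0;
}
```

With 40 keys the sketch retries twice (16, then 32, then 64 slots) before the full list fits, matching the driver's behavior for a dm-multipath stack with many paths.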
+78 -1
drivers/target/target_core_pr.c
··· 3538 3538 return ret; 3539 3539 } 3540 3540 3541 + static sense_reason_t 3542 + target_try_pr_out_pt(struct se_cmd *cmd, u8 sa, u64 res_key, u64 sa_res_key, 3543 + u8 type, bool aptpl, bool all_tg_pt, bool spec_i_pt) 3544 + { 3545 + struct exec_cmd_ops *ops = cmd->protocol_data; 3546 + 3547 + if (!cmd->se_sess || !cmd->se_lun) { 3548 + pr_err("SPC-3 PR: se_sess || struct se_lun is NULL!\n"); 3549 + return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 3550 + } 3551 + 3552 + if (!ops->execute_pr_out) { 3553 + pr_err("SPC-3 PR: Device has been configured for PR passthrough but it's not supported by the backend.\n"); 3554 + return TCM_UNSUPPORTED_SCSI_OPCODE; 3555 + } 3556 + 3557 + switch (sa) { 3558 + case PRO_REGISTER_AND_MOVE: 3559 + case PRO_REPLACE_LOST_RESERVATION: 3560 + pr_err("SPC-3 PR: PRO_REGISTER_AND_MOVE and PRO_REPLACE_LOST_RESERVATION are not supported by PR passthrough.\n"); 3561 + return TCM_UNSUPPORTED_SCSI_OPCODE; 3562 + } 3563 + 3564 + if (spec_i_pt || all_tg_pt) { 3565 + pr_err("SPC-3 PR: SPEC_I_PT and ALL_TG_PT are not supported by PR passthrough.\n"); 3566 + return TCM_UNSUPPORTED_SCSI_OPCODE; 3567 + } 3568 + 3569 + return ops->execute_pr_out(cmd, sa, res_key, sa_res_key, type, aptpl); 3570 + } 3571 + 3541 3572 /* 3542 3573 * See spc4r17 section 6.14 Table 170 3543 3574 */ ··· 3672 3641 return TCM_PARAMETER_LIST_LENGTH_ERROR; 3673 3642 } 3674 3643 3644 + if (dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) { 3645 + ret = target_try_pr_out_pt(cmd, sa, res_key, sa_res_key, type, 3646 + aptpl, all_tg_pt, spec_i_pt); 3647 + goto done; 3648 + } 3649 + 3675 3650 /* 3676 3651 * (core_scsi3_emulate_pro_* function parameters 3677 3652 * are defined by spc4r17 Table 174: ··· 3719 3682 return TCM_INVALID_CDB_FIELD; 3720 3683 } 3721 3684 3685 + done: 3722 3686 if (!ret) 3723 3687 target_complete_cmd(cmd, SAM_STAT_GOOD); 3724 3688 return ret; ··· 4077 4039 return 0; 4078 4040 } 4079 4041 4042 + static sense_reason_t target_try_pr_in_pt(struct se_cmd *cmd, 
u8 sa) 4043 + { 4044 + struct exec_cmd_ops *ops = cmd->protocol_data; 4045 + unsigned char *buf; 4046 + sense_reason_t ret; 4047 + 4048 + if (cmd->data_length < 8) { 4049 + pr_err("PRIN SA SCSI Data Length: %u too small\n", 4050 + cmd->data_length); 4051 + return TCM_INVALID_CDB_FIELD; 4052 + } 4053 + 4054 + if (!ops->execute_pr_in) { 4055 + pr_err("SPC-3 PR: Device has been configured for PR passthrough but it's not supported by the backend.\n"); 4056 + return TCM_UNSUPPORTED_SCSI_OPCODE; 4057 + } 4058 + 4059 + if (sa == PRI_READ_FULL_STATUS) { 4060 + pr_err("SPC-3 PR: PRI_READ_FULL_STATUS is not supported by PR passthrough.\n"); 4061 + return TCM_UNSUPPORTED_SCSI_OPCODE; 4062 + } 4063 + 4064 + buf = transport_kmap_data_sg(cmd); 4065 + if (!buf) 4066 + return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 4067 + 4068 + ret = ops->execute_pr_in(cmd, sa, buf); 4069 + 4070 + transport_kunmap_data_sg(cmd); 4071 + return ret; 4072 + } 4073 + 4080 4074 sense_reason_t 4081 4075 target_scsi3_emulate_pr_in(struct se_cmd *cmd) 4082 4076 { 4077 + u8 sa = cmd->t_task_cdb[1] & 0x1f; 4083 4078 sense_reason_t ret; 4084 4079 4085 4080 /* ··· 4131 4060 return TCM_RESERVATION_CONFLICT; 4132 4061 } 4133 4062 4134 - switch (cmd->t_task_cdb[1] & 0x1f) { 4063 + if (cmd->se_dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) { 4064 + ret = target_try_pr_in_pt(cmd, sa); 4065 + goto done; 4066 + } 4067 + 4068 + switch (sa) { 4135 4069 case PRI_READ_KEYS: 4136 4070 ret = core_scsi3_pri_read_keys(cmd); 4137 4071 break; ··· 4155 4079 return TCM_INVALID_CDB_FIELD; 4156 4080 } 4157 4081 4082 + done: 4158 4083 if (!ret) 4159 4084 target_complete_cmd(cmd, SAM_STAT_GOOD); 4160 4085 return ret;
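Before delegating to the backend, target_try_pr_out_pt() above filters out service actions and flags the block layer pr_ops cannot express. A compact sketch of that gating logic (service action codes follow SPC-4's PERSISTENT RESERVE OUT table; the return values are illustrative stand-ins for the TCM_* sense reasons, not kernel constants):

```c
#include <assert.h>

enum {
	PRO_SK_REGISTER                         = 0x00,
	PRO_SK_RESERVE                          = 0x01,
	PRO_SK_RELEASE                          = 0x02,
	PRO_SK_CLEAR                            = 0x03,
	PRO_SK_PREEMPT                          = 0x04,
	PRO_SK_PREEMPT_AND_ABORT                = 0x05,
	PRO_SK_REGISTER_AND_IGNORE_EXISTING_KEY = 0x06,
	PRO_SK_REGISTER_AND_MOVE                = 0x07,
	PRO_SK_REPLACE_LOST_RESERVATION         = 0x08,
};

#define PT_OK          0
#define PT_UNSUPPORTED (-1)

/* Reject what pr_ops passthrough cannot do: REGISTER AND MOVE and
 * REPLACE LOST RESERVATION have no block-layer analog, and SPEC_I_PT /
 * ALL_TG_PT need target-port knowledge the backend does not have. */
static int pr_out_pt_check(int sa, int spec_i_pt, int all_tg_pt)
{
	if (sa == PRO_SK_REGISTER_AND_MOVE ||
	    sa == PRO_SK_REPLACE_LOST_RESERVATION)
		return PT_UNSUPPORTED;
	if (spec_i_pt || all_tg_pt)
		return PT_UNSUPPORTED;
	return PT_OK;
}
```

Everything that passes this check maps one-to-one onto a pr_ops callback, which is what makes the iblock passthrough in this series feasible.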
+2 -2
drivers/target/target_core_rd.c
··· 643 643 rd_release_prot_space(rd_dev); 644 644 } 645 645 646 - static struct sbc_ops rd_sbc_ops = { 646 + static struct exec_cmd_ops rd_exec_cmd_ops = { 647 647 .execute_rw = rd_execute_rw, 648 648 }; 649 649 650 650 static sense_reason_t 651 651 rd_parse_cdb(struct se_cmd *cmd) 652 652 { 653 - return sbc_parse_cdb(cmd, &rd_sbc_ops); 653 + return sbc_parse_cdb(cmd, &rd_exec_cmd_ops); 654 654 } 655 655 656 656 static const struct target_backend_ops rd_mcp_ops = {
+7 -6
drivers/target/target_core_sbc.c
··· 192 192 static sense_reason_t 193 193 sbc_execute_write_same_unmap(struct se_cmd *cmd) 194 194 { 195 - struct sbc_ops *ops = cmd->protocol_data; 195 + struct exec_cmd_ops *ops = cmd->protocol_data; 196 196 sector_t nolb = sbc_get_write_same_sectors(cmd); 197 197 sense_reason_t ret; 198 198 ··· 271 271 } 272 272 273 273 static sense_reason_t 274 - sbc_setup_write_same(struct se_cmd *cmd, unsigned char flags, struct sbc_ops *ops) 274 + sbc_setup_write_same(struct se_cmd *cmd, unsigned char flags, 275 + struct exec_cmd_ops *ops) 275 276 { 276 277 struct se_device *dev = cmd->se_dev; 277 278 sector_t end_lba = dev->transport->get_blocks(dev) + 1; ··· 341 340 static sense_reason_t 342 341 sbc_execute_rw(struct se_cmd *cmd) 343 342 { 344 - struct sbc_ops *ops = cmd->protocol_data; 343 + struct exec_cmd_ops *ops = cmd->protocol_data; 345 344 346 345 return ops->execute_rw(cmd, cmd->t_data_sg, cmd->t_data_nents, 347 346 cmd->data_direction); ··· 567 566 static sense_reason_t 568 567 sbc_compare_and_write(struct se_cmd *cmd) 569 568 { 570 - struct sbc_ops *ops = cmd->protocol_data; 569 + struct exec_cmd_ops *ops = cmd->protocol_data; 571 570 struct se_device *dev = cmd->se_dev; 572 571 sense_reason_t ret; 573 572 int rc; ··· 765 764 } 766 765 767 766 sense_reason_t 768 - sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops) 767 + sbc_parse_cdb(struct se_cmd *cmd, struct exec_cmd_ops *ops) 769 768 { 770 769 struct se_device *dev = cmd->se_dev; 771 770 unsigned char *cdb = cmd->t_task_cdb; ··· 1077 1076 static sense_reason_t 1078 1077 sbc_execute_unmap(struct se_cmd *cmd) 1079 1078 { 1080 - struct sbc_ops *ops = cmd->protocol_data; 1079 + struct exec_cmd_ops *ops = cmd->protocol_data; 1081 1080 struct se_device *dev = cmd->se_dev; 1082 1081 unsigned char *buf, *ptr = NULL; 1083 1082 sector_t lba;
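The sbc_ops → exec_cmd_ops rename keeps the same dispatch pattern: a backend fills a table of callbacks, and the core calls through `cmd->protocol_data` without knowing which backend it is. A self-contained sketch of that table-dispatch idiom (types heavily simplified, not the real `se_cmd`):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of exec_cmd_ops as used by sbc_execute_rw():
 * the backend registers callbacks, the core dispatches through them. */
struct exec_cmd_ops {
	int (*execute_rw)(void *cmd);
	int (*execute_unmap)(void *cmd);	/* optional: may stay NULL */
};

static int rd_execute_rw(void *cmd)
{
	(void)cmd;
	return 0;	/* pretend the I/O completed */
}

static const struct exec_cmd_ops rd_exec_cmd_ops = {
	.execute_rw = rd_execute_rw,
	/* .execute_unmap left NULL: tcm_is_unmap_enabled() tests for this */
};

static int dispatch_rw(const struct exec_cmd_ops *ops, void *cmd)
{
	return ops->execute_rw(cmd);
}
```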
+79 -34
drivers/target/target_core_spc.c
··· 1424 1424 .update_usage_bits = set_dpofua_usage_bits, 1425 1425 }; 1426 1426 1427 - static bool tcm_is_ws_enabled(struct se_cmd *cmd) 1427 + static bool tcm_is_ws_enabled(struct target_opcode_descriptor *descr, 1428 + struct se_cmd *cmd) 1428 1429 { 1429 - struct sbc_ops *ops = cmd->protocol_data; 1430 + struct exec_cmd_ops *ops = cmd->protocol_data; 1430 1431 struct se_device *dev = cmd->se_dev; 1431 1432 1432 1433 return (dev->dev_attrib.emulate_tpws && !!ops->execute_unmap) || ··· 1452 1451 .update_usage_bits = set_dpofua_usage_bits32, 1453 1452 }; 1454 1453 1455 - static bool tcm_is_caw_enabled(struct se_cmd *cmd) 1454 + static bool tcm_is_caw_enabled(struct target_opcode_descriptor *descr, 1455 + struct se_cmd *cmd) 1456 1456 { 1457 1457 struct se_device *dev = cmd->se_dev; 1458 1458 ··· 1493 1491 0xff, 0xff, 0x00, SCSI_CONTROL_MASK}, 1494 1492 }; 1495 1493 1496 - static bool tcm_is_rep_ref_enabled(struct se_cmd *cmd) 1494 + static bool tcm_is_rep_ref_enabled(struct target_opcode_descriptor *descr, 1495 + struct se_cmd *cmd) 1497 1496 { 1498 1497 struct se_device *dev = cmd->se_dev; 1499 1498 ··· 1505 1502 } 1506 1503 spin_unlock(&dev->t10_alua.lba_map_lock); 1507 1504 return true; 1508 - 1509 1505 } 1510 1506 1511 1507 static struct target_opcode_descriptor tcm_opcode_read_report_refferals = { ··· 1539 1537 0xff, 0xff, SCSI_GROUP_NUMBER_MASK, SCSI_CONTROL_MASK}, 1540 1538 }; 1541 1539 1542 - static bool tcm_is_unmap_enabled(struct se_cmd *cmd) 1540 + static bool tcm_is_unmap_enabled(struct target_opcode_descriptor *descr, 1541 + struct se_cmd *cmd) 1543 1542 { 1544 - struct sbc_ops *ops = cmd->protocol_data; 1543 + struct exec_cmd_ops *ops = cmd->protocol_data; 1545 1544 struct se_device *dev = cmd->se_dev; 1546 1545 1547 1546 return ops->execute_unmap && dev->dev_attrib.emulate_tpu; ··· 1662 1659 0xff, SCSI_CONTROL_MASK}, 1663 1660 }; 1664 1661 1665 - static bool tcm_is_pr_enabled(struct se_cmd *cmd) 1662 + static bool tcm_is_pr_enabled(struct 
target_opcode_descriptor *descr, 1663 + struct se_cmd *cmd) 1666 1664 { 1667 1665 struct se_device *dev = cmd->se_dev; 1668 1666 1669 - return dev->dev_attrib.emulate_pr; 1667 + if (!dev->dev_attrib.emulate_pr) 1668 + return false; 1669 + 1670 + if (!(dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR)) 1671 + return true; 1672 + 1673 + switch (descr->opcode) { 1674 + case RESERVE: 1675 + case RESERVE_10: 1676 + case RELEASE: 1677 + case RELEASE_10: 1678 + /* 1679 + * The pr_ops which are used by the backend modules don't 1680 + * support these commands. 1681 + */ 1682 + return false; 1683 + case PERSISTENT_RESERVE_OUT: 1684 + switch (descr->service_action) { 1685 + case PRO_REGISTER_AND_MOVE: 1686 + case PRO_REPLACE_LOST_RESERVATION: 1687 + /* 1688 + * The backend modules don't have access to ports and 1689 + * I_T nexuses so they can't handle these type of 1690 + * requests. 1691 + */ 1692 + return false; 1693 + } 1694 + break; 1695 + case PERSISTENT_RESERVE_IN: 1696 + if (descr->service_action == PRI_READ_FULL_STATUS) 1697 + return false; 1698 + break; 1699 + } 1700 + 1701 + return true; 1670 1702 } 1671 1703 1672 1704 static struct target_opcode_descriptor tcm_opcode_pri_read_caps = { ··· 1826 1788 .enabled = tcm_is_pr_enabled, 1827 1789 }; 1828 1790 1829 - static bool tcm_is_scsi2_reservations_enabled(struct se_cmd *cmd) 1830 - { 1831 - struct se_device *dev = cmd->se_dev; 1832 - 1833 - return dev->dev_attrib.emulate_pr; 1834 - } 1835 - 1836 1791 static struct target_opcode_descriptor tcm_opcode_release = { 1837 1792 .support = SCSI_SUPPORT_FULL, 1838 1793 .opcode = RELEASE, 1839 1794 .cdb_size = 6, 1840 1795 .usage_bits = {RELEASE, 0x00, 0x00, 0x00, 1841 1796 0x00, SCSI_CONTROL_MASK}, 1842 - .enabled = tcm_is_scsi2_reservations_enabled, 1797 + .enabled = tcm_is_pr_enabled, 1843 1798 }; 1844 1799 1845 1800 static struct target_opcode_descriptor tcm_opcode_release10 = { ··· 1842 1811 .usage_bits = {RELEASE_10, 0x00, 0x00, 0x00, 1843 1812 0x00, 0x00, 0x00, 
0xff, 1844 1813 0xff, SCSI_CONTROL_MASK}, 1845 - .enabled = tcm_is_scsi2_reservations_enabled, 1814 + .enabled = tcm_is_pr_enabled, 1846 1815 }; 1847 1816 1848 1817 static struct target_opcode_descriptor tcm_opcode_reserve = { ··· 1851 1820 .cdb_size = 6, 1852 1821 .usage_bits = {RESERVE, 0x00, 0x00, 0x00, 1853 1822 0x00, SCSI_CONTROL_MASK}, 1854 - .enabled = tcm_is_scsi2_reservations_enabled, 1823 + .enabled = tcm_is_pr_enabled, 1855 1824 }; 1856 1825 1857 1826 static struct target_opcode_descriptor tcm_opcode_reserve10 = { ··· 1861 1830 .usage_bits = {RESERVE_10, 0x00, 0x00, 0x00, 1862 1831 0x00, 0x00, 0x00, 0xff, 1863 1832 0xff, SCSI_CONTROL_MASK}, 1864 - .enabled = tcm_is_scsi2_reservations_enabled, 1833 + .enabled = tcm_is_pr_enabled, 1865 1834 }; 1866 1835 1867 1836 static struct target_opcode_descriptor tcm_opcode_request_sense = { ··· 1880 1849 0xff, SCSI_CONTROL_MASK}, 1881 1850 }; 1882 1851 1883 - static bool tcm_is_3pc_enabled(struct se_cmd *cmd) 1852 + static bool tcm_is_3pc_enabled(struct target_opcode_descriptor *descr, 1853 + struct se_cmd *cmd) 1884 1854 { 1885 1855 struct se_device *dev = cmd->se_dev; 1886 1856 ··· 1942 1910 0xff, 0xff, 0x00, SCSI_CONTROL_MASK}, 1943 1911 }; 1944 1912 1945 - 1946 - static bool spc_rsoc_enabled(struct se_cmd *cmd) 1913 + static bool spc_rsoc_enabled(struct target_opcode_descriptor *descr, 1914 + struct se_cmd *cmd) 1947 1915 { 1948 1916 struct se_device *dev = cmd->se_dev; 1949 1917 ··· 1963 1931 .enabled = spc_rsoc_enabled, 1964 1932 }; 1965 1933 1966 - static bool tcm_is_set_tpg_enabled(struct se_cmd *cmd) 1934 + static bool tcm_is_set_tpg_enabled(struct target_opcode_descriptor *descr, 1935 + struct se_cmd *cmd) 1967 1936 { 1968 1937 struct t10_alua_tg_pt_gp *l_tg_pt_gp; 1969 1938 struct se_lun *l_lun = cmd->se_lun; ··· 2151 2118 if (descr->serv_action_valid) 2152 2119 return TCM_INVALID_CDB_FIELD; 2153 2120 2154 - if (!descr->enabled || descr->enabled(cmd)) 2121 + if (!descr->enabled || descr->enabled(descr, 
cmd)) 2155 2122 *opcode = descr; 2156 2123 break; 2157 2124 case 0x2: ··· 2165 2132 */ 2166 2133 if (descr->serv_action_valid && 2167 2134 descr->service_action == requested_sa) { 2168 - if (!descr->enabled || descr->enabled(cmd)) 2135 + if (!descr->enabled || descr->enabled(descr, 2136 + cmd)) 2169 2137 *opcode = descr; 2170 2138 } else if (!descr->serv_action_valid) 2171 2139 return TCM_INVALID_CDB_FIELD; ··· 2179 2145 * be returned in the one_command parameter data format. 2180 2146 */ 2181 2147 if (descr->service_action == requested_sa) 2182 - if (!descr->enabled || descr->enabled(cmd)) 2148 + if (!descr->enabled || descr->enabled(descr, 2149 + cmd)) 2183 2150 *opcode = descr; 2184 2151 break; 2185 2152 } ··· 2237 2202 2238 2203 for (i = 0; i < ARRAY_SIZE(tcm_supported_opcodes); i++) { 2239 2204 descr = tcm_supported_opcodes[i]; 2240 - if (descr->enabled && !descr->enabled(cmd)) 2205 + if (descr->enabled && !descr->enabled(descr, cmd)) 2241 2206 continue; 2242 2207 2243 2208 response_length += spc_rsoc_encode_command_descriptor( ··· 2266 2231 struct se_device *dev = cmd->se_dev; 2267 2232 unsigned char *cdb = cmd->t_task_cdb; 2268 2233 2269 - if (!dev->dev_attrib.emulate_pr && 2270 - ((cdb[0] == PERSISTENT_RESERVE_IN) || 2271 - (cdb[0] == PERSISTENT_RESERVE_OUT) || 2272 - (cdb[0] == RELEASE || cdb[0] == RELEASE_10) || 2273 - (cdb[0] == RESERVE || cdb[0] == RESERVE_10))) { 2274 - return TCM_UNSUPPORTED_SCSI_OPCODE; 2234 + switch (cdb[0]) { 2235 + case RESERVE: 2236 + case RESERVE_10: 2237 + case RELEASE: 2238 + case RELEASE_10: 2239 + if (!dev->dev_attrib.emulate_pr) 2240 + return TCM_UNSUPPORTED_SCSI_OPCODE; 2241 + 2242 + if (dev->transport_flags & TRANSPORT_FLAG_PASSTHROUGH_PGR) 2243 + return TCM_UNSUPPORTED_SCSI_OPCODE; 2244 + break; 2245 + case PERSISTENT_RESERVE_IN: 2246 + case PERSISTENT_RESERVE_OUT: 2247 + if (!dev->dev_attrib.emulate_pr) 2248 + return TCM_UNSUPPORTED_SCSI_OPCODE; 2249 + break; 2275 2250 } 2276 2251 2277 2252 switch (cdb[0]) {
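The spc_parse_cdb() rework above converts the old emulate_pr if-chain into a switch so that legacy SCSI-2 RESERVE/RELEASE are refused both when PR emulation is off and when the device is in PR passthrough mode, while PERSISTENT RESERVE IN/OUT only require emulation to be on. A sketch of that decision table (opcode values match `scsi_proto.h`, everything else simplified):

```c
#include <assert.h>
#include <stdbool.h>

enum { RESERVE = 0x16, RELEASE = 0x17, PR_IN = 0x5e, PR_OUT = 0x5f };

/*
 * Model of the reworked reservation-opcode check in spc_parse_cdb().
 * Any opcode not in the table is allowed through to later parsing.
 */
static bool reservation_cdb_allowed(int opcode, bool emulate_pr,
				    bool passthrough_pgr)
{
	switch (opcode) {
	case RESERVE:
	case RELEASE:
		return emulate_pr && !passthrough_pgr;
	case PR_IN:
	case PR_OUT:
		return emulate_pr;
	default:
		return true;
	}
}
```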
+1 -1
drivers/ufs/core/ufs-fault-injection.c
··· 54 54 if (!setup_fault_attr(attr, (char *)val)) 55 55 return -EINVAL; 56 56 57 - strlcpy(kp->arg, val, FAULT_INJ_STR_SIZE); 57 + strscpy(kp->arg, val, FAULT_INJ_STR_SIZE); 58 58 59 59 return 0; 60 60 }
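The strlcpy → strscpy swap above is part of the tree-wide strlcpy removal: strscpy always NUL-terminates and reports truncation as -E2BIG, whereas strlcpy returns strlen(src), which invites out-of-bounds misuse. A userspace stand-in showing the strscpy contract (this is a sketch of the semantics, not the kernel implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define E2BIG_ERR (-7)	/* stand-in for -E2BIG in this userspace sketch */

/* Bounded copy with strscpy-like semantics: destination is always
 * NUL-terminated, and truncation is reported instead of returning
 * the would-be source length. */
static ptrdiff_t my_strscpy(char *dst, const char *src, size_t size)
{
	size_t len = strnlen(src, size);

	if (size == 0)
		return E2BIG_ERR;
	if (len == size) {		/* src does not fit: truncate */
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return E2BIG_ERR;
	}
	memcpy(dst, src, len + 1);	/* fits, including the NUL */
	return (ptrdiff_t)len;
}

/* Demo helper for exercising the copy into a fixed 4-byte buffer. */
static char g_buf[4];
static ptrdiff_t copy4(const char *src)
{
	return my_strscpy(g_buf, src, sizeof(g_buf));
}
```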
+1 -1
drivers/ufs/core/ufs-hwmon.c
··· 146 146 return 0; 147 147 } 148 148 149 - static const struct hwmon_channel_info *ufs_hwmon_info[] = { 149 + static const struct hwmon_channel_info *const ufs_hwmon_info[] = { 150 150 HWMON_CHANNEL_INFO(temp, HWMON_T_ENABLE | HWMON_T_INPUT | HWMON_T_CRIT | HWMON_T_LCRIT), 151 151 NULL 152 152 };
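The added `const` in the hwmon hunk makes the *array of pointers* immutable too, not just the pointed-to structs, so the whole table can be placed in read-only data. A tiny sketch of the distinction (simplified types, not the real `hwmon_channel_info`):

```c
#include <assert.h>
#include <stddef.h>

struct chan_info { int type; };

static const struct chan_info temp_info = { .type = 1 };

/* 'const ... *const arr[]': both the pointed-to structs and the pointer
 * slots themselves are read-only, allowing full .rodata placement. */
static const struct chan_info *const hwmon_info[] = {
	&temp_info,
	NULL,		/* sentinel, as in the kernel table */
};

static int table_len(const struct chan_info *const *tbl)
{
	int n = 0;

	while (tbl[n])
		n++;
	return n;
}
```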
+272 -26
drivers/ufs/core/ufs-mcq.c
··· 12 12 #include <linux/module.h> 13 13 #include <linux/platform_device.h> 14 14 #include "ufshcd-priv.h" 15 + #include <linux/delay.h> 16 + #include <scsi/scsi_cmnd.h> 17 + #include <linux/bitfield.h> 18 + #include <linux/iopoll.h> 15 19 16 20 #define MAX_QUEUE_SUP GENMASK(7, 0) 17 21 #define UFS_MCQ_MIN_RW_QUEUES 2 18 22 #define UFS_MCQ_MIN_READ_QUEUES 0 19 - #define UFS_MCQ_NUM_DEV_CMD_QUEUES 1 20 23 #define UFS_MCQ_MIN_POLL_QUEUES 0 21 24 #define QUEUE_EN_OFFSET 31 22 25 #define QUEUE_ID_OFFSET 16 23 26 24 - #define MAX_DEV_CMD_ENTRIES 2 25 27 #define MCQ_CFG_MAC_MASK GENMASK(16, 8) 26 28 #define MCQ_QCFG_SIZE 0x40 27 29 #define MCQ_ENTRY_SIZE_IN_DWORD 8 28 30 #define CQE_UCD_BA GENMASK_ULL(63, 7) 31 + 32 + /* Max mcq register polling time in microseconds */ 33 + #define MCQ_POLL_US 500000 29 34 30 35 static int rw_queue_count_set(const char *val, const struct kernel_param *kp) 31 36 { ··· 113 108 u32 utag = blk_mq_unique_tag(req); 114 109 u32 hwq = blk_mq_unique_tag_to_hwq(utag); 115 110 116 - /* uhq[0] is used to serve device commands */ 117 - return &hba->uhq[hwq + UFSHCD_MCQ_IO_QUEUE_OFFSET]; 111 + return &hba->uhq[hwq]; 118 112 } 119 113 120 114 /** ··· 157 153 /* maxq is 0 based value */ 158 154 hba_maxq = FIELD_GET(MAX_QUEUE_SUP, hba->mcq_capabilities) + 1; 159 155 160 - tot_queues = UFS_MCQ_NUM_DEV_CMD_QUEUES + read_queues + poll_queues + 161 - rw_queues; 156 + tot_queues = read_queues + poll_queues + rw_queues; 162 157 163 158 if (hba_maxq < tot_queues) { 164 159 dev_err(hba->dev, "Total queues (%d) exceeds HC capacity (%d)\n", ··· 165 162 return -EOPNOTSUPP; 166 163 } 167 164 168 - rem = hba_maxq - UFS_MCQ_NUM_DEV_CMD_QUEUES; 165 + rem = hba_maxq; 169 166 170 167 if (rw_queues) { 171 168 hba->nr_queues[HCTX_TYPE_DEFAULT] = rw_queues; ··· 191 188 for (i = 0; i < HCTX_MAX_TYPES; i++) 192 189 host->nr_hw_queues += hba->nr_queues[i]; 193 190 194 - hba->nr_hw_queues = host->nr_hw_queues + UFS_MCQ_NUM_DEV_CMD_QUEUES; 191 + hba->nr_hw_queues = 
host->nr_hw_queues; 195 192 return 0; 196 193 } 197 194 ··· 273 270 } 274 271 275 272 static void ufshcd_mcq_process_cqe(struct ufs_hba *hba, 276 - struct ufs_hw_queue *hwq) 273 + struct ufs_hw_queue *hwq) 277 274 { 278 275 struct cq_entry *cqe = ufshcd_mcq_cur_cqe(hwq); 279 276 int tag = ufshcd_mcq_get_tag(hba, hwq, cqe); 280 277 281 - ufshcd_compl_one_cqe(hba, tag, cqe); 278 + if (cqe->command_desc_base_addr) { 279 + ufshcd_compl_one_cqe(hba, tag, cqe); 280 + /* After processed the cqe, mark it empty (invalid) entry */ 281 + cqe->command_desc_base_addr = 0; 282 + } 282 283 } 283 284 284 - unsigned long ufshcd_mcq_poll_cqe_nolock(struct ufs_hba *hba, 285 - struct ufs_hw_queue *hwq) 285 + void ufshcd_mcq_compl_all_cqes_lock(struct ufs_hba *hba, 286 + struct ufs_hw_queue *hwq) 287 + { 288 + unsigned long flags; 289 + u32 entries = hwq->max_entries; 290 + 291 + spin_lock_irqsave(&hwq->cq_lock, flags); 292 + while (entries > 0) { 293 + ufshcd_mcq_process_cqe(hba, hwq); 294 + ufshcd_mcq_inc_cq_head_slot(hwq); 295 + entries--; 296 + } 297 + 298 + ufshcd_mcq_update_cq_tail_slot(hwq); 299 + hwq->cq_head_slot = hwq->cq_tail_slot; 300 + spin_unlock_irqrestore(&hwq->cq_lock, flags); 301 + } 302 + 303 + unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba, 304 + struct ufs_hw_queue *hwq) 286 305 { 287 306 unsigned long completed_reqs = 0; 307 + unsigned long flags; 288 308 309 + spin_lock_irqsave(&hwq->cq_lock, flags); 289 310 ufshcd_mcq_update_cq_tail_slot(hwq); 290 311 while (!ufshcd_mcq_is_cq_empty(hwq)) { 291 312 ufshcd_mcq_process_cqe(hba, hwq); ··· 319 292 320 293 if (completed_reqs) 321 294 ufshcd_mcq_update_cq_head(hwq); 322 - 323 - return completed_reqs; 324 - } 325 - EXPORT_SYMBOL_GPL(ufshcd_mcq_poll_cqe_nolock); 326 - 327 - unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba, 328 - struct ufs_hw_queue *hwq) 329 - { 330 - unsigned long completed_reqs, flags; 331 - 332 - spin_lock_irqsave(&hwq->cq_lock, flags); 333 - completed_reqs = 
ufshcd_mcq_poll_cqe_nolock(hba, hwq); 334 295 spin_unlock_irqrestore(&hwq->cq_lock, flags); 335 296 336 297 return completed_reqs; 337 298 } 299 + EXPORT_SYMBOL_GPL(ufshcd_mcq_poll_cqe_lock); 338 300 339 301 void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba) 340 302 { ··· 436 420 hwq->max_entries = hba->nutrs; 437 421 spin_lock_init(&hwq->sq_lock); 438 422 spin_lock_init(&hwq->cq_lock); 423 + mutex_init(&hwq->sq_mutex); 439 424 } 440 425 441 426 /* The very first HW queue serves device commands */ 442 427 hba->dev_cmd_queue = &hba->uhq[0]; 443 - /* Give dev_cmd_queue the minimal number of entries */ 444 - hba->dev_cmd_queue->max_entries = MAX_DEV_CMD_ENTRIES; 445 428 446 429 host->host_tagset = 1; 447 430 return 0; 431 + } 432 + 433 + static int ufshcd_mcq_sq_stop(struct ufs_hba *hba, struct ufs_hw_queue *hwq) 434 + { 435 + void __iomem *reg; 436 + u32 id = hwq->id, val; 437 + int err; 438 + 439 + if (hba->quirks & UFSHCD_QUIRK_MCQ_BROKEN_RTC) 440 + return -ETIMEDOUT; 441 + 442 + writel(SQ_STOP, mcq_opr_base(hba, OPR_SQD, id) + REG_SQRTC); 443 + reg = mcq_opr_base(hba, OPR_SQD, id) + REG_SQRTS; 444 + err = read_poll_timeout(readl, val, val & SQ_STS, 20, 445 + MCQ_POLL_US, false, reg); 446 + if (err) 447 + dev_err(hba->dev, "%s: failed. hwq-id=%d, err=%d\n", 448 + __func__, id, err); 449 + return err; 450 + } 451 + 452 + static int ufshcd_mcq_sq_start(struct ufs_hba *hba, struct ufs_hw_queue *hwq) 453 + { 454 + void __iomem *reg; 455 + u32 id = hwq->id, val; 456 + int err; 457 + 458 + if (hba->quirks & UFSHCD_QUIRK_MCQ_BROKEN_RTC) 459 + return -ETIMEDOUT; 460 + 461 + writel(SQ_START, mcq_opr_base(hba, OPR_SQD, id) + REG_SQRTC); 462 + reg = mcq_opr_base(hba, OPR_SQD, id) + REG_SQRTS; 463 + err = read_poll_timeout(readl, val, !(val & SQ_STS), 20, 464 + MCQ_POLL_US, false, reg); 465 + if (err) 466 + dev_err(hba->dev, "%s: failed. 
hwq-id=%d, err=%d\n", 467 + __func__, id, err); 468 + return err; 469 + } 470 + 471 + /** 472 + * ufshcd_mcq_sq_cleanup - Clean up submission queue resources 473 + * associated with the pending command. 474 + * @hba - per adapter instance. 475 + * @task_tag - The command's task tag. 476 + * 477 + * Returns 0 for success; error code otherwise. 478 + */ 479 + int ufshcd_mcq_sq_cleanup(struct ufs_hba *hba, int task_tag) 480 + { 481 + struct ufshcd_lrb *lrbp = &hba->lrb[task_tag]; 482 + struct scsi_cmnd *cmd = lrbp->cmd; 483 + struct ufs_hw_queue *hwq; 484 + void __iomem *reg, *opr_sqd_base; 485 + u32 nexus, id, val; 486 + int err; 487 + 488 + if (hba->quirks & UFSHCD_QUIRK_MCQ_BROKEN_RTC) 489 + return -ETIMEDOUT; 490 + 491 + if (task_tag != hba->nutrs - UFSHCD_NUM_RESERVED) { 492 + if (!cmd) 493 + return -EINVAL; 494 + hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); 495 + } else { 496 + hwq = hba->dev_cmd_queue; 497 + } 498 + 499 + id = hwq->id; 500 + 501 + mutex_lock(&hwq->sq_mutex); 502 + 503 + /* stop the SQ fetching before working on it */ 504 + err = ufshcd_mcq_sq_stop(hba, hwq); 505 + if (err) 506 + goto unlock; 507 + 508 + /* SQCTI = EXT_IID, IID, LUN, Task Tag */ 509 + nexus = lrbp->lun << 8 | task_tag; 510 + opr_sqd_base = mcq_opr_base(hba, OPR_SQD, id); 511 + writel(nexus, opr_sqd_base + REG_SQCTI); 512 + 513 + /* SQRTCy.ICU = 1 */ 514 + writel(SQ_ICU, opr_sqd_base + REG_SQRTC); 515 + 516 + /* Poll SQRTSy.CUS = 1. Return result from SQRTSy.RTC */ 517 + reg = opr_sqd_base + REG_SQRTS; 518 + err = read_poll_timeout(readl, val, val & SQ_CUS, 20, 519 + MCQ_POLL_US, false, reg); 520 + if (err) 521 + dev_err(hba->dev, "%s: failed. 
hwq=%d, tag=%d err=%ld\n", 522 + __func__, id, task_tag, 523 + FIELD_GET(SQ_ICU_ERR_CODE_MASK, readl(reg))); 524 + 525 + if (ufshcd_mcq_sq_start(hba, hwq)) 526 + err = -ETIMEDOUT; 527 + 528 + unlock: 529 + mutex_unlock(&hwq->sq_mutex); 530 + return err; 531 + } 532 + 533 + /** 534 + * ufshcd_mcq_nullify_sqe - Nullify the submission queue entry. 535 + * Write the sqe's Command Type to 0xF. The host controller will not 536 + * fetch any sqe with Command Type = 0xF. 537 + * 538 + * @utrd - UTP Transfer Request Descriptor to be nullified. 539 + */ 540 + static void ufshcd_mcq_nullify_sqe(struct utp_transfer_req_desc *utrd) 541 + { 542 + u32 dword_0; 543 + 544 + dword_0 = le32_to_cpu(utrd->header.dword_0); 545 + dword_0 &= ~UPIU_COMMAND_TYPE_MASK; 546 + dword_0 |= FIELD_PREP(UPIU_COMMAND_TYPE_MASK, 0xF); 547 + utrd->header.dword_0 = cpu_to_le32(dword_0); 548 + } 549 + 550 + /** 551 + * ufshcd_mcq_sqe_search - Search for the command in the submission queue 552 + * If the command is in the submission queue and not issued to the device yet, 553 + * nullify the sqe so the host controller will skip fetching the sqe. 554 + * 555 + * @hba - per adapter instance. 556 + * @hwq - Hardware Queue to be searched. 557 + * @task_tag - The command's task tag. 558 + * 559 + * Returns true if the SQE containing the command is present in the SQ 560 + * (not fetched by the controller); returns false if the SQE is not in the SQ. 
561 + */ 562 + static bool ufshcd_mcq_sqe_search(struct ufs_hba *hba, 563 + struct ufs_hw_queue *hwq, int task_tag) 564 + { 565 + struct ufshcd_lrb *lrbp = &hba->lrb[task_tag]; 566 + struct utp_transfer_req_desc *utrd; 567 + u32 mask = hwq->max_entries - 1; 568 + __le64 cmd_desc_base_addr; 569 + bool ret = false; 570 + u64 addr, match; 571 + u32 sq_head_slot; 572 + 573 + if (hba->quirks & UFSHCD_QUIRK_MCQ_BROKEN_RTC) 574 + return true; 575 + 576 + mutex_lock(&hwq->sq_mutex); 577 + 578 + ufshcd_mcq_sq_stop(hba, hwq); 579 + sq_head_slot = ufshcd_mcq_get_sq_head_slot(hwq); 580 + if (sq_head_slot == hwq->sq_tail_slot) 581 + goto out; 582 + 583 + cmd_desc_base_addr = lrbp->utr_descriptor_ptr->command_desc_base_addr; 584 + addr = le64_to_cpu(cmd_desc_base_addr) & CQE_UCD_BA; 585 + 586 + while (sq_head_slot != hwq->sq_tail_slot) { 587 + utrd = hwq->sqe_base_addr + 588 + sq_head_slot * sizeof(struct utp_transfer_req_desc); 589 + match = le64_to_cpu(utrd->command_desc_base_addr) & CQE_UCD_BA; 590 + if (addr == match) { 591 + ufshcd_mcq_nullify_sqe(utrd); 592 + ret = true; 593 + goto out; 594 + } 595 + sq_head_slot = (sq_head_slot + 1) & mask; 596 + } 597 + 598 + out: 599 + ufshcd_mcq_sq_start(hba, hwq); 600 + mutex_unlock(&hwq->sq_mutex); 601 + return ret; 602 + } 603 + 604 + /** 605 + * ufshcd_mcq_abort - Abort the command in MCQ. 606 + * @cmd - The command to be aborted. 607 + * 608 + * Returns SUCCESS or FAILED error codes 609 + */ 610 + int ufshcd_mcq_abort(struct scsi_cmnd *cmd) 611 + { 612 + struct Scsi_Host *host = cmd->device->host; 613 + struct ufs_hba *hba = shost_priv(host); 614 + int tag = scsi_cmd_to_rq(cmd)->tag; 615 + struct ufshcd_lrb *lrbp = &hba->lrb[tag]; 616 + struct ufs_hw_queue *hwq; 617 + int err = FAILED; 618 + 619 + if (!ufshcd_cmd_inflight(lrbp->cmd)) { 620 + dev_err(hba->dev, 621 + "%s: skip abort. 
cmd at tag %d already completed.\n", 622 + __func__, tag); 623 + goto out; 624 + } 625 + 626 + /* Skip task abort in case previous aborts failed and report failure */ 627 + if (lrbp->req_abort_skip) { 628 + dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n", 629 + __func__, tag); 630 + goto out; 631 + } 632 + 633 + hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); 634 + 635 + if (ufshcd_mcq_sqe_search(hba, hwq, tag)) { 636 + /* 637 + * Failure. The command should not be "stuck" in SQ for 638 + * a long time which resulted in command being aborted. 639 + */ 640 + dev_err(hba->dev, "%s: cmd found in sq. hwq=%d, tag=%d\n", 641 + __func__, hwq->id, tag); 642 + goto out; 643 + } 644 + 645 + /* 646 + * The command is not in the submission queue, and it is not 647 + * in the completion queue either. Query the device to see if 648 + * the command is being processed in the device. 649 + */ 650 + if (ufshcd_try_to_abort_task(hba, tag)) { 651 + dev_err(hba->dev, "%s: device abort failed %d\n", __func__, err); 652 + lrbp->req_abort_skip = true; 653 + goto out; 654 + } 655 + 656 + err = SUCCESS; 657 + if (ufshcd_cmd_inflight(lrbp->cmd)) 658 + ufshcd_release_scsi_cmd(hba, lrbp); 659 + 660 + out: 661 + return err; 448 662 }
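The MCQ abort path above works by stopping the submission queue, scanning the ring from head to tail for the entry whose command-descriptor address matches, and nullifying it (Command Type 0xF) so the controller never fetches it. A userspace model of that scan-and-nullify step, assuming a power-of-two queue depth as the mask-based wrap requires (types simplified, not the real `utp_transfer_req_desc`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NULLIFIED_TYPE 0xFu	/* controller skips entries of this type */

struct sqe {
	uint64_t cmd_desc_addr;
	uint32_t type;
};

/*
 * Model of ufshcd_mcq_sqe_search(): walk the pending region of the SQ
 * ring (head up to tail, with wrap-around) and nullify the matching
 * entry.  Returns true if the command was still in the queue.
 */
static bool sq_search_and_nullify(struct sqe *ring, uint32_t max_entries,
				  uint32_t head, uint32_t tail, uint64_t addr)
{
	uint32_t mask = max_entries - 1;

	while (head != tail) {
		if (ring[head].cmd_desc_addr == addr) {
			ring[head].type = NULLIFIED_TYPE;
			return true;
		}
		head = (head + 1) & mask;
	}
	return false;
}

/* Demo: 4-entry ring, pending region wraps from slot 3 through slot 0. */
static int demo_search(void)
{
	struct sqe ring[4] = {
		{ 0x100, 0 }, { 0x200, 0 }, { 0x300, 0 }, { 0x400, 0 },
	};

	if (!sq_search_and_nullify(ring, 4, 3, 1, 0x400))
		return 0;
	if (ring[3].type != NULLIFIED_TYPE)
		return 0;
	/* 0x200 sits outside head..tail: already fetched, not found */
	if (sq_search_and_nullify(ring, 4, 3, 1, 0x200))
		return 0;
	return 1;
}
```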
+34 -1
drivers/ufs/core/ufs-sysfs.c
··· 168 168 } 169 169 170 170 pm_runtime_get_sync(hba->dev); 171 - ufshcd_hold(hba, false); 171 + ufshcd_hold(hba); 172 172 ahit = ufshcd_readl(hba, REG_AUTO_HIBERNATE_IDLE_TIMER); 173 173 ufshcd_release(hba); 174 174 pm_runtime_put_sync(hba->dev); ··· 298 298 return res < 0 ? res : count; 299 299 } 300 300 301 + static ssize_t wb_flush_threshold_show(struct device *dev, 302 + struct device_attribute *attr, 303 + char *buf) 304 + { 305 + struct ufs_hba *hba = dev_get_drvdata(dev); 306 + 307 + return sysfs_emit(buf, "%u\n", hba->vps->wb_flush_threshold); 308 + } 309 + 310 + static ssize_t wb_flush_threshold_store(struct device *dev, 311 + struct device_attribute *attr, 312 + const char *buf, size_t count) 313 + { 314 + struct ufs_hba *hba = dev_get_drvdata(dev); 315 + unsigned int wb_flush_threshold; 316 + 317 + if (kstrtouint(buf, 0, &wb_flush_threshold)) 318 + return -EINVAL; 319 + 320 + /* The range of values for wb_flush_threshold is (0,10] */ 321 + if (wb_flush_threshold > UFS_WB_BUF_REMAIN_PERCENT(100) || 322 + wb_flush_threshold == 0) { 323 + dev_err(dev, "The value of wb_flush_threshold is invalid!\n"); 324 + return -EINVAL; 325 + } 326 + 327 + hba->vps->wb_flush_threshold = wb_flush_threshold; 328 + 329 + return count; 330 + } 331 + 301 332 static DEVICE_ATTR_RW(rpm_lvl); 302 333 static DEVICE_ATTR_RO(rpm_target_dev_state); 303 334 static DEVICE_ATTR_RO(rpm_target_link_state); ··· 338 307 static DEVICE_ATTR_RW(auto_hibern8); 339 308 static DEVICE_ATTR_RW(wb_on); 340 309 static DEVICE_ATTR_RW(enable_wb_buf_flush); 310 + static DEVICE_ATTR_RW(wb_flush_threshold); 341 311 342 312 static struct attribute *ufs_sysfs_ufshcd_attrs[] = { 343 313 &dev_attr_rpm_lvl.attr, ··· 350 318 &dev_attr_auto_hibern8.attr, 351 319 &dev_attr_wb_on.attr, 352 320 &dev_attr_enable_wb_buf_flush.attr, 321 + &dev_attr_wb_flush_threshold.attr, 353 322 NULL 354 323 }; 355 324
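The new wb_flush_threshold_store() parses the sysfs input and accepts only the range (0, 10], since UFS_WB_BUF_REMAIN_PERCENT(100) maps 100% of remaining WriteBooster buffer to the value 10. A userspace sketch of that parse-and-validate step, with `strtoul` standing in for the kernel's `kstrtouint`:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Mirrors UFS_WB_BUF_REMAIN_PERCENT(): maps a percentage onto the 0..10
 * "available buffer left" scale used by the WriteBooster attribute. */
#define WB_BUF_REMAIN_PERCENT(p) ((p) / 10)

/* Parse an unsigned threshold and enforce the (0, 10] range, as the
 * sysfs store callback does.  Returns 0 on success, -EINVAL otherwise. */
static int parse_wb_flush_threshold(const char *buf, unsigned int *out)
{
	char *end;
	unsigned long val = strtoul(buf, &end, 0);

	if (end == buf || (*end != '\0' && *end != '\n'))
		return -EINVAL;
	if (val == 0 || val > WB_BUF_REMAIN_PERCENT(100))
		return -EINVAL;
	*out = (unsigned int)val;
	return 0;
}

/* Demo helper: does this sysfs write succeed? */
static int wb_thresh_ok(const char *s)
{
	unsigned int v;

	return parse_wb_flush_threshold(s, &v) == 0;
}
```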
+1 -1
drivers/ufs/core/ufshcd-crypto.c
··· 24 24 u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg); 25 25 int err = 0; 26 26 27 - ufshcd_hold(hba, false); 27 + ufshcd_hold(hba); 28 28 29 29 if (hba->vops && hba->vops->program_key) { 30 30 err = hba->vops->program_key(hba, cfg, slot);
+19 -8
drivers/ufs/core/ufshcd-priv.h
··· 71 71 void ufshcd_mcq_select_mcq_mode(struct ufs_hba *hba); 72 72 u32 ufshcd_mcq_read_cqis(struct ufs_hba *hba, int i); 73 73 void ufshcd_mcq_write_cqis(struct ufs_hba *hba, u32 val, int i); 74 - unsigned long ufshcd_mcq_poll_cqe_nolock(struct ufs_hba *hba, 75 - struct ufs_hw_queue *hwq); 76 74 struct ufs_hw_queue *ufshcd_mcq_req_to_hwq(struct ufs_hba *hba, 77 75 struct request *req); 78 76 unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba, 79 77 struct ufs_hw_queue *hwq); 78 + void ufshcd_mcq_compl_all_cqes_lock(struct ufs_hba *hba, 79 + struct ufs_hw_queue *hwq); 80 + bool ufshcd_cmd_inflight(struct scsi_cmnd *cmd); 81 + int ufshcd_mcq_sq_cleanup(struct ufs_hba *hba, int task_tag); 82 + int ufshcd_mcq_abort(struct scsi_cmnd *cmd); 83 + int ufshcd_try_to_abort_task(struct ufs_hba *hba, int tag); 84 + void ufshcd_release_scsi_cmd(struct ufs_hba *hba, 85 + struct ufshcd_lrb *lrbp); 80 86 81 - #define UFSHCD_MCQ_IO_QUEUE_OFFSET 1 82 87 #define SD_ASCII_STD true 83 88 #define SD_RAW false 84 89 int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index, 85 90 u8 **buf, bool ascii); 86 - 87 - int ufshcd_hold(struct ufs_hba *hba, bool async); 88 - void ufshcd_release(struct ufs_hba *hba); 89 91 90 92 int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd); 91 93 ··· 368 366 static inline void ufshcd_inc_sq_tail(struct ufs_hw_queue *q) 369 367 __must_hold(&q->sq_lock) 370 368 { 371 - u32 mask = q->max_entries - 1; 372 369 u32 val; 373 370 374 - q->sq_tail_slot = (q->sq_tail_slot + 1) & mask; 371 + q->sq_tail_slot++; 372 + if (q->sq_tail_slot == q->max_entries) 373 + q->sq_tail_slot = 0; 375 374 val = q->sq_tail_slot * sizeof(struct utp_transfer_req_desc); 376 375 writel(val, q->mcq_sq_tail); 377 376 } ··· 407 404 408 405 return cqe + q->cq_head_slot; 409 406 } 407 + 408 + static inline u32 ufshcd_mcq_get_sq_head_slot(struct ufs_hw_queue *q) 409 + { 410 + u32 val = readl(q->mcq_sq_head); 411 + 412 + return val / sizeof(struct 
utp_transfer_req_desc); 413 + } 414 + 410 415 #endif /* _UFSHCD_PRIV_H_ */
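The ufshcd_inc_sq_tail() change above replaces the `(slot + 1) & mask` wrap with an explicit compare-and-reset. The mask form is only correct when the queue depth is a power of two; the compare form works for any depth. A small sketch contrasting the two:

```c
#include <assert.h>
#include <stdint.h>

/* Mask-based wrap: correct only when max_entries is a power of two. */
static uint32_t advance_masked(uint32_t slot, uint32_t max_entries)
{
	return (slot + 1) & (max_entries - 1);
}

/* Compare-based wrap (the form now used for sq_tail_slot): correct for
 * any queue depth, including non-power-of-two values. */
static uint32_t advance_compare(uint32_t slot, uint32_t max_entries)
{
	slot++;
	if (slot == max_entries)
		slot = 0;
	return slot;
}
```

With a depth of 31 the mask form fails to wrap the last slot back to zero, while the compare form handles it correctly; with a power-of-two depth such as 32, both agree.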
+287 -184
drivers/ufs/core/ufshcd.c
··· 98 98 /* Polling time to wait for fDeviceInit */ 99 99 #define FDEVICEINIT_COMPL_TIMEOUT 1500 /* millisecs */ 100 100 101 - /* UFSHC 4.0 compliant HC support this mode, refer param_set_mcq_mode() */ 101 + /* UFSHC 4.0 compliant HC support this mode. */ 102 102 static bool use_mcq_mode = true; 103 103 104 104 static bool is_mcq_supported(struct ufs_hba *hba) ··· 106 106 return hba->mcq_sup && use_mcq_mode; 107 107 } 108 108 109 - static int param_set_mcq_mode(const char *val, const struct kernel_param *kp) 110 - { 111 - int ret; 112 - 113 - ret = param_set_bool(val, kp); 114 - if (ret) 115 - return ret; 116 - 117 - return 0; 118 - } 119 - 120 - static const struct kernel_param_ops mcq_mode_ops = { 121 - .set = param_set_mcq_mode, 122 - .get = param_get_bool, 123 - }; 124 - 125 - module_param_cb(use_mcq_mode, &mcq_mode_ops, &use_mcq_mode, 0644); 109 + module_param(use_mcq_mode, bool, 0644); 126 110 MODULE_PARM_DESC(use_mcq_mode, "Control MCQ mode for controllers starting from UFSHCI 4.0. 1 - enable MCQ, 0 - disable MCQ. 
MCQ is enabled by default"); 127 111 128 112 #define ufshcd_toggle_vreg(_dev, _vreg, _on) \ ··· 157 173 enum { 158 174 UFSHCD_MAX_CHANNEL = 0, 159 175 UFSHCD_MAX_ID = 1, 160 - UFSHCD_NUM_RESERVED = 1, 161 176 UFSHCD_CMD_PER_LUN = 32 - UFSHCD_NUM_RESERVED, 162 177 UFSHCD_CAN_QUEUE = 32 - UFSHCD_NUM_RESERVED, 163 178 }; ··· 284 301 static int ufshcd_setup_vreg(struct ufs_hba *hba, bool on); 285 302 static inline int ufshcd_config_vreg_hpm(struct ufs_hba *hba, 286 303 struct ufs_vreg *vreg); 287 - static int ufshcd_try_to_abort_task(struct ufs_hba *hba, int tag); 288 304 static void ufshcd_wb_toggle_buf_flush_during_h8(struct ufs_hba *hba, 289 305 bool enable); 290 306 static void ufshcd_hba_vreg_set_lpm(struct ufs_hba *hba); ··· 1187 1205 bool timeout = false, do_last_check = false; 1188 1206 ktime_t start; 1189 1207 1190 - ufshcd_hold(hba, false); 1208 + ufshcd_hold(hba); 1191 1209 spin_lock_irqsave(hba->host->host_lock, flags); 1192 1210 /* 1193 1211 * Wait for all the outstanding tasks/transfer requests. ··· 1308 1326 } 1309 1327 1310 1328 /* let's not get into low power until clock scaling is completed */ 1311 - ufshcd_hold(hba, false); 1329 + ufshcd_hold(hba); 1312 1330 1313 1331 out: 1314 1332 return ret; ··· 1638 1656 goto out; 1639 1657 1640 1658 ufshcd_rpm_get_sync(hba); 1641 - ufshcd_hold(hba, false); 1659 + ufshcd_hold(hba); 1642 1660 1643 1661 hba->clk_scaling.is_enabled = value; 1644 1662 ··· 1721 1739 spin_lock_irqsave(hba->host->host_lock, flags); 1722 1740 if (hba->clk_gating.state == CLKS_ON) { 1723 1741 spin_unlock_irqrestore(hba->host->host_lock, flags); 1724 - goto unblock_reqs; 1742 + return; 1725 1743 } 1726 1744 1727 1745 spin_unlock_irqrestore(hba->host->host_lock, flags); ··· 1744 1762 } 1745 1763 hba->clk_gating.is_suspended = false; 1746 1764 } 1747 - unblock_reqs: 1748 - ufshcd_scsi_unblock_requests(hba); 1749 1765 } 1750 1766 1751 1767 /** 1752 1768 * ufshcd_hold - Enable clocks that were gated earlier due to ufshcd_release. 
1753 1769 * Also, exit from hibern8 mode and set the link as active. 1754 1770 * @hba: per adapter instance 1755 - * @async: This indicates whether caller should ungate clocks asynchronously. 1756 1771 */ 1757 - int ufshcd_hold(struct ufs_hba *hba, bool async) 1772 + void ufshcd_hold(struct ufs_hba *hba) 1758 1773 { 1759 - int rc = 0; 1760 1774 bool flush_result; 1761 1775 unsigned long flags; 1762 1776 1763 1777 if (!ufshcd_is_clkgating_allowed(hba) || 1764 1778 !hba->clk_gating.is_initialized) 1765 - goto out; 1779 + return; 1766 1780 spin_lock_irqsave(hba->host->host_lock, flags); 1767 1781 hba->clk_gating.active_reqs++; 1768 1782 ··· 1775 1797 */ 1776 1798 if (ufshcd_can_hibern8_during_gating(hba) && 1777 1799 ufshcd_is_link_hibern8(hba)) { 1778 - if (async) { 1779 - rc = -EAGAIN; 1780 - hba->clk_gating.active_reqs--; 1781 - break; 1782 - } 1783 1800 spin_unlock_irqrestore(hba->host->host_lock, flags); 1784 1801 flush_result = flush_work(&hba->clk_gating.ungate_work); 1785 1802 if (hba->clk_gating.is_suspended && !flush_result) 1786 - goto out; 1803 + return; 1787 1804 spin_lock_irqsave(hba->host->host_lock, flags); 1788 1805 goto start; 1789 1806 } ··· 1800 1827 hba->clk_gating.state = REQ_CLKS_ON; 1801 1828 trace_ufshcd_clk_gating(dev_name(hba->dev), 1802 1829 hba->clk_gating.state); 1803 - if (queue_work(hba->clk_gating.clk_gating_workq, 1804 - &hba->clk_gating.ungate_work)) 1805 - ufshcd_scsi_block_requests(hba); 1830 + queue_work(hba->clk_gating.clk_gating_workq, 1831 + &hba->clk_gating.ungate_work); 1806 1832 /* 1807 1833 * fall through to check if we should wait for this 1808 1834 * work to be done or not. 
1809 1835 */ 1810 1836 fallthrough; 1811 1837 case REQ_CLKS_ON: 1812 - if (async) { 1813 - rc = -EAGAIN; 1814 - hba->clk_gating.active_reqs--; 1815 - break; 1816 - } 1817 - 1818 1838 spin_unlock_irqrestore(hba->host->host_lock, flags); 1819 1839 flush_work(&hba->clk_gating.ungate_work); 1820 1840 /* Make sure state is CLKS_ON before returning */ ··· 1819 1853 break; 1820 1854 } 1821 1855 spin_unlock_irqrestore(hba->host->host_lock, flags); 1822 - out: 1823 - return rc; 1824 1856 } 1825 1857 EXPORT_SYMBOL_GPL(ufshcd_hold); 1826 1858 ··· 2050 2086 ufshcd_remove_clk_gating_sysfs(hba); 2051 2087 2052 2088 /* Ungate the clock if necessary. */ 2053 - ufshcd_hold(hba, false); 2089 + ufshcd_hold(hba); 2054 2090 hba->clk_gating.is_initialized = false; 2055 2091 ufshcd_release(hba); 2056 2092 ··· 2300 2336 2301 2337 /* Read crypto capabilities */ 2302 2338 err = ufshcd_hba_init_crypto_capabilities(hba); 2303 - if (err) 2339 + if (err) { 2304 2340 dev_err(hba->dev, "crypto setup failed\n"); 2341 + return err; 2342 + } 2305 2343 2306 2344 hba->mcq_sup = FIELD_GET(MASK_MCQ_SUPPORT, hba->capabilities); 2307 2345 if (!hba->mcq_sup) 2308 - return err; 2346 + return 0; 2309 2347 2310 2348 hba->mcq_capabilities = ufshcd_readl(hba, REG_MCQCAP); 2311 2349 hba->ext_iid_sup = FIELD_GET(MASK_EXT_IID_SUPPORT, 2312 2350 hba->mcq_capabilities); 2313 2351 2314 - return err; 2352 + return 0; 2315 2353 } 2316 2354 2317 2355 /** ··· 2448 2482 if (hba->quirks & UFSHCD_QUIRK_BROKEN_UIC_CMD) 2449 2483 return 0; 2450 2484 2451 - ufshcd_hold(hba, false); 2485 + ufshcd_hold(hba); 2452 2486 mutex_lock(&hba->uic_cmd_mutex); 2453 2487 ufshcd_add_delay_before_dme_cmd(hba); 2454 2488 ··· 2499 2533 * 11b to indicate Dword granularity. A value of '3' 2500 2534 * indicates 4 bytes, '7' indicates 8 bytes, etc." 
2501 2535 */ 2502 - WARN_ONCE(len > 256 * 1024, "len = %#x\n", len); 2536 + WARN_ONCE(len > SZ_256K, "len = %#x\n", len); 2503 2537 prd->size = cpu_to_le32(len - 1); 2504 2538 prd->addr = cpu_to_le64(sg->dma_address); 2505 2539 prd->reserved = 0; ··· 2851 2885 2852 2886 WARN_ONCE(tag < 0 || tag >= hba->nutrs, "Invalid tag %d\n", tag); 2853 2887 2854 - /* 2855 - * Allows the UFS error handler to wait for prior ufshcd_queuecommand() 2856 - * calls. 2857 - */ 2858 - rcu_read_lock(); 2859 - 2860 2888 switch (hba->ufshcd_state) { 2861 2889 case UFSHCD_STATE_OPERATIONAL: 2862 2890 break; ··· 2896 2936 2897 2937 hba->req_abort_count = 0; 2898 2938 2899 - err = ufshcd_hold(hba, true); 2900 - if (err) { 2901 - err = SCSI_MLQUEUE_HOST_BUSY; 2902 - goto out; 2903 - } 2904 - WARN_ON(ufshcd_is_clkgating_allowed(hba) && 2905 - (hba->clk_gating.state != CLKS_ON)); 2939 + ufshcd_hold(hba); 2906 2940 2907 2941 lrbp = &hba->lrb[tag]; 2908 - WARN_ON(lrbp->cmd); 2909 2942 lrbp->cmd = cmd; 2910 2943 lrbp->task_tag = tag; 2911 2944 lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun); ··· 2914 2961 2915 2962 err = ufshcd_map_sg(hba, lrbp); 2916 2963 if (err) { 2917 - lrbp->cmd = NULL; 2918 2964 ufshcd_release(hba); 2919 2965 goto out; 2920 2966 } ··· 2924 2972 ufshcd_send_command(hba, tag, hwq); 2925 2973 2926 2974 out: 2927 - rcu_read_unlock(); 2928 - 2929 2975 if (ufs_trigger_eh()) { 2930 2976 unsigned long flags; 2931 2977 ··· 2949 2999 } 2950 3000 2951 3001 /* 2952 - * Clear all the requests from the controller for which a bit has been set in 2953 - * @mask and wait until the controller confirms that these requests have been 2954 - * cleared. 3002 + * Check with the block layer if the command is inflight 3003 + * @cmd: command to check. 3004 + * 3005 + * Returns true if command is inflight; false if not. 
2955 3006 */ 2956 - static int ufshcd_clear_cmds(struct ufs_hba *hba, u32 mask) 3007 + bool ufshcd_cmd_inflight(struct scsi_cmnd *cmd) 2957 3008 { 3009 + struct request *rq; 3010 + 3011 + if (!cmd) 3012 + return false; 3013 + 3014 + rq = scsi_cmd_to_rq(cmd); 3015 + if (!blk_mq_request_started(rq)) 3016 + return false; 3017 + 3018 + return true; 3019 + } 3020 + 3021 + /* 3022 + * Clear the pending command in the controller and wait until 3023 + * the controller confirms that the command has been cleared. 3024 + * @hba: per adapter instance 3025 + * @task_tag: The tag number of the command to be cleared. 3026 + */ 3027 + static int ufshcd_clear_cmd(struct ufs_hba *hba, u32 task_tag) 3028 + { 3029 + u32 mask = 1U << task_tag; 2958 3030 unsigned long flags; 3031 + int err; 3032 + 3033 + if (is_mcq_enabled(hba)) { 3034 + /* 3035 + * MCQ mode. Clean up the MCQ resources similar to 3036 + * what the ufshcd_utrl_clear() does for SDB mode. 3037 + */ 3038 + err = ufshcd_mcq_sq_cleanup(hba, task_tag); 3039 + if (err) { 3040 + dev_err(hba->dev, "%s: failed tag=%d. err=%d\n", 3041 + __func__, task_tag, err); 3042 + return err; 3043 + } 3044 + return 0; 3045 + } 2959 3046 2960 3047 /* clear outstanding transaction before retry */ 2961 3048 spin_lock_irqsave(hba->host->host_lock, flags); ··· 3086 3099 * not trigger any race conditions. 
3087 3100 */ 3088 3101 hba->dev_cmd.complete = NULL; 3089 - err = ufshcd_get_tr_ocs(lrbp, hba->dev_cmd.cqe); 3102 + err = ufshcd_get_tr_ocs(lrbp, NULL); 3090 3103 if (!err) 3091 3104 err = ufshcd_dev_cmd_completion(hba, lrbp); 3092 3105 } else { 3093 3106 err = -ETIMEDOUT; 3094 3107 dev_dbg(hba->dev, "%s: dev_cmd request timedout, tag %d\n", 3095 3108 __func__, lrbp->task_tag); 3096 - if (ufshcd_clear_cmds(hba, 1U << lrbp->task_tag) == 0) { 3109 + 3110 + /* MCQ mode */ 3111 + if (is_mcq_enabled(hba)) { 3112 + err = ufshcd_clear_cmd(hba, lrbp->task_tag); 3113 + hba->dev_cmd.complete = NULL; 3114 + return err; 3115 + } 3116 + 3117 + /* SDB mode */ 3118 + if (ufshcd_clear_cmd(hba, lrbp->task_tag) == 0) { 3097 3119 /* successfully cleared the command, retry if needed */ 3098 3120 err = -EAGAIN; 3099 3121 /* ··· 3176 3180 down_read(&hba->clk_scaling_lock); 3177 3181 3178 3182 lrbp = &hba->lrb[tag]; 3179 - WARN_ON(lrbp->cmd); 3183 + lrbp->cmd = NULL; 3180 3184 err = ufshcd_compose_dev_cmd(hba, lrbp, cmd_type, tag); 3181 3185 if (unlikely(err)) 3182 3186 goto out; 3183 3187 3184 3188 hba->dev_cmd.complete = &wait; 3185 - hba->dev_cmd.cqe = NULL; 3186 3189 3187 3190 ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr); 3188 3191 ··· 3262 3267 3263 3268 BUG_ON(!hba); 3264 3269 3265 - ufshcd_hold(hba, false); 3270 + ufshcd_hold(hba); 3266 3271 mutex_lock(&hba->dev_cmd.lock); 3267 3272 ufshcd_init_query(hba, &request, &response, opcode, idn, index, 3268 3273 selector); ··· 3336 3341 return -EINVAL; 3337 3342 } 3338 3343 3339 - ufshcd_hold(hba, false); 3344 + ufshcd_hold(hba); 3340 3345 3341 3346 mutex_lock(&hba->dev_cmd.lock); 3342 3347 ufshcd_init_query(hba, &request, &response, opcode, idn, index, ··· 3432 3437 return -EINVAL; 3433 3438 } 3434 3439 3435 - ufshcd_hold(hba, false); 3440 + ufshcd_hold(hba); 3436 3441 3437 3442 mutex_lock(&hba->dev_cmd.lock); 3438 3443 ufshcd_init_query(hba, &request, &response, opcode, idn, index, ··· 3774 3779 3775 3780 /* 3776 
3781 * Allocate memory for UTP Transfer descriptors 3777 - * UFSHCI requires 1024 byte alignment of UTRD 3782 + * UFSHCI requires 1KB alignment of UTRD 3778 3783 */ 3779 3784 utrdl_size = (sizeof(struct utp_transfer_req_desc) * hba->nutrs); 3780 3785 hba->utrdl_base_addr = dmam_alloc_coherent(hba->dev, ··· 3782 3787 &hba->utrdl_dma_addr, 3783 3788 GFP_KERNEL); 3784 3789 if (!hba->utrdl_base_addr || 3785 - WARN_ON(hba->utrdl_dma_addr & (1024 - 1))) { 3790 + WARN_ON(hba->utrdl_dma_addr & (SZ_1K - 1))) { 3786 3791 dev_err(hba->dev, 3787 3792 "Transfer Descriptor Memory allocation failed\n"); 3788 3793 goto out; ··· 3798 3803 goto skip_utmrdl; 3799 3804 /* 3800 3805 * Allocate memory for UTP Task Management descriptors 3801 - * UFSHCI requires 1024 byte alignment of UTMRD 3806 + * UFSHCI requires 1KB alignment of UTMRD 3802 3807 */ 3803 3808 utmrdl_size = sizeof(struct utp_task_req_desc) * hba->nutmrs; 3804 3809 hba->utmrdl_base_addr = dmam_alloc_coherent(hba->dev, ··· 3806 3811 &hba->utmrdl_dma_addr, 3807 3812 GFP_KERNEL); 3808 3813 if (!hba->utmrdl_base_addr || 3809 - WARN_ON(hba->utmrdl_dma_addr & (1024 - 1))) { 3814 + WARN_ON(hba->utmrdl_dma_addr & (SZ_1K - 1))) { 3810 3815 dev_err(hba->dev, 3811 3816 "Task Management Descriptor Memory allocation failed\n"); 3812 3817 goto out; ··· 3863 3868 /* Configure UTRD with command descriptor base address */ 3864 3869 cmd_desc_element_addr = 3865 3870 (cmd_desc_dma_addr + (cmd_desc_size * i)); 3866 - utrdlp[i].command_desc_base_addr_lo = 3867 - cpu_to_le32(lower_32_bits(cmd_desc_element_addr)); 3868 - utrdlp[i].command_desc_base_addr_hi = 3869 - cpu_to_le32(upper_32_bits(cmd_desc_element_addr)); 3871 + utrdlp[i].command_desc_base_addr = 3872 + cpu_to_le64(cmd_desc_element_addr); 3870 3873 3871 3874 /* Response upiu and prdt offset should be in double words */ 3872 3875 if (hba->quirks & UFSHCD_QUIRK_PRDT_BYTE_GRAN) { ··· 4248 4255 uic_cmd.command = UIC_CMD_DME_SET; 4249 4256 uic_cmd.argument1 = UIC_ARG_MIB(PA_PWRMODE); 4250 
4257 uic_cmd.argument3 = mode; 4251 - ufshcd_hold(hba, false); 4258 + ufshcd_hold(hba); 4252 4259 ret = ufshcd_uic_pwr_ctrl(hba, &uic_cmd); 4253 4260 ufshcd_release(hba); 4254 4261 ··· 4355 4362 if (update && 4356 4363 !pm_runtime_suspended(&hba->ufs_device_wlun->sdev_gendev)) { 4357 4364 ufshcd_rpm_get_sync(hba); 4358 - ufshcd_hold(hba, false); 4365 + ufshcd_hold(hba); 4359 4366 ufshcd_auto_hibern8_enable(hba); 4360 4367 ufshcd_release(hba); 4361 4368 ufshcd_rpm_put_sync(hba); ··· 4948 4955 int err = 0; 4949 4956 int retries; 4950 4957 4951 - ufshcd_hold(hba, false); 4958 + ufshcd_hold(hba); 4952 4959 mutex_lock(&hba->dev_cmd.lock); 4953 4960 for (retries = NOP_OUT_RETRIES; retries > 0; retries--) { 4954 4961 err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_NOP, ··· 5141 5148 5142 5149 blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1); 5143 5150 if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT) 5144 - blk_queue_update_dma_alignment(q, 4096 - 1); 5151 + blk_queue_update_dma_alignment(q, SZ_4K - 1); 5145 5152 /* 5146 5153 * Block runtime-pm until all consumers are added. 5147 5154 * Refer ufshcd_setup_links(). ··· 5409 5416 } 5410 5417 5411 5418 /* Release the resources allocated for processing a SCSI command. */ 5412 - static void ufshcd_release_scsi_cmd(struct ufs_hba *hba, 5413 - struct ufshcd_lrb *lrbp) 5419 + void ufshcd_release_scsi_cmd(struct ufs_hba *hba, 5420 + struct ufshcd_lrb *lrbp) 5414 5421 { 5415 5422 struct scsi_cmnd *cmd = lrbp->cmd; 5416 5423 5417 5424 scsi_dma_unmap(cmd); 5418 - lrbp->cmd = NULL; /* Mark the command as completed. 
*/ 5419 5425 ufshcd_release(hba); 5420 5426 ufshcd_clk_scaling_update_busy(hba); 5421 5427 } ··· 5430 5438 { 5431 5439 struct ufshcd_lrb *lrbp; 5432 5440 struct scsi_cmnd *cmd; 5441 + enum utp_ocs ocs; 5433 5442 5434 5443 lrbp = &hba->lrb[task_tag]; 5435 5444 lrbp->compl_time_stamp = ktime_get(); ··· 5446 5453 } else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE || 5447 5454 lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) { 5448 5455 if (hba->dev_cmd.complete) { 5449 - hba->dev_cmd.cqe = cqe; 5450 - ufshcd_add_command_trace(hba, task_tag, UFS_DEV_COMP); 5456 + if (cqe) { 5457 + ocs = le32_to_cpu(cqe->status) & MASK_OCS; 5458 + lrbp->utr_descriptor_ptr->header.dword_2 = 5459 + cpu_to_le32(ocs); 5460 + } 5451 5461 complete(hba->dev_cmd.complete); 5452 5462 ufshcd_clk_scaling_update_busy(hba); 5453 5463 } ··· 5503 5507 struct ufs_hw_queue *hwq; 5504 5508 5505 5509 if (is_mcq_enabled(hba)) { 5506 - hwq = &hba->uhq[queue_num + UFSHCD_MCQ_IO_QUEUE_OFFSET]; 5510 + hwq = &hba->uhq[queue_num]; 5507 5511 5508 5512 return ufshcd_mcq_poll_cqe_lock(hba, hwq); 5509 5513 } ··· 5525 5529 __ufshcd_transfer_req_compl(hba, completed_reqs); 5526 5530 5527 5531 return completed_reqs != 0; 5532 + } 5533 + 5534 + /** 5535 + * ufshcd_mcq_compl_pending_transfer - MCQ mode function. It is 5536 + * invoked from the error handler context or ufshcd_host_reset_and_restore() 5537 + * to complete the pending transfers and free the resources associated with 5538 + * the scsi command. 5539 + * 5540 + * @hba: per adapter instance 5541 + * @force_compl: This flag is set to true when invoked 5542 + * from ufshcd_host_reset_and_restore() in which case it requires special 5543 + * handling because the host controller has been reset by ufshcd_hba_stop(). 
5544 + */ 5545 + static void ufshcd_mcq_compl_pending_transfer(struct ufs_hba *hba, 5546 + bool force_compl) 5547 + { 5548 + struct ufs_hw_queue *hwq; 5549 + struct ufshcd_lrb *lrbp; 5550 + struct scsi_cmnd *cmd; 5551 + unsigned long flags; 5552 + u32 hwq_num, utag; 5553 + int tag; 5554 + 5555 + for (tag = 0; tag < hba->nutrs; tag++) { 5556 + lrbp = &hba->lrb[tag]; 5557 + cmd = lrbp->cmd; 5558 + if (!ufshcd_cmd_inflight(cmd) || 5559 + test_bit(SCMD_STATE_COMPLETE, &cmd->state)) 5560 + continue; 5561 + 5562 + utag = blk_mq_unique_tag(scsi_cmd_to_rq(cmd)); 5563 + hwq_num = blk_mq_unique_tag_to_hwq(utag); 5564 + hwq = &hba->uhq[hwq_num]; 5565 + 5566 + if (force_compl) { 5567 + ufshcd_mcq_compl_all_cqes_lock(hba, hwq); 5568 + /* 5569 + * For those cmds of which the cqes are not present 5570 + * in the cq, complete them explicitly. 5571 + */ 5572 + if (cmd && !test_bit(SCMD_STATE_COMPLETE, &cmd->state)) { 5573 + spin_lock_irqsave(&hwq->cq_lock, flags); 5574 + set_host_byte(cmd, DID_REQUEUE); 5575 + ufshcd_release_scsi_cmd(hba, lrbp); 5576 + scsi_done(cmd); 5577 + spin_unlock_irqrestore(&hwq->cq_lock, flags); 5578 + } 5579 + } else { 5580 + ufshcd_mcq_poll_cqe_lock(hba, hwq); 5581 + } 5582 + } 5528 5583 } 5529 5584 5530 5585 /** ··· 6142 6095 } 6143 6096 6144 6097 /* Complete requests that have door-bell cleared */ 6145 - static void ufshcd_complete_requests(struct ufs_hba *hba) 6098 + static void ufshcd_complete_requests(struct ufs_hba *hba, bool force_compl) 6146 6099 { 6147 - ufshcd_transfer_req_compl(hba); 6100 + if (is_mcq_enabled(hba)) 6101 + ufshcd_mcq_compl_pending_transfer(hba, force_compl); 6102 + else 6103 + ufshcd_transfer_req_compl(hba); 6104 + 6148 6105 ufshcd_tmc_handler(hba); 6149 6106 } 6150 6107 ··· 6292 6241 ufshcd_setup_vreg(hba, true); 6293 6242 ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq); 6294 6243 ufshcd_config_vreg_hpm(hba, hba->vreg_info.vccq2); 6295 - ufshcd_hold(hba, false); 6244 + ufshcd_hold(hba); 6296 6245 if 
(!ufshcd_is_clkgating_allowed(hba)) 6297 6246 ufshcd_setup_clocks(hba, true); 6298 6247 ufshcd_release(hba); 6299 6248 pm_op = hba->is_sys_suspended ? UFS_SYSTEM_PM : UFS_RUNTIME_PM; 6300 6249 ufshcd_vops_resume(hba, pm_op); 6301 6250 } else { 6302 - ufshcd_hold(hba, false); 6251 + ufshcd_hold(hba); 6303 6252 if (ufshcd_is_clkscaling_supported(hba) && 6304 6253 hba->clk_scaling.is_enabled) 6305 6254 ufshcd_suspend_clkscaling(hba); 6306 6255 ufshcd_clk_scaling_allow(hba, false); 6307 6256 } 6308 6257 ufshcd_scsi_block_requests(hba); 6309 - /* Drain ufshcd_queuecommand() */ 6310 - synchronize_rcu(); 6258 + /* Wait for ongoing ufshcd_queuecommand() calls to finish. */ 6259 + blk_mq_wait_quiesce_done(&hba->host->tag_set); 6311 6260 cancel_work_sync(&hba->eeh_work); 6312 6261 } 6313 6262 ··· 6389 6338 bool needs_reset = false; 6390 6339 int tag, ret; 6391 6340 6392 - /* Clear pending transfer requests */ 6393 - for_each_set_bit(tag, &hba->outstanding_reqs, hba->nutrs) { 6394 - ret = ufshcd_try_to_abort_task(hba, tag); 6395 - dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag, 6396 - hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1, 6397 - ret ? "failed" : "succeeded"); 6398 - if (ret) { 6399 - needs_reset = true; 6400 - goto out; 6341 + if (is_mcq_enabled(hba)) { 6342 + struct ufshcd_lrb *lrbp; 6343 + int tag; 6344 + 6345 + for (tag = 0; tag < hba->nutrs; tag++) { 6346 + lrbp = &hba->lrb[tag]; 6347 + if (!ufshcd_cmd_inflight(lrbp->cmd)) 6348 + continue; 6349 + ret = ufshcd_try_to_abort_task(hba, tag); 6350 + dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag, 6351 + hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1, 6352 + ret ? 
"failed" : "succeeded"); 6353 + if (ret) { 6354 + needs_reset = true; 6355 + goto out; 6356 + } 6357 + } 6358 + } else { 6359 + /* Clear pending transfer requests */ 6360 + for_each_set_bit(tag, &hba->outstanding_reqs, hba->nutrs) { 6361 + ret = ufshcd_try_to_abort_task(hba, tag); 6362 + dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag, 6363 + hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1, 6364 + ret ? "failed" : "succeeded"); 6365 + if (ret) { 6366 + needs_reset = true; 6367 + goto out; 6368 + } 6401 6369 } 6402 6370 } 6403 - 6404 6371 /* Clear pending task management requests */ 6405 6372 for_each_set_bit(tag, &hba->outstanding_tasks, hba->nutmrs) { 6406 6373 if (ufshcd_clear_tm_cmd(hba, tag)) { ··· 6429 6360 6430 6361 out: 6431 6362 /* Complete the requests that are cleared by s/w */ 6432 - ufshcd_complete_requests(hba); 6363 + ufshcd_complete_requests(hba, false); 6433 6364 6434 6365 return needs_reset; 6435 6366 } ··· 6469 6400 spin_unlock_irqrestore(hba->host->host_lock, flags); 6470 6401 ufshcd_err_handling_prepare(hba); 6471 6402 /* Complete requests that have door-bell cleared by h/w */ 6472 - ufshcd_complete_requests(hba); 6403 + ufshcd_complete_requests(hba, false); 6473 6404 spin_lock_irqsave(hba->host->host_lock, flags); 6474 6405 again: 6475 6406 needs_restore = false; ··· 6840 6771 ufshcd_mcq_write_cqis(hba, events, i); 6841 6772 6842 6773 if (events & UFSHCD_MCQ_CQIS_TAIL_ENT_PUSH_STS) 6843 - ufshcd_mcq_poll_cqe_nolock(hba, hwq); 6774 + ufshcd_mcq_poll_cqe_lock(hba, hwq); 6844 6775 } 6845 6776 6846 6777 return IRQ_HANDLED; ··· 6970 6901 return PTR_ERR(req); 6971 6902 6972 6903 req->end_io_data = &wait; 6973 - ufshcd_hold(hba, false); 6904 + ufshcd_hold(hba); 6974 6905 6975 6906 spin_lock_irqsave(host->host_lock, flags); 6976 6907 ··· 7106 7037 down_read(&hba->clk_scaling_lock); 7107 7038 7108 7039 lrbp = &hba->lrb[tag]; 7109 - WARN_ON(lrbp->cmd); 7110 7040 lrbp->cmd = NULL; 7111 7041 lrbp->task_tag = tag; 7112 7042 lrbp->lun = 0; ··· 
7206 7138 cmd_type = DEV_CMD_TYPE_NOP; 7207 7139 fallthrough; 7208 7140 case UPIU_TRANSACTION_QUERY_REQ: 7209 - ufshcd_hold(hba, false); 7141 + ufshcd_hold(hba); 7210 7142 mutex_lock(&hba->dev_cmd.lock); 7211 7143 err = ufshcd_issue_devman_upiu_cmd(hba, req_upiu, rsp_upiu, 7212 7144 desc_buff, buff_len, ··· 7272 7204 u16 ehs_len; 7273 7205 7274 7206 /* Protects use of hba->reserved_slot. */ 7275 - ufshcd_hold(hba, false); 7207 + ufshcd_hold(hba); 7276 7208 mutex_lock(&hba->dev_cmd.lock); 7277 7209 down_read(&hba->clk_scaling_lock); 7278 7210 7279 7211 lrbp = &hba->lrb[tag]; 7280 - WARN_ON(lrbp->cmd); 7281 7212 lrbp->cmd = NULL; 7282 7213 lrbp->task_tag = tag; 7283 7214 lrbp->lun = UFS_UPIU_RPMB_WLUN; ··· 7348 7281 unsigned long flags, pending_reqs = 0, not_cleared = 0; 7349 7282 struct Scsi_Host *host; 7350 7283 struct ufs_hba *hba; 7351 - u32 pos; 7284 + struct ufs_hw_queue *hwq; 7285 + struct ufshcd_lrb *lrbp; 7286 + u32 pos, not_cleared_mask = 0; 7352 7287 int err; 7353 7288 u8 resp = 0xF, lun; 7354 7289 ··· 7365 7296 goto out; 7366 7297 } 7367 7298 7299 + if (is_mcq_enabled(hba)) { 7300 + for (pos = 0; pos < hba->nutrs; pos++) { 7301 + lrbp = &hba->lrb[pos]; 7302 + if (ufshcd_cmd_inflight(lrbp->cmd) && 7303 + lrbp->lun == lun) { 7304 + ufshcd_clear_cmd(hba, pos); 7305 + hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(lrbp->cmd)); 7306 + ufshcd_mcq_poll_cqe_lock(hba, hwq); 7307 + } 7308 + } 7309 + err = 0; 7310 + goto out; 7311 + } 7312 + 7368 7313 /* clear the commands that were pending for corresponding LUN */ 7369 7314 spin_lock_irqsave(&hba->outstanding_lock, flags); 7370 7315 for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) ··· 7387 7304 hba->outstanding_reqs &= ~pending_reqs; 7388 7305 spin_unlock_irqrestore(&hba->outstanding_lock, flags); 7389 7306 7390 - if (ufshcd_clear_cmds(hba, pending_reqs) < 0) { 7391 - spin_lock_irqsave(&hba->outstanding_lock, flags); 7392 - not_cleared = pending_reqs & 7393 - ufshcd_readl(hba, 
REG_UTP_TRANSFER_REQ_DOOR_BELL); 7394 - hba->outstanding_reqs |= not_cleared; 7395 - spin_unlock_irqrestore(&hba->outstanding_lock, flags); 7307 + for_each_set_bit(pos, &pending_reqs, hba->nutrs) { 7308 + if (ufshcd_clear_cmd(hba, pos) < 0) { 7309 + spin_lock_irqsave(&hba->outstanding_lock, flags); 7310 + not_cleared = 1U << pos & 7311 + ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); 7312 + hba->outstanding_reqs |= not_cleared; 7313 + not_cleared_mask |= not_cleared; 7314 + spin_unlock_irqrestore(&hba->outstanding_lock, flags); 7396 7315 7397 - dev_err(hba->dev, "%s: failed to clear requests %#lx\n", 7398 - __func__, not_cleared); 7316 + dev_err(hba->dev, "%s: failed to clear request %d\n", 7317 + __func__, pos); 7318 + } 7399 7319 } 7400 - __ufshcd_transfer_req_compl(hba, pending_reqs & ~not_cleared); 7320 + __ufshcd_transfer_req_compl(hba, pending_reqs & ~not_cleared_mask); 7401 7321 7402 7322 out: 7403 7323 hba->req_abort_count = 0; ··· 7438 7352 * 7439 7353 * Returns zero on success, non-zero on failure 7440 7354 */ 7441 - static int ufshcd_try_to_abort_task(struct ufs_hba *hba, int tag) 7355 + int ufshcd_try_to_abort_task(struct ufs_hba *hba, int tag) 7442 7356 { 7443 7357 struct ufshcd_lrb *lrbp = &hba->lrb[tag]; 7444 7358 int err = 0; ··· 7461 7375 */ 7462 7376 dev_err(hba->dev, "%s: cmd at tag %d not pending in the device.\n", 7463 7377 __func__, tag); 7378 + if (is_mcq_enabled(hba)) { 7379 + /* MCQ mode */ 7380 + if (ufshcd_cmd_inflight(lrbp->cmd)) { 7381 + /* sleep for max. 200us same delay as in SDB mode */ 7382 + usleep_range(100, 200); 7383 + continue; 7384 + } 7385 + /* command completed already */ 7386 + dev_err(hba->dev, "%s: cmd at tag=%d is cleared.\n", 7387 + __func__, tag); 7388 + goto out; 7389 + } 7390 + 7391 + /* Single Doorbell Mode */ 7464 7392 reg = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); 7465 7393 if (reg & (1 << tag)) { 7466 7394 /* sleep for max. 
200us to stabilize */ ··· 7511 7411 goto out; 7512 7412 } 7513 7413 7514 - err = ufshcd_clear_cmds(hba, 1U << tag); 7414 + err = ufshcd_clear_cmd(hba, tag); 7515 7415 if (err) 7516 7416 dev_err(hba->dev, "%s: Failed clearing cmd at tag %d, err %d\n", 7517 7417 __func__, tag, err); ··· 7539 7439 7540 7440 WARN_ONCE(tag < 0, "Invalid tag %d\n", tag); 7541 7441 7542 - ufshcd_hold(hba, false); 7543 - reg = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); 7544 - /* If command is already aborted/completed, return FAILED. */ 7545 - if (!(test_bit(tag, &hba->outstanding_reqs))) { 7546 - dev_err(hba->dev, 7547 - "%s: cmd at tag %d already completed, outstanding=0x%lx, doorbell=0x%x\n", 7548 - __func__, tag, hba->outstanding_reqs, reg); 7549 - goto release; 7442 + ufshcd_hold(hba); 7443 + 7444 + if (!is_mcq_enabled(hba)) { 7445 + reg = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); 7446 + if (!test_bit(tag, &hba->outstanding_reqs)) { 7447 + /* If command is already aborted/completed, return FAILED. */ 7448 + dev_err(hba->dev, 7449 + "%s: cmd at tag %d already completed, outstanding=0x%lx, doorbell=0x%x\n", 7450 + __func__, tag, hba->outstanding_reqs, reg); 7451 + goto release; 7452 + } 7550 7453 } 7551 7454 7552 7455 /* Print Transfer Request of aborted task */ ··· 7574 7471 } 7575 7472 hba->req_abort_count++; 7576 7473 7577 - if (!(reg & (1 << tag))) { 7474 + if (!is_mcq_enabled(hba) && !(reg & (1 << tag))) { 7475 + /* only execute this code in single doorbell mode */ 7578 7476 dev_err(hba->dev, 7579 7477 "%s: cmd was completed, but without a notifying intr, tag = %d", 7580 7478 __func__, tag); ··· 7598 7494 hba->force_reset = true; 7599 7495 ufshcd_schedule_eh_work(hba); 7600 7496 spin_unlock_irqrestore(host->host_lock, flags); 7497 + goto release; 7498 + } 7499 + 7500 + if (is_mcq_enabled(hba)) { 7501 + /* MCQ mode. 
Branch off to handle abort for mcq mode */ 7502 + err = ufshcd_mcq_abort(cmd); 7601 7503 goto release; 7602 7504 } 7603 7505 ··· 7662 7552 ufshpb_toggle_state(hba, HPB_PRESENT, HPB_RESET); 7663 7553 ufshcd_hba_stop(hba); 7664 7554 hba->silence_err_logs = true; 7665 - ufshcd_complete_requests(hba); 7555 + ufshcd_complete_requests(hba, true); 7666 7556 hba->silence_err_logs = false; 7667 7557 7668 7558 /* scale up clocks to max frequency before full reinitialization */ ··· 8612 8502 static void ufshcd_config_mcq(struct ufs_hba *hba) 8613 8503 { 8614 8504 int ret; 8505 + u32 intrs; 8615 8506 8616 8507 ret = ufshcd_mcq_vops_config_esi(hba); 8617 8508 dev_info(hba->dev, "ESI %sconfigured\n", ret ? "is not " : ""); 8618 8509 8619 - ufshcd_enable_intr(hba, UFSHCD_ENABLE_MCQ_INTRS); 8510 + intrs = UFSHCD_ENABLE_MCQ_INTRS; 8511 + if (hba->quirks & UFSHCD_QUIRK_MCQ_BROKEN_INTR) 8512 + intrs &= ~MCQ_CQ_EVENT_STATUS; 8513 + ufshcd_enable_intr(hba, intrs); 8620 8514 ufshcd_mcq_make_queues_operational(hba); 8621 8515 ufshcd_mcq_config_mac(hba, hba->nutrs); 8622 8516 ··· 8888 8774 .cmd_per_lun = UFSHCD_CMD_PER_LUN, 8889 8775 .can_queue = UFSHCD_CAN_QUEUE, 8890 8776 .max_segment_size = PRDT_DATA_BYTE_COUNT_MAX, 8891 - .max_sectors = (1 << 20) / SECTOR_SIZE, /* 1 MiB */ 8777 + .max_sectors = SZ_1M / SECTOR_SIZE, 8892 8778 .max_host_blocked = 1, 8893 8779 .track_queue_depth = 1, 8894 8780 .skip_settle_delay = 1, ··· 9298 9184 }; 9299 9185 9300 9186 return scsi_execute_cmd(sdev, cdb, REQ_OP_DRV_IN, /*buffer=*/NULL, 9301 - /*bufflen=*/0, /*timeout=*/HZ, /*retries=*/0, &args); 9187 + /*bufflen=*/0, /*timeout=*/10 * HZ, /*retries=*/0, 9188 + &args); 9302 9189 } 9303 9190 9304 9191 /** ··· 9545 9430 * If we can't transition into any of the low power modes 9546 9431 * just gate the clocks. 
9547 9432 */ 9548 - ufshcd_hold(hba, false); 9433 + ufshcd_hold(hba); 9549 9434 hba->clk_gating.is_suspended = true; 9550 9435 9551 9436 if (ufshcd_is_clkscaling_supported(hba)) ··· 9890 9775 } 9891 9776 #endif 9892 9777 9893 - static void ufshcd_wl_shutdown(struct device *dev) 9894 - { 9895 - struct scsi_device *sdev = to_scsi_device(dev); 9896 - struct ufs_hba *hba; 9897 - 9898 - hba = shost_priv(sdev->host); 9899 - 9900 - down(&hba->host_sem); 9901 - hba->shutting_down = true; 9902 - up(&hba->host_sem); 9903 - 9904 - /* Turn on everything while shutting down */ 9905 - ufshcd_rpm_get_sync(hba); 9906 - scsi_device_quiesce(sdev); 9907 - shost_for_each_device(sdev, hba->host) { 9908 - if (sdev == hba->ufs_device_wlun) 9909 - continue; 9910 - scsi_device_quiesce(sdev); 9911 - } 9912 - __ufshcd_wl_suspend(hba, UFS_SHUTDOWN_PM); 9913 - } 9914 - 9915 9778 /** 9916 9779 * ufshcd_suspend - helper function for suspend operations 9917 9780 * @hba: per adapter instance ··· 10074 9981 EXPORT_SYMBOL(ufshcd_runtime_resume); 10075 9982 #endif /* CONFIG_PM */ 10076 9983 10077 - /** 10078 - * ufshcd_shutdown - shutdown routine 10079 - * @hba: per adapter instance 10080 - * 10081 - * This function would turn off both UFS device and UFS hba 10082 - * regulators. It would also disable clocks. 10083 - * 10084 - * Returns 0 always to allow force shutdown even in case of errors. 
10085 - */ 10086 - int ufshcd_shutdown(struct ufs_hba *hba) 9984 + static void ufshcd_wl_shutdown(struct device *dev) 10087 9985 { 9986 + struct scsi_device *sdev = to_scsi_device(dev); 9987 + struct ufs_hba *hba = shost_priv(sdev->host); 9988 + 9989 + down(&hba->host_sem); 9990 + hba->shutting_down = true; 9991 + up(&hba->host_sem); 9992 + 9993 + /* Turn on everything while shutting down */ 9994 + ufshcd_rpm_get_sync(hba); 9995 + scsi_device_quiesce(sdev); 9996 + shost_for_each_device(sdev, hba->host) { 9997 + if (sdev == hba->ufs_device_wlun) 9998 + continue; 9999 + scsi_device_quiesce(sdev); 10000 + } 10001 + __ufshcd_wl_suspend(hba, UFS_SHUTDOWN_PM); 10002 + 10003 + /* 10004 + * Next, turn off the UFS controller and the UFS regulators. Disable 10005 + * clocks. 10006 + */ 10088 10007 if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba)) 10089 10008 ufshcd_suspend(hba); 10090 10009 10091 10010 hba->is_powered = false; 10092 - /* allow force shutdown even in case of errors */ 10093 - return 0; 10094 10011 } 10095 - EXPORT_SYMBOL(ufshcd_shutdown); 10096 10012 10097 10013 /** 10098 10014 * ufshcd_remove - de-allocate SCSI host and host memory space ··· 10328 10226 host->max_channel = UFSHCD_MAX_CHANNEL; 10329 10227 host->unique_id = host->host_no; 10330 10228 host->max_cmd_len = UFS_CDB_SIZE; 10229 + host->queuecommand_may_block = !!(hba->caps & UFSHCD_CAP_CLK_GATING); 10331 10230 10332 10231 hba->max_pwr_info.is_valid = false; 10333 10232
+3 -3
drivers/ufs/core/ufshpb.c
··· 30 30 static mempool_t *ufshpb_mctx_pool; 31 31 static mempool_t *ufshpb_page_pool; 32 32 /* A cache size of 2MB can cache ppn in the 1GB range. */ 33 - static unsigned int ufshpb_host_map_kbytes = 2048; 33 + static unsigned int ufshpb_host_map_kbytes = SZ_2K; 34 34 static int tot_active_srgn_pages; 35 35 36 36 static struct workqueue_struct *ufshpb_wq; ··· 2461 2461 2462 2462 init_success = !ufshpb_check_hpb_reset_query(hba); 2463 2463 2464 - pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * 1024) / PAGE_SIZE; 2464 + pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * SZ_1K) / PAGE_SIZE; 2465 2465 if (pool_size > tot_active_srgn_pages) { 2466 2466 mempool_resize(ufshpb_mctx_pool, tot_active_srgn_pages); 2467 2467 mempool_resize(ufshpb_page_pool, tot_active_srgn_pages); ··· 2527 2527 return -ENOMEM; 2528 2528 } 2529 2529 2530 - pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * 1024) / PAGE_SIZE; 2530 + pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * SZ_1K) / PAGE_SIZE; 2531 2531 dev_info(hba->dev, "%s:%d ufshpb_host_map_kbytes %u pool_size %u\n", 2532 2532 __func__, __LINE__, ufshpb_host_map_kbytes, pool_size); 2533 2533
+1 -1
drivers/ufs/core/ufshpb.h
··· 25 25 26 26 /* hpb map & entries macro */ 27 27 #define HPB_RGN_SIZE_UNIT 512 28 - #define HPB_ENTRY_BLOCK_SIZE 4096 28 + #define HPB_ENTRY_BLOCK_SIZE SZ_4K 29 29 #define HPB_ENTRY_SIZE 0x8 30 30 #define PINNED_NOT_SET U32_MAX 31 31
+1 -1
drivers/ufs/host/Kconfig
··· 59 59 depends on SCSI_UFSHCD_PLATFORM && ARCH_QCOM 60 60 depends on GENERIC_MSI_IRQ 61 61 depends on RESET_CONTROLLER 62 - select QCOM_SCM if SCSI_UFS_CRYPTO 62 + select QCOM_INLINE_CRYPTO_ENGINE if SCSI_UFS_CRYPTO 63 63 help 64 64 This selects the QCOM specific additions to UFSHCD platform driver. 65 65 UFS host on QCOM needs some vendor specific configuration before
+1 -3
drivers/ufs/host/Makefile
··· 3 3 obj-$(CONFIG_SCSI_UFS_DWC_TC_PCI) += tc-dwc-g210-pci.o ufshcd-dwc.o tc-dwc-g210.o 4 4 obj-$(CONFIG_SCSI_UFS_DWC_TC_PLATFORM) += tc-dwc-g210-pltfrm.o ufshcd-dwc.o tc-dwc-g210.o 5 5 obj-$(CONFIG_SCSI_UFS_CDNS_PLATFORM) += cdns-pltfrm.o 6 - obj-$(CONFIG_SCSI_UFS_QCOM) += ufs_qcom.o 7 - ufs_qcom-y += ufs-qcom.o 8 - ufs_qcom-$(CONFIG_SCSI_UFS_CRYPTO) += ufs-qcom-ice.o 6 + obj-$(CONFIG_SCSI_UFS_QCOM) += ufs-qcom.o 9 7 obj-$(CONFIG_SCSI_UFS_EXYNOS) += ufs-exynos.o 10 8 obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o 11 9 obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
-1
drivers/ufs/host/cdns-pltfrm.c
··· 328 328 static struct platform_driver cdns_ufs_pltfrm_driver = { 329 329 .probe = cdns_ufs_pltfrm_probe, 330 330 .remove = cdns_ufs_pltfrm_remove, 331 - .shutdown = ufshcd_pltfrm_shutdown, 332 331 .driver = { 333 332 .name = "cdns-ufshcd", 334 333 .pm = &cdns_ufs_dev_pm_ops,
-10
drivers/ufs/host/tc-dwc-g210-pci.c
··· 33 33 }; 34 34 35 35 /** 36 - * tc_dwc_g210_pci_shutdown - main function to put the controller in reset state 37 - * @pdev: pointer to PCI device handle 38 - */ 39 - static void tc_dwc_g210_pci_shutdown(struct pci_dev *pdev) 40 - { 41 - ufshcd_shutdown((struct ufs_hba *)pci_get_drvdata(pdev)); 42 - } 43 - 44 - /** 45 36 * tc_dwc_g210_pci_remove - de-allocate PCI/SCSI host and host memory space 46 37 * data structure memory 47 38 * @pdev: pointer to PCI handle ··· 128 137 .id_table = tc_dwc_g210_pci_tbl, 129 138 .probe = tc_dwc_g210_pci_probe, 130 139 .remove = tc_dwc_g210_pci_remove, 131 - .shutdown = tc_dwc_g210_pci_shutdown, 132 140 .driver = { 133 141 .pm = &tc_dwc_g210_pci_pm_ops 134 142 },
-1
drivers/ufs/host/tc-dwc-g210-pltfrm.c
··· 92 92 static struct platform_driver tc_dwc_g210_pltfm_driver = { 93 93 .probe = tc_dwc_g210_pltfm_probe, 94 94 .remove = tc_dwc_g210_pltfm_remove, 95 - .shutdown = ufshcd_pltfrm_shutdown, 96 95 .driver = { 97 96 .name = "tc-dwc-g210-pltfm", 98 97 .pm = &tc_dwc_g210_pltfm_pm_ops,
+1 -2
drivers/ufs/host/ufs-exynos.c
··· 1306 1306 * (ufshcd_async_scan()). Note: this callback may also be called 1307 1307 * from other functions than ufshcd_init(). 1308 1308 */ 1309 - hba->host->max_segment_size = 4096; 1309 + hba->host->max_segment_size = SZ_4K; 1310 1310 1311 1311 if (ufs->drv_data->pre_hce_enable) { 1312 1312 ret = ufs->drv_data->pre_hce_enable(ufs); ··· 1757 1757 static struct platform_driver exynos_ufs_pltform = { 1758 1758 .probe = exynos_ufs_probe, 1759 1759 .remove = exynos_ufs_remove, 1760 - .shutdown = ufshcd_pltfrm_shutdown, 1761 1760 .driver = { 1762 1761 .name = "exynos-ufshc", 1763 1762 .pm = &exynos_ufs_pm_ops,
+12 -13
drivers/ufs/host/ufs-hisi.c
··· 335 335 /* PA_TxSkip */ 336 336 ufshcd_dme_set(hba, UIC_ARG_MIB(0x155c), 0x0); 337 337 /*PA_PWRModeUserData0 = 8191, default is 0*/ 338 - ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b0), 8191); 338 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b0), SZ_8K - 1); 339 339 /*PA_PWRModeUserData1 = 65535, default is 0*/ 340 - ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b1), 65535); 340 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b1), SZ_64K - 1); 341 341 /*PA_PWRModeUserData2 = 32767, default is 0*/ 342 - ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b2), 32767); 342 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b2), SZ_32K - 1); 343 343 /*DME_FC0ProtectionTimeOutVal = 8191, default is 0*/ 344 - ufshcd_dme_set(hba, UIC_ARG_MIB(0xd041), 8191); 344 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xd041), SZ_8K - 1); 345 345 /*DME_TC0ReplayTimeOutVal = 65535, default is 0*/ 346 - ufshcd_dme_set(hba, UIC_ARG_MIB(0xd042), 65535); 346 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xd042), SZ_64K - 1); 347 347 /*DME_AFC0ReqTimeOutVal = 32767, default is 0*/ 348 - ufshcd_dme_set(hba, UIC_ARG_MIB(0xd043), 32767); 348 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xd043), SZ_32K - 1); 349 349 /*PA_PWRModeUserData3 = 8191, default is 0*/ 350 - ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b3), 8191); 350 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b3), SZ_8K - 1); 351 351 /*PA_PWRModeUserData4 = 65535, default is 0*/ 352 - ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b4), 65535); 352 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b4), SZ_64K - 1); 353 353 /*PA_PWRModeUserData5 = 32767, default is 0*/ 354 - ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b5), 32767); 354 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x15b5), SZ_32K - 1); 355 355 /*DME_FC1ProtectionTimeOutVal = 8191, default is 0*/ 356 - ufshcd_dme_set(hba, UIC_ARG_MIB(0xd044), 8191); 356 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xd044), SZ_8K - 1); 357 357 /*DME_TC1ReplayTimeOutVal = 65535, default is 0*/ 358 - ufshcd_dme_set(hba, UIC_ARG_MIB(0xd045), 65535); 358 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xd045), SZ_64K - 1); 359 359 /*DME_AFC1ReqTimeOutVal = 32767, 
default is 0*/ 360 - ufshcd_dme_set(hba, UIC_ARG_MIB(0xd046), 32767); 360 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xd046), SZ_32K - 1); 361 361 } 362 362 363 363 static int ufs_hisi_pwr_change_notify(struct ufs_hba *hba, ··· 593 593 static struct platform_driver ufs_hisi_pltform = { 594 594 .probe = ufs_hisi_probe, 595 595 .remove = ufs_hisi_remove, 596 - .shutdown = ufshcd_pltfrm_shutdown, 597 596 .driver = { 598 597 .name = "ufshcd-hisi", 599 598 .pm = &ufs_hisi_pm_ops,
+2 -4
drivers/ufs/host/ufs-mediatek.c
··· 410 410 usleep_range(100, 200); 411 411 } while (ktime_before(time_checked, timeout)); 412 412 413 - if (val == state) 414 - return 0; 415 - 416 413 return -ETIMEDOUT; 417 414 } 418 415 ··· 898 901 hba->caps |= UFSHCD_CAP_CLK_SCALING; 899 902 900 903 hba->quirks |= UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL; 904 + hba->quirks |= UFSHCD_QUIRK_MCQ_BROKEN_INTR; 905 + hba->quirks |= UFSHCD_QUIRK_MCQ_BROKEN_RTC; 901 906 hba->vps->wb_flush_threshold = UFS_WB_BUF_REMAIN_PERCENT(80); 902 907 903 908 if (host->caps & UFS_MTK_CAP_DISABLE_AH8) ··· 1649 1650 static struct platform_driver ufs_mtk_pltform = { 1650 1651 .probe = ufs_mtk_probe, 1651 1652 .remove = ufs_mtk_remove, 1652 - .shutdown = ufshcd_pltfrm_shutdown, 1653 1653 .driver = { 1654 1654 .name = "ufshcd-mtk", 1655 1655 .pm = &ufs_mtk_pm_ops,
-244
drivers/ufs/host/ufs-qcom-ice.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Qualcomm ICE (Inline Crypto Engine) support. 4 - * 5 - * Copyright (c) 2014-2019, The Linux Foundation. All rights reserved. 6 - * Copyright 2019 Google LLC 7 - */ 8 - 9 - #include <linux/delay.h> 10 - #include <linux/platform_device.h> 11 - #include <linux/firmware/qcom/qcom_scm.h> 12 - 13 - #include "ufs-qcom.h" 14 - 15 - #define AES_256_XTS_KEY_SIZE 64 16 - 17 - /* QCOM ICE registers */ 18 - 19 - #define QCOM_ICE_REG_CONTROL 0x0000 20 - #define QCOM_ICE_REG_RESET 0x0004 21 - #define QCOM_ICE_REG_VERSION 0x0008 22 - #define QCOM_ICE_REG_FUSE_SETTING 0x0010 23 - #define QCOM_ICE_REG_PARAMETERS_1 0x0014 24 - #define QCOM_ICE_REG_PARAMETERS_2 0x0018 25 - #define QCOM_ICE_REG_PARAMETERS_3 0x001C 26 - #define QCOM_ICE_REG_PARAMETERS_4 0x0020 27 - #define QCOM_ICE_REG_PARAMETERS_5 0x0024 28 - 29 - /* QCOM ICE v3.X only */ 30 - #define QCOM_ICE_GENERAL_ERR_STTS 0x0040 31 - #define QCOM_ICE_INVALID_CCFG_ERR_STTS 0x0030 32 - #define QCOM_ICE_GENERAL_ERR_MASK 0x0044 33 - 34 - /* QCOM ICE v2.X only */ 35 - #define QCOM_ICE_REG_NON_SEC_IRQ_STTS 0x0040 36 - #define QCOM_ICE_REG_NON_SEC_IRQ_MASK 0x0044 37 - 38 - #define QCOM_ICE_REG_NON_SEC_IRQ_CLR 0x0048 39 - #define QCOM_ICE_REG_STREAM1_ERROR_SYNDROME1 0x0050 40 - #define QCOM_ICE_REG_STREAM1_ERROR_SYNDROME2 0x0054 41 - #define QCOM_ICE_REG_STREAM2_ERROR_SYNDROME1 0x0058 42 - #define QCOM_ICE_REG_STREAM2_ERROR_SYNDROME2 0x005C 43 - #define QCOM_ICE_REG_STREAM1_BIST_ERROR_VEC 0x0060 44 - #define QCOM_ICE_REG_STREAM2_BIST_ERROR_VEC 0x0064 45 - #define QCOM_ICE_REG_STREAM1_BIST_FINISH_VEC 0x0068 46 - #define QCOM_ICE_REG_STREAM2_BIST_FINISH_VEC 0x006C 47 - #define QCOM_ICE_REG_BIST_STATUS 0x0070 48 - #define QCOM_ICE_REG_BYPASS_STATUS 0x0074 49 - #define QCOM_ICE_REG_ADVANCED_CONTROL 0x1000 50 - #define QCOM_ICE_REG_ENDIAN_SWAP 0x1004 51 - #define QCOM_ICE_REG_TEST_BUS_CONTROL 0x1010 52 - #define QCOM_ICE_REG_TEST_BUS_REG 0x1014 53 - 54 - /* BIST ("built-in 
self-test"?) status flags */ 55 - #define QCOM_ICE_BIST_STATUS_MASK 0xF0000000 56 - 57 - #define QCOM_ICE_FUSE_SETTING_MASK 0x1 58 - #define QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK 0x2 59 - #define QCOM_ICE_FORCE_HW_KEY1_SETTING_MASK 0x4 60 - 61 - #define qcom_ice_writel(host, val, reg) \ 62 - writel((val), (host)->ice_mmio + (reg)) 63 - #define qcom_ice_readl(host, reg) \ 64 - readl((host)->ice_mmio + (reg)) 65 - 66 - static bool qcom_ice_supported(struct ufs_qcom_host *host) 67 - { 68 - struct device *dev = host->hba->dev; 69 - u32 regval = qcom_ice_readl(host, QCOM_ICE_REG_VERSION); 70 - int major = regval >> 24; 71 - int minor = (regval >> 16) & 0xFF; 72 - int step = regval & 0xFFFF; 73 - 74 - /* For now this driver only supports ICE version 3. */ 75 - if (major != 3) { 76 - dev_warn(dev, "Unsupported ICE version: v%d.%d.%d\n", 77 - major, minor, step); 78 - return false; 79 - } 80 - 81 - dev_info(dev, "Found QC Inline Crypto Engine (ICE) v%d.%d.%d\n", 82 - major, minor, step); 83 - 84 - /* If fuses are blown, ICE might not work in the standard way. 
*/ 85 - regval = qcom_ice_readl(host, QCOM_ICE_REG_FUSE_SETTING); 86 - if (regval & (QCOM_ICE_FUSE_SETTING_MASK | 87 - QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK | 88 - QCOM_ICE_FORCE_HW_KEY1_SETTING_MASK)) { 89 - dev_warn(dev, "Fuses are blown; ICE is unusable!\n"); 90 - return false; 91 - } 92 - return true; 93 - } 94 - 95 - int ufs_qcom_ice_init(struct ufs_qcom_host *host) 96 - { 97 - struct ufs_hba *hba = host->hba; 98 - struct device *dev = hba->dev; 99 - struct platform_device *pdev = to_platform_device(dev); 100 - struct resource *res; 101 - int err; 102 - 103 - if (!(ufshcd_readl(hba, REG_CONTROLLER_CAPABILITIES) & 104 - MASK_CRYPTO_SUPPORT)) 105 - return 0; 106 - 107 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ice"); 108 - if (!res) { 109 - dev_warn(dev, "ICE registers not found\n"); 110 - goto disable; 111 - } 112 - 113 - if (!qcom_scm_ice_available()) { 114 - dev_warn(dev, "ICE SCM interface not found\n"); 115 - goto disable; 116 - } 117 - 118 - host->ice_mmio = devm_ioremap_resource(dev, res); 119 - if (IS_ERR(host->ice_mmio)) { 120 - err = PTR_ERR(host->ice_mmio); 121 - return err; 122 - } 123 - 124 - if (!qcom_ice_supported(host)) 125 - goto disable; 126 - 127 - return 0; 128 - 129 - disable: 130 - dev_warn(dev, "Disabling inline encryption support\n"); 131 - hba->caps &= ~UFSHCD_CAP_CRYPTO; 132 - return 0; 133 - } 134 - 135 - static void qcom_ice_low_power_mode_enable(struct ufs_qcom_host *host) 136 - { 137 - u32 regval; 138 - 139 - regval = qcom_ice_readl(host, QCOM_ICE_REG_ADVANCED_CONTROL); 140 - /* 141 - * Enable low power mode sequence 142 - * [0]-0, [1]-0, [2]-0, [3]-E, [4]-0, [5]-0, [6]-0, [7]-0 143 - */ 144 - regval |= 0x7000; 145 - qcom_ice_writel(host, regval, QCOM_ICE_REG_ADVANCED_CONTROL); 146 - } 147 - 148 - static void qcom_ice_optimization_enable(struct ufs_qcom_host *host) 149 - { 150 - u32 regval; 151 - 152 - /* ICE Optimizations Enable Sequence */ 153 - regval = qcom_ice_readl(host, QCOM_ICE_REG_ADVANCED_CONTROL); 154 - 
regval |= 0xD807100; 155 - /* ICE HPG requires delay before writing */ 156 - udelay(5); 157 - qcom_ice_writel(host, regval, QCOM_ICE_REG_ADVANCED_CONTROL); 158 - udelay(5); 159 - } 160 - 161 - int ufs_qcom_ice_enable(struct ufs_qcom_host *host) 162 - { 163 - if (!(host->hba->caps & UFSHCD_CAP_CRYPTO)) 164 - return 0; 165 - qcom_ice_low_power_mode_enable(host); 166 - qcom_ice_optimization_enable(host); 167 - return ufs_qcom_ice_resume(host); 168 - } 169 - 170 - /* Poll until all BIST bits are reset */ 171 - static int qcom_ice_wait_bist_status(struct ufs_qcom_host *host) 172 - { 173 - int count; 174 - u32 reg; 175 - 176 - for (count = 0; count < 100; count++) { 177 - reg = qcom_ice_readl(host, QCOM_ICE_REG_BIST_STATUS); 178 - if (!(reg & QCOM_ICE_BIST_STATUS_MASK)) 179 - break; 180 - udelay(50); 181 - } 182 - if (reg) 183 - return -ETIMEDOUT; 184 - return 0; 185 - } 186 - 187 - int ufs_qcom_ice_resume(struct ufs_qcom_host *host) 188 - { 189 - int err; 190 - 191 - if (!(host->hba->caps & UFSHCD_CAP_CRYPTO)) 192 - return 0; 193 - 194 - err = qcom_ice_wait_bist_status(host); 195 - if (err) { 196 - dev_err(host->hba->dev, "BIST status error (%d)\n", err); 197 - return err; 198 - } 199 - return 0; 200 - } 201 - 202 - /* 203 - * Program a key into a QC ICE keyslot, or evict a keyslot. QC ICE requires 204 - * vendor-specific SCM calls for this; it doesn't support the standard way. 205 - */ 206 - int ufs_qcom_ice_program_key(struct ufs_hba *hba, 207 - const union ufs_crypto_cfg_entry *cfg, int slot) 208 - { 209 - union ufs_crypto_cap_entry cap; 210 - union { 211 - u8 bytes[AES_256_XTS_KEY_SIZE]; 212 - u32 words[AES_256_XTS_KEY_SIZE / sizeof(u32)]; 213 - } key; 214 - int i; 215 - int err; 216 - 217 - if (!(cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE)) 218 - return qcom_scm_ice_invalidate_key(slot); 219 - 220 - /* Only AES-256-XTS has been tested so far. 
*/ 221 - cap = hba->crypto_cap_array[cfg->crypto_cap_idx]; 222 - if (cap.algorithm_id != UFS_CRYPTO_ALG_AES_XTS || 223 - cap.key_size != UFS_CRYPTO_KEY_SIZE_256) { 224 - dev_err_ratelimited(hba->dev, 225 - "Unhandled crypto capability; algorithm_id=%d, key_size=%d\n", 226 - cap.algorithm_id, cap.key_size); 227 - return -EINVAL; 228 - } 229 - 230 - memcpy(key.bytes, cfg->crypto_key, AES_256_XTS_KEY_SIZE); 231 - 232 - /* 233 - * The SCM call byte-swaps the 32-bit words of the key. So we have to 234 - * do the same, in order for the final key be correct. 235 - */ 236 - for (i = 0; i < ARRAY_SIZE(key.words); i++) 237 - __cpu_to_be32s(&key.words[i]); 238 - 239 - err = qcom_scm_ice_set_key(slot, key.bytes, AES_256_XTS_KEY_SIZE, 240 - QCOM_SCM_ICE_CIPHER_AES_256_XTS, 241 - cfg->data_unit_size); 242 - memzero_explicit(&key, sizeof(key)); 243 - return err; 244 - }
+98 -4
drivers/ufs/host/ufs-qcom.c
··· 15 15 #include <linux/reset-controller.h> 16 16 #include <linux/devfreq.h> 17 17 18 + #include <soc/qcom/ice.h> 19 + 18 20 #include <ufs/ufshcd.h> 19 21 #include "ufshcd-pltfrm.h" 20 22 #include <ufs/unipro.h> ··· 56 54 { 57 55 return container_of(rcd, struct ufs_qcom_host, rcdev); 58 56 } 57 + 58 + #ifdef CONFIG_SCSI_UFS_CRYPTO 59 + 60 + static inline void ufs_qcom_ice_enable(struct ufs_qcom_host *host) 61 + { 62 + if (host->hba->caps & UFSHCD_CAP_CRYPTO) 63 + qcom_ice_enable(host->ice); 64 + } 65 + 66 + static int ufs_qcom_ice_init(struct ufs_qcom_host *host) 67 + { 68 + struct ufs_hba *hba = host->hba; 69 + struct device *dev = hba->dev; 70 + struct qcom_ice *ice; 71 + 72 + ice = of_qcom_ice_get(dev); 73 + if (ice == ERR_PTR(-EOPNOTSUPP)) { 74 + dev_warn(dev, "Disabling inline encryption support\n"); 75 + ice = NULL; 76 + } 77 + 78 + if (IS_ERR_OR_NULL(ice)) 79 + return PTR_ERR_OR_ZERO(ice); 80 + 81 + host->ice = ice; 82 + hba->caps |= UFSHCD_CAP_CRYPTO; 83 + 84 + return 0; 85 + } 86 + 87 + static inline int ufs_qcom_ice_resume(struct ufs_qcom_host *host) 88 + { 89 + if (host->hba->caps & UFSHCD_CAP_CRYPTO) 90 + return qcom_ice_resume(host->ice); 91 + 92 + return 0; 93 + } 94 + 95 + static inline int ufs_qcom_ice_suspend(struct ufs_qcom_host *host) 96 + { 97 + if (host->hba->caps & UFSHCD_CAP_CRYPTO) 98 + return qcom_ice_suspend(host->ice); 99 + 100 + return 0; 101 + } 102 + 103 + static int ufs_qcom_ice_program_key(struct ufs_hba *hba, 104 + const union ufs_crypto_cfg_entry *cfg, 105 + int slot) 106 + { 107 + struct ufs_qcom_host *host = ufshcd_get_variant(hba); 108 + union ufs_crypto_cap_entry cap; 109 + bool config_enable = 110 + cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE; 111 + 112 + /* Only AES-256-XTS has been tested so far. 
*/ 113 + cap = hba->crypto_cap_array[cfg->crypto_cap_idx]; 114 + if (cap.algorithm_id != UFS_CRYPTO_ALG_AES_XTS || 115 + cap.key_size != UFS_CRYPTO_KEY_SIZE_256) 116 + return -EINVAL; 117 + 118 + if (config_enable) 119 + return qcom_ice_program_key(host->ice, 120 + QCOM_ICE_CRYPTO_ALG_AES_XTS, 121 + QCOM_ICE_CRYPTO_KEY_SIZE_256, 122 + cfg->crypto_key, 123 + cfg->data_unit_size, slot); 124 + else 125 + return qcom_ice_evict_key(host->ice, slot); 126 + } 127 + 128 + #else 129 + 130 + #define ufs_qcom_ice_program_key NULL 131 + 132 + static inline void ufs_qcom_ice_enable(struct ufs_qcom_host *host) 133 + { 134 + } 135 + 136 + static int ufs_qcom_ice_init(struct ufs_qcom_host *host) 137 + { 138 + return 0; 139 + } 140 + 141 + static inline int ufs_qcom_ice_resume(struct ufs_qcom_host *host) 142 + { 143 + return 0; 144 + } 145 + 146 + static inline int ufs_qcom_ice_suspend(struct ufs_qcom_host *host) 147 + { 148 + return 0; 149 + } 150 + #endif 59 151 60 152 static int ufs_qcom_host_clk_get(struct device *dev, 61 153 const char *name, struct clk **clk_out, bool optional) ··· 703 607 ufs_qcom_disable_lane_clks(host); 704 608 } 705 609 706 - return 0; 610 + return ufs_qcom_ice_suspend(host); 707 611 } 708 612 709 613 static int ufs_qcom_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) ··· 949 853 hba->caps |= UFSHCD_CAP_CLK_SCALING | UFSHCD_CAP_WB_WITH_CLK_SCALING; 950 854 hba->caps |= UFSHCD_CAP_AUTO_BKOPS_SUSPEND; 951 855 hba->caps |= UFSHCD_CAP_WB_EN; 952 - hba->caps |= UFSHCD_CAP_CRYPTO; 953 856 hba->caps |= UFSHCD_CAP_AGGR_POWER_COLLAPSE; 954 857 hba->caps |= UFSHCD_CAP_RPM_AUTOSUSPEND; 955 858 ··· 1651 1556 struct ufs_hw_queue *hwq = &hba->uhq[id]; 1652 1557 1653 1558 ufshcd_mcq_write_cqis(hba, 0x1, id); 1654 - ufshcd_mcq_poll_cqe_nolock(hba, hwq); 1559 + ufshcd_mcq_poll_cqe_lock(hba, hwq); 1655 1560 1656 1561 return IRQ_HANDLED; 1657 1562 } ··· 1818 1723 static struct platform_driver ufs_qcom_pltform = { 1819 1724 .probe = ufs_qcom_probe, 1820 1725 .remove = 
ufs_qcom_remove, 1821 - .shutdown = ufshcd_pltfrm_shutdown, 1822 1726 .driver = { 1823 1727 .name = "ufshcd-qcom", 1824 1728 .pm = &ufs_qcom_pm_ops,
+5 -27
drivers/ufs/host/ufs-qcom.h
··· 7 7 8 8 #include <linux/reset-controller.h> 9 9 #include <linux/reset.h> 10 + #include <soc/qcom/ice.h> 10 11 #include <ufs/ufshcd.h> 11 12 12 13 #define MAX_UFS_QCOM_HOSTS 1 ··· 206 205 struct clk *tx_l1_sync_clk; 207 206 bool is_lane_clks_enabled; 208 207 208 + #ifdef CONFIG_SCSI_UFS_CRYPTO 209 + struct qcom_ice *ice; 210 + #endif 211 + 209 212 void __iomem *dev_ref_clk_ctrl_mmio; 210 213 bool is_dev_ref_clk_enabled; 211 214 struct ufs_hw_version hw_ver; 212 - #ifdef CONFIG_SCSI_UFS_CRYPTO 213 - void __iomem *ice_mmio; 214 - #endif 215 215 216 216 u32 dev_ref_clk_en_mask; 217 217 ··· 249 247 { 250 248 return host->caps & UFS_QCOM_CAP_QUNIPRO; 251 249 } 252 - 253 - /* ufs-qcom-ice.c */ 254 - 255 - #ifdef CONFIG_SCSI_UFS_CRYPTO 256 - int ufs_qcom_ice_init(struct ufs_qcom_host *host); 257 - int ufs_qcom_ice_enable(struct ufs_qcom_host *host); 258 - int ufs_qcom_ice_resume(struct ufs_qcom_host *host); 259 - int ufs_qcom_ice_program_key(struct ufs_hba *hba, 260 - const union ufs_crypto_cfg_entry *cfg, int slot); 261 - #else 262 - static inline int ufs_qcom_ice_init(struct ufs_qcom_host *host) 263 - { 264 - return 0; 265 - } 266 - static inline int ufs_qcom_ice_enable(struct ufs_qcom_host *host) 267 - { 268 - return 0; 269 - } 270 - static inline int ufs_qcom_ice_resume(struct ufs_qcom_host *host) 271 - { 272 - return 0; 273 - } 274 - #define ufs_qcom_ice_program_key NULL 275 - #endif /* !CONFIG_SCSI_UFS_CRYPTO */ 276 250 277 251 #endif /* UFS_QCOM_H_ */
-1
drivers/ufs/host/ufs-sprd.c
··· 444 444 static struct platform_driver ufs_sprd_pltform = { 445 445 .probe = ufs_sprd_probe, 446 446 .remove = ufs_sprd_remove, 447 - .shutdown = ufshcd_pltfrm_shutdown, 448 447 .driver = { 449 448 .name = "ufshcd-sprd", 450 449 .pm = &ufs_sprd_pm_ops,
+1 -10
drivers/ufs/host/ufshcd-pci.c
··· 505 505 #endif 506 506 507 507 /** 508 - * ufshcd_pci_shutdown - main function to put the controller in reset state 509 - * @pdev: pointer to PCI device handle 510 - */ 511 - static void ufshcd_pci_shutdown(struct pci_dev *pdev) 512 - { 513 - ufshcd_shutdown((struct ufs_hba *)pci_get_drvdata(pdev)); 514 - } 515 - 516 - /** 517 508 * ufshcd_pci_remove - de-allocate PCI/SCSI host and host memory space 518 509 * data structure memory 519 510 * @pdev: pointer to PCI handle ··· 599 608 { PCI_VDEVICE(INTEL, 0x54FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops }, 600 609 { PCI_VDEVICE(INTEL, 0x7E47), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 601 610 { PCI_VDEVICE(INTEL, 0xA847), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 611 + { PCI_VDEVICE(INTEL, 0x7747), (kernel_ulong_t)&ufs_intel_mtl_hba_vops }, 602 612 { } /* terminate list */ 603 613 }; 604 614 ··· 610 618 .id_table = ufshcd_pci_tbl, 611 619 .probe = ufshcd_pci_probe, 612 620 .remove = ufshcd_pci_remove, 613 - .shutdown = ufshcd_pci_shutdown, 614 621 .driver = { 615 622 .pm = &ufshcd_pci_pm_ops 616 623 },
-6
drivers/ufs/host/ufshcd-pltfrm.c
··· 190 190 return err; 191 191 } 192 192 193 - void ufshcd_pltfrm_shutdown(struct platform_device *pdev) 194 - { 195 - ufshcd_shutdown((struct ufs_hba *)platform_get_drvdata(pdev)); 196 - } 197 - EXPORT_SYMBOL_GPL(ufshcd_pltfrm_shutdown); 198 - 199 193 static void ufshcd_init_lanes_per_dir(struct ufs_hba *hba) 200 194 { 201 195 struct device *dev = hba->dev;
-1
drivers/ufs/host/ufshcd-pltfrm.h
··· 31 31 void ufshcd_init_pwr_dev_param(struct ufs_dev_params *dev_param); 32 32 int ufshcd_pltfrm_init(struct platform_device *pdev, 33 33 const struct ufs_hba_variant_ops *vops); 34 - void ufshcd_pltfrm_shutdown(struct platform_device *pdev); 35 34 int ufshcd_populate_vreg(struct device *dev, const char *name, 36 35 struct ufs_vreg **out_vreg); 37 36
+10 -1
include/linux/ata.h
··· 322 322 ATA_LOG_SATA_NCQ = 0x10, 323 323 ATA_LOG_NCQ_NON_DATA = 0x12, 324 324 ATA_LOG_NCQ_SEND_RECV = 0x13, 325 + ATA_LOG_CDL = 0x18, 326 + ATA_LOG_CDL_SIZE = ATA_SECT_SIZE, 325 327 ATA_LOG_IDENTIFY_DEVICE = 0x30, 328 + ATA_LOG_SENSE_NCQ = 0x0F, 329 + ATA_LOG_SENSE_NCQ_SIZE = ATA_SECT_SIZE * 2, 326 330 ATA_LOG_CONCURRENT_POSITIONING_RANGES = 0x47, 327 331 328 332 /* Identify device log pages: */ 333 + ATA_LOG_SUPPORTED_CAPABILITIES = 0x03, 334 + ATA_LOG_CURRENT_SETTINGS = 0x04, 329 335 ATA_LOG_SECURITY = 0x06, 330 336 ATA_LOG_SATA_SETTINGS = 0x08, 331 337 ATA_LOG_ZONED_INFORMATION = 0x09, 332 338 333 - /* Identify device SATA settings log:*/ 339 + /* Identify device SATA settings log: */ 334 340 ATA_LOG_DEVSLP_OFFSET = 0x30, 335 341 ATA_LOG_DEVSLP_SIZE = 0x08, 336 342 ATA_LOG_DEVSLP_MDAT = 0x00, ··· 421 415 SETFEATURES_SATA_ENABLE = 0x10, /* Enable use of SATA feature */ 422 416 SETFEATURES_SATA_DISABLE = 0x90, /* Disable use of SATA feature */ 423 417 418 + SETFEATURES_CDL = 0x0d, /* Enable/disable cmd duration limits */ 419 + 424 420 /* SETFEATURE Sector counts for SATA features */ 425 421 SATA_FPDMA_OFFSET = 0x01, /* FPDMA non-zero buffer offsets */ 426 422 SATA_FPDMA_AA = 0x02, /* FPDMA Setup FIS Auto-Activate */ ··· 433 425 SATA_DEVSLP = 0x09, /* Device Sleep */ 434 426 435 427 SETFEATURE_SENSE_DATA = 0xC3, /* Sense Data Reporting feature */ 428 + SETFEATURE_SENSE_DATA_SUCC_NCQ = 0xC4, /* Sense Data for successful NCQ commands */ 436 429 437 430 /* feature values for SET_MAX */ 438 431 ATA_SET_MAX_ADDR = 0x00,
+8 -2
include/linux/blk_types.h
··· 103 103 #define BLK_STS_NOSPC ((__force blk_status_t)3) 104 104 #define BLK_STS_TRANSPORT ((__force blk_status_t)4) 105 105 #define BLK_STS_TARGET ((__force blk_status_t)5) 106 - #define BLK_STS_NEXUS ((__force blk_status_t)6) 106 + #define BLK_STS_RESV_CONFLICT ((__force blk_status_t)6) 107 107 #define BLK_STS_MEDIUM ((__force blk_status_t)7) 108 108 #define BLK_STS_PROTECTION ((__force blk_status_t)8) 109 109 #define BLK_STS_RESOURCE ((__force blk_status_t)9) ··· 173 173 */ 174 174 #define BLK_STS_OFFLINE ((__force blk_status_t)17) 175 175 176 + /* 177 + * BLK_STS_DURATION_LIMIT is returned from the driver when the target device 178 + * aborted the command because it exceeded one of its Command Duration Limits. 179 + */ 180 + #define BLK_STS_DURATION_LIMIT ((__force blk_status_t)18) 181 + 176 182 /** 177 183 * blk_path_error - returns true if error may be path related 178 184 * @error: status the request was completed with ··· 197 191 case BLK_STS_NOTSUPP: 198 192 case BLK_STS_NOSPC: 199 193 case BLK_STS_TARGET: 200 - case BLK_STS_NEXUS: 194 + case BLK_STS_RESV_CONFLICT: 201 195 case BLK_STS_MEDIUM: 202 196 case BLK_STS_PROTECTION: 203 197 return false;
+27 -11
include/linux/libata.h
··· 94 94 ATA_DFLAG_DMADIR = (1 << 10), /* device requires DMADIR */ 95 95 ATA_DFLAG_NCQ_SEND_RECV = (1 << 11), /* device supports NCQ SEND and RECV */ 96 96 ATA_DFLAG_NCQ_PRIO = (1 << 12), /* device supports NCQ priority */ 97 - ATA_DFLAG_CFG_MASK = (1 << 13) - 1, 97 + ATA_DFLAG_CDL = (1 << 13), /* supports cmd duration limits */ 98 + ATA_DFLAG_CFG_MASK = (1 << 14) - 1, 98 99 99 - ATA_DFLAG_PIO = (1 << 13), /* device limited to PIO mode */ 100 - ATA_DFLAG_NCQ_OFF = (1 << 14), /* device limited to non-NCQ mode */ 101 - ATA_DFLAG_SLEEPING = (1 << 15), /* device is sleeping */ 102 - ATA_DFLAG_DUBIOUS_XFER = (1 << 16), /* data transfer not verified */ 103 - ATA_DFLAG_NO_UNLOAD = (1 << 17), /* device doesn't support unload */ 104 - ATA_DFLAG_UNLOCK_HPA = (1 << 18), /* unlock HPA */ 105 - ATA_DFLAG_INIT_MASK = (1 << 19) - 1, 100 + ATA_DFLAG_PIO = (1 << 14), /* device limited to PIO mode */ 101 + ATA_DFLAG_NCQ_OFF = (1 << 15), /* device limited to non-NCQ mode */ 102 + ATA_DFLAG_SLEEPING = (1 << 16), /* device is sleeping */ 103 + ATA_DFLAG_DUBIOUS_XFER = (1 << 17), /* data transfer not verified */ 104 + ATA_DFLAG_NO_UNLOAD = (1 << 18), /* device doesn't support unload */ 105 + ATA_DFLAG_UNLOCK_HPA = (1 << 19), /* unlock HPA */ 106 + ATA_DFLAG_INIT_MASK = (1 << 20) - 1, 106 107 107 - ATA_DFLAG_NCQ_PRIO_ENABLED = (1 << 19), /* Priority cmds sent to dev */ 108 + ATA_DFLAG_NCQ_PRIO_ENABLED = (1 << 20), /* Priority cmds sent to dev */ 109 + ATA_DFLAG_CDL_ENABLED = (1 << 21), /* cmd duration limits is enabled */ 108 110 ATA_DFLAG_DETACH = (1 << 24), 109 111 ATA_DFLAG_DETACHED = (1 << 25), 110 112 ATA_DFLAG_DA = (1 << 26), /* device supports Device Attention */ ··· 117 115 118 116 ATA_DFLAG_FEATURES_MASK = (ATA_DFLAG_TRUSTED | ATA_DFLAG_DA | \ 119 117 ATA_DFLAG_DEVSLP | ATA_DFLAG_NCQ_SEND_RECV | \ 120 - ATA_DFLAG_NCQ_PRIO | ATA_DFLAG_FUA), 118 + ATA_DFLAG_NCQ_PRIO | ATA_DFLAG_FUA | \ 119 + ATA_DFLAG_CDL), 121 120 122 121 ATA_DEV_UNKNOWN = 0, /* unknown device */ 123 122 
ATA_DEV_ATA = 1, /* ATA device */ ··· 209 206 ATA_QCFLAG_CLEAR_EXCL = (1 << 5), /* clear excl_link on completion */ 210 207 ATA_QCFLAG_QUIET = (1 << 6), /* don't report device error */ 211 208 ATA_QCFLAG_RETRY = (1 << 7), /* retry after failure */ 209 + ATA_QCFLAG_HAS_CDL = (1 << 8), /* qc has CDL a descriptor set */ 212 210 213 211 ATA_QCFLAG_EH = (1 << 16), /* cmd aborted and owned by EH */ 214 212 ATA_QCFLAG_SENSE_VALID = (1 << 17), /* sense data valid */ 215 213 ATA_QCFLAG_EH_SCHEDULED = (1 << 18), /* EH scheduled (obsolete) */ 214 + ATA_QCFLAG_EH_SUCCESS_CMD = (1 << 19), /* EH should fetch sense for this successful cmd */ 216 215 217 216 /* host set flags */ 218 217 ATA_HOST_SIMPLEX = (1 << 0), /* Host is simplex, one DMA channel per host only */ ··· 313 308 ATA_EH_RESET = ATA_EH_SOFTRESET | ATA_EH_HARDRESET, 314 309 ATA_EH_ENABLE_LINK = (1 << 3), 315 310 ATA_EH_PARK = (1 << 5), /* unload heads and stop I/O */ 311 + ATA_EH_GET_SUCCESS_SENSE = (1 << 6), /* Get sense data for successful cmd */ 316 312 317 - ATA_EH_PERDEV_MASK = ATA_EH_REVALIDATE | ATA_EH_PARK, 313 + ATA_EH_PERDEV_MASK = ATA_EH_REVALIDATE | ATA_EH_PARK | 314 + ATA_EH_GET_SUCCESS_SENSE, 318 315 ATA_EH_ALL_ACTIONS = ATA_EH_REVALIDATE | ATA_EH_RESET | 319 316 ATA_EH_ENABLE_LINK, 320 317 ··· 716 709 /* Concurrent positioning ranges */ 717 710 struct ata_cpr_log *cpr_log; 718 711 712 + /* Command Duration Limits log support */ 713 + u8 cdl[ATA_LOG_CDL_SIZE]; 714 + 719 715 /* error history */ 720 716 int spdn_cnt; 721 717 /* ering is CLEAR_END, read comment above CLEAR_END */ ··· 870 860 struct ata_acpi_gtm __acpi_init_gtm; /* use ata_acpi_init_gtm() */ 871 861 #endif 872 862 /* owned by EH */ 863 + u8 *ncq_sense_buf; 873 864 u8 sector_buf[ATA_SECT_SIZE] ____cacheline_aligned; 874 865 }; 875 866 ··· 1189 1178 bool *online, int (*check_ready)(struct ata_link *)); 1190 1179 extern int sata_link_resume(struct ata_link *link, const unsigned long *params, 1191 1180 unsigned long deadline); 1181 + extern int 
ata_eh_read_sense_success_ncq_log(struct ata_link *link); 1192 1182 extern void ata_eh_analyze_ncq_error(struct ata_link *link); 1193 1183 #else 1194 1184 static inline const unsigned long * ··· 1224 1212 static inline int sata_link_resume(struct ata_link *link, 1225 1213 const unsigned long *params, 1226 1214 unsigned long deadline) 1215 + { 1216 + return -EOPNOTSUPP; 1217 + } 1218 + static inline int ata_eh_read_sense_success_ncq_log(struct ata_link *link) 1227 1219 { 1228 1220 return -EOPNOTSUPP; 1229 1221 }
+43 -8
include/linux/nvme.h
··· 759 759 NVME_LBART_ATTRIB_HIDE = 1 << 1, 760 760 }; 761 761 762 + enum nvme_pr_type { 763 + NVME_PR_WRITE_EXCLUSIVE = 1, 764 + NVME_PR_EXCLUSIVE_ACCESS = 2, 765 + NVME_PR_WRITE_EXCLUSIVE_REG_ONLY = 3, 766 + NVME_PR_EXCLUSIVE_ACCESS_REG_ONLY = 4, 767 + NVME_PR_WRITE_EXCLUSIVE_ALL_REGS = 5, 768 + NVME_PR_EXCLUSIVE_ACCESS_ALL_REGS = 6, 769 + }; 770 + 771 + enum nvme_eds { 772 + NVME_EXTENDED_DATA_STRUCT = 0x1, 773 + }; 774 + 775 + struct nvme_registered_ctrl { 776 + __le16 cntlid; 777 + __u8 rcsts; 778 + __u8 rsvd3[5]; 779 + __le64 hostid; 780 + __le64 rkey; 781 + }; 782 + 762 783 struct nvme_reservation_status { 763 784 __le32 gen; 764 785 __u8 rtype; 765 786 __u8 regctl[2]; 766 787 __u8 resv5[2]; 767 788 __u8 ptpls; 768 - __u8 resv10[13]; 769 - struct { 770 - __le16 cntlid; 771 - __u8 rcsts; 772 - __u8 resv3[5]; 773 - __le64 hostid; 774 - __le64 rkey; 775 - } regctl_ds[]; 789 + __u8 resv10[14]; 790 + struct nvme_registered_ctrl regctl_ds[]; 791 + }; 792 + 793 + struct nvme_registered_ctrl_ext { 794 + __le16 cntlid; 795 + __u8 rcsts; 796 + __u8 rsvd3[5]; 797 + __le64 rkey; 798 + __u8 hostid[16]; 799 + __u8 rsvd32[32]; 800 + }; 801 + 802 + struct nvme_reservation_status_ext { 803 + __le32 gen; 804 + __u8 rtype; 805 + __u8 regctl[2]; 806 + __u8 resv5[2]; 807 + __u8 ptpls; 808 + __u8 resv10[14]; 809 + __u8 rsvd24[40]; 810 + struct nvme_registered_ctrl_ext regctl_eds[]; 776 811 }; 777 812 778 813 enum nvme_async_event_type {
+25
include/linux/pr.h
··· 4 4 5 5 #include <uapi/linux/pr.h> 6 6 7 + struct pr_keys { 8 + u32 generation; 9 + u32 num_keys; 10 + u64 keys[]; 11 + }; 12 + 13 + struct pr_held_reservation { 14 + u64 key; 15 + u32 generation; 16 + enum pr_type type; 17 + }; 18 + 7 19 struct pr_ops { 8 20 int (*pr_register)(struct block_device *bdev, u64 old_key, u64 new_key, 9 21 u32 flags); ··· 26 14 int (*pr_preempt)(struct block_device *bdev, u64 old_key, u64 new_key, 27 15 enum pr_type type, bool abort); 28 16 int (*pr_clear)(struct block_device *bdev, u64 key); 17 + /* 18 + * pr_read_keys - Read the registered keys and return them in the 19 + * pr_keys->keys array. The keys array will have been allocated at the 20 + * end of the pr_keys struct, and pr_keys->num_keys must be set to the 21 + * number of keys the array can hold. If there are more than can fit 22 + * in the array, success will still be returned and pr_keys->num_keys 23 + * will reflect the total number of keys the device contains, so the 24 + * caller can retry with a larger array. 25 + */ 26 + int (*pr_read_keys)(struct block_device *bdev, 27 + struct pr_keys *keys_info); 28 + int (*pr_read_reservation)(struct block_device *bdev, 29 + struct pr_held_reservation *rsv); 29 30 }; 30 31 31 32 #endif /* LINUX_PR_H */
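The comment block added above pins down the ->pr_read_keys contract: a too-small keys array is still a success, and pr_keys->num_keys comes back holding the device's true key count so the caller can reallocate and retry. A userspace sketch of that caller-side loop against a mock driver callback (mock_read_keys and its key table are invented for illustration; they are not kernel code):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct pr_keys {
	uint32_t generation;
	uint32_t num_keys;
	uint64_t keys[];
};

/* Mock of a driver's ->pr_read_keys: the "device" holds 5 registered keys. */
static const uint64_t device_keys[] = { 0xa1, 0xb2, 0xc3, 0xd4, 0xe5 };

static int mock_read_keys(struct pr_keys *ki)
{
	uint32_t total = sizeof(device_keys) / sizeof(device_keys[0]);
	uint32_t fit = ki->num_keys < total ? ki->num_keys : total;

	memcpy(ki->keys, device_keys, fit * sizeof(uint64_t));
	ki->num_keys = total;	/* always report the true count */
	return 0;		/* a short buffer is still a success */
}

/* Caller side: start small, grow until every registered key fits. */
static struct pr_keys *read_all_keys(void)
{
	uint32_t cap = 1;

	for (;;) {
		struct pr_keys *ki =
			malloc(sizeof(*ki) + cap * sizeof(uint64_t));

		if (!ki)
			return NULL;
		ki->num_keys = cap;
		if (mock_read_keys(ki)) {
			free(ki);
			return NULL;
		}
		if (ki->num_keys <= cap)
			return ki;	/* everything fit */
		cap = ki->num_keys;	/* too small: retry with true count */
		free(ki);
	}
}
```

The same two-pass pattern (probe for the count, then fetch with a right-sized buffer) is how SCSI READ KEYS itself behaves, which is presumably why the in-kernel interface adopted it.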
+5
include/scsi/scsi_cmnd.h
··· 52 52 #define SCMD_TAGGED (1 << 0) 53 53 #define SCMD_INITIALIZED (1 << 1) 54 54 #define SCMD_LAST (1 << 2) 55 + /* 56 + * libata uses SCSI EH to fetch sense data for successful commands. 57 + * SCSI EH should not overwrite scmd->result when SCMD_FORCE_EH_SUCCESS is set. 58 + */ 59 + #define SCMD_FORCE_EH_SUCCESS (1 << 3) 55 60 #define SCMD_FAIL_IF_RECOVERING (1 << 4) 56 61 /* flags preserved across unprep / reprep */ 57 62 #define SCMD_PRESERVED_FLAGS (SCMD_INITIALIZED | SCMD_FAIL_IF_RECOVERING)
+13
include/scsi/scsi_common.h
··· 7 7 #define _SCSI_COMMON_H_ 8 8 9 9 #include <linux/types.h> 10 + #include <uapi/linux/pr.h> 10 11 #include <scsi/scsi_proto.h> 12 + 13 + enum scsi_pr_type { 14 + SCSI_PR_WRITE_EXCLUSIVE = 0x01, 15 + SCSI_PR_EXCLUSIVE_ACCESS = 0x03, 16 + SCSI_PR_WRITE_EXCLUSIVE_REG_ONLY = 0x05, 17 + SCSI_PR_EXCLUSIVE_ACCESS_REG_ONLY = 0x06, 18 + SCSI_PR_WRITE_EXCLUSIVE_ALL_REGS = 0x07, 19 + SCSI_PR_EXCLUSIVE_ACCESS_ALL_REGS = 0x08, 20 + }; 21 + 22 + enum scsi_pr_type block_pr_type_to_scsi(enum pr_type type); 23 + enum pr_type scsi_pr_type_to_block(enum scsi_pr_type type); 11 24 12 25 static inline unsigned 13 26 scsi_varlen_cdb_length(const void *hdr)
+13 -7
include/scsi/scsi_device.h
··· 218 218 unsigned silence_suspend:1; /* Do not print runtime PM related messages */
219 219 unsigned no_vpd_size:1; /* No VPD size reported in header */
220 220
221 + unsigned cdl_supported:1; /* Command duration limits supported */
222 + unsigned cdl_enable:1; /* Enable/disable Command duration limits */
223 +
221 224 unsigned int queue_stopped; /* request queue is quiesced */
222 225 bool offline_already; /* Device offline message logged */
223 226
··· 367 364 extern void scsi_remove_device(struct scsi_device *);
368 365 extern int scsi_unregister_device_handler(struct scsi_device_handler *scsi_dh);
369 366 void scsi_attach_vpd(struct scsi_device *sdev);
367 + void scsi_cdl_check(struct scsi_device *sdev);
368 + int scsi_cdl_enable(struct scsi_device *sdev, bool enable);
370 369
371 370 extern struct scsi_device *scsi_device_from_queue(struct request_queue *q);
372 371 extern int __must_check scsi_device_get(struct scsi_device *);
··· 426 421
427 422 extern int scsi_set_medium_removal(struct scsi_device *, char);
428 423
429 - extern int scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
430 - unsigned char *buffer, int len, int timeout,
431 - int retries, struct scsi_mode_data *data,
432 - struct scsi_sense_hdr *);
424 + int scsi_mode_sense(struct scsi_device *sdev, int dbd, int modepage,
425 + int subpage, unsigned char *buffer, int len, int timeout,
426 + int retries, struct scsi_mode_data *data,
427 + struct scsi_sense_hdr *);
433 428 extern int scsi_mode_select(struct scsi_device *sdev, int pf, int sp,
434 429 unsigned char *buffer, int len, int timeout,
435 430 int retries, struct scsi_mode_data *data,
··· 438 433 int retries, struct scsi_sense_hdr *sshdr);
439 434 extern int scsi_get_vpd_page(struct scsi_device *, u8 page, unsigned char *buf,
440 435 int buf_len);
441 - extern int scsi_report_opcode(struct scsi_device *sdev, unsigned char *buffer,
442 - unsigned int len, unsigned char opcode);
436 + int scsi_report_opcode(struct scsi_device *sdev, unsigned char *buffer,
437 + unsigned int len, unsigned char opcode,
438 + unsigned short sa);
443 439 extern int scsi_device_set_state(struct scsi_device *sdev,
444 440 enum scsi_device_state state);
445 441 extern struct scsi_event *sdev_evt_alloc(enum scsi_device_event evt_type,
··· 456 450 unsigned int id, u64 lun,
457 451 enum scsi_scan_mode rescan);
458 452 extern void scsi_target_reap(struct scsi_target *);
459 - extern void scsi_target_block(struct device *);
453 + void scsi_block_targets(struct Scsi_Host *shost, struct device *dev);
460 454 extern void scsi_target_unblock(struct device *, enum scsi_device_state);
461 455 extern void scsi_remove_target(struct device *);
462 456 extern const char *scsi_device_state_name(enum scsi_device_state);
+6
include/scsi/scsi_host.h
··· 458 458 /* True if the host uses host-wide tagspace */
459 459 unsigned host_tagset:1;
460 460
461 + /* The queuecommand callback may block. See also BLK_MQ_F_BLOCKING. */
462 + unsigned queuecommand_may_block:1;
463 +
461 464 /*
462 465 * Countdown for host blocking with no commands outstanding.
463 466 */
··· 655 652
656 653 /* True if the host uses host-wide tagspace */
657 654 unsigned host_tagset:1;
655 +
656 + /* The queuecommand callback may block. See also BLK_MQ_F_BLOCKING. */
657 + unsigned queuecommand_may_block:1;
658 658
659 659 /* Host responded with short (<36 bytes) INQUIRY result */
660 660 unsigned short_inquiry:1;
+5
include/scsi/scsi_proto.h
··· 151 151 #define ZO_FINISH_ZONE 0x02
152 152 #define ZO_OPEN_ZONE 0x03
153 153 #define ZO_RESET_WRITE_POINTER 0x04
154 + /* values for PR in service action */
155 + #define READ_KEYS 0x00
156 + #define READ_RESERVATION 0x01
157 + #define REPORT_CAPABILITES 0x02
158 + #define READ_FULL_STATUS 0x03
154 159 /* values for variable length command */
155 160 #define XDREAD_32 0x03
156 161 #define XDWRITE_32 0x04
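As an illustration of how the new PR IN service action values are consumed (not part of the diff), the sketch below builds a PERSISTENT RESERVE IN CDB in userspace. The CDB layout follows SPC: opcode in byte 0, service action in the low 5 bits of byte 1, big-endian allocation length in bytes 7-8; `build_pr_in_cdb` is a hypothetical helper, only the `#define` values mirror the header.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PERSISTENT_RESERVE_IN	0x5e	/* already defined in scsi_proto.h */

/* Mirrors of the new PR IN service action values */
#define READ_KEYS		0x00
#define READ_RESERVATION	0x01
#define REPORT_CAPABILITES	0x02	/* spelling as in the header */
#define READ_FULL_STATUS	0x03

/* Hypothetical helper: fill a 10-byte PERSISTENT RESERVE IN CDB. */
static void build_pr_in_cdb(uint8_t cdb[10], uint8_t sa, uint16_t alloc_len)
{
	for (size_t i = 0; i < 10; i++)
		cdb[i] = 0;
	cdb[0] = PERSISTENT_RESERVE_IN;
	cdb[1] = sa & 0x1f;		/* service action: low 5 bits of byte 1 */
	cdb[7] = alloc_len >> 8;	/* allocation length, big-endian */
	cdb[8] = alloc_len & 0xff;
}
```

In the kernel these service actions are dispatched through the new block-layer PR operations rather than built by hand.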
+6 -2
include/target/target_core_backend.h
··· 62 62 struct configfs_attribute **tb_dev_action_attrs;
63 63 };
64 64
65 - struct sbc_ops {
65 + struct exec_cmd_ops {
66 66 sense_reason_t (*execute_rw)(struct se_cmd *cmd, struct scatterlist *,
67 67 u32, enum dma_data_direction);
68 68 sense_reason_t (*execute_sync_cache)(struct se_cmd *cmd);
69 69 sense_reason_t (*execute_write_same)(struct se_cmd *cmd);
70 70 sense_reason_t (*execute_unmap)(struct se_cmd *cmd,
71 71 sector_t lba, sector_t nolb);
72 + sense_reason_t (*execute_pr_out)(struct se_cmd *cmd, u8 sa, u64 key,
73 + u64 sa_key, u8 type, bool aptpl);
74 + sense_reason_t (*execute_pr_in)(struct se_cmd *cmd, u8 sa,
75 + unsigned char *param_data);
72 76 };
73 77
74 78 int transport_backend_register(const struct target_backend_ops *);
··· 90 86 sense_reason_t spc_emulate_inquiry_std(struct se_cmd *, unsigned char *);
91 87 sense_reason_t spc_emulate_evpd_83(struct se_cmd *, unsigned char *);
92 88
93 - sense_reason_t sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops);
89 + sense_reason_t sbc_parse_cdb(struct se_cmd *cmd, struct exec_cmd_ops *ops);
94 90 u32 sbc_get_device_rev(struct se_device *dev);
95 91 u32 sbc_get_device_type(struct se_device *dev);
96 92 sector_t sbc_get_write_same_sectors(struct se_cmd *cmd);
+2 -1
include/target/target_core_base.h
··· 880 880 u8 specific_timeout;
881 881 u16 nominal_timeout;
882 882 u16 recommended_timeout;
883 - bool (*enabled)(struct se_cmd *cmd);
883 + bool (*enabled)(struct target_opcode_descriptor *descr,
884 + struct se_cmd *cmd);
884 885 void (*update_usage_bits)(u8 *usage_bits,
885 886 struct se_device *dev);
886 887 u8 usage_bits[];
+19 -2
include/trace/events/scsi.h
··· 269 269 __field( unsigned int, prot_sglen )
270 270 __field( unsigned char, prot_op )
271 271 __dynamic_array(unsigned char, cmnd, cmd->cmd_len)
272 + __field( u8, sense_key )
273 + __field( u8, asc )
274 + __field( u8, ascq )
272 275 ),
273 276
274 277 TP_fast_assign(
278 + struct scsi_sense_hdr sshdr;
279 +
275 280 __entry->host_no = cmd->device->host->host_no;
276 281 __entry->channel = cmd->device->channel;
277 282 __entry->id = cmd->device->id;
··· 290 285 __entry->prot_sglen = scsi_prot_sg_count(cmd);
291 286 __entry->prot_op = scsi_get_prot_op(cmd);
292 287 memcpy(__get_dynamic_array(cmnd), cmd->cmnd, cmd->cmd_len);
288 + if (cmd->sense_buffer && SCSI_SENSE_VALID(cmd) &&
289 + scsi_command_normalize_sense(cmd, &sshdr)) {
290 + __entry->sense_key = sshdr.sense_key;
291 + __entry->asc = sshdr.asc;
292 + __entry->ascq = sshdr.ascq;
293 + } else {
294 + __entry->sense_key = 0;
295 + __entry->asc = 0;
296 + __entry->ascq = 0;
297 + }
293 298 ),
294 299
295 300 TP_printk("host_no=%u channel=%u id=%u lun=%u data_sgl=%u prot_sgl=%u " \
296 301 "prot_op=%s driver_tag=%d scheduler_tag=%d cmnd=(%s %s raw=%s) " \
297 - "result=(driver=%s host=%s message=%s status=%s)",
302 + "result=(driver=%s host=%s message=%s status=%s) "
303 + "sense=(key=%#x asc=%#x ascq=%#x)",
298 304 __entry->host_no, __entry->channel, __entry->id,
299 305 __entry->lun, __entry->data_sglen, __entry->prot_sglen,
300 306 show_prot_op_name(__entry->prot_op), __entry->driver_tag,
··· 315 299 "DRIVER_OK",
316 300 show_hostbyte_name(((__entry->result) >> 16) & 0xff),
317 301 "COMMAND_COMPLETE",
318 - show_statusbyte_name(__entry->result & 0xff))
302 + show_statusbyte_name(__entry->result & 0xff),
303 + __entry->sense_key, __entry->asc, __entry->ascq)
319 304 );
320 305
321 306 DEFINE_EVENT(scsi_cmd_done_timeout_template, scsi_dispatch_cmd_done,
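For readers unfamiliar with the sense triple the trace event now records, here is an illustrative userspace sketch (not part of the diff) of the fixed-format (response code 0x70/0x71) subset of what the kernel's sense normalization extracts; `parse_fixed_sense` and `struct sense_triple` are hypothetical names, and the in-kernel helper also handles descriptor-format (0x72/0x73) sense data.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct sense_triple {
	uint8_t key, asc, ascq;
};

/* Pull sense key / ASC / ASCQ out of a fixed-format sense buffer. */
static bool parse_fixed_sense(const uint8_t *buf, size_t len,
			      struct sense_triple *out)
{
	uint8_t rc;

	if (len < 14)
		return false;
	rc = buf[0] & 0x7f;		/* response code, VALID bit stripped */
	if (rc != 0x70 && rc != 0x71)
		return false;		/* not fixed-format sense */
	out->key = buf[2] & 0x0f;	/* sense key: low nibble of byte 2 */
	out->asc = buf[12];		/* additional sense code */
	out->ascq = buf[13];		/* additional sense code qualifier */
	return true;
}
```

The triple printed by the new `sense=(key=... asc=... ascq=...)` trace field is exactly these three bytes, zeroed when no valid sense is available.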
+87 -13
include/uapi/linux/ioprio.h
··· 2 2 #ifndef _UAPI_LINUX_IOPRIO_H
3 3 #define _UAPI_LINUX_IOPRIO_H
4 4
5 + #include <linux/stddef.h>
6 + #include <linux/types.h>
7 +
5 8 /*
6 9 * Gives us 8 prio classes with 13-bits of data for each class
7 10 */
8 11 #define IOPRIO_CLASS_SHIFT 13
9 - #define IOPRIO_CLASS_MASK 0x07
12 + #define IOPRIO_NR_CLASSES 8
13 + #define IOPRIO_CLASS_MASK (IOPRIO_NR_CLASSES - 1)
10 14 #define IOPRIO_PRIO_MASK ((1UL << IOPRIO_CLASS_SHIFT) - 1)
11 15
12 16 #define IOPRIO_PRIO_CLASS(ioprio) \
13 17 (((ioprio) >> IOPRIO_CLASS_SHIFT) & IOPRIO_CLASS_MASK)
14 18 #define IOPRIO_PRIO_DATA(ioprio) ((ioprio) & IOPRIO_PRIO_MASK)
15 - #define IOPRIO_PRIO_VALUE(class, data) \
16 - ((((class) & IOPRIO_CLASS_MASK) << IOPRIO_CLASS_SHIFT) | \
17 - ((data) & IOPRIO_PRIO_MASK))
18 19
19 20 /*
20 - * These are the io priority groups as implemented by the BFQ and mq-deadline
21 + * These are the io priority classes as implemented by the BFQ and mq-deadline
21 22 * schedulers. RT is the realtime class, it always gets premium service. For
22 23 * ATA disks supporting NCQ IO priority, RT class IOs will be processed using
23 24 * high priority NCQ commands. BE is the best-effort scheduling class, the
··· 26 25 * served when no one else is using the disk.
27 26 */
28 27 enum {
29 - IOPRIO_CLASS_NONE,
30 - IOPRIO_CLASS_RT,
31 - IOPRIO_CLASS_BE,
32 - IOPRIO_CLASS_IDLE,
28 + IOPRIO_CLASS_NONE = 0,
29 + IOPRIO_CLASS_RT = 1,
30 + IOPRIO_CLASS_BE = 2,
31 + IOPRIO_CLASS_IDLE = 3,
32 +
33 + /* Special class to indicate an invalid ioprio value */
34 + IOPRIO_CLASS_INVALID = 7,
33 35 };
34 36
35 37 /*
36 - * The RT and BE priority classes both support up to 8 priority levels.
38 + * The RT and BE priority classes both support up to 8 priority levels that
39 + * can be specified using the lower 3-bits of the priority data.
37 40 */
38 - #define IOPRIO_NR_LEVELS 8
39 - #define IOPRIO_BE_NR IOPRIO_NR_LEVELS
41 + #define IOPRIO_LEVEL_NR_BITS 3
42 + #define IOPRIO_NR_LEVELS (1 << IOPRIO_LEVEL_NR_BITS)
43 + #define IOPRIO_LEVEL_MASK (IOPRIO_NR_LEVELS - 1)
44 + #define IOPRIO_PRIO_LEVEL(ioprio) ((ioprio) & IOPRIO_LEVEL_MASK)
40 45
46 + #define IOPRIO_BE_NR IOPRIO_NR_LEVELS
47 +
48 + /*
49 + * Possible values for the "which" argument of the ioprio_get() and
50 + * ioprio_set() system calls (see "man ioprio_set").
51 + */
41 52 enum {
42 53 IOPRIO_WHO_PROCESS = 1,
43 54 IOPRIO_WHO_PGRP,
··· 57 44 };
58 45
59 46 /*
60 - * Fallback BE priority level.
47 + * Fallback BE class priority level.
61 48 */
62 49 #define IOPRIO_NORM 4
63 50 #define IOPRIO_BE_NORM IOPRIO_NORM
51 +
52 + /*
53 + * The 10 bits between the priority class and the priority level are used to
54 + * optionally define I/O hints for any combination of I/O priority class and
55 + * level. Depending on the kernel configuration, I/O scheduler being used and
56 + * the target I/O device being used, hints can influence how I/Os are processed
57 + * without affecting the I/O scheduling ordering defined by the I/O priority
58 + * class and level.
59 + */
60 + #define IOPRIO_HINT_SHIFT IOPRIO_LEVEL_NR_BITS
61 + #define IOPRIO_HINT_NR_BITS 10
62 + #define IOPRIO_NR_HINTS (1 << IOPRIO_HINT_NR_BITS)
63 + #define IOPRIO_HINT_MASK (IOPRIO_NR_HINTS - 1)
64 + #define IOPRIO_PRIO_HINT(ioprio) \
65 + (((ioprio) >> IOPRIO_HINT_SHIFT) & IOPRIO_HINT_MASK)
66 +
67 + /*
68 + * I/O hints.
69 + */
70 + enum {
71 + /* No hint */
72 + IOPRIO_HINT_NONE = 0,
73 +
74 + /*
75 + * Device command duration limits: indicate to the device a desired
76 + * duration limit for the commands that will be used to process an I/O.
77 + * These will currently only be effective for SCSI and ATA devices that
78 + * support the command duration limits feature. If this feature is
79 + * enabled, then the commands issued to the device to process an I/O with
80 + * one of these hints set will have the duration limit index (dld field)
81 + * set to the value of the hint.
82 + */
83 + IOPRIO_HINT_DEV_DURATION_LIMIT_1 = 1,
84 + IOPRIO_HINT_DEV_DURATION_LIMIT_2 = 2,
85 + IOPRIO_HINT_DEV_DURATION_LIMIT_3 = 3,
86 + IOPRIO_HINT_DEV_DURATION_LIMIT_4 = 4,
87 + IOPRIO_HINT_DEV_DURATION_LIMIT_5 = 5,
88 + IOPRIO_HINT_DEV_DURATION_LIMIT_6 = 6,
89 + IOPRIO_HINT_DEV_DURATION_LIMIT_7 = 7,
90 + };
91 +
92 + #define IOPRIO_BAD_VALUE(val, max) ((val) < 0 || (val) >= (max))
93 +
94 + /*
95 + * Return an I/O priority value based on a class, a level and a hint.
96 + */
97 + static __always_inline __u16 ioprio_value(int class, int level, int hint)
98 + {
99 + if (IOPRIO_BAD_VALUE(class, IOPRIO_NR_CLASSES) ||
100 + IOPRIO_BAD_VALUE(level, IOPRIO_NR_LEVELS) ||
101 + IOPRIO_BAD_VALUE(hint, IOPRIO_NR_HINTS))
102 + return IOPRIO_CLASS_INVALID << IOPRIO_CLASS_SHIFT;
103 +
104 + return (class << IOPRIO_CLASS_SHIFT) |
105 + (hint << IOPRIO_HINT_SHIFT) | level;
106 + }
107 +
108 + #define IOPRIO_PRIO_VALUE(class, level) \
109 + ioprio_value(class, level, IOPRIO_HINT_NONE)
110 + #define IOPRIO_PRIO_VALUE_HINT(class, level, hint) \
111 + ioprio_value(class, level, hint)
64 112
65 113 #endif /* _UAPI_LINUX_IOPRIO_H */
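To make the new ioprio bit layout concrete, the sketch below (an illustration, not part of the diff) mirrors the uapi encoding in plain userspace C: a 3-bit class in bits 13-15, a 10-bit hint in bits 3-12, and a 3-bit level in bits 0-2.

```c
#include <assert.h>

/* Userspace mirror of the uapi/linux/ioprio.h encoding shown above. */
#define IOPRIO_CLASS_SHIFT	13
#define IOPRIO_NR_CLASSES	8
#define IOPRIO_CLASS_MASK	(IOPRIO_NR_CLASSES - 1)

#define IOPRIO_LEVEL_NR_BITS	3
#define IOPRIO_NR_LEVELS	(1 << IOPRIO_LEVEL_NR_BITS)
#define IOPRIO_LEVEL_MASK	(IOPRIO_NR_LEVELS - 1)

#define IOPRIO_HINT_SHIFT	IOPRIO_LEVEL_NR_BITS
#define IOPRIO_HINT_NR_BITS	10
#define IOPRIO_NR_HINTS		(1 << IOPRIO_HINT_NR_BITS)
#define IOPRIO_HINT_MASK	(IOPRIO_NR_HINTS - 1)

#define IOPRIO_PRIO_CLASS(p)	(((p) >> IOPRIO_CLASS_SHIFT) & IOPRIO_CLASS_MASK)
#define IOPRIO_PRIO_HINT(p)	(((p) >> IOPRIO_HINT_SHIFT) & IOPRIO_HINT_MASK)
#define IOPRIO_PRIO_LEVEL(p)	((p) & IOPRIO_LEVEL_MASK)

enum { IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE };
enum { IOPRIO_HINT_NONE, IOPRIO_HINT_DEV_DURATION_LIMIT_1 };

/* Same packing as the new ioprio_value() helper (validation omitted). */
static unsigned int pack_ioprio(int class, int level, int hint)
{
	return ((unsigned int)class << IOPRIO_CLASS_SHIFT) |
	       ((unsigned int)hint << IOPRIO_HINT_SHIFT) |
	       (unsigned int)level;
}
```

A BE-class, level-0 I/O tagged with CDL limit 1 therefore packs to `(2 << 13) | (1 << 3) | 0`, and the three accessor macros recover each field independently.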
+18 -4
include/ufs/ufshcd.h
··· 225 225 struct mutex lock;
226 226 struct completion *complete;
227 227 struct ufs_query query;
228 - struct cq_entry *cqe;
229 228 };
230 229
231 230 /**
··· 610 611 * to reinit the device after switching to maximum gear.
611 612 */
612 613 UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH = 1 << 19,
614 +
615 + /*
616 + * Some host raises interrupt (per queue) in addition to
617 + * CQES (traditional) when ESI is disabled.
618 + * Enable this quirk will disable CQES and use per queue interrupt.
619 + */
620 + UFSHCD_QUIRK_MCQ_BROKEN_INTR = 1 << 20,
621 +
622 + /*
623 + * Some host does not implement SQ Run Time Command (SQRTC) register
624 + * thus need this quirk to skip related flow.
625 + */
626 + UFSHCD_QUIRK_MCQ_BROKEN_RTC = 1 << 21,
613 627 };
614 628
615 629 enum ufshcd_caps {
··· 1099 1087 * @cq_tail_slot: current slot to which CQ tail pointer is pointing
1100 1088 * @cq_head_slot: current slot to which CQ head pointer is pointing
1101 1089 * @cq_lock: Synchronize between multiple polling instances
1090 + * @sq_mutex: prevent submission queue concurrent access
1102 1091 */
1103 1092 struct ufs_hw_queue {
1104 1093 void __iomem *mcq_sq_head;
··· 1118 1105 u32 cq_tail_slot;
1119 1106 u32 cq_head_slot;
1120 1107 spinlock_t cq_lock;
1108 + /* prevent concurrent access to submission queue */
1109 + struct mutex sq_mutex;
1121 1110 };
1122 1111
1123 1112 static inline bool is_mcq_enabled(struct ufs_hba *hba)
··· 1255 1240 void ufshcd_hba_stop(struct ufs_hba *hba);
1256 1241 void ufshcd_schedule_eh_work(struct ufs_hba *hba);
1257 1242 void ufshcd_mcq_write_cqis(struct ufs_hba *hba, u32 val, int i);
1258 - unsigned long ufshcd_mcq_poll_cqe_nolock(struct ufs_hba *hba,
1243 + unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
1259 1244 struct ufs_hw_queue *hwq);
1260 1245 void ufshcd_mcq_enable_esi(struct ufs_hba *hba);
1261 1246 void ufshcd_mcq_config_esi(struct ufs_hba *hba, struct msi_msg *msg);
··· 1292 1277 extern int ufshcd_system_thaw(struct device *dev);
1293 1278 extern int ufshcd_system_restore(struct device *dev);
1294 1279 #endif
1295 - extern int ufshcd_shutdown(struct ufs_hba *hba);
1296 1280
1297 1281 extern int ufshcd_dme_configure_adapt(struct ufs_hba *hba,
1298 1282 int agreed_gear,
··· 1372 1358 int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
1373 1359 u8 **buf, bool ascii);
1374 1360
1375 - int ufshcd_hold(struct ufs_hba *hba, bool async);
1361 + void ufshcd_hold(struct ufs_hba *hba);
1376 1362 void ufshcd_release(struct ufs_hba *hba);
1377 1363
1378 1364 void ufshcd_clkgate_delay_set(struct device *dev, unsigned long value);
+20 -5
include/ufs/ufshci.h
··· 99 99 enum {
100 100 REG_SQHP = 0x0,
101 101 REG_SQTP = 0x4,
102 + REG_SQRTC = 0x8,
103 + REG_SQCTI = 0xC,
104 + REG_SQRTS = 0x10,
102 105 };
103 106
104 107 enum {
··· 114 111 REG_CQIE = 0x4,
115 112 };
116 113
114 + enum {
115 + SQ_START = 0x0,
116 + SQ_STOP = 0x1,
117 + SQ_ICU = 0x2,
118 + };
119 +
120 + enum {
121 + SQ_STS = 0x1,
122 + SQ_CUS = 0x2,
123 + };
124 +
125 + #define SQ_ICU_ERR_CODE_MASK GENMASK(7, 4)
126 + #define UPIU_COMMAND_TYPE_MASK GENMASK(31, 28)
117 127 #define UFS_MASK(mask, offset) ((mask) << (offset))
118 128
119 129 /* UFS Version 08h */
120 130 #define MINOR_VERSION_NUM_MASK UFS_MASK(0xFFFF, 0)
121 131 #define MAJOR_VERSION_NUM_MASK UFS_MASK(0xFFFF, 16)
122 132
133 + #define UFSHCD_NUM_RESERVED 1
123 134 /*
124 135 * Controller UFSHCI version
125 136 * - 2.x and newer use the following scheme:
··· 470 453 };
471 454
472 455 /* The maximum length of the data byte count field in the PRDT is 256KB */
473 - #define PRDT_DATA_BYTE_COUNT_MAX (256 * 1024)
456 + #define PRDT_DATA_BYTE_COUNT_MAX SZ_256K
474 457 /* The granularity of the data byte count field in the PRDT is 32-bit */
475 458 #define PRDT_DATA_BYTE_COUNT_PAD 4
476 459
··· 520 503 /**
521 504 * struct utp_transfer_req_desc - UTP Transfer Request Descriptor (UTRD)
522 505 * @header: UTRD header DW-0 to DW-3
523 - * @command_desc_base_addr_lo: UCD base address low DW-4
524 - * @command_desc_base_addr_hi: UCD base address high DW-5
506 + * @command_desc_base_addr: UCD base address DW 4-5
525 507 * @response_upiu_length: response UPIU length DW-6
526 508 * @response_upiu_offset: response UPIU offset DW-6
527 509 * @prd_table_length: Physical region descriptor length DW-7
··· 532 516 struct request_desc_header header;
533 517
534 518 /* DW 4-5*/
535 - __le32 command_desc_base_addr_lo;
536 - __le32 command_desc_base_addr_hi;
519 + __le64 command_desc_base_addr;
537 520
538 521 /* DW 6 */
539 522 __le16 response_upiu_length;
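The two new mask macros above use the kernel's GENMASK(h, l), which builds a contiguous mask covering bits l through h. As a userspace illustration (not part of the diff; `GENMASK_U32` and `field_get` are hypothetical stand-ins), this is what they expand to and how a field is pulled out of a register value:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's GENMASK(h, l), 32-bit flavor. */
#define GENMASK_U32(h, l)	((~0u >> (31 - (h))) & (~0u << (l)))

#define SQ_ICU_ERR_CODE_MASK	GENMASK_U32(7, 4)	/* bits 4-7 */
#define UPIU_COMMAND_TYPE_MASK	GENMASK_U32(31, 28)	/* bits 28-31 */

/* Extract the field selected by a contiguous mask (GCC/Clang builtin). */
static uint32_t field_get(uint32_t reg, uint32_t mask)
{
	return (reg & mask) >> __builtin_ctz(mask);
}
```

So `SQ_ICU_ERR_CODE_MASK` is `0x000000f0` and `UPIU_COMMAND_TYPE_MASK` is `0xf0000000`; the kernel's FIELD_GET() plays the role of `field_get` here.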